WO2021193086A1 - Information processing device, method, and program - Google Patents

Information processing device, method, and program

Info

Publication number
WO2021193086A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
presentation
task
information processing
option
Application number
PCT/JP2021/009733
Other languages
French (fr)
Japanese (ja)
Inventor
陽方 川名
茜 近藤
至 清水
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2021193086A1


Classifications

    • A61B 5/16 (A: Human necessities; A61: Medical or veterinary science; hygiene; A61B: Diagnosis; surgery; identification): Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • G06F 3/01 (G: Physics; G06: Computing; G06F: Electric digital data processing): Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/0482 (GUI interaction techniques): Interaction with lists of selectable items, e.g. menus
    • G06F 3/16: Sound input; sound output
    • G06Q 30/02 (G06Q: ICT specially adapted for administrative, commercial, financial, managerial or supervisory purposes): Marketing; price estimation or determination; fundraising

Definitions

  • The present technology relates to an information processing device, a method, and a program, and more particularly to an information processing device, a method, and a program capable of providing an opportunity to eliminate a user's negative psychological state.
  • Patent Document 1 proposes an agent that communicates with a user even when the user is in a negative psychological state (hereinafter also referred to as a negative state).
  • Patent Document 2 proposes making the user appropriately aware of changes in the user's own state.
  • The present technology was made in view of such a situation, and makes it possible to provide an opportunity to eliminate the user's negative psychological state.
  • An information processing device according to one aspect of the present technology includes a presentation control unit that controls the presentation of an option task to the user according to an estimation, by an estimation unit that estimates the user's negative state, that the user is in a negative state.
  • In an information processing method and program according to one aspect of the present technology, the presentation of the option task to the user is likewise controlled according to the estimation of the user's negative state by the estimation unit that estimates the negative state of the user.
  • The drawings are as follows. FIG. 1 is a block diagram showing the configuration of an embodiment of the information presentation system to which the present technology is applied. FIG. 2 is a block diagram showing a functional configuration example of the information processing device. FIG. 3 is a diagram showing an example of a method of estimating a negative state. FIG. 4 is a flowchart explaining the information presentation process of the information presentation system. FIGS. 5 to 8 are diagrams showing presentation examples of the option task. FIG. 9 is a diagram showing an example of the presentation timing of the option task. FIG. 10 is a diagram showing a presentation example of the option task in Extended Example 1. FIG. 11 is a diagram showing another presentation example of the option task in Extended Example 1. FIG. 12 is a diagram showing a presentation example of the option task in Extended Example 2. A further diagram shows a presentation example of the option task in Extended Example 3, and a block diagram shows a configuration example of a computer.
  • FIG. 1 is a block diagram showing a configuration of an embodiment of an information presentation system to which the present technology is applied.
  • The information presentation system 1 of FIG. 1 acquires data indicating the user's behavioral situation, estimates whether the user is in a negative state, and, according to the estimated negative state, controls the presentation of an option task that prompts the user to make a selection decision.
  • An option task is a task in which the user, in response to a question, selects one of the presented options at his or her own will.
  • the user selecting an option in response to the presentation of the option task is referred to as performing the option task.
  • It is desirable that the act of selecting an option does not itself impose an additional burden on the user. The reason is that the act of selection may itself cause new stress, for example when the presentation of options forces the user to think unnecessarily; depending on the question and the content of the options, it may impose a new burden on the user.
  • It is also desirable that the presentation is not unnatural. The reason is that an unnatural presentation, for example one that interrupts the user during an action such as viewing content and forces a selection, may itself become a new negative factor.
  • FIG. 1 shows, as an example of such an option task, a case in which, when the user's negative state is estimated, an option task consisting of two options, "Would you like to turn on the TV or listen to music?", is presented.
  • The negative state may be defined as, for example, a state in which the sympathetic nervous system becomes active and the activity of the parasympathetic nervous system is suppressed. Such a state can also be called a stress state.
  • In this case, the negative state may be estimated using, for example, fluctuations in heart rate, sympathetic nerve activity represented by the LF (Low Frequency) / HF (High Frequency) ratio of heart rate variability, or mental sweating as an index.
  • Alternatively, the negative state may be defined as, for example, a state in which the right frontal lobe is activated relative to the left frontal lobe.
  • In this case, the negative state can be estimated by measuring the electroencephalogram power in a specific frequency band.
  • The negative state may also be estimated based on, for example, prosodic features of the user's utterances. In this case, if a predetermined frequency band in the prosody is lower than a reference value, the user can be estimated to be in the negative state.
  • The negative state may also be estimated based on, for example, recognition of the user's facial expression.
  • When the feature amount of the user's captured facial expression is classified into the negative class, the user can be estimated to be in the negative state.
  • The negative state may also be defined based on various models of emotional state; for example, using the so-called Russell circumplex model, which defines human emotional states along the two axes of arousal/inactivity and comfort/discomfort, the negative state may be defined as an unpleasant state in that model.
  • The negative state may also be defined as a state that the system statistically estimates according to the user's attributes and behavioral situation.
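As a minimal sketch of the circumplex-based definition above, a state expressed as (valence, arousal) coordinates could be classified as negative when it falls on the unpleasant side of the comfort/discomfort axis. The coordinate convention and the threshold value are illustrative assumptions, not values from the patent.

```python
def is_negative_state(valence: float, arousal: float, threshold: float = -0.2) -> bool:
    """Classify a point on a Russell-style circumplex as a negative state.

    valence: comfort/discomfort axis, -1.0 (unpleasant) to +1.0 (pleasant)
    arousal: inactivity/arousal axis, -1.0 to +1.0 (kept for completeness;
             "negative" here is defined only by the unpleasant half-plane)
    threshold: assumed cutoff on the valence axis.
    """
    return valence < threshold

# Example: low valence with high arousal (e.g. distress) is classified negative.
print(is_negative_state(-0.6, 0.7))   # True
print(is_negative_state(0.4, -0.3))   # False
```

A real system would obtain the (valence, arousal) coordinates from a classifier over the biometric and behavioral signals described above.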
  • For example, when the content being viewed is video and a person or animal in it is screaming, shouting, hurt, or in pain, such a display is presumed to be one of the factors that put the user in a negative state.
  • When text generated by the user by input or the like contains phrases such as "depressed", "want to die", "cannot go home because of overtime work", or "it has been raining all day", this is also one of the factors from which the user can be estimated to be in a negative state.
  • The text is not limited to input information, and may be voice-recognized utterances of the user.
  • This system can estimate the user state based on one or more of the factors that are presumed to put the user in a negative state.
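As a minimal sketch, the text-based factor above and the "one or more factors" combination rule could look like the following. The phrase list and the any-factor combination rule are assumptions for illustration.

```python
# Hypothetical phrase list; the patent's examples include "depressed",
# "want to die", "cannot go home because of overtime work", "raining all day".
NEGATIVE_PHRASES = ["depressed", "want to die", "overtime", "raining all day"]

def text_negative_factor(text: str) -> bool:
    """Return True if user-generated text (typed or voice-recognized)
    contains a phrase suggesting a negative state."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in NEGATIVE_PHRASES)

def estimate_negative(factors) -> bool:
    """Combine one or more boolean factors; a simple any-factor rule
    (an assumption, not the patent's method)."""
    return any(factors)

content_factor = False  # e.g. no distressing scene detected in the video
text_factor = text_negative_factor("So much overtime, I can't go home")
print(estimate_negative([content_factor, text_factor]))  # True
```

A production system would weight the factors or learn the combination from accumulated data rather than use a flat OR.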
  • the information presentation system 1 is composed of an input device 11, an information processing device 12, and an output device 13.
  • the input device 11, the information processing device 12, and the output device 13 are connected to each other via a network 21 such as a wireless LAN (Local Area Network).
  • the input device 11 and the output device 13 may be an input unit and an output unit of the information processing device 12, respectively.
  • the input device 11 is composed of a sensor unit 31, a sensor unit 32, and a sensor unit 33.
  • the sensor unit 31 recognizes the user's external environment including the room, and outputs external environment information indicating the external environment obtained as a result of the recognition to the information processing device 12.
  • For example, the sensor unit 31 recognizes the shape of the living space and the home appliances and furniture present in it, and grasps their arrangement. In addition, the sensor unit 31 acquires information from outside the living space, such as the weather and the temperature.
  • The sensor unit 31 is composed of, for example, a LiDAR, a temperature sensor, an illuminance sensor, a Web camera connected via the Internet, and an input unit for acquiring information from websites and the like.
  • the sensor unit 32 recognizes various information of a person and grasps the behavior, and outputs the biometric information of the user and the behavior status of the user obtained as a result of the recognition and grasping to the information processing device 12.
  • the sensor unit 32 grasps information such as the presence / absence of users in the living space, the number of people, the posture, and the face orientation. In addition, the sensor unit 32 grasps what the user is doing now and the behavioral status of the user.
  • the sensor unit 32 includes a motion capture system such as OptiTrack (trademark), a distance image sensing system that measures the distance to an object using an image sensor, an infrared camera, a high-resolution depth sensor, and the like.
  • the sensor unit 32 further acquires the user's biological information (heartbeat, etc.).
  • The sensor unit 32 may include a heartbeat sensor, a sweating sensor, an electroencephalogram sensor, a temperature sensor, or the like, and may take the form of a wristband, an HMD (Head Mounted Display), a glasses-type display, or the like incorporating these.
  • Alternatively, a camera capable of measuring heart rate, a microphone capable of capturing utterances and prosody, and the like may be used.
  • the sensor unit 33 acquires operation information by user operation input, voice input, etc., and outputs the acquired operation information to the information processing device 12.
  • the sensor unit 33 acquires the input operation information such as the voice and operation when the user performs the presented option task, or the operation when inputting information such as the user's to-do list.
  • the sensor unit 33 is composed of a keyboard, a touch panel, operation buttons, a controller, a smart phone, a tablet terminal, a microphone, or the like.
  • the information processing device 12 is composed of, for example, a personal computer.
  • the information processing device 12 estimates the user's emotions, for example, whether or not the user is in a negative state, based on the information supplied from the input device 11.
  • the information processing device 12 generates an option task based on the information supplied from the input device 11 and causes the output device 13 to present the generated option task.
  • the output device 13 is composed of a visual presentation device 41, a voice presentation device 42, and the like.
  • the output device 13 outputs the option tasks supplied from the information processing device 12.
  • the visual presentation device 41 is composed of a TV, a projector, or the like.
  • the visual presentation device 41 visually presents the option task to the user.
  • the voice presentation device 42 is composed of a speaker, a smart speaker, or the like.
  • the voice presentation device 42 presents the option task to the user as voice.
  • FIG. 2 is a block diagram showing a functional configuration example of the information processing device.
  • The information processing device 12 is configured to include an emotion estimation unit 61, a task generation unit 62, an output control unit 63, a database 64, and a database update unit 65. These functions are loaded into RAM (Random Access Memory) or the like and executed by the CPU (Central Processing Unit) of the information processing device 12.
  • The emotion estimation unit 61 estimates the user's emotion, for example whether or not the user is in a negative state, based on at least one of the external environment information supplied from the input device 11, the user's biometric information, and the user's behavioral situation. When it estimates that the user is in a negative state, the emotion estimation unit 61 causes the task generation unit 62 to generate an option task.
  • The task generation unit 62 determines the content to be presented as options and the device for presentation based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral situation, the information registered in the database 64, and the like, and generates an option task.
  • the task generation unit 62 outputs the generated option task to the output control unit 63.
  • the output control unit 63 causes the output device 13 to output the optional tasks supplied from the task generation unit 62.
  • the output control unit 63 also controls the on / off of the power supply of the output device 13.
  • In the database 64, personal information including the user's attribute information, preference information, and personality characteristics is registered as user information. In addition, behavior tendency information including the user's to-do list, the user's habit information, and the user's response tendency to option tasks is also registered in the database 64 as user information.
  • the database update unit 65 updates the information registered in the database 64 based on the external environment information supplied from the input device 11, the biometric information of the user, the behavioral status of the user, the operation information of the user, and the like.
  • FIG. 3 is a diagram showing an example of a method of estimating a negative state by the emotion estimation unit 61.
  • the emotion estimation unit 61 estimates whether or not the user is in a negative state based on at least one of the user's biological information, the user's behavioral status, and the external environment information.
  • the emotion estimation unit 61 estimates whether or not the user is in a negative state based on the user's biological information, for example, information such as a decrease in heart rate, the number of sighs, or the state of the line of sight.
  • When using the decrease in heart rate, the emotion estimation unit 61 compares the heart rate with a predetermined reference state and estimates that the user is in a negative state based on, for example, the difference being equal to or greater than a certain value, or the heart rate falling below a reference value for a certain period of time.
  • The reference state and the reference value may be defined based on the user's own average value or on a value measured at a specific timing such as waking up, or may be the average value of a plurality of users or a value defined by the system.
  • When using the number of sighs, the emotion estimation unit 61 estimates that the user is in a negative state when, for example, the number of sighs within a certain period is equal to or greater than a reference value.
  • When using the state of the line of sight, the emotion estimation unit 61 estimates that the user is in a negative state based on, for example, the ratio of time during which the line of sight points downward within a predetermined period.
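The heart-rate, sigh, and gaze rules above can be sketched as a simple rule-based estimator. All threshold values below are illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Biometrics:
    heart_rate: float        # current heart rate (bpm)
    baseline_hr: float       # reference value (user average, or measured at waking)
    sighs_per_window: int    # sigh count within a fixed observation window
    gaze_down_ratio: float   # fraction of time the line of sight points downward

# Threshold values are illustrative assumptions.
def estimate_negative_state(b: Biometrics,
                            hr_drop_threshold: float = 8.0,
                            sigh_threshold: int = 3,
                            gaze_threshold: float = 0.5) -> bool:
    """Rule-based sketch of the emotion estimation unit 61: any single
    indicator crossing its reference value flags a negative state."""
    hr_drop = b.baseline_hr - b.heart_rate >= hr_drop_threshold
    many_sighs = b.sighs_per_window >= sigh_threshold
    gaze_down = b.gaze_down_ratio >= gaze_threshold
    return hr_drop or many_sighs or gaze_down

print(estimate_negative_state(Biometrics(62.0, 72.0, 1, 0.2)))  # True (heart-rate drop)
print(estimate_negative_state(Biometrics(70.0, 72.0, 1, 0.2)))  # False
```

As the text notes, the reference values could equally be per-user averages, wake-up measurements, or population averages; only the comparison structure is shown here.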
  • As the user's behavioral situation, the emotion estimation unit 61 estimates whether or not the user is in a negative state based on various states such as content viewing, text generation, utterance, communication, exercise, and body movement.
  • As external environment information, the emotion estimation unit 61 estimates whether or not the user is in a negative state based on information such as the weather, the temperature, or the traffic situation.
  • For example, it can be estimated that the user is in a negative state when it is raining, when it is hotter or colder than a reference, or when the traffic is congested.
  • Whether or not a given combination of these states corresponds to a negative state for the user may be determined using a learning result based on data accumulated in the past.
  • FIG. 4 is a flowchart illustrating the information presentation process of the information presentation system 1.
  • In step S11, the sensor unit 31 of the input device 11 recognizes the room and the external environment, and outputs external environment information indicating the external environment obtained as a result of the recognition to the information processing device 12.
  • In step S12, the sensor unit 32 of the input device 11 recognizes various information about the person and grasps the behavior, and outputs the user's biometric information and behavioral situation obtained as a result to the information processing device 12.
  • In step S13, the emotion estimation unit 61 of the information processing device 12 estimates the user's emotion (negative state) based on at least one of the external environment information supplied from the input device 11, the user's biometric information, and the user's behavioral situation.
  • In step S14, the emotion estimation unit 61 determines whether or not the user is in a negative state. If it is determined in step S14 that the user is not in a negative state, the process returns to step S11 and the subsequent processing is repeated.
  • If it is determined in step S14 that the user is in a negative state, the process proceeds to step S15.
  • In step S15, the task generation unit 62 determines the content to be presented as options.
  • In step S16, the task generation unit 62 determines the device for presentation.
  • In step S17, the task generation unit 62 generates an option task based on the determined content and presentation device, and outputs the generated option task to the output control unit 63.
  • The processing of steps S15 to S17 is performed based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral situation, the information registered in the database 64, and the like.
  • In step S18, the output control unit 63 waits until it is determined that it is time to present the option task. When it is determined in step S18 that it is time to present the option task, the process proceeds to step S19.
  • In step S19, the output control unit 63 controls the presentation of the option task by outputting it to the device determined as the presentation device in step S16 among the output devices 13.
  • In step S31, the presentation device of the output device 13 presents the option task.
  • The user starts the option task in response to its presentation on the output device 13.
  • In step S20, the sensor unit 33 of the input device 11 acquires operation information from the user and outputs it to the information processing device 12.
  • In step S21, the output control unit 63 of the information processing device 12 generates a response corresponding to the user's operation information and outputs it to the presentation device.
  • In step S32, the presentation device of the output device 13 presents the response generated by the output control unit 63.
  • The user ends the option task in response to the presentation of the response on the output device 13.
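The flow of steps S11 to S21 above can be sketched as a single sensing/estimation/presentation pass. All function names are illustrative; sensing, estimation, and device output are stubbed placeholders, not the patent's implementation.

```python
import random

# Minimal sketch of the information-presentation flow (steps S11 to S21 above).

def read_sensors():
    # S11/S12: external environment, biometric information, behavioral situation
    return {"hr_drop": random.random() > 0.5}  # stubbed biometric signal

def is_negative(sensor_data) -> bool:
    # S13/S14: emotion estimation (stubbed single rule)
    return sensor_data["hr_drop"]

def generate_option_task():
    # S15 to S17: decide the content to present as options and the device
    return {"question": "Would you like to turn on the TV or listen to music?",
            "options": ["TV", "music"],
            "device": "speaker"}

def present(task):
    # S19/S31: output the option task on the chosen device
    print(f"[{task['device']}] {task['question']}")

def run_once() -> bool:
    """One pass of the loop; returns True if an option task was presented."""
    data = read_sensors()
    if not is_negative(data):  # S14: not negative, go back to sensing
        return False
    present(generate_option_task())  # (S18 timing check omitted in this sketch)
    return True

random.seed(0)
run_once()  # with this seed the stub estimates a negative state and presents the task
```

The response-handling half of the flow (S20/S21/S32) would feed the user's selection back into the database, as described for the database update unit 65.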
  • For example, a user who is watching TV and is estimated to be in a negative state responds not only by watching TV but also to an option task presented at a timing that is not unnatural. This is expected to eliminate the user's negative state.
  • The content of the option task presented to the user is, for example, content that satisfies at least one of three conditions.
  • The presented content of options satisfying these conditions is generated based on the user's attribute information, the content the user is viewing, and its time information.
  • FIG. 5 is a diagram showing an example of presenting an optional task.
  • In FIG. 5, a presentation example 101 and a presentation example 102 of quiz-style option tasks according to the user's attributes and behavioral situation are shown.
  • The presentation example 101 is an example presented when the user is from Tochigi prefecture, or when the user has recently been to Irohazaka.
  • Presentation example 101 is an optional task that satisfies at least the above condition 1.
  • presentation example 102 is an example presented when the user is interested in rugby.
  • Presentation example 102 is an optional task that satisfies at least the above conditions 1 and 2.
  • FIG. 6 is a diagram showing another presentation example of the option task.
  • FIG. 6 a presentation example 103 of an option task when selecting a present is shown.
  • the presentation example 103 is an optional task that satisfies at least the above condition 3.
  • In presentation example 103, unlike a questionnaire, there is no incorrect answer and no option carries any disadvantage, so the user can easily select an option.
  • FIG. 7 is a diagram showing still another presentation example of the option task.
  • FIG. 7 a presentation example 104 of an option task as a vote at the time of viewing the content is shown.
  • the presentation example 104 is an optional task that satisfies the above conditions 2 and 3.
  • FIG. 8 is a diagram showing another presentation example of the option task.
  • FIG. 8 a presentation example 105 by voice is shown.
  • In FIG. 8, the option task "Do you want to turn on the TV?" is presented to the user from the voice presentation device 42 while the power of the TV, which is the visual presentation device 41, is off. In this case, two options, "yes" and "no", are assumed.
  • the presentation example 105 is an optional task that satisfies the above condition 3.
  • In presentation example 105, the user's next action itself is the option and there is no incorrect answer, so the user can select an option naturally.
  • FIG. 9 is a diagram showing an example of the presentation timing of the option task.
  • In FIG. 9, the presentation timing of the option task is controlled according to the degree of the negative state (hereinafter also referred to as the negative degree).
  • In FIG. 9, the user is watching content in which a dinosaur rampages, on the TV that is the visual presentation device 41.
  • The arrow shown on the right side of the user indicates the user's negative degree as estimated by the information presentation system 1; the negative degree increases from bottom to top.
  • Depending on the estimated negative degree, the information presentation system 1 either presents the option task in a corner of the screen even while the user is viewing the content, or presents it at a natural break, such as when a commercial break starts, when the content ends, or when the power is turned off.
  • The presentation time of the option task, that is, the time required for the task response, is set to, for example, 10 seconds to 2 minutes.
  • The number of option tasks presented at one time is preferably one or two. This is because, if the time required to answer is long or the number of presented option tasks is large, the presentation of option tasks itself becomes a burden on the user.
  • An option task may be presented repeatedly if the user is still in a negative state, as long as the timing is not unnatural. For example, if there are timings at which an option task can be presented three times in an hour, and the user is continuously estimated to be negative across those three presentation timings, the option task may continue to be presented.
  • Although the presentation timing is controlled according to the negative degree in FIG. 9, the number of presentations and the duration of presentation may also be controlled according to the negative state.
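The timing rules above (presentation during viewing versus at a natural break, and the example of at most three presentations per hour) can be sketched as a simple gate. The numeric threshold on the negative degree is an assumption for illustration; the per-hour cap follows the example in the text.

```python
# Sketch of a presentation-timing gate: present immediately when the negative
# degree is high, otherwise wait for a natural break (commercial break, end of
# content, power-off); cap the number of presentations per hour.

PRESENTATIONS_PER_HOUR_CAP = 3  # example value from the text
HIGH_DEGREE = 0.7               # assumed threshold on a 0..1 negative-degree scale

def may_present(negative_degree: float,
                at_natural_break: bool,
                presented_this_hour: int) -> bool:
    if presented_this_hour >= PRESENTATIONS_PER_HOUR_CAP:
        return False             # too many option tasks would burden the user
    if negative_degree >= HIGH_DEGREE:
        return True              # present in a screen corner even during viewing
    return at_natural_break      # otherwise wait for a natural break

print(may_present(0.9, at_natural_break=False, presented_this_hour=0))  # True
print(may_present(0.3, at_natural_break=False, presented_this_hour=0))  # False
print(may_present(0.3, at_natural_break=True,  presented_this_hour=3))  # False
```

The same gate could be extended to vary the number and duration of presentations with the negative degree, as the text suggests.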
  • As described above, in the information presentation system 1, the presentation of the option task is controlled so as to prompt the user to make a selection decision.
  • As a result, the user can obtain an opportunity to eliminate the negative state, and the negative state can be eliminated or improved.
  • <Extended Example 1 (action based on the to-do list)>
  • The information presentation system 1 of FIG. 1 described above also functions as an application that encourages the user's behavior by generating an option task based on a to-do list, in which the user's action schedule is registered in the database 64 or the like, or based on the user's habit information.
  • FIG. 10 is a diagram showing an example of presenting an option task in the extended example 1.
  • FIG. 10 shows an example of presenting an option task generated based on the user's habit information when the user returns home.
  • For example, it is registered in the database 64 as the user's habit information that, in daily life, the user customarily turns on the TV first when returning home.
  • When it is determined that the user is in a negative state, the task generation unit 62 generates an option task with two options, "Welcome back. Do you want to turn on the TV first? Or do you want to turn on the light?", based on the user's habit information and behavior tendency information registered in the database 64. In this case, since choosing neither is also assumed, it can strictly be said to be an option task with three options.
  • the output control unit 63 controls the generated option task to be output from the smart speaker, which is the voice presentation device 42, when the user returns home.
  • the smart speaker outputs a voice choice task, "Welcome back. Do you want to turn on the TV or turn on the light?"
  • the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12.
  • the output control unit 63 turns on the TV, which is the visual presentation device 41, based on the user's response.
  • the database update unit 65 updates the behavior tendency information of the user in the database 64 based on the response of the user.
  • In this way, the burden on the user can be reduced by selecting, as one of the options of the option task, an action that the user can easily respond to, based on the user's usual response tendency and behavior tendency.
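A minimal sketch of how the task generation unit 62 might build such a two-option task from habit information and response tendencies stored in the database 64. The data layout, the scores, and the wording are illustrative assumptions.

```python
# Hypothetical database contents: habits per situation, plus a response-tendency
# score per action (higher = the user responds to this action more easily).
user_db = {
    "habits": {"on_returning_home": ["turn on the TV", "turn on the light"]},
    "response_tendency": {"turn on the TV": 0.8, "turn on the light": 0.4},
}

def generate_choice_task(situation: str) -> str:
    options = user_db["habits"][situation]
    # Put the action the user responds to most easily first, to lower the
    # burden of choosing (based on the stored response tendency).
    ranked = sorted(options,
                    key=lambda o: user_db["response_tendency"].get(o, 0.0),
                    reverse=True)
    return (f"Welcome back. Do you want to {ranked[0]} first? "
            f"Or do you want to {ranked[1]}?")

print(generate_choice_task("on_returning_home"))
```

After the user answers, the database update unit 65 would adjust the stored response tendencies, so that future option tasks track the user's actual behavior.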
  • FIG. 11 is a diagram showing another presentation example of the option task in the extended example 1.
  • FIG. 11 shows an example of presenting an option task generated based on the user's to-do list, 10 minutes before the time at which the user's outing schedule is presented.
  • For example, an outing schedule is registered as one of the action schedules in the user's to-do list, to be presented at a scheduled time.
  • In addition, the TV, which is the visual presentation device 41, is on.
  • When it is determined that the user is in a negative state, the task generation unit 62 generates, 10 minutes before the time at which the user's outing schedule is presented, an option task with two options, "The time to go out is approaching. Do you want to turn off the TV? (Shall I turn it off?)", based on the user's to-do list registered in the database 64.
  • the output control unit 63 controls to output the generated option task from the smart speaker, which is the voice presentation device 42, 10 minutes before the time when the user's outing schedule is presented.
  • For example, the smart speaker outputs the option task by voice: "The time to go out is approaching. Do you want to turn off the TV? (Shall I turn it off?)".
  • the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12.
  • the output control unit 63 turns off the power of the TV, which is the visual presentation device 41, based on the user's response.
  • the database update unit 65 updates the behavior tendency information of the user in the database 64 based on the response of the user.
  • In this case, the presentation timing is shifted from the original timing at which the action schedule would be presented to the user, and the option task is intentionally presented early.
  • <Extended Example 2 (dialogue with a character / voice device)>
  • The information presentation system 1 of FIG. 1 described above also functions as a dialogue application in which a voice device with a character function converses with the user, by generating an option task that takes the content of the dialogue with the character into account.
  • A voice device with a character function is a voice device that can communicate with the user through dialogue while a character displayed on a liquid crystal display, a hologram, or the like moves.
  • FIG. 12 is a diagram showing an example of presenting an option task in the extended example 2.
  • the fact that the user is usually watching TV at this time is registered in the database 64 as the user's behavioral tendency information.
  • the task generation unit 62 generates an option task that intentionally includes, as options, not only turning on the TV but also a plurality of other actions.
  • for example, actions such as listening to music, checking email, or having a meal are included in the options.
  • the output control unit 63 controls the voice device with a character function, which is the voice presentation device 42, to output the generated option task.
  • the character (a penguin) of the voice device with the character function presents, as part of the dialogue, a voice option task such as "It's about time to turn on the TV. Or would you like to listen to music, or check your email?"
  • the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12.
  • the output control unit 63 advances the dialogue with the voice device with the character function by having it output "What kind of music would you like?" based on the user's response.
  • the database update unit 65 updates the behavior tendency information of the user in the database 64 based on the response of the user.
  • an option task that intentionally includes a plurality of actions among the options is presented. This naturally prompts a user who is in a negative state and does not feel like doing anything to "choose".
  • in Extended Example 2, a voice device with a character function having visual information, such as GateBox (registered trademark), has been described as an example of a voice device capable of communicating with the user.
  • a voice presentation device 42 without visual information or a character function may be used as long as it is a voice device capable of communicating with the user.
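A minimal sketch of how the task generation unit might mix the habitual action with other actions, as described for Extended Example 2. The function name, the pool of extra actions, and the choice of exactly two extras are illustrative assumptions, not from the publication.

```python
import random

# Illustrative behavior-tendency record: the action the user habitually
# performs at this time, as registered in the database 64.
habitual_action = "turn on the TV"

# Other actions deliberately mixed into the options (Extended Example 2).
other_actions = ["listen to music", "check your email", "have a meal"]

def generate_dialogue_option_task(habit, extras, n_extras=2):
    """Combine the habitual action with a few other actions into one question."""
    picked = random.sample(extras, k=n_extras)   # sample without replacement
    options = [habit] + picked
    question = ("It's about time. Would you like to "
                + ", ".join(options[:-1]) + ", or " + options[-1] + "?")
    return question, options

question, options = generate_dialogue_option_task(habitual_action, other_actions)
print(question)
```

Keeping the habitual action as the first option while varying the extras is one plausible way to make the question feel like part of the dialogue rather than a survey.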
  • <Extended Example 3 (content viewing / games)>
  • the information presentation system 1 of FIG. 1 described above also functions as a content viewing application or a game application that presents an option task during content viewing or game play when the user is presumed to be in a negative state.
  • FIG. 13 is a diagram showing an example of presenting an option task in the extended example 3.
  • FIG. 13 shows an example of presenting an advertisement including an option task in the advertisement content when the user is presumed to be in a negative state while viewing the content.
  • the task generation unit 62 generates an option task.
  • based on the original advertisement display timing, the output control unit 63 has the smartphone present, in place of the normal advertisement, an advertisement containing an option task that shows women A to C as the options and asks "Who would you choose?"
  • like the original advertisement, the advertisement including the option task may be acquired from an advertisement server via the network by transmitting the user's preference information from the database 64.
  • the sensor unit 33 detects the operation content of the user and outputs it to the information processing device 12.
  • the information processing device 12 transmits, for example, information indicating a user's response content to an advertising server via a network.
  • the output control unit 63 resumes the display of the content when the user's response is input.
  • the database update unit 65 updates the user preference information of the database 64 based on the user's response.
  • the advertisement including the option task is presented at a timing based on the timing at which the original advertisement would be displayed. Specifically, the advertisement including the option task may be presented earlier than the original display timing; for example, when the original advertisement would be displayed in the fourth slot, the advertisement may be presented in the second slot.
  • the degree of speeding up the display may be determined according to the degree of the negative state.
  • the content of the options may be a consultation from the advertised manufacturer to the user, a request for the user's opinions, or simply a choice in which the user selects the one he or she likes.
  • the interactivity of the advertisement is realized.
  • users who are not interested in the advertisement simply wait for it to finish and feel that time passes slowly, whereas presenting an advertisement that includes an option task makes the time seem to pass quickly.
  • the visual presentation device 41 may be any device having a visual display function, such as a smartphone, a tablet, a personal computer, or a TV. Further, the content to be viewed may be an online video or video distribution content.
  • although FIG. 13 shows an example of viewing content, an advertisement including an option task, or an option task itself, can also be displayed during game play, for example during a time-consuming process such as loading a game, transitioning between scenes, or searching for an opponent. During such a process, some users may simply be waiting for time to pass, so presenting an advertisement containing an option task, or an option task itself, to such a user makes the time seem to pass quickly.
  • an advertisement including an option task or an option task itself is presented during content viewing or game play.
  • the option task can be presented naturally even during content viewing or game play. For example, presenting an advertisement including an option task that asks for the user's opinion has merits for both the advertiser side and the user side.
  • a menu display or the like, which is presented in response to the user's operation when the user wants to select or switch something, differs from the option task of the present technology, which aims to prompt the user to make a selection.
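One way to realize the timing adjustment described for Extended Example 3, where the advertisement with the option task is moved to an earlier slot and the shift grows with the degree of the negative state, could look like the following sketch. The linear mapping and the 0.0–1.0 degree scale are assumptions for illustration, not from the publication.

```python
def advanced_ad_slot(original_slot, negative_degree, earliest_slot=1):
    """Move the advertisement containing the option task to an earlier slot.

    negative_degree is assumed normalized to 0.0-1.0; the stronger the
    negative state, the earlier the advertisement is shown.
    """
    shift = round(negative_degree * (original_slot - earliest_slot))
    return max(earliest_slot, original_slot - shift)

# The example in the text: an advertisement originally shown in the fourth
# slot is brought forward, e.g. to the second slot for a moderate degree.
print(advanced_ad_slot(4, 0.0))   # 4 (no change when not negative)
print(advanced_ad_slot(4, 0.7))   # 2
print(advanced_ad_slot(4, 1.0))   # 1
```

Any monotone mapping from the degree of the negative state to the slot shift would satisfy the description; the linear form is simply the smallest example.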
  • FIG. 14 is a block diagram showing a configuration example of computer hardware that executes the above-described series of processes by means of a program.
  • the CPU 301, the ROM (Read Only Memory) 302, and the RAM 303 are connected to each other by a bus 304.
  • An input / output interface 305 is further connected to the bus 304.
  • An input unit 306 including a keyboard, a mouse, and the like, and an output unit 307 including a display, a speaker, and the like are connected to the input / output interface 305.
  • the input/output interface 305 is connected to a storage unit 308 consisting of a hard disk, a non-volatile memory, or the like, a communication unit 309 consisting of a network interface or the like, and a drive 310 that drives removable media 311.
  • the CPU 301 loads the program stored in the storage unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executes it, whereby the series of processes described above is performed.
  • the program executed by the CPU 301 is provided recorded on the removable media 311, or via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and is installed in the storage unit 308.
  • the program executed by the computer may be a program in which processing is performed in chronological order in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
  • a system means a set of a plurality of components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • this technology can have a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
  • each step described in the above flowchart can be executed by one device or shared by a plurality of devices.
  • when one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared and executed by a plurality of devices.
  • the present technology can also have the following configurations.
  • An information processing device including a presentation control unit that controls the presentation of an option task to a user in response to the estimation of the user's negative state by an estimation unit that estimates the user's negative state.
  • the option task is a task that causes the user to select one of the options in response to a question.
  • the information processing device according to (1) or (2), wherein the option task is generated by referring to the attribute information of the user.
  • the presentation control unit changes at least one of the number of times the option task is presented, the timing of the presentation, and the duration for which the presentation is continued, according to the degree of the negative state of the user.
  • the presentation control unit controls the presentation of an advertisement including the option task during content viewing or game play.
  • the presentation control unit controls presentation of the option task including an action to be performed next by the user as an option.
  • the presentation control unit controls presentation of the option task including an action estimated to be performed by the user next based on the behavior tendency of the user.
  • the presentation control unit controls presentation of the option task including, as an option, an action estimated to be performed next by the user based on the schedule of the user.
  • the presentation control unit controls the presentation of the option task including, among the options, the action presumed to be performed next, so that it is presented earlier than the scheduled time for presenting the user's schedule. The information processing device according to (9).
  • (11) The estimation unit estimates the negative state of the user based on at least one of the biometric information of the user, the behavioral status of the user, and the external environment information of the user. The information processing device according to any one of the above.
  • An information processing method in which an information processing device controls the presentation of an option task to the user in response to the estimation of the user's negative state by an estimation unit that estimates the user's negative state.

Abstract

The technology of the present invention relates to an information processing device, method, and program with which it is possible to provide an opportunity to relieve a user's negative psychological state. This information processing device controls the presentation of alternative tasks to the user, in accordance with an estimation of a user's negative state by an estimation unit which estimates the user's negative state. The technology of the present invention can be applied to an information presentation system for presenting information to a user.

Description

Information processing device, method, and program
The present technology relates to an information processing device, a method, and a program, and more particularly to an information processing device, a method, and a program capable of providing an opportunity to eliminate a user's negative psychological state.
In every aspect of daily life, people are increasingly experiencing negative emotions. In response, it has been proposed to perform some kind of intervention to eliminate negative emotions.
For example, Patent Document 1 proposes that an agent communicates influentially with a user even when the user is in a negative psychological state (hereinafter also referred to as a negative state).
Further, Patent Document 2 proposes making the user appropriately recognize changes in the user's state.
Patent Document 1: Japanese Unexamined Patent Publication No. 2005-258820
Patent Document 2: Japanese Unexamined Patent Publication No. 2019-125358
However, there has been no proposal to present some kind of task as a specific intervention method when the user is in a negative state.
The present technology has been made in view of such a situation, and makes it possible to provide an opportunity to eliminate a user's negative psychological state.
An information processing device according to one aspect of the present technology includes a presentation control unit that controls the presentation of an option task to a user in response to the estimation of the user's negative state by an estimation unit that estimates the user's negative state.
In one aspect of the present technology, the presentation of an option task to the user is controlled in response to the estimation of the user's negative state by an estimation unit that estimates the user's negative state.
FIG. 1 is a block diagram showing the configuration of an embodiment of an information presentation system to which the present technology is applied.
FIG. 2 is a block diagram showing a functional configuration example of the information processing device.
FIG. 3 is a diagram showing an example of a method of estimating a negative state.
FIG. 4 is a flowchart explaining the information presentation processing of the information presentation system.
FIG. 5 is a diagram showing a presentation example of an option task.
FIG. 6 is a diagram showing another presentation example of an option task.
FIG. 7 is a diagram showing still another presentation example of an option task.
FIG. 8 is a diagram showing another presentation example of an option task.
FIG. 9 is a diagram showing an example of the presentation timing of an option task.
FIG. 10 is a diagram showing a presentation example of an option task in Extended Example 1.
FIG. 11 is a diagram showing another presentation example of an option task in Extended Example 1.
FIG. 12 is a diagram showing a presentation example of an option task in Extended Example 2.
FIG. 13 is a diagram showing a presentation example of an option task in Extended Example 3.
FIG. 14 is a block diagram showing a configuration example of a computer.
Hereinafter, modes for implementing the present technology will be described. The explanation will be given in the following order.
1. Basic configuration
2. Extended Example 1 (action based on a To Do list)
3. Extended Example 2 (dialogue with a character / voice device)
4. Extended Example 3 (content viewing / games)
5. Others
<1. Basic configuration>
(Configuration example of the information presentation system)
FIG. 1 is a block diagram showing the configuration of an embodiment of an information presentation system to which the present technology is applied.
The information presentation system 1 of FIG. 1 acquires data indicating the behavioral status of a user, estimates whether the user is in a negative state, and, in response to the estimation that the user is in a negative state, controls the presentation of an option task so as to prompt a selection and have the user make a decision.
An option task is a task for having the user, at the user's own will, select one of the options in response to a question. Hereinafter, the user selecting an option in response to the presentation of an option task is referred to as performing the option task.
By presenting this option task with the following points in mind, it is possible to obtain the effect that, in the user's negative state, the negative state is eliminated by the selection of an option.
- The act of selecting an option does not itself place a further burden on the user.
  This is because the act of selection itself can become a new source of stress, for example when the presentation of options forces unnecessary deliberation on the user. In addition, depending on the question and its content, it may place a new burden on the user.
- The presentation is not unnatural.
  This is because an unnatural presentation, for example one that interrupts an action such as content viewing and makes the user perform a selection, can itself become a new negative factor.
FIG. 1 shows, as an example of such an option task, a case where, when the user's negative state is estimated, an option task consisting of the two options "Shall I turn on the TV? Or would you like to listen to music?" is presented by voice.
Here, the negative state may be defined as, for example, a state in which the sympathetic nervous system is active and the activity of the parasympathetic nervous system is suppressed. Such a state can also be called a stress state. In this case, the negative state may be estimated using, as indices, for example, fluctuations in heart rate, the degree of sympathetic activity represented by LF (Low Frequency)/HF (High Frequency) in heart rate variability, or mental sweating.
The negative state may also be defined as, for example, a state in which the right frontal lobe is activated relative to the left frontal lobe. In this case, the negative state is estimated by measuring the α-band power of the electroencephalogram.
The negative state may be estimated based on, for example, prosodic features of the user's utterances. In this case, when a predetermined frequency band in the prosody falls below a reference value, the user can be estimated to be in a negative state.
The negative state may be estimated based on, for example, recognition of the user's facial expression. When the features of the captured facial expression of the user are classified into a negative state, the user can be estimated to be in a negative state.
The negative state may be estimated based on various models that define emotional states; for example, it may be defined as the unpleasant state in the so-called Russell circumplex model of affect, which defines human emotional states on the two axes of arousal/inactivation and pleasantness/unpleasantness.
Further, the negative state may be defined as a state that the present system statistically estimates according to the user's attributes and behavioral status.
For example, when a user is browsing Web content or generating text, and the content is generally correlated with negative information, the user can be estimated to be in a negative state.
Specifically, if the content is video, a video in which a person or an animal is screaming, is in pain, or appears to be suffering is presumed to be one of the factors that put the user in a negative state. In addition, if text generated by the user, for example by input, contains expressions such as being depressed, wanting to die, being unable to go home because of overtime, or it having rained all day, this is one of the factors from which the user can be estimated to be in a negative state. The text is not limited to input information, and may be obtained by voice recognition of the user's utterances.
Also, when a user is watching a sports match on TV, the team the user supports losing is one of the factors for presuming that the user is in a negative state. For example, when, in the course of the match, the supported team has its lead overturned or is losing by a large margin, this is one of the factors for presuming that the user is in a negative state.
The present system can estimate the user's state based on one or more of the factors presumed to put the user in a negative state.
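The factor-based estimation described above might be sketched as a simple counting rule. The factor names and the single-factor threshold here are illustrative assumptions; the publication only says that one or more presumed factors may be used.

```python
def estimate_negative_state(factors, threshold=1):
    """Estimate a negative state from one or more presumed negative factors.

    `factors` maps a factor name (e.g. negative words in generated text,
    the supported team losing) to a boolean. The state is judged negative
    when at least `threshold` factors hold.
    """
    active = [name for name, present in factors.items() if present]
    return len(active) >= threshold, active

negative, reasons = estimate_negative_state({
    "negative words in generated text": True,
    "distressing video content": False,
    "supported team losing": True,
})
print(negative, reasons)
```

A real system could weight the factors instead of counting them; the interface (factors in, boolean plus supporting reasons out) is the point of the sketch.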
In FIG. 1, the information presentation system 1 is composed of an input device 11, an information processing device 12, and an output device 13.
In the information presentation system 1, the input device 11, the information processing device 12, and the output device 13 are connected to each other via a network 21 such as a wireless LAN (Local Area Network). The input device 11 and the output device 13 may instead be an input unit and an output unit of the information processing device 12, respectively.
The input device 11 is composed of a sensor unit 31, a sensor unit 32, and a sensor unit 33.
The sensor unit 31 recognizes the user's external environment, including the room, and outputs external environment information indicating the external environment obtained as a result of the recognition to the information processing device 12.
Specifically, the sensor unit 31 recognizes the shape of the living space and the home appliances and furniture present in it, and grasps their arrangement. The sensor unit 31 also acquires information from outside the living space, such as the weather and temperature. In this case, the sensor unit 31 is composed of, for example, a LiDAR, a temperature sensor, an illuminance sensor, a Web camera connected via the Internet, and an input unit that acquires information from websites and the like.
The sensor unit 32 recognizes various kinds of information about a person and grasps the person's behavior, and outputs the user's biometric information and the user's behavioral status obtained as a result of the recognition to the information processing device 12.
Specifically, the sensor unit 32 grasps information such as the presence or absence of users in the living space, the number of people, their postures, and their face orientations. The sensor unit 32 also grasps what the user is doing now, that is, the user's behavioral status. In this case, the sensor unit 32 is composed of a motion capture system such as OptiTrack (trademark), a range imaging system that measures the distance to an object using an image sensor, an infrared camera, a high-resolution depth sensor, and the like.
The sensor unit 32 further acquires the user's biometric information (such as heartbeat). In this case, the sensor unit 32 may be a heart rate sensor, a perspiration sensor, an electroencephalogram sensor, a temperature sensor, or the like, and may be a wristband, an HMD (Head Mount Display), a glass-type display, or the like that includes these. Alternatively, it may be a camera capable of measuring heart rate, or a microphone capable of capturing utterances and prosody.
The sensor unit 33 acquires operation information from the user's operation input, voice input, and the like, and outputs the acquired operation information to the information processing device 12.
Specifically, the sensor unit 33 acquires the voice or actions of the user performing a presented option task, or the operation information input when the user enters information such as a To Do list. In this case, the sensor unit 33 is composed of a keyboard, a touch panel, operation buttons, a controller, a smartphone, a tablet terminal, a microphone, or the like.
The information processing device 12 is composed of, for example, a personal computer. The information processing device 12 estimates the user's emotion, for example whether or not the user is in a negative state, based on the information supplied from the input device 11. When the information processing device 12 estimates that the user is in a negative state, it generates an option task based on the information supplied from the input device 11 and the like, and causes the output device 13 to present the generated option task.
The output device 13 is composed of a visual presentation device 41, a voice presentation device 42, and the like. The output device 13 outputs the option task supplied from the information processing device 12.
The visual presentation device 41 is composed of a TV, a projector, or the like. The visual presentation device 41 visually presents the option task to the user.
The voice presentation device 42 is composed of a speaker, a smart speaker, or the like. The voice presentation device 42 presents the option task to the user as voice.
(Configuration example of the information processing device)
FIG. 2 is a block diagram showing a functional configuration example of the information processing device.
In FIG. 2, the information processing device 12 is configured to include an emotion estimation unit 61, a task generation unit 62, an output control unit 63, a database 64, and a database update unit 65. These functions are realized by being loaded into a RAM (Random Access Memory) or the like by the CPU (Central Processing Unit) of the information processing device 12.
The emotion estimation unit 61 estimates the user's emotion, for example whether or not the user is in a negative state, based on at least one of the external environment information supplied from the input device 11, the user's biometric information, and the user's behavioral status. When the emotion estimation unit 61 estimates that the user is in a negative state, it causes the task generation unit 62 to generate an option task.
The task generation unit 62 determines the content to be presented as options and the presentation destination device based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral status, the information registered in the database 64, and the like, and generates an option task. The task generation unit 62 outputs the generated option task to the output control unit 63.
The output control unit 63 causes the output device 13 to output the option task supplied from the task generation unit 62. The output control unit 63 also controls the power on/off of the output device 13.
In the database 64, personal information including the user's attribute information, the user's preference information, and personality characteristics is registered as user information. The database 64 also stores, as user information, the user's To Do list, the user's habit information, and the user's behavior tendency information, which consists of the user's tendencies in responding to option tasks and behavioral tendencies.
The database update unit 65 updates the information registered in the database 64 based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral status, the user's operation information, and the like.
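The flow among the functional blocks described above (emotion estimation unit 61, task generation unit 62, output control unit 63, database update unit 65) can be sketched as follows. The class, its method names, and its trivial decision rules are illustrative stand-ins, not the actual implementation.

```python
class InformationProcessingDevice:
    """Minimal sketch of the flow among the functional blocks of FIG. 2."""

    def __init__(self):
        self.database = {"behavior_tendency": {}, "preferences": {}}

    def estimate_emotion(self, biometric, behavior, environment):
        # Stand-in for the emotion estimation unit 61: any negative cue
        # in the supplied information marks the user as negative.
        return any([biometric.get("negative"), behavior.get("negative"),
                    environment.get("negative")])

    def generate_task(self):
        # Stand-in for the task generation unit 62: fixed question,
        # options, and destination device.
        return {"question": "Turn on the TV, or listen to music?",
                "options": ["TV", "music"], "device": "voice"}

    def present(self, task):
        # Stand-in for the output control unit 63.
        return f'[{task["device"]}] {task["question"]}'

    def update_database(self, response):
        # Stand-in for the database update unit 65: count responses as a
        # crude behavior-tendency record.
        tendencies = self.database["behavior_tendency"]
        tendencies[response] = tendencies.get(response, 0) + 1

dev = InformationProcessingDevice()
if dev.estimate_emotion({"negative": True}, {}, {}):
    output = dev.present(dev.generate_task())
    dev.update_database("music")
print(output)
```

The point of the sketch is only the ordering: estimation triggers task generation, generation feeds output control, and the user's response feeds the database update.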
(Method of estimating a negative state)
FIG. 3 is a diagram showing an example of a method by which the emotion estimation unit 61 estimates a negative state.
The emotion estimation unit 61 estimates whether or not the user is in a negative state based on at least one of the user's biometric information, the user's behavioral status, and the external environment information.
The emotion estimation unit 61 estimates whether or not the user is in a negative state based on the user's biometric information, for example, a decrease in heart rate, the number of sighs, or the state of the line of sight.
When using a decrease in heart rate, the emotion estimation unit 61 estimates that the user is in a negative state when, for example, in comparison with a predetermined reference state, the difference is equal to or greater than a certain value, or the heart rate has remained below a reference value for a certain period of time. The reference state and the reference value may be defined based on the user's own average or on measurements taken at a specific timing such as waking up, or they may be averages over a plurality of users or values defined by the system.
When using the number of sighs, the emotion estimation unit 61 estimates that the user is in a negative state when, for example, the number of sighs within a certain period is equal to or greater than a reference value.
When using the state of the line of sight, the emotion estimation unit 61 estimates that the user is in a negative state based on, for example, the proportion of a predetermined period during which the line of sight is directed downward.
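For illustration only, the three biometric heuristics described above can be sketched as follows; every function name and threshold (10 bpm, 5 consecutive samples, 3 sighs, a 50% downward-gaze ratio) is an assumption made for the sketch, not a value specified in the present disclosure.

```python
def negative_by_heart_rate(samples, baseline, min_diff=10, min_low_samples=5):
    """Negative if the heart rate stays below the reference state
    (baseline) by at least `min_diff` bpm for `min_low_samples`
    consecutive samples (a stand-in for "a certain period of time")."""
    run = 0
    for bpm in samples:
        if baseline - bpm >= min_diff:
            run += 1
            if run >= min_low_samples:
                return True
        else:
            run = 0
    return False


def negative_by_sighs(sigh_count, threshold=3):
    """Negative if the number of sighs in the observation window
    is equal to or greater than the reference value."""
    return sigh_count >= threshold


def negative_by_gaze(downward_seconds, window_seconds, min_ratio=0.5):
    """Negative if the line of sight points downward for at least
    `min_ratio` of the observation window."""
    return downward_seconds / window_seconds >= min_ratio
```

As noted above, the reference values could equally be derived from the user's own averages, wake-up measurements, multi-user averages, or system-defined values.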
The emotion estimation unit 61 also estimates whether or not the user is in a negative state based on the user's behavioral status, for example, various states such as content viewing, text composition, speech, communication, exercise, and body movement.
Specifically, the determination may be based on, for example, the detection of negative terms in web content being browsed, the win/loss state or item acquisition state during game play, the win/loss state of the team the user supports when watching sports, the win/loss state when gambling (pachinko or horse racing), walking tendencies, and the like.
Further, the emotion estimation unit 61 estimates whether or not the user is in a negative state based on the external environment information, for example, weather, temperature, or traffic conditions.
Specifically, the user is estimated to be in a negative state when, for example, it is raining, it is hotter or colder than a reference, or traffic is congested.
Note that a plurality of these estimation methods may be performed in parallel or in combination. Whether a given combination of states corresponds to a negative state for the user may be determined using learning results based on data accumulated in the past.
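As a hedged sketch of such a combination, the per-method estimates could be merged by a simple logistic model whose weights stand in for learning results from past data; the weights, bias, and decision threshold below are placeholder assumptions.

```python
import math


def combined_negative_score(signals, weights, bias=-1.0):
    """Merge per-method binary estimates (0 or 1) into a single
    probability-like score via a logistic function."""
    z = bias + sum(w * s for w, s in zip(weights, signals))
    return 1.0 / (1.0 + math.exp(-z))


def is_negative(signals, weights, threshold=0.5):
    """Final decision: combined score at or above the threshold."""
    return combined_negative_score(signals, weights) >= threshold
```

For example, with assumed weights [1.5, 1.0, 0.8] for the heart-rate, sigh, and gaze methods, two firing methods yield a score above 0.5, while no firing method yields a score below it.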
(Operation of the information presentation system)
FIG. 4 is a flowchart illustrating the information presentation process of the information presentation system 1.
Note that in FIG. 4, the processing of the input device 11 is shown in combination with the processing of the information processing device 12.
In step S11, the sensor unit 31 of the input device 11 recognizes the room and the external environment, and outputs external environment information indicating the recognized external environment to the information processing device 12.
In the following, as one example of behavior, a case in which the user is watching content broadcast on TV will be described.
In step S12, the sensor unit 32 of the input device 11 recognizes various kinds of information about the person and grasps the person's behavior, and outputs the user's biometric information and behavioral status obtained as a result of the recognition and grasping to the information processing device 12.
In step S13, the emotion estimation unit 61 of the information processing device 12 estimates the user's emotion (negative state) based on at least one of the external environment information supplied from the input device 11, the user's biometric information, and the user's behavioral status.
In step S14, the emotion estimation unit 61 determines whether or not the user is in a negative state. If it is determined in step S14 that the user is not in a negative state, the process returns to step S11 and the subsequent processing is repeated.
If it is determined in step S14 that the user is in a negative state, the process proceeds to step S15.
In step S15, the task generation unit 62 determines the content to be presented as options.
In step S16, the task generation unit 62 determines the presentation-destination device.
In step S17, the task generation unit 62 generates an option task based on the determined option content and presentation-destination device, and outputs the generated option task to the output control unit 63.
Note that the processing of steps S15 to S17 is performed based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral status, the information registered in the database 64, and the like.
In step S18, the output control unit 63 waits until it determines that the presentation timing of the option task has arrived. If it is determined in step S18 that the presentation timing has arrived, the process proceeds to step S19.
In step S19, the output control unit 63 controls the presentation of the option task by outputting it to the device determined as the presentation device in step S16 among the output devices 13.
In step S31, the presentation device of the output device 13 presents the option task.
In response to the presentation of the option task on the output device 13, the user starts the option task.
In step S20, the sensor unit 33 of the input device 11 acquires operation information from the user, and outputs the acquired operation information to the information processing device 12.
In step S21, the output control unit 63 of the information processing device 12 generates a response corresponding to the user's operation information and outputs it to the presentation device.
In step S32, the presentation device of the output device 13 presents the response generated by the output control unit 63.
In response to the presentation of the response on the output device 13, the user ends the option task.
As described above, a user who is watching TV and is estimated to be in a negative state not only watches TV but also responds to an option task presented at a timing that does not feel unnatural. This is expected to relieve the user's negative state.
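The flow of FIG. 4 can be condensed, purely as an illustrative sketch, into a single sensing-to-response cycle; the callables and data shapes below are assumptions, not part of the disclosure.

```python
def presentation_cycle(sense_environment, sense_user, estimate_negative,
                       generate_task, presentation_due, present, respond):
    """One pass of the loop of FIG. 4: sense, estimate, and, when the
    user is negative and the timing has arrived, present a task and
    collect a response. Returns the answer, or None if nothing was shown."""
    env = sense_environment()                      # S11
    bio, behavior = sense_user()                   # S12
    if not estimate_negative(env, bio, behavior):  # S13/S14
        return None
    task = generate_task(env, bio, behavior)       # S15-S17
    if not presentation_due():                     # S18
        return None
    present(task)                                  # S19/S31
    return respond(task)                           # S20/S21/S32
```

In a real system the wait of step S18 would block or reschedule rather than skip the cycle; returning None here simply keeps the sketch side-effect free.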
(Content of the option task)
The content of the option task presented to the user satisfies, for example, at least one of the following three conditions.
1. The correct answer is easy to select. For example, the question is extremely easy to answer, or concerns something the user has experienced in the past or has only recently learned.
2. The content interests the user. For example, it relates to content that matches the user's preferences, or is closely related to the content the user is currently viewing.
3. Any choice counts as a correct answer. For example, the task asks about the user's current state, or has no generally correct answer (that is, it contains no incorrect answers). This lets the user take satisfaction in the act of choosing itself, and ensures that no choice puts the user at a disadvantage.
Options satisfying these conditions are generated based on the user's attribute information, the content the user is viewing, its time information, and the like.
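A minimal, assumed encoding of these three conditions as a filter over candidate tasks might look as follows; the candidate fields (`difficulty`, `topics`, `all_correct`) are illustrative attributes, not terms from the disclosure.

```python
def satisfies_conditions(task, user_interests, recent_experiences,
                         easy_threshold=0.2):
    """True if the candidate meets at least one of conditions 1-3."""
    # Condition 1: trivially easy, or recently experienced/learned.
    easy = (task["difficulty"] <= easy_threshold
            or bool(set(task["topics"]) & set(recent_experiences)))
    # Condition 2: overlaps with the user's interests.
    interesting = bool(set(task["topics"]) & set(user_interests))
    # Condition 3: every choice counts as correct (survey-like task).
    no_wrong_answer = task.get("all_correct", False)
    return easy or interesting or no_wrong_answer


def pick_task(candidates, user_interests, recent_experiences):
    """Return the first acceptable candidate, or None."""
    for task in candidates:
        if satisfies_conditions(task, user_interests, recent_experiences):
            return task
    return None
```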
FIG. 5 is a diagram showing a presentation example of an option task.
In the example of FIG. 5, presentation examples 101 and 102 of quiz-style option tasks matched to the user's attributes and behavioral status are shown.
In presentation example 101, the question "How many curves are there in total, uphill and downhill, on the Irohazaka tourist road in Nikko City, Tochigi Prefecture?" and three choices, "48", "72", and "96", are presented.
That is, presentation example 101 is presented, for example, when the user is from Tochigi Prefecture or has just visited Irohazaka. Presentation example 101 is an option task that satisfies at least condition 1 above.
If the user is from Tochigi Prefecture, or has just visited Irohazaka, the user can easily select the correct answer.
In presentation example 102, the question "In which sport is the end of a match called 'no side', meaning that once the match is over there are neither friends nor foes?" and four choices, "1 Rugby", "2 Lacrosse", "3 Hockey", and "4 American football", are presented.
That is, presentation example 102 is presented, for example, when the user is interested in rugby. Presentation example 102 is an option task that satisfies at least conditions 1 and 2 above.
If the user is interested in rugby, the user can easily select the correct choice.
FIG. 6 is a diagram showing another presentation example of an option task.
In the example of FIG. 6, a presentation example 103 of an option task for choosing a present is shown.
In presentation example 103, the prompt "Please choose a Mother's Day present." and three choices, "A: A bouquet of carnations or other flowers", "B: A set of sweets or gourmet foods", and "C: Practical items such as a parasol or handkerchief", are presented.
That is, presentation example 103 is an option task that satisfies at least condition 3 above.
In the case of presentation example 103, as in a questionnaire, there are no incorrect answers and no choice causes a disadvantage, so the user can choose freely.
FIG. 7 is a diagram showing still another presentation example of an option task.
In the example of FIG. 7, a presentation example 104 of an option task serving as a vote during content viewing is shown.
In presentation example 104, in a quiz program, the question "Do you think pudding with soy sauce tastes like sea urchin?" and two choices, "Agree" and "Disagree", are presented as a vote directed at viewers.
That is, presentation example 104 is an option task that satisfies conditions 2 and 3 above.
In the case of presentation example 104 as well, as in a questionnaire, there are no incorrect answers and no choice causes a disadvantage, so the user can choose freely.
FIG. 8 is a diagram showing another presentation example of an option task.
FIG. 8 shows a presentation example 105 using voice.
In presentation example 105, with the TV serving as the visual presentation device 41 powered off, the option task "Shall I turn on the TV?" is presented to the user from the audio presentation device 42. In this case, two choices, "Yes" and "No", are assumed.
That is, presentation example 105 is an option task that satisfies condition 3 above.
In the case of presentation example 105, the choices concern the user's next action and there is no incorrect answer, so the user can choose naturally.
(Presentation timing of option tasks)
FIG. 9 is a diagram showing an example of the presentation timing of an option task.
The example of FIG. 9 shows how, when the user is estimated to be in a negative state while viewing content on the TV serving as the visual presentation device 41, the presentation timing of the option task is varied according to the degree of the negative state (hereinafter also referred to as the negative degree).
The user is, for example, watching content in which a dinosaur is rampaging, on the TV serving as the visual presentation device 41. The arrow illustrated to the right of the user indicates the user's negative degree as estimated by the information presentation system 1; the negative degree increases from bottom to top.
When the user's negative degree is large, the information presentation system 1 presents the option task in a corner of the screen even while the user is viewing the content.
On the other hand, when the user's negative degree is small, the information presentation system 1 presents the option task at a natural break, such as the start of a commercial, the end of the content, or the moment the power is turned off.
The presentation time of an option task is set, for example, to 10 seconds to 2 minutes as the time required to answer. The number of option tasks presented is preferably one or two. If answering takes a long time, or if many option tasks are presented, the presentation itself becomes a burden on the user.
Also, as long as the timing does not feel unnatural, option tasks may continue to be presented while the user remains in a negative state. For example, if there are three timings per hour at which an option task can be presented, and the user is continuously estimated to be negative across those three timings, an option task may be presented at each of them.
Although FIG. 9 describes an example in which the timing is controlled according to the negative degree, the number of presentations or the presentation duration may instead be controlled according to the negative state.
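The timing policy of FIG. 9 could be reduced, as an assumed sketch, to a three-way decision; the 0.7 boundary between "strongly" and "mildly" negative and the event names are illustrative values, not part of the disclosure.

```python
NATURAL_BREAKS = ("commercial_start", "content_end", "power_off")


def presentation_decision(negative_degree, current_event,
                          immediate_threshold=0.7):
    """Return 'overlay_now' (corner of the screen, mid-content),
    'present_now' (at a natural break), or 'wait'."""
    if negative_degree >= immediate_threshold:
        return "overlay_now"
    if current_event in NATURAL_BREAKS:
        return "present_now"
    return "wait"
```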
As described above, in the present technology, whether the user is in a negative state is estimated, and in response to the estimation that the user is in a negative state, the presentation of an option task is controlled so as to prompt a choice and have the user make a decision.
As a result, the user obtains a trigger for resolving the negative state, and can resolve or improve it.
<2. Extension example 1 (behavior based on a to-do list)>
The information presentation system 1 of FIG. 1 described above also functions as an application that encourages user behavior, by generating option tasks based on a to-do list in which the user's planned actions are registered in the database 64 or the like, and on the user's habit information.
FIG. 10 is a diagram showing a presentation example of an option task in extension example 1.
FIG. 10 shows an example of presenting an option task generated based on the user's habit information when the user returns home.
For example, it is registered in the database 64 as the user's habit information that, in daily life, the user habitually turns on the TV first upon returning home.
When the user is determined to be in a negative state, the task generation unit 62 generates, based on the user's habit information and behavior tendency information registered in the database 64, an option task with two choices: "Welcome home. Shall I turn on the TV first? Or shall I turn on the lights?" In this case, since choosing neither is also conceivable, strictly speaking this can be said to be an option task with three choices.
The output control unit 63 controls the smart speaker serving as the audio presentation device 42 to output the generated option task when the user returns home. As a result, the smart speaker outputs the spoken option task "Welcome home. Shall I turn on the TV first? Or shall I turn on the lights?"
For example, when the user responds to the presented option task with "I'd like the TV on", the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12. The output control unit 63 turns on the TV serving as the visual presentation device 41 based on the user's response. At that time, the database update unit 65 updates the user's behavior tendency information in the database 64 based on the user's response.
As described above, in the present technology, option tasks are presented based on the user's habit information and behavior tendency information. This allows option tasks to be presented naturally within everyday life.
In addition, the burden on the user can be reduced by choosing, as one of the choices of the option task, something the user is likely to respond to easily, based on the user's usual response and behavior tendencies.
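One way to realize this, sketched under the assumption that habit information is kept as a simple log of post-arrival actions, is to promote the most frequent action to the first choice of the prompt; the log format and the naive sentence assembly are illustrative, not part of the disclosure.

```python
from collections import Counter


def arrival_options(habit_log, n_choices=2):
    """Rank past post-arrival actions by frequency and keep the top few,
    so the habitual action becomes the easy-to-accept first choice."""
    ranked = [action for action, _ in Counter(habit_log).most_common()]
    return ranked[:n_choices]


def arrival_prompt(habit_log):
    """Assemble a two-choice spoken prompt from the ranked actions."""
    options = arrival_options(habit_log)
    return "Welcome home. " + "? Or ".join(options) + "?"
```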
FIG. 11 is a diagram showing another presentation example of an option task in extension example 1.
FIG. 11 shows an example in which an option task generated based on the user's to-do list is presented 10 minutes before the time at which the user's outing schedule would be presented.
For example, an outing schedule is registered in the database 64 as one of the planned actions on the user's to-do list, to be presented at the scheduled time. In the user's room, the TV serving as the visual presentation device 41 is on.
When the user is determined to be in a negative state, the task generation unit 62 generates, based on the user's to-do list registered in the database 64, an option task with two choices: "It is almost time to go out. Shall I turn off the TV? (Or leave it on?)", to be presented 10 minutes before the time at which the outing schedule would be presented.
The output control unit 63 controls the smart speaker serving as the audio presentation device 42 to output the generated option task 10 minutes before the time at which the user's outing schedule would be presented. As a result, the smart speaker outputs the spoken option task "It is almost time to go out. Shall I turn off the TV? (Or leave it on?)"
For example, when the user responds to the presented option task with "Can you turn off the TV?", the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12. The output control unit 63 turns off the TV serving as the visual presentation device 41 based on the user's response. At that time, the database update unit 65 updates the user's behavior tendency information in the database 64 based on the user's response.
As described above, in the present technology, based on the user's to-do list, the presentation timing is modified relative to the original timing at which the planned action would be presented to the user, and the option task is deliberately presented earlier. This allows a more natural and more effective presentation of the option task, relieving the user's negative state and prompting the user toward the planned action.
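Under the assumption that each to-do entry carries a scheduled reminder time, the advanced presentation can be sketched as a fixed lead time applied only while the user is negative; the 10-minute lead mirrors the example above, and the data layout is an assumption.

```python
from datetime import datetime, timedelta


def task_time(scheduled_at, user_negative, lead=timedelta(minutes=10)):
    """When to present the option task for a to-do entry: earlier by
    `lead` for a negative user, otherwise at the original time."""
    return scheduled_at - lead if user_negative else scheduled_at
```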
<3. Extension example 2 (dialogue with a character / voice device)>
The information presentation system 1 of FIG. 1 described above also functions as a dialogue application that has a voice device with a character function converse with the user, by generating option tasks that take the content of the dialogue with the character into account.
A voice device with a character function is a voice device that can communicate with the user through dialogue while a character displayed by a liquid crystal hologram or the like moves.
FIG. 12 is a diagram showing a presentation example of an option task in extension example 2.
FIG. 12 shows an example in which, when the user is alone in a room, a voice device with a character function presents an option task whose choices include not only the next action that can be estimated from the user's state and behavior tendency information, but also other actions.
For example, the fact that the user usually watches TV at this time of day is registered in the database 64 as the user's behavior tendency information.
However, the TV is not turned on and the user has turned their back to it, so the user is presumed to be in a negative state. In this case, even when it is estimated from the user's behavior tendency information that the user clearly wants to watch TV, the task generation unit 62 deliberately generates an option task that includes not only turning on the TV but also a plurality of other actions as choices.
In addition to TV, actions such as listening to music, checking email, or eating are included in the choices.
The output control unit 63 controls the voice device with a character function, serving as the audio presentation device 42, to output the generated option task. As a result, the character (a penguin) of the voice device with a character function presents, as part of the dialogue, the spoken option task "It's almost ○ o'clock. Shall I turn on the TV? Play some music? Or check your email?"
For example, when the user responds to the presented option task with "I don't feel like watching TV today, so maybe some music", the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12. Based on the user's response, the output control unit 63 continues the dialogue on the voice device with a character function by having it output "What kind of music would you like?" At that time, the database update unit 65 updates the user's behavior tendency information in the database 64 based on the user's response.
As described above, in the present technology, an option task is presented that deliberately includes a plurality of actions as choices, beyond the choice estimated from the user's behavior tendency information. This allows a user who is in a negative state and may not want to do anything to "choose" naturally.
Note that by having the voice device with a character function present this option task during casual conversation, while the user is watching the news on TV, or at the timing when the user returns home, flexible dialogue becomes possible within the flow of the user's daily life. This makes it possible to present tasks without a sense of incongruity, and to play an advisory role.
In extension example 2, a voice device with a character function that provides visual information and a character function, such as GateBox (registered trademark), was described as an example of a voice device capable of communicating with the user. However, in extension example 2, any voice device capable of communicating with the user may be used, including an audio presentation device 42 without visual information or a character function.
<4. Extension example 3 (content viewing / games)>
The information presentation system 1 of FIG. 1 described above also functions as a content viewing application or a game application that presents option tasks during content viewing or game play when the user is estimated to be in a negative state.
FIG. 13 is a diagram showing a presentation example of an option task in extension example 3.
FIG. 13 shows an example of presenting an advertisement whose content includes an option task when the user is estimated to be in a negative state while viewing content.
 図13の左側に示されるように、ユーザが、コンテンツ視聴中に広告が提示されるアプリケーションを用いて、視覚提示デバイス41であるスマートホンでコンテンツを視聴中に、ネガティブ状態であると判定された場合、タスク生成部62は、選択肢タスクを生成する。 As shown on the left side of FIG. 13, it is determined that the user is in a negative state while viewing the content on the smartphone which is the visual presentation device 41 by using the application in which the advertisement is presented while viewing the content. In this case, the task generation unit 62 generates an alternative task.
 As indicated by arrow P1, the output control unit 63, based on the original advertisement display timing, replaces the normal advertisement and causes the smartphone to present an advertisement containing a choice task in which the women A to C are displayed as options together with the question "Please choose one." Note that the advertisement containing the choice task may, like the original advertisement, be acquired from an advertisement server via the network by transmitting the user's preference information stored in the database 64.
 For example, when the user performs an operation of selecting "A" in response to the presented choice task, the sensor unit 33 detects the content of the user's operation and outputs it to the information processing device 12. The information processing device 12, for example, transmits information indicating the content of the user's response to the advertisement server via the network. When the user's response is input, the output control unit 63 causes the content to be displayed again. At that time, the database update unit 65 updates the user's preference information in the database 64 based on the user's response.
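The response-handling steps above (record the selection, notify the advertisement server, resume the content) can be sketched as follows. The function name, the dictionary standing in for database 64, and the returned action string are illustrative assumptions, not identifiers from the embodiment.

```python
# Hypothetical sketch of handling the user's choice-task response.
# A plain dict stands in for the preference information of database 64.
preference_db = {"user1": {"A": 0, "B": 0, "C": 0}}

def on_user_response(user: str, selected: str) -> str:
    """Record the selection and signal that content playback should resume.

    In the embodiment, the response is also sent to the advertisement
    server over the network; that step is omitted in this toy sketch.
    """
    preference_db[user][selected] += 1   # database update unit 65
    return "resume_content"              # output control unit 63 redisplays content

action = on_user_response("user1", "A")
```

A real implementation would key the update by a user identifier obtained from the sensor unit 33 rather than a literal string.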
 The case where the advertisement containing the choice task is presented based on the timing at which the original advertisement would be displayed has been described; however, it may instead be presented at a timing based on when the user is estimated to be in a negative state among the advertisement display timings. Specifically, if the advertisement containing the choice task would originally be presented at the fourth advertisement display timing, it may be presented earlier, for example at the second. The degree to which the display is advanced may be determined according to the degree of the negative state.
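The advancement rule described above (fourth slot pulled forward to the second, by an amount depending on the degree of the negative state) can be sketched as below. The linear mapping and the 0-to-1 degree scale are assumptions for illustration; the embodiment only states that the degree "may" determine how much the display is advanced.

```python
# Toy sketch: advance the choice-task ad to an earlier slot in proportion
# to the user's negative-state degree (assumed normalized to 0..1).

def advanced_slot(original_slot: int, negative_degree: float) -> int:
    """Return the (1-indexed) slot at which the choice-task ad is shown."""
    shift = round(negative_degree * (original_slot - 1))  # assumed linear rule
    return max(1, original_slot - shift)                  # never before slot 1

# With degree 0.67, the 4th scheduled ad is pulled forward to the 2nd slot.
slot = advanced_slot(4, 0.67)
```

At degree 0 the original timing is kept, and at degree 1 the ad moves to the first slot, matching the intent that a stronger negative state yields an earlier presentation.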
 The content of the options may be a matter on which the advertised manufacturer consults the user, a question asking for the user's opinion, or simply a set of items from which the user picks what they like. This makes the advertisement interactive. When an advertisement is merely displayed, a user who is not interested in it simply waits for it to end and feels that time passes slowly; by presenting an advertisement containing a choice task, the passage of time can be made to feel faster.
 The visual presentation device 41 may be any device having a visual display function, such as a smartphone, a tablet, a personal computer, or a TV. The content to be viewed may be an online video or video distribution content.
 Note that, in FIG. 13, the choice task alone may be displayed instead of an advertisement containing the choice task.
 Although FIG. 13 shows an example during content viewing, during game play, for example, the choice task or an advertisement containing the choice task can be displayed during time-consuming processing, such as while the game is loading, during a scene transition, or while searching for an opponent. During such time-consuming processing, some users are merely waiting for time to pass; by presenting the choice task or an advertisement containing it to such users, the passage of time can be made to feel faster.
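The gating condition described above, namely present only during a waiting phase and only when the user is in a negative state, can be sketched as follows. The phase names and the threshold value are assumptions for this example.

```python
# Hypothetical sketch: decide whether to present a choice task during
# time-consuming game phases. Phase names and threshold are illustrative.
WAITING_PHASES = {"loading", "scene_transition", "matchmaking"}

def should_present_choice_task(phase: str, negative_degree: float,
                               threshold: float = 0.5) -> bool:
    """Present only while the user is otherwise just waiting AND the
    estimated negative-state degree (0..1) exceeds the threshold."""
    return phase in WAITING_PHASES and negative_degree >= threshold
```

During active play (for example, a combat scene) the task is withheld, so the presentation never interrupts the game itself.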
 As described above, in the present technology, an advertisement containing a choice task, or the choice task itself, is presented during content viewing or game play. As a result, the choice task can be presented naturally even during content viewing or game play. For example, presenting an advertisement containing a choice task that asks for the user's opinion benefits both the advertiser side and the user side.
<5. Others>
 (Effects of the present technology)
 In the present technology, the user's negative state is estimated, and the presentation of a choice task to the user is controlled in response to the estimation of the user's negative state.
 This can provide an opportunity to resolve the user's negative psychological state.
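The overall estimation-to-presentation flow can be sketched as follows. The function names, the averaging of the three signal groups named in configuration (11), and the threshold are illustrative assumptions, not the embodiment's actual estimation method.

```python
# Illustrative sketch of the core flow: estimate the negative state, then
# present a choice task (with no incorrect options) only when warranted.

def estimate_negative_degree(biometric: float, behavior: float,
                             environment: float) -> float:
    """Combine the three signal groups of configuration (11) into a 0..1
    degree. A simple clipped average is an assumption for this sketch."""
    return max(0.0, min(1.0, (biometric + behavior + environment) / 3.0))

def maybe_present_choice_task(degree: float, threshold: float = 0.5):
    """Return a choice task only when the user is estimated to be in a
    negative state; otherwise present nothing."""
    if degree < threshold:
        return None
    return {
        "question": "Which would you like?",  # a question with no wrong answer
        "options": ["A", "B", "C"],
    }

task = maybe_present_choice_task(estimate_negative_degree(0.8, 0.7, 0.6))
```

Per configuration (5), the degree returned by the estimator could also scale how often, when, and for how long the task is presented.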
 Note that menu displays and the like are presented in response to the user's operation when the user wants to select or switch something, and therefore differ from the choice task of the present technology, whose purpose is to prompt the user to make a selection.
 (Example computer configuration)
 The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, the programs constituting the software are installed from a program recording medium onto a computer built into dedicated hardware, a general-purpose personal computer, or the like.
 FIG. 14 is a block diagram showing an example hardware configuration of a computer that executes the series of processes described above by means of a program.
 A CPU 301, a ROM (Read Only Memory) 302, and a RAM 303 are connected to one another by a bus 304.
 An input/output interface 305 is further connected to the bus 304. An input unit 306 including a keyboard and a mouse and an output unit 307 including a display and speakers are connected to the input/output interface 305. Also connected to the input/output interface 305 are a storage unit 308 including a hard disk or non-volatile memory, a communication unit 309 including a network interface, and a drive 310 that drives removable media 311.
 In the computer configured as described above, the CPU 301 performs the series of processes described above by, for example, loading a program stored in the storage unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executing it.
 The program executed by the CPU 301 is provided, for example, recorded on the removable media 311, or via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and is installed in the storage unit 308.
 The program executed by the computer may be a program whose processing is performed chronologically in the order described in this specification, or a program whose processing is performed in parallel or at necessary timings, such as when it is called.
 In this specification, a system means a set of multiple components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, multiple devices housed in separate housings and connected via a network, and a single device in which multiple modules are housed in one housing, are both systems.
 The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
 Embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
 For example, the present technology can adopt a cloud computing configuration in which one function is shared and jointly processed by multiple devices via a network.
 Each step described in the flowcharts above can be executed by one device or shared among multiple devices.
 Furthermore, when one step includes multiple processes, the multiple processes included in that one step can be executed by one device or shared among multiple devices.
<Example combinations of configurations>
 The present technology can also have the following configurations.
(1)
 An information processing device including a presentation control unit that controls the presentation of a choice task to a user in response to the user's negative state being estimated by an estimation unit that estimates the user's negative state.
(2)
 The information processing device according to (1), in which the choice task is a task that causes the user to select one of the options, at the user's own will, in response to a question.
(3)
 The information processing device according to (1) or (2), in which the choice task is generated with reference to attribute information of the user.
(4)
 The information processing device according to (1) or (2), in which the choice task is generated as a task that does not include incorrect options.
(5)
 The information processing device according to (4), in which the presentation control unit changes at least one of the number of times the choice task is presented, the timing of presentation, and the duration for which presentation continues, according to the degree of the user's negative state.
(6)
 The information processing device according to (4), in which the presentation control unit controls the presentation of an advertisement containing the choice task during content viewing or game play.
(7)
 The information processing device according to (3), in which the presentation control unit controls the presentation of the choice task whose options include an action the user will perform next.
(8)
 The information processing device according to (7), in which the presentation control unit controls the presentation of the choice task whose options include an action the user is estimated to perform next based on the user's behavioral tendencies.
(9)
 The information processing device according to (7), in which the presentation control unit controls the presentation of the choice task whose options include an action the user is estimated to perform next based on the user's schedule.
(10)
 The information processing device according to (9), in which the presentation control unit controls the presentation of the choice task whose options include the action estimated to be performed next so that it is presented earlier than the scheduled time at which the user's schedule is to be presented.
(11)
 The information processing device according to any one of (1) to (10), in which the estimation unit estimates the user's negative state based on at least one of the user's biometric information, the user's behavioral status, and the user's external environment information.
(12)
 An information processing method in which an information processing device controls the presentation of a choice task to a user in response to the user's negative state being estimated by an estimation unit that estimates the user's negative state.
(13)
 A program for causing a computer to function as a presentation control unit that controls the presentation of a choice task to a user in response to the user's negative state being estimated by an estimation unit that estimates the user's negative state.
 1 Information presentation system, 11 Input device, 12 Information processing device, 13 Output device, 21 Network, 31 to 33 Sensor units, 41 Visual presentation device, 42 Voice presentation device, 61 Emotion estimation unit, 62 Task generation unit, 63 Output control unit, 64 Database, 65 Database update unit

Claims (13)

  1.  An information processing device comprising a presentation control unit that controls the presentation of a choice task to a user in response to the user's negative state being estimated by an estimation unit that estimates the user's negative state.
  2.  The information processing device according to claim 1, wherein the choice task is a task for causing the user to select one of the options, at the user's own will, in response to a question.
  3.  The information processing device according to claim 1, wherein the choice task is generated with reference to attribute information of the user.
  4.  The information processing device according to claim 1, wherein the choice task is generated as a task that does not include incorrect options.
  5.  The information processing device according to claim 4, wherein the presentation control unit changes at least one of the number of times the choice task is presented, the timing of presentation, and the duration for which presentation continues, according to the degree of the user's negative state.
  6.  The information processing device according to claim 4, wherein the presentation control unit controls the presentation of an advertisement containing the choice task during content viewing or game play.
  7.  The information processing device according to claim 3, wherein the presentation control unit controls the presentation of the choice task whose options include an action the user will perform next.
  8.  The information processing device according to claim 7, wherein the presentation control unit controls the presentation of the choice task whose options include an action the user is estimated to perform next based on the user's behavioral tendencies.
  9.  The information processing device according to claim 7, wherein the presentation control unit controls the presentation of the choice task whose options include an action the user is estimated to perform next based on the user's schedule.
  10.  The information processing device according to claim 9, wherein the presentation control unit controls the presentation of the choice task whose options include the action estimated to be performed next so that it is presented earlier than the scheduled time at which the user's schedule is to be presented.
  11.  The information processing device according to claim 1, wherein the estimation unit estimates the user's negative state based on at least one of the user's biometric information, the user's behavioral status, and the user's external environment information.
  12.  An information processing method in which an information processing device controls the presentation of a choice task to a user in response to the user's negative state being estimated by an estimation unit that estimates the user's negative state.
  13.  A program for causing a computer to function as a presentation control unit that controls the presentation of a choice task to a user in response to the user's negative state being estimated by an estimation unit that estimates the user's negative state.
PCT/JP2021/009733 2020-03-25 2021-03-11 Information processing device, method, and program WO2021193086A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020053701 2020-03-25
JP2020-053701 2020-03-25

Publications (1)

Publication Number Publication Date
WO2021193086A1 true WO2021193086A1 (en) 2021-09-30

Family

ID=77891714

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/009733 WO2021193086A1 (en) 2020-03-25 2021-03-11 Information processing device, method, and program

Country Status (1)

Country Link
WO (1) WO2021193086A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110134026A1 (en) * 2009-12-04 2011-06-09 Lg Electronics Inc. Image display apparatus and method for operating the same
JP2018085120A (en) * 2017-12-14 2018-05-31 ヤフー株式会社 Device, method and program
WO2019021575A1 (en) * 2017-07-27 2019-01-31 ソニー株式会社 Information processing system, information processing device, information processing method, and recording medium



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21776324

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21776324

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP