WO2021230100A1 - Information processing device and method, and program - Google Patents

Information processing device and method, and program

Info

Publication number
WO2021230100A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
option
information processing
task
presentation
Prior art date
Application number
PCT/JP2021/017145
Other languages
French (fr)
Japanese (ja)
Inventor
茜 近藤
陽方 川名
至 清水
Original Assignee
Sony Group Corporation
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2021230100A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/0245 Detecting, measuring or recording pulse rate or heart rate by using sensing means generating electric signals, i.e. ECG signals
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus

Definitions

  • The present technology relates to an information processing device, an information processing method, and a program, and in particular to an information processing device, method, and program that make it possible to more reliably obtain a user's answer to a question including a plurality of options.
  • In Patent Document 1, it is proposed that, focusing on the ego state of the speaker, the response of the dialogue agent obtained by collating text against a scenario database be changed, so that the dialogue is natural and easy for the speaker to accept.
  • In Patent Document 2, it is proposed that the operator's underlying mood and the immediate mood, which is a temporary mood, be analyzed and identified, and that an operating device be controlled based on the identified immediate mood.
  • Human emotions can be broadly divided into positive emotions and negative emotions.
  • When the user is in a negative psychological state (hereinafter referred to as a negative state), an answer to a question or questionnaire is rarely given.
  • The present technology was made in view of such a situation, and makes it possible to more reliably obtain a user's answer to a question including a plurality of options.
  • The information processing device of one aspect of the present technology includes a presentation control unit that controls the presentation of a choice task to the user in response to an estimation, by an estimation unit that estimates the user's positive state, that the user is in a positive state.
  • In the information processing method of one aspect of the present technology, the presentation of the choice task to the user is controlled in accordance with the estimation of the user's positive state by the estimation unit that estimates the user's positive state.
  • FIG. 1 is a block diagram showing the configuration of an embodiment of an information presentation system to which the present technology is applied. FIG. 2 is a block diagram showing a functional configuration example of an information processing device. FIG. 3 is a diagram showing an example of a method of estimating a positive state. FIG. 4 is a flowchart explaining the information presentation process of the information presentation system. FIG. 5 is a diagram showing a presentation example of a choice task. FIG. 6 is a diagram showing an example of the presentation timing of a choice task. FIG. 7 is a diagram showing a presentation example of the choice task in Extended Example 1. FIG. 8 is a diagram showing an example of a case where it is determined that the user is not in a positive state in Extended Example 2. FIG. 9 is a diagram showing a presentation example of the choice task in Extended Example 2. FIG. 10 is a block diagram showing a configuration example of a computer.
  • FIG. 1 is a block diagram showing a configuration of an embodiment of an information presentation system to which the present technology is applied.
  • The information presentation system 1 of FIG. 1 acquires data indicating the user's behavioral status, estimates whether the user is in a positive state, and, if the user is estimated to be in a positive state, controls the presentation of a choice task so that the user is prompted to make a selection and a decision.
  • The choice task is a task in which a question including a plurality of choices is given and a choice is selected by the user's decision.
  • The number of choices is finite (for example, about 2 to 5).
  • Hereinafter, the user selecting a choice in response to the presentation of the choice task is referred to as performing the choice task.
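As a rough illustration, the choice task just described can be represented as a small data structure holding a question and a finite set of options; the class name `ChoiceTask`, its fields, and the validation rule below are assumptions made for this sketch, not anything specified in the disclosure:

```python
from dataclasses import dataclass


@dataclass
class ChoiceTask:
    """A question presented together with a small, finite set of options."""
    question: str
    options: list

    def __post_init__(self):
        # The description assumes a finite option count (about 2 to 5).
        if not 2 <= len(self.options) <= 5:
            raise ValueError("a choice task offers roughly 2 to 5 options")

    def answer(self, index: int) -> str:
        """Return the option the user selected by its index."""
        return self.options[index]
```

Performing the task then amounts to calling `answer` with the index of the user's selection, e.g. `ChoiceTask("Are you tired when you wake up?", ["Yes", "No"]).answer(1)`.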
  • The result of the user's selection is used as user information about the user, such as the user's attribute information and preference information.
  • the user information will be described in more detail later.
  • Alternatively, the result of the user's selection is used, for example, as a trigger for transitioning the state of a service in the information presentation system 1 to the next state.
  • When the service (application) for the user is, for example, a schedule or a to-do list, a choice task including options related to the items (such as appointments or events) in the schedule or to-do list is presented.
  • Depending on the selection result, the information presentation system 1 can shift the state of each item to a state such as being presented preferentially or being processed in the background. Specifically, if schedule A, one of the items, is changed to "participate", schedule A is presented preferentially, and if schedule B, one of the items, is changed to "cancel", the cancellation of schedule B is processed in the background.
  • When the service is, for example, content playback, the information presentation system 1 can determine the content to be reproduced next and shift it to the reproduction state.
  • According to the present technology, the choice task can be performed quickly by the user in the positive state. That is, it is possible to have the user answer a question including choices more reliably; in other words, it is possible to more reliably obtain the user's answer to a question including choices.
  • the conditions of the option task will be described below.
  • FIG. 1 shows an example in which, when the user is estimated to be in a positive state, a choice task including the two options "Are you tired when you wake up? Yes / No" is presented by voice.
  • The choice task in FIG. 1 is an example of a choice task whose presentation purpose is the acquisition of the user's subjective data and whose selection result is used as user information.
  • Here, the positive state means, for example, a state in which the sympathetic nervous system is suppressed and the parasympathetic nervous system is activated.
  • The positive state may be estimated using, for example, fluctuations in heart rate, sympathetic nerve activity represented by the LF (Low Frequency) / HF (High Frequency) ratio in heart rate variability, or mental sweating as an index.
  • Alternatively, the positive state is, for example, a state in which the right frontal lobe is suppressed relative to the left frontal lobe. In this case, the positive state is estimated by measuring the α-band power of the EEG.
  • The positive state may also be estimated based on, for example, prosodic features in the user's utterances. In this case, if a predetermined frequency band in the prosody exceeds a reference value, the user can be estimated to be in a positive state.
  • The positive state may also be estimated based on recognition of the user's facial expression, for example.
  • When the captured facial expression of the user is classified as an expression whose features are positive, it can be estimated that the user is in a positive state.
  • The positive state may also be estimated based on various models that define emotional states.
  • For example, the positive state may be defined as a pleasant state in the so-called Russell circumplex model of affect, which defines human emotional states on two axes: arousal/deactivation and pleasure/displeasure.
  • Alternatively, the positive state may be defined based on results statistically estimated by the information presentation system 1 according to the user's attributes and behavioral situation.
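As an illustration of the LF/HF index mentioned above, the ratio can be estimated from an evenly resampled heartbeat (RR-interval) series with a direct DFT. The band edges follow the common convention LF = 0.04-0.15 Hz and HF = 0.15-0.40 Hz; the function names, the 4 Hz resampling rate, and the threshold rule are assumptions made for this sketch, not values from the description:

```python
import cmath
import math


def band_power(samples, fs, f_lo, f_hi):
    """Sum the spectral power of `samples` over [f_lo, f_hi) Hz via a direct DFT."""
    n = len(samples)
    mean = sum(samples) / n
    xs = [v - mean for v in samples]  # remove the DC component
    power = 0.0
    for k in range(1, n // 2 + 1):
        f = k * fs / n
        if f_lo <= f < f_hi:
            coeff = sum(xs[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power


def lf_hf_ratio(rr_intervals_ms, fs=4.0):
    """LF/HF ratio of an evenly resampled RR-interval series.

    A low ratio suggests parasympathetic dominance, which the description
    associates with the positive state.
    """
    lf = band_power(rr_intervals_ms, fs, 0.04, 0.15)
    hf = band_power(rr_intervals_ms, fs, 0.15, 0.40)
    return lf / hf if hf > 0 else float("inf")
```

A real heart-rate sensor would require interpolation of the irregular beat times onto the fixed sampling grid before this computation; that step is omitted here.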
  • Laughter and smiles are caused by, for example, conversational exchanges and by viewing television and Web content.
  • When the content is a video, for example, a video of a celebrity the user likes, or a video of a person or animal smiling or moving happily, may be displayed.
  • These videos are among the factors that put the user in a positive state.
  • When text created by the user contains words such as love, happiness, delicious, or fun, these texts are among the factors from which the user can be presumed to be in a positive state.
  • The text is not limited to input information and may be obtained by voice recognition of the user's utterances.
  • When the user is watching a sports game, the situation in which the team the user supports is winning is one of the factors for presuming that the user is in a positive state.
  • Similarly, the situation in which the supported team comes from behind or a favored player plays an active part is one of the factors for presuming that the user is in a positive state.
  • The information presentation system 1 can estimate the user's state based on one or more of the factors presumed to put the user in a positive state.
  • the information presentation system 1 is composed of an input device 11, an information processing device 12, and an output device 13.
  • the input device 11, the information processing device 12, and the output device 13 are connected to each other via a network 21 such as a wireless LAN (Local Area Network).
  • the input device 11 and the output device 13 may be an input unit and an output unit of the information processing device 12, respectively.
  • the input device 11 is composed of a sensor unit 31, a sensor unit 32, and a sensor unit 33.
  • the sensor unit 31 recognizes the user's external environment including the room, and outputs external environment information indicating the external environment obtained as a result of the recognition to the information processing device 12.
  • For example, the sensor unit 31 recognizes the shape of the living space and the home appliances and furniture present there, and grasps their arrangement. In addition, the sensor unit 31 acquires information outside the living space, such as weather and temperature.
  • The sensor unit 31 is composed of, for example, an image sensor, LiDAR, a temperature sensor, an illuminance sensor, a Web camera connected via the Internet, an input unit for acquiring information from websites, and the like.
  • the sensor unit 32 recognizes various information and actions of the user, and outputs the biometric information of the user obtained as a result of the recognition and the action status information indicating the action status of the user to the information processing device 12.
  • For example, the sensor unit 32 recognizes information such as the presence or absence of users in the living space, the number of people, their postures, and the orientation of their faces. Further, the sensor unit 32 recognizes what the user is doing now, that is, the user's behavioral status.
  • the sensor unit 32 includes a motion capture system such as OptiTrack (trademark), a distance image sensing system that measures the distance to an object using an image sensor, an infrared camera, a high-resolution depth sensor, and the like.
  • the sensor unit 32 further acquires the user's biological information (heartbeat, etc.).
  • The sensor unit 32 may be a heart rate sensor, a sweating sensor, a brain wave sensor, a temperature sensor, or the like, and may take the form of a wristband, an HMD (Head-Mounted Display), a glasses-type display, or the like incorporating these.
  • the sensor unit 32 may be a camera capable of measuring a heart rate value, a microphone capable of inputting prosody in utterance or utterance, or the like.
  • the sensor unit 33 acquires operation information by user's operation input or voice input, and outputs the acquired operation information to the information processing device 12.
  • For example, the sensor unit 33 acquires operation information such as the voice or operations input when the user performs the presented choice task, or the operations made when inputting information such as the user's to-do list.
  • the sensor unit 33 is composed of a keyboard, a touch panel, operation buttons, a controller, a smart phone, a tablet terminal, a microphone, or the like.
  • the information processing device 12 is composed of, for example, a personal computer or the like.
  • the information processing device 12 estimates the user's emotions, for example, whether or not the user is in a positive state, based on the information supplied from the input device 11.
  • the information processing device 12 generates an option task based on the information supplied from the input device 11 and causes the output device 13 to present the generated option task.
  • the output device 13 is composed of a visual presentation device 41, a voice presentation device 42, and the like.
  • the output device 13 presents an optional task supplied by the information processing device 12.
  • the visual presentation device 41 is composed of a TV, a projector, a display of a smart phone or a tablet terminal, or the like.
  • the visual presentation device 41 visually presents the option task to the user.
  • the voice presentation device 42 is composed of a speaker, a smart speaker, or the like.
  • the voice presentation device 42 presents the option task to the user by voice.
  • FIG. 2 is a block diagram showing a functional configuration example of the information processing apparatus.
  • the information processing apparatus 12 is configured to include an emotion estimation unit 61, a task generation unit 62, an output control unit 63, a database 64, a database update unit 65, and a content determination unit 66.
  • These functions are realized by the CPU (Central Processing Unit) of the information processing device 12 loading a predetermined program into RAM (Random Access Memory) and executing it.
  • The emotion estimation unit 61 estimates the user's emotion, for example, whether or not the user is in a positive state, based on at least one of the external environment information, the user's biometric information, and the user's behavioral status information supplied from the input device 11. When it is estimated that the user is in a positive state, the emotion estimation unit 61 causes the task generation unit 62 to generate a choice task.
  • When it is estimated that the user is not in a positive state, the emotion estimation unit 61 causes the content determination unit 66 to determine positive content to be presented to the user.
  • Positive content is content that makes the user's emotions positive, and is not particularly limited.
  • the positive content may be a video in which a celebrity of the user's taste appears or a song of the user's taste.
  • the positive content may be an instruction of an action that makes the user's emotions positive.
  • The task generation unit 62 determines the content of the choice task and the device on which to present it based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral status information, the information registered in the database 64, and the like, and generates the choice task.
  • the task generation unit 62 outputs the generated option task to the output control unit 63.
  • the output control unit 63 causes the output device 13 to present the optional tasks supplied from the task generation unit 62.
  • the output control unit 63 causes the output device 13 to output the positive content supplied from the content determination unit 66.
  • The output control unit 63 also controls the power on/off of the output device 13. Further, the output control unit 63 transmits the user's response, for example, as questionnaire information to a corresponding server (not shown) via a network (not shown).
  • In the database 64, personal information including the user's attribute information, preference information, personality characteristics, and the like is registered as user information about the user. Further, behavior tendency information including the user's to-do list and schedule, habit information, and information on the user's response and behavior tendencies toward choice tasks is also registered in the database 64 as user information.
  • the database update unit 65 updates the information registered in the database 64 based on the external environment information supplied from the input device 11, the biometric information of the user, the behavior status information of the user, the operation information of the user, and the like.
  • the content determination unit 66 determines the positive content to be presented to the user based on the user information registered in the database 64 and the like.
  • the content determination unit 66 reproduces or generates the determined positive content and outputs the determined positive content to the output control unit 63.
  • FIG. 3 is a diagram showing an example of a method of estimating a positive state by the emotion estimation unit 61.
  • the emotion estimation unit 61 estimates whether or not the user is in a positive state based on at least one of the user's biological information, the user's behavioral status, and the external environmental information.
  • For example, the emotion estimation unit 61 estimates whether or not the user is in a positive state based on the user's biological information, such as an increase in heart rate, the degree of smiling, or the state of the gaze.
  • When using the increase in heart rate, the emotion estimation unit 61 estimates that the user is in a positive state based on, for example, the difference from a predetermined reference value being equal to or greater than a certain value, or the heart rate exceeding the predetermined reference value for a certain period of time.
  • The reference value may be defined based on the user's average value or on a value measured at a specific timing, such as upon waking, or it may be the average value of a plurality of users or a value defined by the information presentation system 1.
  • When using the degree of smiling, laughter, or tone of voice, the emotion estimation unit 61 estimates that the user is in a positive state, for example, when the facial expression is a smile and the number of laughs in a certain period is equal to or greater than a reference value. Further, the emotion estimation unit 61 estimates that the user is in a positive state when the difference between the voice tone and a reference tone is equal to or greater than a reference number of Hz.
  • When using the state of the gaze, the emotion estimation unit 61 estimates that the user is in a positive state, for example, when the gaze does not point downward for a certain number of seconds or longer.
  • As the user's behavioral status, the emotion estimation unit 61 estimates whether or not the user is in a positive state based on various states such as content browsing, text creation, utterance, communication, exercise, and body movement.
  • When the emotion estimation unit 61 detects a positive term or image in Web content the user is browsing, it estimates that the user is in a positive state.
  • The emotion estimation unit 61 may also estimate whether or not the user is in a positive state based on the user's situation in a game being played (wins and losses, acquired items, and the like), the situation of the supported team in a sports game being watched (wins and losses, and the like), the user's behavior information at an entertainment venue, or the user's walking tendency at that time.
  • For example, the emotion estimation unit 61 estimates that the user is in a positive state when the supported team comes from behind to win, or when the user meets a favorite character at a theme park and walks with a bounce in their step.
  • As external environment information, the emotion estimation unit 61 uses information such as weather, temperature, humidity, traffic conditions, and environmental sounds to estimate whether or not the user is in a positive state. For example, the emotion estimation unit 61 estimates that the user is in a positive state when the train is not crowded or when the user's favorite music is playing.
  • Further, the emotion estimation unit 61 estimates that the user is in a positive state when it is sunny, when the humidity is low, or when the temperature is comfortable.
  • A plurality of these estimation methods may be performed in parallel or in combination. Whether or not a combined state corresponds to a positive state for the user may be estimated using learning results based on data accumulated in the past.
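The combined use of several of the cues above might be sketched as a simple majority vote; every threshold here (`hr_delta`, `smile_ref`, `gaze_limit`) is an illustrative placeholder, not a value taken from the description:

```python
def estimate_positive_state(heart_rate, baseline_hr, smile_count,
                            gaze_down_seconds, *,
                            hr_delta=10, smile_ref=3, gaze_limit=5):
    """Combine heart-rate, smile, and gaze cues into one rough estimate.

    All thresholds are hypothetical defaults chosen only for illustration.
    """
    cues = [
        heart_rate - baseline_hr >= hr_delta,  # heart-rate increase vs. reference
        smile_count >= smile_ref,              # number of laughs in a fixed window
        gaze_down_seconds < gaze_limit,        # gaze not pointing downward
    ]
    # Simple majority vote over the available cues.
    return sum(cues) >= 2
```

A learned combination, as the description suggests, would replace this fixed vote with a classifier trained on past data.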
  • FIG. 4 is a flowchart illustrating the information presentation process of the information presentation system 1.
  • In FIG. 4, the processing of the input device 11 is shown together with the processing of the information processing device 12.
  • In step S11, the sensor unit 31 of the input device 11 recognizes the room and the external environment, and outputs external environment information indicating the external environment obtained as a result of the recognition to the information processing device 12.
  • In step S12, the sensor unit 32 of the input device 11 recognizes various information about the user and grasps the user's behavior.
  • The sensor unit 32 outputs the user's biological information and behavioral status information obtained as a result of this recognition to the information processing device 12.
  • In step S13, the emotion estimation unit 61 of the information processing device 12 estimates the user's emotion (positive state) based on at least one of the external environment information, the user's biometric information, and the user's behavioral status information supplied from the input device 11.
  • In step S14, the emotion estimation unit 61 determines whether or not the user is in a positive state. If it is determined in step S14 that the user is not in a positive state, the process proceeds to step S15.
  • In step S15, the content determination unit 66 determines positive content based on the user information registered in the database 64, reproduces or generates the determined positive content, and outputs it to the output control unit 63.
  • The output control unit 63 controls the presentation of the positive content by outputting the positive content supplied from the content determination unit 66 to a predetermined presentation device in the output device 13.
  • In step S31, the presentation device of the output device 13 presents the positive content supplied from the output control unit 63.
  • The time for presenting the positive content may be a predetermined time set in advance, or the presentation may continue until it is determined in the next step S14 that the user is in a positive state.
  • After the process of step S15, the process returns to step S11, and the subsequent processes are repeated.
  • If it is determined in step S14 that the user is in a positive state, the process proceeds to step S16.
  • In step S16, the task generation unit 62 determines the content of the choice task. For example, the task generation unit 62 determines the content of the question (including the options) in the choice task.
  • In step S17, the task generation unit 62 determines the device on which the choice task is to be presented.
  • In step S18, the task generation unit 62 generates the choice task based on the determined content and presentation device, and outputs the generated choice task to the output control unit 63.
  • The processing of steps S16 to S18 is performed based on the external environment information supplied from the input device 11, the user's biometric information, the user's behavioral status information, the user information registered in the database 64, and the like.
  • In step S19, the output control unit 63 waits until it is determined that it is time to present the choice task. If it is determined in step S19 that it is time to present the choice task, the process proceeds to step S20.
  • In step S20, the output control unit 63 controls the presentation of the choice task by outputting it to the presentation device determined in step S17 from among the output devices 13.
  • In step S32, the presentation device of the output device 13 presents the choice task.
  • The user performs the choice task in response to its presentation on the output device 13.
  • In step S21, the sensor unit 33 of the input device 11 acquires the operation information from the user and outputs the acquired operation information to the information processing device 12.
  • In step S22, the database update unit 65 of the information processing device 12 registers the response corresponding to the user's operation information in the database 64 as user information, such as the user's attribute information or preference information.
  • This response is also, for example, transmitted by the output control unit 63 via the network to the server of the corresponding company as the answer to a questionnaire from that company or the like.
  • As described above, the choice task is presented to the user while the user is in a positive state, so the probability that the user responds to the presented choice task is estimated to be high. That is, in the information presentation system 1, it is expected that the user's answer to a question including choices can be obtained promptly and reliably by utilizing the user's positive state.
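The flow of FIG. 4 described above can be outlined as a sense-estimate-present loop. The function names and the injected callables below are assumptions made for this sketch, which mirrors steps S11 to S20 only in outline:

```python
def presentation_loop(sense, estimate_positive, pick_positive_content,
                      generate_choice_task, present, max_cycles=10):
    """Outline of the flow in FIG. 4: sense, estimate, then present.

    All behavior is injected via callables, since the actual units
    (sensors, estimator, task generator) are hardware- and data-dependent.
    """
    for _ in range(max_cycles):
        observation = sense()                         # S11/S12: environment + user info
        if estimate_positive(observation):            # S13/S14: positive-state estimation
            task = generate_choice_task(observation)  # S16-S18: build the choice task
            present(task)                             # S20: present the choice task
            return task
        # S15/S31: not positive, so present content that nudges the user
        present(pick_positive_content(observation))
    return None
```

A usage sketch with stub callables shows the loop presenting positive content until the estimate flips, then presenting the task.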
  • The content of the choice task presented to the user includes, for example, the following three types.
  • First, the content of the choice task is not limited in presentation timing, and is something the information presentation system 1 wants the user to answer or something the user wants to do.
  • The content of the choice task may be for acquiring the user's subjective data, or for confirming the user's To Do tasks or intentions by the system.
  • The confirmation of the user's intention by the information presentation system 1 includes, for example, confirming how the information presentation system 1 should respond in the background to an incoming call or e-mail notification.
  • The user's subjective data includes (1) subjective data related to health (physical and mental condition), such as physical condition and mood, (2) subjective data related to presented content or presented services, and (3) subjective data for advertising and marketing purposes.
  • Subjective data on health, such as physical condition and mood, is acquired through choice tasks such as "Did you sleep well?", "How are you feeling now?", "Are you frustrated?", and "Are you anxious?". These health questions may be based, for example, on stress-check items under the Industrial Safety and Health Act or on various psychological indicators.
  • When biometric information (heartbeat, sweating, brain waves, and the like) is acquired, it is also possible to compare the biometric information with the subjective data acquired through this choice task.
  • Subjective data related to presented content or presented services includes the user's preference for and evaluation of the presented content, the evaluation of the quality of the presented service, and the like. These subjective data are acquired and used by the information presentation system 1 as feedback information.
  • When the service presented to the user is, for example, a music distribution service or a user behavior management service, the user's subjective data is acquired as feedback information at fixed intervals (for example, once a day or once a week).
  • As a result, the service becomes more personalized.
  • Subjective data for advertising and marketing purposes is acquired regardless of the content presented to the user.
  • In this case, the selection result can be obtained as a marketing survey result and sent to the corresponding server, or the presentation of the choices itself can function as an advertisement (for example, raising the user's awareness of a presented vehicle model).
  • Second, the content of the choice task is content that the user is interested in.
  • For example, the content is closely related to content that matches the user's taste or content that the user is viewing.
  • the content of the option task may be, for example, a player's popularity voting questionnaire.
  • the content of the option task may be, for example, an opinion poll.
  • Third, although the act of selecting is itself important, the selected result may even be a wrong answer. That is, content of low importance, such that the selected result can be corrected later, is suitable for the choice task.
  • The content of the choice task is, for example, "What time do you want to wake up tomorrow?".
  • The content itself, such as a weekday setting or a holiday setting, is not very important and may be modified later.
  • Choice tasks that satisfy these conditions are generated based on the user's attribute information, the content the user views, time information, and the like.
  • FIG. 5 is a diagram showing an example of presenting an option task.
  • FIG. 5 shows an example of presentation by voice.
  • The voice presentation device 42 presents to the user an option task asking "Which of XXX or AAA will you go to first tomorrow?". In this case, three options are assumed: "XXX", "AAA", and "Stop both".
  • the sensor unit 33 detects a voice indicating the user's response and outputs it to the information processing apparatus 12.
  • The database update unit 65 updates the user's to-do list in the database 64 based on the user's response, and moves the corresponding item (plan) to a state in which it is, for example, presented with priority or processed in the background.
  • The option task of FIG. 5 is for confirming the user's intention: it concerns something the user wants to perform in the user's to-do list, and it is also a response that the information presentation system 1 wants from the user. That is, this presentation example is the option task of the first content described above.
  • The TV, which is the visual presentation device 41, may also be made to present the above-mentioned option task.
  • FIG. 6 is a diagram showing an example of the presentation timing of the option task.
  • The user is watching, for example, content in which a dinosaur appears in a city on the TV, which is the visual presentation device 41.
  • The arrow shown to the right of the user indicates the degree of positiveness of the user as estimated by the information presentation system 1; the degree of positiveness increases from bottom to top.
  • The information presentation system 1 causes the option task to be presented in a corner of the screen even while the user is viewing the content.
  • In this case, the option task may be composed of, for example, a task with a large number of options, a task spanning a plurality of screens, or a task in which the options are divided across a plurality of screens with screen transitions.
  • Alternatively, the information presentation system 1 causes the option task to be presented at a well-separated timing, such as the timing at which a commercial break starts, the timing at which the content ends, or the timing at which the power is turned off.
  • In that case, the option task is composed of, for example, a task with two options (Yes / No) or a task with few options, such as a notification task or an OK / NG confirmation task.
  • When the degree of positiveness is low, that is, when the user is estimated to be in a negative state, the above-mentioned positive content is presented so as to guide the user toward a positive state.
  • The presentation time of the option task may be kept within the time during which the user's positive state continues. In practice, since the user makes the selection while in a positive state, the answer can typically be given within a few tens of seconds.
  • The option task may continue to be presented as long as the user's positive state does not drop sharply. For example, when there are a plurality of questions to ask the user, option tasks may be presented one after another at presentable timings as long as the user's positive state continues. The longest presentation time is the time during which the user's positive state continues.
  • In this way, the presentation of the option task to the user is controlled according to the degree of the user's positive state.
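The control described in the bullets above — richer, multi-screen tasks while the user's positivity is high, short Yes/No tasks only at natural breaks, and positive content when the user is negative — might be sketched as follows. This is only an illustrative reading of the text; the function name, thresholds, and format fields are hypothetical and not taken from the patent.

```python
from typing import Optional

def select_task_format(positivity: float, at_break: bool) -> Optional[dict]:
    """Choose an option-task format from the estimated degree of positiveness.

    positivity: estimated degree of the user's positive state, 0.0-1.0
    at_break:   True at well-separated timings (commercial break, content end,
                power off)
    """
    if positivity >= 0.7:
        # Strongly positive: a task with many options or multiple screens
        # may be presented in a corner of the screen even during viewing.
        return {"max_options": 5, "screens": "multiple", "timing": "during_viewing"}
    if positivity >= 0.4 and at_break:
        # Moderately positive: keep it short (Yes/No, OK/NG) and present
        # it only at a well-separated timing.
        return {"max_options": 2, "screens": "single", "timing": "at_break"}
    # Negative state: present positive content instead of an option task.
    return None
```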
<2. Expansion example 1 (questionnaire conducted while watching a game)>
  • The information presentation system 1 of FIG. 1 described above also functions as an application that, while the user is watching a game such as a sports match, presents an option task including a questionnaire related to the game when the situation of the game is calm and the user is estimated to be in a positive state.
  • Expansion example 1 will be described with reference to FIG. 7.
  • FIG. 7 is a diagram showing a presentation example of an option task in Expansion example 1.
  • FIG. 7 shows an example in which an option task including a questionnaire related to the match is presented when the situation of the match has settled down and the user is estimated to be in a positive state while watching the match.
  • The user is watching a soccer game, for example, at the game venue. If the user is estimated to be in a positive state at a timing when the development of the game has settled down (for example, half-time) after the team the user supports has scored and the game has become exciting, the task generation unit 62 generates an option task that includes a questionnaire related to the match.
  • The game venue is equipped with a visual presentation device 41 consisting of a large display.
  • The output control unit 63 causes the visual presentation device 41 at the match venue to present "Who is today's MVP? A: XXX, B: RRR — vote on your smartphone!" as an option task including a questionnaire related to the match.
  • The user who sees the option task selects A or B using the smartphone, which is the sensor unit 33 owned by the user.
  • the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing apparatus 12.
  • the database update unit 65 updates the user information of the database 64 based on the user's response, and the output control unit 63 transmits the questionnaire information to the corresponding server via the network.
  • When the option task is generated, the task generation unit 62 generates the option task so that questions are presented in order starting from the highest-priority question, that is, the question whose response rate most needs to be increased. As a result, even if some users leave partway through, the response rate for the high-priority questions can be increased.
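The priority ordering described above — ask the questions whose response rate matters most first, so that early leavers still answer them — amounts to a descending sort on priority. A minimal sketch (the function name, tuple layout, and sample questions are hypothetical):

```python
def order_questions(questions):
    """Order survey questions so the highest-priority ones are asked first.

    questions: list of (priority, text) tuples; a higher priority means the
    response rate for that question matters more, so it is asked before
    spectators start leaving.
    """
    return sorted(questions, key=lambda q: q[0], reverse=True)

survey = [
    (1, "How was the stadium food?"),
    (3, "Who is today's MVP?"),
    (2, "Will you attend the next match?"),
]
ordered = order_questions(survey)
```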
  • The visual presentation device 41 on which the task is presented may instead be a device owned by each user, such as a smartphone, a tablet terminal, a personal computer, a TV, or an AR (Augmented Reality) device.
  • An example of watching a game at a game venue is shown here, but the target is not limited to a sports game and may be an event or a live performance.
  • the content such as a match or an event may be an online video or video distribution content.
  • the option task is presented superimposed on the content.
<3. Expansion example 2 (presentation based on a to-do list)>
  • The information presentation system 1 of FIG. 1 described above also functions as an application that promotes the user's actions by generating an option task based on a to-do list or a schedule in which the planned actions of the user are registered in the database 64 or the like.
  • Expansion example 2 will be described with reference to FIGS. 8 and 9.
  • FIG. 8 is a diagram illustrating Expansion example 2.
  • FIG. 8 shows an example in which music is played when it is determined in Expansion example 2 that the user is not in a positive state.
  • When it is determined that the user is not in a positive state in daily life, as shown in FIG. 8, the user's favorite music can be played, for example, from the voice presentation device 42 as an example of the above-mentioned positive content.
  • the positive content is determined based on the user information registered in the database 64 and the like.
  • When it is determined that the user is in a positive state, the task generation unit 62 further determines whether or not the user is in a state in which intervention is acceptable (estimated to allow interruptions).
  • FIG. 9 is a diagram showing a presentation example of an option task in Expansion example 2.
  • When it is determined that the user is in a positive state, the task generation unit 62 first refers to the user's behavioral status and the like and determines whether or not the user is in a state in which intervention is acceptable.
  • When it is determined that the user is in a state in which intervention is acceptable, such as when the user is not busy and has room to act, the task generation unit 62 generates, in order to have the user confirm the to-do list registered in the database 64, an option task with two options: "By the way, we had to decide between A and B. Which would you like?"
  • The output control unit 63 performs control so that the generated option task is output from the smart speaker, which is the voice presentation device 42.
  • The voice presentation device 42 presents the option task with the voice "By the way, we had to decide between A and B. Which would you like?"
  • the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing apparatus 12.
  • The database update unit 65 updates the user's to-do list in the database 64 based on the user's response and, as described above, moves the item to a state in which it is presented with priority or processed in the background.
  • Here, the task is an option task for confirming the to-do list, but it may instead be an option task consisting of proposals for the user's next action, generated based on user information and the like.
  • The option task may be presented even before the original presentation timing (the determined time) in the to-do list. That is, since the option task is presented at a timing when the user is in a positive state and intervention is acceptable, the user can respond more quickly and move on to the next action.
  • Conversely, if the user is not in a positive state or is busy at the original presentation timing (the determined time) in the to-do list, the option task may be presented later than that timing. In this case as well, since the option task is presented at a timing when the user is in a positive state and intervention is acceptable, the user can respond more quickly and move on to the next action.
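The flexible timing described in these bullets — the confirmation task may move earlier or later than its scheduled time, and is shown whenever the user is positive and interruptible — might be sketched like this (the function name and return labels are illustrative assumptions):

```python
def decide_presentation(now: float, scheduled: float,
                        positive: bool, interruptible: bool) -> str:
    """Decide whether to present a to-do confirmation task at time `now`.

    The task is presented whenever the user is positive and interruptible,
    which may be before or after the originally scheduled time; otherwise
    it waits (before the schedule) or is deferred (after the schedule).
    """
    if positive and interruptible:
        return "present"
    return "wait" if now < scheduled else "defer"
```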
  • the option task is presented based on the user information (user's to-do list).
  • In this way, the option task can be presented naturally without burdening the user.
  • Note that a menu display or the like is presented in response to the user's operation when the user wants to select or switch something; this differs from the option task of the present technology, which aims to encourage the user to make a selection.
  • FIG. 10 is a block diagram showing a configuration example of computer hardware that executes the above-mentioned series of processes by a program.
  • the CPU 301, ROM (Read Only Memory) 302, and RAM 303 are connected to each other by the bus 304.
  • the input / output interface 305 is further connected to the bus 304.
  • An input unit 306 including a keyboard, a mouse, and the like, and an output unit 307 including a display, a speaker, and the like are connected to the input / output interface 305.
  • The input / output interface 305 is also connected to a storage unit 308 consisting of a hard disk, non-volatile memory, or the like, a communication unit 309 consisting of a network interface or the like, and a drive 310 that drives removable media 311.
  • In the computer configured as described above, the CPU 301 loads the program stored in the storage unit 308 into the RAM 303 via the input / output interface 305 and the bus 304 and executes it, whereby the above-mentioned series of processes is performed.
  • The program executed by the CPU 301 can be provided recorded on the removable media 311, or provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and is installed in the storage unit 308.
  • The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in the present specification, or a program in which processing is performed in parallel or at a necessary timing, such as when a call is made.
  • In the present specification, a system means a set of a plurality of components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • this technology can take a cloud computing configuration in which one function is shared by multiple devices via a network and processed jointly.
  • each step described in the above flowchart can be executed by one device or shared by a plurality of devices.
  • the plurality of processes included in the one step can be executed by one device or shared by a plurality of devices.
  • the present technology can also have the following configurations.
  • (1) An information processing device including a presentation control unit that controls the presentation of an option task to the user in response to the estimation of the user's positive state by an estimation unit that estimates the user's positive state.
  • (2) The option task is a task for having the user answer a question by selecting an option by the user's own will.
  • (3) The presentation control unit controls the presentation of the option task in order to acquire subjective data of the user.
  • (4) The information processing device according to (3), wherein the selection result of the option is used as information about the user.
  • (5) The information processing device, wherein the presentation control unit controls the presentation of the option task including a question for which the selection result is of low importance.
  • (10) The information processing device according to any one of (2) to (9), wherein the presentation control unit changes at least one of the timing of presenting the option task, the number of options, and the number of screens constituting the option task, according to the degree of the positive state of the user.
  • (11) The information processing device according to (2), wherein the presentation control unit controls the presentation of the option task including a questionnaire regarding content, based on the status of the content being viewed by the user.
  • (12) The information processing device according to (11), wherein the presentation control unit controls the presentation order of the option task based on a priority based on the response rate expected for the option task.
  • (13) The information processing device according to any one of (2) to (12), wherein the presentation control unit controls the presentation of the option task when the state of the user is presumed to allow an interruption.
  • (14) The information processing device according to (2) or (13), wherein the option task includes an option for confirming the user's schedule.
  • (15) The information processing device according to (14), wherein the option task includes, in addition to the user's schedule, an option for a next action that can be recommended based on the user's schedule.
  • (16) The information processing device according to (14), wherein the presentation control unit controls the presentation of the option task so that it is presented before or after the scheduled time for presenting the user's schedule.

Abstract

The present technology relates to an information processing device and method, and a program, which make it possible to encourage a user to make a rapid choice. This information processing device controls the presentation of a choice task to the user, in response to estimation of a positive state of the user by an estimating unit which estimates the user's positive state. The present technology is applicable to information presenting systems for presenting information to a user.

Description

Information processing device and method, and program
 The present technology relates to an information processing device and method, and a program, and in particular, to an information processing device and method, and a program, that make it possible to more reliably obtain a user's answers to questions including a plurality of options.
 Human emotions change in every aspect of daily life. It has often been proposed to perform some kind of intervention in response to such changing emotions.
 For example, Patent Document 1 proposes focusing on the ego state of a speaker and changing the response of a dialogue scenario (agent) obtained by collating text against a scenario database, so as to carry out a natural dialogue that is easy for the speaker to accept and free of discomfort.
 Further, Patent Document 2 proposes analyzing and identifying the operator's mood and an immediate mood, which is a temporary mood, and controlling an operating device based on the identified immediate mood.
Japanese Unexamined Patent Publication No. 2004-310034; Japanese Unexamined Patent Publication No. 2019-028732
 Human emotions can be broadly divided into positive emotions and negative emotions. When a user is in a negative psychological state (hereinafter referred to as a negative state), even if the system poses a question or questionnaire, the user rarely answers it willingly.
 The present technology has been made in view of such a situation, and makes it possible to more reliably obtain a user's answers to questions including a plurality of options.
 An information processing device according to one aspect of the present technology includes a presentation control unit that controls the presentation of an option task to the user in response to the estimation of the user's positive state by an estimation unit that estimates the user's positive state.
 In one aspect of the present technology, the presentation of an option task to the user is controlled in response to the estimation of the user's positive state by an estimation unit that estimates the user's positive state.
FIG. 1 is a block diagram showing the configuration of an embodiment of an information presentation system to which the present technology is applied.
FIG. 2 is a block diagram showing a functional configuration example of the information processing device.
FIG. 3 is a diagram showing an example of a method of estimating a positive state.
FIG. 4 is a flowchart explaining the information presentation process of the information presentation system.
FIG. 5 is a diagram showing a presentation example of an option task.
FIG. 6 is a diagram showing an example of the presentation timing of an option task.
FIG. 7 is a diagram showing a presentation example of an option task in Expansion example 1.
FIG. 8 is a diagram showing an example in which it is determined in Expansion example 2 that the user is not in a positive state.
FIG. 9 is a diagram showing a presentation example of an option task in Expansion example 2.
FIG. 10 is a block diagram showing a configuration example of a computer.
 Hereinafter, modes for implementing the present technology will be described. The description will be given in the following order.
1. Basic configuration
2. Expansion example 1 (questionnaire conducted while watching a game)
3. Expansion example 2 (presentation based on a to-do list)
4. Others
<1. Basic configuration>
(Configuration example of information presentation system)
 FIG. 1 is a block diagram showing the configuration of an embodiment of an information presentation system to which the present technology is applied.
 The information presentation system 1 of FIG. 1 acquires data indicating the user's behavioral status, estimates whether the user is in a positive state, and, when it estimates that the user is in a positive state, controls the presentation of an option task so as to prompt a selection and have the user make a decision.
 An option task is a task that poses a question including a plurality of options and has the user select one of the options by his or her own decision. In an option task, a finite number of options (for example, about two to five) are presented. Hereinafter, the user selecting an option in response to the presentation of an option task is referred to as performing the option task.
 Further, when the purpose of presenting the option task is to acquire the user's subjective data, for example through a questionnaire, the result of the user's selection is used as user information about the user, such as the user's attribute information and preference information. User information will be described in more detail later.
 On the other hand, when the purpose of presenting the option task is part of a service for the user in the information presentation system 1, the result of the user's selection is used, for example, as a trigger for transitioning the state of the service in the information presentation system 1 to the next state.
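As a rough illustration of the data involved, an option task as defined above — a question with a finite set of about two to five options, used either to collect subjective data or to drive a service state transition — could be modeled as follows. The class and field names are hypothetical, not from the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChoiceTask:
    """A question presented with a finite set of options (about 2 to 5).

    purpose is either "subjective_data" (the selection is stored as user
    information) or "service" (the selection triggers a service state
    transition).
    """
    question: str
    options: List[str]
    purpose: str = "subjective_data"

    def __post_init__(self):
        if not 2 <= len(self.options) <= 5:
            raise ValueError("an option task presents about 2 to 5 options")

task = ChoiceTask("Did you wake up feeling rested?", ["Yes", "No"])
```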
 Specifically, when the service (application) for the user is, for example, a schedule or a to-do list, an option task including options related to the items (plans, events, etc.) contained in the schedule or to-do list is presented, and the user is asked to confirm each item. Depending on the selection result, the information presentation system 1 can move the state of each item to a state such as being presented with priority or being processed in the background. Specifically, when plan A, which is one of the items, is changed to "attend", plan A is presented with priority; when plan B, which is one of the items, is changed to "cancel", the cancellation of plan B is processed in the background.
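The item state transitions just described — "attend" moves a plan to prioritized presentation, "cancel" moves its processing to the background — can be sketched as a small update function (all names are illustrative):

```python
def apply_selection(todo, item_id, choice):
    """Update a to-do item's state from the user's selection.

    "attend" moves the item to a state in which it is presented with
    priority; "cancel" moves its processing to the background.
    """
    state = {"attend": "prioritized", "cancel": "background"}[choice]
    todo[item_id]["state"] = state
    return todo

todo = {"plan_a": {"state": "pending"}, "plan_b": {"state": "pending"}}
apply_selection(todo, "plan_a", "attend")
apply_selection(todo, "plan_b", "cancel")
```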
 When the service for the user is, for example, the playback of content such as songs or programs, an option task including a plurality of pieces of content as options is presented, and the user is asked to select the content. Based on the selection result, the information presentation system 1 can, for example, determine the content to be played next and move to the playback state.
 By presenting the option task configured as described above in light of the following conditions, the option task is performed quickly by a user in a positive state. That is, according to the present technology, the user can be made to answer a question including options more reliably. In other words, according to the present technology, the user's answer to a question including options can be obtained more reliably. The conditions for the option task will be described below.
・Use questions whose answers are not of high importance.
  The reason is that, for example, when a person is in a positive state, he or she tends not to deliberate. Therefore, questions whose answers are of high importance, such as those whose selection is irreversible or one-time-only, are not suitable for the option task. On the other hand, a question that the user wants to answer but keeps putting off is, for example, suitable for the option task.
・Ensure that the presentation or the content of the option task does not make the user uncomfortable.
  The reason is that presenting an option task including an offensive question, or presenting options at an offensive timing, would reduce the user's positive state.
 In FIG. 1, as an example of an option task, when the user is estimated to be in a positive state, an option task including the two options "Did you wake up feeling rested? Yes / No" is presented by voice.
 The option task of FIG. 1 is an example of an option task whose presentation purpose is the acquisition of the user's subjective data and whose selection result is used as user information.
 By presenting such an option task when the user is in a positive state, the user's answer to a question including options can be obtained more reliably.
 Here, the positive state refers to, for example, a state in which the sympathetic nervous system is suppressed and the activity of the parasympathetic nervous system is heightened. In this case, the positive state may be estimated using, as indices, for example, fluctuations in heart rate, the sympathetic activity expressed as LF (Low Frequency) / HF (High Frequency) in heart rate variability, or mental sweating.
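As a hedged sketch of one such index, the LF/HF ratio can be computed from an evenly resampled RR-interval (heartbeat interval) series with a naive discrete Fourier transform; the band edges (LF 0.04-0.15 Hz, HF 0.15-0.4 Hz) follow common HRV practice, and all function names are hypothetical:

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x (sampled at fs Hz) in [f_lo, f_hi), via a naive DFT."""
    n = len(x)
    mean = sum(x) / n
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum((x[i] - mean) * math.cos(2 * math.pi * k * i / n)
                     for i in range(n))
            im = sum((x[i] - mean) * math.sin(2 * math.pi * k * i / n)
                     for i in range(n))
            power += (re * re + im * im) / n
    return power

def lf_hf_ratio(rr_series, fs=4.0):
    """LF (0.04-0.15 Hz) over HF (0.15-0.4 Hz) power of a resampled RR series.

    A lower ratio suggests parasympathetic dominance, used in the text as
    one index of a positive state.
    """
    lf = band_power(rr_series, fs, 0.04, 0.15)
    hf = band_power(rr_series, fs, 0.15, 0.40)
    return lf / hf if hf > 0 else float("inf")
```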
 The positive state also refers to, for example, a state in which the right frontal lobe is suppressed relative to the left frontal lobe. In this case, the positive state is estimated by measuring the α-band power of the electroencephalogram.
 The positive state may also be estimated based on, for example, prosodic features in the user's utterances. In this case, when a predetermined frequency band in the prosody exceeds a reference value, the user can be estimated to be in a positive state.
 The positive state may also be estimated based on recognition of the user's facial expression. When the feature quantities of the captured facial expression of the user are classified into an expression regarded as positive, the user can be estimated to be in a positive state.
 The positive state may be estimated based on various models that define emotional states. The positive state may be defined as, for example, the pleasant state in the so-called Russell circumplex model of affect, which defines human emotional states based on the two axes of arousal-inactivity and pleasure-displeasure.
 The positive state may also be defined based on results that the information presentation system 1 statistically estimates according to the user's attributes and behavioral status.
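For illustration, a valence-arousal reading in the spirit of Russell's circumplex model might be mapped to quadrants as below, with the pleasant half treated as the positive state; the quadrant labels and thresholds are illustrative assumptions, not from the patent:

```python
def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map (valence, arousal), each in [-1, 1], to a quadrant of Russell's
    circumplex model of affect."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "relaxed"
    return "distressed" if arousal >= 0 else "depressed"

def is_positive(valence: float) -> bool:
    """The pleasant half of the model is treated here as the positive state."""
    return valence > 0
```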
 For example, when laughter is heard from the user, or when the user's facial expression becomes a smile while the user is spending time in the living space, the user can be estimated to be in a positive state. Laughter and smiles are assumed to arise, for example, from exchanges in conversation or from viewing television or Web content.
 For example, when the user is browsing Web content or writing text, the user can be estimated to be in a positive state when the content is generally correlated with positive information.
 Specifically, if the content is video, a video showing a celebrity the user likes, or a video showing people or animals smiling or acting delighted, is one of the factors that put the user in a positive state. In addition, when text created by the user contains words such as "love", "happy", "delicious", or "fun", such text is one of the factors from which the user can be estimated to be in a positive state. Note that the text is not limited to input information and may be text obtained by voice recognition of the user's utterances.
 In addition, when the user is watching a sports match on TV, the situation in which the team the user supports is winning is one of the factors for estimating that the user is in a positive state. For example, in the course of a match, a situation in which the supported team comes from behind or a favorite player plays an active part is one of the factors for estimating that the user is in a positive state.
 The information presentation system 1 can estimate the user's state based on one or more of the factors estimated to put the user in a positive state.
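The multi-cue estimation described above — laughter, a smiling expression, positive words in text, the supported team leading — could be combined into a single estimate as in this sketch; the cue names, weights, and threshold are illustrative assumptions, not values from the patent:

```python
def estimate_positive(cues):
    """Combine multiple observed cues into a single positivity estimate.

    cues: dict mapping cue name -> bool, e.g. {"laughter": True}.
    Returns (is_positive, score). Weights and threshold are illustrative.
    """
    weights = {
        "laughter": 0.3,       # laughter heard in the living space
        "smile": 0.3,          # facial expression classified as a smile
        "positive_text": 0.2,  # words like "love", "happy", "fun" in text
        "team_leading": 0.2,   # the team the user supports is winning
    }
    score = sum(w for cue, w in weights.items() if cues.get(cue))
    return score >= 0.3, score

positive, score = estimate_positive({"smile": True, "positive_text": True})
```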
 In FIG. 1, the information presentation system 1 is composed of an input device 11, an information processing device 12, and an output device 13.
 In the information presentation system 1, the input device 11, the information processing device 12, and the output device 13 are connected to each other via a network 21 such as a wireless LAN (Local Area Network). Note that the input device 11 and the output device 13 may be an input unit and an output unit of the information processing device 12, respectively.
 The input device 11 includes a sensor unit 31, a sensor unit 32, and a sensor unit 33.
 The sensor unit 31 recognizes the environment outside the user, including the room, and outputs external environment information indicating the recognized external environment to the information processing device 12.
 Specifically, the sensor unit 31 recognizes the shape of the living space and the home appliances and furniture present in it, and grasps their arrangement. The sensor unit 31 also acquires information from outside the living space, such as the weather and the temperature. For this purpose, the sensor unit 31 includes, for example, an image sensor, a LiDAR, a temperature sensor, an illuminance sensor, a Web camera connected via the Internet, and an input unit that acquires information from Web sites and the like.
 The sensor unit 32 recognizes various kinds of information about the user and the user's actions, and outputs the user's biometric information and action status information indicating the user's action status, obtained as a result of the recognition, to the information processing device 12.
 Specifically, the sensor unit 32 recognizes information such as the presence or absence of users in the living space, the number of users, their posture, and the direction of their faces. The sensor unit 32 also recognizes what the user is doing at the moment, that is, the user's action status. For this purpose, the sensor unit 32 includes, for example, a motion capture system such as OptiTrack (trademark), a range image sensing system that measures the distance to an object with an image sensor, an infrared camera, and a high-resolution depth sensor.
 The sensor unit 32 further acquires the user's biometric information (heart rate and the like). In this case, the sensor unit 32 may be a heart rate sensor, a perspiration sensor, an electroencephalography sensor, a temperature sensor, or the like, and may take the form of a wristband, an HMD (Head Mount Display), a glasses-type display, or the like incorporating such sensors. Alternatively, the sensor unit 32 may be a camera capable of measuring heart rate, a microphone capable of capturing the prosody of speech, or the like.
 The sensor unit 33 acquires operation information based on the user's operation input, voice input, and the like, and outputs the acquired operation information to the information processing device 12.
 Specifically, the sensor unit 33 acquires the voice or gestures with which the user performs a presented option task, or the operation information entered when the user inputs information such as a To Do list. For this purpose, the sensor unit 33 includes a keyboard, a touch panel, operation buttons, a controller, a smartphone, a tablet terminal, a microphone, or the like.
 The information processing device 12 is, for example, a personal computer. On the basis of the information supplied from the input device 11, the information processing device 12 estimates the user's emotion, for example, whether or not the user is in a positive state. When it estimates that the user is in a positive state, the information processing device 12 generates an option task on the basis of the information supplied from the input device 11 and the like, and causes the output device 13 to present the generated option task.
 The output device 13 includes a visual presentation device 41, a voice presentation device 42, and the like. The output device 13 presents the option task supplied from the information processing device 12.
 The visual presentation device 41 is, for example, a TV, a projector, or the display of a smartphone or tablet terminal. The visual presentation device 41 presents the option task to the user visually.
 The voice presentation device 42 is, for example, a speaker or a smart speaker. The voice presentation device 42 presents the option task to the user by voice.
(Configuration example of the information processing device)
 FIG. 2 is a block diagram showing an example of the functional configuration of the information processing device.
 In FIG. 2, the information processing device 12 includes an emotion estimation unit 61, a task generation unit 62, an output control unit 63, a database 64, a database update unit 65, and a content determination unit 66. These functions are realized by the CPU (Central Processing Unit) of the information processing device 12 loading a predetermined program into a RAM (Random Access Memory) or the like.
 The emotion estimation unit 61 estimates the user's emotion, for example, whether or not the user is in a positive state, on the basis of at least one of the external environment information, the user's biometric information, and the user's action status information supplied from the input device 11. When it estimates that the user is in a positive state, the emotion estimation unit 61 causes the task generation unit 62 to generate an option task.
 When the emotion estimation unit 61 estimates that the user is not in a positive state, it causes the content determination unit 66 to determine positive content to be presented to the user.
 Positive content is content intended to move the user's emotion toward a positive state, and is not particularly limited. For example, the positive content may be video featuring a celebrity the user likes, or a song the user likes. The positive content may also be an instruction for an action that moves the user's emotion toward a positive state.
 The task generation unit 62 determines the content of an option task and the presentation device on the basis of the external environment information, the user's biometric information, and the user's action status information supplied from the input device 11, information registered in the database 64, and the like, and generates the option task. The task generation unit 62 outputs the generated option task to the output control unit 63.
 The output control unit 63 causes the output device 13 to present the option task supplied from the task generation unit 62. The output control unit 63 also causes the output device 13 to output the positive content supplied from the content determination unit 66, and controls the powering on and off of the output device 13. Further, the output control unit 63 transmits the user's response, for example, as questionnaire information, to a corresponding server (not shown) via a network (not shown).
 In the database 64, personal information including the user's attribute information, preference information, and personality characteristics is registered as user information about the user. The database 64 also stores, as user information, the user's To Do list and schedule, the user's habit information, and the user's behavior tendency information, which consists of the user's tendencies in responding to option tasks and in behavior.
 The database update unit 65 updates the information registered in the database 64 on the basis of the external environment information, the user's biometric information, the user's action status information, the user's operation information, and the like supplied from the input device 11.
 The content determination unit 66 determines the positive content to be presented to the user on the basis of the user information registered in the database 64 and the like. The content determination unit 66 reproduces or generates the determined positive content and outputs it to the output control unit 63.
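As a minimal sketch of what the content determination unit 66 might do, the following scores each catalogue entry by how many of its tags appear in the user's preference set and returns the best match. The catalogue, tag names, and scoring rule are illustrative assumptions, not part of the disclosure.

```python
# Sketch of content determination: score catalogue entries against the
# user's preference tags and return the highest-scoring item.
def pick_positive_content(catalogue, preferences):
    def score(item):
        # Count how many of the item's tags the user prefers.
        return sum(1 for tag in item["tags"] if tag in preferences)
    return max(catalogue, key=score)

# Illustrative catalogue; titles and tags are invented.
catalogue = [
    {"title": "evening news digest", "tags": ["news"]},
    {"title": "smiling-animals clip", "tags": ["animals", "comedy"]},
    {"title": "favorite-artist live", "tags": ["music", "celebrity"]},
]
best = pick_positive_content(catalogue, {"animals", "comedy"})
```

In practice the preferences would come from the database 64, and the chosen item would be handed to the output control unit 63.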
(Method of estimating a positive state)
 FIG. 3 is a diagram showing an example of the method by which the emotion estimation unit 61 estimates a positive state.
 The emotion estimation unit 61 estimates whether or not the user is in a positive state on the basis of at least one of the user's biometric information, the user's action status, and the external environment information.
 The emotion estimation unit 61 estimates whether or not the user is in a positive state on the basis of the user's biometric information, for example, an increase in heart rate, a degree of smiling, or the state of the user's gaze.
 When using an increase in heart rate, the emotion estimation unit 61 estimates that the user is in a positive state, for example, when the difference from a predetermined reference value is at least a fixed value, or when the heart rate has exceeded the predetermined reference value for a fixed period of time. The reference value may be defined on the basis of the user's own average, or a value measured at a specific time such as waking up, or may be an average over a plurality of users or a value defined by the information presentation system 1.
 When using the degree of smiling, laughter, or voice tone, the emotion estimation unit 61 estimates that the user is in a positive state, for example, when the user's facial expression is a smile, or when the number of laughs within a fixed period is at least a reference value. The emotion estimation unit 61 also estimates that the user is in a positive state when the difference between the user's voice tone and a tone reference value is at least a reference number of Hz.
 When using the state of the gaze, the emotion estimation unit 61 estimates that the user is in a positive state, for example, when the gaze has not pointed downward for at least a fixed number of seconds.
 The emotion estimation unit 61 also estimates whether or not the user is in a positive state on the basis of various aspects of the user's action status, for example, content browsing, text composition, speech, communication, exercise, and body movement.
 Specifically, when the user is browsing Web content and the emotion estimation unit 61 detects positive terms or images in the content, it estimates that the user is in a positive state. The emotion estimation unit 61 may also estimate whether or not the user is in a positive state on the basis of the user's situation in a game being played (winning or losing, and so on), items acquired, the situation of the team the user supports in a sports match being watched (winning or losing, and so on), the user's behavior at an entertainment venue, or the user's gait at the time.
 For example, the emotion estimation unit 61 estimates that the user is in a positive state when the team the user supports wins by coming from behind, or when the user meets a favorite character at a theme park and walks with a spring in their step.
 Further, the emotion estimation unit 61 estimates whether or not the user is in a positive state on the basis of external environment information, for example, the weather, the temperature, the humidity, traffic conditions, or environmental sounds. For example, the emotion estimation unit 61 performs the estimation when the train is not crowded, or when the user's favorite music is playing.
 Specifically, the emotion estimation unit 61 estimates that the user is in a positive state when it is sunny, when the humidity is low, or when the temperature is comfortable.
 Note that a plurality of these estimation methods may be performed in parallel or in combination. Whether or not a given combination of states corresponds to a positive state for the user may also be estimated using learning results based on data accumulated in the past.
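The per-factor thresholds described above can be sketched as follows. The concrete values (heart-rate delta, laugh count, gaze limit) are placeholders, since the text deliberately leaves them open, and the simple any-factor rule stands in for the parallel or combined estimation.

```python
from dataclasses import dataclass

@dataclass
class Readings:
    heart_rate: float          # beats per minute
    baseline_heart_rate: float # user-specific reference value
    laughs_per_interval: int   # laughs counted in a fixed period
    gaze_down_seconds: float   # how long the gaze has pointed downward

# Placeholder thresholds; the text leaves the concrete values open.
HEART_RATE_DELTA = 10.0  # required rise above the reference value
LAUGH_REFERENCE = 2      # laughs per interval counted as "positive"
GAZE_DOWN_LIMIT = 0.0    # gaze not pointing downward at all

def estimate_positive(r: Readings) -> bool:
    """Any single factor exceeding its threshold marks the user positive;
    the factors may also be combined or fed to a learned model."""
    factors = [
        r.heart_rate - r.baseline_heart_rate >= HEART_RATE_DELTA,
        r.laughs_per_interval >= LAUGH_REFERENCE,
        r.gaze_down_seconds <= GAZE_DOWN_LIMIT,
    ]
    return any(factors)
```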
(Operation of the information presentation system)
 FIG. 4 is a flowchart illustrating the information presentation process of the information presentation system 1.
 Note that, in FIG. 4, the processing of the input device 11 is shown in combination with the processing of the information processing device 12.
 The following describes, as an example of the user's activity, a case where the user is watching content being broadcast on TV.
 In step S11, the sensor unit 31 of the input device 11 recognizes the room and the external environment, and outputs external environment information indicating the recognized external environment to the information processing device 12.
 In step S12, the sensor unit 32 of the input device 11 recognizes various kinds of information about the user and grasps the user's actions. The sensor unit 32 outputs the user's biometric information and action status information obtained as a result to the information processing device 12.
 In step S13, the emotion estimation unit 61 of the information processing device 12 estimates the user's emotion (positive state) on the basis of at least one of the external environment information, the user's biometric information, and the user's action status information supplied from the input device 11.
 In step S14, the emotion estimation unit 61 determines whether or not the user is in a positive state. If it is determined in step S14 that the user is not in a positive state, the process proceeds to step S15.
 In step S15, the content determination unit 66 determines positive content on the basis of the user information registered in the database 64, reproduces or generates the determined positive content, and outputs it to the output control unit 63. The output control unit 63 controls the presentation of the positive content by outputting it to a predetermined presentation device of the output device 13.
 In step S31, the presentation device of the output device 13 presents the positive content supplied from the output control unit 63. The positive content may be presented for a predetermined time set in advance, or until it is determined in the next step S14 that the user is in a positive state.
 After the processing of step S15, the process returns to step S11, and the subsequent processing is repeated.
 If it is determined in step S14 that the user is in a positive state, the process proceeds to step S16.
 In step S16, the task generation unit 62 determines the content of the option task. For example, the task generation unit 62 determines the content of the question (including its options) in the option task.
 In step S17, the task generation unit 62 determines the presentation device.
 In step S18, the task generation unit 62 generates the option task on the basis of the determined task content and presentation device, and outputs the generated option task to the output control unit 63.
 Note that the processing of steps S16 to S18 is performed on the basis of the external environment information, the user's biometric information, and the user's action status information supplied from the input device 11, the user information registered in the database 64, and the like.
 In step S19, the output control unit 63 waits until it determines that it is time to present the option task. If it is determined in step S19 that it is time to present the option task, the process proceeds to step S20.
 In step S20, the output control unit 63 controls the presentation of the option task by outputting it to the presentation device of the output device 13 determined in step S17.
 In step S32, that presentation device of the output device 13 presents the option task.
 In response to the presentation of the option task by the output device 13, the user starts the option task.
 In step S21, the sensor unit 33 of the input device 11 acquires operation information from the user, and outputs the acquired operation information to the information processing device 12.
 In step S22, the database update unit 65 of the information processing device 12 registers the response corresponding to the user's operation information in the database 64 as user information such as user attribute information or user preference information. This response is, for example, transmitted by the output control unit 63 via the network to the server of the corresponding company as the result of a questionnaire from that company or the like.
 As described above, when it is estimated that a user watching TV is in a positive state, an option task is presented to the user. Because the user is in a positive state, the probability that the user will respond to the presented option task is estimated to be high. That is, by exploiting the user's positive state, the information presentation system 1 can be expected to obtain the user's answer to a question including options promptly and reliably.
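The flow in FIG. 4 (steps S11 through S32) can be restated as one pass of plain control logic. The callables stand in for the sensor units and functional blocks; their names and signatures are assumptions for illustration, not the actual implementation.

```python
def presentation_pass(sense, is_positive, make_task, present, positive_content):
    env, bio, activity = sense()             # S11, S12: sensing
    if not is_positive(env, bio, activity):  # S13, S14: emotion estimation
        present(positive_content())          # S15, S31: show positive content,
        return None                          # then loop back to S11
    task = make_task(env, bio, activity)     # S16-S18: generate option task
    present(task)                            # S20, S32: present it
    return task                              # S21, S22 handle the response

# Toy usage with stand-in callables.
shown = []
result = presentation_pass(
    sense=lambda: ({"weather": "sunny"}, {"heart_rate": 90}, "watching_tv"),
    is_positive=lambda env, bio, act: bio["heart_rate"] > 80,
    make_task=lambda env, bio, act: "two-choice question",
    present=shown.append,
    positive_content=lambda: "favorite song",
)
```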
(Content of option tasks)
 The content of the option tasks presented to the user includes, for example, the following three kinds.
 1. Content that, although unconstrained in presentation timing, is something the information presentation system 1 wants the user to answer, or something the user wants to do.
 For example, the option task may serve to acquire the user's subjective data, or to let the system confirm the user's To Do tasks or intentions. Confirmation of the user's intention by the information presentation system 1 includes, for example, confirming, in response to an incoming call or e-mail notification, to whom the information presentation system 1 should reply in the background.
 The user's subjective data includes (1) subjective data on health (physical and mental state), such as physical condition and mood, (2) subjective data on presented content and services, and (3) subjective data for advertising and marketing purposes.
 (1) Subjective data on health, such as physical condition and mood, is acquired through option tasks that ask about states such as "Did you sleep well?", "How do you feel now?", "Are you irritated?", and "Are you anxious?". These health questions may, for example, be based on stress check items under the Industrial Safety and Health Act, or on various indices used in psychology.
 In this case, when biometric information (heart rate, perspiration, brain waves, and so on) is being acquired by the contact sensors, cameras, microphones, and the like constituting the sensor unit 32, that biometric information can also be cross-checked against the subjective data acquired through the option task.
 (2) Subjective data on presented content and services includes the user's preferences and evaluations regarding presented content, and evaluations of the quality of presented services. This subjective data is acquired and used by the information presentation system 1 as feedback information.
 For example, when the service presented to the user is a music distribution service or a behavior management service, acquiring and using the user's subjective data as feedback information at fixed intervals (for example, once a day or once a week) makes the service more personalized.
 (3) Subjective data for advertising and marketing purposes is acquired regardless of the content presented to the user. For example, in automobile marketing, presenting an option task that includes a plurality of vehicle models as options makes it possible to acquire information on the user's preferred model. In this case, the selection result can be obtained as a marketing survey result and transmitted to the corresponding server, and the presentation of the options itself can also function as an advertisement (for example, by raising the user's awareness of the presented models).
 2. Content that the user is interested in, for example, content that matches the user's preferences, or content closely related to what the user is viewing. Specifically, when the user is watching a sports match, the option task may be, for example, a popularity poll on the players. When the user is watching a news program, the option task may be, for example, an opinion poll.
 3. Content for which the act of selecting is itself what matters, while the selected result may even be "wrong". That is, option tasks are suited to low-stakes content whose selected result can be corrected later. The option task may be, for example, "What time will you wake up tomorrow?", or a weekday or holiday setting: content that is not very important in itself and can be corrected later without trouble.
 Option tasks satisfying these conditions are generated on the basis of the user's attribute information, the content the user is viewing, its time information, and the like.
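One possible shape for such a generator, keyed only to the genre of the content being viewed, is sketched below. The genre keys, questions, and options are invented examples of the three kinds listed above, not part of the disclosure.

```python
# Sketch: derive an option task from the genre of the content being viewed.
def generate_option_task(viewing_genre: str) -> dict:
    if viewing_genre == "sports":
        # Kind 2: a player popularity poll during a sports match.
        question, options = "Who is your favorite player?", ["Player A", "Player B", "Player C"]
    elif viewing_genre == "news":
        # Kind 2: an opinion poll during a news program.
        question, options = "Do you support the proposal?", ["Yes", "No"]
    else:
        # Kind 3: a low-stakes question whose answer can be corrected later.
        question, options = "What time will you wake up tomorrow?", ["6:00", "7:00", "8:00"]
    return {"question": question, "options": options, "revisable": True}
```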
 FIG. 5 is a diagram showing an example of the presentation of an option task.
 FIG. 5 shows an example of presentation by voice.
 In FIG. 5, the voice presentation device 42 presents to the user an option task asking "Which will you go to first tomorrow, XXX or AAA?". In this case, three options are assumed: "XXX", "AAA", and "neither".
 For example, when the user responds to the presented option task with "XXX, I guess", the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing device 12. The database update unit 65 updates the user's To Do list in the database 64 on the basis of the user's response, and moves the corresponding item (plan) to a state in which, for example, it is presented preferentially or processed in the background.
 The option task of FIG. 5 serves to confirm the user's intention; it concerns something the user wants to do on the user's To Do list, and at the same time something the information presentation system 1 wants the user to answer. That is, this example is an option task of the first kind described above.
 Note that, although in the example of FIG. 5 the TV serving as the visual presentation device 41 is shown near the user, the option task may instead be presented on the visual presentation device 41 when the user is watching content on the TV.
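The To Do update in FIG. 5 can be sketched as matching the spoken reply against the presented options and promoting the chosen item. The field names and item states are assumptions for illustration.

```python
# Sketch: find which presented option the reply mentions and promote the
# matching To Do item (e.g. so it is presented preferentially).
def apply_response(todo_list, options, reply_text):
    for option in options:
        if option in reply_text:
            for item in todo_list:
                if item["name"] == option:
                    item["state"] = "prioritized"
            return option
    return None  # no option recognized in the reply

todos = [{"name": "XXX", "state": "pending"},
         {"name": "AAA", "state": "pending"}]
chosen = apply_response(todos, ["XXX", "AAA"], "XXX, I guess")
```

In the system described here, the reply text would come from speech recognition on the sensor unit 33's input, and the state change would be written back by the database update unit 65.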
(Presentation timing of option tasks)
 FIG. 6 is a diagram showing an example of the presentation timing of option tasks.
 図6の例においては、視覚提示デバイス41であるTVにおけるコンテンツの視聴中に、ユーザがポジティブ状態であると推定された場合、ポジティブ状態の度合(以下、ポジティブ度合とも称する)に応じて、選択肢タスクの提示タイミングを変化させる例が示されている。 In the example of FIG. 6, when the user is estimated to be in a positive state while viewing the content on the TV which is the visual presentation device 41, the options are selected according to the degree of the positive state (hereinafter, also referred to as the positive degree). An example of changing the presentation timing of a task is shown.
 ユーザは、例えば、視覚提示デバイス41であるTVで、街に恐竜が現れたコンテンツを視聴している。ユーザの右側に図示されている矢印は、情報提示システム1により推定されたユーザのポジティブ度合を示しており、下から上に行くほど、ポジティブ度合が大きくなる。 The user is watching the content in which a dinosaur appears in the city on a TV, which is a visual presentation device 41, for example. The arrow shown on the right side of the user indicates the degree of positiveness of the user estimated by the information presentation system 1, and the degree of positiveness increases from the bottom to the top.
 ユーザのポジティブ度合が大きい場合、情報提示システム1は、ユーザがコンテンツを見ている最中でも、画面隅に選択肢タスクを提示させる。その際、選択肢タスクは、例えば、選択肢の数が多いタスク、複数の画面を有するタスク、または、選択肢が複数の画面に分かれ画面遷移があるタスクにより構成される。 When the degree of positiveness of the user is large, the information presentation system 1 causes the user to present the option task in the corner of the screen even while the user is viewing the content. At that time, the option task is composed of, for example, a task having a large number of options, a task having a plurality of screens, or a task in which the options are divided into a plurality of screens and have screen transitions.
 一方、ユーザのポジティブ度合が小さい場合、情報提示システム1は、CMに入ったタイミング、コンテンツの終了タイミング、または電源をオフするタイミングなど、区切りのよいタイミングで選択肢タスクを提示させる。その際、選択肢タスクは、例えば、Yes/Noの2択からなるタスク、または、通知およびOK/NGの確認のタスクなどの選択肢が少ないタスクにより構成される。 On the other hand, when the user's positive degree is low, the information presentation system 1 presents the option task at a natural break, such as when a commercial break begins, when the content ends, or when the power is turned off. In that case, the option task is composed of a task with few options, such as a Yes/No two-choice task or a notification with OK/NG confirmation.
 また、ポジティブ度合がマイナスの場合、すなわち、ユーザがネガティブ状態であると推定される場合、ユーザをポジティブ状態に向かわせるように、上述したポジティブコンテンツが提示される。 Further, when the positive degree is negative, that is, when the user is estimated to be in a negative state, the above-described positive content is presented so as to guide the user toward a positive state.
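The branching just described (immediate presentation with richer tasks at a high positive degree, presentation at a break with simpler tasks at a low degree, and positive content instead of a task at a negative degree) can be sketched as follows. This is an illustrative sketch only: the threshold values and the plan fields are assumptions, not values defined in this specification.

```python
# Illustrative sketch of the presentation policy described above.
# The thresholds (0.0, 0.6) and the returned descriptors are assumptions
# for illustration; the specification does not define concrete values.

def plan_option_task(positive_degree: float) -> dict:
    """Map an estimated positive degree to a presentation plan."""
    if positive_degree < 0.0:
        # Negative state: present positive content instead of a task.
        return {"action": "present_positive_content"}
    if positive_degree >= 0.6:
        # High positive degree: present immediately in a screen corner,
        # allowing richer tasks (many options, multiple screens).
        return {
            "action": "present_task",
            "timing": "immediate_screen_corner",
            "max_options": 8,
            "allow_multi_screen": True,
        }
    # Low positive degree: wait for a natural break and keep the task simple.
    return {
        "action": "present_task",
        "timing": "next_break",  # commercial break, content end, power off
        "max_options": 2,        # e.g. Yes/No or OK/NG confirmation
        "allow_multi_screen": False,
    }
```

A real system would feed this function from the emotion estimation unit 61 rather than a raw number.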
 なお、選択肢タスクの提示時間は、ユーザのポジティブ状態が続いている時間内であればよい。ただし、ユーザがポジティブ状態で選択することになるので、実際には、例えば、数十秒で答えられる。 Note that the option task may be presented at any point within the time during which the user's positive state continues. In practice, since the user makes the selection while in a positive state, the answer is typically given within, for example, a few tens of seconds.
 また、選択肢タスクは、ユーザのポジティブ状態を一気に急降下させることがない限り、継続して提示されるようにしてもよい。例えば、ユーザに対して問いかける質問が複数ある場合、ユーザのポジティブ状態が継続している間であれば、提示可能なタイミングに選択肢タスクを提示し続けていてもよい。最長提示時間は、ユーザのポジティブ状態が継続している時間である。 Further, option tasks may be presented continuously as long as doing so does not cause the user's positive state to drop sharply. For example, when there are a plurality of questions to ask the user, option tasks may continue to be presented at presentable timings as long as the user's positive state continues. The maximum presentation time is the time during which the user's positive state continues.
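The continuation behavior described above amounts to draining a queue of questions while the positive state persists. The sketch below is a hypothetical illustration; `is_positive` stands in for the emotion estimation unit 61 and is not an API defined in this specification.

```python
# Hypothetical sketch: keep presenting queued questions while the user's
# positive state continues, stopping as soon as it ends.

def present_while_positive(questions, is_positive):
    """Present questions in order until the positive state ends.

    `is_positive` is a callable returning the current state estimate;
    here it stands in for the emotion estimation unit.
    """
    answered = []
    for q in questions:
        if not is_positive():
            break  # positive state ended: stop presenting
        answered.append(q)  # in the real system: present and collect answer
    return answered

# Simulated estimator: positive for the first two checks only.
states = iter([True, True, False])
result = present_while_positive(["Q1", "Q2", "Q3"], lambda: next(states))
# → ["Q1", "Q2"]
```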
 以上のように、本技術においては、ユーザのポジティブ度合に応じて、ユーザに対する選択肢タスクの提示が制御される。 As described above, in this technology, the presentation of the option task to the user is controlled according to the degree of positiveness of the user.
 これにより、ユーザのポジティブ状態を維持させつつ、ユーザに対して、迅速な選択肢の回答を促すことができる。 This makes it possible to prompt the user to quickly answer the options while maintaining the positive state of the user.
<2.拡張例1(試合観戦中のアンケート実施)>
 上述した図1の情報提示システム1は、スポーツなどの試合観戦中に、試合の状況が落ち着き、かつ、ユーザがポジティブ状態であると推定された場合、試合に関連するアンケートを含む選択肢タスクを提示させるアプリケーションとしても機能する。拡張例1について図7を参照して説明する。
<2. Extension example 1 (questionnaire conducted while watching a game)>
The information presentation system 1 of FIG. 1 described above also functions as an application that presents an option task including a questionnaire related to a game when, while the user is watching a game such as a sports match, the game situation has settled down and the user is estimated to be in a positive state. Extension example 1 will be described with reference to FIG. 7.
 図7は、拡張例1における選択肢タスクの提示例を示す図である。 FIG. 7 is a diagram showing an example of presenting an option task in extension example 1.
 図7においては、試合観戦中に、試合の状況が落ち着き、かつ、ユーザがポジティブ状態であると推定された場合、試合に関連するアンケートを含む選択肢タスクを提示する例が示されている。 FIG. 7 shows an example of presenting an option task including a questionnaire related to a match when the situation of the match is settled and the user is estimated to be in a positive state while watching the match.
 ユーザは、例えば、試合会場において、サッカーの試合を観戦している。ユーザが応援しているチームに点数が入り、盛り上がった後、試合展開が落ち着いたタイミング(例えば、ハーフタイム時間など)において、ユーザがポジティブ状態であると推定された場合、タスク生成部62は、試合に関連するアンケートを含む選択肢タスクを生成する。 The user is, for example, watching a soccer match at the match venue. When the user is estimated to be in a positive state at a timing when the match has settled down (for example, during halftime) after the team the user supports has scored and the excitement has subsided, the task generation unit 62 generates an option task including a questionnaire related to the match.
 試合会場には、大サイズのディスプレイからなる視覚提示デバイス41が設けられている。出力制御部63は、試合に関連するアンケートを含む選択肢タスクとして、「今日のMVPは? A:XXX、B:RRR ・スマホで投票してね!」を試合会場の視覚提示デバイス41に提示させる。 The match venue is provided with a visual presentation device 41 consisting of a large display. The output control unit 63 causes the visual presentation device 41 at the match venue to present, as an option task including a questionnaire related to the match, "Who is today's MVP? A: XXX, B: RRR. Vote on your smartphone!"
 選択肢タスクを見たユーザは、自身が持っているセンサ部33であるスマートホンを用いて、AまたはBを選択する。 The user who sees the option task selects A or B using their own smartphone, which serves as the sensor unit 33.
 例えば、ユーザが、提示された選択肢タスクに対して「A:×××」と応答した場合、センサ部33は、ユーザの応答を示す音声を検出し、情報処理装置12に出力する。その際、ユーザの応答に基づいて、データベース更新部65は、データベース64のユーザ情報を更新し、出力制御部63は、アンケート情報として、対応するサーバにネットワークを介して送信する。 For example, when the user responds to the presented option task with "A: XXX", the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing apparatus 12. At that time, the database update unit 65 updates the user information in the database 64 based on the user's response, and the output control unit 63 transmits the response as questionnaire information to the corresponding server via the network.
 ここで、選択肢タスクの生成時、タスク生成部62は、回答率を上げたい優先度の高い質問を含む選択肢タスクから順に提示するように、選択肢タスクを生成する。これにより、ユーザの一部が途中で離脱したとしても、優先度の高い質問に対する回答率を上げることができる。 Here, when generating option tasks, the task generation unit 62 generates them so that option tasks containing high-priority questions, for which the response rate is to be raised, are presented first. As a result, even if some users leave partway through, the response rate for high-priority questions can be increased.
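The priority-first ordering described above can be sketched as a simple sort. The question records and the `priority` field are hypothetical illustrations, not structures defined in this specification.

```python
# Hypothetical sketch: present option tasks in descending priority so that
# high-priority questions are answered even if some users leave early.

def order_questions(questions):
    """Return questions sorted so the highest-priority ones come first."""
    return sorted(questions, key=lambda q: q["priority"], reverse=True)

questions = [
    {"text": "Which jersey design do you prefer?", "priority": 1},
    {"text": "Who is today's MVP?", "priority": 3},
    {"text": "Will you attend the next match?", "priority": 2},
]
ordered = order_questions(questions)
# The MVP question (priority 3) is presented first.
```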
 なお、図7の例においては、試合会場のディスプレイに選択肢タスクを提示する例を示したが、提示する視覚提示デバイス41は、スマートホン、タブレット端末、パーソナルコンピュータ、TV、または、AR(拡張現実/Augmented Reality)デバイスなどの各ユーザが所持するデバイスであってもよい。 In the example of FIG. 7, the option task is presented on the display at the match venue, but the visual presentation device 41 used for presentation may instead be a device owned by each user, such as a smartphone, a tablet terminal, a personal computer, a TV, or an AR (Augmented Reality) device.
 また、図7の例においては、試合会場で試合を観戦する例が示されていたが、スポーツの試合に限らず、イベントやライブなどであってもよい。また、試合やイベントなどのコンテンツは、オンライン動画や動画配信コンテンツであってもよい。その際、例えば、選択肢タスクは、コンテンツと重畳させて提示される。 Further, although the example of FIG. 7 shows watching a match at a match venue, the content is not limited to a sports match and may be an event, a live performance, or the like. Content such as a match or an event may also be an online video or video distribution content. In that case, for example, the option task is presented superimposed on the content.
 以上のように、本技術においては、コンテンツ視聴中に、コンテンツの展開が落ち着き、かつ、ユーザがポジティブ状態であると推定された場合に、コンテンツに関連したアンケートを含む選択肢タスクが提示される。これにより、ユーザの回答の即時性を上げることができる。 As described above, in the present technology, when the content has reached a lull during viewing and the user is estimated to be in a positive state, an option task including a questionnaire related to the content is presented. This makes it possible to improve the immediacy of the user's responses.
<3.拡張例2(To Doリストに基づく提示)>
 上述した図1の情報提示システム1は、選択肢タスクを、データベース64などに登録されたユーザの行動予定が登録されるTo Doリストやスケジュールに基づいて生成することで、ユーザの行動を促すアプリケーションとしても機能する。拡張例2について図8および図9を参照して説明する。
<3. Extension example 2 (presentation based on a to-do list)>
The information presentation system 1 of FIG. 1 described above also functions as an application that prompts the user's actions by generating option tasks based on a to-do list or a schedule, registered in the database 64 or the like, in which the user's planned actions are recorded. Extension example 2 will be described with reference to FIGS. 8 and 9.
 図8は、拡張例2を説明する図である。 FIG. 8 is a diagram illustrating extension example 2.
 図8においては、拡張例2において、ユーザがポジティブ状態ではないと判定された場合に、音楽がかけられる例が示されている。 FIG. 8 shows an example in which, in extension example 2, music is played when it is determined that the user is not in a positive state.
 情報提示システム1においては、生活の中で、ユーザがポジティブ状態ではないと判定された場合、図8に示されるように、例えば、音声提示デバイス42から、上述したポジティブコンテンツの一例として、ユーザの好みの音楽などがかけられる。 In the information presentation system 1, when it is determined in daily life that the user is not in a positive state, as shown in FIG. 8, for example, the user's favorite music or the like is played from the voice presentation device 42 as an example of the above-described positive content.
 ポジティブコンテンツは、上述したように、データベース64に登録されているユーザ情報などに基づいて決定される。 As described above, the positive content is determined based on the user information registered in the database 64 and the like.
 これに応じて、ユーザの状態がポジティブ状態に変化し、ユーザがポジティブ状態であると判定された場合、タスク生成部62においては、さらに、ユーザが介入してもよい(割り込みを許容すると推定される)状態であるか否かが判定される。 In response to this, when the user's state changes to a positive state and the user is determined to be in a positive state, the task generation unit 62 further determines whether the user is in a state in which intervention is acceptable (a state estimated to allow interruption).
 図9は、拡張例2における選択肢タスクの提示例を示す図である。 FIG. 9 is a diagram showing an example of presenting an option task in extension example 2.
 図9においては、ユーザがポジティブ状態であると判定され、さらに、ユーザが介入してもよい状態であると判定されたとき、ユーザのTo Doリストに基づいて生成された選択肢タスクを提示する例が示されている。 FIG. 9 shows an example in which, when the user is determined to be in a positive state and further determined to be in a state in which intervention is acceptable, an option task generated based on the user's to-do list is presented.
 ユーザがポジティブ状態であると判定された場合、タスク生成部62においては、まず、ユーザの行動状況などが参照されて、介入してもよい状態であるか否かが判定される。 When it is determined that the user is in a positive state, the task generation unit 62 first refers to the user's behavioral status and the like, and determines whether or not the user is in a state where intervention is acceptable.
 ユーザが、忙しくなく、動きにゆとりがあるときなど、ユーザが介入してもよい状態であると判定された場合、タスク生成部62は、データベース64に登録されているユーザのTo Doリストを確認するために、「そういえば、AとBを決める必要がありましたね。どちらにしますか?」という2つの選択肢を有する選択肢タスクを生成する。 When it is determined that the user is in a state in which intervention is acceptable, such as when the user is not busy and has some slack in their activities, the task generation unit 62 generates, in order to confirm the user's to-do list registered in the database 64, an option task having two options: "Come to think of it, you needed to decide between A and B. Which will it be?"
 出力制御部63は、音声提示デバイス42であるスマートスピーカから、生成した選択肢タスクを出力するように制御する。その結果、音声提示デバイス42から、「そういえば、AとBを決める必要がありましたね。どちらにしますか?」という音声により選択肢タスクが提示される。 The output control unit 63 controls the smart speaker, which is the voice presentation device 42, to output the generated option task. As a result, the option task is presented from the voice presentation device 42 as the voice "Come to think of it, you needed to decide between A and B. Which will it be?"
 例えば、ユーザが、提示された選択肢タスクに対して「Aにするよ」と応答した場合、センサ部33は、ユーザの応答を示す音声を検出し、情報処理装置12に出力する。データベース更新部65は、ユーザの応答に基づいて、データベース64のユーザのTo Doリストを更新し、上述したように、該アイテムの状態を、例えば、優先的に提示する、または、バックグラウンドで処理するなどの状態に移す。 For example, when the user responds to the presented option task with "I'll go with A", the sensor unit 33 detects the voice indicating the user's response and outputs it to the information processing apparatus 12. The database update unit 65 updates the user's to-do list in the database 64 based on the user's response and, as described above, moves the item to a state such as being presented preferentially or being processed in the background.
 なお、図9においては、To Doリストの確認を行う選択肢タスクであったが、ユーザ情報などに基づいて生成される、ユーザの次の行動に対する提案からなる選択肢タスクであってもよい。 Note that, in FIG. 9, the task is an option task for confirming the to-do list, but it may be an option task consisting of proposals for the user's next action, which is generated based on user information or the like.
 また、選択肢タスクは、ユーザがポジティブ状態であり、介入してもよいタイミングであれば、To Doリストにおける本来の提示タイミング(決められた時間)よりも前の時間であっても提示してもよい。すなわち、ユーザがポジティブ状態であり、介入してもよいタイミングに選択肢タスクが提示されるので、ユーザは、より迅速に回答を行うことができ、次の行動に移ることができる。 In addition, as long as the user is in a positive state and at a timing when intervention is acceptable, the option task may be presented even earlier than the original presentation timing (the determined time) in the to-do list. That is, since the option task is presented at a timing when the user is in a positive state and intervention is acceptable, the user can respond more quickly and move on to the next action.
 逆に、選択肢タスクは、To Doリストにおける本来の提示タイミング(決められた時間)に、ユーザがポジティブ状態ではない、または、ユーザが多忙なタイミングである場合には、本来の提示タイミングよりも後に提示するようにしてもよい。この場合も、ユーザがポジティブ状態であり、介入してもよいタイミングに選択肢タスクが提示されるので、ユーザは、より迅速に回答を行うことができ、次の行動に移ることができる。 Conversely, when the user is not in a positive state or is busy at the original presentation timing (the determined time) in the to-do list, the option task may be presented after the original presentation timing. In this case as well, since the option task is presented at a timing when the user is in a positive state and intervention is acceptable, the user can respond more quickly and move on to the next action.
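The scheduling behavior of the preceding two paragraphs (presenting earlier or later than the to-do list's scheduled time, depending on the user's state) can be sketched as follows. The predicates and return values are assumptions for illustration; in the system they would be derived from the emotion estimation unit 61 and the user's activity status.

```python
# Illustrative sketch: decide when to present a to-do confirmation task
# relative to its originally scheduled time. All names here are hypothetical.

def decide_presentation(is_positive: bool, allows_interruption: bool,
                        now: float, scheduled_time: float) -> str:
    if is_positive and allows_interruption:
        # Present immediately, whether before or after the scheduled time.
        return "present"
    if now < scheduled_time:
        # Not yet due and the user is unavailable: keep waiting.
        return "wait"
    # Scheduled time has passed while the user was busy or not positive:
    # defer until a positive, interruptible moment arrives.
    return "defer"

# Early presentation: positive and interruptible before the scheduled time.
assert decide_presentation(True, True, now=9.0, scheduled_time=12.0) == "present"
# Deferred: the scheduled time arrives while the user is busy.
assert decide_presentation(True, False, now=12.0, scheduled_time=12.0) == "defer"
```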
 以上のように、本技術においては、ユーザがポジティブ状態であり、介入してもよいタイミングのとき、ユーザ情報(ユーザのTo Doリスト)に基づいて、選択肢タスクが提示される。これにより、ユーザに負担なく、選択肢タスクを自然に提示することができる。 As described above, in the present technology, when the user is in a positive state and at a timing when intervention is acceptable, an option task is presented based on user information (the user's to-do list). This makes it possible to present option tasks naturally without burdening the user.
<4.その他>
 (本技術の効果)
 本技術においては、ユーザのポジティブ状態を推定する推定部によりユーザのポジティブ状態が推定されたことに応じて、ユーザに対する選択肢タスクの提示が制御される。
<4. Others>
(Effect of this technology)
In the present technology, the presentation of the option task to the user is controlled according to the estimation of the positive state of the user by the estimation unit that estimates the positive state of the user.
 これにより、ユーザに対して、迅速な選択肢の回答を促し、ユーザから迅速な回答を得ることができる。 This makes it possible to prompt the user for a quick answer to the options and to obtain a quick response from the user.
 なお、メニュー表示などは、ユーザが何かを選択したい、または切り替えたいときにユーザの操作に応じて提示されるものであるので、ユーザに選択を促すことを目的としている本技術の選択肢タスクとは異なる。 Note that menu displays and the like are presented in response to the user's operation when the user wants to select or switch something, and thus differ from the option task of the present technology, whose purpose is to prompt the user to make a selection.
 (コンピュータの構成例)
 上述した一連の処理は、ハードウェアにより実行することもできるし、ソフトウェアにより実行することもできる。一連の処理をソフトウェアにより実行する場合には、そのソフトウェアを構成するプログラムが、専用のハードウェアに組み込まれているコンピュータ、または汎用のパーソナルコンピュータなどに、プログラム記録媒体からインストールされる。
(Computer configuration example)
The series of processes described above can be executed by hardware or software. When a series of processes are executed by software, the programs constituting the software are installed from a program recording medium on a computer embedded in dedicated hardware, a general-purpose personal computer, or the like.
 図10は、上述した一連の処理をプログラムにより実行するコンピュータのハードウェアの構成例を示すブロック図である。 FIG. 10 is a block diagram showing a configuration example of computer hardware that executes the above-mentioned series of processes programmatically.
 CPU301、ROM(Read Only Memory)302、RAM303は、バス304により相互に接続されている。 The CPU 301, ROM (Read Only Memory) 302, and RAM 303 are connected to each other by the bus 304.
 バス304には、さらに、入出力インタフェース305が接続されている。入出力インタフェース305には、キーボード、マウスなどよりなる入力部306、ディスプレイ、スピーカなどよりなる出力部307が接続される。また、入出力インタフェース305には、ハードディスクや不揮発性のメモリなどよりなる記憶部308、ネットワークインタフェースなどよりなる通信部309、リムーバブルメディア311を駆動するドライブ310が接続される。 The input / output interface 305 is further connected to the bus 304. An input unit 306 including a keyboard, a mouse, and the like, and an output unit 307 including a display, a speaker, and the like are connected to the input / output interface 305. Further, the input / output interface 305 is connected to a storage unit 308 made of a hard disk, a non-volatile memory, etc., a communication unit 309 made of a network interface, etc., and a drive 310 for driving the removable media 311.
 以上のように構成されるコンピュータでは、CPU301が、例えば、記憶部308に記憶されているプログラムを入出力インタフェース305及びバス304を介してRAM303にロードして実行することにより、上述した一連の処理が行われる。 In the computer configured as described above, the above-described series of processes is performed by the CPU 301 loading, for example, the program stored in the storage unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executing it.
 CPU301が実行するプログラムは、例えばリムーバブルメディア311に記録して、あるいは、ローカルエリアネットワーク、インターネット、デジタル放送といった、有線または無線の伝送媒体を介して提供され、記憶部308にインストールされる。 The program executed by the CPU 301 is recorded on the removable media 311 or provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and installed in the storage unit 308.
 なお、コンピュータが実行するプログラムは、本明細書で説明する順序に沿って時系列に処理が行われるプログラムであっても良いし、並列に、あるいは呼び出しが行われたとき等の必要なタイミングで処理が行われるプログラムであっても良い。 The program executed by the computer may be a program in which processing is performed chronologically in the order described in this specification, or a program in which processing is performed in parallel or at necessary timings, such as when a call is made.
 なお、本明細書において、システムとは、複数の構成要素(装置、モジュール(部品)等)の集合を意味し、すべての構成要素が同一筐体中にあるか否かは問わない。したがって、別個の筐体に収納され、ネットワークを介して接続されている複数の装置、及び、1つの筐体の中に複数のモジュールが収納されている1つの装置は、いずれも、システムである。 In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
 また、本明細書に記載された効果はあくまで例示であって限定されるものでは無く、また他の効果があってもよい。 Further, the effects described in the present specification are merely examples and are not limited, and other effects may be obtained.
 本技術の実施の形態は、上述した実施の形態に限定されるものではなく、本技術の要旨を逸脱しない範囲において種々の変更が可能である。 The embodiment of the present technology is not limited to the above-described embodiment, and various changes can be made without departing from the gist of the present technology.
 例えば、本技術は、1つの機能を、ネットワークを介して複数の装置で分担、共同して処理するクラウドコンピューティングの構成をとることができる。 For example, this technology can take a cloud computing configuration in which one function is shared by multiple devices via a network and processed jointly.
 また、上述のフローチャートで説明した各ステップは、1つの装置で実行する他、複数の装置で分担して実行することができる。 In addition, each step described in the above flowchart can be executed by one device or shared by a plurality of devices.
 さらに、1つのステップに複数の処理が含まれる場合には、その1つのステップに含まれる複数の処理は、1つの装置で実行する他、複数の装置で分担して実行することができる。 Further, when a plurality of processes are included in one step, the plurality of processes included in the one step can be executed by one device or shared by a plurality of devices.
<構成の組み合わせ例>
 本技術は、以下のような構成をとることもできる。
(1)
 ユーザのポジティブ状態を推定する推定部により前記ユーザのポジティブ状態が推定されたことに応じて、前記ユーザに対する選択肢タスクの提示を制御する提示制御部と
 を備える情報処理装置。
(2)
 前記選択肢タスクとは、質問に対して、前記ユーザの意思により、選択肢を選択させるためのタスクである
 前記(1)に記載の情報処理装置。
(3)
 前記提示制御部は、前記ユーザの主観データを取得するために前記選択肢タスクの提示を制御する
 前記(2)に記載の情報処理装置。
(4)
 前記選択肢の選択結果は、前記ユーザに関する情報として利用される
 前記(3)に記載の情報処理装置。
(5)
 前記提示制御部は、前記情報処理装置におけるユーザ向けサービスの一環として前記選択肢タスクの提示を制御する
 前記(2)に記載の情報処理装置。
(6)
 前記選択肢の選択結果は、前記情報処理装置の状態を遷移させるトリガーとなる
 前記(5)に記載の情報処理装置。
(7)
 前記提示制御部は、前記ユーザの選択が望まれる選択肢を含む前記選択肢タスク、または前記ユーザが行いたい行動に関する前記選択肢タスクの提示を制御する
 前記(2)乃至(6)のいずれかに記載の情報処理装置。
(8)
 前記提示制御部は、前記ユーザの興味がある情報に関する前記選択肢タスクの提示を制御する
 前記(2)乃至(6)のいずれかに記載の情報処理装置。
(9)
 前記提示制御部は、前記選択肢の選択結果の重要度が低い質問を含む前記選択肢タスクの提示を制御する
 前記(2)乃至(6)のいずれかに記載の情報処理装置。
(10)
 前記提示制御部は、前記ユーザのポジティブ状態の度合に応じて、前記選択肢タスクを提示するタイミング、前記選択肢の数、および前記選択肢タスクを構成する画面数の少なくともいずれか1つを変える
 前記(2)乃至(9)のいずれかに記載の情報処理装置。
(11)
 前記提示制御部は、前記ユーザが視聴中のコンテンツの状況に基づいて、前記コンテンツに関するアンケートを含む前記選択肢タスクの提示を制御する
 前記(2)に記載の情報処理装置。
(12)
 前記提示制御部は、前記選択肢タスクに対して期待される回答率に基づく優先度に基づいて前記選択肢タスクの提示順を制御する
 前記(11)に記載の情報処理装置。
(13)
 前記提示制御部は、前記ユーザの状態が割り込みを許容すると推定される状態のとき、前記選択肢タスクの提示を制御する
 前記(2)乃至(12)に記載の情報処理装置。
(14)
 前記選択肢タスクは、前記ユーザの予定を確認するための前記選択肢を含む
 前記(2)または(13)に記載の情報処理装置。
(15)
 前記選択肢タスクは、前記ユーザの予定の他に、前記ユーザの予定に基づき推薦可能な次の行動についての前記選択肢を含む
 前記(14)に記載の情報処理装置。
(16)
 前記提示制御部は、前記選択肢タスクの提示を、前記ユーザの予定を提示する予定時刻よりも前または後に提示するように制御する
 前記(14)に記載の情報処理装置。
(17)
 前記提示制御部は、前記推定部により前記ユーザのポジティブ状態が推定されなかったことに応じて、前記ユーザに関する情報に基づくコンテンツを提示する
 前記(1)乃至(16)のいずれかに記載の情報処理装置。
(18)
 前記推定部は、前記ユーザの生体情報、前記ユーザの行動状況、および前記ユーザの外部環境情報の少なくともいずれかに基づいて、前記ユーザのポジティブ状態を推定する
 前記(1)乃至(17)のいずれかに記載の情報処理装置。
(19)
 情報処理装置が、
 ユーザのポジティブ状態を推定する推定部により前記ユーザのポジティブ状態が推定されたことに応じて、前記ユーザに対する選択肢タスクの提示を制御する
 情報処理方法。
(20)
 ユーザのポジティブ状態を推定する推定部により前記ユーザのポジティブ状態が推定されたことに応じて、前記ユーザに対する選択肢タスクの提示を制御する提示制御部と
 して、コンピュータを機能させるためのプログラム。
<Example of configuration combination>
The present technology can also have the following configurations.
(1)
An information processing device including a presentation control unit that controls the presentation of an option task to the user in response to the estimation of the user's positive state by the estimation unit that estimates the user's positive state.
(2)
The information processing device according to (1) above, wherein the option task is a task for causing a question to be selected by the user's will.
(3)
The information processing device according to (2), wherein the presentation control unit controls presentation of the option task in order to acquire subjective data of the user.
(4)
The information processing apparatus according to (3), wherein the selection result of the option is used as information about the user.
(5)
The information processing apparatus according to (2), wherein the presentation control unit controls presentation of the option task as a part of a service for users in the information processing apparatus.
(6)
The information processing apparatus according to (5) above, wherein the selection result of the option is a trigger for transitioning the state of the information processing apparatus.
(7)
The information processing apparatus according to any one of (2) to (6) above, wherein the presentation control unit controls the presentation of the option task including options that the user is desired to select, or of the option task related to an action that the user wants to perform.
(8)
The information processing device according to any one of (2) to (6), wherein the presentation control unit controls presentation of the option task regarding information of interest to the user.
(9)
The information processing apparatus according to any one of (2) to (6), wherein the presentation control unit controls presentation of the option task including a question of low importance of the selection result of the option.
(10)
The information processing apparatus according to any one of (2) to (9) above, wherein the presentation control unit changes at least one of the timing of presenting the option task, the number of options, and the number of screens constituting the option task, according to the degree of the positive state of the user.
(11)
The information processing device according to (2), wherein the presentation control unit controls presentation of the option task including a questionnaire regarding the content, based on the status of the content being viewed by the user.
(12)
The information processing device according to (11), wherein the presentation control unit controls the presentation order of the option task based on the priority based on the response rate expected for the option task.
(13)
The information processing apparatus according to any one of (2) to (12) above, wherein the presentation control unit controls the presentation of the option task when the state of the user is presumed to allow an interrupt.
(14)
The information processing apparatus according to (2) or (13), wherein the option task includes the option for confirming the user's schedule.
(15)
The information processing apparatus according to (14), wherein the option task includes, in addition to the user's schedule, the option for the next action that can be recommended based on the user's schedule.
(16)
The information processing apparatus according to (14), wherein the presentation control unit controls the presentation of the option task to be presented before or after the scheduled time for presenting the user's schedule.
(17)
The information processing apparatus according to any one of (1) to (16) above, wherein the presentation control unit presents content based on information about the user in response to the estimation unit not estimating the user to be in a positive state.
(18)
The information processing apparatus according to any one of (1) to (17) above, wherein the estimation unit estimates the positive state of the user based on at least one of the user's biometric information, the user's behavioral status, and the user's external environment information.
(19)
An information processing method in which an information processing apparatus controls the presentation of an option task to a user in response to the user's positive state being estimated by an estimation unit that estimates the user's positive state.
(20)
A program for operating a computer as a presentation control unit that controls presentation of an option task to the user in response to the estimation of the user's positive state by the estimation unit that estimates the user's positive state.
 1 情報提示システム, 11 入力装置, 12 情報処理装置, 13 出力装置, 21 ネットワーク 31乃至33 センサ部, 41 視覚提示デバイス, 42 音声提示デバイス, 61 感情推定部, 62 タスク生成部, 63 出力制御部, 64 データベース, 65 データベース更新部,66 コンテンツ決定部 1 Information presentation system, 11 Input device, 12 Information processing device, 13 Output device, 21 Network 31 to 33 Sensor unit, 41 Visual presentation device, 42 Voice presentation device, 61 Emotion estimation unit, 62 Task generation unit, 63 Output control unit , 64 database, 65 database update department, 66 content determination department

Claims (20)

  1.  ユーザのポジティブ状態を推定する推定部により前記ユーザのポジティブ状態が推定されたことに応じて、前記ユーザに対する選択肢タスクの提示を制御する提示制御部と
     を備える情報処理装置。
    An information processing device including a presentation control unit that controls the presentation of an option task to the user in response to the estimation of the user's positive state by the estimation unit that estimates the user's positive state.
  2.  前記選択肢タスクとは、質問に対して、前記ユーザの意思により、選択肢を選択させるためのタスクである
     請求項1に記載の情報処理装置。
    The information processing device according to claim 1, wherein the option task is a task for causing a question to be selected by the user's will.
  3.  前記提示制御部は、前記ユーザの主観データを取得するために前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task in order to acquire the subjective data of the user.
  4.  前記選択肢の選択結果は、前記ユーザに関する情報として利用される
     請求項3に記載の情報処理装置。
    The information processing apparatus according to claim 3, wherein the selection result of the option is used as information about the user.
  5.  前記提示制御部は、前記情報処理装置におけるユーザ向けサービスの一環として前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task as a part of a service for users in the information processing device.
  6.  前記選択肢の選択結果は、前記情報処理装置の状態を遷移させるトリガーとなる
     請求項5に記載の情報処理装置。
    The information processing apparatus according to claim 5, wherein the selection result of the option is a trigger for transitioning the state of the information processing apparatus.
  7.  前記提示制御部は、前記ユーザの選択が望まれる選択肢を含む前記選択肢タスク、または前記ユーザが行いたい行動に関する前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task including the options that the user wants to select, or the option task related to the action that the user wants to perform.
  8.  前記提示制御部は、前記ユーザの興味がある情報に関する前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task regarding information of interest to the user.
  9.  前記提示制御部は、前記選択肢の選択結果の重要度が低い質問を含む前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task including a question of low importance of the selection result of the option.
  10.  前記提示制御部は、前記ユーザのポジティブ状態の度合に応じて、前記選択肢タスクを提示するタイミング、前記選択肢の数、および前記選択肢タスクを構成する画面数の少なくともいずれか1つを変える
     請求項2に記載の情報処理装置。
The information processing device according to claim 2, wherein the presentation control unit changes at least one of the timing of presenting the option task, the number of options, and the number of screens constituting the option task according to the degree of the positive state of the user.
  11.  前記提示制御部は、前記ユーザが視聴中のコンテンツの状況に基づいて、前記コンテンツに関するアンケートを含む前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task including a questionnaire regarding the content, based on the status of the content being viewed by the user.
  12.  前記提示制御部は、前記選択肢タスクに対して期待される回答率に基づく優先度に基づいて前記選択肢タスクの提示順を制御する
     請求項11に記載の情報処理装置。
    The information processing device according to claim 11, wherein the presentation control unit controls the presentation order of the option task based on the priority based on the response rate expected for the option task.
  13.  前記提示制御部は、前記ユーザの状態が割り込みを許容すると推定される状態のとき、前記選択肢タスクの提示を制御する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit controls the presentation of the option task when the state of the user is presumed to allow an interrupt.
  14.  前記選択肢タスクは、前記ユーザの予定を確認するための前記選択肢を含む
     請求項13に記載の情報処理装置。
    The information processing apparatus according to claim 13, wherein the option task includes the option for confirming the schedule of the user.
  15.  前記選択肢タスクは、前記ユーザの予定の他に、前記ユーザの予定に基づき推薦可能な次の行動についての前記選択肢を含む
     請求項14に記載の情報処理装置。
    The information processing apparatus according to claim 14, wherein the option task includes, in addition to the user's schedule, the option for the next action that can be recommended based on the user's schedule.
  16.  前記提示制御部は、前記選択肢タスクの提示を、前記ユーザの予定を提示する予定時刻よりも前または後に提示するように制御する
     請求項13に記載の情報処理装置。
    The information processing device according to claim 13, wherein the presentation control unit controls the presentation of the option task to be presented before or after the scheduled time for presenting the user's schedule.
  17.  前記提示制御部は、前記推定部により前記ユーザのポジティブ状態が推定されなかったことに応じて、前記ユーザに関する情報に基づくコンテンツを提示する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the presentation control unit presents content based on information about the user in response to the fact that the estimation unit has not estimated the positive state of the user.
  18.  前記推定部は、前記ユーザの生体情報、前記ユーザの行動状況、および前記ユーザの外部環境の少なくともいずれかに基づいて、前記ユーザのポジティブ状態を推定する
     請求項2に記載の情報処理装置。
    The information processing device according to claim 2, wherein the estimation unit estimates a positive state of the user based on at least one of the biometric information of the user, the behavioral state of the user, and the external environment of the user.
  19.  情報処理装置が、
     ユーザのポジティブ状態を推定する推定部により前記ユーザのポジティブ状態が推定されたことに応じて、前記ユーザに対する選択肢タスクの提示を制御する
     情報処理方法。
An information processing method in which an information processing apparatus controls the presentation of an option task to a user in response to the user's positive state being estimated by an estimation unit that estimates the user's positive state.
  20.  ユーザのポジティブ状態を推定する推定部により前記ユーザのポジティブ状態が推定されたことに応じて、前記ユーザに対する選択肢タスクの提示を制御する提示制御部と
     して、コンピュータを機能させるためのプログラム。
    A program for operating a computer as a presentation control unit that controls presentation of an option task to the user in response to the estimation of the user's positive state by the estimation unit that estimates the user's positive state.
PCT/JP2021/017145 2020-05-13 2021-04-30 Information processing device and method, and program WO2021230100A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020084278 2020-05-13
JP2020-084278 2020-05-13

Publications (1)

Publication Number Publication Date
WO2021230100A1 true WO2021230100A1 (en) 2021-11-18

Family

ID=78525738

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/017145 WO2021230100A1 (en) 2020-05-13 2021-04-30 Information processing device and method, and program

Country Status (1)

Country Link
WO (1) WO2021230100A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023157237A1 (en) * 2022-02-18 2023-08-24 株式会社EarBrain Information processing system, information processing system control method, and measurement device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017175351A1 (en) * 2016-04-07 2017-10-12 株式会社ソニー・インタラクティブエンタテインメント Information processing device
JP2017201499A (en) * 2015-10-08 2017-11-09 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Control method of information presentation apparatus, and information presentation apparatus
JP2017215468A (en) * 2016-05-31 2017-12-07 トヨタ自動車株式会社 Voice interactive device and voice interactive method
WO2018142686A1 (en) * 2017-01-31 2018-08-09 ソニー株式会社 Information processing device, information processing method, and program

Similar Documents

Publication Publication Date Title
US11743527B2 (en) System and method for enhancing content using brain-state data
US20210034141A1 (en) Information processing system, client terminal, information processing method, and recording medium
Coan et al. The specific affect coding system (SPAFF)
JP7424285B2 (en) Information processing system, information processing method, and recording medium
US8984065B2 (en) Systems and methods for online matching using non-self-identified data
US20150058327A1 (en) Responding to apprehension towards an experience with an explanation indicative of similarity to a prior experience
US20150162000A1 (en) Context aware, proactive digital assistant
Konijn The role of emotion in media use and effects
US20140215505A1 (en) Systems and methods for supplementing content with audience-requested information
KR20170085422A (en) Apparatus and method for operating personal agent
JP2006012171A (en) System and method for using biometrics to manage review
JPWO2018142686A1 (en) Information processing apparatus, information processing method, and program
CN110152314B (en) Session output system, session output server, session output method, and storage medium
WO2021230100A1 (en) Information processing device and method, and program
Wijaya Desire and Pleasure in the Branded Reality Show as a Discursive Psychoanalysis
Prado Social media and your brain: Web-based communication is changing how we think and express ourselves
Kurtzberg et al. Distracted: Staying connected without losing focus
CN110214301B (en) Information processing apparatus, information processing method, and program
WO2021193086A1 (en) Information processing device, method, and program
US20210136323A1 (en) Information processing device, information processing method, and program
Li et al. The research on the usage behavior of TikTok short video platform in the elderly group
Di Bona Emotional Accounts of Musical Experience and Musical Object: On the Relationship Between Music and Emotion 1
JPWO2020026799A1 (en) Information processing equipment, information processing methods, and programs
Robson Sound, Self and Crisis: Mapping the Affective Dimensions of Podcast Media
Cohen et al. Emotions and Technological Affordances

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21804924

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 21804924

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP