WO2021070733A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2021070733A1
WO2021070733A1 (PCT/JP2020/037434)
Authority
WO
WIPO (PCT)
Prior art keywords
discussion
user
information
support
presentation
Prior art date
Application number
PCT/JP2020/037434
Other languages
French (fr)
Japanese (ja)
Inventor
真秀 林
哲男 池田
英佑 藤縄
Original Assignee
Sony Corporation
Priority date
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2021070733A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management

Definitions

  • This technology relates to an information processing device, an information processing method, and a program, and in particular to an information processing device, an information processing method, and a program that make it possible to activate a discussion.
  • This technology was made in view of such a situation, and makes it possible to easily activate discussions at meetings and the like.
  • An information processing device according to one aspect of the present technology includes a situation detection unit that detects the status of a discussion based on sensor data obtained by sensing the state of the discussion about a discussion target visually presented to users, and an output control unit that, when stagnation of the discussion is detected, performs control to present support information for supporting the discussion.
  • An information processing method according to one aspect of the present technology detects the status of a discussion based on sensor data obtained by sensing the state of the discussion about a discussion target visually presented to users, and, when stagnation of the discussion is detected, performs control to present support information for supporting the discussion.
  • A program according to one aspect of the present technology causes a computer to execute processing of detecting the status of a discussion based on sensor data obtained by sensing the state of the discussion about a discussion target visually presented to users and, when stagnation of the discussion is detected, controlling presentation of support information for supporting the discussion.
  • Embodiments of the present technology will be described with reference to FIGS. 1 to 42.
  • FIG. 1 is a block diagram showing a configuration example of an information processing system 1 to which the present technology is applied.
  • the information processing system 1 is a system that supports discussions.
  • the type of discussion that can be supported by the information processing system 1 is not particularly limited as long as it is a discussion about the discussion target that is visually presented to the user.
  • The discussion target visually presented to the user is, for example, a discussion target consisting of a real object, a discussion target shown by visible data such as image data or text data, or a discussion target indicated by virtual visual information presented in the user's field of view by AR (Augmented Reality), VR (Virtual Reality), or the like.
  • the subject of discussion consisting of real objects is, for example, goods, works of art, and the like.
  • the objects of discussion shown in the visible data or visual information are, for example, ideas, opinions, various types of information (for example, articles, news, product information, etc.), videos, and the like.
  • the information processing system 1 includes an input unit 11, an information processing unit 12, a storage unit 13, and an output unit 14.
  • the input unit 11 includes, for example, an input device such as a switch, a button, and a keyboard for inputting various data to the information processing system 1, and supplies the input data to the information processing unit 12.
  • the input unit 11 includes sensors that sense the state of the discussion.
  • Further, the input unit 11 includes sensors that sense the state of the video display surface on which the discussion is held, of objects on the video display surface, and of the users participating in the discussion. More specifically, for example, the input unit 11 includes an image sensor, a touch sensor, a microphone, and the like.
  • the image sensor is composed of, for example, a visible light camera, an infrared camera, etc. capable of capturing a two-dimensional image.
  • the image sensor is composed of a stereo camera, a depth sensor, or the like that can acquire three-dimensional data in which the depth direction is further added.
  • For the depth sensor, an arbitrary method such as a Time of Flight method or a Structured Light method can be used.
  • the touch sensor is a sensor that detects the movement of the user's hand, marker, etc. with respect to the image display surface.
  • The touch sensor is realized, for example, by providing a touch panel on the image display surface, or by detecting the movement of a hand or a marker based on images or depth data captured by a camera or a depth sensor from the front side or the back side of the image display surface.
  • the microphone collects the voices of users under discussion.
  • the image display surface is, for example, the surface of a display on which an image is displayed or a projection surface on which an image is projected.
  • the information processing unit 12 performs various processes related to supporting discussions.
  • the information processing unit 12 includes, for example, a data processing unit 21, a support unit 22, an output information generation unit 23, and an output control unit 24.
  • the data processing unit 21 performs various processes on the input data and the sensor data supplied from the input unit 11 as necessary, and supplies them to the support unit 22 and the output information generation unit 23.
  • the support unit 22 performs processing for supporting the discussion.
  • the support unit 22 includes a situation detection unit 31, a support method selection unit 32, and a presentation method setting unit 33.
  • the situation detection unit 31 detects the status of the discussion based on the input data, the sensor data, the information stored in the situation detection method storage unit 41, and the like.
  • The status of the discussion is represented by, for example, one or more of the stage of the discussion (the progress of the discussion), the stagnation status, the amount and type of discussion targets presented by the users, the status and content of the discussion regarding each discussion target, the presentation position of each discussion target, and the state of each user.
  • the user's state is represented by, for example, one or more of the user's ability, role, activity amount, position, and the like.
  • the amount of activity of the user indicates the amount of activity of the user in the discussion, and is represented by, for example, one or more of the amount of discussion objects presented, the amount of speech, the amount of hand movement, and the like.
  • the support method selection unit 32 selects a discussion support method based on the status of the discussion and the information stored in the action information storage unit 42. For example, the support method selection unit 32 selects a support action to be executed from a plurality of actions (hereinafter, referred to as support actions) for supporting the discussion based on the situation of the discussion.
  • the support method selection unit 32 supplies information indicating the selected support action to the output information generation unit 23 and the output control unit 24.
  • The presentation method setting unit 33 sets the method of presenting the information presented when the support action selected by the support method selection unit 32 is executed (hereinafter referred to as support information), based on the status of the discussion and the information stored in the presentation method information storage unit 43. For example, the presentation method setting unit 33 sets the presentation position, the presentation amount (number of items presented), the presentation timing, and the like of the support information. The presentation method setting unit 33 supplies information indicating the set presentation method to the output information generation unit 23 and the output control unit 24.
  • The output information generation unit 23 generates the output information to be output from the output unit 14 based on the input data, the sensor data, the support action selected by the support method selection unit 32, the presentation method set by the presentation method setting unit 33, the information stored in the storage unit 13, and the like.
  • the output information includes, for example, support information for promoting a change in the user's behavior and supporting the discussion.
  • the support information includes, for example, information that supports the presentation of the subject of discussion, information that supports the discussion, and the like.
  • The output control unit 24 controls the output from the output unit 14 of the output information generated by the output information generation unit 23, based on the support action selected by the support method selection unit 32, the presentation method set by the presentation method setting unit 33, the information stored in the storage unit 13, and the like.
  • The output control unit 24 also has functions of the control layer of a general OS (Operating System), such as controlling the drawing of multiple contents such as windows for displaying applications and distributing events such as touches to each content.
  • the storage unit 13 stores information on the method of detecting the status of the discussion, the support action, and the method of presenting the support information.
  • the situation detection method storage unit 41 accumulates information on the method for detecting the situation of the discussion.
  • Information about the method of detecting the status of the discussion includes, for example, the type of detection method and the selection method.
  • the action information storage unit 42 stores information on support actions.
  • the information regarding the support action includes, for example, the type and selection method of the support action, the support information presented to the user in each support action, and the like.
  • the presentation method information storage unit 43 stores information on the presentation method of support information.
  • the information regarding the method of presenting the support information includes, for example, the type of the presentation method, the selection method, and the like.
  • the output unit 14 includes an output device that presents visual information and auditory information to the user.
  • the output unit 14 includes a touch panel, a display, a projector, a speaker, and the like.
  • FIG. 2 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured by an upward projection type system.
  • In this system, a sensor-equipped projector 101 is installed above a desk 102.
  • The sensor-equipped projector 101 projects an image onto the top plate of the desk 102 from above, like a pendant light or a desk lamp. The top plate of the desk 102 then serves as the image display surface, and the discussion is held on that surface. Further, the sensor-equipped projector 101 captures the state of the discussion by photographing the vicinity of the top plate of the desk 102 with its attached visible light camera and depth sensor, and supplies the obtained image data and depth data to the information processing unit 12.
  • FIG. 3 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured by a rear projection type system.
  • the projector 111 is installed below the desk 112.
  • the top plate of the desk 112 is a translucent screen, and the projector 111 projects an image on the top plate of the desk 112 from below. Then, the top plate of the desk 112 becomes the image display surface, and discussions are held on the image display surface.
  • Further, a visible light camera and a depth sensor are provided at the same position as the sensor-equipped projector 101 in FIG. 2.
  • the visible light camera and the depth sensor photograph the state of the discussion by photographing the vicinity of the top plate of the desk 112, and supply the obtained image data and the depth data to the information processing unit 12.
  • FIG. 4 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured by a side projection type system.
  • In this system, a sensor-equipped projector 121 is installed to the side of a wall 122.
  • The sensor-equipped projector 121 projects an image onto the wall 122 from the side. The wall 122 then serves as the image display surface, and the discussion is held on that surface. Further, the sensor-equipped projector 121 captures the state of the discussion by photographing the vicinity of the wall 122 with its attached visible light camera and depth sensor, and supplies the obtained image data and depth data to the information processing unit 12.
  • FIG. 5 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured by a flat display type system.
  • This system has a table-like shape in which legs 132 are provided on a sensor-equipped touch panel 131.
  • The sensor-equipped touch panel 131 displays an image on its display, which serves as the image display surface, and the discussion is held on that surface. Further, a touch sensor is provided on the image display surface, so the sensor-equipped touch panel 131 can detect operations such as touches on the image display surface.
  • If necessary, a visible light camera and a depth sensor are provided at the same position as the sensor-equipped projector 101 in FIG. 2.
  • The visible light camera and the depth sensor capture the state of the discussion by photographing the vicinity of the sensor-equipped touch panel 131, and supply the obtained image data and depth data to the information processing unit 12.
  • FIG. 6 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured by the eyewear type wearable terminal 141.
  • each user wears a wearable terminal 141. Then, the visual information displayed by the wearable terminal 141 is superimposed on the field of view of each user.
  • Sensors such as a visible light camera and a depth sensor are provided separately from the wearable terminal 141.
  • It is assumed that the discussion proceeds in the order of a stage in which each user produces ideas to be discussed (hereinafter referred to as the idea generation stage) and a stage in which the users discuss the ideas (hereinafter referred to as the discussion stage). Depending on the situation of the discussion, the discussion may return to an earlier stage.
  • each user individually produces an idea on the video display surface 201. For example, each user writes an idea by hand on a sticky note 202, or inputs an idea using a digital device 203 such as a PC, a tablet, or a smartphone.
  • The handwritten sticky note 202 is photographed by a visible light camera (not shown) installed so as to capture the image display surface 201 and is thereby converted into image data 231, which is supplied to the data processing unit 21.
  • the data processing unit 21 supplies the image data 231 of the sticky note 202 to the output information generation unit 23.
  • The output information generation unit 23 generates the image data of the electronic sticky note 211-1 by binarizing the color image data 231, detecting character contours, and converting the raster image into a vector image, and supplies it to the output control unit 24.
  • the output control unit 24 controls the output unit 14 to display the electronic sticky note 211-1 on the video display surface 201.
  • the data processing unit 21 converts the content described in the image data 231 into text data by OCR (Optical Character Recognition) and supplies it to the support unit 22.
  • the situation detection unit 31 extracts nouns and verbs in the text data as keywords.
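As an illustration of the keyword extraction described above (nouns and verbs taken from the recognized text of a sticky note), the following is a minimal sketch assuming a generic part-of-speech tagger such as spaCy's English model; the actual system would presumably use an analyzer matched to the language of the notes, and the function name is an assumption.

```python
# Hedged sketch: extract noun/verb keywords from the OCR'd text of a sticky note.
# Assumes spaCy with the English "en_core_web_sm" model as a stand-in POS tagger;
# a Japanese morphological analyzer would play the same role for Japanese notes.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_keywords(sticky_note_text: str) -> set[str]:
    """Return the set of noun and verb lemmas found in the note text."""
    doc = nlp(sticky_note_text)
    return {
        token.lemma_.lower()
        for token in doc
        if token.pos_ in {"NOUN", "PROPN", "VERB"} and not token.is_stop
    }

# Example: keywords for an idea written on an electronic sticky note.
print(extract_keywords("Serve iced coffee samples at the station entrance"))
# e.g. {'serve', 'coffee', 'sample', 'station', 'entrance'}
```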
  • the digital device 203 generates image data of the electronic sticky note 211-2 indicating the input idea, and inputs the image data to the information processing unit 12 via the input unit 11.
  • the data processing unit 21 supplies the image data of the electronic sticky note 211-2 to the output control unit 24 via the output information generation unit 23.
  • the output control unit 24 controls the output unit 14 to present the electronic sticky note 211-2 on the video display surface 201.
  • the data processing unit 21 supplies text data indicating the contents of the electronic sticky note 211-2 to the support unit 22.
  • the situation detection unit 31 extracts nouns and verbs in the text data as keywords.
  • each user creates an electronic sticky note 211-1 to an electronic sticky note 211-4 on which the idea is described at the stage of producing an idea.
  • each user discusses the ideas described in the electronic sticky notes 211-1 to the electronic sticky notes 211-4 presented on the video display surface 201.
  • Each user can move the electronic sticky notes 211-1 to the electronic sticky notes 211-4 presented on the image display surface 201 by operating with a finger or the like.
  • the discussion will be classified into the individual work stage, the group work stage, and the summary stage according to the user's work form, and the discussion will proceed in the order of the individual work stage, the group work stage, and the summary stage.
  • the details of each stage will be described later, but the individual work stage is included in the idea generation stage, and the group work stage and the summary stage are included in the discussion stage.
  • This process starts, for example, when the discussion starts and ends when the discussion ends.
  • the timing of the start and end of the discussion may be, for example, automatically detected by the information processing system 1, or may be explicitly input by the user to the information processing system 1.
  • step S1 the input unit 11 acquires the input information.
  • the input unit 11 generates image data and depth data by photographing the image display surface 201 with a visible light camera and a depth sensor, and supplies the image data and the depth data to the data processing unit 21.
  • an electronic sticky note is generated from the handwritten sticky note and displayed on the video display surface 201 as described above with reference to FIG.
  • the contents of the handwritten sticky note are converted into text data and supplied to the support unit 22.
  • the data processing unit 21 supplies the depth data to the support unit 22 after performing processing such as noise removal on the depth data.
  • the input unit 11 converts the sound around the image display surface 201 into audio data which is an electric signal by the microphone and supplies it to the information processing unit 12.
  • the data processing unit 21 performs processing such as digitization and noise removal on the voice data, and then supplies the voice data to the support unit 22.
  • the input unit 11 supplies the image data of the electronic sticky note to the information processing unit 12.
  • the input electronic sticky note is displayed on the video display surface 201, and text data indicating the contents of the electronic sticky note is supplied to the support unit 22.
  • step S2 the situation detection unit 31 detects the stage of discussion. Specifically, the situation detection unit 31 detects the state of each user based on the depth data. Then, the situation detection unit 31 detects the current stage of discussion based on the state of each user. For example, the situation detection unit 31 determines whether the current discussion stage is an individual work stage, a group work stage, or a summary stage.
  • the individual work stage is the stage where each user is working individually. For example, in brainstorming and discussions, each user is at the stage of individually thinking about ideas and giving ideas.
  • FIG. 11 schematically shows the state around the video display surface 201 in the individual work stage.
  • In the individual work stage, everyone is creating electronic sticky notes individually. Therefore, for example, when most of the users are creating electronic sticky notes, that is, when the ratio of users who are creating electronic sticky notes is equal to or higher than a predetermined threshold value, the situation detection unit 31 determines that the discussion is in the individual work stage.
  • the group work stage is the stage where each user participates and has a discussion. For example, in brainstorming and discussions, all users are discussing ideas and coming up with new ideas as needed.
  • FIG. 12 schematically shows the state around the video display surface 201 at the group work stage.
  • In the group work stage, users who are creating electronic sticky notes and users who are pointing to or operating electronic sticky notes are mixed. Therefore, for example, when only some of the users are creating electronic sticky notes, that is, when at least one user is creating an electronic sticky note but the ratio of users who are creating electronic sticky notes is less than the predetermined threshold value, the situation detection unit 31 determines that the discussion is in the group work stage.
  • the summary stage is, for example, the stage where all users discuss, summarize the content of the discussion, and try to draw a conclusion. At this stage, the idea has already been created.
  • FIG. 13 schematically shows the state around the video display surface 201 at the summary stage.
  • the situation detection unit 31 determines that it is in the summary stage when there is no user who is creating the electronic sticky note.
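The stage detection rule described in FIGS. 11 to 13 can be summarized as the following sketch; the per-user creation flags and the 0.5 threshold are illustrative assumptions, not values from the specification.

```python
# Hedged sketch of the stage detection rule: classify the discussion stage from the
# fraction of users currently creating electronic sticky notes.

def detect_stage(is_creating: list[bool], threshold: float = 0.5) -> str:
    """is_creating[i] is True if user i is currently writing/creating a sticky note."""
    if not is_creating:
        return "unknown"
    creators = sum(is_creating)
    ratio = creators / len(is_creating)
    if ratio >= threshold:          # most users are creating notes
        return "individual work"
    if creators >= 1:               # some, but not most, users are creating notes
        return "group work"
    return "summary"                # nobody is creating notes any more

print(detect_stage([True, True, True, False]))    # individual work
print(detect_stage([True, False, False, False]))  # group work
print(detect_stage([False, False, False, False])) # summary
```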
  • step S3 the situation detection unit 31 determines whether or not it is in the individual work stage based on the determination result in step S2. If it is determined that it is not the individual work stage, that is, if it is determined that it is the group work stage or the summary stage, the process proceeds to step S4.
  • step S4 the situation detection unit 31 executes the situation detection method selection process, and then the process proceeds to step S5.
  • In step S51, the situation detection unit 31 determines whether or not the intimacy of the users is high. If it is determined that the intimacy of the users is high, the process proceeds to step S52.
  • the intimacy of the user may be input by the user before the start of the discussion, or may be automatically determined by the information processing system 1.
  • For example, the situation detection unit 31 recognizes each user by face authentication, ID authentication, or card authentication using an employee ID card, a student ID card, or the like. Then, when the combination of users is included in a combination of users who have previously held a discussion together using the information processing system 1, or when the users belong to the same organization (for example, the same department or class), the situation detection unit 31 determines that the intimacy of the users is high.
  • step S52 the situation detection unit 31 decides to use only the movement of the hand. That is, the situation detection unit 31 determines to detect the situation of the discussion using only the movement of the user's hand based on the information stored in the situation detection method storage unit 41.
  • step S51 if it is determined in step S51 that the intimacy of the user is low, the process proceeds to step S53.
  • step S53 the situation detection unit 31 determines whether or not there is work using a hand other than the operation of the electronic sticky note. If it is determined that there is work using a hand other than the operation of the electronic sticky note, the process proceeds to step S54.
  • the work of using hands other than the operation of the electronic sticky note is, for example, the work of taking minutes and memos.
  • step S54 the situation detection unit 31 decides to use only the voice. That is, the situation detection unit 31 determines to detect the situation of the discussion using only voice based on the information stored in the situation detection method storage unit 41.
  • On the other hand, if it is determined in step S53 that there is no work using a hand other than the operation of the electronic sticky notes, the process proceeds to step S55.
  • step S55 the situation detection unit 31 determines the use of voice and hand movement. That is, the situation detection unit 31 determines to detect the situation of the discussion using both voice and hand movement based on the information stored in the situation detection method storage unit 41.
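A compact restatement of the selection logic of steps S51 to S55, under the assumption that intimacy and the presence of other hand work have already been determined; the function and flag names are illustrative.

```python
# Hedged sketch of the detection-condition selection in steps S51-S55: which signals
# (voice, hand movement) to use depends on user intimacy and on whether hands are
# busy with other work such as taking minutes.

def select_detection_conditions(intimacy_high: bool, other_hand_work: bool) -> dict:
    if intimacy_high:
        # Chat between close colleagues can make voices loud even when the
        # discussion itself stalls, so rely on hand movement only (step S52).
        return {"voice": False, "hand": True}
    if other_hand_work:
        # Hands are busy with minutes/memos, so rely on voice only (step S54).
        return {"voice": True, "hand": False}
    # Otherwise use both voice and hand movement (step S55).
    return {"voice": True, "hand": True}

print(select_detection_conditions(intimacy_high=True, other_hand_work=False))
```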
  • For example, when the discussion is active, the users' voices become loud, and when the discussion is stagnant, the users' voices become quiet.
  • However, when the intimacy of the users is high, chat unrelated to the discussion may occur, and such chat may make the users' voices louder.
  • Therefore, when the intimacy of the users is low, voice is used as a condition for detecting the situation of the discussion, and when the intimacy of the users is high, voice is not used as a condition for detecting the situation of the discussion.
  • Similarly, when there is no work using a hand other than the operation of the electronic sticky notes, the movement of the hand is used as a condition for detecting the situation of the discussion, and when there is such work, the movement of the hand is not used as a condition for detecting the situation of the discussion.
  • On the other hand, if it is determined in step S3 that the discussion is in the individual work stage, the process of step S4 is skipped and the process proceeds to step S5.
  • step S5 the status detection unit 31 executes the status detection process, and then the process proceeds to step S6.
  • In step S101, the situation detection unit 31 determines whether or not the discussion is in the individual work stage. If it is determined that the discussion is in the individual work stage, the process proceeds to step S102.
  • step S102 the situation detection unit 31 determines whether or not the idea generation is delayed. For example, the situation detection unit 31 determines whether or not the idea is delayed based on the number of electronic sticky notes on which the idea is created.
  • FIG. 16 is a graph showing changes in the total number of electronic sticky notes created by all users.
  • the horizontal axis shows the elapsed time from the start of the discussion, and the vertical axis shows the total number of electronic sticky notes.
  • For example, when the number of electronic sticky notes created within the most recent T seconds is less than Q, the situation detection unit 31 determines that the creation speed of electronic sticky notes has slowed down, that is, that the presentation of new ideas by each user has been delayed, and the process proceeds to step S103.
  • The values of T and Q used for this judgment are adjusted according to the number of participants and the content of the discussion.
  • step S103 the situation detection unit 31 determines that the discussion is stagnant.
  • On the other hand, if the number of electronic sticky notes created within the most recent T seconds is Q or more, the situation detection unit 31 determines in step S102 that the creation speed of electronic sticky notes has not slowed down, that is, that the presentation of new ideas by each user has not been delayed, and the process proceeds to step S104.
  • step S104 the situation detection unit 31 determines that the discussion is active.
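The judgment of steps S102 to S104 can be sketched as follows; the timestamps and the default values of T and Q are assumptions for illustration.

```python
# Hedged sketch of the individual-work-stage check in steps S102-S104: idea generation
# is judged to be delayed when fewer than Q sticky notes were created in the most
# recent T seconds.

def idea_generation_stagnant(creation_times: list[float], now: float,
                             T: float = 60.0, Q: int = 3) -> bool:
    """creation_times: times (seconds) at which electronic sticky notes were created."""
    recent = [t for t in creation_times if now - T <= t <= now]
    return len(recent) < Q  # fewer than Q new notes in the last T seconds -> stagnant

# Only one note in the last 60 seconds -> the discussion is judged to be stagnant.
print(idea_generation_stagnant([10.0, 25.0, 130.0], now=180.0))  # True
```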
  • On the other hand, if it is determined in step S101 that the discussion is not in the individual work stage, that is, if it is determined that the discussion is in the group work stage or the summary stage, the process proceeds to step S105.
  • step S105 the situation detection unit 31 determines whether or not to use voice as the detection condition.
  • If the situation detection unit 31 has decided, in the process of step S4 described above, to use voice as a condition for detecting the situation of the discussion, it determines that voice is used as a detection condition, and the process proceeds to step S106.
  • step S106 the situation detection unit 31 determines whether or not the voice is low.
  • FIG. 17 is a graph showing the transition of the volume of the voice data acquired by the microphone included in the input unit 11.
  • the horizontal axis shows the elapsed time from the start of the discussion, and the vertical axis shows the volume (unit: dB).
  • For example, if the volume of the voice data remains less than B (dB) throughout the most recent T seconds, the situation detection unit 31 determines that the voices are quiet, and the process proceeds to step S107. For example, if the discussion is stagnant and there are few comments from each user, the process proceeds to step S107.
  • On the other hand, if it is determined in step S105 that voice is not used as a detection condition, the process of step S106 is skipped and the process proceeds to step S107.
  • step S107 the situation detection unit 31 determines whether or not to use the movement of the hand as the detection condition.
  • If the situation detection unit 31 has decided, in the process of step S4 described above, to use hand movement as a condition for detecting the situation of the discussion, it determines that hand movement is used as a detection condition, and the process proceeds to step S108.
  • step S108 the situation detection unit 31 determines whether or not the hand is moving.
  • FIG. 18 is a graph showing an estimation of the total amount of hand movements of all users.
  • the horizontal axis shows the elapsed time from the start of the discussion, and the vertical axis shows the total amount of hand movements of all users (unit: mm).
  • the situation detection unit 31 constantly detects the movement of each user's hand based on the depth data. Further, the situation detection unit 31 calculates the amount of movement of each user's hand based on the time transition of the position of each user's hand within the latest T seconds. Then, when the total amount of hand movements of all users within the latest T seconds is less than M (mm), the situation detection unit 31 determines that the hands are not moving, and the process proceeds to step S109. For example, if the discussion is stagnant and each user rarely points to or moves the electronic sticky note, the process proceeds to step S109.
  • On the other hand, if it is determined in step S107 that hand movement is not used as a detection condition, the process of step S108 is skipped and the process proceeds to step S109.
  • step S109 the situation detection unit 31 determines that the discussion is stagnant.
  • On the other hand, in step S108, when the total amount of hand movement of all users within the most recent T seconds is M (mm) or more, the situation detection unit 31 determines that the hands are moving, and the process proceeds to step S110. For example, if the discussion is active and each user often points to or moves the electronic sticky notes, the process proceeds to step S110.
  • step S106 the situation detection unit 31 determines that the voice is loud when there is a moment when the volume of the voice data becomes B (dB) or more in the latest T seconds, and the process proceeds to step S110. For example, if the discussion is active and there are many comments from each user, the process proceeds to step S110.
  • step S110 the situation detection unit 31 determines that the discussion is active.
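The group work / summary stage judgment of steps S105 to S110 can be sketched as follows, assuming the voice volume and hand movement amounts over the most recent T seconds have already been sampled; the default thresholds are illustrative, not values from the specification.

```python
# Hedged sketch of steps S105-S110: in the group work / summary stages the discussion
# is judged stagnant when the enabled signals all stay below their thresholds for the
# most recent T seconds. B (dB) and M (mm) are the thresholds named in the text.

def discussion_stagnant(volumes_db: list[float], hand_movement_mm: list[float],
                        use_voice: bool, use_hand: bool,
                        B: float = 50.0, M: float = 200.0) -> bool:
    if use_voice and max(volumes_db, default=0.0) >= B:
        return False  # at least one loud moment in the last T seconds -> active
    if use_hand and sum(hand_movement_mm) >= M:
        return False  # total hand movement of all users reaches M mm -> active
    return True       # every enabled signal stayed low -> stagnant

# Quiet voices and little hand movement over the last T seconds -> stagnant.
print(discussion_stagnant([42.0, 45.5, 44.0], [30.0, 25.0], True, True))  # True
```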
  • step S6 the situation detection unit 31 determines whether or not the discussion is stagnant based on the result of the process in step S5. If it is determined that the discussion is active, the process returns to step S1.
  • After that, the processes of steps S1 to S6 are repeatedly executed until it is determined in step S6 that the discussion is stagnant.
  • On the other hand, if it is determined in step S6 that the discussion is stagnant, the process proceeds to step S7.
  • In step S7, the situation detection unit 31 determines whether or not the discussion is in the individual work stage. If it is determined that the discussion is in the individual work stage, the process proceeds to step S8.
  • In step S8, the information processing unit 12 executes the support target determination process, and then the process proceeds to step S9.
  • step S151 the situation detection unit 31 extracts all the keywords of the electronic sticky note. That is, the situation detection unit 31 extracts keywords from the ideas described in all the electronic sticky notes by the method described above with reference to FIGS. 8 and 9.
  • step S152 the status detection unit 31 selects one of the unprocessed electronic sticky notes.
  • step S153 the status detection unit 31 compares the selected electronic sticky note keyword with the processed electronic sticky note keyword.
  • the processed electronic sticky note is an electronic sticky note that has been selected in the process of step S152 and has been subjected to the processes of steps S153 to S155.
  • In step S154, the situation detection unit 31 determines whether or not there is an electronic sticky note whose keywords all match. If, among the processed electronic sticky notes, there is no electronic sticky note whose keywords match those of the selected electronic sticky note, the situation detection unit 31 determines that there is no electronic sticky note whose keywords all match, and the process proceeds to step S155.
  • step S155 the situation detection unit 31 counts as non-overlapping ideas. That is, the number of ideas is incremented by one.
  • On the other hand, if the processed electronic sticky notes include an electronic sticky note whose keywords all match those of the selected electronic sticky note, the situation detection unit 31 determines in step S154 that there is an electronic sticky note whose keywords all match, the process of step S155 is skipped, and the process proceeds to step S156. That is, the idea described on the selected electronic sticky note is determined to match an idea described on a processed electronic sticky note, and is not counted in the number of ideas.
  • step S156 the status detection unit 31 determines whether or not all the electronic sticky note processing has been completed. If it is determined that the processing of all the electronic sticky notes has not been completed, the processing returns to step S152.
  • step S156 the processes of steps S152 to S156 are repeatedly executed until it is determined that the processing of all the electronic sticky notes has been completed. As a result, the number of ideas described on all electronic sticky notes is counted excluding duplicates.
  • On the other hand, if it is determined in step S156 that the processing of all the electronic sticky notes has been completed, the process proceeds to step S157.
  • step S157 the situation detection unit 31 determines whether or not the number of ideas is less than the threshold value. If it is determined that the number of ideas is less than the threshold value, that is, if the number of ideas presented by each user is insufficient, the process proceeds to step S158.
  • step S158 the support method selection unit 32 determines the execution of support for issuing an idea based on the information stored in the action information storage unit 42.
  • On the other hand, if it is determined in step S157 that the number of ideas is equal to or greater than the threshold value, that is, if the number of ideas presented by each user is sufficient, the process proceeds to step S159.
  • step S159 the support method selection unit 32 determines the execution of support for discussion based on the information stored in the action information storage unit 42.
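The support target determination of steps S151 to S159 amounts to counting ideas while skipping sticky notes whose keywords fully match an already-processed note, then comparing the count with a threshold; the sketch below restates this, with the threshold value and example notes as assumptions.

```python
# Hedged sketch of the support target determination (steps S151-S159): count distinct
# ideas by treating two sticky notes with identical keyword sets as duplicates, then
# choose what to support.

def count_distinct_ideas(keyword_sets: list[frozenset[str]]) -> int:
    processed: list[frozenset[str]] = []
    count = 0
    for kws in keyword_sets:
        if not any(kws == prev for prev in processed):  # no processed note matches all keywords
            count += 1
        processed.append(kws)
    return count

def select_support_target(keyword_sets: list[frozenset[str]], threshold: int = 5) -> str:
    if count_distinct_ideas(keyword_sets) < threshold:
        return "support idea generation"   # step S158
    return "support discussion"            # step S159

notes = [frozenset({"coffee", "sample"}), frozenset({"coffee", "sample"}),
         frozenset({"station", "poster"})]
print(count_distinct_ideas(notes))   # 2 (the duplicate note is not counted)
print(select_support_target(notes))  # support idea generation
```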
  • step S9 the support method selection unit 32 determines whether or not to support the idea generation based on the result of the process in step S8. If it is determined to support the idea generation, the process proceeds to step S10.
  • step S10 the information processing unit 12 executes an idea generation support method selection process.
  • step S201 the situation detection unit 31 extracts the keyword of the idea.
  • the situation detection unit 31 extracts the keyword of the idea described in all the electronic sticky notes by the same process as in step S151 of FIG.
  • step S202 the situation detection unit 31 determines whether or not the type of keyword is insufficient. If it is determined that the type of keyword is insufficient, the process proceeds to step S203. This is, for example, when the type of idea (spread of idea) presented by each user is insufficient.
  • step S203 the support method selection unit 32 selects the proposal of the idea divergence method based on the information stored in the action information storage unit 42.
  • the support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the proposal of the idea divergence method has been selected.
  • FIG. 21 shows an example of a method of proposing an idea divergence method.
  • the electronic sticky note 302-1 and the electronic sticky note 302-2 are presented between the electronic sticky note 301-1 and the electronic sticky note 301-2 showing the idea presented by the user.
  • Electronic sticky note 302-1 and electronic sticky note 302-2 show templates of idea divergence methods such as 5W2H, Mandal-Art, and scenario graphs.
  • the user can be prompted to propose various ideas and spread the ideas by executing the idea divergence method shown in the electronic sticky notes 302-1 and the electronic sticky notes 302-2.
  • the support method selection unit 32 selects an appropriate idea divergence method according to the content and situation of the discussion.
  • Note that the template of the selected idea divergence method may be displayed as a background image on the image display surface 201.
  • Hereinafter, electronic sticky notes showing ideas input by users, such as the electronic sticky note 301-1 and the electronic sticky note 301-2, are referred to as user input sticky notes, and electronic sticky notes showing support information, such as the electronic sticky note 302-1 and the electronic sticky note 302-2, are referred to as support information sticky notes.
  • On the other hand, if it is determined in step S202 that the types of keywords are sufficient, the process proceeds to step S204. This is, for example, a case where the types of ideas (the spread of ideas) presented by each user are sufficient.
  • step S204 the support method selection unit 32 selects the presentation of related information based on the information stored in the action information storage unit 42.
  • the support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the presentation of the related information has been selected.
  • FIG. 22 shows an example of a method of presenting related information.
  • the support information sticky note 322-1 and the support information sticky note 322-2 are presented around the user input sticky note 321.
  • the support information sticky note 322-1 and the support information sticky note 322-2 show, for example, information related to the theme of the discussion or the idea shown in the user input sticky note 321. For example, images, videos, news, etc. searched by web search are presented as related information.
  • For example, when the user input sticky note 321 shows an idea about coffee, the support information sticky note 322-1 and the support information sticky note 322-2 show information related to coffee; for example, the support information sticky note 322-1 shows an image of coffee, and the support information sticky note 322-2 shows news related to coffee.
  • the idea of another user may be presented to a certain user as related information.
  • the ideas of other groups may be presented as related information.
  • related information related to the utterance content of each user may be presented.
  • frequently-used keywords may be extracted from the utterance contents of each user, and related information related to the extracted keywords may be presented.
  • Next, specific examples of the determination process in step S202 of FIG. 20 will be described with reference to the flowcharts of FIGS. 23 to 25.
  • In step S221, the keywords of the ideas are extracted in the same manner as in the process of step S201 of FIG. 20.
  • In step S222, the situation detection unit 31 clusters the keywords.
  • For this clustering, an analysis method that does not require specifying the number of clusters to be generated, for example, the NN (Nearest Neighbors) method or the group average method, is used.
  • step S223 the status detection unit 31 determines whether or not the number of clusters is less than the threshold value. If it is determined that the number of clusters is less than the threshold value, the process proceeds to step S224.
  • In step S224, the proposal of an idea divergence method is selected as in the process of step S203 of FIG. 20.
  • On the other hand, if it is determined in step S223 that the number of clusters is equal to or greater than the threshold value, the process proceeds to step S225.
  • In step S225, the presentation of related information is selected as in the process of step S204 of FIG. 20.
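A sketch of the FIG. 23 variant, using average-linkage agglomerative clustering (a group average method) without a fixed cluster count; the keyword embedding vectors, distance threshold, and minimum cluster count are assumptions for illustration.

```python
# Hedged sketch of the keyword-spread check in FIG. 23 (steps S221-S225): cluster the
# idea keywords without fixing the number of clusters, and propose an idea divergence
# method when too few clusters emerge.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def choose_idea_support(keyword_vectors: np.ndarray,
                        distance_threshold: float = 1.0,
                        min_clusters: int = 3) -> str:
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold, linkage="average")
    labels = clustering.fit_predict(keyword_vectors)
    n_clusters = len(set(labels))
    if n_clusters < min_clusters:   # ideas are concentrated -> spread is insufficient
        return "propose idea divergence method"   # step S224
    return "present related information"          # step S225

# Toy 2-D "embeddings" of extracted keywords; real use would embed the keywords.
vecs = np.array([[0.0, 0.0], [0.1, 0.0], [0.05, 0.1], [5.0, 5.0]])
print(choose_idea_support(vecs))  # propose idea divergence method (only 2 clusters)
```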
  • In step S241, the keywords of the ideas are extracted in the same manner as in the process of step S201 of FIG. 20.
  • In step S242, the situation detection unit 31 clusters the keywords.
  • For this clustering, an analysis method in which the number of clusters to be generated is specified in advance, for example, the k-means method or the k-NN method, is used.
  • step S243 the status detection unit 31 determines whether or not there is a cluster in which the number of included keywords is less than the threshold value. If it is determined that there is a cluster in which the number of keywords included is less than the threshold value, the process proceeds to step S244.
  • In step S244, the proposal of an idea divergence method is selected as in the process of step S203 of FIG. 20.
  • On the other hand, if it is determined in step S243 that there is no cluster in which the number of included keywords is less than the threshold value, the process proceeds to step S245.
  • In step S245, the presentation of related information is selected as in the process of step S204 of FIG. 20.
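A corresponding sketch of the FIG. 24 variant, where the number of clusters is fixed in advance (k-means here) and a cluster containing too few keywords triggers the idea divergence proposal; k, the minimum cluster size, and the embedding vectors are assumptions.

```python
# Hedged sketch of the variant in FIG. 24 (steps S241-S245): cluster the keywords with
# a preset number of clusters and check whether any cluster is underpopulated.
import numpy as np
from sklearn.cluster import KMeans

def choose_idea_support_kmeans(keyword_vectors: np.ndarray,
                               k: int = 3, min_size: int = 2) -> str:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(keyword_vectors)
    sizes = np.bincount(labels, minlength=k)
    if (sizes < min_size).any():     # some topic area is barely covered by the ideas
        return "propose idea divergence method"   # step S244
    return "present related information"          # step S245

vecs = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9], [10.0, 0.0]])
print(choose_idea_support_kmeans(vecs))  # one cluster has only one keyword -> divergence
```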
  • In step S261, the keywords of the ideas are extracted in the same manner as in the process of step S201 of FIG. 20.
  • In step S262, the situation detection unit 31 determines whether or not all of the designated keywords have appeared. For example, keywords related to essential ideas to be discussed are designated in advance. Then, when the designated keywords include a keyword that is not included in the keywords extracted in the process of step S261, the situation detection unit 31 determines that not all of the designated keywords have appeared yet, and the process proceeds to step S263.
  • In step S263, the proposal of an idea divergence method is selected as in the process of step S203 of FIG. 20.
  • On the other hand, when all of the designated keywords are included in the keywords extracted in the process of step S261, the situation detection unit 31 determines in step S262 that all of the designated keywords have appeared, and the process proceeds to step S264.
  • In step S264, the presentation of related information is selected as in the process of step S204 of FIG. 20.
  • On the other hand, if it is determined in step S9 that the discussion is to be supported, the process proceeds to step S11.
  • Further, if it is determined in step S7 that the discussion is in the group work stage or the summary stage, the process also proceeds to step S11.
  • step S11 the information processing unit 12 executes the discussion support method selection process, and then the process proceeds to step S12.
  • In step S301, the situation detection unit 31 determines whether or not there is an idea for which the discussion is insufficient. For example, when, among the ideas presented by the users, there is an idea whose time spent being discussed (hereinafter referred to as the discussion time) is less than a predetermined time, the situation detection unit 31 determines that there is an idea for which the discussion is insufficient, and the process proceeds to step S302.
  • This process starts, for example, when the discussion starts and ends when the discussion ends.
  • step S351 the situation detection unit 31 determines whether or not there is an electronic sticky note (user input sticky note) pointed to for a predetermined time or longer based on the depth data. This process is repeatedly executed until it is determined that there is an electronic sticky note pointed for a predetermined time or longer, and when it is determined that there is an electronic sticky note pointed for for a predetermined time or longer, the process proceeds to step S352.
  • step S352 the situation detection unit 31 starts measuring the discussion time for the idea (hereinafter referred to as the idea to be measured) described on the pointed electronic sticky note.
  • step S353 the situation detection unit 31 determines whether or not the keyword of the idea to be measured is included in the utterance content based on the voice data. If it is determined that the keyword of the idea to be measured is included in the utterance content, the process proceeds to step S354.
  • step S354 the situation detection unit 31 updates the discussion time.
  • On the other hand, if it is determined in step S353 that the keyword of the idea to be measured is not included in the utterance content, the process of step S354 is skipped, the discussion time is not updated, and the process proceeds to step S355. As a result, only the period during which the keyword of the idea to be measured is included in the utterance content is measured as the discussion time of the idea.
  • step S355 the situation detection unit 31 determines whether or not another electronic sticky note (user input sticky note) has been pointed for a predetermined time or longer based on the depth data. If it is determined that another electronic sticky note has not been pointed for for a predetermined time or longer, the process returns to step S353.
  • After that, the processes of steps S353 to S355 are repeatedly executed until it is determined in step S355 that another electronic sticky note has been pointed to for a predetermined time or longer.
  • Then, if it is determined in step S355 that another electronic sticky note has been pointed to for a predetermined time or longer, the process proceeds to step S356.
  • step S356 the situation detection unit 31 ends the measurement of the discussion time for the idea being measured.
  • After that, the process returns to step S352, and the processes from step S352 onward are executed. As a result, measurement of the discussion time starts for the idea described on the electronic sticky note determined in the process of step S355 to have been pointed to.
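A simplified sketch of the discussion time measurement of FIG. 27: time is accumulated for the idea being measured only while its keyword appears in the recognized utterances, between the moment its sticky note is pointed to and the moment another sticky note is pointed to. The event format and the one-second tick are assumptions.

```python
# Hedged sketch of the discussion time measurement of FIG. 27.

def measure_discussion_time(utterance_log: list[tuple[float, str]],
                            idea_keywords: set[str],
                            start: float, end: float, tick: float = 1.0) -> float:
    """utterance_log: (timestamp, recognized text) pairs; returns seconds of discussion
    of this idea between start (note pointed at) and end (another note pointed at)."""
    discussed = 0.0
    for t, text in utterance_log:
        if start <= t < end and any(kw in text.lower() for kw in idea_keywords):
            discussed += tick  # each matching utterance window counts as `tick` seconds
    return discussed

log = [(12.0, "the coffee stand idea is nice"), (13.0, "what about the poster"),
       (14.0, "coffee samples could work at the entrance")]
print(measure_discussion_time(log, {"coffee"}, start=10.0, end=20.0))  # 2.0
```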
  • step S302 the support method selection unit 32 selects a proposal for changing the subject of discussion based on the information stored in the action information storage unit 42.
  • Further, the support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the proposal for changing the subject of discussion has been selected.
  • FIG. 28 shows an example of a method of proposing a change to be discussed.
  • Any kind of visual effect can be used as the visual effect 342, as long as it can draw each user's attention to the user input sticky note 341-3.
  • For example, the visual effect 342 makes the area around the user input sticky note 341-3 glow or blink, or displays it with a background color different from that of other areas.
  • The display mode (for example, color, shape, size, transparency, etc.) of the sticky notes may also be changed to draw attention to the user input sticky note 341-3.
  • On the other hand, in step S301, if there is no idea among the ideas presented by the users whose discussion time is less than the predetermined time, the situation detection unit 31 determines that there is no idea for which the discussion is insufficient, and the process proceeds to step S303.
  • In step S303, the situation detection unit 31 determines whether or not a negative opinion has been given. For example, when the situation detection unit 31 detects a word indicating a negative opinion, such as "boring" or "no good", in the voice data of the most recent T seconds, it determines that a negative opinion has been given, and the process proceeds to step S304.
  • In step S304, the support method selection unit 32 selects the presentation of a positive evaluation based on the information stored in the action information storage unit 42.
  • the support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the presentation of the positive evaluation has been selected.
  • FIG. 29 shows an example of a method of presenting a positive evaluation.
  • In this example, visual information 362, which is support information indicating a positive evaluation, is presented for a user input sticky note 361 on which an idea that was given a positive evaluation in the discussion so far is described.
  • FIG. 30 shows a modified example of the discussion time measurement process of FIG. 27.
  • In step S401, similarly to the process of step S351 of FIG. 27, it is determined whether or not there is an electronic sticky note that has been pointed to for a predetermined time or longer. This process is repeatedly executed until it is determined that there is an electronic sticky note pointed to for a predetermined time or longer, and when it is determined that there is such an electronic sticky note, the process proceeds to step S402.
  • step S402 as in the process of step S352 of FIG. 27, the measurement of the discussion time for the idea described in the pointed electronic sticky note is started.
  • step S403 the situation detection unit 31 starts counting positive keywords based on the voice data. For example, the situation detection unit 31 starts counting the number of times that a positive keyword such as "good” or "interesting" is detected in the voice data.
  • positive keywords to be counted are set in advance.
  • steps S404 to S406 the same processing as in steps S353 to S355 of FIG. 27 is executed.
  • Then, if it is determined in step S406 that another electronic sticky note has been pointed to for a predetermined time or longer, the process proceeds to step S407.
  • step S407 the situation detection unit 31 ends the measurement of the discussion time for the idea being measured and the counting of the positive keywords.
  • After that, the process returns to step S402, and the processes from step S402 onward are executed. As a result, measurement of the discussion time and counting of positive keywords start for the idea described on the electronic sticky note determined in the process of step S406 to have been pointed to.
  • an idea in which the count number of positive keywords is equal to or greater than a predetermined threshold value is regarded as an idea given a positive evaluation.
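The positive keyword counting of FIG. 30 can be sketched as follows; the keyword list and the threshold of three hits are assumptions for illustration.

```python
# Hedged sketch of the FIG. 30 variant: count positive keywords uttered while an idea
# is being discussed; an idea whose count reaches a threshold is treated as positively
# evaluated.

POSITIVE_KEYWORDS = {"good", "interesting", "nice", "great"}

def positively_evaluated(utterances: list[str], threshold: int = 3) -> bool:
    count = sum(
        sum(kw in u.lower() for kw in POSITIVE_KEYWORDS)
        for u in utterances
    )
    return count >= threshold

utts = ["that is a good idea", "interesting, and it looks good", "let's try it"]
print(positively_evaluated(utts))  # True (3 positive keyword hits)
```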
  • On the other hand, in step S303, when the situation detection unit 31 does not detect a word indicating a negative opinion in the voice data of the most recent T seconds, it determines that no negative opinion has been given, and the process proceeds to step S305.
  • step S305 the situation detection unit 31 determines whether or not the number of times the discussion has stagnated is equal to or greater than the threshold value. For example, the situation detection unit 31 counts the number of times it is determined that the discussion is stagnant in step S6 of FIG. 10 for each stage of the discussion. Then, when the situation detection unit 31 determines that the number of times the discussion has been determined to be stagnant at the current discussion stage is equal to or greater than the threshold value, the process proceeds to step S306.
  • step S306 the support method selection unit 32 selects a mood change proposal based on the information stored in the action information storage unit 42.
  • the support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the mood change proposal has been selected.
  • FIG. 31 shows an example of a method of proposing a change of mood.
  • the support information sticky note 382-1 and the support information sticky note 382-2 are presented between the user input sticky note 381-1 and the user input sticky note 381-2.
  • the support information sticky note 382-1 and the support information sticky note 382-2 show information indicating a method of changing mood. For example, games, chat topics, non-discussion videos or images, food delivery, etc. are proposed.
  • This encourages each user to leave the discussion and change their mood, and when each user changes their mood, the discussion is activated.
  • Note that after the mood change proposal is executed, the count of the number of times the discussion has stagnated at the current discussion stage may be reset. This prevents mood change proposals from being executed repeatedly.
  • On the other hand, if it is determined in step S305 that the number of times the discussion has been determined to be stagnant at the current discussion stage is less than the threshold value, the process proceeds to step S307.
  • step S307 the support method selection unit 32 selects a proposal for an idea organizing method based on the information stored in the action information storage unit 42.
  • the support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the proposal of the idea organizing method has been selected.
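As an illustrative sketch under stated assumptions (the stagnation threshold and the action labels are invented for this example), the branch in steps S305 to S307 could look like this:

```python
STAGNATION_THRESHOLD = 2  # assumed: stagnations at one stage before proposing a mood change

def select_support_action(stagnation_count_at_stage):
    """Repeated stagnation at the same stage -> propose a change of mood;
    otherwise -> propose an idea organizing method (e.g. a KJ method template)."""
    if stagnation_count_at_stage >= STAGNATION_THRESHOLD:
        return "mood_change_proposal"     # e.g. game, chat topic, food delivery
    return "idea_organizing_proposal"     # e.g. KJ method, two-axis graph template
```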
  • FIG. 32 shows an example of how an idea organizing method is proposed.
  • In this example, the support information sticky note 402-1 and the support information sticky note 402-2 are presented between the user input sticky note 401-1 and the user input sticky note 401-2.
  • The support information sticky note 402-1 and the support information sticky note 402-2 show templates of idea organizing methods such as the KJ method and a two-axis graph.
  • With these, the users can organize the ideas and converge a divergent discussion.
  • For example, the support method selection unit 32 selects an appropriate idea organizing method according to the content and situation of the discussion.
  • In addition, the template of the selected idea organizing method may be displayed as a background image on the video display surface 201.
  • In step S12, the presentation method setting unit 33 sets the presentation method of the support information based on the information stored in the presentation method information storage unit 43.
  • For example, the presentation method setting unit 33 sets the presentation position of the support information based on the stage of the discussion and the proficiency level of the users, according to the table of FIG. 33.
  • A user's proficiency level is one aspect of the user's discussion ability, and indicates how accustomed the user is to generating ideas and discussing. For example, a user with a lot of experience in brainstorming and discussion is judged to have a high proficiency level, and a user with little such experience is judged to have a low proficiency level.
  • For example, in the individual work stage the presentation position of the support information is set around each individual, and in the group work stage and the summary stage it is set around a predetermined object. Further, when all users have a high proficiency level, the presentation position of the support information is set to a blank space at any stage of the discussion. Furthermore, when all users have a low proficiency level, the presentation position of the support information is set ahead of the users' lines of sight in the individual work stage, at the center of the screen (video display surface) in the group work stage, and around a mass of sticky notes in the summary stage.
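As a rough illustration only (the actual rules are given by the table of FIG. 33), these position rules could be encoded as a lookup like the following; the stage and proficiency identifiers and the default row are assumed names.

```python
# Hypothetical lookup reflecting the rules described above.
PRESENTATION_POSITION = {
    # (proficiency profile, stage) -> where the support information sticky note is placed
    ("default", "individual_work"):  "around_each_user",
    ("default", "group_work"):       "around_predetermined_object",
    ("default", "summary"):          "around_predetermined_object",
    ("all_high", "individual_work"): "blank_space",
    ("all_high", "group_work"):      "blank_space",
    ("all_high", "summary"):         "blank_space",
    ("all_low", "individual_work"):  "ahead_of_line_of_sight",
    ("all_low", "group_work"):       "center_of_display",
    ("all_low", "summary"):          "around_sticky_note_cluster",
}

def presentation_position(stage, proficiencies):
    """proficiencies: iterable of per-user levels, e.g. ['high', 'low', ...]."""
    if all(p == "high" for p in proficiencies):
        key = ("all_high", stage)
    elif all(p == "low" for p in proficiencies):
        key = ("all_low", stage)
    else:
        key = ("default", stage)
    return PRESENTATION_POSITION[key]
```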
  • In FIGS. 34 to 39, the diagonally shaded rectangles indicate the positions of support information sticky notes indicating support information, and the other rectangles indicate the positions of user input sticky notes.
  • FIG. 34 shows an example of presenting support information around an individual. For example, a support information sticky note is presented near each user participating in the discussion.
  • FIG. 35 shows an example of presenting support information around a predetermined object. For example, a support information sticky note is presented around the object 451 placed on the video display surface 201.
  • FIG. 36 shows an example of presenting support information in a blank space.
  • In this example, the support information sticky note is presented at a position on the video display surface 201 where the density of user input sticky notes is low (a position where few user input sticky notes are arranged).
  • FIG. 37 shows an example of presenting support information ahead of the user's line of sight.
  • the support information sticky note is presented ahead of each user's line of sight.
  • FIG. 38 shows an example in which support information is presented at the center of the video display surface 201.
  • a support information sticky note is presented near the center of the video display surface 201.
  • FIG. 39 shows an example of presenting support information around a mass of sticky notes.
  • the support information sticky note is displayed on the video display surface 201 near a position where the density of the user input sticky notes is high (a position where the user input sticky notes are densely packed).
  • For example, the presentation method setting unit 33 sets the number of pieces of related information to be presented (hereinafter referred to as the amount of information) based on the number of clusters of keywords obtained in the process of step S222 of FIG. 23.
  • FIGS. 40 to 42 are graphs showing examples of the relationship between the number of clusters and the amount of information to be presented.
  • The horizontal axis of each graph shows the number of clusters, and the vertical axis shows the amount of information to be presented.
  • In the example of FIG. 40, as the number of clusters increases, the amount of information presented decreases linearly from I_max to I_min.
  • In the example of FIG. 41, the amount of information presented decreases monotonically as the number of clusters increases, and the rate of decrease increases as the number of clusters increases.
  • In the example of FIG. 42, the amount of information presented decreases stepwise as the number of clusters increases.
  • Note that the number of pieces of related information to be presented may be set based on the total number of ideas presented by the users instead of the number of clusters (the number of types of ideas), or based on both together.
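For illustration, the three graph shapes of FIGS. 40 to 42 could correspond to mappings such as the following minimal sketch; I_MAX, I_MIN, the cluster range, and the step boundaries are assumed values, not figures from the description.

```python
I_MAX, I_MIN = 10, 2
N_MIN, N_MAX = 1, 20          # assumed range of cluster counts

def amount_linear(n_clusters):
    """FIG. 40 style: linear decrease from I_MAX to I_MIN as clusters increase."""
    n = min(max(n_clusters, N_MIN), N_MAX)
    ratio = (n - N_MIN) / (N_MAX - N_MIN)
    return round(I_MAX - ratio * (I_MAX - I_MIN))

def amount_accelerating(n_clusters):
    """FIG. 41 style: monotonic decrease whose rate grows with the cluster count."""
    n = min(max(n_clusters, N_MIN), N_MAX)
    ratio = (n - N_MIN) / (N_MAX - N_MIN)
    return round(I_MAX - (ratio ** 2) * (I_MAX - I_MIN))

def amount_stepwise(n_clusters):
    """FIG. 42 style: stepwise decrease."""
    if n_clusters < 5:
        return I_MAX
    if n_clusters < 10:
        return (I_MAX + I_MIN) // 2
    return I_MIN
```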
  • the presentation method setting unit 33 supplies information indicating the set presentation method to the output information generation unit 23 and the output control unit 24.
  • In step S13, the information processing system 1 executes the support action. Specifically, the output information generation unit 23 generates the support information to be presented to the users based on the selected support action, and supplies it to the output control unit 24. The output unit 14 presents the support information according to the set presentation method under the control of the output control unit 24.
  • a support information sticky note showing the template of the idea divergence method is generated and presented on the video display surface 201.
  • a support information sticky note indicating the related information is generated and presented on the video display surface 201.
  • a support information sticky note is presented around each user as shown in FIG. 34. That is, the support information sticky note is presented at a position that is easy for each user to see and notice.
  • a support information sticky note is presented in a blank space as shown in FIG. 36.
  • a highly proficient user may be hindered from concentrating when the support information sticky note is presented near the center of the field of vision. Therefore, the support information sticky note is presented at a position that does not hinder the concentration state of each user.
  • the support information sticky note is presented in front of each user's line of sight. That is, the support information sticky note is presented at a position that is easier for each user to see and notice than the surroundings of each user.
  • a support information sticky note indicating the mood change method is generated and presented on the video display surface 201.
  • a support information sticky note showing a template for the idea organizing method is generated and presented on the video display surface 201.
  • the presentation position of the support information sticky note differs depending on the stage of discussion and the proficiency level of the user.
  • Support information sticky notes are presented around each user.
  • the support information sticky note is presented in the blank space as shown in FIG. 36.
  • the support information sticky note is presented in front of each user's line of sight.
  • a support information sticky note is presented around a predetermined object as shown in FIG. 35.
  • a user with a low proficiency level can easily obtain support information by looking around a predetermined object.
  • the support information sticky note is presented at the center of the video display surface 201 as shown in FIG. 38. That is, the support information sticky note is presented at a position that is easy for each user to see and notice.
  • the support information sticky note is presented around the position where the density of the user input sticky note is high.
  • the support information sticky note is presented around the position where the density of the user input sticky note is high so that each user can easily see and notice it.
  • After that, the process returns to step S1, and the processes from step S1 onward are executed.
  • each user can voluntarily activate the discussion.
  • each user can easily activate and proceed with the discussion without setting a facilitator of the discussion.
  • the support information may be presented by voice to the user A and the user B who are discussing around the video display surface 201.
  • a voice that reads out the contents of news, websites, etc. included in the related information may be output.
  • a voice such as "Why don't you use the template of the KJ method" may be used to encourage the user to use various templates of the idea divergence method or the idea organization method.
  • the user may be prompted to change the subject of discussion by voice such as "Why don't you discuss the opinion written on Mr. A's sticky note?"
  • the user may be encouraged to change his / her mood by voices such as "Why don't you change your mood”, “Let's take a lunch break”, and "Let's play music”.
  • a positive evaluation may be presented by voice such as "I like the opinion written on Mr. A's sticky note” or "I have been discussing for a long time, thank you for your hard work”.
  • BGM may be played during the discussion, and the BGM may be changed depending on the situation of the discussion. For example, when the discussion is active, slow-paced, relaxing BGM may be played, and when the discussion is stagnant, fast-paced BGM may be played.
  • For example, the user may set in advance the tempo of the BGM to be played when the discussion is stagnant.
  • Alternatively, BGM with different tempos may be played for several songs from the start of the discussion, and the activity of the discussion while each BGM is played may be measured. Then, for example, when the discussion is stagnant, BGM with a tempo close to that of the BGM during which the discussion was most active may be played.
  • Further, the correlation between the tempo of the BGM and the activity of the discussion may be measured continuously, the optimum tempo may be updated continuously, and BGM close to the optimum tempo may be played when the discussion is stagnant. A sketch of this adaptation is given below.
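A minimal sketch of the tempo-adaptation idea described above; how "activity" is scored, the probe tempos, and the class interface are assumptions for illustration.

```python
class BgmTempoAdapter:
    """Correlates BGM tempo with measured discussion activity and, when the
    discussion stagnates, picks a tempo close to the most activating one."""

    def __init__(self, probe_tempos=(70, 90, 110, 130)):  # BPM values, assumed
        self.probe_tempos = list(probe_tempos)
        self.samples = []                 # list of (tempo, measured_activity)

    def record(self, tempo, activity):
        """Store the discussion activity measured while BGM of this tempo played."""
        self.samples.append((tempo, activity))

    def best_tempo(self):
        """Tempo whose playback coincided with the highest measured activity so far."""
        if not self.samples:
            return self.probe_tempos[0]
        return max(self.samples, key=lambda s: s[1])[0]

    def tempo_for_stagnation(self):
        """When the discussion stagnates, play BGM close to the best tempo."""
        return self.best_tempo()
```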
  • the support information may be presented individually for each user based on the state of each user.
  • the activity amount of the user may be detected as the state of each user, and the presentation of support information may be controlled based on the activity amount of the user.
  • For example, the situation detection unit 31 detects each user's amount of activity in idea generation (for example, the number of ideas presented) based on the number of electronic sticky notes presented by each user. Then, for example, the support method selection unit 32 and the presentation method setting unit 33 control the content and amount of support information presented to each user, as well as the position and timing at which it is presented, based on at least one of each user's activity amount and proficiency level in idea generation. For example, the presentation method setting unit 33 selects a user whose activity amount in idea generation is small (for example, whose idea generation is stagnant) and whose proficiency level is low, and sets the presentation position of the support information at a position easily visible to the selected user.
  • Also, for example, the situation detection unit 31 detects each user's amount of activity in the discussion (for example, the amount of speech) based on at least one of the volume of the user's voice and the amount of movement of the user's hands. Then, for example, the support method selection unit 32 and the presentation method setting unit 33 control the content and amount of support information presented to each user, as well as the position and timing at which it is presented, based on at least one of each user's activity amount and proficiency level in the discussion. For example, the presentation method setting unit 33 selects a user whose activity amount in the discussion is small (for example, whose amount of speech is small) and whose proficiency level is low, and sets the presentation position of the support information at a position easily visible to the selected user.
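As a hypothetical sketch of the per-user targeting described above (the thresholds and the per-user record layout are assumptions):

```python
LOW_ACTIVITY_THRESHOLD = 2   # assumed: e.g. fewer than 2 sticky notes or utterances

def users_needing_support(users):
    """users: list of dicts like
       {"name": ..., "activity": int, "proficiency": "low" | "high", "position": (x, y)}
    Returns the users whose activity is small and whose proficiency is low,
    so that support information can be placed where they can easily see it."""
    return [u for u in users
            if u["activity"] < LOW_ACTIVITY_THRESHOLD and u["proficiency"] == "low"]

def presentation_positions(users):
    """Place the support information sticky note near each selected user."""
    return {u["name"]: u["position"] for u in users_needing_support(users)}
```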
  • the presentation of support information may be controlled according to the role of the user in the discussion.
  • a support information sticky note showing how to organize ideas may be presented at a position that is easily visible to the facilitator of the discussion.
  • the timing of presenting the support information may be controlled based on the instruction of the user.
  • the support information may be presented at the timing instructed by the facilitator.
  • the actual prototype objects 501 to 505 are placed on the desk, and the discussion is conducted with the objects 501 to 505 as the subject of discussion.
  • In this example, the size and color information of the objects 501 to 505 are registered in advance, and the position of each object is specified based on the depth data acquired by the depth sensor and the image data acquired by the RGB camera.
  • the user creates an idea by himself / herself.
  • the user inputs an idea using a handwritten sticky note 521 or a digital device, and creates a user input sticky note 522-1 or the like indicating the input idea.
  • the system assists the user in expanding or deepening the idea by presenting, for example, the support information sticky note 523-1 indicating the related information and the support information sticky note 523-2.
  • the user organizes ideas using the user-input sticky note 522-1, the user-input sticky note 522-2, and the like.
  • the system assists the user in organizing the ideas and drawing a conclusion by presenting, for example, the support information sticky note 524-1 and the support information sticky note 524-2 indicating the idea organizing method.
  • Further, the system may support the divergence, deepening, organization, and the like of ideas by presenting Question 525-1 and Question 525-2 on the video display surface 201.
  • For example, general-purpose questions such as those of SCAMPER (based on the Osborn checklist) are prepared in advance.
  • the system may support the discussion by answering the user's question.
  • the time and content of discussion about each of the products 541 to 545 placed on the desk in the store are sensed.
  • The position of each product is specified by the same method as in the example of FIG. 45. This makes it possible to indirectly evaluate the needs and opinions regarding each product.
  • the system identifies the reason based on the content of the utterance. Then, the system can recommend a product that the user is more likely to purchase by making the user pay attention to other products according to the specified reason.
  • the visual effect 551 causes the user to pay attention to the product 542.
  • The stages of discussion to which the present technology is applied are not limited to the examples described above.
  • For example, the present technology can be applied to a discussion that includes only the stage of presenting discussion targets (for example, ideas, opinions, etc.) or a discussion that includes only the stage of discussing the discussion targets.
  • The method of classifying the stages of discussion can also be changed depending on the form of the discussion and the like.
  • Further, for example, the order of step S51 and step S53 in FIG. 14 can be exchanged.
  • FIG. 50 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes by means of a program.
  • In the computer 1000, a CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are interconnected by a bus 1004.
  • An input / output interface 1005 is further connected to the bus 1004.
  • An input unit 1006, an output unit 1007, a recording unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
  • the input unit 1006 includes an input switch, a button, a microphone, an image sensor, and the like.
  • the output unit 1007 includes a display, a speaker, and the like.
  • the recording unit 1008 includes a hard disk, a non-volatile memory, and the like.
  • the communication unit 1009 includes a network interface and the like.
  • the drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer 1000 configured as described above, the CPU 1001 loads the program recorded in the recording unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the above-described series of processes is performed.
  • the program executed by the computer 1000 can be recorded and provided on the removable media 1011 as a package media or the like, for example. Programs can also be provided via wired or wireless transmission media such as local area networks, the Internet, and digital satellite broadcasting.
  • the program can be installed in the recording unit 1008 via the input / output interface 1005 by mounting the removable media 1011 in the drive 1010. Further, the program can be received by the communication unit 1009 and installed in the recording unit 1008 via a wired or wireless transmission medium. In addition, the program can be installed in advance in the ROM 1002 or the recording unit 1008.
  • The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in this specification, or may be a program in which processing is performed in parallel or at necessary timings, such as when a call is made.
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • the embodiment of the present technology is not limited to the above-described embodiment, and various changes can be made without departing from the gist of the present technology.
  • this technology can have a cloud computing configuration in which one function is shared by a plurality of devices via a network and jointly processed.
  • each step described in the above flowchart can be executed by one device or shared by a plurality of devices.
  • Further, when one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared and executed by a plurality of devices.
  • the present technology can also have the following configurations.
  • a situation detection unit that detects the status of the discussion based on the sensor data that senses the state of the discussion with respect to the discussion target that is visually presented to the user.
  • An information processing device including an output control unit that controls to present support information for supporting the discussion when a stagnation of the discussion is detected.
  • The information processing device described above, further including a support method selection unit that selects a support method for the discussion based on at least one of the stage and stagnation status of the discussion, the amount and type of the discussion targets presented by the users, the state of the users, and the status and content of the discussion regarding each discussion target.
  • the state of the user includes at least one of the position, ability, activity amount, and role of the user.
  • the discussion support method includes at least one of a method of supporting the presentation of the discussion target and a method of supporting the discussion.
  • the method of supporting the presentation of the subject of discussion includes at least one of the presentation of relevant information related to the subject of discussion and the proposal of an idea divergence method.
  • The information processing apparatus according to (5) above, wherein the method of supporting the discussion includes at least one of a proposal of an idea organizing method, a proposal of a change of the discussion target, presentation of a positive evaluation of the discussion target, and a proposal of a change of mood.
  • the support method selection unit sets the amount of the related information to be presented based on at least one of the amount and the type of the discussion subject presented by the user.
  • the support information includes at least one of information that supports the presentation of the subject of discussion and information that supports discussion.
  • the information that supports the presentation of the subject of discussion includes at least one of the relevant information related to the subject of discussion and the information indicating the idea divergence method.
  • The information processing apparatus according to (8) above, wherein the information that supports the discussion includes at least one of information indicating an idea organizing method, information indicating a positive evaluation of the discussion target, information prompting a change of the discussion target, and information indicating a method of changing mood.
  • (10) Further including a presentation method setting unit that sets the presentation method of the support information based on the situation of the discussion,
  • the information processing device according to any one of (1) to (9), wherein the output control unit controls the presentation of the support information based on the set presentation method.
  • The information processing device described above, wherein the presentation method setting unit sets the presentation position of the support information based on at least one of the state of the users, the stage of the discussion, and the presentation positions of the discussion targets.
  • (12) The information processing apparatus according to (11) above, wherein the presentation method setting unit sets the presentation position at at least one of a position easily visible to a specific user, a position easy for each user to see, a position where the density of the discussion targets is high, a position where the density of the discussion targets is low, and the periphery of a predetermined object.
  • the information processing device according to (12), wherein the presentation method setting unit selects the specific user based on the state of each user.
  • the information processing apparatus according to any one of (11) to (13), wherein the state of the user includes at least one of the position, ability, activity amount, and role of the user.
  • the information processing device according to any one of (10) to (14), wherein the presentation method setting unit sets a timing for presenting the support information.
  • the situation detection unit detects the stagnation of the discussion based on at least one of the amount of the discussion target presented by the user, the volume of the user's voice, and the movement of the user's hand.
  • the information processing apparatus according to any one of (1) to (15).
  • The information processing device according to (16) or (17) above, wherein the situation detection unit detects the stagnation of the discussion based on at least one of the volume of the users' voices and the movement of the users' hands at the discussion stage.
  • (19) The situation of the discussion is detected based on sensor data that senses the state of the discussion with respect to the discussion target visibly presented to the user, and
  • An information processing method that controls the presentation of support information for supporting the discussion when the stagnation of the discussion is detected.
  • the situation of the discussion is detected based on the sensor data that senses the state of the discussion with respect to the discussion target presented to the user.

Abstract

The present technology pertains to an information processing device, an information processing method, and a program that enable easy activation of a discussion. The information processing device is provided with: a state detection unit for, on the basis of sensor data obtained by sensing performed on the situation of a discussion about a discussion subject presented to users in a visually recognizable manner, detecting a state of the discussion; and an output control unit for, when stagnation of the discussion has been detected, performing control of presenting assistance information for assisting the discussion. The present technology can be applied, for example, to a system for facilitating a discussion. 

Description

Information processing device, information processing method, and program

The present technology relates to an information processing device, an information processing method, and a program, and more particularly to an information processing device, an information processing method, and a program that make it possible to activate a discussion.

Conventionally, it has been proposed to support the activation of a conference by identifying the speaker from voice data in real time and feeding back objective data, such as speaker transitions and utterance times, to the facilitator (see, for example, Patent Document 1).

Japanese Unexamined Patent Publication No. 2006-208482

However, in the invention described in Patent Document 1, realizing the activation of the conference is left to the facilitator. Therefore, for example, if the facilitator is inexperienced, the facilitator may not be able to judge, based on the fed-back data, whether the discussion is going well or propose a concrete method for activating the meeting, and the meeting may not be activated.

The present technology was made in view of such a situation, and makes it possible to easily activate discussions at meetings and the like.

An information processing device according to one aspect of the present technology includes a situation detection unit that detects the status of a discussion based on sensor data obtained by sensing the state of the discussion with respect to a discussion target visibly presented to users, and an output control unit that, when stagnation of the discussion is detected, performs control to present support information for supporting the discussion.

An information processing method according to one aspect of the present technology detects the status of a discussion based on sensor data obtained by sensing the state of the discussion with respect to a discussion target visibly presented to users, and, when stagnation of the discussion is detected, performs control to present support information for supporting the discussion.

A program according to one aspect of the present technology causes a computer to execute processing of detecting the status of a discussion based on sensor data obtained by sensing the state of the discussion with respect to a discussion target visibly presented to users, and, when stagnation of the discussion is detected, performing control to present support information for supporting the discussion.

In one aspect of the present technology, the status of a discussion is detected based on sensor data obtained by sensing the state of the discussion with respect to a discussion target visibly presented to users, and, when stagnation of the discussion is detected, support information for supporting the discussion is presented.
A block diagram showing an embodiment of an information processing system to which the present technology is applied.
A diagram showing a specific example of the input unit and output unit of the information processing system.
A diagram showing a specific example of the input unit and output unit of the information processing system.
A diagram showing a specific example of the input unit and output unit of the information processing system.
A diagram showing a specific example of the input unit and output unit of the information processing system.
A diagram showing an example of how a discussion is conducted.
A diagram showing an example of how keywords are extracted from an idea written on a handwritten sticky note.
A diagram showing an example of how keywords are extracted from an idea input with a digital device.
A flowchart for explaining the discussion support process.
A diagram showing the state of the discussion in the individual work stage.
A diagram showing the state of the discussion in the group work stage.
A diagram showing the state of the discussion in the summary stage.
A flowchart for explaining the details of the situation detection method selection process.
A flowchart for explaining the details of the situation determination process.
A graph showing an example of the transition of the total number of electronic sticky notes created by the users.
A graph showing an example of the transition of the volume of the voice data.
A graph showing an example of the transition of the amount of hand movement of all users.
A flowchart for explaining the details of the support target determination process.
A flowchart for explaining the details of the idea generation support method selection process.
A diagram showing an example of how an idea divergence method is proposed.
A diagram showing an example of how related information is presented.
A flowchart for explaining a first modification of the idea generation support method selection process.
A flowchart for explaining a second modification of the idea generation support method selection process.
A flowchart for explaining a third modification of the idea generation support method selection process.
A flowchart for explaining the details of the discussion support method selection process.
A flowchart for explaining the details of the discussion time measurement process.
A diagram showing an example of how a change of the discussion target is proposed.
A diagram showing an example of how a positive evaluation is presented.
A flowchart for explaining the details of the discussion time measurement process.
A diagram showing an example of how a change of mood is proposed.
A diagram showing an example of how an idea organizing method is proposed.
A table showing an example of how the presentation position of support information is set.
A diagram showing an example of presenting support information around an individual.
A diagram showing an example of presenting support information around a predetermined object.
A diagram showing an example of presenting support information in a blank space.
A diagram showing an example of presenting support information ahead of a user's line of sight.
A diagram showing an example of presenting support information at the center of the video display surface.
A diagram showing an example of presenting support information around a mass of sticky notes.
A graph showing an example of the relationship between the number of clusters and the amount of related information to be presented.
A graph showing an example of the relationship between the number of clusters and the amount of related information to be presented.
A graph showing an example of the relationship between the number of clusters and the amount of related information to be presented.
A diagram showing an example of presenting support information by voice.
A diagram showing an example of detecting the state of each user.
A diagram showing an example of holding a discussion about objects.
A diagram showing an example of holding a discussion with the system.
A diagram showing an example of holding a discussion with the system.
A diagram showing an example of holding a discussion with the system.
A diagram showing an example of recommending a product.
A diagram showing a configuration example of a computer.
Hereinafter, modes for implementing the present technology will be described. The description will be given in the following order.
1. Embodiment
2. Modification examples
3. Others
<<1. Embodiment>>
An embodiment of the present technology will be described with reference to FIGS. 1 to 42.
<Configuration example of information processing system 1>
FIG. 1 is a block diagram showing a configuration example of an information processing system 1 to which the present technology is applied. The information processing system 1 is a system that supports discussions.
The types of discussions that the information processing system 1 can support are not particularly limited, as long as the discussion concerns a discussion target that is visibly presented to the users. A discussion target visibly presented to the users is, for example, a discussion target consisting of a real object, a discussion target shown in visible data such as image data or text data, or a discussion target indicated by virtual visual information presented in the user's field of view by AR (Augmented Reality), VR (Virtual Reality), or the like. Discussion targets consisting of real objects are, for example, products, works of art, and the like. Discussion targets shown in visible data or visual information are, for example, ideas, opinions, various types of information (for example, articles, news, product information, etc.), videos, and the like.

Hereinafter, a person participating in the discussion is referred to as a user.

The information processing system 1 includes an input unit 11, an information processing unit 12, a storage unit 13, and an output unit 14.

The input unit 11 includes input devices for inputting various data to the information processing system 1, such as switches, buttons, and a keyboard, and supplies the input data to the information processing unit 12.

The input unit 11 also includes sensors that sense the state of the discussion. For example, the input unit 11 includes sensors that sense the video display surface on which the discussion is held, objects on the video display surface, the state of the users participating in the discussion, and the like. More specifically, for example, the input unit 11 includes an image sensor, a touch sensor, a microphone, and the like.

The image sensor is composed of, for example, a visible light camera or an infrared camera capable of capturing a two-dimensional image. Alternatively, for example, the image sensor is composed of a stereo camera, a depth sensor, or the like that can acquire three-dimensional data including the depth direction. As the depth sensor, an arbitrary method such as a Time of Flight method or a Structured Light method can be used.

Hereinafter, an example in which a visible light camera and a depth sensor are used as the image sensors will be described.

The touch sensor is a sensor that detects the movement of a user's hand, a marker, or the like with respect to the video display surface. The touch sensor is realized, for example, by providing a touch panel on the video display surface, or by detecting the movement of hands and markers based on images or depth data captured by a camera or depth sensor from the front or back side of the video display surface.

The microphone collects the voices of the users during the discussion.

The video display surface is, for example, the surface of a display on which video is displayed, or a projection surface onto which video is projected.

The information processing unit 12 performs various processes related to supporting the discussion. The information processing unit 12 includes, for example, a data processing unit 21, a support unit 22, an output information generation unit 23, and an output control unit 24.

The data processing unit 21 performs various processes as necessary on the input data and sensor data supplied from the input unit 11, and supplies the results to the support unit 22 and the output information generation unit 23.

The support unit 22 performs processes for supporting the discussion. The support unit 22 includes a situation detection unit 31, a support method selection unit 32, and a presentation method setting unit 33.
The situation detection unit 31 detects the status of the discussion based on the input data, the sensor data, the information stored in the situation detection method storage unit 41, and the like. The status of the discussion is represented by, for example, one or more of the stage of the discussion (its progress) and its stagnation status, the amount and types of discussion targets presented by the users, the status and content of the discussion regarding each discussion target, the presentation positions of the discussion targets, the state of the users, and the like. The state of a user is represented by, for example, one or more of the user's ability, role, activity amount, position, and the like. The user's activity amount indicates how much the user has been active in the discussion, and is represented by, for example, one or more of the amount of discussion targets presented, the amount of speech, the amount of hand movement, and the like.
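As a rough illustration (not part of the patent text), the discussion status described above could be held in data structures such as the following; the field names and value types are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class UserState:
    # The user's state is one or more of ability, role, activity amount, and position.
    proficiency: str = "low"            # discussion ability, e.g. "low" or "high"
    role: str = "participant"           # e.g. "facilitator" or "participant"
    ideas_presented: int = 0            # activity: amount of discussion targets presented
    speech_amount: float = 0.0          # activity: amount of speech
    hand_movement: float = 0.0          # activity: amount of hand movement
    position: Tuple[float, float] = (0.0, 0.0)

@dataclass
class DiscussionStatus:
    stage: str = "individual_work"      # individual work / group work / summary
    stagnation_count: int = 0           # times the current stage was judged stagnant
    targets_presented: int = 0          # amount of discussion targets presented so far
    users: List[UserState] = field(default_factory=list)
```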
The support method selection unit 32 selects a method of supporting the discussion based on the status of the discussion, the information stored in the action information storage unit 42, and the like. For example, based on the status of the discussion, the support method selection unit 32 selects a support action to be executed from among a plurality of actions for supporting the discussion (hereinafter referred to as support actions). The support method selection unit 32 supplies information indicating the selected support action to the output information generation unit 23 and the output control unit 24.

The presentation method setting unit 33 sets, based on the status of the discussion, the information stored in the presentation method information storage unit 43, and the like, the method of presenting the information that is presented when the support action selected by the support method selection unit 32 is executed (hereinafter referred to as support information). For example, the presentation method setting unit 33 sets the presentation position, the presentation amount (number of items presented), the presentation timing, and the like of the support information. The presentation method setting unit 33 supplies information indicating the set presentation method to the output information generation unit 23 and the output control unit 24.

The output information generation unit 23 generates output information to be output from the output unit 14 based on the input data, the sensor data, the support action selected by the support method selection unit 32, the presentation method set by the presentation method setting unit 33, the information stored in the storage unit 13, and the like. The output information includes, for example, support information for prompting a change in user behavior and supporting the discussion. The support information includes, for example, information that supports the presentation of discussion targets, information that supports the discussion, and the like.

The output control unit 24 controls the output, from the output unit 14, of the output information generated by the output information generation unit 23, based on the support action selected by the support method selection unit 32, the presentation method set by the presentation method setting unit 33, the information stored in the storage unit 13, and the like.

The output control unit 24 also has the functions of the control layer of a general OS (Operating System), such as drawing control of multi-content such as windows for displaying applications, and distributing events such as touches to each piece of content.

The storage unit 13 stores information on methods of detecting the status of the discussion, support actions, methods of presenting support information, and the like.

The situation detection method storage unit 41 stores information on methods of detecting the status of the discussion. The information on the detection methods includes, for example, the types of detection methods and how to select them.

The action information storage unit 42 stores information on support actions. The information on support actions includes, for example, the types of support actions and how to select them, as well as the support information to be presented to the users in each support action.

The presentation method information storage unit 43 stores information on methods of presenting support information. The information on presentation methods includes, for example, the types of presentation methods and how to select them.

The output unit 14 includes output devices that present visual information and auditory information to the users. For example, the output unit 14 includes a touch panel, a display, a projector, a speaker, and the like.
<Specific examples of the input unit 11 and output unit 14 of the information processing system 1>
Next, specific examples of the input unit 11 and the output unit 14 of the information processing system 1 will be described with reference to FIGS. 2 to 6.
FIG. 2 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured as an overhead projection type system. In this system, a sensor-equipped projector 101 is installed above a desk 102.

The sensor-equipped projector 101 projects video from above onto the top of the desk 102, like a pendant light or a desk lamp. The top of the desk 102 thus becomes the video display surface, and the discussion is held on the video display surface. The sensor-equipped projector 101 also captures the state of the discussion by photographing the area around the top of the desk 102 with, for example, its attached visible light camera and depth sensor, and supplies the obtained image data and depth data to the information processing unit 12.

FIG. 3 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured as a rear projection type system. In this system, a projector 111 is installed below a desk 112.

The top of the desk 112 is a semi-transmissive screen, and the projector 111 projects video onto the top of the desk 112 from below. The top of the desk 112 thus becomes the video display surface, and the discussion is held on the video display surface.

In addition, a visible light camera and a depth sensor (not shown) are provided at positions similar to those of the sensor-equipped projector 101 in FIG. 2. The visible light camera and the depth sensor capture the state of the discussion by photographing the area around the top of the desk 112, and supply the obtained image data and depth data to the information processing unit 12.

FIG. 4 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured as a side projection type system. In this system, a sensor-equipped projector 121 is installed facing a wall 122.

The sensor-equipped projector 121 projects video onto the wall 122 from the side. The wall 122 thus becomes the video display surface, and the discussion is held on the video display surface. The sensor-equipped projector 121 also captures the state of the discussion by photographing the area around the wall 122 with, for example, its attached visible light camera and depth sensor, and supplies the obtained image data and depth data to the information processing unit 12.

FIG. 5 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured as a flat display type system. This system has a table-like form in which legs 132 are attached to a sensor-equipped touch panel 131.

The sensor-equipped touch panel 131 displays video on a display that serves as the video display surface, and the discussion is held on the video display surface. A touch sensor is provided on the video display surface, and the sensor-equipped touch panel 131 can detect operations such as touches on the video display surface.

In addition, a visible light camera and a depth sensor (not shown) are provided as necessary at positions similar to those of the sensor-equipped projector 101 in FIG. 2. They capture the state of the discussion by photographing the area around the sensor-equipped touch panel 131, and supply the obtained image data and depth data to the information processing unit 12.

FIG. 6 shows an example in which the input unit 11 and the output unit 14 of the information processing system 1 are configured by eyewear type wearable terminals 141.

In this system, each user wears a wearable terminal 141. The visual information displayed by the wearable terminal 141 is superimposed on each user's field of view.

Sensors such as a visible light camera and a depth sensor are provided separately from the wearable terminals 141.
Hereinafter, as shown in FIG. 7, an example will be described in which the users exchange ideas and hold a discussion using sticky notes on a video display surface 201.

In the following, the discussion proceeds in the order of a stage in which each user produces ideas to be discussed (hereinafter referred to as the idea generation stage) and a stage in which the users discuss the ideas that have been produced (hereinafter referred to as the discussion stage). Depending on the situation, the discussion may also return to an earlier stage.

In the idea generation stage, each user individually produces ideas on the video display surface 201. For example, each user writes an idea by hand on a sticky note 202, or inputs an idea using a digital device 203 such as a PC, tablet, or smartphone.

As shown in FIG. 8, the handwritten sticky note 202 is converted into image data by being photographed by a visible light camera (not shown) installed so as to photograph the video display surface 201, and the image data 231 is supplied to the data processing unit 21.

The data processing unit 21 supplies the image data 231 of the sticky note 202 to the output information generation unit 23. The output information generation unit 23 generates image data of an electronic sticky note 211-1 by, for example, binarizing the color image data 231, detecting character contours, and converting the raster image into a vector image, and supplies it to the output control unit 24. The output control unit 24 controls the output unit 14 to display the electronic sticky note 211-1 on the video display surface 201.

The data processing unit 21 also converts the content written in the image data 231 into text data by OCR (Optical Character Recognition) and supplies it to the support unit 22. The situation detection unit 31 extracts nouns and verbs in the text data as keywords.
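A rough sketch of this handwritten-sticky-note pipeline (binarize, OCR, keyword extraction) is given below; the library choices (OpenCV, pytesseract, NLTK) and the English-only part-of-speech tagging are assumptions for illustration, not the implementation described in this specification.

```python
import cv2
import pytesseract
import nltk
# NLTK resources 'punkt' and 'averaged_perceptron_tagger' must be downloaded beforehand.

def extract_keywords_from_sticky(image_path):
    # Binarize the captured color image of the sticky note.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Convert the written content into text data by OCR.
    text = pytesseract.image_to_string(binary)

    # Extract nouns and verbs from the text as keywords.
    tokens = nltk.word_tokenize(text)
    tagged = nltk.pos_tag(tokens)
    return [word for word, tag in tagged if tag.startswith(("NN", "VB"))]
```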
 さらに、デジタルデバイス203は、例えば、図9に示されるように、入力されたアイディアを示す電子付箋211-2の画像データを生成し、入力部11を介して情報処理部12に入力する。データ処理部21は、電子付箋211-2の画像データを、出力情報生成部23を介して出力制御部24に供給する。出力制御部24は、出力部14を制御して、電子付箋211-2を映像表示面201に提示させる。 Further, as shown in FIG. 9, the digital device 203 generates image data of the electronic sticky note 211-2 indicating the input idea, and inputs the image data to the information processing unit 12 via the input unit 11. The data processing unit 21 supplies the image data of the electronic sticky note 211-2 to the output control unit 24 via the output information generation unit 23. The output control unit 24 controls the output unit 14 to present the electronic sticky note 211-2 on the video display surface 201.
 また、データ処理部21は、電子付箋211-2の内容を示すテキストデータを支援部22に供給する。状況検出部31は、テキストデータ内の名詞及び動詞をキーワードとして抽出する。 Further, the data processing unit 21 supplies text data indicating the contents of the electronic sticky note 211-2 to the support unit 22. The situation detection unit 31 extracts nouns and verbs in the text data as keywords.
In this way, during the idea generation stage, each user creates electronic sticky notes 211-1 to 211-4 on which their ideas are written.
 次に、各ユーザは、話し合いの段階で、映像表示面201に提示されている電子付箋211-1乃至電子付箋211-4に記載されているアイディアについて話し合う。各ユーザは、指等で操作することにより、映像表示面201に提示されている電子付箋211-1乃至電子付箋211-4を移動させることができる。 Next, at the stage of discussion, each user discusses the ideas described in the electronic sticky notes 211-1 to the electronic sticky notes 211-4 presented on the video display surface 201. Each user can move the electronic sticky notes 211-1 to the electronic sticky notes 211-4 presented on the image display surface 201 by operating with a finger or the like.
 さらに、以下、ユーザの作業形態により、個人作業段階、グループ作業段階、及び、まとめ段階に議論が分類され、個人作業段階、グループ作業段階、まとめ段階の順に議論が進行するものとする。各段階の詳細は後述するが、個人作業段階は、アイディア出し段階に含まれ、グループ作業段階及びまとめ段階は、話し合い段階に含まれる。 Furthermore, the discussion will be classified into the individual work stage, the group work stage, and the summary stage according to the user's work form, and the discussion will proceed in the order of the individual work stage, the group work stage, and the summary stage. The details of each stage will be described later, but the individual work stage is included in the idea generation stage, and the group work stage and the summary stage are included in the discussion stage.
<Discussion support processing>
Next, the discussion support process executed by the information processing system 1 will be described with reference to the flowchart of FIG. 10.
 この処理は、例えば、議論が開始されたとき開始され、議論が終了したとき終了する。なお、議論の開始及び終了のタイミングは、例えば、情報処理システム1が自動的に検出するようにしてもよいし、ユーザが明示的に情報処理システム1に入力するようにしてもよい。 This process starts, for example, when the discussion starts and ends when the discussion ends. The timing of the start and end of the discussion may be, for example, automatically detected by the information processing system 1, or may be explicitly input by the user to the information processing system 1.
 ステップS1において、入力部11は、入力情報を取得する。 In step S1, the input unit 11 acquires the input information.
 例えば、入力部11は、可視光カメラ及びデプスセンサにより映像表示面201を撮影することにより、画像データ及びデプスデータを生成し、データ処理部21に供給する。 For example, the input unit 11 generates image data and depth data by photographing the image display surface 201 with a visible light camera and a depth sensor, and supplies the image data and the depth data to the data processing unit 21.
 画像データに手書きの付箋が含まれる場合、図8を参照して上述したように、手書きの付箋から電子付箋が生成され、映像表示面201に表示される。また、手書きの付箋の内容がテキストデータ化され、支援部22に供給される。 When the image data includes a handwritten sticky note, an electronic sticky note is generated from the handwritten sticky note and displayed on the video display surface 201 as described above with reference to FIG. In addition, the contents of the handwritten sticky note are converted into text data and supplied to the support unit 22.
 また、データ処理部21は、デプスデータに対してノイズ除去等の処理を行った後、デプスデータを支援部22に供給する。 Further, the data processing unit 21 supplies the depth data to the support unit 22 after performing processing such as noise removal on the depth data.
 例えば、入力部11は、マイクロフォンにより映像表示面201の周辺の音声を電気信号である音声データに変換し、情報処理部12に供給する。 For example, the input unit 11 converts the sound around the image display surface 201 into audio data which is an electric signal by the microphone and supplies it to the information processing unit 12.
 データ処理部21は、音声データに対して、デジタル化、ノイズ除去等の処理を行った後、音声データを支援部22に供給する。 The data processing unit 21 performs processing such as digitization and noise removal on the voice data, and then supplies the voice data to the support unit 22.
 さらに、入力部11は、デジタルデバイスから電子付箋の画像データが入力された場合、電子付箋の画像データを情報処理部12に供給する。これに対して、図9を参照して上述したように、入力された電子付箋が映像表示面201に表示されるとともに、電子付箋の内容を示すテキストデータが支援部22に供給される。 Further, when the image data of the electronic sticky note is input from the digital device, the input unit 11 supplies the image data of the electronic sticky note to the information processing unit 12. On the other hand, as described above with reference to FIG. 9, the input electronic sticky note is displayed on the video display surface 201, and text data indicating the contents of the electronic sticky note is supplied to the support unit 22.
 ステップS2において、状況検出部31は、議論の段階を検出する。具体的には、状況検出部31は、デプスデータに基づいて、各ユーザの状態を検出する。そして、状況検出部31は、各ユーザの状態に基づいて、現在の議論の段階を検出する。例えば、状況検出部31は、現在の議論の段階が、個人作業段階、グループ作業段階、及び、まとめ段階のいずれであるかを判定する。 In step S2, the situation detection unit 31 detects the stage of discussion. Specifically, the situation detection unit 31 detects the state of each user based on the depth data. Then, the situation detection unit 31 detects the current stage of discussion based on the state of each user. For example, the situation detection unit 31 determines whether the current discussion stage is an individual work stage, a group work stage, or a summary stage.
 個人作業段階は、各ユーザが個別に作業を行っている段階である。例えば、ブレインストーミングやディスカッションにおいては、各ユーザが個別にアイディアを考えたり、アイディア出しを行ったりしている段階である。 The individual work stage is the stage where each user is working individually. For example, in brainstorming and discussions, each user is at the stage of individually thinking about ideas and giving ideas.
FIG. 11 schematically shows the area around the video display surface 201 during the individual work stage. In this example, every user is creating electronic sticky notes individually. Accordingly, for example, when most of the users are creating electronic sticky notes, that is, when the proportion of users who are creating electronic sticky notes is equal to or higher than a predetermined threshold value, the situation detection unit 31 determines that the discussion is in the individual work stage.
 グループ作業段階は、各ユーザが参加して話し合いを行っている段階である。例えば、ブレインストーミングやディスカッションにおいては、ユーザ全員でアイディアについて話し合いながら、必要に応じて新しいアイディアを出している段階である。 The group work stage is the stage where each user participates and has a discussion. For example, in brainstorming and discussions, all users are discussing ideas and coming up with new ideas as needed.
FIG. 12 schematically shows the area around the video display surface 201 during the group work stage. In this example, users who are creating electronic sticky notes and users who are pointing at or operating electronic sticky notes are mixed. Accordingly, for example, when only some of the users are creating electronic sticky notes, that is, when at least one user is creating an electronic sticky note but the proportion of users doing so is below a predetermined threshold value, the situation detection unit 31 determines that the discussion is in the group work stage.
The summary stage is, for example, the stage in which all users discuss together, organize the content of the discussion, and try to reach a conclusion. At this stage, idea generation has already been completed.
 図13は、まとめ段階における映像表示面201周辺の様子を模式的に示している。この例では、電子付箋を作成しているユーザは存在せず、ほとんどのユーザが電子付箋を指さしたり、操作したりしている様子が示されている。従って、例えば、状況検出部31は、電子付箋の作成を行っているユーザが存在しない場合、まとめ段階であると判定する。 FIG. 13 schematically shows the state around the video display surface 201 at the summary stage. In this example, there is no user who creates the electronic sticky note, and it is shown that most users point to or operate the electronic sticky note. Therefore, for example, the situation detection unit 31 determines that it is in the summary stage when there is no user who is creating the electronic sticky note.
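The stage determination described with reference to FIGS. 11 to 13 can be summarized by the following sketch; the 0.8 ratio threshold and the function name are assumptions introduced only for illustration.

# Illustrative sketch of the stage determination. writing_users is the number of users
# currently creating an electronic sticky note; the 0.8 threshold is an example value.
def detect_discussion_stage(writing_users, total_users, threshold=0.8):
    if writing_users == 0:
        return "summary"             # nobody is writing: summary stage
    if writing_users / total_users >= threshold:
        return "individual_work"     # most users are writing sticky notes
    return "group_work"              # writing and discussing users are mixed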
 図10に戻り、ステップS3において、状況検出部31は、ステップS2の判定結果に基づいて、個人作業段階であるか否かを判定する。個人作業段階でないと判定された場合、すなわち、グループ作業段階又はまとめ段階であると判定された場合、処理はステップS4に進む。 Returning to FIG. 10, in step S3, the situation detection unit 31 determines whether or not it is in the individual work stage based on the determination result in step S2. If it is determined that it is not the individual work stage, that is, if it is determined that it is the group work stage or the summary stage, the process proceeds to step S4.
 ステップS4において、状況検出部31は、状況検出方法選択処理を実行し、その後、処理はステップS5に進む。 In step S4, the situation detection unit 31 executes the situation detection method selection process, and then the process proceeds to step S5.
 ここで、図14のフローチャートを参照して、状況検出方法選択処理の詳細について説明する。 Here, the details of the situation detection method selection process will be described with reference to the flowchart of FIG.
In step S51, the situation detection unit 31 determines whether or not the intimacy among the users is high. If it is determined that the intimacy among the users is high, the process proceeds to step S52.
 なお、ユーザの親密度は、例えば、議論の開始前にユーザが入力するようにしてもよいし、情報処理システム1が自動的に判定するようにしてもよい。 Note that the intimacy of the user may be input by the user before the start of the discussion, or may be automatically determined by the information processing system 1.
In the latter case, for example, the situation detection unit 31 recognizes each user by face authentication, ID authentication, card authentication using an employee ID card or a student ID card, or the like. The situation detection unit 31 then determines that the intimacy among the users is high when the combination of users is included in a combination of users who previously held a discussion together using the information processing system 1, or when the users belong to the same organization (for example, the same department or class).
 ステップS52において、状況検出部31は、手の動きのみの使用を決定する。すなわち、状況検出部31は、状況検出方法蓄積部41に蓄積されている情報に基づいて、ユーザの手の動きのみを用いて議論の状況を検出するように決定する。 In step S52, the situation detection unit 31 decides to use only the movement of the hand. That is, the situation detection unit 31 determines to detect the situation of the discussion using only the movement of the user's hand based on the information stored in the situation detection method storage unit 41.
 その後、状況検出方法選択処理は終了する。 After that, the status detection method selection process ends.
 一方、ステップS51において、ユーザの親密度が低いと判定された場合、処理はステップS53に進む。 On the other hand, if it is determined in step S51 that the intimacy of the user is low, the process proceeds to step S53.
 ステップS53において、状況検出部31は、電子付箋の操作以外に手を使う作業があるか否かを判定する。電子付箋の操作以外に手を使う作業があると判定された場合、処理はステップS54に進む。 In step S53, the situation detection unit 31 determines whether or not there is work using a hand other than the operation of the electronic sticky note. If it is determined that there is work using a hand other than the operation of the electronic sticky note, the process proceeds to step S54.
 ここで、電子付箋の操作以外に手を使う作業とは、例えば、議事録やメモをとる作業等である。 Here, the work of using hands other than the operation of the electronic sticky note is, for example, the work of taking minutes and memos.
 ステップS54において、状況検出部31は、音声のみの使用を決定する。すなわち、状況検出部31は、状況検出方法蓄積部41に蓄積されている情報に基づいて、音声のみを用いて議論の状況を検出するように決定する。 In step S54, the situation detection unit 31 decides to use only the voice. That is, the situation detection unit 31 determines to detect the situation of the discussion using only voice based on the information stored in the situation detection method storage unit 41.
 その後、状況検出方法選択処理は終了する。 After that, the status detection method selection process ends.
 一方、ステップS53において、電子付箋の操作以外に手を使う作業がないと判定された場合、処理はステップS55に進む。 On the other hand, if it is determined in step S53 that there is no work to use other than the operation of the electronic sticky note, the process proceeds to step S55.
 ステップS55において、状況検出部31は、音声と手の動きの使用を決定する。すなわち、状況検出部31は、状況検出方法蓄積部41に蓄積されている情報に基づいて、音声と手の動きの両方を用いて議論の状況を検出するように決定する。 In step S55, the situation detection unit 31 determines the use of voice and hand movement. That is, the situation detection unit 31 determines to detect the situation of the discussion using both voice and hand movement based on the information stored in the situation detection method storage unit 41.
 その後、状況検出方法選択処理は終了する。 After that, the status detection method selection process ends.
 例えば、グループ作業段階又はまとめ段階では、ユーザ間で話し合いが活発に行われている場合、ユーザの声が大きくなり、話し合いが停滞している場合、ユーザの声が小さくなる。一方、ユーザが親密である場合、議論とは関係のない雑談が行われ、雑談によりユーザの声が大きくなる可能性がある。 For example, in the group work stage or the summary stage, when the discussion is actively carried out between the users, the user's voice becomes loud, and when the discussion is stagnant, the user's voice becomes quiet. On the other hand, when the users are intimate, chats that are not related to the discussion are held, and the chats may make the user's voice louder.
 従って、ユーザが親密でない場合、音声が議論の状況の検出条件に用いられる。一方、ユーザが親密である場合、音声は議論の状況の検出条件に用いられない。 Therefore, if the user is not intimate, voice is used as a condition for detecting the situation of the discussion. On the other hand, if the user is intimate, the voice is not used as a condition for detecting the situation of the discussion.
 また、例えば、ユーザが議論を活発に行っている場合、電子付箋を操作する(例えば、電子付箋を動かしたり、指さしたりする)ために、ユーザの手の動きが大きくなると想定される。一方、議論が停滞している場合、電子付箋の操作はあまり行われず、ユーザの手の動きが小さくなると想定される。ただし、電子付箋の操作以外に手を使う作業がある場合、ユーザの手の動きが大きくても、必ずしも議論が活発に行われているとは限らない。 Also, for example, when the user is actively engaged in discussions, it is assumed that the movement of the user's hand becomes large in order to operate the electronic sticky note (for example, move or point to the electronic sticky note). On the other hand, when the discussion is stagnant, it is assumed that the operation of the electronic sticky note is not performed so much and the movement of the user's hand becomes small. However, when there is work that uses hands other than the operation of electronic sticky notes, even if the movement of the user's hand is large, the discussion is not always active.
 従って、電子付箋の操作以外に手を使う作業がない場合、手の動きが議論の状況の検出条件に用いられる。一方、電子付箋の操作以外に手を使う作業がある場合、手の動きは議論の状況の検出条件に用いられない。 Therefore, when there is no work to use the hand other than the operation of the electronic sticky note, the movement of the hand is used as a condition for detecting the situation of the discussion. On the other hand, when there is work using the hand other than the operation of the electronic sticky note, the movement of the hand is not used as a detection condition of the situation of discussion.
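The selection logic of steps S51 to S55 can therefore be outlined, purely as a sketch under the assumptions above, as follows.

# Illustrative sketch of the detection-method selection of FIG. 14.
def select_detection_methods(users_are_intimate, other_hand_tasks_exist):
    if users_are_intimate:
        return {"hands"}             # loud voices may only be chat, so rely on hand movement
    if other_hand_tasks_exist:
        return {"voice"}             # hands may be busy taking minutes or memos
    return {"voice", "hands"}        # otherwise both cues are usable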
 図10に戻り、一方、ステップS3において、個人作業段階であると判定された場合、ステップS4の処理はスキップされ、処理はステップS5に進む。 Returning to FIG. 10, on the other hand, if it is determined in step S3 that it is an individual work stage, the process of step S4 is skipped and the process proceeds to step S5.
 ステップS5において、状況検出部31は、状況検出処理を実行し、その後、処理はステップS6に進む。 In step S5, the status detection unit 31 executes the status detection process, and then the process proceeds to step S6.
 ここで、図15のフローチャートを参照して、状況検出処理の詳細について説明する。 Here, the details of the situation detection process will be described with reference to the flowchart of FIG.
 ステップS101において、状況検出部31は、個人作業段階であるか否かを判定する。個人作業段階であると判定された場合、処理はステップS102に進む。 In step S101, the situation detection unit 31 determines whether or not it is in the individual work stage. If it is determined that it is in the personal work stage, the process proceeds to step S102.
In step S102, the situation detection unit 31 determines whether or not idea generation is stagnating. For example, the situation detection unit 31 determines whether or not idea generation is stagnating based on the number of electronic sticky notes that have been created with ideas written on them.
 図16は、全ユーザにより作成された電子付箋の総数の推移を示すグラフである。横軸は、議論の開始時からの経過時間を示し、縦軸は、電子付箋の総数を示している。 FIG. 16 is a graph showing changes in the total number of electronic sticky notes created by all users. The horizontal axis shows the elapsed time from the start of the discussion, and the vertical axis shows the total number of electronic sticky notes.
For example, when the number of electronic sticky notes created in the latest T seconds is less than Q, the situation detection unit 31 determines that the creation speed of electronic sticky notes has dropped, that is, that the presentation of new ideas from the users is stagnating, and the process proceeds to step S103.
 なお、判定に用いるT秒及びQ個の値は、議論の参加人数や内容等により調整される。 The T seconds and Q values used for the judgment are adjusted according to the number of participants and the content of the discussion.
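As an illustration, the check of step S102 can be sketched as follows; the concrete values of T and Q shown here are assumed examples only.

# Illustrative sketch of step S102: creation_times holds the timestamps (in seconds) at which
# electronic sticky notes were created; T and Q are tuning parameters (example values).
def idea_generation_stagnant(creation_times, now, T=60.0, Q=3):
    recent = [t for t in creation_times if now - t <= T]
    return len(recent) < Q           # fewer than Q new sticky notes in the last T seconds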
 ステップS103において、状況検出部31は、議論が停滞していると判定する。 In step S103, the situation detection unit 31 determines that the discussion is stagnant.
 その後、状況検出処理は終了する。 After that, the status detection process ends.
On the other hand, in step S102, when the number of sticky notes created in the latest T seconds is Q or more, the situation detection unit 31 determines that the creation speed of sticky notes has not dropped, that is, that the presentation of new ideas from the users is not stagnating, and the process proceeds to step S104.
 ステップS104において、状況検出部31は、議論が活発であると判定する。 In step S104, the situation detection unit 31 determines that the discussion is active.
 その後、状況検出処理は終了する。 After that, the status detection process ends.
 一方、ステップS101において、個人作業段階でないと判定された場合、すなわち、グループ作業段階、又は、まとめ段階であると判定された場合、処理はステップS105に進む。 On the other hand, if it is determined in step S101 that it is not the individual work stage, that is, if it is determined that it is in the group work stage or the summary stage, the process proceeds to step S105.
 ステップS105において、状況検出部31は、検出条件に音声を使用するか否かを判定する。状況検出部31は、上述したステップS4の処理において、議論の状況の検出条件に音声を使用するように決定した場合、検出条件に音声を使用すると判定し、処理はステップS106に進む。 In step S105, the situation detection unit 31 determines whether or not to use voice as the detection condition. When the situation detection unit 31 determines in the process of step S4 described above to use voice as the detection condition of the situation of discussion, it determines that voice is used as the detection condition, and the process proceeds to step S106.
 ステップS106において、状況検出部31は、声が小さいか否かを判定する。 In step S106, the situation detection unit 31 determines whether or not the voice is low.
 図17は、入力部11が備えるマイクロフォンにより取得された音声データの音量の推移を示すグラフである。横軸は、議論の開始時からの経過時間を示し、縦軸は、音量(単位はdB)を示している。 FIG. 17 is a graph showing the transition of the volume of the voice data acquired by the microphone included in the input unit 11. The horizontal axis shows the elapsed time from the start of the discussion, and the vertical axis shows the volume (unit: dB).
 例えば、状況検出部31は、直近のT秒間において、音声データの音量がB(dB)未満の状態が続いている場合、声が小さいと判定し、処理はステップS107に進む。例えば、議論が停滞しており、各ユーザの発言が少ない場合、処理はステップS107に進む。 For example, if the volume of the voice data continues to be less than B (dB) in the latest T seconds, the situation detection unit 31 determines that the voice is low, and the process proceeds to step S107. For example, if the discussion is stagnant and there are few comments from each user, the process proceeds to step S107.
 一方、ステップS105において、検出条件に音声を使用しないと判定された場合、ステップS106の処理はスキップされ、処理はステップS107に進む。 On the other hand, if it is determined in step S105 that voice is not used as the detection condition, the process of step S106 is skipped and the process proceeds to step S107.
In step S107, the situation detection unit 31 determines whether or not hand movement is to be used as a detection condition. If the situation detection unit 31 has decided, in the process of step S4 described above, to use hand movement as a detection condition for the state of the discussion, it determines that hand movement is to be used as a detection condition, and the process proceeds to step S108.
 ステップS108において、状況検出部31は、手が動いていないか否かを判定する。 In step S108, the situation detection unit 31 determines whether or not the hand is moving.
FIG. 18 is a graph showing the transition of the total amount of hand movement of all users. The horizontal axis shows the elapsed time from the start of the discussion, and the vertical axis shows the total amount of hand movement of all users (in mm).
 例えば、状況検出部31は、デプスデータに基づいて、各ユーザの手の動きを常時検出する。また、状況検出部31は、直近のT秒内の各ユーザの手の位置の時間推移に基づいて、各ユーザの手の移動量を算出する。そして、状況検出部31は、直近のT秒内の全ユーザの手の移動量の合計がM(mm)未満である場合、手が動いていないと判定し、処理はステップS109に進む。例えば、議論が停滞しており、各ユーザが電子付箋を指したり動かしたりすることが少ない場合、処理はステップS109に進む。 For example, the situation detection unit 31 constantly detects the movement of each user's hand based on the depth data. Further, the situation detection unit 31 calculates the amount of movement of each user's hand based on the time transition of the position of each user's hand within the latest T seconds. Then, when the total amount of hand movements of all users within the latest T seconds is less than M (mm), the situation detection unit 31 determines that the hands are not moving, and the process proceeds to step S109. For example, if the discussion is stagnant and each user rarely points to or moves the electronic sticky note, the process proceeds to step S109.
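A sketch of this hand-movement condition is shown below; the data format of the tracked hand positions and the example values of T and M are assumptions introduced for illustration.

# Illustrative sketch of the condition of step S108. hand_tracks maps each user to a
# chronologically ordered list of (timestamp, x, y) hand positions in mm, estimated from
# the depth data (assumed format).
import math

def hands_not_moving(hand_tracks, now, T=30.0, M=500.0):
    total = 0.0
    for positions in hand_tracks.values():
        recent = [(t, x, y) for t, x, y in positions if now - t <= T]
        for (_, x0, y0), (_, x1, y1) in zip(recent, recent[1:]):
            total += math.hypot(x1 - x0, y1 - y0)    # movement between successive samples
    return total < M                                 # True when total movement is below M (mm)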
 一方、ステップS107において、検出条件に手の動きを使用しないと判定された場合、ステップS108の処理はスキップされ、処理はステップS109に進む。 On the other hand, if it is determined in step S107 that the movement of the hand is not used for the detection condition, the process of step S108 is skipped and the process proceeds to step S109.
 ステップS109において、状況検出部31は、議論が停滞していると判定する。 In step S109, the situation detection unit 31 determines that the discussion is stagnant.
In this way, it is determined that the discussion is stagnant when both voice and hand movement are used as detection conditions and it is determined that the voices are quiet and the hands are not moving, when only hand movement is used as a detection condition and it is determined that the hands are not moving, or when only voice is used as a detection condition and it is determined that the voices are quiet.
 その後、状況検出処理は終了する。 After that, the status detection process ends.
On the other hand, in step S108, when the total amount of hand movement of all users within the latest T seconds is M (mm) or more, the situation detection unit 31 determines that the hands are moving, and the process proceeds to step S110. For example, if the discussion is active and the users frequently point at or move the electronic sticky notes, the process proceeds to step S110.
 また、ステップS106において、状況検出部31は、直近のT秒間において、音声データの音量がB(dB)以上となる瞬間が存在する場合、声が大きいと判定し、処理はステップS110に進む。例えば、議論が活発であり、各ユーザの発言が多い場合、処理はステップS110に進む。 Further, in step S106, the situation detection unit 31 determines that the voice is loud when there is a moment when the volume of the voice data becomes B (dB) or more in the latest T seconds, and the process proceeds to step S110. For example, if the discussion is active and there are many comments from each user, the process proceeds to step S110.
 ステップS110において、状況検出部31は、議論が活発であると判定する。 In step S110, the situation detection unit 31 determines that the discussion is active.
In this way, it is determined that the discussion is active when both voice and hand movement, or only hand movement, are used as detection conditions and it is determined that the hands are moving, or when both voice and hand movement, or only voice, are used as detection conditions and it is determined that the voices are loud.
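The two summaries above can be outlined by the following sketch, which returns True when the discussion is judged stagnant and False when it is judged active; the flags are assumed to be the results of the voice and hand checks described above.

# Illustrative sketch of the combined decision. methods is the set chosen in step S4.
def discussion_is_stagnant(methods, voices_are_quiet, hands_are_still):
    if methods == {"voice", "hands"}:
        return voices_are_quiet and hands_are_still
    if methods == {"hands"}:
        return hands_are_still
    return voices_are_quiet          # methods == {"voice"}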
 その後、状況検出処理は終了する。 After that, the status detection process ends.
 図10に戻り、ステップS6において、状況検出部31は、ステップS5の処理の結果に基づいて、議論が停滞しているか否かを判定する。議論が活発であると判定された場合、処理はステップS1に戻る。 Returning to FIG. 10, in step S6, the situation detection unit 31 determines whether or not the discussion is stagnant based on the result of the process in step S5. If it is determined that the discussion is active, the process returns to step S1.
 その後、ステップS6において、議論が停滞していると判定されるまで、ステップS1乃至ステップS6の処理が繰り返し実行される。 After that, in step S6, the processes of steps S1 to S6 are repeatedly executed until it is determined that the discussion is stagnant.
 一方、ステップS6において、議論が停滞していると判定された場合、処理はステップS7に進む。 On the other hand, if it is determined in step S6 that the discussion is stagnant, the process proceeds to step S7.
 ステップS7において、状況検出部31は、個人作業段階であるか否かを判定する。個人作業段階であると判定された場合、処理はステップS8に進む。 In step S7, the situation detection unit 31 determines whether or not it is in the individual work stage. If it is determined that it is in the personal work stage, the process proceeds to step S8.
In step S8, the information processing unit 12 executes the support target determination process, and then the process proceeds to step S9.
 ここで、図19のフローチャートを参照して、支援対象決定処理の詳細について説明する。 Here, the details of the support target determination process will be described with reference to the flowchart of FIG.
 ステップS151において、状況検出部31は、全ての電子付箋のキーワードを抽出する。すなわち、状況検出部31は、図8及び図9を参照して上述した方法により、全ての電子付箋に記載されているアイディアからキーワードを抽出する。 In step S151, the situation detection unit 31 extracts all the keywords of the electronic sticky note. That is, the situation detection unit 31 extracts keywords from the ideas described in all the electronic sticky notes by the method described above with reference to FIGS. 8 and 9.
 なお、過去の処理において、キーワードを抽出済みの電子付箋がある場合、その電子付箋からのキーワードの抽出は省略することが可能である。 If there is an electronic sticky note from which keywords have been extracted in the past processing, it is possible to omit the extraction of the keyword from the electronic sticky note.
 ステップS152において、状況検出部31は、未処理の電子付箋のうちの1つを選択する。 In step S152, the status detection unit 31 selects one of the unprocessed electronic sticky notes.
 ステップS153において、状況検出部31は、選択した電子付箋のキーワードと処理済みの電子付箋のキーワードを比較する。 In step S153, the status detection unit 31 compares the selected electronic sticky note keyword with the processed electronic sticky note keyword.
 ここで、処理済みの電子付箋とは、ステップS152の処理で選択され、ステップS153乃至ステップS155の処理を実行済みの電子付箋のことである。 Here, the processed electronic sticky note is an electronic sticky note that has been selected in the process of step S152 and has been subjected to the processes of steps S153 to S155.
In step S154, the situation detection unit 31 determines whether or not there is an electronic sticky note whose keywords all match. When none of the processed electronic sticky notes has keywords that match those of the selected electronic sticky note, the situation detection unit 31 determines that there is no electronic sticky note whose keywords all match, and the process proceeds to step S155.
 ステップS155において、状況検出部31は、重複していないアイディアとしてカウントする。すなわち、アイディア数が1つインクリメントされる。 In step S155, the situation detection unit 31 counts as non-overlapping ideas. That is, the number of ideas is incremented by one.
 その後、処理はステップS156に進む。 After that, the process proceeds to step S156.
 一方、ステップS154において、状況検出部31は、処理済みの電子付箋の中に、選択した電子付箋とキーワードが一致する電子付箋がある場合、全てのキーワードが一致する電子付箋があると判定し、ステップS155の処理はスキップされ、処理はステップS156に進む。すなわち、選択した電子付箋に記載されているアイディアは、処理済みの電子付箋に記載されているアイディアと一致すると判定され、アイディア数にカウントされない。 On the other hand, in step S154, if the processed electronic sticky note includes an electronic sticky note whose keywords match the selected electronic sticky note, the status detection unit 31 determines that there is an electronic sticky note whose keywords match all the keywords. The process of step S155 is skipped, and the process proceeds to step S156. That is, the idea described in the selected electronic sticky note is determined to match the idea described in the processed electronic sticky note, and is not counted in the number of ideas.
 ステップS156において、状況検出部31は、全ての電子付箋の処理が終了したか否かを判定する。まだ全ての電子付箋の処理が終了していないと判定された場合、処理はステップS152に戻る。 In step S156, the status detection unit 31 determines whether or not all the electronic sticky note processing has been completed. If it is determined that the processing of all the electronic sticky notes has not been completed, the processing returns to step S152.
 その後、ステップS156において、全ての電子付箋の処理が終了したと判定されるまで、ステップS152乃至ステップS156の処理が繰り返し実行される。これにより、全ての電子付箋に記載されているアイディアの数が、重複分を除いてカウントされる。 After that, in step S156, the processes of steps S152 to S156 are repeatedly executed until it is determined that the processing of all the electronic sticky notes has been completed. As a result, the number of ideas described on all electronic sticky notes is counted excluding duplicates.
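The duplicate-excluding count of steps S151 to S156 can be sketched as follows; treating two sticky notes as the same idea when their keyword sets coincide is the interpretation assumed here.

# Illustrative sketch of the unique-idea count: sticky_note_keywords is a list with one
# keyword list per electronic sticky note.
def count_unique_ideas(sticky_note_keywords):
    seen = set()
    count = 0
    for keywords in sticky_note_keywords:
        signature = frozenset(keywords)
        if signature not in seen:        # no processed sticky note has the same keyword set
            seen.add(signature)
            count += 1                   # counted as a non-overlapping idea
    return count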
 一方、ステップS156において、全ての電子付箋の処理が終了したと判定された場合、処理はステップS157に進む。 On the other hand, if it is determined in step S156 that the processing of all electronic sticky notes has been completed, the processing proceeds to step S157.
 ステップS157において、状況検出部31は、アイディア数が閾値未満であるか否かを判定する。アイディア数が閾値未満であると判定された場合、すなわち、各ユーザから提示されたアイディア数が不足している場合、処理はステップS158に進む。 In step S157, the situation detection unit 31 determines whether or not the number of ideas is less than the threshold value. If it is determined that the number of ideas is less than the threshold value, that is, if the number of ideas presented by each user is insufficient, the process proceeds to step S158.
 ステップS158において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、アイディア出しの支援の実行を決定する。 In step S158, the support method selection unit 32 determines the execution of support for issuing an idea based on the information stored in the action information storage unit 42.
 その後、支援対象決定処理は終了する。 After that, the support target determination process ends.
 一方、ステップS157において、アイディア数が閾値以上であると判定された場合、すなわち、各ユーザから提示されたアイディア数が十分である場合、処理はステップS159に進む。 On the other hand, if it is determined in step S157 that the number of ideas is equal to or greater than the threshold value, that is, if the number of ideas presented by each user is sufficient, the process proceeds to step S159.
 ステップS159において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、話し合いの支援の実行を決定する。 In step S159, the support method selection unit 32 determines the execution of support for discussion based on the information stored in the action information storage unit 42.
 その後、支援対象決定処理は終了する。 After that, the support target determination process ends.
 例えば、個人作業段階において、アイディア出しの支援を続けすぎると、グループ作業段階への移行が遅れることが想定される。 For example, if you continue to support the idea generation in the individual work stage, it is expected that the transition to the group work stage will be delayed.
On the other hand, by providing support for discussion even in the individual work stage when the number of ideas is equal to or greater than the threshold value, even users with limited discussion skills can be expected to move smoothly to the group work stage, for example.
 図10に戻り、ステップS9において、支援方法選択部32は、ステップS8の処理の結果に基づいて、アイディア出しの支援を行うか否かを判定する。アイディア出しの支援を行うと判定された場合、処理はステップS10に進む。 Returning to FIG. 10, in step S9, the support method selection unit 32 determines whether or not to support the idea generation based on the result of the process in step S8. If it is determined to support the idea generation, the process proceeds to step S10.
 ステップS10において、情報処理部12は、アイディア出し支援方法選択処理を実行する。 In step S10, the information processing unit 12 executes an idea generation support method selection process.
 ここで、図20のフローチャートを参照して、アイディア出し支援方法選択処理の詳細について説明する。 Here, the details of the idea generation support method selection process will be described with reference to the flowchart of FIG.
 ステップS201において、状況検出部31は、アイディアのキーワードを抽出する。例えば、状況検出部31は、図19のステップS151と同様の処理により、全ての電子付箋に記載されたアイディアのキーワードを抽出する。 In step S201, the situation detection unit 31 extracts the keyword of the idea. For example, the situation detection unit 31 extracts the keyword of the idea described in all the electronic sticky notes by the same process as in step S151 of FIG.
 ステップS202において、状況検出部31は、キーワードの種類が不十分であるか否かを判定する。キーワードの種類が不十分であると判定された場合、処理はステップS203に進む。これは、例えば、各ユーザから提示されたアイディアの種類(アイディアの広がり)が不十分である場合である。 In step S202, the situation detection unit 31 determines whether or not the type of keyword is insufficient. If it is determined that the type of keyword is insufficient, the process proceeds to step S203. This is, for example, when the type of idea (spread of idea) presented by each user is insufficient.
 ステップS203において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、アイディア発散法の提案を選択する。支援方法選択部32は、アイディア発散法の提案を選択したことを出力情報生成部23及び出力制御部24に通知する。 In step S203, the support method selection unit 32 selects the proposal of the idea divergence method based on the information stored in the action information storage unit 42. The support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the proposal of the idea divergence method has been selected.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
 図21は、アイディア発散法の提案方法の例を示している。 FIG. 21 shows an example of a method of proposing an idea divergence method.
 例えば、映像表示面201において、ユーザから提示されたアイディアを示す電子付箋301-1及び電子付箋301-2の間に、電子付箋302-1及び電子付箋302-2が提示される。電子付箋302-1及び電子付箋302-2には、5W2H、マンダラート、シナリオグラフ等のアイディア発散法のテンプレートが示されている。 For example, on the video display surface 201, the electronic sticky note 302-1 and the electronic sticky note 302-2 are presented between the electronic sticky note 301-1 and the electronic sticky note 301-2 showing the idea presented by the user. Electronic sticky notes 302-1 and electronic sticky notes 302-2 show templates for idea divergence methods such as 5W2H, mandarat, and scenario graphs.
 これに対して、例えば、ユーザは、電子付箋302-1及び電子付箋302-2に示されるアイディア発散法を実行することにより、多様なアイディアの提案を促され、アイディアを広げることができる。 On the other hand, for example, the user can be prompted to propose various ideas and spread the ideas by executing the idea divergence method shown in the electronic sticky notes 302-1 and the electronic sticky notes 302-2.
 なお、支援方法選択部32は、議論の内容や状況に応じて、適切なアイディア発散法を選択する。 The support method selection unit 32 selects an appropriate idea divergence method according to the content and situation of the discussion.
Further, for example, when the user selects one of the proposed idea divergence methods, for example by pointing at the electronic sticky note 302-1 or the electronic sticky note 302-2, the template of the selected idea divergence method may be displayed on the video display surface 201 as a background image.
Hereinafter, when an electronic sticky note showing an idea presented by a user is distinguished from an electronic sticky note presented by the information processing system 1 to support the discussion, the former is referred to as a user input sticky note and the latter as a support information sticky note.
 例えば、図21の例では、電子付箋301-1及び電子付箋301-2がユーザ入力付箋となり、電子付箋302-1及び電子付箋302-2が支援情報付箋となる。 For example, in the example of FIG. 21, the electronic sticky note 301-1 and the electronic sticky note 301-2 are user input sticky notes, and the electronic sticky note 302-1 and the electronic sticky note 302-2 are support information sticky notes.
 一方、ステップS202において、キーワードの種類が十分であると判定された場合、処理はステップS204に進む。これは、例えば、各ユーザから提示されたアイディアの種類(アイディアの広がり)が十分である場合である。 On the other hand, if it is determined in step S202 that the types of keywords are sufficient, the process proceeds to step S204. This is, for example, when the type of idea (spread of idea) presented by each user is sufficient.
 ステップS204において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、関連情報の提示を選択する。支援方法選択部32は、関連情報の提示を選択したことを出力情報生成部23及び出力制御部24に通知する。 In step S204, the support method selection unit 32 selects the presentation of related information based on the information stored in the action information storage unit 42. The support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the presentation of the related information has been selected.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
 図22は、関連情報の提示方法の例を示している。 FIG. 22 shows an example of a method of presenting related information.
 例えば、映像表示面201において、ユーザ入力付箋321の周囲に支援情報付箋322-1及び支援情報付箋322-2が提示される。支援情報付箋322-1及び支援情報付箋322-2には、例えば、議論のテーマ、又は、ユーザ入力付箋321に示されているアイディアに関連する情報が示される。例えば、ウエブ検索により検索された画像、動画、ニュース等が、関連情報として提示される。 For example, on the video display surface 201, the support information sticky note 322-1 and the support information sticky note 322-2 are presented around the user input sticky note 321. The support information sticky note 322-1 and the support information sticky note 322-2 show, for example, information related to the theme of the discussion or the idea shown in the user input sticky note 321. For example, images, videos, news, etc. searched by web search are presented as related information.
 この例では、ユーザ入力付箋321にコーヒーに関するアイディアが示されており、支援情報付箋322-1及び支援情報付箋322-2に、コーヒーに関連する情報が示されている。例えば、支援情報付箋322-1には、コーヒーの画像が示され、支援情報付箋322-2には、コーヒーに関連するニュースが示されている。 In this example, the user input sticky note 321 shows an idea about coffee, and the support information sticky note 322-1 and the support information sticky note 322-2 show information related to coffee. For example, the support information sticky note 322-1 shows an image of coffee, and the support information sticky note 322-2 shows news related to coffee.
 なお、例えば、あるユーザに対して、他のユーザのアイディアを関連情報として提示するようにしてもよい。また、例えば、複数のグループが並行して議論を行っている場合、他のグループのアイディアを関連情報として提示するようにしてもよい。 Note that, for example, the idea of another user may be presented to a certain user as related information. Further, for example, when a plurality of groups are discussing in parallel, the ideas of other groups may be presented as related information.
 さらに、例えば、各ユーザの発話内容に関連した関連情報が提示されるようにしてもよい。例えば、各ユーザの発話内容から頻出のキーワードが抽出され、抽出されたキーワードに関連する関連情報が提示されるようにしてもよい。 Further, for example, related information related to the utterance content of each user may be presented. For example, frequently-used keywords may be extracted from the utterance contents of each user, and related information related to the extracted keywords may be presented.
 このように、各ユーザは、提示された関連情報に基づいて、すでに出ているアイディアをさらに深めることができる。 In this way, each user can further deepen the ideas that have already come out based on the related information presented.
 ここで、図23乃至図25のフローチャートを参照して、図20のステップS202の判定処理の具体例について説明する。 Here, a specific example of the determination process in step S202 of FIG. 20 will be described with reference to the flowcharts of FIGS. 23 to 25.
 まず、図23のフローチャートを参照して、第1の例について説明する。 First, the first example will be described with reference to the flowchart of FIG. 23.
 ステップS221において、図20のステップS201の処理と同様に、アイディアのキーワードが抽出される。 In step S221, the keyword of the idea is extracted in the same manner as the process of step S201 of FIG.
 ステップS222において、状況検出部31は、キーワードをクラスタリングする。ここでは、生成されるクラスタ数を指定しない解析方法(例えば、NN(Nearest Neighbors)法、群平均法等)が用いられる。 In step S222, the status detection unit 31 clusters keywords. Here, an analysis method that does not specify the number of clusters to be generated (for example, the NN (Nearest Neighbors) method, the group average method, etc.) is used.
 ステップS223において、状況検出部31は、クラスタ数が閾値未満であるか否かを判定する。クラスタ数が閾値未満であると判定された場合、処理はステップS224に進む。 In step S223, the status detection unit 31 determines whether or not the number of clusters is less than the threshold value. If it is determined that the number of clusters is less than the threshold value, the process proceeds to step S224.
 ステップS224において、図20のステップS203の処理と同様に、アイディア発散法の提案が選択される。 In step S224, the proposal of the idea divergence method is selected as in the process of step S203 of FIG.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
 一方、ステップS223において、クラスタ数が閾値以上であると判定された場合、処理はステップS225に進む。 On the other hand, if it is determined in step S223 that the number of clusters is equal to or greater than the threshold value, the process proceeds to step S225.
 ステップS225において、図20のステップS204の処理と同様に、関連情報の提示が選択される。 In step S225, the presentation of related information is selected as in the process of step S204 of FIG.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
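As an illustration of this first criterion, the following sketch clusters keyword vectors with an agglomerative (group-average) method without fixing the number of clusters; the embed() function, the distance threshold, and the cluster-count threshold are assumptions introduced here, not part of the disclosure.

# Illustrative sketch of FIG. 23. embed() stands for an arbitrary word-embedding model and is
# assumed; scikit-learn's AgglomerativeClustering with a distance threshold plays the role of
# a method that does not specify the number of clusters in advance.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def choose_support_by_cluster_count(keywords, embed, cluster_threshold=5, distance_threshold=1.0):
    vectors = np.array([embed(k) for k in keywords])
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold, linkage="average")
    labels = clustering.fit_predict(vectors)
    if len(set(labels)) < cluster_threshold:
        return "propose_idea_divergence_method"      # too few clusters: ideas lack spread
    return "present_related_information"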
 次に、図24のフローチャートを参照して、第2の例について説明する。 Next, a second example will be described with reference to the flowchart of FIG. 24.
 ステップS241において、図20のステップS201の処理と同様に、アイディアのキーワードが抽出される。 In step S241, the keyword of the idea is extracted in the same manner as the process of step S201 of FIG.
 ステップS242において、状況検出部31は、キーワードをクラスタリングする。ここでは、生成されるクラスタ数を指定する解析方法(例えば、k-means法、k-NN法等)が用いられる。なお、生成するクラスタ数は、事前に指定される。 In step S242, the status detection unit 31 clusters keywords. Here, an analysis method (for example, k-means method, k-NN method, etc.) for specifying the number of clusters to be generated is used. The number of clusters to be generated is specified in advance.
 ステップS243において、状況検出部31は、含まれるキーワード数が閾値未満のクラスタが存在するか否かを判定する。含まれるキーワード数が閾値未満のクラスタが存在すると判定された場合、処理はステップS244に進む。 In step S243, the status detection unit 31 determines whether or not there is a cluster in which the number of included keywords is less than the threshold value. If it is determined that there is a cluster in which the number of keywords included is less than the threshold value, the process proceeds to step S244.
 ステップS244において、図20のステップS203の処理と同様に、アイディア発散法の提案が選択される。 In step S244, the proposal of the idea divergence method is selected as in the process of step S203 of FIG.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
 一方、ステップS243において、含まれるキーワード数が閾値未満のクラスタが存在しないと判定された場合、処理はステップS245に進む。 On the other hand, if it is determined in step S243 that there is no cluster in which the number of keywords included is less than the threshold value, the process proceeds to step S245.
 ステップS245において、図20のステップS204の処理と同様に、関連情報の提示が選択される。 In step S245, the presentation of related information is selected as in the process of step S204 of FIG.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
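The second criterion can likewise be sketched as follows, again treating embed(), the fixed number of clusters, and the minimum cluster size as assumptions for illustration.

# Illustrative sketch of FIG. 24: the number of clusters is fixed in advance, and a cluster
# containing fewer keywords than a threshold indicates an under-developed direction of ideas.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def choose_support_by_sparse_cluster(keywords, embed, n_clusters=5, min_keywords=3):
    vectors = np.array([embed(k) for k in keywords])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    cluster_sizes = Counter(labels)
    if any(size < min_keywords for size in cluster_sizes.values()):
        return "propose_idea_divergence_method"
    return "present_related_information"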
 次に、図25のフローチャートを参照して、第3の例について説明する。 Next, a third example will be described with reference to the flowchart of FIG.
 ステップS261において、図20のステップS201の処理と同様に、アイディアのキーワードが抽出される。 In step S261, the keyword of the idea is extracted in the same manner as the process of step S201 of FIG.
In step S262, the situation detection unit 31 determines whether or not any of the specified keywords have yet to appear. For example, keywords related to essential ideas that should come up in the discussion are specified in advance. When some of the specified keywords are not included in the keywords extracted in the process of step S261, the situation detection unit 31 determines that not all of the specified keywords have appeared, and the process proceeds to step S263.
 ステップS263において、図20のステップS203の処理と同様に、アイディア発散法の提案が選択される。 In step S263, the proposal of the idea divergence method is selected as in the process of step S203 of FIG.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
On the other hand, in step S262, when all of the specified keywords are included in the keywords extracted in the process of step S261, the situation detection unit 31 determines that all of the specified keywords have appeared, and the process proceeds to step S264.
 ステップS264において、図20のステップS204の処理と同様に、関連情報の提示が選択される。 In step S264, the presentation of related information is selected as in the process of step S204 of FIG.
 その後、アイディア出し支援方法選択処理は終了する。 After that, the idea generation support method selection process ends.
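The third criterion reduces to a simple set comparison, sketched below with illustrative names.

# Illustrative sketch of FIG. 25: required_keywords are the keywords of the essential ideas
# specified before the discussion.
def choose_support_by_required_keywords(extracted_keywords, required_keywords):
    missing = set(required_keywords) - set(extracted_keywords)
    return "propose_idea_divergence_method" if missing else "present_related_information"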
 図10に戻り、一方、ステップS9において、話し合いの支援を行うと判定された場合、処理はステップS11に進む。 Returning to FIG. 10, on the other hand, if it is determined in step S9 that the discussion is supported, the process proceeds to step S11.
 また、ステップS7において、グループ作業段階、又は、まとめ段階であるあると判定された場合、処理はステップS11に進む。 Further, if it is determined in step S7 that it is a group work stage or a summary stage, the process proceeds to step S11.
 ステップS11において、情報処理部12は、話し合い支援方法選択処理を実行し、その後、処理はステップS12に進む。 In step S11, the information processing unit 12 executes the discussion support method selection process, and then the process proceeds to step S12.
 ここで、図26のフローチャートを参照して、話し合い支援方法選択処理の詳細について説明する。 Here, the details of the discussion support method selection process will be described with reference to the flowchart of FIG.
In step S301, the situation detection unit 31 determines whether or not there is an idea that has not been discussed sufficiently. For example, when, among the ideas presented by the users, there is an idea whose discussed time (hereinafter referred to as the discussion time) is less than a predetermined time, the situation detection unit 31 determines that there is an idea that has not been discussed sufficiently, and the process proceeds to step S302.
 ここで、図27のフローチャートを参照して、各アイディアの話し合い時間の計測方法の例について説明する。 Here, an example of a method of measuring the discussion time of each idea will be described with reference to the flowchart of FIG. 27.
 この処理は、例えば、議論が開始されたとき開始され、議論が終了したとき終了する。 This process starts, for example, when the discussion starts and ends when the discussion ends.
In step S351, the situation detection unit 31 determines, based on the depth data, whether or not there is an electronic sticky note (user input sticky note) that has been pointed at for a predetermined time or longer. This process is repeatedly executed until it is determined that there is an electronic sticky note that has been pointed at for a predetermined time or longer, and when such an electronic sticky note is found, the process proceeds to step S352.
 ステップS352において、状況検出部31は、指さされた電子付箋に記載されているアイディア(以下、計測対象のアイディアと称する)に対する話し合い時間の計測を開始する。 In step S352, the situation detection unit 31 starts measuring the discussion time for the idea (hereinafter referred to as the idea to be measured) described on the pointed electronic sticky note.
 ステップS353において、状況検出部31は、音声データに基づいて、計測対象のアイディアのキーワードが発話内容に含まれるか否かを判定する。計測対象のアイディアのキーワードが発話内容に含まれると判定された場合、処理はステップS354に進む。 In step S353, the situation detection unit 31 determines whether or not the keyword of the idea to be measured is included in the utterance content based on the voice data. If it is determined that the keyword of the idea to be measured is included in the utterance content, the process proceeds to step S354.
 ステップS354において、状況検出部31は、話し合い時間を更新する。 In step S354, the situation detection unit 31 updates the discussion time.
 その後、処理はステップS355に進む。 After that, the process proceeds to step S355.
 一方、ステップS353において、計測対象のアイディアのキーワードが発話内容に含まれないと判定された場合、ステップS354の処理はスキップされ、話し合い時間は更新されずに、処理はステップS355に進む。これにより、計測対象のアイディアのキーワードが発話内容に含まれる期間のみ、当該アイディアの話し合い時間として計測されるようになる。 On the other hand, if it is determined in step S353 that the keyword of the idea to be measured is not included in the utterance content, the process of step S354 is skipped, the discussion time is not updated, and the process proceeds to step S355. As a result, only the period during which the keyword of the idea to be measured is included in the utterance content is measured as the discussion time of the idea.
In step S355, the situation detection unit 31 determines, based on the depth data, whether or not another electronic sticky note (user input sticky note) has been pointed at for a predetermined time or longer. If it is determined that no other electronic sticky note has been pointed at for a predetermined time or longer, the process returns to step S353.
 その後、ステップS355において、他の電子付箋が所定の時間以上指さされたと判定されるまで、ステップS353乃至ステップS355の処理が繰り返し実行される。 After that, in step S355, the processes of steps S353 to S355 are repeatedly executed until it is determined that another electronic sticky note has been pointed for a predetermined time or longer.
 一方、ステップS355において、他の電子付箋が所定の時間以上指さされたと判定された場合、処理はステップS356に進む。 On the other hand, if it is determined in step S355 that another electronic sticky note has been pointed for a predetermined time or longer, the process proceeds to step S356.
 ステップS356において、状況検出部31は、計測中のアイディアに対する話し合い時間の計測を終了する。 In step S356, the situation detection unit 31 ends the measurement of the discussion time for the idea being measured.
 その後、処理はステップS352に戻り、ステップS352以降の処理が実行される。これにより、ステップS355の処理で指さされたと判定された電子付箋に記載されているアイディアに対する話し合い時間の計測が開始される。 After that, the process returns to step S352, and the processes after step S352 are executed. As a result, the measurement of the discussion time for the idea described in the electronic sticky note determined to have been pointed to in the process of step S355 is started.
 このようにして、各アイディアに対する話し合い時間が計測される。 In this way, the discussion time for each idea is measured.
 なお、同じアイディアが複数回計測対象になった場合は、当該アイディアに対する話し合い時間が積算される。 If the same idea is measured multiple times, the discussion time for that idea will be added up.
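One possible way to realize this measurement is sketched below; it simplifies the flow of FIG. 27 by assuming periodic updates at a fixed interval and by omitting the predetermined pointing duration, and all names are illustrative.

# Illustrative sketch of the per-idea discussion-time measurement. note_keywords maps each
# electronic sticky note to its keywords; update() is assumed to be called every dt seconds
# with the sticky note currently pointed at (or None) and the recognized speech.
from collections import defaultdict

class DiscussionTimer:
    def __init__(self, note_keywords, dt=1.0):
        self.note_keywords = note_keywords
        self.discussion_time = defaultdict(float)    # note id -> accumulated seconds
        self.current = None                          # idea currently being measured
        self.dt = dt

    def update(self, pointed_note, utterance):
        if pointed_note is not None and pointed_note != self.current:
            self.current = pointed_note              # switch measurement to the newly pointed idea
        if self.current is None:
            return
        # Accumulate time only while a keyword of the measured idea appears in the utterance.
        if any(keyword in utterance for keyword in self.note_keywords[self.current]):
            self.discussion_time[self.current] += self.dt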
 図26に戻り、ステップS302において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、議論対象の変更の提案を選択する。支援方法選択部32は、議論対象の変更の提案を選択したことを出力情報生成部23及び出力制御部24に通知する。 Returning to FIG. 26, in step S302, the support method selection unit 32 selects a proposal for changing the subject of discussion based on the information stored in the action information storage unit 42. The support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the proposal for the change to be discussed has been selected.
 その後、話し合い支援方法選択処理は終了する。 After that, the discussion support method selection process ends.
 図28は、議論対象の変更の提案方法の例を示している。 FIG. 28 shows an example of a method of proposing a change to be discussed.
For example, among the user input sticky notes 341-1 to 341-4 displayed on the video display surface 201, a visual effect 342, which is support information that highlights the user input sticky note 341-3 on which an idea that has not been discussed sufficiently is written, is displayed.
 視覚効果342には、各ユーザをユーザ入力付箋341-3に注目させることができる範囲内で任意の種類の視覚効果を適用することができる。例えば、視覚効果342により、ユーザ入力付箋341-3の周囲が光ったり、点滅したり、他の領域と異なる背景色が表示されたりする。 Any kind of visual effect can be applied to the visual effect 342 within a range in which each user can pay attention to the user input sticky note 341-3. For example, the visual effect 342 causes the surroundings of the user-input sticky note 341-3 to shine or blink, or to display a background color different from other areas.
 また、例えば、視覚効果342として、ユーザ入力付箋341-3の表示態様(例えば、色、形、大きさ、透過度等)を変化させるようにしてもよい。 Further, for example, as a visual effect 342, the display mode (for example, color, shape, size, transparency, etc.) of the user input sticky note 341-3 may be changed.
 これにより、ユーザは話し合いが不足しているユーザ入力付箋341-3に記載されているアイディアに対する話し合いが促され、すなわち、議論対象の変更が促される。そして、話し合いが不足しているアイディアに議論対象が変更されることにより、議論が活性化される。 This encourages the user to discuss the idea described in the user input sticky note 341-3, which is lacking in discussion, that is, to change the subject of discussion. Then, the discussion is activated by changing the subject of discussion to an idea that is lacking in discussion.
On the other hand, in step S301, for example, when there is no idea, among the ideas presented by the users, whose discussion time is less than the predetermined time, the situation detection unit 31 determines that there is no idea that has not been discussed sufficiently, and the process proceeds to step S303.
In step S303, the situation detection unit 31 determines whether or not negative opinions have been expressed. For example, when the situation detection unit 31 detects words expressing negative opinions such as "boring" or "no good" in the voice data for the latest T seconds, it determines that negative opinions have been expressed, and the process proceeds to step S304.
In step S304, the support method selection unit 32 selects the presentation of a positive evaluation based on the information stored in the action information storage unit 42. The support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the presentation of a positive evaluation has been selected.
 その後、話し合い支援方法選択処理は終了する。 After that, the discussion support method selection process ends.
 図29は、ポジティブな評価の提示方法の例を示している。 FIG. 29 shows an example of a method of presenting a positive evaluation.
 例えば、これまでの議論においてポジティブな評価が与えられたアイディアが記載されたユーザ入力付箋361に対して、ポジティブな評価を示す支援情報である視覚情報362が提示される。 For example, visual information 362, which is support information indicating a positive evaluation, is presented to a user input sticky note 361 in which an idea given a positive evaluation in the discussion so far is described.
 これにより、例えば、ネガティブな話し合いが続いている場合に、各ユーザにポジティブな評価が与えられたアイディアに注目させ、話し合いをポジティブな方向に誘導することにより、議論が活性化される。 As a result, for example, when negative discussions continue, the discussions are activated by focusing on the ideas given to each user with a positive evaluation and guiding the discussions in a positive direction.
 ここで、図30のフローチャートを参照して、ポジティブな評価が与えられたアイディアの検出方法について説明する。なお、図30は、図27の話し合い時間計測処理の変形例でもある。 Here, a method for detecting an idea given a positive evaluation will be described with reference to the flowchart of FIG. Note that FIG. 30 is also a modified example of the discussion time measurement process of FIG. 27.
In step S401, similarly to the process of step S351 of FIG. 27, it is determined whether or not there is an electronic sticky note that has been pointed at for a predetermined time or longer. This process is repeatedly executed until it is determined that there is such an electronic sticky note, and when it is determined that there is an electronic sticky note that has been pointed at for a predetermined time or longer, the process proceeds to step S402.
 ステップS402において、図27のステップS352の処理と同様に、指さされた電子付箋に記載されているアイディアに対する話し合い時間の計測が開始される。 In step S402, as in the process of step S352 of FIG. 27, the measurement of the discussion time for the idea described in the pointed electronic sticky note is started.
 ステップS403において、状況検出部31は、音声データに基づいて、ポジティブなキーワードのカウントを開始する。例えば、状況検出部31は、音声データ中に「良い」、「おもしろい」等のポジティブなキーワードを検出した回数のカウントを開始する。 In step S403, the situation detection unit 31 starts counting positive keywords based on the voice data. For example, the situation detection unit 31 starts counting the number of times that a positive keyword such as "good" or "interesting" is detected in the voice data.
 なお、例えば、カウントの対象となるポジティブなキーワードは、予め設定される。 For example, positive keywords to be counted are set in advance.
 ステップS404乃至ステップS406において、図27のステップS353乃至ステップS355と同様の処理が実行される。 In steps S404 to S406, the same processing as in steps S353 to S355 of FIG. 27 is executed.
 そして、ステップS406において、他の電子付箋が所定の時間以上指さされたと判定された場合、処理はステップS407に進む。 Then, in step S406, if it is determined that another electronic sticky note has been pointed for a predetermined time or longer, the process proceeds to step S407.
 ステップS407において、状況検出部31は、計測中のアイディアに対する話し合い時間の計測及びポジティブなキーワードのカウントを終了する。 In step S407, the situation detection unit 31 ends the measurement of the discussion time for the idea being measured and the counting of the positive keywords.
 その後、処理はステップS402に戻り、ステップS402以降の処理が実行される。これにより、ステップS406の処理で指さされたと判定された電子付箋に記載されているアイディアに対する話し合い時間の計測及びポジティブなキーワードのカウントが開始される。 After that, the process returns to step S402, and the processes after step S402 are executed. As a result, the measurement of the discussion time and the counting of positive keywords for the idea described in the electronic sticky note determined to have been pointed to in the process of step S406 are started.
 なお、同じアイディアが複数回計測対象になった場合は、当該アイディアに対する話し合い時間、及び、ポジティブなキーワードのカウントが積算される。 If the same idea is measured multiple times, the discussion time for the idea and the count of positive keywords will be added up.
 そして、例えば、ポジティブなキーワードのカウント数が所定の閾値以上のアイディアが、ポジティブな評価が与えられたアイディアとされる。 And, for example, an idea in which the count number of positive keywords is equal to or greater than a predetermined threshold value is regarded as an idea given a positive evaluation.
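 A rough sketch of the bookkeeping described for FIG. 30 is shown below; it is not taken from the disclosure itself. It accumulates, per idea, the discussion time and the number of detected positive keywords, and treats an idea as positively evaluated when the count reaches a threshold. The keyword list, the threshold, and the identifiers are illustrative assumptions.

```python
from collections import defaultdict

POSITIVE_KEYWORDS = ["good", "interesting"]  # assumed; set in advance
POSITIVE_COUNT_THRESHOLD = 5                 # assumed threshold

class IdeaEvaluationTracker:
    def __init__(self):
        # Accumulated across repeated measurement periods for the same idea.
        self.stats = defaultdict(lambda: {"time_s": 0.0, "positive": 0})

    def add_measurement_period(self, idea_id, duration_s, transcript):
        entry = self.stats[idea_id]
        entry["time_s"] += duration_s
        entry["positive"] += sum(transcript.lower().count(kw) for kw in POSITIVE_KEYWORDS)

    def positively_evaluated_ideas(self):
        # Ideas whose positive-keyword count is at or above the threshold.
        return [idea for idea, s in self.stats.items()
                if s["positive"] >= POSITIVE_COUNT_THRESHOLD]
```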
 図26に戻り、一方、ステップS303において、状況検出部31は、直近のT秒間の音声データにおいて否定的な意見を示す言葉が検出されなかった場合、否定的な意見が出ていないと判定し、処理はステップS305に進む。 Returning to FIG. 26, on the other hand, in step S303, when no word indicating a negative opinion is detected in the voice data of the most recent T seconds, the situation detection unit 31 determines that no negative opinion has been given, and the process proceeds to step S305.
 ステップS305において、状況検出部31は、議論が停滞した回数が閾値以上であるか否かを判定する。例えば、状況検出部31は、議論の段階毎に、図10のステップS6において、議論が停滞していると判定された回数をカウントする。そして、状況検出部31は、現在の議論の段階において議論が停滞したと判定された回数が閾値以上であると判定した場合、処理はステップS306に進む。 In step S305, the situation detection unit 31 determines whether or not the number of times the discussion has stagnated is equal to or greater than the threshold value. For example, the situation detection unit 31 counts the number of times it is determined that the discussion is stagnant in step S6 of FIG. 10 for each stage of the discussion. Then, when the situation detection unit 31 determines that the number of times the discussion has been determined to be stagnant at the current discussion stage is equal to or greater than the threshold value, the process proceeds to step S306.
 ステップS306において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、気分転換の提案を選択する。支援方法選択部32は、気分転換の提案を選択したことを出力情報生成部23及び出力制御部24に通知する。 In step S306, the support method selection unit 32 selects a mood change proposal based on the information stored in the action information storage unit 42. The support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the mood change proposal has been selected.
 その後、話し合い支援方法選択処理は終了する。 After that, the discussion support method selection process ends.
 図31は、気分転換の提案方法の例を示している。 FIG. 31 shows an example of a method of proposing a change of mood.
 例えば、映像表示面201において、ユーザ入力付箋381-1及びユーザ入力付箋381-2の間に、支援情報付箋382-1及び支援情報付箋382-2が提示される。支援情報付箋382-1及び支援情報付箋382-2には、気分転換の方法を示す情報が示される。例えば、ゲーム、雑談の話題、議論に関係ない動画又は画像、食べ物のデリバリ等が提案される。 For example, on the video display surface 201, the support information sticky note 382-1 and the support information sticky note 382-2 are presented between the user input sticky note 381-1 and the user input sticky note 381-2. The support information sticky note 382-1 and the support information sticky note 382-2 show information indicating a method of changing mood. For example, games, chat topics, non-discussion videos or images, food delivery, etc. are proposed.
 これにより、例えば、各ユーザが一旦議論を離れて気分転換することが促され、各ユーザが気分転換を行うことにより、議論が活性化される。 This, for example, encourages each user to leave the discussion and change their mood, and when each user changes their mood, the discussion is activated.
 なお、例えば、気分転換の提案が選択された場合、現在の議論の段階における議論の停滞回数のカウントがリセットされるようにしてもよい。これにより、繰り返し気分転換の提案が実行されることが防止される。 Note that, for example, when a mood change proposal is selected, the count of the number of stagnant discussions at the current discussion stage may be reset. This prevents repeated mood change proposals from being executed.
 一方、ステップS305において、状況検出部31は、現在の議論の段階において議論が停滞したと判定された回数が閾値未満であると判定した場合、処理はステップS307に進む。 On the other hand, in step S305, if the situation detection unit 31 determines that the number of times the discussion has been determined to be stagnant at the current discussion stage is less than the threshold value, the process proceeds to step S307.
 ステップS307において、支援方法選択部32は、アクション情報蓄積部42に蓄積されている情報に基づいて、アイディア整理法の提案を選択する。支援方法選択部32は、アイディア整理法の提案を選択したことを出力情報生成部23及び出力制御部24に通知する。 In step S307, the support method selection unit 32 selects a proposal for an idea organizing method based on the information stored in the action information storage unit 42. The support method selection unit 32 notifies the output information generation unit 23 and the output control unit 24 that the proposal of the idea organizing method has been selected.
 その後、話し合い支援方法選択処理は終了する。 After that, the discussion support method selection process ends.
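 The branching of steps S303 to S307 can be summarized as in the short sketch below. It is only a schematic restatement of the flow described above; the threshold value and the returned action names are assumptions.

```python
STAGNATION_COUNT_THRESHOLD = 3  # assumed threshold for step S305

def select_discussion_support(negative_opinion_detected, stagnation_count):
    if negative_opinion_detected:                       # step S303 -> S304
        return "present_positive_evaluation"
    if stagnation_count >= STAGNATION_COUNT_THRESHOLD:  # step S305 -> S306
        return "propose_mood_change"
    return "propose_idea_organizing_method"             # step S307
```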
 図32は、アイディア整理法の提案方法の例を示している。 FIG. 32 shows an example of a method of proposing an idea organizing method.
 例えば、映像表示面201において、ユーザ入力付箋401-1及びユーザ入力付箋401-2の間に、支援情報付箋402-1及び支援情報付箋402-2が提示される。支援情報付箋402-1及び支援情報付箋402-2には、KJ法、2軸グラフ等のアイディア整理法のテンプレートが示されている。 For example, on the video display surface 201, the support information sticky note 402-1 and the support information sticky note 402-2 are presented between the user input sticky note 401-1 and the user input sticky note 401-2. The support information sticky note 402-1 and the support information sticky note 402-2 show templates for the KJ method and the idea organizing method such as a two-axis graph.
 ユーザは、支援情報付箋402-1及び支援情報付箋402-2に示されるアイディア整理法を実行することにより、例えば、アイディアを整理し、発散した話し合いを収束させることができる。 By executing the idea organizing method shown in the support information sticky note 402-1 and the support information sticky note 402-2, for example, the user can organize the ideas and converge the divergent discussion.
 なお、支援方法選択部32は、議論の内容や状況に応じて、適切なアイディア整理法を選択する。 The support method selection unit 32 selects an appropriate idea organizing method according to the content and situation of the discussion.
 また、例えば、ユーザが、支援情報付箋402-1又は支援情報付箋402-2を指さす等により、提案されたアイディア整理法を選択した場合、選択されたアイディア整理法のテンプレートが背景画像として映像表示面201に表示されるようにしてもよい。 Further, for example, when the user selects a proposed idea organizing method, for example by pointing at the support information sticky note 402-1 or the support information sticky note 402-2, the template of the selected idea organizing method may be displayed as a background image on the video display surface 201.
 図10に戻り、ステップS12において、提示方法設定部33は、提示方法情報蓄積部43に蓄積されている情報に基づいて、支援情報の提示方法を設定する。 Returning to FIG. 10, in step S12, the presentation method setting unit 33 sets the presentation method of the support information based on the information stored in the presentation method information storage unit 43.
 例えば、提示方法設定部33は、図33のテーブルに従って、議論の段階及びユーザの習熟度に基づいて、支援情報の提示位置を設定する。 For example, the presentation method setting unit 33 sets the presentation position of the support information based on the stage of discussion and the proficiency level of the user according to the table of FIG. 33.
 ユーザの習熟度は、ユーザの議論の能力の1つであり、アイディア出しや話し合いに対する習熟度を示す。例えば、ブレインストーミングやディスカッションの経験が多いユーザは、習熟度が高いと判定され、ブレインストーミングやディスカッションの経験が少ないユーザは、習熟度が低いと判定される。 The user's proficiency level is one of the user's ability to discuss, and indicates the proficiency level for ideas and discussions. For example, a user who has a lot of experience in brainstorming and discussion is judged to have a high proficiency level, and a user who has little experience in brainstorming and discussion is judged to have a low proficiency level.
 例えば、ユーザの習熟度に差がある場合、支援情報の提示位置は、個人作業段階では、個人の周辺に設定され、グループ作業段階及びまとめ段階では、所定の物体の周辺に設定される。また、ユーザ全員の習熟度が高い場合、支援情報の提示位置は、いずれの議論の段階においても、空白スペースに設定される。さらに、ユーザ全員の習熟度が低い場合、支援情報の提示位置は、個人作業段階では、ユーザの視線の先に設定され、グループ作業段階では、画面(映像表示面)の中心に設定され、まとめ段階では、付箋の固まりの周辺に設定される。 For example, when the users differ in proficiency, the presentation position of the support information is set around each individual in the individual work stage, and around a predetermined object in the group work stage and the summary stage. When all the users have high proficiency, the presentation position of the support information is set to a blank space at every stage of the discussion. When all the users have low proficiency, the presentation position of the support information is set ahead of the user's line of sight in the individual work stage, at the center of the screen (video display surface) in the group work stage, and around a cluster of sticky notes in the summary stage.
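 The policy of FIG. 33 can be expressed as a simple lookup table, as in the sketch below. The string labels for stages, proficiency profiles, and positions are illustrative names, not terms from the disclosure.

```python
PRESENTATION_POSITION = {
    # (discussion stage, proficiency profile) -> presentation position
    ("individual", "mixed"):    "around_each_user",
    ("group",      "mixed"):    "around_predetermined_object",
    ("summary",    "mixed"):    "around_predetermined_object",
    ("individual", "all_high"): "blank_space",
    ("group",      "all_high"): "blank_space",
    ("summary",    "all_high"): "blank_space",
    ("individual", "all_low"):  "ahead_of_user_gaze",
    ("group",      "all_low"):  "center_of_display",
    ("summary",    "all_low"):  "around_sticky_note_cluster",
}

def presentation_position(stage, proficiency_profile):
    return PRESENTATION_POSITION[(stage, proficiency_profile)]
```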
 ここで、図34乃至図39を参照して、支援情報の提示位置の具体例について説明する。なお、図34乃至図39において、斜線の矩形のマスが、支援情報を示す支援情報付箋の位置を示し、その他の矩形のマスが、ユーザ入力付箋の位置を示している。 Here, a specific example of the presentation position of the support information will be described with reference to FIGS. 34 to 39. In FIGS. 34 to 39, the diagonally shaded rectangular squares indicate the positions of the support information sticky notes indicating the support information, and the other rectangular squares indicate the positions of the user input sticky notes.
 図34は、個人の周辺に支援情報を提示する例を示している。例えば、議論に参加している各ユーザの近くに支援情報付箋が提示される。 FIG. 34 shows an example of presenting support information around an individual. For example, a support information sticky note is presented near each user participating in the discussion.
 図35は、所定の物体の周辺に支援情報を提示する例を示している。例えば、映像表示面201上に置かれている物体451の周辺に支援情報付箋が提示される。 FIG. 35 shows an example of presenting support information around a predetermined object. For example, a support information sticky note is presented around the object 451 placed on the video display surface 201.
 図36は、空白スペースに支援情報を提示する例を示している。例えば、映像表示面201上のユーザ入力付箋の密度が低い位置(ユーザ入力付箋があまり配置されていない位置)に支援情報付箋が提示される。 FIG. 36 shows an example of presenting support information in a blank space. For example, the support information sticky note is presented at a position on the video display surface 201 where the density of the user input sticky note is low (a position where the user input sticky note is not so arranged).
 図37は、ユーザの視線の先に支援情報を提示する例を示している。例えば、映像表示面201上において、各ユーザの視線の先に支援情報付箋が提示される。 FIG. 37 shows an example of presenting support information ahead of the user's line of sight. For example, on the video display surface 201, the support information sticky note is presented ahead of each user's line of sight.
 図38は、映像表示面201の中心に支援情報を提示する例を示している。例えば、映像表示面201の中心付近に支援情報付箋が提示される。 FIG. 38 shows an example in which support information is presented at the center of the video display surface 201. For example, a support information sticky note is presented near the center of the video display surface 201.
 図39は、付箋の固まりの周辺に支援情報を提示する例を示している。例えば、映像表示面201上で、ユーザ入力付箋の密度が高い位置(ユーザ入力付箋が密集している位置)付近に支援情報付箋が表示される。 FIG. 39 shows an example of presenting support information around a mass of sticky notes. For example, the support information sticky note is displayed on the video display surface 201 near a position where the density of the user input sticky notes is high (a position where the user input sticky notes are densely packed).
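 One conceivable way to find a "blank space" (FIG. 36) or a position near a mass of sticky notes (FIG. 39) is to divide the video display surface into a grid and pick the cell with the lowest or highest density of user input sticky notes. The sketch below is an assumption for illustration; the disclosure does not specify a particular algorithm.

```python
def pick_presentation_position(sticky_positions, display_w, display_h,
                               cells=8, densest=False):
    """Return the center of the grid cell with the fewest (blank space) or the
    most (sticky note cluster) user input sticky notes."""
    counts = [[0] * cells for _ in range(cells)]
    for x, y in sticky_positions:
        cx = min(int(x / display_w * cells), cells - 1)
        cy = min(int(y / display_h * cells), cells - 1)
        counts[cy][cx] += 1
    flat = [(counts[cy][cx], cx, cy) for cy in range(cells) for cx in range(cells)]
    _, cx, cy = max(flat) if densest else min(flat)
    return ((cx + 0.5) * display_w / cells, (cy + 0.5) * display_h / cells)
```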
 また、提示方法設定部33は、支援アクションとして関連情報の提示が行われる場合、例えば、図23のステップS222の処理において得られたキーワードのクラスタ数に基づいて、提示する関連情報の数(以下、情報量と称する)を設定する。 Further, when the presentation of related information is performed as a support action, the presentation method setting unit 33 sets the number of pieces of related information to be presented (hereinafter referred to as the amount of information) based on, for example, the number of keyword clusters obtained in the process of step S222 of FIG. 23.
 図40乃至図42は、クラスタ数と提示する情報量との関係の例を示すグラフである。各グラフの横軸はクラスタ数を示し、縦軸は提示する情報量を示している。 40 to 42 are graphs showing an example of the relationship between the number of clusters and the amount of information to be presented. The horizontal axis of each graph shows the number of clusters, and the vertical axis shows the amount of information to be presented.
 どの例においても、クラスタ数が増加するほど、提示する情報量が減少している。 In each example, as the number of clusters increased, the amount of information presented decreased.
 具体的には、図40の例では、クラスタ数の増加に伴い、提示する情報量がI_max個からI_min個まで線形に減少している。 Specifically, in the example of FIG. 40, as the number of clusters increases, the amount of information presented linearly decreases from I_max to I_min.
 図41の例では、クラスタ数の増加に伴い、提示する情報量が単調に減少するとともに、クラスタ数が増加するほど、提示する情報量の減少率が大きくなっている。 In the example of FIG. 41, the amount of information presented monotonously decreases as the number of clusters increases, and the rate of decrease in the amount of information presented increases as the number of clusters increases.
 図42の例では、クラスタ数の増加に伴い、提示する情報量が階段状に減少している。 In the example of FIG. 42, the amount of information presented decreases stepwise as the number of clusters increases.
 このように、クラスタ数の増加に伴い、提示する関連情報の数を減少させることにより、情報過多によりアイディア出しが終わらず、グループ作業段階への移行が遅れることが防止される。 In this way, by reducing the number of pieces of related information to be presented as the number of clusters increases, it is possible to prevent a situation in which information overload keeps idea generation from finishing and delays the transition to the group work stage.
 また、例えば、アクティブラーニング等の教育的な場面において、前半のアイディア数が少ない時は多くヒント(関連情報)を出すことで、各ユーザがアイディアを出すことや話し合うことに慣れることができる。一方、後半ではヒントを減らすことで、各ユーザが自力でアイディアを考え出すことを学ぶことができる。 Also, for example, in educational situations such as active learning, when the number of ideas in the first half is small, by giving many hints (related information), each user can get used to giving ideas and discussing. On the other hand, in the second half, by reducing hints, each user can learn to come up with ideas on their own.
 なお、クラスタ数(アイディアの種類の数)の代わりに、又は、クラスタ数とともに、各ユーザから提示されたアイディアの総数に基づいて、提示する関連情報の数を設定するようにしてもよい。 Note that the number of related information to be presented may be set instead of the number of clusters (the number of types of ideas) or based on the total number of ideas presented by each user together with the number of clusters.
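 The three relationships of FIGS. 40 to 42 can be sketched as follows. The bounds I_MAX and I_MIN, the maximum cluster count, and the step boundaries are placeholders; only the qualitative shapes (linear, accelerating, and stepwise decrease) follow the figures.

```python
I_MAX, I_MIN, N_MAX = 10, 2, 20  # assumed bounds on information amount and cluster count

def linear_amount(num_clusters):        # FIG. 40: linear decrease from I_MAX to I_MIN
    t = min(num_clusters, N_MAX) / N_MAX
    return round(I_MAX - (I_MAX - I_MIN) * t)

def accelerating_amount(num_clusters):  # FIG. 41: monotone decrease, rate grows with clusters
    t = min(num_clusters, N_MAX) / N_MAX
    return round(I_MAX - (I_MAX - I_MIN) * t ** 2)

def stepwise_amount(num_clusters):      # FIG. 42: stepwise decrease
    if num_clusters < 5:
        return I_MAX
    if num_clusters < 12:
        return (I_MAX + I_MIN) // 2
    return I_MIN
```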
 提示方法設定部33は、設定した提示方法を示す情報を、出力情報生成部23及び出力制御部24に供給する。 The presentation method setting unit 33 supplies information indicating the set presentation method to the output information generation unit 23 and the output control unit 24.
 図10に戻り、ステップS13において、情報処理システム1は、支援アクションを実行する。具体的には、出力情報生成部23は、選択された支援アクションに基づいて、ユーザに提示する支援情報を生成し、出力制御部24に供給する。出力部14は、出力制御部24の制御の下に、設定された提示方法に従って支援情報を提示する。 Returning to FIG. 10, in step S13, the information processing system 1 executes a support action. Specifically, the output information generation unit 23 generates support information to be presented to the user based on the selected support action, and supplies the support information to the output control unit 24. The output unit 14 presents the support information under the control of the output control unit 24 according to the set presentation method.
 例えば、個人作業段階においてアイディア出しの支援が行われる場合、アイディア発散法の提案、又は、関連情報の提示が行われる。 For example, when support for idea generation is provided at the individual work stage, an idea divergence method is proposed or related information is presented.
 アイディア発散法の提案が行われる場合、図21を参照して上述したように、アイディア発散法のテンプレートを示す支援情報付箋が生成され、映像表示面201に提示される。 When the idea divergence method is proposed, as described above with reference to FIG. 21, a support information sticky note showing the template of the idea divergence method is generated and presented on the video display surface 201.
 関連情報の提示が行われる場合、図22を参照して上述したように、関連情報を示す支援情報付箋が生成され、映像表示面201に提示される。 When the related information is presented, as described above with reference to FIG. 22, a support information sticky note indicating the related information is generated and presented on the video display surface 201.
 いずれの場合においても、ユーザ習熟度に差がある場合、図34に示されるように、各ユーザの周辺に支援情報付箋が提示される。すなわち、各ユーザが見やすく、気付きやすい位置に支援情報付箋が提示される。 In any case, if there is a difference in user proficiency, a support information sticky note is presented around each user as shown in FIG. 34. That is, the support information sticky note is presented at a position that is easy for each user to see and notice.
 また、ユーザ全員の習熟度が高い場合、図36に示されるように、空白スペースに支援情報付箋が提示される。例えば、習熟度が高いユーザは、支援情報付箋が視界の中心付近に提示されると、却って集中状態が妨げられるおそれがある。従って、各ユーザの集中状態を阻害しない位置に支援情報付箋が提示される。 If all users are highly proficient, a support information sticky note is presented in a blank space as shown in FIG. 36. For example, a highly proficient user may be hindered from concentrating when the support information sticky note is presented near the center of the field of vision. Therefore, the support information sticky note is presented at a position that does not hinder the concentration state of each user.
 さらに、ユーザ全員の習熟度が低い場合、図37に示されるように、各ユーザの視線の先に支援情報付箋が提示される。すなわち、各ユーザの周辺よりも、さらに各ユーザが見やすく、気付きやすい位置に支援情報付箋が提示される。 Further, when the proficiency level of all the users is low, as shown in FIG. 37, the support information sticky note is presented in front of each user's line of sight. That is, the support information sticky note is presented at a position that is easier for each user to see and notice than the surroundings of each user.
 また、例えば、個人作業段階において話し合いの支援が行われる場合、グループ作業段階の場合、又は、まとめ段階の場合、議論対象の変更の提案、ポジティブな評価の提示、気分転換の提案、又は、アイディア整理法の提案が行われる。 Further, for example, when discussion support is performed in the individual work stage, in the group work stage, or in the summary stage, a change of the discussion target is proposed, a positive evaluation is presented, a change of mood is proposed, or an idea organizing method is proposed.
 議論対象の変更の提案が行われる場合、図28を参照して上述したように、映像表示面201において、話し合いが不足しているアイディアを示すユーザ入力付箋に対して、視覚効果が提示される。 When a change of the discussion target is proposed, as described above with reference to FIG. 28, a visual effect is presented on the video display surface 201 for the user input sticky note showing an idea that has not been discussed sufficiently.
 ポジティブな評価の提示が行われる場合、図29を参照して上述したように、映像表示面201において、ポジティブな評価が与えられたアイディアを示すユーザ入力付箋に対して、視覚情報が提示される。 When a positive evaluation is presented, as described above with reference to FIG. 29, visual information is presented on the video display surface 201 for the user input sticky note showing an idea that has been given a positive evaluation.
 気分転換の提案が行われる場合、図31を参照して上述したように、気分転換の方法を示す支援情報付箋が生成され、映像表示面201に提示される。 When a mood change proposal is made, as described above with reference to FIG. 31, a support information sticky note indicating the mood change method is generated and presented on the video display surface 201.
 アイディア整理法の提案が行われる場合、図32を参照して上述したように、アイディア整理法のテンプレートを示す支援情報付箋が生成され、映像表示面201に提示される。 When a proposal for an idea organizing method is made, as described above with reference to FIG. 32, a support information sticky note showing a template for the idea organizing method is generated and presented on the video display surface 201.
 気分転換の提案又はアイディア整理法の提案が行われる場合、図33を参照して上述したように、議論の段階、及び、ユーザの習熟度により、支援情報付箋の提示位置が異なる。 When a mood change proposal or an idea organization method proposal is made, as described above with reference to FIG. 33, the presentation position of the support information sticky note differs depending on the stage of discussion and the proficiency level of the user.
 例えば、個人作業段階においては、上述したアイディア発散法の提案、又は、関連情報の提示が行われる場合と同様に、ユーザ習熟度に差がある場合、図34に示されるように、各ユーザの周辺に支援情報付箋が提示される。ユーザ全員の習熟度が高い場合、図36に示されるように、空白スペースに支援情報付箋が提示される。ユーザ全員の習熟度が低い場合、図37に示されるように、各ユーザの視線の先に支援情報付箋が提示される。 For example, in the individual work stage, as in the case where the above-described idea divergence method is proposed or related information is presented, when the users differ in proficiency, a support information sticky note is presented around each user as shown in FIG. 34. When all the users have high proficiency, the support information sticky note is presented in a blank space as shown in FIG. 36. When all the users have low proficiency, the support information sticky note is presented ahead of each user's line of sight as shown in FIG. 37.
 例えば、グループ作業段階においては、ユーザ習熟度に差がある場合、図35に示されるように、所定の物体の周辺に支援情報付箋が提示される。これにより、例えば、各ユーザの妨げにならない位置に、議論支援対象を提示することが可能になる。また、例えば、習熟度が低いユーザは、所定の物体の周辺を見ることにより、支援情報を容易に得ることが可能になる。 For example, in the group work stage, when there is a difference in user proficiency, a support information sticky note is presented around a predetermined object as shown in FIG. 35. As a result, for example, it becomes possible to present the discussion support target at a position that does not interfere with each user. Further, for example, a user with a low proficiency level can easily obtain support information by looking around a predetermined object.
 また、ユーザ全員の習熟度が高い場合、上述したのと同様の理由により、図36に示されるように、空白スペースに支援情報付箋が提示される。 Further, when the proficiency level of all the users is high, the support information sticky note is presented in the blank space as shown in FIG. 36 for the same reason as described above.
 さらに、ユーザ全員の習熟度が低い場合、図38に示されるように、映像表示面201の中心に支援情報付箋が提示される。すなわち、各ユーザが見やすく、気付きやすい位置に支援情報付箋が提示される。 Further, when the proficiency level of all the users is low, the support information sticky note is presented at the center of the video display surface 201 as shown in FIG. 38. That is, the support information sticky note is presented at a position that is easy for each user to see and notice.
 例えば、まとめ段階においては、ユーザ習熟度に差がある場合、上述したのと同様の理由により、図35に示されるように、所定の物体の周辺に支援情報付箋が提示される。 For example, in the summary stage, when there is a difference in user proficiency, a support information sticky note is presented around a predetermined object as shown in FIG. 35 for the same reason as described above.
 また、ユーザ全員の習熟度が高い場合、上述したのと同様の理由により、図36に示されるように、空白スペースに支援情報付箋が提示される。 Further, when the proficiency level of all the users is high, the support information sticky note is presented in the blank space as shown in FIG. 36 for the same reason as described above.
 さらに、ユーザ全員の習熟度が低い場合、図39に示されるように、ユーザ入力付箋の密度が高い位置周辺に支援情報付箋が提示される。例えば、まとめ段階においては、話題の中心となっているアイディアが記載されたユーザ入力付箋や、類似するアイディアが記載されたユーザ入力付箋が一カ所に集められることが多い。従って、各ユーザが見やすく、気付きやすいように、ユーザ入力付箋の密度が高い位置周辺に支援情報付箋が提示される。 Further, when the proficiency level of all the users is low, as shown in FIG. 39, the support information sticky note is presented around the position where the density of the user input sticky note is high. For example, at the summary stage, user-input sticky notes containing ideas that are the center of the topic and user-input sticky notes containing similar ideas are often collected in one place. Therefore, the support information sticky note is presented around the position where the density of the user input sticky note is high so that each user can easily see and notice it.
 その後、処理はステップS1に戻り、ステップS1以降の処理が実行される。 After that, the process returns to step S1, and the processes after step S1 are executed.
 以上のようにして、議論の停滞時に、各ユーザにより自主的に議論を活性化させることが可能になる。また、各ユーザの集中を阻害することなく、議論の活性化を促すことが可能になる。さらに、各ユーザに議論への自主的な取り組みを促すことが可能になる。その結果、例えば、議論の進行役を設定してなくても、各ユーザが、容易に議論を活性化し、進行することが可能になる。 As described above, when the discussion is stagnant, each user can voluntarily activate the discussion. In addition, it is possible to promote the activation of discussion without hindering the concentration of each user. Furthermore, it becomes possible to encourage each user to voluntarily engage in discussions. As a result, for example, each user can easily activate and proceed with the discussion without setting a facilitator of the discussion.
 また、例えば、適切な支援情報が適切なタイミングで適切な位置に提示されるので、各ユーザの議論の能力を向上させることができる。 Also, for example, since appropriate support information is presented at an appropriate position at an appropriate timing, it is possible to improve the discussion ability of each user.
 さらに、例えば、議論を活性化することにより、ユーザ間のコミュニケーションを活性化することが可能になる。 Furthermore, for example, by activating discussions, it becomes possible to activate communication between users.
 <<2.変形例>>
 以下、上述した本技術の実施の形態の変形例について説明する。
<< 2. Modification example >>
Hereinafter, a modified example of the above-described embodiment of the present technology will be described.
  <支援情報に関する変形例>
 以上の説明では、支援情報を視覚的に提示する例を示したが、例えば、支援情報を聴覚的に提示するようにしてもよい。
<Modified example of support information>
In the above description, an example of visually presenting the support information has been shown, but for example, the support information may be presented audibly.
 例えば、図43に示されるように、映像表示面201の周囲で議論をしているユーザA及びユーザBに対して、音声により支援情報を提示するようにしてもよい。 For example, as shown in FIG. 43, the support information may be presented by voice to the user A and the user B who are discussing around the video display surface 201.
 例えば、関連情報に含まれるニュースやウエブサイト等の内容を読み上げた音声が出力されるようにしてもよい。 For example, a voice that reads out the contents of news, websites, etc. included in the related information may be output.
 例えば、「KJ法のテンプレートを使ってみませんか」等の音声により、アイディア発散法又はアイディア整理法の各種のテンプレートの使用をユーザに促すようにしてもよい。 For example, a voice such as "Why don't you use the template of the KJ method" may be used to encourage the user to use various templates of the idea divergence method or the idea organization method.
 例えば、「Aさんの付箋に記載されている意見について話し合ってみませんか」等の音声により、議論対象の変更をユーザに促すようにしてもよい。 For example, the user may be prompted to change the subject of discussion by voice such as "Why don't you discuss the opinion written on Mr. A's sticky note?"
 例えば、「気分転換してみませんか」、「お昼休憩にしましょう」、「音楽をかけましょうか」等の音声により、気分転換をユーザに促すようにしてもよい。 For example, the user may be encouraged to change his / her mood by voices such as "Why don't you change your mood", "Let's take a lunch break", and "Let's play music".
 例えば、「Aさんの付箋に記載されている意見はよいですね」、「長時間議論しています、お疲れ様です」等の音声により、ポジティブな評価が提示されるようにしてもよい。 For example, a positive evaluation may be presented by voice such as "I like the opinion written on Mr. A's sticky note" or "I have been discussing for a long time, thank you for your hard work".
 また、例えば、議論の最中にBGMを流しておき、議論の状況によってBGMが変更されるようにしてもよい。例えば、議論が活発なときに、テンポの遅いリラックスできるBGMが流れ、議論が停滞しているときに、テンポの速いBGMが流れるようにしてもよい。 Also, for example, the BGM may be played during the discussion so that the BGM can be changed depending on the situation of the discussion. For example, when the discussion is active, a slow-paced relaxing BGM may be played, and when the discussion is stagnant, a fast-paced BGM may be played.
 また、例えば、議論が停滞しているときに流すBGMのテンポを事前にユーザが設定するようにしてもよい。 Further, for example, the user may set the tempo of the BGM to be played when the discussion is stagnant in advance.
 さらに、例えば、議論の開始から数曲はテンポが異なるBGMが流れ、各BGMが流れているときの議論の活発度が計測されるようにしてもよい。そして、例えば、議論が停滞している場合、議論の活発度が最も高かったBGMのテンポに近いテンポのBGMが流れるようにしてもよい。 Further, for example, BGMs having different tempos may be played for several songs from the start of the discussion, and the activity of the discussion when each BGM is played may be measured. Then, for example, when the discussion is stagnant, the BGM having a tempo close to the tempo of the BGM having the highest activity of the discussion may be played.
 なお、常に、BGMのテンポと議論の活発度の相関関係を計測し、最適なテンポを常時更新し、議論の停滞時に最適なテンポに近いBGMが流れるようにしてもよい。 Note that the correlation between the BGM tempo and the activity of the discussion may be measured continuously, the optimum tempo may be constantly updated, and BGM with a tempo close to the optimum tempo may be played when the discussion stagnates.
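 A possible realization of this variation is sketched below: the system keeps records of discussion activity measured while BGM of various tempos is playing and, when stagnation is detected, picks the available BGM whose tempo is closest to the tempo that produced the highest activity. The activity metric and the data layout are assumptions.

```python
class BgmTempoSelector:
    def __init__(self):
        self.samples = []  # (tempo_bpm, measured_activity) pairs

    def record(self, tempo_bpm, activity):
        self.samples.append((tempo_bpm, activity))

    def tempo_for_stagnation(self, available_tempos):
        if not self.samples or not available_tempos:
            return None
        best_tempo, _ = max(self.samples, key=lambda s: s[1])
        # Choose the available BGM whose tempo is closest to the best-performing one.
        return min(available_tempos, key=lambda t: abs(t - best_tempo))
```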
  <支援情報の提示方法に関する変形例>
 以上の説明では、ユーザ全体の議論の状況が検出される例を示したが、例えば、図44に示されるように、ユーザ全体の議論の状況の検出と並行して、又は、ユーザ全体の議論の状況の代わりに、各ユーザの状態が検出されるようにしてもよい。この例では、ユーザA及びユーザBの提示した電子付箋数、声の音量、並びに、手の移動量が個別に検出され、それらに基づいて、各ユーザの状態が個別に検出される例が示されている。
<Modified example of how to present support information>
 In the above description, an example in which the status of the discussion of the users as a whole is detected has been shown. However, for example, as shown in FIG. 44, the state of each user may be detected in parallel with, or instead of, the detection of the status of the discussion of the users as a whole. In this example, the number of electronic sticky notes presented by user A and user B, the volume of their voices, and the amount of movement of their hands are detected individually, and the state of each user is detected individually based on them.
 また、各ユーザの状態に基づいて、ユーザ毎に個別に支援情報が提示されるようにしてもよい。例えば、各ユーザの状態としてユーザの活動量が検出され、ユーザの活動量に基づいて、支援情報の提示が制御されるようにしてもよい。 Further, the support information may be presented individually for each user based on the state of each user. For example, the activity amount of the user may be detected as the state of each user, and the presentation of support information may be controlled based on the activity amount of the user.
 例えば、状況検出部31は、各ユーザから提示される電子付箋数に基づいて、各ユーザのアイディア出しにおける活動量(例えば、提示したアイディアの量)を検出する。そして、例えば、支援方法選択部32及び提示方法設定部33は、各ユーザのアイディア出しにおける活動量及び習熟度等のうち少なくとも1つに基づいて、ユーザ毎に提示する支援情報の内容及び量、並びに、支援情報を提示する位置及びタイミング等を制御する。例えば、提示方法設定部33は、アイディア出しにおける活動量が小さく(例えば、アイディア出しが停滞しており)、かつ、習熟度が低いユーザを選択し、選択したユーザから見えやすい位置に、支援情報の提示位置を設定する。 For example, the situation detection unit 31 detects the amount of activity of each user in idea generation (for example, the amount of ideas presented) based on the number of electronic sticky notes presented by each user. Then, for example, the support method selection unit 32 and the presentation method setting unit 33 control, for each user, the content and amount of the support information to be presented, as well as the position and timing at which the support information is presented, based on at least one of the activity amount and the proficiency level of each user in idea generation. For example, the presentation method setting unit 33 selects a user whose activity amount in idea generation is small (for example, whose idea generation is stagnant) and whose proficiency is low, and sets the presentation position of the support information at a position easily visible to the selected user.
 また、例えば、状況検出部31は、ユーザ毎の声の音量及び手の移動量のうち少なくとも1つに基づいて、各ユーザの話し合いにおける活動量(例えば、発言量)を検出する。そして、例えば、支援方法選択部32及び提示方法設定部33は、各ユーザの話し合いにおける活動量及び習熟度等のうち少なくとも1つに基づいて、ユーザ毎に提示する支援情報の内容及び量、並びに、支援情報を提示する位置及びタイミング等を制御する。例えば、提示方法設定部33は、話し合いにおける活動量が小さく(例えば、発言量が少なく)、かつ、習熟度が低いユーザを選択し、選択したユーザから見えやすい位置に、支援情報の提示位置を設定する。 Further, for example, the situation detection unit 31 detects the amount of activity of each user in the discussion (for example, the amount of speech) based on at least one of the voice volume and the amount of hand movement of each user. Then, for example, the support method selection unit 32 and the presentation method setting unit 33 control, for each user, the content and amount of the support information to be presented, as well as the position and timing at which the support information is presented, based on at least one of the activity amount and the proficiency level of each user in the discussion. For example, the presentation method setting unit 33 selects a user whose activity amount in the discussion is small (for example, who speaks little) and whose proficiency is low, and sets the presentation position of the support information at a position easily visible to the selected user.
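 The per-user handling of FIG. 44 might look like the following sketch, which combines the number of presented electronic sticky notes, voice volume, and hand movement into a single activity score and selects inactive, low-proficiency users as the targets of support information. The weights, the threshold, and the field names are assumptions for illustration.

```python
def activity_score(num_sticky_notes, voice_volume, hand_movement,
                   w_notes=1.0, w_voice=0.5, w_hand=0.5):
    # Weighted combination of the individually sensed quantities.
    return w_notes * num_sticky_notes + w_voice * voice_volume + w_hand * hand_movement

def users_needing_support(users, activity_threshold=1.0):
    """users: list of dicts with keys 'name', 'activity', and 'proficiency' ('high'/'low')."""
    return [u["name"] for u in users
            if u["activity"] < activity_threshold and u["proficiency"] == "low"]
```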
 また、図28の例では、話し合いが不足しているアイディアが記載されているユーザ入力付箋が強調表示される例を示したが、逆に、話し合いが活発に行われているアイディアが記載されているユーザ入力付箋が強調表示されるようにしてもよい。 Further, in the example of FIG. 28, a user input sticky note describing an idea that has not been discussed sufficiently is highlighted; conversely, a user input sticky note describing an idea that is being actively discussed may be highlighted instead.
 さらに、例えば、ユーザの議論における役割に応じて、支援情報の提示を制御するようにしてもよい。例えば、議論の進行役から見えやすい位置に、アイディアの整理法を示す支援情報付箋が提示されるようにしてもよい。 Furthermore, for example, the presentation of support information may be controlled according to the role of the user in the discussion. For example, a support information sticky note showing how to organize ideas may be presented at a position that is easily visible to the facilitator of the discussion.
 また、例えば、ユーザの指示に基づいて、支援情報を提示するタイミングが制御されるようにしてもよい。例えば、進行役が指示したタイミングで、支援情報が提示されるようにしてもよい。 Further, for example, the timing of presenting the support information may be controlled based on the instruction of the user. For example, the support information may be presented at the timing instructed by the facilitator.
  <議論対象に関する変形例>
 以上の説明では、議論対象を電子付箋に記載されたアイディアとする例を示したが、例えば、上述したように、議論対象は、アイディアに限定されない。
<Modification example of the subject of discussion>
In the above description, an example is shown in which the subject of discussion is an idea described on an electronic sticky note, but as described above, the subject of discussion is not limited to the idea.
 例えば、商品開発の際に、製品のプロトタイプを持ち寄って議論を行うケースが想定される。この場合、プロトタイプを画像等によりデータ化したものより、実際のプロトタイプを用いて議論した方が、各ユーザが製品のイメージを掴みやすく、より効果的な議論を行うことが可能になる。 For example, when developing a product, it is assumed that a prototype of the product is brought in for discussion. In this case, it is easier for each user to grasp the image of the product and more effective discussion can be conducted by discussing using the actual prototype rather than converting the prototype into data by an image or the like.
 この場合、例えば、図45に示されるように、実際のプロトタイプである物体501乃至物体505が机の上に置かれ、物体501乃至物体505を議論対象として議論が行われる。 In this case, for example, as shown in FIG. 45, the actual prototype objects 501 to 505 are placed on the desk, and the discussion is conducted with the objects 501 to 505 as the subject of discussion.
 この場合、例えば、物体501乃至物体505の大きさや色情報が事前に登録され、デプスセンサにより取得されるデプスデータや、RGBカメラにより取得される画像データに基づいて、各物体の位置が特定される。 In this case, for example, the size and color information of the objects 501 to 505 are registered in advance, and the position of each object is specified based on depth data acquired by a depth sensor and image data acquired by an RGB camera.
  <応用例>
 以上の説明では、複数のユーザが議論を行う例を示したが、本技術は、例えば、1人のユーザが、システム(例えば、対話型ロボット等)を相手に議論を行う場合にも適用することができる。
<Application example>
 In the above description, an example in which a plurality of users hold a discussion has been shown, but the present technology can also be applied to a case where, for example, a single user holds a discussion with a system (for example, an interactive robot).
 例えば、まず、ユーザは、図46に示されるように、1人でアイディア出しを行う。例えば、ユーザは、上述したように、手書きの付箋521やデジタルデバイスを用いて、アイディアを入力し、入力したアイディアを示すユーザ入力付箋522-1等を作成する。このとき、システムは、例えば、関連情報を示す支援情報付箋523-1及び支援情報付箋523-2等を提示することにより、ユーザがアイディアを広げたり、深めたりできるように支援する。 For example, first, as shown in FIG. 46, the user creates an idea by himself / herself. For example, as described above, the user inputs an idea using a handwritten sticky note 521 or a digital device, and creates a user input sticky note 522-1 or the like indicating the input idea. At this time, the system assists the user in expanding or deepening the idea by presenting, for example, the support information sticky note 523-1 indicating the related information and the support information sticky note 523-2.
 次に、ユーザは、図47に示されるように、ユーザ入力付箋522-1及びユーザ入力付箋522-2等を用いて、アイディアの整理等行う。このとき、システムは、例えば、アイディア整理法を示す支援情報付箋524-1及び支援情報付箋524-2等を提示することにより、ユーザがアイディアを整理し、結論に導けるように支援する。 Next, as shown in FIG. 47, the user organizes ideas using the user-input sticky note 522-1, the user-input sticky note 522-2, and the like. At this time, the system assists the user in organizing the ideas and drawing a conclusion by presenting, for example, the support information sticky note 524-1 and the support information sticky note 524-2 indicating the idea organizing method.
 また、例えば、図48に示されるように、システムが、映像表示面201に質問525-1及び質問525-2を提示することにより、アイディアの発散、深堀、整理等を支援するようにしてもよい。この場合、システムからの質問として、例えば、SCAMPER(オズボーンのチェックリスト)等の汎用的に使える質問が事前に用意される。 Further, for example, as shown in FIG. 48, the system may support the divergence, deepening, and organization of ideas by presenting question 525-1 and question 525-2 on the video display surface 201. In this case, general-purpose questions, such as SCAMPER (Osborn's checklist), are prepared in advance as questions from the system.
 さらに、例えば、システムが、ユーザの質問に答えることにより、議論の支援を行うようにしてもよい。 Furthermore, for example, the system may support the discussion by answering the user's question.
 なお、ユーザが1人のみの場合、チャット等のデータによる話し合いが想定され、ユーザが発話しないと想定されるため、話し合い段階の議論の状況の検出には、手の動きのみが用いられる。ただし、例えば、システムが対話型ロボット等により構成され、会話が可能な場合、議論の状況の検出に、ユーザの声の音量を用いることも可能である。 If there is only one user, it is assumed that the discussion will be based on data such as chat, and the user will not speak. Therefore, only hand movements are used to detect the status of the discussion at the discussion stage. However, for example, when the system is composed of an interactive robot or the like and conversation is possible, it is also possible to use the volume of the user's voice to detect the situation of the discussion.
 また、例えば、議論対象となる付箋や物体等に対する話し合い時間の計測を応用して、商品のニーズの調査等を行うことが可能である。 Also, for example, it is possible to investigate the needs of products by applying the measurement of the discussion time for sticky notes, objects, etc. to be discussed.
 例えば、図49に示されるように、店頭の机上に置かれた商品541乃至商品545に対し、それぞれについて話し合われた時間と内容がセンシングされる。なお、各商品の位置は、図45の例と同様の方法により特定される。これにより、各商品に対するニーズや意見を間接的に評価することができる。 For example, as shown in FIG. 49, the time and content of discussion about each of the products 541 to 545 placed on the desk in the store are sensed. The position of each product is specified by the same method as in the example of FIG. 45. This makes it possible to indirectly evaluate the needs and opinions of each product.
 また、例えば、ユーザが指さした商品の購入をためらっているとき、システムは、その理由を発話内容に基づいて特定する。そして、システムは、特定した理由に応じて、ユーザに他の商品に注目させることで、ユーザが購入する可能性がより高い商品を推薦することができる。 Also, for example, when the user is hesitant to purchase the product pointed to by the user, the system identifies the reason based on the content of the utterance. Then, the system can recommend a product that the user is more likely to purchase by making the user pay attention to other products according to the specified reason.
 例えば、図49の例では、ユーザが、商品543の購入をためらっている場合に、視覚効果551により、ユーザに商品542に注目させる例が示されている。 For example, in the example of FIG. 49, when the user hesitates to purchase the product 543, the visual effect 551 causes the user to pay attention to the product 542.
  <その他の変形例>
 本技術が適用される議論の段階は、上述した例に限定されない。例えば、本技術は、議論対象(例えば、アイディア、意見等)の提示を行う段階のみを含む議論、又は、議論対象に対して話し合う段階のみを含む議論を行う場合にも適用することができる。また、議論の段階の分類方法も、議論の形態等により変更することが可能である。
<Other variants>
The stage of discussion to which this technique applies is not limited to the examples described above. For example, the present technology can be applied to a discussion that includes only the stage of presenting the subject of discussion (for example, an idea, an opinion, etc.) or a discussion that includes only the stage of discussing the subject of discussion. In addition, the classification method at the stage of discussion can be changed depending on the form of discussion and the like.
 また、例えば、図14のステップS51とステップS53の処理の順序は入れ替えることが可能である。 Further, for example, the processing order of step S51 and step S53 in FIG. 14 can be exchanged.
 <<3.その他>>
  <コンピュータの構成例>
 上述した一連の処理は、ハードウェアにより実行することもできるし、ソフトウェアにより実行することもできる。一連の処理をソフトウェアにより実行する場合には、そのソフトウェアを構成するプログラムが、コンピュータにインストールされる。ここで、コンピュータには、専用のハードウェアに組み込まれているコンピュータや、各種のプログラムをインストールすることで、各種の機能を実行することが可能な、例えば汎用のパーソナルコンピュータなどが含まれる。
<< 3. Others >>
<Computer configuration example>
The series of processes described above can be executed by hardware or software. When a series of processes are executed by software, the programs that make up the software are installed on the computer. Here, the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 図50は、上述した一連の処理をプログラムにより実行するコンピュータのハードウェアの構成例を示すブロック図である。 FIG. 50 is a block diagram showing a configuration example of computer hardware that executes the above-mentioned series of processes programmatically.
 コンピュータ1000において、CPU(Central Processing Unit)1001,ROM(Read Only Memory)1002,RAM(Random Access Memory)1003は、バス1004により相互に接続されている。 In the computer 1000, the CPU (Central Processing Unit) 1001, the ROM (Read Only Memory) 1002, and the RAM (Random Access Memory) 1003 are connected to each other by the bus 1004.
 バス1004には、さらに、入出力インタフェース1005が接続されている。入出力インタフェース1005には、入力部1006、出力部1007、記録部1008、通信部1009、及びドライブ1010が接続されている。 An input / output interface 1005 is further connected to the bus 1004. An input unit 1006, an output unit 1007, a recording unit 1008, a communication unit 1009, and a drive 1010 are connected to the input / output interface 1005.
 入力部1006は、入力スイッチ、ボタン、マイクロフォン、撮像素子などよりなる。出力部1007は、ディスプレイ、スピーカなどよりなる。記録部1008は、ハードディスクや不揮発性のメモリなどよりなる。通信部1009は、ネットワークインタフェースなどよりなる。ドライブ1010は、磁気ディスク、光ディスク、光磁気ディスク、又は半導体メモリなどのリムーバブルメディア1011を駆動する。 The input unit 1006 includes an input switch, a button, a microphone, an image sensor, and the like. The output unit 1007 includes a display, a speaker, and the like. The recording unit 1008 includes a hard disk, a non-volatile memory, and the like. The communication unit 1009 includes a network interface and the like. The drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
 以上のように構成されるコンピュータ1000では、CPU1001が、例えば、記録部1008に記録されているプログラムを、入出力インタフェース1005及びバス1004を介して、RAM1003にロードして実行することにより、上述した一連の処理が行われる。 In the computer 1000 configured as described above, the CPU 1001 loads, for example, the program recorded in the recording unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the series of processes described above is performed.
 コンピュータ1000(CPU1001)が実行するプログラムは、例えば、パッケージメディア等としてのリムーバブルメディア1011に記録して提供することができる。また、プログラムは、ローカルエリアネットワーク、インターネット、デジタル衛星放送といった、有線または無線の伝送媒体を介して提供することができる。 The program executed by the computer 1000 (CPU1001) can be recorded and provided on the removable media 1011 as a package media or the like, for example. Programs can also be provided via wired or wireless transmission media such as local area networks, the Internet, and digital satellite broadcasting.
 コンピュータ1000では、プログラムは、リムーバブルメディア1011をドライブ1010に装着することにより、入出力インタフェース1005を介して、記録部1008にインストールすることができる。また、プログラムは、有線または無線の伝送媒体を介して、通信部1009で受信し、記録部1008にインストールすることができる。その他、プログラムは、ROM1002や記録部1008に、あらかじめインストールしておくことができる。 In the computer 1000, the program can be installed in the recording unit 1008 via the input / output interface 1005 by mounting the removable media 1011 in the drive 1010. Further, the program can be received by the communication unit 1009 and installed in the recording unit 1008 via a wired or wireless transmission medium. In addition, the program can be installed in advance in the ROM 1002 or the recording unit 1008.
 なお、コンピュータが実行するプログラムは、本明細書で説明する順序に沿って時系列に処理が行われるプログラムであっても良いし、並列に、あるいは呼び出しが行われたとき等の必要なタイミングで処理が行われるプログラムであっても良い。 The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
 また、本明細書において、システムとは、複数の構成要素(装置、モジュール(部品)等)の集合を意味し、すべての構成要素が同一筐体中にあるか否かは問わない。したがって、別個の筐体に収納され、ネットワークを介して接続されている複数の装置、及び、1つの筐体の中に複数のモジュールが収納されている1つの装置は、いずれも、システムである。 Further, in the present specification, the term "system" means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
 さらに、本技術の実施の形態は、上述した実施の形態に限定されるものではなく、本技術の要旨を逸脱しない範囲において種々の変更が可能である。 Further, the embodiment of the present technology is not limited to the above-described embodiment, and various changes can be made without departing from the gist of the present technology.
 例えば、本技術は、1つの機能をネットワークを介して複数の装置で分担、共同して処理するクラウドコンピューティングの構成をとることができる。 For example, this technology can have a cloud computing configuration in which one function is shared by a plurality of devices via a network and jointly processed.
 また、上述のフローチャートで説明した各ステップは、1つの装置で実行する他、複数の装置で分担して実行することができる。 In addition, each step described in the above flowchart can be executed by one device or shared by a plurality of devices.
 さらに、1つのステップに複数の処理が含まれる場合には、その1つのステップに含まれる複数の処理は、1つの装置で実行する他、複数の装置で分担して実行することができる。 Further, when one step includes a plurality of processes, the plurality of processes included in the one step can be executed by one device or shared by a plurality of devices.
  <構成の組み合わせ例>
 本技術は、以下のような構成をとることもできる。
<Example of configuration combination>
The present technology can also have the following configurations.
(1)
 ユーザに視認可能に提示された議論対象に対する議論の様子をセンシングしたセンサデータに基づいて、前記議論の状況を検出する状況検出部と、
 前記議論の停滞が検出された場合、前記議論を支援するための支援情報を提示する制御を行う出力制御部と
 を備える情報処理装置。
(2)
 前記議論の状況に基づいて、前記議論の支援方法を選択する支援方法選択部を
 さらに備え、
 前記出力制御部は、選択された前記支援方法に基づいて、前記支援情報の提示を制御する
 前記(1)に記載の情報処理装置。
(3)
 前記支援方法選択部は、前記議論の段階及び停滞状況、前記ユーザから提示された前記議論対象の量及び種類、前記ユーザの状態、並びに、各前記議論対象に関する話し合いの状況及び内容のうち少なくとも1つに基づいて、前記議論の支援方法を選択する
 前記(2)に記載の情報処理装置。
(4)
 前記ユーザの状態は、前記ユーザの位置、能力、活動量、及び、役割のうち少なくとも1つを含む
 前記(3)に記載の情報処理装置。
(5)
 前記議論の支援方法は、前記議論対象の提示を支援する方法、及び、話し合いを支援する方法のうち少なくとも1つを含む
 前記(2)乃至(4)のいずれかに記載の情報処理装置。
(6)
 前記議論対象の提示を支援する方法は、前記議論対象に関連する関連情報の提示、及び、アイディア発散法の提案のうち少なくとも1つを含み、
 前記話し合いを支援する方法は、アイディア整理法の提案、前記議論対象の変更の提案、前記議論対象に対するポジティブな評価の提示、及び、気分転換の提案のうち少なくとも1つを含む
 前記(5)に記載の情報処理装置。
(7)
 前記支援方法選択部は、前記ユーザから提示された前記議論対象の量及び種類のうち少なくとも1つに基づいて、提示する前記関連情報の量を設定する
 前記(6)に記載の情報処理装置。
(8)
 前記支援情報は、前記議論対象の提示を支援する情報、及び、話し合いを支援する情報のうち少なくとも1つを含む
 前記(1)乃至(7)のいずれかに記載の情報処理装置。
(9)
 前記議論対象の提示を支援する情報は、前記議論対象に関連する関連情報、及び、アイディア発散法を示す情報のうち少なくとも1つを含み、
 前記議論対象の提示を支援する情報は、アイディア整理法を示す情報、前記議論対象に対するポジティブな評価を示す情報、前記議論対象の変更を促す情報、及び、気分転換の方法を示す情報のうち少なくとも1つを含む
 前記(8)に記載の情報処理装置。
(10)
 前記議論の状況に基づいて、前記支援情報の提示方法を設定する提示方法設定部を
 さらに備え、
 前記出力制御部は、設定された前記提示方法に基づいて、前記支援情報の提示を制御する
 前記(1)乃至(9)のいずれかに記載の情報処理装置。
(11)
 前記提示方法設定部は、前記ユーザの状態、前記議論の段階、並びに、前記議論対象の提示位置のうち少なくとも1つに基づいて、前記支援情報の提示位置を設定する
 前記(10)に記載の情報処理装置。
(12)
 前記提示方法設定部は、特定のユーザから見えやすい位置、各前記ユーザから見やすい位置、前記議論対象の密度が高い位置、前記議論対象の密度が低い位置、及び、所定の物体の周辺のうち少なくとも1つに前記提示位置を設定する
 前記(11)に記載の情報処理装置。
(13)
 前記提示方法設定部は、各前記ユーザの状態に基づいて、前記特定のユーザを選択する
 前記(12)に記載の情報処理装置。
(14)
 前記ユーザの状態は、前記ユーザの位置、能力、活動量、及び、役割のうち少なくとも1つを含む
 前記(11)乃至(13)のいずれかに記載の情報処理装置。
(15)
 前記提示方法設定部は、前記支援情報を提示するタイミングを設定する
 前記(10)乃至(14)のいずれかに記載の情報処理装置。
(16)
 前記状況検出部は、前記ユーザから提示された前記議論対象の量、前記ユーザの声の音量、及び、前記ユーザの手の動きのうち少なくとも1つに基づいて、前記議論の停滞を検出する
 前記(1)乃至(15)のいずれかに記載の情報処理装置。
(17)
 前記状況検出部は、前記ユーザが前記議論対象を提示する段階において、前記ユーザから提示された前記議論対象の量に基づいて、前記議論の停滞を検出する
 前記(16)に記載の情報処理装置。
(18)
 前記状況検出部は、話し合いの段階において、前記ユーザの声の音量、及び、前記ユーザの手の動きのうち少なくとも1つに基づいて、前記議論の停滞を検出する
 前記(16)又は(17)に記載の情報処理装置。
(19)
 ユーザに視認可能に提示された議論対象に対する議論の様子をセンシングしたセンサデータに基づいて、前記議論の状況を検出し、
 前記議論の停滞が検出された場合、前記議論を支援するための支援情報を提示する制御を行う
 情報処理方法。
(20)
 ユーザに視認可能に提示された議論対象に対する議論の様子をセンシングしたセンサデータに基づいて、前記議論の状況を検出し、
 前記議論の停滞が検出された場合、前記議論を支援するための支援情報を提示する制御を行う
 処理をコンピュータに実行させるためのプログラム。
(1)
A situation detection unit that detects the status of the discussion based on the sensor data that senses the state of the discussion with respect to the discussion target that is visually presented to the user.
An information processing device including an output control unit that controls to present support information for supporting the discussion when a stagnation of the discussion is detected.
(2)
Further provided with a support method selection unit for selecting a support method for the discussion based on the situation of the discussion.
The information processing device according to (1), wherein the output control unit controls the presentation of the support information based on the selected support method.
(3)
 The information processing device according to (2), wherein the support method selection unit selects the support method for the discussion based on at least one of the stage and stagnation status of the discussion, the amount and type of the discussion targets presented by the user, the state of the user, and the status and content of the discussion regarding each discussion target.
(4)
The information processing apparatus according to (3), wherein the state of the user includes at least one of the position, ability, activity amount, and role of the user.
(5)
The information processing apparatus according to any one of (2) to (4) above, wherein the discussion support method includes at least one of a method of supporting the presentation of the discussion target and a method of supporting the discussion.
(6)
The method of supporting the presentation of the subject of discussion includes at least one of the presentation of relevant information related to the subject of discussion and the proposal of an idea divergence method.
 The method of supporting the discussion includes at least one of a proposal of an idea organizing method, a proposal of a change of the discussion target, a presentation of a positive evaluation of the discussion target, and a proposal of a change of mood. The information processing device according to (5) above.
(7)
The information processing device according to (6) above, wherein the support method selection unit sets the amount of the related information to be presented based on at least one of the amount and the type of the discussion subject presented by the user.
(8)
The information processing apparatus according to any one of (1) to (7) above, wherein the support information includes at least one of information that supports the presentation of the subject of discussion and information that supports discussion.
(9)
The information that supports the presentation of the subject of discussion includes at least one of the relevant information related to the subject of discussion and the information indicating the idea divergence method.
 The information that supports the discussion includes at least one of information indicating an idea organizing method, information indicating a positive evaluation of the discussion target, information prompting a change of the discussion target, and information indicating a method of changing mood. The information processing device according to (8) above.
(10)
Further provided with a presentation method setting unit for setting the presentation method of the support information based on the situation of the discussion.
The information processing device according to any one of (1) to (9), wherein the output control unit controls the presentation of the support information based on the set presentation method.
(11)
 The information processing device according to (10), wherein the presentation method setting unit sets the presentation position of the support information based on at least one of the state of the user, the stage of the discussion, and the presentation position of the discussion target.
(12)
 The information processing device according to (11), wherein the presentation method setting unit sets the presentation position to at least one of a position easily visible to a specific user, a position easy for each user to see, a position where the density of the discussion targets is high, a position where the density of the discussion targets is low, and the periphery of a predetermined object.
(13)
The information processing device according to (12), wherein the presentation method setting unit selects the specific user based on the state of each user.
(14)
The information processing apparatus according to any one of (11) to (13), wherein the state of the user includes at least one of the position, ability, activity amount, and role of the user.
(15)
The information processing device according to any one of (10) to (14), wherein the presentation method setting unit sets a timing for presenting the support information.
(16)
The situation detection unit detects the stagnation of the discussion based on at least one of the amount of the discussion target presented by the user, the volume of the user's voice, and the movement of the user's hand. The information processing apparatus according to any one of (1) to (15).
(17)
 The information processing device according to (16), wherein the situation detection unit detects the stagnation of the discussion based on the amount of the discussion targets presented by the user at the stage where the user presents the discussion targets.
(18)
 The information processing device according to (16) or (17), wherein the situation detection unit detects the stagnation of the discussion at the discussion stage based on at least one of the volume of the user's voice and the movement of the user's hand.
(19)
The situation of the discussion is detected based on the sensor data that senses the state of the discussion with respect to the discussion target presented to the user.
An information processing method that controls the presentation of support information for supporting the discussion when the stagnation of the discussion is detected.
(20)
The situation of the discussion is detected based on the sensor data that senses the state of the discussion with respect to the discussion target presented to the user.
A program for causing a computer to execute a process of controlling the presentation of support information for supporting the discussion when the stagnation of the discussion is detected.
 なお、本明細書に記載された効果はあくまで例示であって限定されるものではなく、他の効果があってもよい。 Note that the effects described in this specification are merely examples and are not limited, and other effects may be obtained.
 1 情報処理システム, 11 入力部, 12 情報処理部, 13 出力部, 21 データ処理部, 22 支援部, 23 出力情報生成部, 24 出力制御部, 31 状況検出部, 32 支援方法選択部, 33 提示方法設定部 1 Information processing system, 11 Input unit, 12 Information processing unit, 13 Output unit, 21 Data processing unit, 22 Support unit, 23 Output information generation unit, 24 Output control unit, 31 Situation detection unit, 32 Support method selection unit, 33 Presentation method setting section

Claims (20)

  1.  ユーザに視認可能に提示された議論対象に対する議論の様子をセンシングしたセンサデータに基づいて、前記議論の状況を検出する状況検出部と、
     前記議論の停滞が検出された場合、前記議論を支援するための支援情報を提示する制御を行う出力制御部と
     を備える情報処理装置。
    A situation detection unit that detects the status of the discussion based on the sensor data that senses the state of the discussion with respect to the discussion target that is visually presented to the user.
    An information processing device including an output control unit that controls to present support information for supporting the discussion when a stagnation of the discussion is detected.
  2.  前記議論の状況に基づいて、前記議論の支援方法を選択する支援方法選択部を
     さらに備え、
     前記出力制御部は、選択された前記支援方法に基づいて、前記支援情報の提示を制御する
     請求項1に記載の情報処理装置。
    Further provided with a support method selection unit for selecting a support method for the discussion based on the situation of the discussion.
    The information processing device according to claim 1, wherein the output control unit controls the presentation of the support information based on the selected support method.
  3.  前記支援方法選択部は、前記議論の段階及び停滞状況、前記ユーザから提示された前記議論対象の量及び種類、前記ユーザの状態、並びに、各前記議論対象に関する話し合いの状況及び内容のうち少なくとも1つに基づいて、前記議論の支援方法を選択する
     請求項2に記載の情報処理装置。
 The information processing device according to claim 2, wherein the support method selection unit selects the support method for the discussion based on at least one of the stage and stagnation status of the discussion, the amount and type of the discussion targets presented by the user, the state of the user, and the status and content of the discussion regarding each discussion target.
  4.  前記ユーザの状態は、前記ユーザの位置、能力、活動量、及び、役割のうち少なくとも1つを含む
     請求項3に記載の情報処理装置。
    The information processing apparatus according to claim 3, wherein the state of the user includes at least one of the position, ability, activity amount, and role of the user.
  5.  前記議論の支援方法は、前記議論対象の提示を支援する方法、及び、話し合いを支援する方法のうち少なくとも1つを含む
     請求項2に記載の情報処理装置。
    The information processing apparatus according to claim 2, wherein the discussion support method includes at least one of a method of supporting the presentation of the discussion target and a method of supporting the discussion.
  6.  前記議論対象の提示を支援する方法は、前記議論対象に関連する関連情報の提示、及び、アイディア発散法の提案のうち少なくとも1つを含み、
     前記話し合いを支援する方法は、アイディア整理法の提案、前記議論対象の変更の提案、前記議論対象に対するポジティブな評価の提示、及び、気分転換の提案のうち少なくとも1つを含む
     請求項5に記載の情報処理装置。
    The method of supporting the presentation of the subject of discussion includes at least one of the presentation of relevant information related to the subject of discussion and the proposal of an idea divergence method.
 The method of supporting the discussion includes at least one of a proposal of an idea organizing method, a proposal of a change of the discussion target, a presentation of a positive evaluation of the discussion target, and a proposal of a change of mood. The information processing device according to claim 5.
  7.  前記支援方法選択部は、前記ユーザから提示された前記議論対象の量及び種類のうち少なくとも1つに基づいて、提示する前記関連情報の量を設定する
     請求項6に記載の情報処理装置。
    The information processing apparatus according to claim 6, wherein the support method selection unit sets an amount of the related information to be presented based on at least one of the amount and the type of the discussion subject presented by the user.
  8.  前記支援情報は、前記議論対象の提示を支援する情報、及び、話し合いを支援する情報のうち少なくとも1つを含む
     請求項1に記載の情報処理装置。
    The information processing device according to claim 1, wherein the support information includes at least one of information that supports the presentation of the subject of discussion and information that supports discussion.
  9.  The information processing device according to claim 8, wherein
     the information that supports presentation of the discussion target includes at least one of related information relating to the discussion target and information indicating an idea divergence method, and
     the information that supports the talk includes at least one of information indicating an idea organizing method, information indicating a positive evaluation of the discussion target, information prompting a change of the discussion target, and information indicating a method of changing a mood.
  10.  The information processing device according to claim 1, further comprising:
     a presentation method setting unit that sets a presentation method of the support information based on the status of the discussion,
     wherein the output control unit controls presentation of the support information based on the set presentation method.
  11.  The information processing device according to claim 10, wherein the presentation method setting unit sets a presentation position of the support information based on at least one of a state of the user, a stage of the discussion, and a presentation position of the discussion target.
  12.  The information processing device according to claim 11, wherein the presentation method setting unit sets the presentation position to at least one of a position easily visible to a specific user, a position easily visible to each user, a position where the density of the discussion targets is high, a position where the density of the discussion targets is low, and a periphery of a predetermined object.
  13.  The information processing device according to claim 12, wherein the presentation method setting unit selects the specific user based on the state of each user.
  14.  The information processing device according to claim 11, wherein the state of the user includes at least one of a position, an ability, an activity amount, and a role of the user.
  15.  The information processing device according to claim 10, wherein the presentation method setting unit sets a timing for presenting the support information.
  16.  The information processing device according to claim 1, wherein the situation detection unit detects stagnation of the discussion based on at least one of an amount of the discussion targets presented by the user, a volume of the user's voice, and a movement of the user's hand.
  17.  The information processing device according to claim 16, wherein the situation detection unit detects stagnation of the discussion based on the amount of the discussion targets presented by the user in a stage in which the user presents the discussion target.
  18.  The information processing device according to claim 16, wherein the situation detection unit detects stagnation of the discussion based on at least one of the volume of the user's voice and the movement of the user's hand in the talk stage.
  19.  An information processing method comprising:
     detecting a status of a discussion based on sensor data obtained by sensing a state of the discussion with respect to a discussion target that is visually presented to a user; and
     performing control to present support information for supporting the discussion when stagnation of the discussion is detected.
  20.  A program for causing a computer to execute processing comprising:
     detecting a status of a discussion based on sensor data obtained by sensing a state of the discussion with respect to a discussion target that is visually presented to a user; and
     performing control to present support information for supporting the discussion when stagnation of the discussion is detected.
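The following is a minimal Python sketch of the arrangement recited in claims 1 and 16 to 18: a situation detection unit that infers stagnation from the sensed discussion state, and an output control unit that presents support information when stagnation is detected. The class names, fields, and thresholds (SensorFrame, SituationDetector, OutputController, the decibel and motion limits) are illustrative assumptions and do not appear in the specification.

from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One sample of the sensed discussion state (field names are illustrative)."""
    num_items_presented: int   # discussion targets (e.g. digital sticky notes) currently on the surface
    voice_volume_db: float     # average speech volume over the sample window
    hand_motion: float         # aggregate hand-movement magnitude from image sensing
    stage: str                 # "presentation" or "talk"


class SituationDetector:
    """Detects the status of the discussion, including stagnation (claims 1 and 16-18)."""

    def __init__(self, min_new_items: int = 1, min_volume_db: float = 40.0, min_motion: float = 0.2):
        self.min_new_items = min_new_items
        self.min_volume_db = min_volume_db
        self.min_motion = min_motion
        self._last_item_count = 0

    def is_stagnant(self, frame: SensorFrame) -> bool:
        if frame.stage == "presentation":
            # Presentation stage: stagnation is inferred from how few new targets were presented.
            new_items = frame.num_items_presented - self._last_item_count
            self._last_item_count = frame.num_items_presented
            return new_items < self.min_new_items
        # Talk stage: stagnation is inferred from voice volume and hand movement.
        return frame.voice_volume_db < self.min_volume_db and frame.hand_motion < self.min_motion


class OutputController:
    """Presents support information when the detector reports stagnation (claim 1)."""

    def present(self, support_information: str) -> None:
        print(f"[support] {support_information}")  # stand-in for projector or display output


def run_step(detector: SituationDetector, controller: OutputController,
             frame: SensorFrame, support_information: str) -> None:
    if detector.is_stagnant(frame):
        controller.present(support_information)

In this reading, run_step would be called once per sensing cycle with the latest SensorFrame.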
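Under the same caveat, the next sketch illustrates how a support method selection unit as in claims 2 to 7 might choose between supporting presentation of discussion targets and supporting the talk; the thresholds, the set of proposals, and the returned dictionary format are assumptions made for illustration.

from typing import Dict, List


def select_support_method(stage: str,
                          num_items_presented: int,
                          item_types: List[str],
                          user_states: Dict[str, dict]) -> dict:
    """Selects a support method from the discussion status (claims 2-7).

    The two branches mirror the categories of claims 5, 6, and 9: supporting
    presentation of discussion targets versus supporting the talk.
    """
    if stage == "presentation":
        # Few targets presented so far: help idea generation with related
        # information or an idea divergence method, and scale the amount of
        # related information to what is already on the surface (claim 7).
        related_info_count = max(1, 5 - num_items_presented)
        if "image" in item_types:
            related_info_count += 1
        return {
            "kind": "support_presentation",
            "related_info_count": related_info_count,
            "suggest_divergence_method": num_items_presented == 0,
        }

    # Talk stage: propose an idea organizing method, a change of discussion
    # target, a positive evaluation, or a change of mood depending on activity.
    total_activity = sum(u.get("activity", 0.0) for u in user_states.values())
    if total_activity < 1.0:
        return {"kind": "support_talk", "proposal": "change_of_mood"}
    return {"kind": "support_talk", "proposal": "idea_organizing_method"}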
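Finally, a sketch of the presentation method setting recited in claims 10 to 13, assuming that discussion targets and users are tracked in normalized [0, 1] surface coordinates; the 4x4 density grid and the rule of presenting near the least active user are illustrative choices, not the claimed method itself.

from typing import Dict, List, Tuple

Point = Tuple[float, float]


def set_presentation_position(user_states: Dict[str, dict],
                              item_positions: List[Point],
                              stage: str) -> Point:
    """Sets where support information is presented (claims 10-12).

    Picks either a low-density area of the shared surface, so that the support
    information does not cover existing discussion targets, or a spot near a
    specific user chosen from the users' states (claim 13).
    """
    if stage == "presentation":
        # Place support information in the sparsest cell of a coarse 4x4 grid.
        counts = [[0] * 4 for _ in range(4)]
        for x, y in item_positions:
            counts[min(int(y * 4), 3)][min(int(x * 4), 3)] += 1
        gy, gx = min(((r, c) for r in range(4) for c in range(4)),
                     key=lambda rc: counts[rc[0]][rc[1]])
        return ((gx + 0.5) / 4, (gy + 0.5) / 4)

    # Talk stage: present near the least active user to draw them back in.
    target = min(user_states.values(), key=lambda u: u.get("activity", 0.0))
    return target["position"]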

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019187425 2019-10-11
JP2019-187425 2019-10-11

Publications (1)

Publication Number Publication Date
WO2021070733A1 true WO2021070733A1 (en) 2021-04-15

Family

ID=75437941

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/037434 WO2021070733A1 (en) 2019-10-11 2020-10-01 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2021070733A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000105748A (en) * 1998-09-29 2000-04-11 Fuji Xerox Co Ltd Cooperative work supporting device, and recording medium
JP2017111678A (en) * 2015-12-17 2017-06-22 株式会社イトーキ Idea extraction support system
JP2018045676A (en) * 2016-09-07 2018-03-22 パナソニックIpマネジメント株式会社 Information processing method, information processing system and information processor
JP2018152645A (en) * 2017-03-10 2018-09-27 富士ゼロックス株式会社 Information processing device and information processing program
JP2018186326A (en) * 2017-04-24 2018-11-22 富士ゼロックス株式会社 Robot apparatus and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OZONO, TADACHIKA ET AL.: "Augmented reality-based e-sticky notes for reusing discussion cases", IEICE TECHNICAL REPORT, vol. 116, no. 350, 2 December 2016 (2016-12-02), pages 33-38, ISSN: 0913-5685 *

Similar Documents

Publication Publication Date Title
Henze et al. Free-hand gestures for music playback: deriving gestures with a user-centred process
US9244533B2 (en) Camera navigation for presentations
US20150312520A1 (en) Telepresence apparatus and method enabling a case-study approach to lecturing and teaching
Hoffman et al. Effects of robotic companionship on music enjoyment and agent perception
KR20190053278A (en) Controls and interfaces for user interaction in virtual space
US20120216151A1 (en) Using Gestures to Schedule and Manage Meetings
Fdili Alaoui et al. Dance interaction with physical model visuals based on movement qualities
WO2019119314A1 (en) Simulated sandbox system
JP2016100033A (en) Reproduction control apparatus
Nakano et al. Generating robot gaze on the basis of participation roles and dominance estimation in multiparty interaction
Xambó et al. Exploring social interaction with a tangible music interface
US11677575B1 (en) Adaptive audio-visual backdrops and virtual coach for immersive video conference spaces
Decortis et al. Mediating effects of active and distributed instruments on narrative activities
WO2021070733A1 (en) Information processing device, information processing method, and program
CN106113057B (en) Audio-video advertising method and system based on robot
US20220189200A1 (en) Information processing system and information processing method
Ji et al. Demonstration of VRBubble: enhancing peripheral avatar awareness for people with visual impairments in social virtual reality
Lytle et al. Toward live streamed improvisational game experiences
Nishida et al. Synthetic evidential study as augmented collective thought process–preliminary report
Lee et al. Attention meter: a vision-based input toolkit for interaction designers
Ohmoto et al. Effect of an agent's contingent responses on maintaining an intentional stance
Obiorah et al. U! Scientist: designing for people-powered research in museums
WO2022102550A1 (en) Information processing device and information processing method
WO2022249555A1 (en) Image output device, image output method, and program
US11652654B2 (en) Systems and methods to cooperatively perform virtual actions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20875394; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20875394; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP