WO2016189685A1 - Information processing device, information processing method, program, and storage medium - Google Patents


Info

Publication number
WO2016189685A1
Authority
WO
WIPO (PCT)
Prior art keywords
extracted
speech
utterance
statement
history
Prior art date
Application number
PCT/JP2015/065217
Other languages
English (en)
Japanese (ja)
Inventor
和宏 友田
香緒里 西井
Original Assignee
楽天株式会社 (Rakuten, Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 楽天株式会社 (Rakuten, Inc.)
Priority to JP2016565361A (JP6186519B2)
Priority to PCT/JP2015/065217
Publication of WO2016189685A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 — Administration; Management

Definitions

  • The present invention relates to an information processing apparatus, an information processing method, a program, and a storage medium, and more specifically to the extraction and presentation of statements made in a virtual conference room.
  • An information processing apparatus according to the present invention includes: a condition receiving unit that executes a condition input reception process for receiving input of a search condition for searching for participants; a participant search processing unit that searches for participants based on the search condition and presents each extracted person, extracted as a search result, in a selectable manner; a speech history extraction unit that executes a speech history extraction process for extracting the speech history of a selected person chosen from among the extracted persons presented by the participant search processing unit; a speech history presentation unit that executes a speech history presentation process for presenting each speech in the extracted speech history in a selectable manner; a preceding/following speech extraction unit that executes a preceding/following speech extraction process for extracting some of the speeches made before or after the selected speech chosen from among the selectably presented speeches; and a preceding/following speech presentation unit that executes a preceding/following speech presentation process for presenting the speeches extracted in the preceding/following speech extraction process together with the selected speech.
  • With this configuration, the speech history of the person of interest (the selected person chosen from among the extracted persons matched by the search condition) is extracted and presented, and the selected speech is extracted and presented together with the speeches that other people made before and after it.
  • The identification information of at least some of the participants who made the speeches is presented together with the extracted speeches. This makes it easy to grasp the position and relationships of each statement.
  • The information processing apparatus includes a conclusion specifying unit that specifies the statement indicating the conclusion of the discussion as a summary statement, and the preceding/following statement presentation process presents, together with the selected statement, the summary statement made after it. This presents both the selected statement and the conclusion of the discussion.
  • the updated statement history is presented each time the statement history is updated by a new statement of the selected person.
  • The speech history of the selected person is updated without the viewer having to enter the conference room.
  • An information processing method according to the present invention includes: a condition input reception processing step of receiving input of a search condition for searching for participants; a participant extraction processing step of searching for participants based on the search condition and presenting each extracted person, extracted as a search result, in a selectable manner; a speech history extraction processing step of extracting the speech history of a selected person chosen from among the extracted persons presented in the participant extraction processing step; a speech history presentation processing step of presenting each speech in the extracted speech history in a selectable manner; a preceding/following speech extraction processing step of extracting some of the speeches made before or after the selected speech chosen from among the selectably presented speeches; and a preceding/following speech presentation processing step of presenting the speeches extracted in the preceding/following speech extraction processing step together with the selected speech. These steps are executed by the information processing apparatus.
  • A program according to the present invention causes an arithmetic processing unit to execute the processing of the information processing method described above.
  • A storage medium according to the present invention stores the above program. The above information processing apparatus is realized by this program and storage medium.
  • A virtual conference room server is taken as an example of an information processing apparatus that provides virtual conference rooms and various functions that help evaluate the participants.
  • embodiments will be described in the following order.
  • The virtual meeting room server 1 provides a function of dividing the participants in meetings and discussions held in the virtual meeting rooms into several groups as necessary, a function of assigning a virtual conference room to each group, a search function for participants and speeches, and a function of presenting the participants and speeches extracted as search results.
  • The virtual conference room server 1 manages various DBs (databases) in order to provide the functions described above: a conference room DB 50 that stores information on the virtual conference rooms, a participant DB 51 that stores information on the participants who take part in discussions using the virtual conference rooms, and a log DB 52 that stores logs of the statements made in each virtual conference room. Details of each DB are described later.
  • the configuration of the virtual conference room server 1 will be described in more detail with reference to FIG.
  • the virtual conference room server 1 includes a condition reception unit 1a, a participant search processing unit 1b, a speech history extraction unit 1c, a speech history presentation unit 1d, a front and rear speech extraction unit 1e, a front and rear speech presentation unit 1f, and a conclusion specifying unit 1g.
  • the condition receiving unit 1a executes a condition input receiving process for receiving an input of a search condition for searching for a participant or a statement.
  • the participant search processing unit 1b performs a search based on a search condition for searching for a specific participant, and executes an extraction process for extracting a participant that matches the condition as an extracted person.
  • The participant search processing unit 1b also executes an extracted participant presentation process that presents the extracted search results to the participant terminal 3 and the administrator terminal 4.
  • the speech history extraction unit 1c executes a speech history extraction process that extracts a speech history of a person (selected person) selected by the user from the extracted persons.
  • the statement history presentation unit 1 d executes a statement history presentation process for presenting the extracted statement history to the participant terminal 3 and the administrator terminal 4.
  • The preceding/following utterance extraction unit 1e executes a preceding/following utterance extraction process that extracts the utterance selected from the presented speech history of the selected person (the selected utterance) together with some of the preceding and following utterances (the preceding/following utterances).
  • The preceding/following utterance presentation unit 1f executes a preceding/following utterance presentation process for presenting the extracted preceding/following utterances to the participant terminal 3 and the administrator terminal 4.
  • the conclusion specifying unit 1g executes a conclusion specifying process for specifying a comment that is a conclusion part of the discussion or discussion.
  • the configuration of the communication network 2 is not particularly limited.
  • Examples include the Internet, an intranet, an extranet, a LAN (Local Area Network), a CATV (Community Antenna TeleVision) communication network, a VPN (Virtual Private Network), a telephone line network, a mobile communication network, and a satellite communication network.
  • Various examples of transmission media constituting all or part of the communication network 2 are also envisaged.
  • For wired transmission, IEEE (Institute of Electrical and Electronics Engineers) 1394, USB (Universal Serial Bus), power line carrier, telephone lines, and the like can be used; wireless transmission can also be used, such as infrared (e.g. IrDA (Infrared Data Association)), Bluetooth (registered trademark), IEEE 802.11 wireless, mobile phone networks, satellite links, and digital terrestrial networks.
  • a participant terminal 3 shown in FIG. 1 is a terminal used by a participant who participates in a web internship.
  • the manager terminal 4 is an information processing apparatus used by, for example, a personnel manager of a company that manages a virtual conference room and hosts a web internship. It should be noted that the present invention is also applicable when there is a separate company that manages the virtual meeting room and the company that hosts the web internship borrows the virtual meeting room and allows participants to discuss.
  • various transmission / reception processes and the like are executed as necessary.
  • the participant terminal 3 and the administrator terminal 4 are, for example, a PC (Personal Computer), a feature phone, a PDA (Personal Digital Assistants) having a communication function, or a smart device such as a smartphone or a tablet terminal.
  • FIG. 3 is a diagram illustrating hardware of the virtual conference room server 1, the participant terminal 3, and the administrator terminal 4 illustrated in FIG.
  • A CPU (Central Processing Unit) 101 of the computer device in each server or terminal executes various processes according to a program stored in a ROM (Read Only Memory) 102 or a program loaded from a storage unit 108 into a RAM (Random Access Memory) 103.
  • the RAM 103 also appropriately stores data necessary for the CPU 101 to execute various processes.
  • the CPU 101, ROM 102, and RAM 103 are connected to each other via a bus 104.
  • An input / output interface 105 is also connected to the bus 104.
  • Connected to the input/output interface 105 are: an input device 106 composed of a keyboard, mouse, touch panel, and the like; an output device composed of a display such as an LCD (Liquid Crystal Display), CRT (Cathode Ray Tube), or organic EL (ElectroLuminescence) panel, and a speaker; a storage unit 108 composed of an HDD (Hard Disk Drive), a flash memory device, or the like; and a communication unit 109 that performs communication processing with other devices via the communication network 2.
  • A media drive 110 is also connected to the input/output interface 105 as necessary; a removable medium 111 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is mounted as appropriate, and information is written to and read from it.
  • Each information processing apparatus constituting the virtual meeting room server 1, the participant terminal 3, and the administrator terminal 4 is not limited to a single computer device as shown in FIG. 3, and may be configured as a plurality of computer devices.
  • the plurality of computer devices may be systemized by a LAN or the like, or may be arranged in a remote place in a communicable state by a VPN (Virtual Private Network) using the Internet or the like.
  • the conference room DB 50 is a DB that stores information on a virtual conference room managed by the virtual conference room server 1.
  • the conference room DB 50 stores, for example, a conference room ID (Identification) for identifying a virtual conference room, a usage status (availability), a usage start date and time, and the like.
  • the participant DB 51 is a DB that stores information on participants who participate in discussions and discussions performed in a virtual conference room.
  • For example, a participant ID, a login PW (Password), the participant's name and address, and contact information (telephone number or e-mail address) are stored as participant information.
  • The log DB 52 is a DB that stores logs of the discussions held in each virtual conference room.
  • the log DB 52 stores, for example, a conference room ID of a virtual conference room and a participant ID of a participant who participated in discussion or discussion.
  • the entry / exit information is also stored along with the time information.
  • speech information made in the virtual conference room is also stored.
  • As the utterance information, the utterance time, the participant ID (or participant name) of the speaker, the utterance content, and the like are stored.
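  • The record structure implied by the conference room DB 50, participant DB 51, and log DB 52 can be sketched as follows; the class and field names are illustrative assumptions, not taken from the publication.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Utterance:
    """One statement logged in the log DB 52."""
    room_id: str          # conference room ID (conference room DB 50)
    participant_id: str   # speaker's ID (participant DB 51)
    timestamp: datetime   # utterance time information
    content: str          # utterance content


@dataclass
class RoomLog:
    """Per-room log: participants who joined, plus their utterances."""
    room_id: str
    participant_ids: List[str] = field(default_factory=list)
    utterances: List[Utterance] = field(default_factory=list)
```

Entry/exit events with time information could be stored in a similar per-room list.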
  • In step S101, the administrator terminal 4 executes a condition setting screen display process that displays a search screen for designating search conditions for searching for specific participants.
  • An example of the search screen is shown in FIG.
  • The search screen is displayed, for example, in the web browser 5 installed on the administrator terminal 4, and includes an input field 6 for entering free words as search conditions, various drop-down lists 7, 7, ... for narrowing down the search results, and a search button 8 for executing the search.
  • the character string input to the input field 6 is a character string for extracting a participant, and is, for example, a character string related to a participant's utterance content or a character string related to a participant's attribute or profile.
  • When a plurality of character strings (or sentences) are entered in the input field 6, an AND search or an OR search is executed.
  • The drop-down lists 7 allow the attributes and profiles used to extract participants to be selected. For example, in the education drop-down list 7, items such as "high school graduate", "university graduate", and "prospective university graduate" can be selected.
  • the search button 8 is a button for causing the virtual conference room server 1 to execute a search process based on information input or selected by the input field 6 or the drop-down lists 7, 7,.
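  • A minimal sketch of how the server might combine the free-word condition (AND/OR) with a drop-down attribute when extracting participants; the field names and matching rules here are assumptions for illustration, not the publication's implementation.

```python
def extract_participants(participants, free_words, mode="AND", education=None):
    """Return participants matching the search condition.

    participants: list of dicts with 'name', 'education', 'utterances' keys
    free_words:   character strings entered in input field 6
    mode:         'AND' or 'OR' combination of the free words
    education:    optional selection from the education drop-down list 7
    """
    combine = all if mode == "AND" else any
    hits = []
    for p in participants:
        # Match free words against utterance content and the profile name.
        text = (" ".join(p["utterances"]) + " " + p["name"]).lower()
        if free_words and not combine(w.lower() in text for w in free_words):
            continue
        if education is not None and p["education"] != education:
            continue
        hits.append(p)
    return hits
```

Each hit would then be rendered as a selectable extracted participant 10 on the result screen.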
  • Next, the condition transmission process of step S102 in FIG. 4 is executed on the administrator terminal 4.
  • The search condition, that is, the information entered in the input field 6 or selected in the drop-down lists 7, 7, ..., is transmitted to the virtual conference room server 1.
  • step S201 the virtual conference room server 1 executes a condition input reception process for receiving a search condition.
  • Next, in step S202, the virtual meeting room server 1 performs the participant extraction process.
  • the participant extraction process a search based on the received search condition is executed, and a target participant (extracted person) is extracted.
  • step S203 the virtual meeting room server 1 executes extraction participant presentation processing for presenting the extracted person extracted in step S202 to the administrator terminal 4.
  • web page data to be displayed on the administrator terminal 4 is transmitted in a state where a selection operation for each extracted participant is possible (for example, in a state where a click operation is possible).
  • the administrator terminal 4 that has received the web page data including the extracted person information executes the extracted participant display process in step S103.
  • The extracted participant display process is a process of displaying, in the web browser 5, an extracted participant display screen showing the participants extracted as search results based on the search conditions specified on the search screen of FIG. 5.
  • An example of the extracted participant display screen is shown in FIG. 6.
  • On the extracted participant display screen, the various search conditions are displayed as in FIG. 5, and an extracted participant display field 9 is provided below them.
  • each participant that matches the search condition and is extracted is displayed as the extracted participants 10, 10, 10.
  • Next, the participant selection process in step S104 of FIG. 4 is executed: information on the participant (selected person) selected by the personnel manager (participant selection information) is transmitted to the virtual conference room server 1.
  • step S204 the virtual conference room server 1 that has received the participant selection information executes a speech history extraction process for extracting the speech history of the selected person. Then, in step S205, the virtual conference room server 1 transmits the extracted statement history to the administrator terminal 4 and executes a statement history presentation process for displaying on the administrator terminal 4. In this process, web page data to be displayed on the administrator terminal 4 in a state where a selection operation can be performed on each statement in the statement history (for example, in a state where a click operation is possible) is transmitted as the statement history information.
  • the administrator terminal 4 that has received the message history information executes a message history display process for displaying the message history information on the message history display screen on the administrator terminal 4 in the next step S105.
  • FIG. 7 shows the case where the personnel manager selects Mr. B from among the participants (Mr. A, Mr. B, Mr. G) displayed on the extracted participant display screen; the contents already spoken by Mr. B are displayed in chronological order.
  • old utterances are arranged so as to be displayed at the top, but new utterances may be arranged to be displayed at the top.
  • Each remark of Mr. B shown in FIG. 7 is displayed as a character string that can be selected.
  • each utterance may be provided with a button for displaying the utterance before and after each utterance, instead of being composed of a selectable character string.
  • the manager terminal 4 executes a remark selection process in step S106.
  • a speech (selected speech) selected by the personnel manager is transmitted to the virtual conference room server 1 as selected speech information.
  • In step S206, the virtual conference room server 1 executes the preceding/following utterance extraction process.
  • The preceding/following utterance extraction process extracts utterances made before and after the selected utterance. For example, only utterances before the selected utterance may be extracted, in order to grasp the flow leading up to it; only utterances after it may be extracted, in order to grasp what kind of discussion developed from it; or some utterances both before and after may be extracted. Specific examples are described later.
  • step S207 the virtual conference room server 1 transmits the extracted front-rear message to the administrator terminal 4 and executes the front-rear message presenting process for displaying on the administrator terminal 4.
  • the extracted front / rear speech is transmitted to the administrator terminal 4 as front / rear speech information.
  • the front / rear speech information information of a speaker to be displayed together with the front / rear speech is also transmitted.
  • the administrator terminal 4 that has received the front-rear message information performs a front-rear message display process for displaying the front-rear message on the front-rear message display screen on the administrator terminal 4 in step S107.
  • the personnel manager can confirm the selected speech and the previous / next speech of the selected person displayed on the administrator terminal 4, and can evaluate the selected person.
  • Identification information (for example, the speaker's participant ID or participant name) is displayed for the statements.
  • The identification information need not be displayed for every statement shown on the administrator terminal 4; specific examples are described below.
  • In the first example, 21 statements are displayed on the preceding/following statement display screen, and identification information identifying the speaker is displayed for each of the 21 statements.
  • A selected utterance, marked by the surrounding line 11, and 20 preceding and following utterances are displayed.
  • a participant name is displayed as identification information for identifying a speaker (Mr. A, Mr. B, Mr. C, and Ms. D).
  • The participants at the time of the discussion are displayed on the right side of the speech history; for example, the participants in the discussion were Mr. A, Mr. B, Mr. C, Mr. D, Mr. E, Mr. F, and Mr. G.
  • By browsing the preceding/following statement display screen of FIG. 8, the personnel manager can grasp what statements Mr. B made in the discussion and what kind of discussion followed as a result.
  • In the second example, identification information identifying the speaker is displayed only for statements made by persons other than the extracted persons, among the 21 statements displayed on the preceding/following statement display screen.
  • Persons other than the extracted persons are everyone except Mr. A, Mr. B, and Mr. G, who were extracted in the extracted participant presentation process of step S203 in FIG. 4. Therefore, in FIG. 9, although the five participants A, B, C, D, and E entered the room, identification information identifying the speaker is displayed only for the statements of participants C, D, and E, who are not extracted persons.
  • In the third example, identification information identifying the speaker is displayed only for statements made by the extracted persons, among the 21 statements displayed on the preceding/following statement display screen.
  • The extracted persons are Mr. A, Mr. B, and Mr. G, who were extracted in the extracted participant presentation process of step S203 in FIG. 4. Therefore, in FIG. 10, identification information identifying the speaker is displayed only for the statements of Mr. A and Mr. B among the participants A, B, C, D, and E.
  • In the fourth example, identification information identifying the speaker is displayed only for statements by participants who spoke both before and after the selected statement, among the 21 statements displayed on the preceding/following statement display screen. Specifically, as shown in FIG. 11, Mr. A, Mr. B, Mr. C, and Mr. D spoke both before and after the selected statement, whereas Mr. E spoke only after it. Therefore, identification information identifying the participant is displayed only for the statements of each participant except Mr. E.
  • That is, identification information identifying the speaker is displayed only for the utterances of participants who spoke both before and after the selected utterance; this rule is not limited to the 21 displayed utterances.
  • This example may also be combined with the second example above: among Mr. C, Mr. D, and Mr. E, who are not extracted persons, identification information identifying the speaker is displayed only for participants who spoke both before and after the selected utterance. That is, identification information is displayed only for the statements of Mr. C and Mr. D.
  • This example may also be combined with the third example above: among the extracted persons A, B, and G, identification information identifying the speaker is displayed only for participants who spoke both before and after the selected utterance. That is, identification information is displayed only for the statements of Mr. A and Mr. B.
  • In the fifth example of the preceding/following utterance display screen, ten utterances before and after the selected utterance are displayed, as in the first example (see FIG. 12). Among the 21 displayed utterances, attention is paid to the utterance (12a in FIG. 12) made by the speaker of the selected utterance (12 in FIG. 12), i.e. Mr. B, immediately before the selected utterance, and to his next utterance (12b in FIG. 12); identification information identifying the speaker is displayed only for participants who spoke between the selected utterance 12 and the preceding utterance 12a, or between the selected utterance 12 and the next utterance 12b. Specifically, in FIG. 12, identification information identifying the speaker is displayed only for the utterances of Mr. A, Mr. C, Mr. D, and Mr. E.
  • This example may also be combined with the second example above: among Mr. C, Mr. D, and Mr. E, who are not extracted persons, identification information identifying the speaker is displayed only for participants who spoke between the selected utterance 12 and the preceding utterance 12a, or between the selected utterance 12 and the next utterance 12b. That is, identification information is displayed only for the statements of Mr. C, Mr. D, and Mr. E.
  • This example may also be combined with the third example above: among the extracted persons A, B, and G, identification information identifying the speaker is displayed only for participants who spoke between the selected utterance 12 and the preceding utterance 12a, or between the selected utterance 12 and the next utterance 12b. That is, identification information is displayed only for Mr. A's statements.
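  • As one way to make the fourth example concrete, the set of speakers whose statements receive identification information can be computed from the extracted window; the dict shape with a `speaker` key is an assumption for illustration.

```python
def speakers_to_label(window, selected_pos):
    """Fourth example: label only participants who made at least one
    utterance both before and after the selected utterance."""
    before = {u["speaker"] for u in window[:selected_pos]}
    after = {u["speaker"] for u in window[selected_pos + 1:]}
    return before & after
```

Intersecting the result with the set of extracted persons (or with its complement) yields the combined variants described above.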
  • Utterances that are unnecessary for grasping the outline of the discussion, such as simple back-channel responses, may be excluded from the targets of the preceding/following utterance extraction process, making the contents of the discussion easier to grasp.
  • The summary utterance may be identified, for example, by automatically judging the content of each utterance (for example, automatically judging an utterance containing a predetermined character string such as "summary" or "conclusion" to be a summary utterance); the virtual conference room may be provided with a function that allows each participant or the personnel manager to designate a summary statement from among the statements; or the final statement of a series of discussions, other than mere greetings, may be determined to be the summary statement.
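  • The automatic-judgment variant might be sketched as a simple scan for predetermined character strings; the marker list here is an illustrative assumption.

```python
SUMMARY_MARKERS = ("summary", "conclusion", "to sum up")  # predetermined strings (assumed)


def find_summary_utterance(log):
    """Return the last utterance whose content contains one of the
    predetermined strings, or None.  Scanning from the end favours the
    closing statement of the discussion."""
    for utterance in reversed(log):
        content = utterance["content"].lower()
        if any(marker in content for marker in SUMMARY_MARKERS):
            return utterance
    return None
```

The manual-designation variant would instead store a flag set by a participant or the personnel manager on the chosen statement.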
  • In this case, the preceding/following statement display screen shown in FIG. 14 is displayed on the administrator terminal 4. Specifically, as shown in FIG. 14, Mr. A's summary utterance is displayed below the utterance history. This makes it possible to grasp the position of the selected utterance of Mr. B, the person being evaluated, relative to the summary utterance, so that Mr. B can be evaluated more appropriately.
  • The statement history displayed on the administrator terminal 4 may always be kept up to date. Specifically, when the selected person makes a new statement after the statement history presentation process of step S205 has been executed, the statement history displayed on the administrator terminal 4 may be updated by executing the processes of steps S204 and S205 again. Since the personnel manager can then always browse the latest statement history, the selected person can be evaluated appropriately.
  • As another example, the speaker may be clearly indicated for all statements, while the identification information (for example, the speaker's participant name) of a specific speaker is displayed in bold or colored characters, so that the statements made by that speaker stand out.
  • the number of statements made by each participant may be displayed together with the participant list displayed on the right side of the message history.
  • the number of utterances may be the number of utterances in the extracted preceding and following utterances, or the total number of utterances including other utterances. Thereby, the number of utterances as an index for evaluating each participant can be presented.
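  • Counting statements per participant, for display next to the participant list, is straightforward over the same assumed record shape; pass the extracted window for per-window counts or the full log for totals.

```python
from collections import Counter


def utterance_counts(utterances):
    """Number of statements per participant."""
    return Counter(u["speaker"] for u in utterances)
```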
  • the identification information (for example, participant name) of the speaker may be displayed in a different color for each participant. Thereby, each participant can be easily identified. Further, not only the participant name but also the color of the speech may differ depending on the participant.
  • As described above, the virtual meeting room server 1 includes: the condition receiving unit 1a that executes the condition input reception process (step S201) for receiving input of a search condition for searching for participants; the participant search processing unit 1b that searches for participants based on the search condition and presents each extracted person, extracted as a search result, in a selectable manner; the speech history extraction unit 1c that executes the speech history extraction process (step S204) for extracting the speech history of the selected person chosen from among the presented extracted persons; the speech history presentation unit 1d that executes the speech history presentation process (step S205) for presenting each speech in the extracted speech history in a selectable manner; the preceding/following speech extraction unit 1e that executes the preceding/following speech extraction process (step S206) for extracting some of the speeches made before or after the selected speech chosen from among the selectably presented speeches; and the preceding/following speech presentation unit 1f that executes the preceding/following speech presentation process (step S207) for presenting the extracted speeches together with the selected speech.
  • In the before/after speech presentation process (step S207), together with the presentation of the extracted utterances, identification information (participant ID, participant name, etc.) of at least some of the participants who made those utterances is presented.
  • In step S207, among the participants who made the presented utterances, identification information may be presented only for participants other than the extracted persons.
  • In this case, the utterances of participants who do not match the search condition are presented together with their identification information, making it easy to identify who made those utterances and preventing such participants from escaping notice. That is, attention can be directed to participants other than the participant (the selected person) on whom the personnel manager initially focused.
  • Alternatively, in step S207, among the participants who made the presented utterances, identification information may be presented only for the extracted persons.
  • Identification information may also be presented only for participants who made utterances both before and after the selected utterance.
  • Identification information may also be presented only for participants who made an utterance between the selected utterance and one of the utterances immediately before or after it.
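The alternative presentation policies above amount to different predicates deciding whose identification information is shown in step S207. A hypothetical sketch (the policy names and data structures are assumptions for illustration, not from the patent):

```python
def id_shown(speaker, extracted_persons, policy, before, after):
    """Decide whether a speaker's identification information is presented.

    before / after: sets of speakers appearing before / after the
    selected utterance within the extracted range."""
    if policy == "non_extracted_only":
        return speaker not in extracted_persons
    if policy == "extracted_only":
        return speaker in extracted_persons
    if policy == "both_sides_only":
        return speaker in before and speaker in after
    raise ValueError(f"unknown policy: {policy}")

before, after = {"A", "B"}, {"B", "C"}
show_c = id_shown("C", {"A"}, "non_extracted_only", before, after)
show_b = id_shown("B", {"A"}, "both_sides_only", before, after)
```

The server would apply the chosen predicate to each utterance in the extracted range before rendering participant names.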
  • In the before/after speech extraction process (step S206), only utterances of a predetermined number of characters or more may be extracted.
  • In the before/after speech presentation process (step S207), all of the participants present when the selected utterance was made may be presented.
  • A conclusion specifying unit 1g that specifies the utterance indicating the conclusion of a discussion as a summary statement may be provided. In that case, a summary statement made after the selected utterance is presented together with the selected utterance.
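One assumed illustration of what the conclusion specifying unit 1g could do: tag utterances as summary statements and pick out the first one that follows the selected utterance (the `is_summary` flag is a hypothetical marker, not from the patent):

```python
def summary_after(log, selected_index):
    """Return the first utterance after the selected one that is marked
    as a summary (concluding) statement, or None if there is none."""
    for utterance in log[selected_index + 1:]:
        if utterance.get("is_summary"):
            return utterance
    return None

log = [{"text": "proposal"},
       {"text": "objection"},
       {"text": "we will go with plan B", "is_summary": True}]
found = summary_after(log, 0)
```

The found summary statement would then be presented alongside the selected utterance in step S207.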
  • Each time the speech history is updated by a new utterance of the selected person, the updated speech history may be presented.
  • A predetermined number of utterances may be extracted as the part of the utterances before or after the selected utterance.
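Combining the two extraction variants described above (a fixed number of surrounding utterances and a minimum character count in step S206) might look like this sketch; the threshold values and record layout are illustrative assumptions:

```python
def extract_context(log, selected_index, n_before=3, n_after=3, min_chars=10):
    """Extract up to n_before utterances before and n_after utterances after
    the selected one, keeping only those of at least min_chars characters."""
    window = (log[max(0, selected_index - n_before):selected_index]
              + log[selected_index + 1:selected_index + 1 + n_after])
    return [u for u in window if len(u["text"]) >= min_chars]

log = [{"text": "short"},
       {"text": "a sufficiently long remark"},
       {"text": "the selected utterance"},
       {"text": "ok"},
       {"text": "another long enough reply"}]
context = extract_context(log, 2)
```

Filtering by length drops acknowledgements like "ok" so the presented context concentrates on substantive remarks.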
  • The virtual conference room server 1 of the present invention has been described above.
  • The program according to the embodiment is a program that causes an arithmetic processing device (such as a CPU) to execute the processing of the virtual conference room server 1.
  • Specifically, the program according to the embodiment causes the arithmetic processing device to execute: a procedure for receiving input of a search condition for searching for participants; a procedure for searching for participants based on the search condition and presenting each extracted person found as a search result in a selectable manner; a procedure for extracting the speech history of a selected person chosen from among the extracted persons; and a procedure for presenting each utterance in the extracted speech history in a selectable manner.
  • That is, this program causes the arithmetic processing device to execute the processes of steps S201 to S207 described above, and the virtual conference room server 1 can be realized by such a program.
  • Such a program can be stored in advance in an HDD serving as a storage medium built into a device such as a computer, or in a ROM of a microcomputer having a CPU. Alternatively, it can be stored temporarily or permanently on a removable storage medium such as a semiconductor memory, memory card, optical disc, magneto-optical disc, or magnetic disk; such a removable storage medium can be provided as so-called package software. The program can also be installed from the removable storage medium onto a personal computer or the like, or downloaded from a download site via a network such as a LAN or the Internet.
  • 1 virtual conference room server, 1a condition reception unit, 1b participant search processing unit, 1c speech history extraction unit, 1d speech history presentation unit, 1e before/after speech extraction unit, 1f before/after speech presentation unit, 1g conclusion specifying unit, 2 communication network, 3 participant terminal, 4 administrator terminal, 50 conference room DB, 51 participant DB, 52 log DB

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Telephonic Communication Services (AREA)

Abstract

An object of the present invention is to provide an environment in which participants taking part in a discussion held in a virtual conference room can be evaluated efficiently. To that end, the invention provides an information processing device comprising: a condition reception unit that executes a condition input reception process for receiving input of a search condition used to search for a participant; a participant search processing unit that searches for participants based on the search condition and presents each extracted person found as a search result in a selectable manner; a statement history extraction unit that executes a statement history extraction process for extracting the statement history of a selected person chosen from among the extracted persons presented by the participant search processing unit; a statement history presentation unit that executes a statement history presentation process for presenting each statement in the extracted statement history in a selectable manner; a preceding/following statement extraction unit that executes a preceding/following statement extraction process for extracting some of the statements preceding or following a selected statement chosen from among the statements presented in a selectable manner; and a preceding/following statement presentation unit that executes a preceding/following statement presentation process for presenting, together with the selected statement, the statements extracted in the preceding/following statement extraction process.
PCT/JP2015/065217 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium WO2016189685A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2016565361A JP6186519B2 (ja) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium
PCT/JP2015/065217 WO2016189685A1 (fr) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/065217 WO2016189685A1 (fr) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium

Publications (1)

Publication Number Publication Date
WO2016189685A1 true WO2016189685A1 (fr) 2016-12-01

Family

ID=57394017

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/065217 WO2016189685A1 (fr) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium

Country Status (2)

Country Link
JP (1) JP6186519B2 (fr)
WO (1) WO2016189685A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11242545A * 1998-02-24 1999-09-07 Sharp Corp Real-time chat system
JPH11249990A * 1998-02-27 1999-09-17 Fujitsu Ltd Statement history management system in a chat system
JP2001195428A * 1999-11-02 2001-07-19 Atr Media Integration & Communications Res Lab Associative information search device
WO2003046764A1 * 2001-11-26 2003-06-05 Fujitsu Limited Information analysis method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007323215A * 2006-05-31 2007-12-13 Fuji Xerox Co Ltd Conference information processing device, conference information processing method, and computer program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KATASHI NAGAO ET AL.: "Discussion Mining: Knowledge Discovery from Discussions in Face- to-Face Meetings", IEICE TECHNICAL REPORT, THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. 112, no. 339, 1 December 2012 (2012-12-01), pages 59 - 64, XP055332015 *

Also Published As

Publication number Publication date
JPWO2016189685A1 (ja) 2017-06-08
JP6186519B2 (ja) 2017-08-23

Similar Documents

Publication Publication Date Title
JP7464098B2 (ja) Electronic conference system
US10601739B2 (en) Smart messaging for computer-implemented devices
CN104144154B (zh) Method, apparatus, and system for initiating a scheduled conference
JP5814490B1 (ja) Information processing device, information processing method, program, and storage medium
US10250540B2 (en) Idea generation platform for distributed work environments
CN108139918A (zh) Customizing program features on a per-user basis
US9992142B2 (en) Messages from absent participants in online conferencing
CN109753635A (zh) Automated document assistant using high-quality examples
CN104756056A (zh) Virtual meeting
US10387506B2 (en) Systems and methods for online matchmaking
WO2015120789A1 (fr) Information processing method and game server
JP2008282191A (ja) Computer system and second computer
JP2010211569A (ja) Evaluation device, program, and information processing system
KR20210064048A (ko) Method, system, and computer program for providing an expert consultation service
US9104297B2 (en) Indicating organization of visitor on user interface of user engaged in collaborative activity with visitor
JP2016045737A (ja) Lunch member notification method, lunch member notification program, and information processing device
US8249996B1 (en) Artificial intelligence for social media
JP6186519B2 (ja) Information processing device, information processing method, program, and storage medium
JP2010191808A (ja) Scheduling program, scheduling method, and scheduling device
TW201931273A (zh) Social portal directory generation system and method
US20200394582A1 (en) Communication system, communication method, and non-transitory recording medium
Takahashi et al. Two persons dialogue corpus made by multiple crowd-workers
CN114513480B (zh) Group-chat-based information processing method, apparatus, device, and computer storage medium
CN102804232A (zh) Popularity voting system
JP2002099494A (ja) Chat system and server device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2016565361

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15893319

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15893319

Country of ref document: EP

Kind code of ref document: A1