WO2016189685A1 - Information processing device, information processing method, program, and storage medium - Google Patents

Information processing device, information processing method, program, and storage medium Download PDF

Info

Publication number
WO2016189685A1
Authority
WO
WIPO (PCT)
Prior art keywords
extracted
speech
utterance
statement
history
Prior art date
Application number
PCT/JP2015/065217
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuhiro Tomoda
Kaori Nishii
Original Assignee
Rakuten, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rakuten, Inc.
Priority to JP2016565361A (JP6186519B2)
Priority to PCT/JP2015/065217 (WO2016189685A1)
Publication of WO2016189685A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management

Definitions

  • the present invention relates to an information processing apparatus, an information processing method, a program, and a storage medium, and more specifically, to extraction and presentation of comments in a virtual conference room.
  • The present invention has been made in view of the above.
  • An information processing apparatus according to the present invention includes: a condition receiving unit that executes a condition input reception process for receiving an input of a search condition for searching for a participant; a participant search processing unit that searches for participants based on the search condition and presents each extracted person, extracted as a search result, in a selectable manner; a speech history extraction unit that executes a speech history extraction process for extracting the speech history of a selected person chosen from among the extracted persons presented by the participant search processing unit; a speech history presentation unit that executes a speech history presentation process for presenting each speech in the extracted speech history in a selectable manner; a front and rear speech extraction unit that executes a front and rear speech extraction process for extracting some of the speeches made before and/or after a selected speech chosen from the speeches presented; and a front and rear speech presentation unit that executes a front and rear speech presentation process for presenting the extracted speeches together with the selected speech.
  • With this configuration, the speech history of a person of interest (the selected person chosen from the extracted persons found by the search condition) is extracted and presented, and an environment is provided in which the selected speech is extracted and presented together with the speeches other people made before and after it.
  • Preferably, identification information of at least some of the participants who made the speeches is presented together with the extracted speeches. This makes it easy to grasp the position and relationship of each speech.
  • Preferably, the information processing apparatus includes a conclusion specifying unit that specifies a speech indicating the conclusion of the discussion as a summary speech, and the front and rear speech presentation process presents, together with the selected speech, the summary speech made after the selected speech. This presents both the selected speech and the conclusion of the discussion.
  • Preferably, the updated speech history is presented each time the speech history is updated by a new speech of the selected person. This allows the speech history of the selected person to be kept up to date without entering the conference room.
  • An information processing method according to the present invention includes: a condition input reception processing step of receiving an input of a search condition for searching for a participant; a participant extraction processing step of searching for participants based on the search condition and presenting each extracted person, extracted as a search result, in a selectable manner; a speech history extraction processing step of extracting the speech history of a selected person chosen from among the extracted persons presented in the participant extraction processing step; a speech history presentation processing step of presenting each speech in the extracted speech history in a selectable manner; a front and rear speech extraction processing step of extracting some of the speeches made before and/or after a selected speech chosen from the speeches presented; and a front and rear speech presentation processing step of presenting the extracted speeches together with the selected speech. These steps are executed by the information processing apparatus.
  • a program according to the present invention is a program that causes an arithmetic processing unit to execute processing executed as the information processing method.
  • a storage medium according to the present invention is a storage medium storing the above program. The above information processing apparatus is realized by these programs and storage media.
  • In the embodiment, a virtual conference room server is taken as an example of an information processing apparatus that provides virtual conference rooms and various functions that help evaluate participants.
  • embodiments will be described in the following order.
  • The virtual conference room server 1 provides a function of dividing the participants who take part in meetings and discussions using the virtual conference rooms into several groups as necessary, a function of assigning a virtual conference room to each group, a search function for participants and speeches, and a function of presenting the participants and speeches extracted as search results.
  • the virtual conference room server 1 manages various DBs (Databases) in order to provide the various functions described above.
  • The DBs include a conference room DB 50 that stores information on the virtual conference rooms, a participant DB 51 that stores information on the participants who take part in discussions using the virtual conference rooms, and a log DB 52 that stores logs of the speeches made in each virtual conference room. Details of each DB will be described later.
  • the configuration of the virtual conference room server 1 will be described in more detail with reference to FIG.
  • the virtual conference room server 1 includes a condition reception unit 1a, a participant search processing unit 1b, a speech history extraction unit 1c, a speech history presentation unit 1d, a front and rear speech extraction unit 1e, a front and rear speech presentation unit 1f, and a conclusion specifying unit 1g.
  • the condition receiving unit 1a executes a condition input receiving process for receiving an input of a search condition for searching for a participant or a statement.
  • the participant search processing unit 1b performs a search based on a search condition for searching for a specific participant, and executes an extraction process for extracting a participant that matches the condition as an extracted person.
  • The participant search processing unit 1b also executes an extracted participant presentation process that presents the extracted search result to the participant terminal 3 and the administrator terminal 4.
  • the speech history extraction unit 1c executes a speech history extraction process that extracts a speech history of a person (selected person) selected by the user from the extracted persons.
  • the speech history presentation unit 1d executes a speech history presentation process for presenting the extracted speech history to the participant terminal 3 and the administrator terminal 4.
  • the front and rear speech extraction unit 1e executes a front and rear speech extraction process that extracts some of the speeches (front and rear speeches) made before and after a speech (selected speech) chosen from the presented speech history of the selected person.
  • the front and rear speech presentation unit 1f executes a front and rear speech presentation process for presenting the extracted front and rear speeches to the participant terminal 3 and the administrator terminal 4.
  • the conclusion specifying unit 1g executes a conclusion specifying process for specifying a comment that is a conclusion part of the discussion or discussion.
  • the configuration of the communication network 2 is not particularly limited.
  • Examples include the Internet, an intranet, an extranet, a LAN (Local Area Network), a CATV (Community Antenna TeleVision) communication network, a VPN (Virtual Private Network), a telephone line network, a mobile communication network, and a satellite communication network.
  • Various examples of transmission media constituting all or part of the communication network 2 are also envisaged.
  • For example, wired media such as IEEE (Institute of Electrical and Electronics Engineers) 1394, USB (Universal Serial Bus), power line carrier, and telephone lines can be used, and wireless media such as infrared links like IrDA (Infrared Data Association), Bluetooth (registered trademark), IEEE 802.11 wireless, mobile phone networks, satellite links, and digital terrestrial networks can also be used.
  • a participant terminal 3 shown in FIG. 1 is a terminal used by a participant who participates in a web internship.
  • the manager terminal 4 is an information processing apparatus used by, for example, a personnel manager of a company that manages a virtual conference room and hosts a web internship. It should be noted that the present invention is also applicable when there is a separate company that manages the virtual meeting room and the company that hosts the web internship borrows the virtual meeting room and allows participants to discuss.
  • In the participant terminal 3 and the administrator terminal 4, various transmission and reception processes are executed as necessary.
  • the participant terminal 3 and the administrator terminal 4 are, for example, a PC (Personal Computer), a feature phone, a PDA (Personal Digital Assistants) having a communication function, or a smart device such as a smartphone or a tablet terminal.
  • FIG. 3 is a diagram illustrating hardware of the virtual conference room server 1, the participant terminal 3, and the administrator terminal 4 illustrated in FIG.
  • A CPU (Central Processing Unit) 101 of the computer device in each server or terminal executes various processes in accordance with a program stored in a ROM (Read Only Memory) 102 or a program loaded from a storage unit 108 into a RAM (Random Access Memory) 103.
  • the RAM 103 also appropriately stores data necessary for the CPU 101 to execute various processes.
  • the CPU 101, ROM 102, and RAM 103 are connected to each other via a bus 104.
  • An input / output interface 105 is also connected to the bus 104.
  • Connected to the input/output interface 105 are an input device 106 composed of a keyboard, mouse, touch panel, and the like; an output unit composed of a display such as a liquid crystal display (LCD), cathode ray tube (CRT), or organic EL (ElectroLuminescence) panel, and of a speaker; a storage unit 108 composed of an HDD (Hard Disk Drive), a flash memory device, and the like; and a communication unit 109 that performs communication processing between devices via the communication network 2.
  • A media drive 110 is also connected to the input/output interface 105 as necessary; a removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted as appropriate, and information is written to and read from the removable medium 111.
  • Each of the information processing apparatuses constituting the virtual conference room server 1, the participant terminal 3, and the administrator terminal 4 is not limited to a single computer device as shown in FIG. 3, and may be configured by a plurality of computer devices.
  • the plurality of computer devices may be systemized by a LAN or the like, or may be arranged in a remote place in a communicable state by a VPN (Virtual Private Network) using the Internet or the like.
  • the conference room DB 50 is a DB that stores information on a virtual conference room managed by the virtual conference room server 1.
  • the conference room DB 50 stores, for example, a conference room ID (Identification) for identifying a virtual conference room, a usage status (availability), a usage start date and time, and the like.
  • the participant DB 51 is a DB that stores information on participants who participate in discussions and discussions performed in a virtual conference room.
  • In the participant DB 51, for example, a participant ID, a login PW (Password), the participant's name and address, and contact information (telephone number or mail address) are stored as participant information.
  • The log DB 52 is a DB that stores logs of the speeches made in each virtual conference room.
  • The log DB 52 stores, for example, the conference room ID of a virtual conference room and the participant IDs of the participants who took part in the discussion there. Entry and exit information is also stored together with time information, as is the speech information for speeches made in the virtual conference room. As the speech information, the speech time, the participant ID (or participant name) of the speaker, the speech content, and the like are stored.
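The stored records described above can be sketched as simple data structures. The field names below are illustrative assumptions for this sketch, not taken from the publication:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SpeechLogEntry:
    """One speech stored in the log DB 52 (field names are assumptions)."""
    room_id: str          # conference room ID of the virtual conference room
    participant_id: str   # participant ID (or participant name) of the speaker
    spoken_at: datetime   # speech time information
    content: str          # speech content

# Example record for one speech made by participant B
entry = SpeechLogEntry("room-001", "B", datetime(2015, 5, 27, 10, 0),
                       "I agree on cost.")
```

Entry/exit events could be stored in the same style, with an event type field in place of the speech content.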
  • In step S101, the administrator terminal 4 executes a condition setting screen display process for displaying a search screen for designating a search condition for searching for a specific participant.
  • An example of the search screen is shown in FIG.
  • The search screen is displayed on, for example, the web browser 5 installed on the administrator terminal 4, and includes an input field 6 for entering a free word as a search condition, various drop-down lists 7 for narrowing down search results, and a search button 8 for executing a search.
  • the character string input to the input field 6 is a character string for extracting a participant, and is, for example, a character string related to a participant's utterance content or a character string related to a participant's attribute or profile.
  • When a plurality of character strings (or sentences) are entered in the input field 6, an AND search or an OR search is executed.
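The AND/OR free-word search over the participants' speech logs might be sketched as follows; the data and function names are illustrative assumptions:

```python
# Minimal sketch of an AND/OR free-word search over speech logs.
def matches(speeches, keywords, mode="AND"):
    """True if the speech log satisfies the free-word condition."""
    hit = [any(kw in s for s in speeches) for kw in keywords]
    return all(hit) if mode == "AND" else any(hit)

speech_log = {
    "A": ["the cost is the main issue here", "let me summarize our points"],
    "B": ["quality matters more than cost"],
    "G": ["I will join later"],
}

# AND search extracts participants whose speeches contain every keyword.
and_hits = [p for p, s in speech_log.items() if matches(s, ["cost", "summarize"])]
# OR search extracts participants whose speeches contain any keyword.
or_hits = [p for p, s in speech_log.items() if matches(s, ["cost", "summarize"], "OR")]
```

Here the AND search extracts only Mr. A, while the OR search also extracts Mr. B.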
  • the drop-down list 7 is configured so that the attributes and profiles of the participants for extracting the participants can be selected. For example, in the education drop-down list 7, items such as “high school graduate”, “university graduate”, “university graduate prospect” can be selected.
  • The search button 8 is a button for causing the virtual conference room server 1 to execute a search process based on the information entered in the input field 6 or selected from the drop-down lists 7.
  • In step S102 of FIG. 4, the condition transmission process is executed in the administrator terminal 4.
  • In this process, the search condition, that is, the information entered in the input field 6 or selected from the drop-down lists 7, is transmitted to the virtual conference room server 1.
  • In step S201, the virtual conference room server 1 executes a condition input reception process for receiving the search condition.
  • In step S202, the virtual conference room server 1 performs a participant extraction process.
  • In the participant extraction process, a search based on the received search condition is executed, and the target participants (extracted persons) are extracted.
  • In step S203, the virtual conference room server 1 executes an extracted participant presentation process for presenting the extracted persons from step S202 to the administrator terminal 4.
  • In this process, web page data is transmitted to be displayed on the administrator terminal 4 in a state where a selection operation (for example, a click operation) is possible for each extracted participant.
  • the administrator terminal 4 that has received the web page data including the extracted person information executes the extracted participant display process in step S103.
  • The extracted participant display process is a process for displaying, on the web browser 5, an extracted participant display screen showing the participants extracted as search results based on the search conditions designated on the search screen of FIG.
  • An example of the extracted participant display screen is shown in FIG. On this screen, the various search conditions are displayed as in FIG. 5, and an extracted participant display field 9 is provided below the search conditions.
  • In the extracted participant display field 9, each participant matching the search condition is displayed as an extracted participant 10.
  • Next, in the participant selection process in step S104 of FIG. 4, the administrator terminal 4 executes a process of transmitting information on the participant (selected person) selected by the personnel manager to the virtual conference room server 1 as participant selection information.
  • In step S204, the virtual conference room server 1 that has received the participant selection information executes a speech history extraction process for extracting the speech history of the selected person. Then, in step S205, the virtual conference room server 1 transmits the extracted speech history to the administrator terminal 4 and executes a speech history presentation process for displaying it there. In this process, web page data in which a selection operation (for example, a click operation) is possible for each speech in the speech history is transmitted as the speech history information.
  • The administrator terminal 4 that has received the speech history information executes, in the next step S105, a speech history display process for displaying the speech history on a speech history display screen.
  • FIG. 7 shows a case where the personnel manager has selected Mr. B from the participants (Mr. A, Mr. B, Mr. G) displayed on the extracted participant display screen shown in FIG. On this screen, the contents already spoken by Mr. B are displayed in chronological order. Here, older speeches are arranged at the top, but newer speeches may instead be displayed at the top.
  • Each speech of Mr. B shown in FIG. 7 is displayed as a selectable character string. Alternatively, instead of being composed of selectable character strings, each speech may be provided with a button for displaying the speeches before and after it.
  • Next, the administrator terminal 4 executes a speech selection process in step S106, in which the speech (selected speech) chosen by the personnel manager is transmitted to the virtual conference room server 1 as selected speech information.
  • Upon receiving the selected speech information, the virtual conference room server 1 executes the front and rear speech extraction process.
  • The front and rear speech extraction process extracts speeches made before and after the selected speech. For example, only the speeches before the selected speech may be extracted in order to grasp the flow leading to it, or only the speeches after it may be extracted in order to grasp what kind of discussion followed it. Some speeches both before and after may also be extracted. Specific examples will be described later.
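The extraction patterns above amount to slicing a window out of the time-ordered log. The function name and the window size of ten are illustrative assumptions for this sketch:

```python
def extract_front_rear(speeches, selected_idx, before=10, after=10):
    """Return (front, rear) speeches around the selected speech.

    Pass before=0 to keep only the following discussion, or after=0
    to keep only the flow leading up to the selected speech.
    """
    start = max(0, selected_idx - before)
    front = speeches[start:selected_idx]                       # speeches before
    rear = speeches[selected_idx + 1:selected_idx + 1 + after]  # speeches after
    return front, rear

log = [f"speech {i}" for i in range(30)]
front, rear = extract_front_rear(log, 15, before=10, after=10)
```

With a window of ten on each side, this yields the 21-speech display (selected speech plus 20 front and rear speeches) used in the examples below.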
  • In step S207, the virtual conference room server 1 transmits the extracted front and rear speeches to the administrator terminal 4 and executes the front and rear speech presentation process for displaying them there. The extracted front and rear speeches are transmitted to the administrator terminal 4 as front and rear speech information, together with information on the speakers to be displayed with them. The administrator terminal 4 that has received the front and rear speech information then performs, in step S107, a front and rear speech display process for displaying the speeches on a front and rear speech display screen.
  • the personnel manager can confirm the selected speech and the previous / next speech of the selected person displayed on the administrator terminal 4, and can evaluate the selected person.
  • On the front and rear speech display screen, identification information (for example, the participant ID or the participant name of the speaker) is displayed. The identification information need not be displayed for all the speeches shown on the administrator terminal 4; specific examples are described below.
  • In the first example of the front and rear speech display screen, 21 speeches are displayed, and identification information identifying the speaker is displayed for each of them. Specifically, the selected speech, made clear by being enclosed in a surrounding line 11, and 20 front and rear speeches are displayed, and a participant name is displayed as identification information identifying each speaker (Mr. A, Mr. B, Mr. C, Mr. D). The participants at the time of the discussion are displayed to the right of the speech history; for example, the participants in the discussion were Mr. A, Mr. B, Mr. C, Mr. D, Mr. E, Mr. F, and Mr. G.
  • By browsing the front and rear speech display screen, the personnel manager can grasp what kind of speeches Mr. B made in the discussion and what kind of discussion followed as a result.
  • In the second example, identification information identifying the speaker is displayed only for speeches made by persons other than the extracted persons among the 21 speeches displayed on the front and rear speech display screen. Here, persons other than the extracted persons means participants other than those extracted in the extracted participant presentation process in step S203 of FIG. 4 (that is, other than Mr. A, Mr. B, and Mr. G). Therefore, in FIG. 9, although five participants A, B, C, D, and E entered the room, identification information identifying the speaker is displayed only for the speeches of participants C, D, and E.
  • In the third example, identification information identifying the speaker is displayed only for speeches made by the extracted persons among the 21 speeches displayed on the front and rear speech display screen. The extracted persons are those extracted in the extracted participant presentation process in step S203 of FIG. 4 (that is, Mr. A, Mr. B, and Mr. G). Therefore, in FIG. 10, identification information identifying the speaker is displayed only for the speeches of Mr. A and Mr. B among the participants A, B, C, D, and E.
  • In the fourth example, identification information identifying the speaker is displayed only for speeches of participants who spoke both before and after the selected speech among the 21 speeches displayed on the front and rear speech display screen. Specifically, as shown in FIG. 11, Mr. A, Mr. B, Mr. C, and Mr. D spoke both before and after the selected speech, whereas Mr. E spoke only after it. Therefore, identification information identifying the participant is displayed only for the speeches of each participant except Mr. E.
  • Note that the number of displayed speeches is not limited to 21; regardless of the number, identification information identifying the speaker may be displayed only for participants who spoke both before and after the selected speech.
  • This example may also be combined with the previous second example. Specifically, among Mr. C, Mr. D, and Mr. E, who are not extracted persons, identification information identifying the speaker is displayed only for participants who spoke both before and after the selected speech, that is, only for the speeches of Mr. C and Mr. D.
  • This example may also be combined with the previous third example. Specifically, among the extracted persons Mr. A, Mr. B, and Mr. G, identification information identifying the speaker is displayed only for participants who spoke both before and after the selected speech, that is, only for the speeches of Mr. A and Mr. B.
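The fourth example's display rule reduces to a set intersection over the displayed window. The function and data below are an illustrative sketch, not the publication's implementation:

```python
def labeled_speakers(speeches, selected_idx):
    """Speakers who spoke both before and after the selected speech;
    only these would have identification information displayed."""
    before = {spk for spk, _ in speeches[:selected_idx]}
    after = {spk for spk, _ in speeches[selected_idx + 1:]}
    return before & after

# (speaker, content) pairs in display order; B makes the selected speech.
window = [("A", "..."), ("C", "..."), ("D", "..."),
          ("B", "selected speech"),
          ("C", "..."), ("D", "..."), ("A", "..."), ("E", "...")]

# A, C, and D spoke on both sides; E spoke only after the selected speech.
shown = labeled_speakers(window, 3)
```

Combining with the second or third example is then a further intersection with the non-extracted or extracted participant set, respectively.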
  • In the fifth example of the front and rear speech display screen, ten speeches before and after the selected speech are displayed, as in the first example (see FIG. 12). Among the 21 displayed speeches, attention is paid to the speech (12a in FIG. 12) made immediately before the selected speech (12 in FIG. 12) by the speaker of the selected speech (that is, Mr. B), and to his next speech (12b in FIG. 12). Identification information identifying the speaker is displayed only for participants who spoke between the selected speech 12 and the previous speech 12a, or between the selected speech 12 and the next speech 12b. Specifically, in FIG. 12, identification information identifying the speaker is displayed only for the speeches of Mr. A, Mr. C, Mr. D, and Mr. E.
  • This example may also be combined with the previous second example. Specifically, among Mr. C, Mr. D, and Mr. E, who are not extracted persons, identification information identifying the speaker is displayed only for participants who spoke between the selected speech 12 and the previous speech 12a, or between the selected speech 12 and the next speech 12b, that is, only for the speeches of Mr. C, Mr. D, and Mr. E.
  • This example may also be combined with the previous third example. Specifically, among the extracted persons Mr. A, Mr. B, and Mr. G, identification information identifying the speaker is displayed only for participants who spoke between the selected speech 12 and the previous speech 12a, or between the selected speech 12 and the next speech 12b, that is, only for the speech of Mr. A.
  • Speeches that are unnecessary for grasping the outline of the discussion, such as simple interjections of agreement, may be excluded from the targets of the front and rear speech extraction process, making the content of the discussion easier to grasp.
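One simple way to realize this exclusion, consistent with the character-count rule mentioned later for step S206, is a length filter. The threshold of 10 characters is an assumption for this sketch:

```python
def substantive_speeches(speeches, min_chars=10):
    """Drop speeches shorter than a character threshold (e.g. simple
    interjections of agreement); the threshold of 10 is an assumption."""
    return [s for s in speeches if len(s) >= min_chars]

log = ["I agree",
       "We should cut the budget by ten percent",
       "OK",
       "Quality may suffer if we do"]
kept = substantive_speeches(log)
```

Here the short interjections "I agree" and "OK" are excluded, leaving only the speeches that carry the discussion.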
  • The summary speech may be identified, for example, by automatically judging the content of each speech (for example, automatically determining a speech containing a predetermined character string such as "summary" or "conclusion" to be a summary speech). Alternatively, the virtual conference room may be equipped with a function that allows each participant or the personnel manager to designate a summary speech from among the speeches, or the final speech of a series of discussions (excluding mere greetings) may be determined to be the summary speech.
  • In this case, the front and rear speech display screen shown in FIG. 14 is displayed on the administrator terminal 4. Specifically, as shown in FIG. 14, Mr. A's summary speech is displayed below the speech history. This makes it possible to grasp the position of the selected speech of Mr. B, the person being evaluated, relative to the summary speech, so that Mr. B can be evaluated more appropriately.
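The keyword-based identification described above might be sketched as a scan for the first marked speech after the selected one; the marker strings mirror the "summary"/"conclusion" examples in the text, and everything else is an assumption:

```python
def find_summary_speech(speeches, selected_idx,
                        markers=("summary", "conclusion")):
    """Return the first speech after the selected one that contains a
    predetermined marker string, treating it as the summary speech."""
    for speech in speeches[selected_idx + 1:]:
        if any(m in speech.lower() for m in markers):
            return speech
    return None  # no summary speech found after the selected speech

log = ["I think cost matters",        # selected speech (index 0)
       "Quality too",
       "In conclusion, we balance both",
       "Thanks all"]
summary = find_summary_speech(log, 0)
```

A manual-designation function, as also suggested above, would simply store a flag on the chosen log entry instead.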
  • The speech history displayed on the administrator terminal 4 may always be kept up to date. Specifically, when the selected person makes a new speech after the speech history presentation process in step S205 has been executed, the speech history displayed on the administrator terminal 4 may be updated by executing the processes of steps S204 and S205 again. Since the personnel manager can thereby always browse the latest speech history, the selected person can be evaluated appropriately.
  • As a variation, identification information (for example, the speaker's participant name) may be displayed only for speeches made by a specific speaker. Alternatively, the speaker may be clearly indicated for all speeches, with the identification information (speaker name) of a specific speaker displayed in bold or colored characters.
  • The number of speeches made by each participant may also be displayed together with the participant list shown to the right of the speech history. The number may be the number of speeches within the extracted front and rear speeches, or the total number including other speeches. This presents the number of speeches as an index for evaluating each participant.
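The per-participant speech count described above is a straightforward tally over the displayed window (or the whole log, for the total); this sketch assumes (speaker, content) pairs:

```python
from collections import Counter

def speech_counts(speeches):
    """Number of speeches per participant, for display next to the
    participant list as an evaluation index."""
    return Counter(spk for spk, _ in speeches)

window = [("A", "..."), ("B", "..."), ("A", "..."), ("C", "...")]
counts = speech_counts(window)
```

Passing the extracted front and rear speeches gives the window count, while passing the full log gives the total count including other speeches.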
  • The identification information (for example, the participant name) of the speaker may be displayed in a different color for each participant, making each participant easy to identify. Further, not only the participant name but also the speech itself may be colored differently for each participant.
  • As described above, the virtual conference room server 1 includes: the condition receiving unit 1a that executes the condition input reception process (step S201) for receiving an input of a search condition for searching for a participant; the participant search processing unit 1b that searches for participants based on the search condition and presents each extracted person, extracted as a search result, in a selectable manner; the speech history extraction unit 1c that executes the speech history extraction process (step S204) for extracting the speech history of a selected person chosen from among the presented extracted persons; the speech history presentation unit 1d that executes the speech history presentation process (step S205) for presenting each speech in the extracted speech history in a selectable manner; the front and rear speech extraction unit 1e that executes the front and rear speech extraction process (step S206) for extracting some of the speeches before and/or after a selected speech chosen from the presented speeches; and the front and rear speech presentation unit 1f that executes the front and rear speech presentation process (step S207) for presenting the extracted speeches together with the selected speech.
  • In the preceding/following statement presentation process (step S207), identification information (participant ID, participant name, etc.) of at least some of the participants who made the presented statements is shown together with the extracted statements.
  • In step S207, among the participants who made the presented statements, identification information may be shown only for participants other than the extracted persons.
  • In that case, statements by participants who do not match the search condition are presented with identification information, which makes it easy to see who made those statements and prevents such participants from slipping out of view. That is, attention can also be directed to participants other than the one (the selected person) the personnel manager initially focused on.
  • Alternatively, in step S207, identification information may be shown only for the extracted persons among the participants who made the presented statements.
  • Identification information may also be shown only for participants who made statements both before and after the selected statement.
  • Identification information may also be shown only for participants who made a statement between the selected statement and the statement the selected person made immediately before or immediately after it.
  • In step S206, only statements of a predetermined number of characters or more may be extracted.
  • In the preceding/following statement presentation process (step S207), all participants present when the selected statement was made may be presented.
  • A conclusion identification unit 1g that identifies the statement indicating the conclusion of the discussion as a summary statement may be provided.
  • In that case, the summary statement made after the selected statement is presented together with the selected statement.
  • The updated statement history may be presented each time the statement history is updated by a new statement of the selected person.
  • A predetermined number of statements may be extracted as the portion of statements preceding or following the selected statement.
  • The virtual conference room server 1 of the present invention has been described above.
  • The program according to the embodiment causes an arithmetic processing device (such as a CPU) to execute the processing in the virtual conference room server 1.
  • Specifically, the program causes the arithmetic processing device to execute: a procedure for receiving the input of a search condition used to search for participants; a procedure for searching for participants based on the search condition and presenting each extracted person found as a search result in a selectable manner; a procedure for extracting the statement history of a selected person chosen from the extracted persons; and a procedure for presenting each of the extracted statement histories in a selectable manner.
  • This program causes the arithmetic processing unit to execute the processing of steps S201 to S207 in FIG.
  • The virtual conference room server 1 described above can be realized by such a program.
  • Such a program can be stored in advance in an HDD serving as a storage medium built into a device such as a computer, or in a ROM in a microcomputer having a CPU. Alternatively, it can be stored temporarily or permanently on a removable storage medium such as a semiconductor memory, memory card, optical disc, magneto-optical disc, or magnetic disk. Such removable storage media can be provided as so-called package software. The program can also be installed from a removable storage medium onto a personal computer or the like, or downloaded from a download site via a network such as a LAN or the Internet.
  • 1 virtual conference room server, 1a condition reception unit, 1b participant search processing unit, 1c statement history extraction unit, 1d statement history presentation unit, 1e preceding/following statement extraction unit, 1f preceding/following statement presentation unit, 1g conclusion identification unit, 2 communication network, 3 participant terminal, 4 administrator terminal, 50 conference room DB, 51 participant DB, 52 log DB

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The purpose of the present invention is to provide an environment in which it is possible to efficiently evaluate attendees who attend a discussion in which a virtual conference room is used. To this end, provided is an information processing device, comprising: a condition acceptance unit which executes a condition input acceptance process of accepting an input of a search condition for searching for an attendee; an attendee search processing unit which carries out a search for the attendee on the basis of the search condition, and selectably presents each extracted person who has been extracted as a search result; a statement history extraction unit which executes a statement history extraction process of extracting a statement history of a selected person who has been selected from among the extracted persons presented by the attendee search processing unit; a statement history presentation unit which executes a statement history presentation process of selectably presenting each statement of the statement history which has been extracted with the statement history extraction process; a preceding/following statement extraction unit which executes a preceding/following statement extraction process of extracting a portion of statements which precede or follow a selected statement which has been selected from the statements which have been selectably presented; and a preceding/following statement presentation unit which executes a preceding/following statement presentation process of presenting, together with the selected statement, a statement which has been extracted with the preceding/following statement extraction process.

Description

Information processing apparatus, information processing method, program, and storage medium
 The present invention relates to an information processing apparatus, an information processing method, a program, and a storage medium, and more specifically to the extraction and presentation of statements in a virtual conference room.
JP 2012-74808 A
 Internships, in which students gather at a company and experience its work first-hand, are well known. In recent years, some companies have run web internships that allow participation from remote locations over a communication network such as the Internet. Such web internships are highly convenient for students who live in areas from which it is difficult to travel to the company, or who have difficulty finding the time.
 In a web internship, participants may be given a task to discuss among themselves in a virtual conference room. A system using a virtual conference room is known from, for example, Patent Document 1. When a virtual meeting is held with such a system, the organizer (for example, a personnel manager at the company) has to review the content of every statement made in the virtual conference room in order to grasp each participant's aptitude and character.
 However, when there are many participants, evaluating each individual participant can be difficult and time-consuming. Moreover, when discussions are held with the participants divided among multiple virtual conference rooms, it is difficult to follow all of the discussions proceeding simultaneously in those rooms, and it becomes hard to evaluate each participant properly.
 The present invention was made in view of these problems, and aims to provide an environment in which the participants in a discussion held in a virtual conference room can be evaluated efficiently.
 An information processing apparatus according to the present invention includes: a condition reception unit that executes a condition input reception process for receiving the input of a search condition used to search for participants; a participant search processing unit that searches for participants based on the search condition and presents each extracted person found as a search result in a selectable manner; a statement history extraction unit that executes a statement history extraction process for extracting the statement history of a selected person chosen from the extracted persons presented by the participant search processing unit; a statement history presentation unit that executes a statement history presentation process for presenting each statement of the extracted statement history in a selectable manner; a preceding/following statement extraction unit that executes a preceding/following statement extraction process for extracting a portion of the statements made before or after a selected statement chosen from the selectably presented statements; and a preceding/following statement presentation unit that executes a preceding/following statement presentation process for presenting the statements extracted in the preceding/following statement extraction process together with the selected statement.
 As a result, the statement history of the person of interest (the selected person chosen from the persons extracted by the search condition) is extracted and presented, and an environment is provided in which the selected statement is extracted and presented together with the surrounding statements of other participants.
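The search → history extraction → preceding/following extraction flow described above can be sketched as follows. This is a minimal illustration only: the list-of-dicts log format, the function names, and the fixed context window are all assumptions, not the patented implementation.

```python
# Minimal sketch of the processing flow: participant search, statement
# history extraction, and preceding/following statement extraction.

def search_participants(participants, condition):
    """Participant search: return participants matching the search condition."""
    return [p for p in participants if condition(p)]

def extract_history(log, person_id):
    """Statement history extraction for the selected person."""
    return [u for u in log if u["participant_id"] == person_id]

def extract_context(log, selected, window=2):
    """Preceding/following statement extraction around the selected statement."""
    i = log.index(selected)
    return log[max(0, i - window): i + window + 1]

log = [
    {"participant_id": "A", "text": "Hello everyone"},
    {"participant_id": "B", "text": "Shall we start with the budget?"},
    {"participant_id": "A", "text": "Yes, here is my proposal"},
    {"participant_id": "C", "text": "I have a question about it"},
]
history = extract_history(log, "A")
context = extract_context(log, history[1], window=1)
print([u["participant_id"] for u in context])  # speakers around the selected statement
```

Presenting `context` together with the selected statement corresponds to the preceding/following statement presentation process.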
 In the preceding/following statement presentation process of the information processing apparatus described above, identification information of at least some of the participants who made the presented statements is shown together with the extracted statements.
 This makes the position and relationship of each statement easier to grasp.
 In the preceding/following statement presentation process of the information processing apparatus described above, among the participants who made the presented statements, identification information is shown only for participants other than the extracted persons.
 As a result, statements by participants who do not match the search condition are presented together with their identification information.
 In the preceding/following statement presentation process of the information processing apparatus described above, among the participants who made the presented statements, identification information is shown only for the extracted persons.
 As a result, statements by participants who match the search condition are presented together with their identification information.
 In the preceding/following statement presentation process of the information processing apparatus described above, among the participants who made the presented statements, identification information is shown only for participants who made statements both before and after the selected statement.
 As a result, the identification information of participants presumed to be engaged in conversation that is important for evaluating the selected person is presented together with their statements.
 In the preceding/following statement presentation process of the information processing apparatus described above, among the participants who made the presented statements, identification information is shown only for participants who made a statement between the selected statement and the statement the selected person made immediately before or immediately after it.
 As a result, the identification information of participants presumed to be engaged in conversation that is important for evaluating the selected person is presented together with their statements.
 In the preceding/following statement extraction process of the information processing apparatus described above, only statements of a predetermined number of characters or more are extracted.
 As a result, short back-channel statements that merely signal agreement, for example, are not extracted.
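A character-count filter of this kind might look like the following. The threshold value and names are assumptions; the patent only states "a predetermined number of characters or more".

```python
MIN_CHARS = 10  # assumed threshold; the patent does not fix a value

def drop_short_statements(utterances, min_chars=MIN_CHARS):
    """Keep only statements long enough to carry content, filtering out
    short back-channel replies such as "OK"."""
    return [u for u in utterances if len(u["text"]) >= min_chars]

utterances = [
    {"participant_id": "B", "text": "OK"},
    {"participant_id": "C", "text": "We should compare both vendors first"},
]
print([u["text"] for u in drop_short_statements(utterances)])
```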
 In the preceding/following statement presentation process of the information processing apparatus described above, all the participants present when the selected statement was made are presented.
 As a result, information for grasping the situation at the time the selected statement was made is presented.
 The information processing apparatus described above includes a conclusion identification unit that identifies the statement indicating the conclusion of the discussion as a summary statement, and in the preceding/following statement presentation process, the summary statement made after the selected statement is presented together with the selected statement.
 As a result, the selected statement and the conclusion of the discussion are presented together.
 In the statement history presentation process of the information processing apparatus described above, an updated statement history is presented each time the statement history is updated by a new statement of the selected person.
 As a result, the statement history of the selected person is kept up to date without entering the conference room.
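One simple way to refresh the presented history without re-entering the room is a count-based diff, as sketched below. This is purely illustrative; a real system might instead use timestamps or statement IDs, and the patent does not specify the mechanism.

```python
def new_statements(full_history, presented_count):
    """Return the statements added since the history was last presented,
    so only the new portion needs to be pushed to the display."""
    return full_history[presented_count:]

history = ["first statement", "second statement", "third statement"]
print(new_statements(history, 1))  # everything after the first presented statement
```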
 An information processing method according to the present invention causes an information processing apparatus to execute: a condition input reception processing step for receiving the input of a search condition used to search for participants; a participant extraction processing step for searching for participants based on the search condition and presenting each extracted person found as a search result in a selectable manner; a statement history extraction processing step for extracting the statement history of a selected person chosen from the extracted persons presented in the participant extraction processing step; a statement history presentation processing step for presenting each statement of the extracted statement history in a selectable manner; a preceding/following statement extraction processing step for extracting a portion of the statements made before or after a selected statement chosen from the selectably presented statements; and a preceding/following statement presentation processing step for presenting the statements extracted in the preceding/following statement extraction processing step together with the selected statement.
 This information processing method executes processing for providing an environment in which the participants in a discussion can be evaluated efficiently.
 A program according to the present invention causes an arithmetic processing device to execute the processing of the information processing method described above.
 A storage medium according to the present invention stores the above program. The information processing apparatus described above is realized by this program and storage medium.
 According to the present invention, it is possible to provide an environment in which the participants in a discussion can be evaluated efficiently.
FIG. 1 shows the overall configuration of an embodiment of the present invention. FIG. 2 is a block diagram of the virtual conference room server of the embodiment. FIG. 3 is a block diagram of the computer of the embodiment. FIG. 4 shows the overall flow of processing. FIG. 5 shows an example of a search screen. FIG. 6 shows an example of an extracted-participant display screen. FIG. 7 shows an example of a statement history display screen. FIGS. 8 to 12 show first to fifth examples of the preceding/following statement display screen. FIGS. 13 and 14 show examples of the preceding/following statement display screen in alternative first and second examples of the preceding/following statement extraction process.
 In the present embodiment, an example is described in which a company running a web internship has participants hold a discussion in a virtual conference room. A virtual conference room server is taken as an example of an information processing apparatus that provides the virtual conference room and various functions that assist in evaluating the participants.
 The embodiment is described below in the following order.
<1. Overall configuration>
<2. Hardware configuration>
<3. Various databases>
<4. Flow of processing>
<5. Front and rear remark display screen>
[5-1. First example]
[5-2. Second example]
[5-3. Third example]
[5-4. Fourth example]
[5-5. Fifth example]
<6. Another example of before and after speech extraction processing>
[6-1. Another first example]
[6-2. Another second example]
<7. Modification>
<8. Summary>
<9. Program and Storage Medium>
<1. Overall configuration>

 The configuration of the entire network system including the virtual conference room server 1 of the present embodiment is described with reference to FIGS. 1 and 2.
 In the following description, a person on the organizer side who uses the virtual conference room server 1 (for example, a personnel manager) is referred to as the administrator.
 As shown in FIG. 1, the virtual conference room server 1 of the present embodiment is connected via the communication network 2 so that it can communicate with the participant terminals 3, 3, 3... and the administrator terminal 4.
 The virtual conference room server 1 provides a function for dividing the participants in a meeting or discussion using virtual conference rooms into groups as necessary, a function for assigning a virtual conference room to each group, a search function for participants and statements, and a function for presenting the participants and statements extracted as search results.
 The virtual conference room server 1 manages various databases (DBs) in order to provide the functions described above: for example, a conference room DB 50 that stores information on the virtual conference rooms, a participant DB 51 that stores information on the participants in discussions using the virtual conference rooms, and a log DB 52 that stores a log of the statements made in each virtual conference room.
 Details of each DB are described later.
 The configuration of the virtual conference room server 1 is described in more detail with reference to FIG. 2.
 The virtual conference room server 1 includes a condition reception unit 1a, a participant search processing unit 1b, a statement history extraction unit 1c, a statement history presentation unit 1d, a preceding/following statement extraction unit 1e, a preceding/following statement presentation unit 1f, and a conclusion identification unit 1g.
 The condition reception unit 1a executes a condition input reception process for receiving the input of a search condition used to search for participants or statements.
 The participant search processing unit 1b performs a search based on a search condition for finding specific participants, and executes an extraction process that extracts the participants matching the condition as extracted persons. It also executes an extracted-participant presentation process that presents the search results to the participant terminal 3 and the administrator terminal 4.
 The statement history extraction unit 1c executes a statement history extraction process that extracts the statement history of the person the user selects from the extracted persons (the selected person).
 The statement history presentation unit 1d executes a statement history presentation process that presents the extracted statement history to the participant terminal 3 and the administrator terminal 4.
 The preceding/following statement extraction unit 1e executes a preceding/following statement extraction process that extracts the statement selected from the presented statement history of the selected person (the selected statement) together with a portion of the statements before and after it (the preceding/following statements).
 The preceding/following statement presentation unit 1f executes a preceding/following statement presentation process that presents the extracted preceding/following statements to the participant terminal 3 and the administrator terminal 4.
 The conclusion identification unit 1g executes a conclusion identification process that identifies the statement constituting the conclusion of a discussion.
 The server also performs other processing, such as assigning participants to conference rooms, managing room entry and exit and recording it in the log, generating the various web page data for displaying the conference room selection screen and the virtual conference room screen on user terminals, and transmitting that web page data. The virtual conference room server 1 is provided with the units necessary for these tasks.
 In the configuration of FIG. 1, the communication network 2 is not limited to any particular configuration; for example, the Internet, an intranet, an extranet, a LAN (Local Area Network), a CATV (Community Antenna TeleVision) network, a virtual private network, a telephone network, a mobile communication network, or a satellite communication network may be used.
 Various transmission media may likewise make up all or part of the communication network 2: wired media such as IEEE (Institute of Electrical and Electronics Engineers) 1394, USB (Universal Serial Bus), power-line carrier, or telephone lines, as well as wireless media such as IrDA (Infrared Data Association) infrared, Bluetooth (registered trademark), 802.11 wireless, mobile phone networks, satellite links, or digital terrestrial networks.
 The participant terminal 3 shown in FIG. 1 is a terminal used by a participant in the web internship.
 The administrator terminal 4 is an information processing apparatus used by, for example, a personnel manager of the company that manages the virtual conference rooms and hosts the web internship. The present invention is also applicable when a separate company manages the virtual conference rooms and the company hosting the web internship borrows them for the participants' discussion.
 The participant terminal 3 and the administrator terminal 4 execute various transmission and reception processes as necessary. Each of them is, for example, a PC (Personal Computer), a feature phone, or a PDA (Personal Digital Assistant) with a communication function, or a smart device such as a smartphone or tablet.
<2. Hardware configuration>

 FIG. 3 illustrates the hardware of the virtual conference room server 1, the participant terminal 3, and the administrator terminal 4 shown in FIG. 1. In the computer device of each server or terminal, a CPU (Central Processing Unit) 101 executes various processes according to a program stored in a ROM (Read Only Memory) 102 or a program loaded from a storage unit 108 into a RAM (Random Access Memory) 103. The RAM 103 also stores, as appropriate, the data the CPU 101 needs to execute these processes.
 The CPU 101, ROM 102, and RAM 103 are connected to one another via a bus 104, to which an input/output interface 105 is also connected.
 Connected to the input/output interface 105 are: an input device 106 such as a keyboard, mouse, or touch panel; an output device 107 such as a display (LCD (Liquid Crystal Display), CRT (Cathode Ray Tube), or organic EL (Electroluminescence) panel) and speakers; a storage unit 108 such as an HDD (Hard Disk Drive) or flash memory device; and a communication unit 109 that performs communication over the communication network 2 and between devices.
 A media drive 110 is also connected to the input/output interface 105 as necessary; a removable medium 111 such as a magnetic disk, optical disc, magneto-optical disc, or semiconductor memory is mounted as appropriate, and information is written to and read from it.
 In such a computer device, data and programs are uploaded and downloaded through communication by the communication unit 109, and data and programs can also be exchanged via the removable medium 111.
 When the CPU 101 performs processing operations based on the various programs, the information processing and communication described later are executed in each of the virtual conference room server 1, the participant terminal 3, and the administrator terminal 4.
 Each information processing apparatus making up the virtual conference room server 1, the participant terminal 3, and the administrator terminal 4 is not limited to a single computer device as shown in FIG. 3; it may be configured as a system of multiple computer devices. The multiple computer devices may be systematized over a LAN or the like, or placed at remote locations communicating via a VPN (Virtual Private Network) over the Internet or the like.
<3. Various databases>

The conference room DB 50 is a DB that stores information on the virtual conference rooms managed by the virtual conference room server 1. The conference room DB 50 stores, for example, a conference room ID (Identification) for identifying each virtual conference room, its usage status (availability), and its usage start date and time.
The participant DB 51 is a DB that stores information on the participants who take part in the discussions and debates held in the virtual conference rooms. The participant DB 51 stores, as participant information, for example, a participant ID, a login PW (Password), and the participant's name, address, and contact details (telephone number and e-mail address).
The log DB 52 is a DB that stores the content of the discussions held in the virtual conference rooms. The log DB 52 stores, for example, the conference room ID of a virtual conference room and the participant IDs of the participants who took part in the discussion. When a participant enters or leaves the virtual conference room during a discussion, the entry/exit information is stored together with time information. Statement information made in the virtual conference room is also stored: for each statement, the time it was made, the participant ID (or participant name) of the speaker, and the content of the statement.
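Though the disclosure does not prescribe a storage format for the log DB 52, the records described above (entry/exit events carrying time information, and statements carrying time, speaker, and content) could be sketched as follows; the class and field names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EntryExitEvent:
    """Entry or exit of a participant, stored together with time information."""
    room_id: str          # conference room ID
    participant_id: str
    entered: bool         # True = entered the room, False = left the room
    time: datetime

@dataclass
class Statement:
    """One statement made in a virtual conference room."""
    room_id: str
    participant_id: str   # speaker (a participant name could be stored instead)
    time: datetime
    content: str

# A log entry for one discussion: room ID, participants, events, statements.
log = {
    "room_id": "R-001",
    "participant_ids": ["A", "B", "C", "D", "E", "F", "G"],
    "events": [EntryExitEvent("R-001", "A", True, datetime(2015, 5, 1, 15, 20, 0))],
    "statements": [Statement("R-001", "B", datetime(2015, 5, 1, 15, 24, 21),
                             "I agree with the proposal.")],
}
```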
<4. Flow of processing>

Processing performed when a personnel manager uses the administrator terminal 4 to extract participants' statements will be described with reference to FIG. 4.
First, in step S101, the administrator terminal 4 executes a condition setting screen display process that displays, on the administrator terminal 4, a search screen for specifying search conditions for finding a specific participant. An example of the search screen is shown in FIG. 5. The search screen is displayed, for example, on a web browser 5 installed on the administrator terminal 4, and includes an input field 6 for entering free-text keywords as search conditions, various drop-down lists 7, 7, ... for narrowing down the search results, and a search button 8 for executing the search.
The character string entered in the input field 6 is used to extract participants; it may relate, for example, to the content of a participant's statements or to a participant's attributes or profile.
When a plurality of character strings (or sentences) are entered in the input field 6, an AND search or an OR search is executed.
Each drop-down list 7 allows selection of a participant attribute or profile item on the basis of which participants are extracted. For example, the education drop-down list 7 offers items such as "high school graduate", "university graduate", and "expected university graduate".
The search button 8 causes the virtual conference room server 1 to execute a search process based on the information entered or selected via the input field 6 and the drop-down lists 7, 7, ....
When the personnel manager presses the search button 8, the condition transmission process of step S102 in FIG. 4 is executed on the administrator terminal 4. In the condition transmission process, the search conditions (that is, the information entered or selected via the input field 6 and the drop-down lists 7, 7, ...) are transmitted to the virtual conference room server 1.
Subsequently, in step S201, the virtual conference room server 1 executes a condition input reception process that receives the search conditions.
Next, in step S202, the virtual conference room server 1 executes a participant extraction process. In the participant extraction process, a search based on the received search conditions is executed, and the target participants (extracted persons) are extracted.
Further, in step S203, the virtual conference room server 1 executes an extracted participant presentation process that presents the persons extracted in step S202 to the administrator terminal 4. In the extracted participant presentation process, web page data is transmitted for display on the administrator terminal 4 in a state where each extracted participant can be selected (for example, clicked).
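As a sketch only — the disclosure does not fix a matching algorithm, and the record fields and the AND/OR handling below are assumptions — the participant extraction of step S202 might filter participants whose profile or statement content contains the search keywords:

```python
def extract_participants(participants, statements_by_id, keywords, mode="AND"):
    """Return participants whose profile or statement content matches the keywords.

    participants: list of dicts with "id", "name", "profile" (free text)
    statements_by_id: dict mapping participant id -> list of statement strings
    keywords: list of search strings; mode: "AND" or "OR"
    """
    results = []
    for p in participants:
        # Search over both the profile text and everything the participant said.
        haystack = p["profile"] + " " + " ".join(statements_by_id.get(p["id"], []))
        hits = [kw in haystack for kw in keywords]
        if (mode == "AND" and all(hits)) or (mode == "OR" and any(hits)):
            results.append(p)
    return results

participants = [
    {"id": "A", "name": "A", "profile": "university graduate"},
    {"id": "B", "name": "B", "profile": "high school graduate"},
]
statements = {"A": ["I propose a new plan"], "B": ["I agree with the plan"]}
print([p["id"] for p in extract_participants(participants, statements, ["plan"])])
# ['A', 'B'] — both mention "plan"
```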
Upon receiving the web page data containing the extracted person information, the administrator terminal 4 executes an extracted participant display process in step S103. The extracted participant display process displays, on the web browser 5, an extracted participant display screen showing the participants extracted as search results based on the search conditions specified on the search screen of FIG. 5.
An example of the extracted participant display screen is shown in FIG. 6. On the extracted participant display screen, the various search conditions are displayed as in FIG. 5, and an extracted participant display field 9 is provided below them.
In the extracted participant display field 9, the participants extracted as matching the search conditions are displayed as extracted participants 10, 10, 10. Each extracted participant 10 is displayed as a character string (participant name) that can be selected (for example, clicked).
When the personnel manager selects one of the character strings displayed as extracted participants 10, the participant selection process of step S104 in FIG. 4 is executed on the administrator terminal 4.
In the participant selection process, information (participant selection information) on the participant selected by the personnel manager (the selected person) is transmitted to the virtual conference room server 1.
Upon receiving the participant selection information, the virtual conference room server 1 executes, in the subsequent step S204, a statement history extraction process that extracts the statement history of the selected person.
Then, in step S205, the virtual conference room server 1 executes a statement history presentation process that transmits the extracted statement history to the administrator terminal 4 for display. In this process, web page data for display on the administrator terminal 4, in which each statement in the history can be selected (for example, clicked), is transmitted as statement history information.
Upon receiving the statement history information, the administrator terminal 4 executes, in the next step S105, a statement history display process that displays the statement history information on a statement history display screen on the administrator terminal 4.
An example of the statement history display screen is shown in FIG. 7. FIG. 7 shows the case where the personnel manager has selected Mr. B from the participants (Mr. A, Mr. B, Mr. G) displayed on the extracted participant display screen of FIG. 6. On the statement history display screen displayed on the web browser 5, the statements Mr. B has already made are shown in chronological order. In the figure, older statements appear at the top, but the list may instead be ordered with newer statements at the top.
Each of Mr. B's statements shown in FIG. 7 is displayed as a selectable character string.
Instead of composing each statement as a selectable character string, a button for displaying the preceding and following statements may be provided for each statement.
Subsequently, when the personnel manager selects one of Mr. B's statements displayed on the statement history display screen, the administrator terminal 4 executes a statement selection process in step S106. In the statement selection process, the statement selected by the personnel manager (the selected statement) is transmitted to the virtual conference room server 1 as selected statement information.
Upon receiving the selected statement information, the virtual conference room server 1 executes, in the subsequent step S206, a preceding/following statement extraction process. This process extracts statements made before and after the selected statement.
As the preceding and following statements, for example, only the statements before the selected statement may be extracted, in order to grasp the flow of discussion in which the selected statement was written. Alternatively, only the statements after the selected statement may be extracted, in order to grasp what discussion followed from it. Alternatively, a portion of the statements both before and after may be extracted. Specific examples of these are described later.
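One hypothetical shape for the windowed case, in which a fixed number of statements is taken on each side of the selected statement (the count of ten matches the display examples described later; the function itself is an assumption, not part of the disclosure):

```python
def extract_surrounding(statements, selected_index, before=10, after=10):
    """Extract up to `before` statements preceding and `after` statements
    following the selected statement, with the selected statement included.

    statements: chronologically ordered list of statements;
    selected_index: position of the selected statement in that list.
    """
    start = max(0, selected_index - before)
    end = min(len(statements), selected_index + after + 1)
    return statements[start:end]

history = [f"statement {i}" for i in range(40)]
window = extract_surrounding(history, selected_index=20)
print(window[0], "...", window[-1], len(window))
# statement 10 ... statement 30 21
```

Near the start of the log the window is simply clipped, so fewer than 21 statements may be returned.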
Subsequently, in step S207, the virtual conference room server 1 executes a preceding/following statement presentation process that transmits the extracted preceding and following statements to the administrator terminal 4 for display. In this process, the extracted statements are transmitted to the administrator terminal 4 as preceding/following statement information.
The preceding/following statement information also includes information on the speakers to be displayed together with the statements.
Upon receiving the preceding/following statement information, the administrator terminal 4 executes, in step S107, a preceding/following statement display process that displays the statements on a preceding/following statement display screen on the administrator terminal 4.
The personnel manager can thereby review the selected person's selected statement and the surrounding statements displayed on the administrator terminal 4, and can evaluate the selected person.
When the selected statement and the preceding and following statements are displayed on the administrator terminal 4, identification information (for example, the speaker's participant ID or participant name) is displayed together with each statement so that the participant who made it can be identified. The identification information need not be displayed for every statement shown on the administrator terminal 4; specific examples are described below.
<5. Preceding/following statement display screen>

[5-1. First example]
In the first example of the preceding/following statement display screen, described with reference to FIG. 8, ten statements before and ten statements after the selected statement are displayed. That is, 21 statements are displayed on the screen, and identification information identifying the speaker is displayed for each of the 21 statements.
As shown in FIG. 8, the preceding/following statement display screen shows the selected statement, highlighted by a surrounding border 11, together with the 20 preceding and following statements. For each statement, the participant name is displayed as identification information identifying the speaker (Mr. A, Mr. B, Mr. C, Mr. D).
Furthermore, the participants present at the time of the discussion are displayed to the right of the statement history. For example, suppose the participants in the discussion were A, B, C, D, E, F, and G, but only A, B, C, D, and E were in the conference room during the period shown in FIG. 8 (15:22:51 to 15:27:03); in that case, A, B, C, D, and E are displayed to the right of the statement history as the participants.
The participants present at the time of the discussion can be identified on the basis of the time information and the entry/exit information stored in the log DB 52.
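A minimal sketch of that presence check — replaying the entry/exit events of the log DB 52 against a time window; the tuple representation of events is an assumption — might be:

```python
from datetime import datetime as dt

def participants_present(events, start, end):
    """Participants who were in the room at some point during [start, end].

    events: chronological list of (time, participant_id, entered) tuples,
    where entered is True for an entry event and False for an exit event.
    """
    in_room = set()   # room occupancy while replaying the event log
    present = set()   # participants who overlapped the [start, end] window
    for time, pid, entered in events:
        if time > end:
            break
        if entered:
            in_room.add(pid)
            if time >= start:      # entered inside the window
                present.add(pid)
        else:
            if time >= start:      # was still inside when the window began
                present.add(pid)
            in_room.discard(pid)
    return present | in_room       # plus everyone still present at window end

events = [(dt(2015, 5, 1, 15, 0, 0), p, True) for p in "ABCDEF"]
events.append((dt(2015, 5, 1, 15, 10, 0), "F", False))   # F left early
start, end = dt(2015, 5, 1, 15, 22, 51), dt(2015, 5, 1, 15, 27, 3)
print(sorted(participants_present(events, start, end)))
# ['A', 'B', 'C', 'D', 'E'] — F left before the window; G never entered
```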
By viewing the preceding/following statement display screen of FIG. 8, the personnel manager can grasp in what discussion Mr. B made the selected statement and what discussion followed as a result.
[5-2. Second example]
In the second example of the preceding/following statement display screen, ten statements before and ten after the selected statement are displayed, as in the first example (see FIG. 9). Among the 21 displayed statements, however, identification information identifying the speaker is displayed only for statements made by persons other than the extracted persons. Persons other than the extracted persons are those other than the persons extracted in the extracted participant presentation process of step S203 in FIG. 4 (that is, Mr. A, Mr. B, Mr. G). Accordingly, in FIG. 9, although the five participants A, B, C, D, and E were in the room, identification information identifying the speaker is displayed only for the statements of C, D, and E, the participants other than the extracted persons.
This highlights the statements of participants other than those being examined (the extracted persons), and helps prevent noteworthy statements made by participants who did not match the search conditions from being overlooked.
[5-3. Third example]
In the third example of the preceding/following statement display screen, ten statements before and ten after the selected statement are displayed, as in the first example (see FIG. 10). Among the 21 displayed statements, identification information identifying the speaker is displayed only for statements made by the extracted persons, that is, the persons extracted in the extracted participant presentation process of step S203 in FIG. 4 (Mr. A, Mr. B, Mr. G). Accordingly, in FIG. 10, among the participants A, B, C, D, and E, identification information identifying the speaker is displayed only for the statements of A and B.
Since the statements made by the extracted persons are thereby made clear, the extracted persons can be evaluated easily.
[5-4. Fourth example]
In the fourth example of the preceding/following statement display screen, ten statements before and ten after the selected statement are displayed, as in the first example (see FIG. 11). Among the 21 displayed statements, identification information identifying the speaker is displayed only for the statements of participants who spoke both before and after the selected statement.
Specifically, as shown in FIG. 11, A, B, C, and D spoke both before and after the selected statement, whereas E spoke only after it. Identification information identifying the participant is therefore displayed only for the statements of the participants other than E.
This clarifies which participants are important when evaluating Mr. B through the selected statement, so that Mr. B can be evaluated appropriately.
Although here the identification information is displayed only for participants who spoke both before and after the selected statement among the 21 displayed statements, the rule need not be limited to those 21 statements: identification information may instead be displayed only for participants who spoke both before and after the selected statement anywhere in the log.
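The labelling rule of this fourth example reduces to an intersection of the sets of speakers before and after the selected statement. A hypothetical sketch, assuming statements are (speaker, text) tuples:

```python
def speakers_to_label(statements, selected_index):
    """Fourth example: label only participants who spoke both before and
    after the selected statement.

    statements: chronological list of (speaker, text) tuples covering the
    displayed window; selected_index: position of the selected statement.
    """
    before = {spk for spk, _ in statements[:selected_index]}
    after = {spk for spk, _ in statements[selected_index + 1:]}
    return before & after

window = [("A", "..."), ("C", "..."), ("B", "selected"), ("A", "..."),
          ("C", "..."), ("E", "...")]
print(sorted(speakers_to_label(window, 2)))  # ['A', 'C'] — E spoke only after
```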
This example may also be combined with the second example above. Specifically, among C, D, and E, the participants other than the extracted persons, identification information identifying the speaker is displayed only for participants who spoke both before and after the selected statement, that is, only for the statements of C and D.
This example may further be combined with the third example above. Specifically, among the extracted persons A, B, and G, identification information identifying the speaker is displayed only for participants who spoke both before and after the selected statement, that is, only for the statements of A and B.
[5-5. Fifth example]
In the fifth example of the preceding/following statement display screen, ten statements before and ten after the selected statement are displayed, as in the first example (see FIG. 12). Among the 21 displayed statements, attention is paid to the statements made by the speaker of the selected statement (12 in FIG. 12), that is, Mr. B, immediately before the selected statement (12a in FIG. 12) and immediately after it (12b in FIG. 12). Identification information identifying the speaker is displayed only for participants who spoke between the selected statement 12 and the immediately preceding statement 12a, or between the selected statement 12 and the immediately following statement 12b.
Specifically, in FIG. 12, the participants who spoke between the selected statement 12 and the preceding statement 12a are A, C, and D, and those who spoke between the selected statement 12 and the following statement 12b are A and E. In the fifth example, therefore, identification information identifying the speaker is displayed only for the statements of A, C, D, and E.
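A hypothetical sketch of this fifth example's rule — locate the selected speaker's own adjacent statements and collect the speakers in between; the (speaker, text) tuple shape is assumed:

```python
def speakers_near_selected(statements, selected_index):
    """Fifth example: label speakers who spoke between the selected statement
    and the selected speaker's own adjacent statements (the one just before
    and the one just after, made by the same speaker).

    statements: chronological list of (speaker, text) tuples;
    selected_index: position of the selected statement.
    """
    selected_speaker = statements[selected_index][0]

    # Index of the selected speaker's previous statement (window start if none).
    prev_i = next((i for i in range(selected_index - 1, -1, -1)
                   if statements[i][0] == selected_speaker), -1)
    # Index of the selected speaker's next statement (window end if none).
    next_i = next((i for i in range(selected_index + 1, len(statements))
                   if statements[i][0] == selected_speaker), len(statements))

    between = statements[prev_i + 1:selected_index] + \
              statements[selected_index + 1:next_i]
    return {spk for spk, _ in between}

window = [("B", "12a"), ("A", "..."), ("C", "..."), ("D", "..."),
          ("B", "selected 12"), ("A", "..."), ("E", "..."), ("B", "12b")]
print(sorted(speakers_near_selected(window, 4)))  # ['A', 'C', 'D', 'E']
```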
This clarifies which participants are important when evaluating Mr. B through the selected statement, so that Mr. B can be evaluated appropriately.
This example may also be combined with the second example above. Specifically, among C, D, and E, the participants other than the extracted persons, identification information identifying the speaker is displayed only for participants who spoke between the selected statement 12 and the preceding statement 12a, or between the selected statement 12 and the following statement 12b; that is, only for the statements of C, D, and E.
This example may further be combined with the third example above. Specifically, among the extracted persons A, B, and G, identification information identifying the speaker is displayed only for participants who spoke between the selected statement 12 and the preceding statement 12a, or between the selected statement 12 and the following statement 12b; that is, only for Mr. A's statements.
<6. Other examples of the preceding/following statement extraction process>

[6-1. Another first example]
Another first example of the preceding/following statement extraction process will be described with reference to FIG. 13.
In this example, as in the extraction process described above, a portion (for example, a predetermined number) of the statements before and after the selected statement is extracted. However, in order to omit statements of little importance to following the flow of the discussion, only statements of at least a predetermined number of characters are extracted.
Specifically, if the predetermined number of characters is five, statements of one to four characters are not extracted. That is, Mr. A's statement at 15:22:57, Mr. B's statement at 15:24:21, and Mr. E's statement at 15:25:09 in FIG. 8 are not extracted. Consequently, as shown in FIG. 13, only the extracted statements of five or more characters are displayed on the preceding/following statement display screen.
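The character-count filter could be sketched as follows (the threshold of five characters is taken from the example above; the data shape is an assumption):

```python
MIN_CHARS = 5  # predetermined character count; shorter statements are dropped

def filter_short_statements(statements, min_chars=MIN_CHARS):
    """Drop statements shorter than min_chars characters, so that simple
    interjections do not clutter the extracted discussion."""
    return [(spk, text) for spk, text in statements if len(text) >= min_chars]

window = [("A", "Yes."), ("B", "I think the schedule is too tight."),
          ("E", "OK"), ("C", "Let's move the deadline by a week.")]
print([spk for spk, _ in filter_short_statements(window)])  # ['B', 'C']
```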
In this way, statements such as simple interjections, which are unnecessary for grasping the outline of the discussion, are excluded from the extraction targets of the preceding/following statement extraction process, making the content of the discussion easier to grasp.
[6-2. Another second example]
Another second example of the preceding/following statement extraction process will be described with reference to FIG. 14.
In this example, in addition to the processes described so far as the preceding/following statement extraction process, a process of identifying the statement that presents the conclusion of the discussion as a summary statement is executed.
The summary statement may be identified, for example, by automatically judging the statement content (for example, automatically treating a statement containing a predetermined character string such as "summary" or "conclusion" as the summary statement); the virtual conference room may provide a function by which each participant or the personnel manager can designate a summary statement among the statements; or the final statement of a series of discussions (excluding statements that are mere greetings, such as "Excuse me, I'll be leaving now") may be judged to be the summary statement.
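The automatic variant of summary-statement identification, matching predetermined character strings, could be sketched as follows (the marker list and data shape are illustrative assumptions based on the "summary"/"conclusion" examples above):

```python
# Marker strings assumed for illustration; the disclosure names "summary"
# and "conclusion" as examples of predetermined character strings.
SUMMARY_MARKERS = ("summary", "conclusion", "まとめ", "結論")

def find_summary_statement(statements, markers=SUMMARY_MARKERS):
    """Return the first (speaker, text) statement containing a summary
    marker, or None if no statement qualifies."""
    for spk, text in statements:
        if any(marker in text for marker in markers):
            return (spk, text)
    return None

log = [("B", "I think option 2 is cheaper."),
       ("A", "In conclusion, we will adopt option 2.")]
print(find_summary_statement(log))
# ('A', 'In conclusion, we will adopt option 2.')
```

A per-discussion fallback (taking the final non-greeting statement) or manual designation, both mentioned above, could replace this automatic check.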
When the executed preceding/following statement extraction process includes the process of identifying the statement presenting the conclusion of the discussion as a summary statement, the preceding/following statement display process of the administrator terminal 4 displays, for example, the screen shown in FIG. 14 on the administrator terminal 4.
Specifically, as shown in FIG. 14, Mr. A's summary statement is displayed below the statement history.
This makes it possible to grasp how the selected statement of Mr. B, the person being evaluated, relates to the summary statement, so that Mr. B can be evaluated more appropriately.
<7. Modification>

In the statement history presentation process of step S205 in FIG. 4, the statement history displayed on the administrator terminal 4 may always be kept up to date. Specifically, when the selected person makes a new statement after the statement history presentation process of step S205 has been executed, the processes of steps S204 and S205 may be executed again so that the statement history displayed on the administrator terminal 4 is updated.
Since the personnel manager can then always view the latest statement history, the selected person can be evaluated appropriately.
In each of the preceding/following statement display screens described above, identification information (for example, the speaker's participant name) is displayed for statements made by particular speakers, but particular speakers may be made easier to identify in other ways. For example, the speaker may be indicated for every statement, with the identification information (speaker name) of particular speakers displayed in bold or colored characters.
Each of the effects described for the respective examples above can be obtained in this way as well.
In each of the preceding/following statement display screens shown in FIGS. 8 to 14, the number of statements made by each participant may be displayed together with the participant list shown to the right of the statement history. The statement count may be the number of statements within the extracted preceding and following statements, or the total number of statements including all others.
This presents the statement count as one index for evaluating each participant.
In each of the preceding/following statement display screens shown in FIGS. 8 to 14, the identification information of each speaker (for example, the participant name) may be displayed in a different color for each participant.
This makes each participant easier to distinguish.
In addition to the participant name, the color of the statements themselves may also differ from participant to participant.
<8. Summary>

As described above, the virtual conference room server 1 includes: a condition receiving unit 1a that executes a condition input receiving process (step S201) for receiving input of a search condition for searching for participants; a participant search processing unit 1b that searches for participants based on the search condition and presents each extracted person, extracted as a search result, in a selectable manner; a statement history extraction unit 1c that executes a statement history extraction process (step S204) for extracting the statement history of a selected person chosen from the extracted persons presented by the participant search processing unit; a statement history presentation unit 1d that executes a statement history presentation process (step S205) for presenting each statement of the extracted statement history in a selectable manner; a preceding/following statement extraction unit 1e that executes a preceding/following statement extraction process (step S206) for extracting some of the statements before or after the selected statement chosen from the selectably presented statements; and a preceding/following statement presentation unit 1f that executes a preceding/following statement presentation process (step S207) for presenting, together with the selected statement, the statements extracted in the preceding/following statement extraction process.
As a result, the position of the selected person's selected statement can be confirmed without reviewing every statement made in the virtual conference room, providing an environment in which participants in a discussion can be evaluated efficiently.
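The flow of steps S201 through S207 can be sketched as follows. This is a minimal illustration only, not the server's actual implementation; the data model (a list of (speaker, text) tuples) and all names are assumptions made for the example.

```python
# Hypothetical sketch of the S201-S207 flow: receive a search condition,
# search participants, list a selected person's statements, then show a
# chosen statement together with its neighboring statements.

def search_participants(participants, condition):
    # S201-S203: return participants matching the search condition
    return [p for p in participants if condition(p)]

def statement_history(log, person):
    # S204: indices of all statements made by the selected person
    return [i for i, (speaker, _) in enumerate(log) if speaker == person]

def present_with_neighbors(log, index, before=1, after=1):
    # S206-S207: the selected statement plus some statements around it
    start = max(0, index - before)
    return log[start:index + after + 1]

people = [{"name": "ando", "dept": "sales"}, {"name": "baba", "dept": "dev"}]
hits = search_participants(people, lambda p: p["dept"] == "sales")

log = [("ando", "Hello"), ("baba", "Agenda?"), ("ando", "Costs are up"),
       ("chiba", "Agreed"), ("ando", "Next steps")]
history = statement_history(log, "ando")        # indices 0, 2 and 4
view = present_with_neighbors(log, history[1])  # statements 1 through 3
```

In practice each step would query the conference room, participant, and log databases rather than in-memory lists, but the selection logic is the same.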
Further, as described in the first to fifth examples of the preceding/following statement display screen, the preceding/following statement presentation process (step S207) presents, together with the extracted statements, identification information (such as a participant ID or participant name) of at least some of the participants who made those statements.
This makes the position and relationships of each statement easy to grasp, so the flow of the discussion is easier to follow.
Furthermore, as described in the second example of the preceding/following statement display screen, the preceding/following statement presentation process (step S207) presents identification information only for those participants, among the participants who made the statements, who are not extracted persons.
As a result, statements by participants who do not match the search condition are presented together with their identification information. It therefore becomes easy to tell, from a noteworthy statement made by such a participant, who made it, and that participant is prevented from slipping out of view. In other words, the person in charge of personnel can be made to notice participants other than the participant (selected person) on whom they were initially focused.
Furthermore, as described in the third example of the preceding/following statement display screen, the preceding/following statement presentation process (step S207) presents identification information only for the extracted persons among the participants who made the statements.
As a result, statements by participants who match the search condition are presented together with their identification information, so noteworthy statements among those participants can easily be distinguished. In addition, any participant included among the extracted persons can be brought to attention, even if they are not the participant (selected person) on whom the person in charge of personnel was initially focused.
In addition, as described in the fourth example of the preceding/following statement display screen, the preceding/following statement presentation process (step S207) presents identification information only for those participants, among the participants who made the statements, who made statements both before and after the selected statement.
As a result, the identification information of participants presumed to be engaged in an exchange important for evaluating the selected person is presented together with their statements, making the selected person easier to evaluate. It also draws attention to participants other than the one initially focused on by the person in charge of personnel (that is, participants presumed to be engaged in an important exchange).
Finally, as described in the fifth example of the preceding/following statement display screen, the preceding/following statement presentation process (step S207) presents identification information only for those participants, among the participants who made the statements, who made a statement between the selected statement and the statement the selected person made immediately before or immediately after it.
Here too, the identification information of participants presumed to be engaged in an exchange important for evaluating the selected person is presented together with their statements, making the selected person easier to evaluate and drawing attention to participants other than the one initially focused on.
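The first through fifth examples differ only in which speakers have their identification information displayed. As a purely illustrative sketch (the mode names and their encoding as a single policy function are assumptions, not the patented implementation), the selection rules might be written as:

```python
def show_identification(speaker, extracted, spoke_before, spoke_after, mode):
    """Decide whether to display a speaker's name for one statement.

    Illustrative modes:
      "all"           - first example: label every speaker
      "non_extracted" - second example: only speakers outside the search hits
      "extracted"     - third example: only speakers among the search hits
      "both_sides"    - fourth example: only speakers who spoke both before
                        and after the selected statement
    """
    if mode == "all":
        return True
    if mode == "non_extracted":
        return speaker not in extracted
    if mode == "extracted":
        return speaker in extracted
    if mode == "both_sides":
        return speaker in spoke_before and speaker in spoke_after
    return False
```

The fifth example would need the positions of the selected person's neighboring statements as an extra input, but follows the same pattern of a per-speaker membership test.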
Also, as described in the first alternative example of the preceding/following statement extraction process, the preceding/following statement extraction process (step S206) extracts only statements of a predetermined number of characters or more.
As a result, statements with few characters, such as simple back-channel replies, are not extracted, making the overall flow of the conversation easier to grasp.
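Screening out short statements can be sketched as a simple length threshold. The threshold value below is an arbitrary example, since the text specifies only "a predetermined number of characters":

```python
def filter_short(statements, min_chars=10):
    # Keep only statements long enough to carry substantive content;
    # min_chars stands in for the unspecified "predetermined number".
    return [s for s in statements if len(s) >= min_chars]

kept = filter_short(["OK", "I see", "We should revisit the budget first"])
# kept contains only the last, substantive statement
```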
Furthermore, as described in each of the above examples, the preceding/following statement presentation process (step S207) presents all of the participants present when the selected statement was made.
This presents information for grasping the situation at the time the selected statement was made, making the selected person easier to evaluate.
Furthermore, as described in the second alternative example of the preceding/following statement extraction process, the server includes a conclusion identification unit 1g that identifies a statement indicating the conclusion of a discussion as a summary statement, and the preceding/following statement presentation process (step S207) presents, together with the selected statement, the summary statement made after the selected statement.
Since the selected statement and the conclusion of the discussion are presented together, the selected person becomes easier to evaluate.
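How the conclusion identification unit 1g recognizes a summary statement is not detailed in this section. One simple, purely hypothetical heuristic is to match closing phrases in statements made after the selected one; the marker phrases below are invented for illustration:

```python
SUMMARY_MARKERS = ("in conclusion", "to summarize", "we have decided")  # illustrative

def find_summary(log, after_index):
    # Return the first statement after `after_index` that looks like
    # a conclusion of the discussion, or None if there is none.
    for speaker, text in log[after_index + 1:]:
        if any(marker in text.lower() for marker in SUMMARY_MARKERS):
            return (speaker, text)
    return None
```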
In addition, as described in the modification, the statement history presentation process (step S205) presents the updated statement history each time the statement history is updated by a new statement of the selected person.
Since the selected person's statement history is updated without entering the conference room, new statements can be confirmed easily and the selected person can be evaluated appropriately.
Further, the preceding/following statement extraction process may extract a predetermined number of statements as the statements before or after the selected statement.
This simplifies the process, since the content of the statements need not be taken into account.
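Count-based extraction needs no analysis of statement content at all; a minimal sketch, with arbitrary window sizes:

```python
def fixed_window(log, index, n_before=3, n_after=3):
    # Slice the log around the selected statement; no text analysis needed.
    before = log[max(0, index - n_before):index]
    after = log[index + 1:index + 1 + n_after]
    return before, after

before, after = fixed_window(list(range(10)), 5)
# before is [2, 3, 4], after is [6, 7, 8]
```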
<9. Program and Storage Medium>

The virtual conference room server 1 of the present invention has been described above. The program of the embodiment is a program that causes an arithmetic processing device (such as a CPU) to execute the processing of the virtual conference room server 1.
The program of the embodiment causes the arithmetic processing device to execute a procedure for receiving input of a search condition for searching for participants.
It also causes the arithmetic processing device to execute a procedure for searching for participants based on the search condition and presenting each extracted person, extracted as a search result, in a selectable manner.
It further causes the arithmetic processing device to execute a procedure for extracting the statement history of a selected person chosen from the extracted persons.
It further causes the arithmetic processing device to execute a procedure for presenting each statement of the extracted statement history in a selectable manner.
In addition, it causes the arithmetic processing device to execute a procedure for extracting, as preceding and following statements, some of the statements before or after the selected statement chosen from the selectably presented statements.
Finally, it causes the arithmetic processing device to execute a procedure for presenting the preceding and following statements together with the selected statement.
That is, this program causes the arithmetic processing device to execute the processes of steps S201 to S207 in FIG. 4.
Such a program realizes the virtual conference room server 1 described above.
Such a program can be stored in advance in an HDD serving as a storage medium built into a device such as a computer, or in a ROM in a microcomputer having a CPU. Alternatively, it can be stored temporarily or permanently in a removable storage medium such as a semiconductor memory, memory card, optical disk, magneto-optical disk, or magnetic disk. Such a removable storage medium can also be provided as so-called package software.
Such a program can be installed from the removable storage medium onto a personal computer or the like, or downloaded from a download site via a network such as a LAN or the Internet.
1 virtual conference room server, 1a condition receiving unit, 1b participant search processing unit, 1c statement history extraction unit, 1d statement history presentation unit, 1e preceding/following statement extraction unit, 1f preceding/following statement presentation unit, 1g conclusion identification unit, 2 communication network, 3 participant terminal, 4 administrator terminal, 50 conference room DB, 51 participant DB, 52 log DB

Claims (13)

  1.  An information processing device comprising:
      a condition receiving unit that executes a condition input receiving process for receiving input of a search condition for searching for participants;
      a participant search processing unit that searches for participants based on the search condition and presents each extracted person, extracted as a search result, in a selectable manner;
      a statement history extraction unit that executes a statement history extraction process for extracting the statement history of a selected person chosen from the extracted persons presented by the participant search processing unit;
      a statement history presentation unit that executes a statement history presentation process for presenting each statement of the statement history extracted in the statement history extraction process in a selectable manner;
      a preceding/following statement extraction unit that executes a preceding/following statement extraction process for extracting some of the statements before or after the selected statement chosen from the selectably presented statements; and
      a preceding/following statement presentation unit that executes a preceding/following statement presentation process for presenting, together with the selected statement, the statements extracted in the preceding/following statement extraction process.
  2.  前記前後発言提示処理では、前記抽出した発言の提示と共に、当該発言を行った参加者のうち少なくとも一部の参加者の識別情報を提示する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein, in the front-rear message presentation processing, the identification information of at least some of the participants who have made the speech is presented together with the presentation of the extracted speech.
  3.  前記前後発言提示処理では、当該発言を行った参加者のうち、前記抽出人物以外の参加者のみ識別情報を提示する
     請求項2に記載の情報処理装置。
    The information processing apparatus according to claim 2, wherein in the front-rear message presentation process, identification information is presented only to participants other than the extracted person among the participants who have performed the message.
  4.  前記前後発言提示処理では、当該発言を行った参加者のうち、前記抽出人物のみ識別情報を提示する
     請求項2に記載の情報処理装置。
    The information processing apparatus according to claim 2, wherein in the front / rear speech presentation processing, only the extracted person is presented with identification information among participants who have made the speech.
  5.  前記前後発言提示処理では、当該発言を行った参加者のうち、前記選択発言の前と後の双方に発言を行った参加者のみ識別情報を提示する
     請求項3または請求項4に記載の情報処理装置。
    5. The information according to claim 3, wherein, in the before-and-after speech presentation processing, the identification information is presented only to the participant who made the speech before and after the selected speech among the participants who made the speech. Processing equipment.
  6.  前記前後発言提示処理では、当該発言を行った参加者のうち、前記選択発言と前記選択人物が前記選択発言の一つ前または一つ後に行ったいずれかの発言との間に発言を行った参加者のみ識別情報を提示する
     請求項3または請求項4に記載の情報処理装置。
    In the before-and-after speech presenting process, among the participants who made the speech, the selected speech and the selected person made a speech between any of the speeches made before or after the selected speech. The information processing apparatus according to claim 3, wherein only the participant presents identification information.
  7.  前記前後発言抽出処理では、所定の文字数以上の発言のみを抽出する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein in the front-rear message extraction process, only messages having a predetermined number of characters or more are extracted.
  8.  前記前後発言提示処理では、前記選択発言が行われたときの参加者全てを提示する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein in the front / rear speech presentation process, all participants when the selected speech is performed are presented.
  9.  The information processing device according to claim 1, further comprising a conclusion identification unit that identifies a statement indicating the conclusion of a discussion as a summary statement,
      wherein the preceding/following statement presentation process presents, together with the selected statement, the summary statement made after the selected statement.
  10.  The information processing device according to claim 1, wherein the statement history presentation process presents the updated statement history each time the statement history is updated by a new statement of the selected person.
  11.  An information processing method for causing an information processing device to execute:
      a condition input receiving processing step of receiving input of a search condition for searching for participants;
      a participant extraction processing step of searching for participants based on the search condition and presenting each extracted person, extracted as a search result, in a selectable manner;
      a statement history extraction processing step of extracting the statement history of a selected person chosen from the extracted persons presented in the participant extraction processing step;
      a statement history presentation processing step of presenting each statement of the statement history extracted in the statement history extraction processing step in a selectable manner;
      a preceding/following statement extraction processing step of extracting some of the statements before or after the selected statement chosen from the selectably presented statements; and
      a preceding/following statement presentation processing step of presenting, together with the selected statement, the statements extracted in the preceding/following statement extraction processing step.
  12.  A program for causing an arithmetic processing device to execute:
      a procedure for receiving input of a search condition for searching for participants;
      a procedure for searching for participants based on the search condition and presenting each extracted person, extracted as a search result, in a selectable manner;
      a procedure for extracting the statement history of a selected person chosen from the extracted persons;
      a procedure for presenting each statement of the extracted statement history in a selectable manner;
      a procedure for extracting, as preceding and following statements, some of the statements before or after the selected statement chosen from the selectably presented statements; and
      a procedure for presenting the preceding and following statements together with the selected statement.
  13.  A storage medium storing a program for causing an arithmetic processing device to execute:
      a procedure for receiving input of a search condition for searching for participants;
      a procedure for searching for participants based on the search condition and presenting each extracted person, extracted as a search result, in a selectable manner;
      a procedure for extracting the statement history of a selected person chosen from the extracted persons;
      a procedure for presenting each statement of the extracted statement history in a selectable manner;
      a procedure for extracting, as preceding and following statements, some of the statements before or after the selected statement chosen from the selectably presented statements; and
      a procedure for presenting the preceding and following statements together with the selected statement.
PCT/JP2015/065217 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium WO2016189685A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2016565361A JP6186519B2 (en) 2015-05-27 2015-05-27 Information processing apparatus, information processing method, program, and storage medium
PCT/JP2015/065217 WO2016189685A1 (en) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/065217 WO2016189685A1 (en) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium

Publications (1)

Publication Number Publication Date
WO2016189685A1 true WO2016189685A1 (en) 2016-12-01

Family

ID=57394017

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/065217 WO2016189685A1 (en) 2015-05-27 2015-05-27 Information processing device, information processing method, program, and storage medium

Country Status (2)

Country Link
JP (1) JP6186519B2 (en)
WO (1) WO2016189685A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11242545A (en) * 1998-02-24 1999-09-07 Sharp Corp Real-time chat system
JPH11249990A (en) * 1998-02-27 1999-09-17 Fujitsu Ltd Utterance history management system in chat system
JP2001195428A (en) * 1999-11-02 2001-07-19 Atr Media Integration & Communications Res Lab Device for retrieving associative information
WO2003046764A1 (en) * 2001-11-26 2003-06-05 Fujitsu Limited Information analysis method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007323215A (en) * 2006-05-31 2007-12-13 Fuji Xerox Co Ltd Conference information processor, conference information processing method and computer program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KATASHI NAGAO ET AL.: "Discussion Mining: Knowledge Discovery from Discussions in Face- to-Face Meetings", IEICE TECHNICAL REPORT, THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. 112, no. 339, 1 December 2012 (2012-12-01), pages 59 - 64, XP055332015 *

Also Published As

Publication number Publication date
JP6186519B2 (en) 2017-08-23
JPWO2016189685A1 (en) 2017-06-08


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2016565361

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15893319

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15893319

Country of ref document: EP

Kind code of ref document: A1