WO2024075302A1 - Information processing device, information processing method, and program


Info

Publication number: WO2024075302A1
Authority: WIPO (PCT)
Application number: PCT/JP2022/037722
Prior art keywords: search, knowledge, character string, operator, information
Other languages: French (fr), Japanese (ja)
Inventors: 歩相名 神山, 健一 町田, 一比良 松井
Original assignee: NTT TechnoCross Corporation (Nttテクノクロス株式会社)
Application filed by NTT TechnoCross Corporation; priority to PCT/JP2022/037722
Publication of WO2024075302A1

  • This disclosure relates to an information processing device, an information processing method, and a program.
  • Technology that supports operators in answering telephone calls at contact centers (also called call centers) has been known for some time (see, for example, Non-Patent Document 1).
  • At contact centers, a UI (user interface) called a response support screen is generally provided to the operator, and the operator can use various functions on this response support screen.
  • Functions available to operators on the above-mentioned response support screen include the ability for the operator to search for knowledge such as FAQs (Frequently Asked Questions). However, manually entering search keywords during a call takes time, and appropriate knowledge may not be obtained unless appropriate keywords are used.
  • This disclosure has been made in light of the above, and aims to provide technology that allows appropriate knowledge to be obtained in a short period of time.
  • An information processing device has an extraction unit configured to, in response to a selection operation on a first display component that represents text in units of utterances in a conversation between multiple people, extract a predetermined first character string from the text represented by the first display component that is the target of the selection operation, and a search unit configured to search for knowledge representing information for supporting response to inquiries in the conversation, using the extracted first character string as a search key.
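  • The extraction unit and search unit described above can be sketched as follows. This is an illustrative model only: the function names and the in-memory keyword and knowledge tables are assumptions for explanation, not part of the disclosure.

```python
# Illustrative sketch only: the names and the in-memory tables below
# are assumptions, not part of the disclosure.

# Keyword information: keyword -> related keywords (ordered by relevance).
KEYWORD_INFO = {
    "cashing": ["contract", "bank"],
}

# Knowledge storage: keyword -> knowledge entries (e.g., FAQ answers).
KNOWLEDGE = {
    "cashing": ["FAQ: How to apply for the cashing service"],
    "contract": ["FAQ: How to review your contract details"],
}

def extract_first_strings(utterance_text: str) -> list[str]:
    """Extraction unit: extract predetermined character strings (keywords)
    from the text represented by the selected display component."""
    return [kw for kw in KEYWORD_INFO if kw in utterance_text]

def search_knowledge(search_key: str) -> list[str]:
    """Search unit: search for knowledge using the extracted string as a key."""
    return KNOWLEDGE.get(search_key, [])

# A selection operation on an utterance component triggers extraction,
# and an extracted string is then used as a search key.
keys = extract_first_strings("I'd like to confirm something about that cashing service")
results = search_knowledge(keys[0])
```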
  • FIG. 1 is a diagram illustrating an example of an overall configuration of a contact center system according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example of a functional configuration of the response support system and an operator terminal according to the present embodiment.
  • FIG. 3 is a sequence diagram showing an example of a response support process according to the present embodiment.
  • FIG. 4 is a diagram showing an example of a response support screen (part 1).
  • FIG. 5 is a diagram showing an example of a response support screen (part 2).
  • FIG. 6 is a diagram showing an example of a response support screen (part 3).
  • FIG. 7 is a diagram for explaining an example of a relationship between a position where a selection operation is performed on an utterance part and an extracted keyword.
  • a contact center system 1 will be described that is targeted at a contact center and enables contact center operators to obtain appropriate knowledge in a short time.
  • the contact center is just one example and is not limited to this.
  • the system can also be applied to other locations besides contact centers, such as offices and stores, where sales representatives or front desk staff of products or services obtain knowledge in the course of their work. More generally, the system can also be applied to cases where a person (which may be one person or multiple people) obtains knowledge during some kind of conversation between multiple people.
  • In this embodiment, contact center operators handle customer inquiries and other tasks via voice calls, but the embodiment is not limited to this; the same approach can be applied to tasks performed via text chat (including chat that can send and receive stamps and attachments in addition to text), video calls, etc.
  • knowledge refers to information necessary to answer an inquiry, or information that supports the answer.
  • information obtained by searching FAQs for example, falls under knowledge.
  • Other examples of knowledge include information from manuals and web pages.
  • the contact center system 1 includes a response support system 10, a plurality of operator terminals 20, one or more supervisor terminals 30, a plurality of telephones 40, a PBX (Private Branch eXchange) 50, a NW switch 60, and a customer terminal 70.
  • the response support system 10, the operator terminals 20, the supervisor terminals 30, the telephones 40, the PBX 50, and the NW switch 60 are installed in a contact center environment E, which is a system environment of a contact center.
  • the contact center environment E is not limited to a system environment in the same building, and may be, for example, a system environment in a plurality of buildings that are geographically separated from each other.
  • the response support system 10 is a server or a group of servers that provides various functions (support functions) to support the operator in answering calls. There are various support functions provided by the response support system 10, but in this embodiment, at least the support functions shown in (1) and (2) below are provided.
  • (1) Spoken text provision function This function uses packets (voice packets) sent from the NW switch 60 to provide UI components that represent spoken text, that is, text obtained by converting a voice call between an operator and a customer using voice recognition technology, with time information for each speaker. This function enables the operator to check in real time, during a call, the content of what the customer and the operator himself or herself are saying.
  • UI components are icons, buttons, figures, characters, etc. that are displayed on the screen. UI components may or may not be operable by the user.
  • UI components that represent spoken text are also referred to as "speech components.”
  • (2) Knowledge search and provision function This function searches for knowledge such as FAQs and provides the knowledge as a search result. This function allows an operator to check, for example, the information necessary to answer an inquiry.
  • When a selection operation is performed on an utterance part, the response support system 10 extracts a keyword and keywords related to that keyword (also called related keywords) from the utterance text represented by that utterance part, and provides UI parts for searching knowledge using the extracted keywords.
  • an operator can search for knowledge using keywords that correspond to a desired search component simply by selecting that search component. This makes it possible to search for knowledge in a short amount of time compared to, for example, manually copying keywords from spoken text and entering them in a search area. Furthermore, because not only keywords but also related keywords are extracted, an operator can obtain appropriate knowledge by selecting a search component that corresponds to the appropriate keyword.
  • The response support system 10 may be able to provide various support functions other than those described above in (1) and (2).
  • For example, the response support system 10 may be able to provide all or some of the support functions described below in (3) to (14).
  • (3) Important matters provision function This function provides the matters that the operator needs to explain to the customer or confirm (such matters are also called "important matters"). This function enables the operator to know what matters need to be explained to the customer and what matters need to be confirmed.
  • (4) Message function This function provides the ability to send and receive messages (so-called chat) between the operator and a supervisor, etc. This function enables the operator, for example, to make inquiries of the supervisor or ask for assistance.
  • a supervisor is, for example, a person who monitors the operator's calls and supports the operator's telephone response duties when a problem appears to be occurring or at the request of the operator.
  • (5) Support request function This function is for requesting support from a supervisor, etc. This function enables an operator to request support from a supervisor, etc., for example, when a problem occurs or when a problem is likely to occur.
  • (6) Customer information provision function This function provides information about the customer that the operator is currently serving (for example, attribute information such as name, age, sex, and address, product and service purchase history, etc.). This function allows the operator to know various information about the customer that he or she is currently serving.
  • (7) Memo function This function allows the operator to create memos of their choice. This function allows the operator to leave any content they wish as a memo (interaction record memo), for example.
  • (8) Previous call summary provision function This function provides a summary of past calls of the customer that the operator is currently serving. This function allows the operator to check the summary text that summarizes the past calls of the customer.
  • (9) Call summary function This function uses the text spoken during a call to create a summary text that summarizes the contents of the call. With this function, the operator can, for example, know the summary of the call after it has ended, or, if the same customer calls again, check the summary of the previous call using the previous call summary provision function.
  • (10) Call analysis function This function analyzes the spoken text during a call to identify the call reason and the scene of the call, and whether the call is inbound or outbound. This function allows the operator to know the call reason and scene of the customer he or she is currently serving. In addition, for example, the important matters provision function can then provide important matters according to the call reason, scene, and inbound/outbound. Note that examples of scenes include the initial greeting (opening), confirmation of the inquiry, customer identity verification, response, and final greeting (closing).
  • (11) Call history storage function This function stores, for example, information about the operator who answered the call, information about the customer of the call, the voice data of the call, the spoken text of the call, summary text, etc. as call history information. This function makes it possible to utilize the call history information for, for example, analyzing the quality of service and evaluating operators.
  • (12) Call history search function This function searches for call history information stored by the call history storage function and provides the call history information as the search results to operators, supervisors, etc. This function enables operators, supervisors, etc. to search for call history information using desired search conditions and refer to the call history information obtained as the search results.
  • (13) Call monitoring function This function provides information for supervisors to monitor operator calls (for example, the spoken text of the call and inappropriate words contained in the spoken text). This function enables supervisors to efficiently monitor the calls of the operators they are monitoring. Generally, one supervisor monitors the calls of several to a dozen operators.
  • (14) Analysis function This function uses the call history information stored by the call history storage function to perform various analyses (e.g., analysis of response quality). This function enables those who wish to improve response quality (e.g., response quality analysts and supervisors) to analyze response quality and consider various measures and strategies to improve it.
  • the support functions shown in (3) to (14) above are all examples.
  • the response support system 10 may be able to provide various support functions, for example, as described in the above-mentioned non-patent document 1.
  • the operator terminal 20 is a terminal such as a PC (personal computer) used by the operator.
  • the supervisor terminal 30 is a terminal such as a PC used by the supervisor.
  • Telephone 40 is an IP (Internet Protocol) telephone (such as a fixed IP telephone or a mobile IP telephone) used by an operator.
  • PBX 50 is a telephone exchange (IP-PBX) and is connected to a communications network 80 that includes a VoIP (Voice over Internet Protocol) network and a PSTN (Public Switched Telephone Network).
  • the NW switch 60 relays packets (voice packets) between the telephone 40 and the PBX 50, and also captures the packets and transmits them to the response support system 10.
  • The customer terminals 70 are various terminals used by customers, such as smartphones, mobile phones, and landline telephones.
  • the overall configuration of the contact center system 1 shown in FIG. 1 is an example, and is not limited to this, and other configurations may be used.
  • the response support system 10 is included in the contact center environment E (i.e., the response support system 10 is an on-premise type), but all or part of the functions of the response support system 10 may be realized by a cloud service, etc.
  • the PBX 50 is an on-premise type telephone exchange, but may be realized by a cloud service.
  • For example, the contact center system 1 does not necessarily need to include the telephones 40.
  • FIG. 2 shows an example of the functional configuration of the response support system 10 and the operator terminal 20 according to this embodiment.
  • The response support system 10 includes a voice recognition unit 101 and a response support unit 102. These functional units are realized, for example, by a process in which one or more programs installed in the response support system 10 are executed by a processor such as a CPU (Central Processing Unit).
  • The response support system 10 also includes a spoken text storage unit 103, a keyword storage unit 104, and a knowledge storage unit 105.
  • These storage units are realized, for example, by a storage device such as a hard disk drive (HDD), a solid state drive (SSD), or a flash memory. However, at least one of these storage units may be realized, for example, by a storage device such as a database server connected to the response support system 10 via a communication network.
  • the speech recognition unit 101 performs speech recognition on the voice represented by the voice data contained in the voice packet received from the NW switch 60, and creates spoken text with time information for each speaker.
  • the speech recognition unit 101 also stores these spoken texts in the spoken text storage unit 103 for each call. Note that, using existing speech recognition technology, it is possible to create spoken text with time information for each speaker from a voice that includes speech from multiple people.
  • the response support unit 102 provides the operator terminal 20 with a response support screen and UI parts displayed on the screen, as well as various support functions (i.e., various support functions including at least a spoken text provision function and a knowledge search and provision function).
  • the response support screen and the UI parts displayed on the screen are represented by information such as HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), JavaScript, etc.
  • When a selection operation is performed on a speech part, the response support unit 102 refers to the keyword information stored in the keyword storage unit 104 and extracts a keyword and related keywords from the speech text represented by that speech part. The response support unit 102 then displays search parts on the response support screen for searching knowledge using the extracted keywords.
  • the spoken text storage unit 103 stores, for each call, the spoken text with time information for each speaker in that call. In other words, for each call, the spoken text storage unit 103 stores, in chronological order, the spoken text for each speaker in that call.
  • Keyword information is information in which a keyword is associated with a keyword related to it.
  • keyword information is expressed in the format of (keyword, related keyword set).
  • the related keyword set is a set of related keywords.
  • the related keyword set may be an ordered set in which the related keywords are ordered in order of their relevance to the keyword, for example. Note that if there are no related keywords for a certain keyword, the related keyword set is an empty set.
  • For example, keyword information is (cashing, {contract, bank}). This keyword information indicates that the keyword is "cashing," and that the keywords related to "cashing" are "contract" and "bank."
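  • The (keyword, related keyword set) format above can be represented, for example, as follows. This is a minimal sketch; the dictionary layout and the extra "repayment" entry are assumptions for illustration.

```python
# Keyword information in (keyword, related keyword set) form. The related
# keywords are kept as a list ordered by relevance to the keyword; an
# empty list means the keyword has no related keywords.
keyword_info = {
    "cashing": ["contract", "bank"],
    "repayment": [],  # hypothetical keyword with no related keywords
}

related = keyword_info["cashing"]
```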
  • the knowledge storage unit 105 stores one or more pieces of knowledge information.
  • Knowledge information is information in which a keyword and knowledge related to that keyword are associated with each other.
  • knowledge information is expressed in the format of (keyword, knowledge). This makes it possible to search for knowledge related to a keyword using the keyword as a search key.
  • the above format of knowledge information is only an example, and knowledge information may include various information other than keywords and knowledge, such as the number of times the knowledge has been referenced and evaluation information that evaluates the knowledge based on some criteria.
  • The evaluation information may include, for example, the average value of operators' evaluations of the knowledge using a graded numerical value (for example, the average of a five-point evaluation), or the total number of times operators have evaluated the knowledge as appropriate, for example with a "Like" or "GOOD".
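  • Knowledge information extended with a reference count and evaluation information might look like the following sketch; the field names and values are assumptions, not part of the disclosure.

```python
# Hypothetical knowledge information records: each pairs a keyword with
# knowledge, plus optional metadata (reference count, evaluation info).
knowledge_info = [
    {
        "keyword": "cashing",
        "knowledge": "FAQ: Terms of the cashing service",
        "ref_count": 12,    # number of times this knowledge was referenced
        "avg_rating": 4.2,  # average of a graded (five-point) evaluation
        "likes": 8,         # total "Like"/"GOOD" evaluations
    },
]

def lookup(keyword: str) -> list[dict]:
    """Search knowledge information using a keyword as the search key."""
    return [e for e in knowledge_info if e["keyword"] == keyword]
```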
  • the response support system 10 also has various storage units and other functional units that are used to realize various support functions.
  • it has an operator information storage unit in which operator information including each operator's operator ID and attributes (name, affiliated organization, etc.) is stored, a call history information storage unit in which each call history information is stored, etc.
  • the operator terminal 20 has a UI control unit 201.
  • the UI control unit 201 is realized, for example, by a process in which one or more programs (e.g., a web browser, etc.) installed in the operator terminal 20 are executed by a processor such as a CPU.
  • the UI control unit 201 displays the response support screen provided by the response support system 10 on the display, and dynamically (in real time) displays each speech part received from the response support system 10 on the response support screen. Also, when the UI control unit 201 receives knowledge from the response support system 10, it displays the knowledge on the response support screen. Furthermore, the UI control unit 201 accepts various input operations by the operator on the response support screen (for example, pressing a button, inputting characters, etc.).
  • the voice recognition unit 101 of the response support system 10 performs voice recognition on the voice represented by the voice data contained in the voice packet received from the NW switch 60, and creates a spoken text with time information for each speaker (step S101).
  • the voice recognition unit 101 of the response support system 10 stores (memorizes) the speech text created in step S101 above in the speech text storage unit 103 (step S102).
  • the response support unit 102 of the response support system 10 transmits the speech component representing the speech text created in step S101 above to the operator terminal 20 (step S103).
  • When the UI control unit 201 of the operator terminal 20 receives the speech parts from the response support system 10, it displays the speech parts on the response support screen (step S104).
  • The response support screen 1000 shown in FIG. 4 includes a speech content display field 1100.
  • In the speech content display field 1100, speech parts received from the response support system 10 are displayed in chronological order for each speaker.
  • speech parts 1111-1113 representing the operator's utterance text and speech parts 1121-1123 representing the customer's utterance text are displayed in chronological order. This allows the operator to check his or her own utterance content and the customer's utterance content in chronological order.
  • each speech part displayed in the speech content display field 1100 may be associated with a UI part (hereinafter also referred to as a speech evaluation part) for evaluating that the speech text represented by that speech part is useful for narrowing down knowledge during knowledge search.
  • This speech evaluation part may be called, for example, a "Like” button or a "GOOD” button related to the speech text.
  • speech parts 1111-1113 are associated with speech evaluation parts 1131-1133, respectively, and similarly, speech parts 1121-1123 are associated with speech evaluation parts 1141-1143.
  • the operator can perform a selection operation on the speech evaluation parts displayed in the speech content display field 1100 on the response support screen 1000 shown in FIG. 4.
  • the operator can use an input device such as a pointing device, touch panel, or keyboard to perform a selection operation to select a desired utterance evaluation component from among utterance evaluation components 1131 to 1133 and 1141 to 1143.
  • FIG. 4 shows a case where a selection operation is performed on utterance evaluation component 1141.
  • In this embodiment, the spoken text is evaluated as to whether it is useful for narrowing down knowledge when searching for knowledge, but the evaluation is not limited to this; for example, the spoken text may be evaluated using a graded numerical value, etc.
  • The operator can perform a selection operation on a speech part displayed in the speech content display field 1100 on the response support screen 1000 shown in FIG. 4.
  • the operator can use an input device such as a pointing device, touch panel, or keyboard to perform a selection operation to select a desired speech part from among the speech parts 1111-1113 and 1121-1123 displayed in the speech content display field 1100.
  • The explanation will be continued below assuming that a selection operation has been performed on a certain speech part displayed in the speech content display field 1100 on the response support screen 1000 shown in FIG. 4.
  • the speech text represented by the speech part may not contain any keyword, or may contain multiple keywords. If the speech text does not contain a keyword, no keyword is extracted in step S106 described below. On the other hand, if the speech text contains multiple keywords, in step S106 described below, for example, all of the multiple keywords may be extracted, or each time a selection operation is performed on the speech part, the keywords may be extracted in order starting from the keyword that first appears in the speech text.
  • The UI control unit 201 of the operator terminal 20 transmits operation information indicating that the speech part has been selected to the response support system 10 (step S105).
  • the operation information includes, for example, identification information (e.g., a part ID, etc.) of the speech part for which the selection operation has been performed.
  • Note that the UI control unit 201 of the operator terminal 20 may transmit, for example, the speech text represented by the speech part to the response support system 10 instead of the operation information, or may transmit identification information (e.g., a speech text ID, etc.) of the speech text represented by the speech part.
  • When the response support unit 102 of the response support system 10 receives operation information from the operator terminal 20, it extracts keywords (including related keywords) from the spoken text represented by the speech component of the component ID included in the operation information (step S106). That is, the response support unit 102 refers to the keyword information stored in the keyword storage unit 104 to extract keywords from the spoken text, and also extracts keywords related to those keywords. Note that a collection of a keyword and its related keywords may be called a "preset" or a "keyword preset".
  • For example, suppose that keyword information (cashing, {contract, bank}) and the like is stored in the keyword storage unit 104, and a selection operation is performed on the speech component 1122 displayed in the speech content display field 1100 on the response support screen 1000 shown in FIG. 4.
  • In this case, the response support unit 102 extracts the keyword "cashing" from the speech text "I'd like to confirm something about that cashing service" represented by the speech component 1122, and also extracts the keywords "contract" and "bank" related to the keyword "cashing".
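  • Step S106 above (extracting a keyword and its related keywords as a "keyword preset") could be sketched as follows. The function name and data layout are illustrative assumptions; in the system, the keyword table would come from the keyword storage unit 104.

```python
# Hypothetical keyword table standing in for the keyword storage unit 104.
keyword_info = {"cashing": ["contract", "bank"]}

def extract_keyword_preset(spoken_text: str) -> list[str]:
    """Return each keyword found in the spoken text followed by its
    related keywords ordered by relevance (a 'keyword preset')."""
    preset: list[str] = []
    for keyword, related in keyword_info.items():
        if keyword in spoken_text:
            preset.append(keyword)
            preset.extend(related)
    return preset

preset = extract_keyword_preset(
    "I'd like to confirm something about that cashing service")
```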
  • The response support unit 102 of the response support system 10 transmits search components for searching knowledge using the keywords extracted in step S106 above to the operator terminal 20 (step S107).
  • For example, a search component for searching knowledge for "cashing," a search component for searching knowledge for "contract," and a search component for searching knowledge for "bank" are transmitted to the operator terminal 20.
  • When the UI control unit 201 of the operator terminal 20 receives the search components from the response support system 10, it displays these search components on the response support screen (step S108). These search components are displayed, for example, in the search component display field 1200 of the response support screen 1000 shown in FIG. 4. At this time, for example, the search component corresponding to the keyword extracted from the spoken text is displayed on the leftmost side (the display position with the highest priority). In addition, the search components corresponding to the related keywords may be displayed, for example, from the left in order of relevance to the keyword, or from the left in the order in which the related keywords were extracted.
  • For example, in the example shown in FIG. 4, the search component 1210 corresponding to the keyword "cashing" is displayed on the leftmost side (the display position with the highest priority), followed by the search component 1220 corresponding to the related keyword "contract" (the display position with the second highest priority), and then the search component 1230 corresponding to the related keyword "bank" (the display position with the third highest priority). Note that it is considered that the higher the display priority of a search component, the more likely it is that appropriate knowledge will be obtained when searching for knowledge with the keyword corresponding to that search component.
  • the display mode (e.g., color of UI components) of the search component corresponding to a keyword and the search component corresponding to a related keyword may be different.
  • the example shown in FIG. 4 shows a case where the display mode of the search component 1210 corresponding to a keyword is different from that of the search components 1220 to 1230 corresponding to related keywords.
  • The operator can perform a selection operation on the search components displayed in the search component display field 1200 on the response support screen 1000 shown in FIG. 4.
  • the operator can use an input device such as a pointing device, touch panel, or keyboard to perform a selection operation to select a desired search component from among the search components 1210 to 1230 displayed in the search component display field 1200.
  • The explanation will be continued below assuming that a selection operation has been performed on a certain search component displayed in the search component display field 1200 on the response support screen 1000 shown in FIG. 4.
  • The UI control unit 201 of the operator terminal 20 transmits operation information indicating that the search component has been selected to the response support system 10 (step S109).
  • The operation information includes, for example, identification information (e.g., a component ID, etc.) of the search component on which the selection operation has been performed.
  • Note that the UI control unit 201 of the operator terminal 20 may transmit, for example, the keyword corresponding to the search component to the response support system 10 instead of the operation information, or may transmit identification information (e.g., a keyword ID, etc.) of the keyword corresponding to the search component.
  • When the response support unit 102 of the response support system 10 receives operation information from the operator terminal 20, it searches for knowledge using the keyword corresponding to the search component of the component ID included in the operation information (step S110). That is, the response support unit 102 searches the knowledge information stored in the knowledge storage unit 105 using the keyword as a search key, and obtains knowledge information related to the keyword as the search result.
  • For example, when a selection operation is performed on the search component 1210, the knowledge information stored in the knowledge storage unit 105 is searched using the keyword "cashing" corresponding to this search component 1210 as a search key. This allows knowledge information related to the keyword "cashing" to be retrieved. Note that a known method may be used as the knowledge search method.
  • When a selection operation has also been performed on an utterance evaluation part, a knowledge search is performed using the keywords contained in the speech text associated with that utterance evaluation part as an AND condition.
  • For example, when the utterance evaluation part 1141 is selected on the response support screen 1000 shown in FIG. 4, a knowledge search is performed using the keyword "ABC card" contained in the speech text associated with this utterance evaluation part 1141 as an AND condition. In this case, if the keyword corresponding to the search part selected by the operator is "cashing," knowledge information is searched for using "cashing" and "ABC card" as search keys.
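  • The AND-condition search described above, combining the keyword of the selected search component with a keyword from the evaluated utterance, can be sketched as follows; the entries and field names are illustrative assumptions.

```python
# Hypothetical knowledge entries tagged with the keywords they match.
knowledge_entries = [
    {"title": "FAQ: Cashing with the ABC card",
     "keywords": {"cashing", "ABC card"}},
    {"title": "FAQ: General cashing terms",
     "keywords": {"cashing"}},
]

def search_with_and(search_keys: set[str]) -> list[str]:
    """Return knowledge whose keyword set contains ALL search keys,
    i.e., the keys are combined as an AND condition."""
    return [e["title"] for e in knowledge_entries
            if search_keys <= e["keywords"]]

# Keyword from the selected search component, plus the keyword from the
# utterance that was evaluated via its utterance evaluation part.
results = search_with_and({"cashing", "ABC card"})
```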
  • The response support unit 102 of the response support system 10 transmits the knowledge contained in the knowledge information retrieved in step S110, together with its associated information, to the operator terminal 20 as a search result (step S111).
  • Examples of information associated with knowledge include the number of times the knowledge has been referenced, evaluation information for the knowledge, etc.
  • the UI control unit 201 of the operator terminal 20 When the UI control unit 201 of the operator terminal 20 receives the search results from the customer service support system 10, it displays the search results on the customer service support screen (step S112).
  • the search results are displayed, for example, in the search result display field 1300 of the customer service support screen 1000 shown in FIG. 4.
  • knowledge items 1301 to 1302 are displayed as search results. Note that by selecting the desired knowledge from the knowledge items displayed in the search result display field 1300, more detailed information regarding the selected knowledge item may be displayed.
  • each piece of knowledge obtained as a search result may be displayed in a display order according to information accompanying the knowledge.
  • each piece of knowledge may be displayed in order of the number of times it has been referenced, or in order of the best evaluation represented by the evaluation information (for example, in descending order of the total number of "Like" or "GOOD" votes, or in descending order of the average value of a graded numerical evaluation).
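The display orderings above can be sketched as ordinary descending sorts over the accompanying information. The field names ("ref_count", "ratings") and the sample values are assumptions for illustration only:

```python
# Sketch of ordering search results by their accompanying information.
results = [
    {"title": "FAQ A", "ref_count": 12, "ratings": [5, 4]},
    {"title": "FAQ B", "ref_count": 30, "ratings": [3]},
    {"title": "FAQ C", "ref_count": 7,  "ratings": [5, 5, 4]},
]

# Order by number of times the knowledge has been referenced (descending).
by_refs = sorted(results, key=lambda r: r["ref_count"], reverse=True)

# Order by average of a graded numerical evaluation (descending).
def avg(r):
    return sum(r["ratings"]) / len(r["ratings"]) if r["ratings"] else 0.0

by_rating = sorted(results, key=avg, reverse=True)
```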
  • the knowledge displayed in the search result display field 1300 may be associated with a UI component (hereinafter also referred to as a knowledge evaluation component) for evaluating whether the knowledge is useful for answering an inquiry.
  • This knowledge evaluation component may take the form of, for example, a "Like" button or a "GOOD" button associated with the knowledge.
  • knowledge items 1301 to 1302 are associated with knowledge evaluation components 1311 to 1312, respectively.
  • the operator can perform a selection operation on the knowledge evaluation components displayed in the search result display field 1300 on the response support screen 1000 shown in FIG. 4. For example, the operator can select a desired knowledge evaluation component from the knowledge evaluation components 1311 to 1312 using an input device such as a pointing device, touch panel, or keyboard.
  • operation information indicating that the knowledge evaluation component has been selected is sent to the response support system 10, and as a result, the evaluation information of the knowledge information stored in the knowledge storage unit 105 that includes the knowledge associated with that knowledge evaluation component is updated.
  • the operator can display, on the response support screen, search components for searching for knowledge using keywords contained in the spoken text or related keywords, simply by performing a selection operation on a speech component that represents the spoken text of the customer or of the operator himself/herself.
  • the operator can then search for knowledge using the keyword corresponding to the desired search component, simply by performing a selection operation on that search component on the response support screen.
  • search components corresponding to related keywords as well as to the extracted keywords themselves are displayed, and by selecting the search component corresponding to the most appropriate keyword from among them, the operator can obtain appropriate knowledge.
  • when responding to inquiries from customers, the operator can provide appropriate answers without making the customer wait, which can result, for example, in improved customer satisfaction.
  • the operator can also search for knowledge related to a desired keyword by entering the desired keyword in the search area 1400 on the response support screen 1000 shown in FIG. 4 and pressing the search button 1500.
  • knowledge is searched for when a selection operation is performed on a search component; however, for example, a knowledge search may instead be performed in advance for the keywords (including related keywords) extracted from the utterance components, and when the mouse pointer is placed over a search component, the search results for the keyword corresponding to that search component (or the top several pieces of knowledge included in those search results) may be displayed in a pop-up.
  • the search results for the keywords corresponding to that search component 1210 may be displayed on a pop-up screen 2100.
  • <<Modification 2>> Depending on the keyword extracted from the speech text represented by the speech part, appropriate knowledge may not be obtained. For example, when the extracted keyword is an abstract term or a term with multiple meanings, a large amount of knowledge may be returned as a search result, and the narrowing down of the knowledge may be insufficient. For this reason, when the keyword extracted from the speech text represented by the speech part is a specific keyword (for example, a keyword representing an abstract term or a term with multiple meanings), a suggestion (proposal) for further narrowing down the knowledge during the knowledge search may be presented to the operator.
  • a suggestion 3100 for further narrowing down knowledge is presented to the operator when searching for knowledge using the keyword "caching" corresponding to the search component 1210.
  • This suggestion 3100 displays the search results when searching for knowledge by further narrowing down knowledge related to the keyword "caching".
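One way to generate such narrowing suggestions is sketched below. The set of ambiguous terms and the table of co-occurring narrowing terms are hypothetical configuration data, not part of the embodiment; the embodiment only requires that narrowed queries be proposed for specific keywords.

```python
# Sketch of generating narrowing suggestions for an ambiguous keyword.
# AMBIGUOUS and CO_TERMS are assumed, hand-maintained configuration data.
AMBIGUOUS = {"caching"}
CO_TERMS = {"caching": ["repayment", "interest rate", "limit increase"]}

def narrowing_suggestions(keyword, other_keys):
    """If the keyword is ambiguous, propose AND-queries that narrow the search."""
    if keyword.lower() not in AMBIGUOUS:
        return []
    extra = CO_TERMS.get(keyword.lower(), []) + list(other_keys)
    return [[keyword, term] for term in extra]
```

Each proposed pair can then be run through the knowledge search as an AND condition, and the results (or result counts) shown in the suggestion area.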
  • the keyword located closest to the position at which the selection operation was performed on the speech part (the operation position) may be extracted, depending on that operation position.
  • a selection operation is performed on speech component 4100 shown in FIG. 7.
  • the speech text represented by speech component 4100 contains keyword 4101 representing "ABC service" and keyword 4102 representing "XX function".
  • keyword 4101 is extracted in step S106 above.
  • keyword 4102 is extracted in step S106 above.
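The position-dependent extraction illustrated in FIG. 7 can be sketched as follows. Modeling the operation position as a character offset into the utterance text is an assumption; the embodiment only requires some notion of proximity between the operation position and each keyword.

```python
# Sketch of extracting the keyword closest to the operation position,
# with positions modeled as character offsets in the utterance text.
def nearest_keyword(text, keywords, click_offset):
    """Return the keyword whose occurrence lies closest to click_offset."""
    best, best_dist = None, None
    for kw in keywords:
        start = text.find(kw)
        if start < 0:
            continue  # keyword not present in this utterance
        center = start + len(kw) / 2
        dist = abs(center - click_offset)
        if best_dist is None or dist < best_dist:
            best, best_dist = kw, dist
    return best
```

Clicking near the beginning of the utterance would thus extract the earlier keyword (keyword 4101 in FIG. 7), while clicking near the end would extract the later one (keyword 4102).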
  • in step S106 above, keywords and related keywords are extracted; however, for example, only the keywords may be extracted, without extracting the related keywords. When only the keywords are extracted, knowledge may be automatically searched for using those keywords and the search results presented to the operator. This allows the operator to search for knowledge using the keywords contained in the utterance text represented by an utterance part simply by performing a selection operation on that utterance part, making it possible to search for knowledge in an even shorter time.
  • the priority (display order) of those search components may be determined according to their relationship with the topic of the conversation between the customer and the operator. For example, after determining the topic of the conversation between the customer and the operator, the likelihood (e.g., probability) that the related keyword is the topic may be calculated, and the priority (display order) of the search components corresponding to the related keywords may be determined in descending order of likelihood. Note that the topic may be determined using a known topic determination technique.
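The topic-based prioritization above can be sketched as a descending sort over per-keyword likelihoods. The likelihood values here are placeholders; in practice they would come from a known topic-determination technique applied to the conversation.

```python
# Sketch of ordering search components for related keywords by how likely
# each keyword is to be the topic of the conversation.
topic_likelihood = {"cash advance": 0.7, "revolving payment": 0.2, "annual fee": 0.1}

def order_by_topic(related_keywords):
    """Sort related keywords in descending order of topic likelihood."""
    return sorted(related_keywords,
                  key=lambda k: topic_likelihood.get(k, 0.0),
                  reverse=True)
```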
  • the search components displayed on the response support screen may be deleted, for example, each time a selection operation is performed on an utterance component, or may be added each time a selection operation is performed on an utterance component. In addition, if a search component is added each time a selection operation is performed on an utterance component, the oldest search components may be deleted in order once the maximum number of search components that can be displayed on the response support screen is exceeded.
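The add-while-discarding-oldest behavior described above can be modeled with a bounded double-ended queue. The maximum of five components is an arbitrary choice for illustration:

```python
from collections import deque

# Sketch of capping the displayed search components: each selection appends
# new components, and the oldest are discarded automatically once the
# maximum that the screen can display (here 5) is exceeded.
MAX_COMPONENTS = 5
displayed = deque(maxlen=MAX_COMPONENTS)

for component in ["kw1", "kw2", "kw3", "kw4", "kw5", "kw6", "kw7"]:
    displayed.append(component)  # oldest entries fall off the left end
```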
  • Reference Signs List
1 Contact center system
10 Response support system
20 Operator terminal
30 Supervisor terminal
40 Telephone
50 PBX
60 NW switch
70 Customer terminal
80 Communication network
101 Voice recognition unit
102 Response support unit
103 Speech text storage unit
104 Keyword storage unit
105 Knowledge storage unit
201 UI control unit
E Contact center environment

Abstract

An information processing device according to an embodiment of the present disclosure has: an extraction unit configured so that in response to a selection operation performed on a first display component expressing text in utterance units from a conversation between a plurality of people, a prescribed first character string is extracted from the text expressed by the first display component on which the selection operation was performed; and a search unit configured so as to use the extracted first character string as a search key and search knowledge expressing information for assisting with an inquiry response in the conversation.

Description

Information processing device, information processing method, and program
 This disclosure relates to an information processing device, an information processing method, and a program.
 Technology that supports operators in answering telephone calls at contact centers (also called call centers) has been known for some time (for example, Non-Patent Document 1). In such technology, a UI (user interface) called a response support screen is generally provided to the operator, and the operator can use various functions on this response support screen.
 Functions available to operators on the above-mentioned response support screen include a function that allows the operator to search for knowledge such as FAQs (Frequently Asked Questions). There is also a function that automatically extracts keywords from the results of voice recognition during a call with a customer, searches for knowledge, and provides the operator with the knowledge related to the customer's inquiry from among the knowledge obtained as a search result.
 However, conventionally, operators have not been able to obtain appropriate knowledge in a short time. For example, when an operator searches for knowledge, the operator needs to copy keywords from the voice recognition results and enter them in a search area; while appropriate knowledge can be obtained, it takes time to obtain it. Also, for example, when keywords are automatically extracted from the voice recognition results to search for and provide knowledge, the operator can obtain knowledge in a short time, but the knowledge may not be appropriate.
 This disclosure has been made in light of the above, and aims to provide technology that allows appropriate knowledge to be obtained in a short period of time.
 An information processing device according to one aspect of the present disclosure has an extraction unit configured to extract a predetermined first character string from the text represented by the first display component that is the target of a selection operation, in response to a selection operation on a first display component that represents text in units of utterances in a conversation between multiple people, and a search unit configured to search for knowledge representing information for supporting responses to inquiries in the conversation, using the extracted first character string as a search key.
 Technology is thus provided that allows appropriate knowledge to be obtained in a short amount of time.
FIG. 1 is a diagram illustrating an example of the overall configuration of a contact center system according to the present embodiment.
FIG. 2 is a diagram illustrating an example of the functional configuration of the response support system and an operator terminal according to the present embodiment.
FIG. 3 is a sequence diagram showing an example of a response support process according to the present embodiment.
FIG. 4 is a diagram showing an example of a response support screen (part 1).
FIG. 5 is a diagram showing an example of a response support screen (part 2).
FIG. 6 is a diagram showing an example of a response support screen (part 3).
FIG. 7 is a diagram for explaining an example of the relationship between the position at which a selection operation is performed on an utterance part and the extracted keyword.
 Below, one embodiment of the present invention will be described. In the following embodiment, a contact center system 1 will be described that is targeted at a contact center and enables contact center operators to obtain appropriate knowledge in a short time. However, the contact center is just one example, and the system is not limited to this. Besides contact centers, the system can also be applied, for example, to offices and stores, where sales representatives or front desk staff for products or services obtain knowledge in the course of their work. More generally, the system can also be applied to cases where a person (or several people) obtains knowledge during some kind of conversation between multiple people.
 Furthermore, in the following description it is assumed that contact center operators handle customer inquiries and other tasks via voice calls, but the system is not limited to this and can be applied in the same way when tasks are performed via text chat (including chat that can send and receive stamps, attachments, and the like in addition to text), video calls, and so on.
 Here, knowledge refers to information necessary to answer an inquiry, or information that supports such an answer. Typically, information obtained by searching FAQs, for example, falls under knowledge. Other examples of knowledge include information in manuals and on web pages.
 In recent years, as online business has accelerated and contact center operations have increased, the variety of inquiries has also increased, and the answers given by operators cover a wide range. Furthermore, at contact centers, operators are required to answer inquiries in a short time. For this reason, by introducing the contact center system 1 according to this embodiment and allowing operators to obtain appropriate knowledge in a short time (that is, to obtain knowledge useful for answering customer inquiries in a short time), effects such as reducing the workload of operators and improving customer satisfaction can be expected.
<Example of Overall Configuration of Contact Center System 1>
An example of the overall configuration of a contact center system 1 according to the present embodiment is shown in Fig. 1. As shown in Fig. 1, the contact center system 1 according to the present embodiment includes a response support system 10, a plurality of operator terminals 20, one or more supervisor terminals 30, a plurality of telephones 40, a PBX (Private Branch eXchange) 50, a NW switch 60, and a customer terminal 70. Here, the response support system 10, the operator terminals 20, the supervisor terminals 30, the telephones 40, the PBX 50, and the NW switch 60 are installed in a contact center environment E, which is a system environment of a contact center. Note that the contact center environment E is not limited to a system environment in the same building, and may be, for example, a system environment in a plurality of buildings that are geographically separated from each other.
 The response support system 10 is a server or group of servers that provides various functions (support functions) to support operators in answering calls. There are various support functions provided by the response support system 10, but in this embodiment, at least the support functions shown in (1) and (2) below are provided.
 (1) Spoken text provision function: This function uses packets (voice packets) sent from the NW switch 60 to provide UI components that represent spoken text, that is, the voice call between an operator and a customer converted into text for each speaker, with time information, by voice recognition technology. This function enables the operator to check in real time what the customer and the operator themselves are saying during a call. Note that UI components are icons, buttons, figures, characters, and the like displayed on the screen. UI components may or may not be operable by the user. Below, UI components that represent spoken text are also referred to as "speech components."
 (2) Knowledge search and provision function: This function searches for knowledge such as FAQs and provides the knowledge as a search result. This function allows the operator to check, for example, the information necessary to answer an inquiry.
 When a speech component is selected by the operator, the response support system 10 extracts a keyword and keywords related to that keyword (also called related keywords) from the spoken text represented by that speech component, and provides UI components for searching for knowledge using the extracted keywords. Hereinafter, a UI component for searching for knowledge using a keyword is also called a "search component."
 As a result, the operator can search for knowledge using the keyword corresponding to a desired search component simply by selecting that search component. This makes it possible to search for knowledge in a shorter time than, for example, manually copying a keyword from the spoken text and entering it in a search area. Furthermore, because not only keywords but also related keywords are extracted, the operator can obtain appropriate knowledge by selecting the search component corresponding to the appropriate keyword.
 Note that the response support system 10 may be able to provide various support functions other than (1) and (2) above. For example, the response support system 10 may be able to provide all or some of the support functions shown in (3) to (14) below.
 (3) Important matters provision function: This function provides the matters that the operator needs to explain to the customer or to confirm (such matters are also called "important matters"). With this function, the operator can know what matters need to be explained to the customer and what matters need to be confirmed.
 (4) Chat function: This function provides message exchange (so-called chat) with a supervisor or the like. With this function, the operator can, for example, make inquiries of the supervisor or ask for assistance. Note that a supervisor is, for example, a person who monitors operators' calls and supports an operator's telephone response work when a problem seems likely to occur or in response to a request from the operator.
 (5) Support request function: This function is for requesting support from a supervisor or the like. With this function, the operator can request support from a supervisor or the like, for example, when a problem has occurred or seems likely to occur.
 (6) Customer information provision function: This function provides information about the customer that the operator is currently serving (for example, attribute information such as name, age, sex, address, and purchase history of products and services). With this function, the operator can know various information about the customer they are currently serving.
 (7) Memo function: This function allows the operator to create arbitrary memos. With this function, the operator can, for example, leave any desired content as a memo (response record memo).
 (8) Previous call summary provision function: This function provides a summary of past calls of the customer that the operator is currently serving. With this function, the operator can check summary text that summarizes that customer's past calls.
 (9) Call summary function: This function uses the spoken text of a call to create summary text that summarizes the contents of the call. With this function, the operator can, for example, see a summary of a call after it has ended, or, when the same customer calls again, check the summary of the past call via the previous call summary provision function.
 (10) Call analysis function: This function analyzes the spoken text of a call to identify the call reason and the scene of the call, and to identify whether the call is inbound or outbound. With this function, the operator can know the call reason and scene of the customer they are currently serving. In addition, for example, the important matters provision function can provide important matters according to the call reason, the scene, and whether the call is inbound or outbound. Examples of scenes include the initial greeting (opening), confirmation of the inquiry, customer identity verification, handling, and the final greeting (closing).
 (11) Call history storage function: This function stores, as call history information, for example, information about the operator who handled a call, information about the customer on the call, the voice data of the call, the spoken text of the call, and the summary text. This function makes it possible to utilize the call history information for, for example, analysis of response quality and evaluation of operators.
 (12) Call history search function: This function searches the call history information stored by the call history storage function and provides the call history information obtained as the search results to operators, supervisors, and the like. With this function, operators, supervisors, and the like can search for call history information using desired search conditions and refer to the call history information obtained as the search results.
 (13) Call monitoring function: This function provides information for a supervisor to monitor operators' calls (for example, the spoken text of a call and inappropriate words (NG words) contained in that spoken text). With this function, the supervisor can efficiently monitor the calls of the operators being monitored. Note that, in general, the calls of several to a dozen or so operators are monitored by one supervisor.
 (14) Analysis function: This function uses the call history information stored by the call history storage function to perform various analyses (for example, analysis of response quality). With this function, those who wish to improve response quality (for example, response quality analysts and supervisors) can analyze response quality and consider various measures and countermeasures for improving it.
 It goes without saying that the support functions shown in (3) to (14) above are all merely examples. In addition to these, the response support system 10 may be able to provide various other support functions, for example, those described in the above-mentioned Non-Patent Document 1.
 The operator terminal 20 is a terminal such as a PC (personal computer) used by an operator.
 The supervisor terminal 30 is a terminal such as a PC used by a supervisor.
 The telephone 40 is an IP (Internet Protocol) telephone (such as a fixed IP telephone or a mobile IP telephone) used by an operator.
 The PBX 50 is a telephone exchange (IP-PBX) and is connected to a communication network 80 that includes a VoIP (Voice over Internet Protocol) network and a PSTN (Public Switched Telephone Network).
 The NW switch 60 relays packets (voice packets) between the telephones 40 and the PBX 50, and also captures those packets and transmits them to the response support system 10.
 The customer terminal 70 is any of various terminals used by customers, such as a smartphone, a mobile phone, or a landline telephone.
 Note that the overall configuration of the contact center system 1 shown in FIG. 1 is one example, and other configurations may be used. For example, in the example shown in FIG. 1, the response support system 10 is included in the contact center environment E (that is, the response support system 10 is on-premise), but all or some of the functions of the response support system 10 may be realized by a cloud service or the like. Similarly, in the example shown in FIG. 1, the PBX 50 is an on-premise telephone exchange, but it may be realized by a cloud service. Furthermore, if the operator terminals 20 have a telephone function, the contact center system 1 does not need to include the telephones 40.
<Example of functional configuration of response support system 10 and operator terminal 20>
FIG. 2 shows an example of the functional configuration of the response support system 10 and the operator terminal 20 according to this embodiment.
<Response Support System 10>
As shown in FIG. 2, the customer service support system 10 according to the present embodiment includes a voice recognition unit 101 and a customer service support unit 102. These functional units are realized, for example, by a process in which one or more programs installed in the customer service support system 10 are executed by a processor such as a CPU (Central Processing Unit). The customer service support system 10 according to the present embodiment also includes a spoken text storage unit 103, a keyword storage unit 104, and a knowledge storage unit 105. These storage units are realized, for example, by a storage device such as a hard disk drive (HDD), a solid state drive (SSD), or a flash memory. However, at least one of these storage units may be realized, for example, by a storage device such as a database server connected to the customer service support system 10 via a communication network.
The speech recognition unit 101 performs speech recognition on the audio represented by the voice data contained in the voice packets received from the NW switch 60, and creates utterance text with time information for each speaker. The speech recognition unit 101 also stores these utterance texts in the utterance text storage unit 103 for each call. Note that existing speech recognition technology can create per-speaker, time-stamped utterance text from audio containing the speech of multiple people.
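As a rough sketch of the data this step produces (the class, field, and function names are illustrative assumptions, not part of the specification), each recognition result could be held as a speaker-tagged, time-stamped record and stored per call in chronological order:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """One recognized utterance with speaker and time information (illustrative)."""
    call_id: str       # which call this utterance belongs to
    speaker: str       # e.g. "operator" or "customer"
    start_time: float  # seconds from the start of the call
    text: str          # recognized utterance text

def store_utterance(storage: dict, u: Utterance) -> None:
    """Keep utterances per call in chronological order, as the
    utterance text storage unit 103 is described as doing."""
    storage.setdefault(u.call_id, []).append(u)
    storage[u.call_id].sort(key=lambda x: x.start_time)
```

The per-call chronological ordering is what lets the screen later display both speakers' utterances as a time-ordered conversation.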
The response support unit 102 provides the operator terminal 20 with the response support screen and the UI components displayed on that screen, together with various support functions (i.e., support functions including at least an utterance text provision function and a knowledge search and provision function). The response support screen and the UI components displayed on it are represented by information such as HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), and JavaScript.
When an utterance component is selected on the response support screen, the response support unit 102 refers to the keyword information stored in the keyword storage unit 104 and extracts, from the utterance text represented by that utterance component, a keyword and the keywords related to it. The response support unit 102 then causes search components for searching knowledge with the extracted keywords to be displayed on the response support screen.
The utterance text storage unit 103 stores, for each call, the utterance text with time information for each speaker in that call. In other words, for each call, the utterance text storage unit 103 stores the utterance texts of each speaker in chronological order.
The keyword storage unit 104 stores one or more pieces of keyword information. Keyword information is information in which a keyword is associated with the keywords related to it. For example, keyword information is expressed in the format (keyword, related-keyword set), where the related-keyword set is the set of related keywords. The related-keyword set may be an ordered set in which the related keywords are ordered by their relevance to the keyword, for example. Note that if a keyword has no related keywords, its related-keyword set is the empty set.
A concrete example of keyword information is (cashing, {contract, bank}). This keyword information indicates that the keyword is "cashing" and that the keywords related to "cashing" are "contract" and "bank".
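A minimal sketch of how such keyword information could be held (the dictionary layout and function name are illustrative assumptions; the entries mirror the example in the text):

```python
# Keyword information: keyword -> ordered set of related keywords.
# A tuple preserves the relevance ordering; an empty tuple represents
# a keyword with no related keywords (the empty set).
KEYWORD_INFO = {
    "cashing": ("contract", "bank"),  # ordered by relevance to "cashing"
    "repayment": (),                  # no related keywords
}

def related_keywords(keyword: str) -> tuple:
    """Return the related-keyword set for a keyword (empty if none)."""
    return KEYWORD_INFO.get(keyword, ())
```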
The knowledge storage unit 105 stores one or more pieces of knowledge information. Knowledge information is information in which a keyword is associated with the knowledge related to that keyword. For example, knowledge information is expressed in the format (keyword, knowledge). This makes it possible to search for the knowledge related to a keyword using that keyword as the search key. However, this format is only an example; in addition to the keyword and the knowledge, knowledge information may include various other information, such as the number of times the knowledge has been referenced and evaluation information that rates the knowledge by some criterion. Examples of evaluation information include the average of graded numerical scores given by operators (e.g., the average of five-point ratings) and the total number of times operators have marked the knowledge as appropriate with a "Like", "GOOD", or the like.
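Extending the (keyword, knowledge) format with the auxiliary data mentioned above might look like the following sketch (field names such as `refs` and `ratings` are illustrative assumptions):

```python
# Knowledge information: keyword -> list of knowledge entries, each
# carrying auxiliary data (reference count, graded ratings).
KNOWLEDGE_INFO = {
    "cashing": [
        {"knowledge": "How to apply for the cashing service", "refs": 12, "ratings": [5, 4]},
        {"knowledge": "Cashing service fee schedule", "refs": 30, "ratings": [3]},
    ],
}

def search_knowledge(keyword: str) -> list:
    """Look up knowledge entries using the keyword as the search key."""
    return KNOWLEDGE_INFO.get(keyword, [])
```

The auxiliary fields are what later allow the search results to be ordered by reference count or by rating.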
Although not shown in FIG. 2, the response support system 10 also has various other storage units and functional units used to realize the support functions, such as an operator information storage unit that stores operator information including each operator's operator ID and attributes (name, affiliated organization, etc.), and a call history information storage unit that stores call history information.
≪Operator terminal 20≫
As shown in FIG. 2, the operator terminal 20 according to this embodiment has a UI control unit 201. The UI control unit 201 is realized, for example, by processing that one or more programs installed in the operator terminal 20 (e.g., a web browser) cause a processor such as a CPU to execute.
The UI control unit 201 displays the response support screen provided by the response support system 10 on the display and, each time it receives an utterance component from the response support system 10, displays that utterance component dynamically (in real time) on the response support screen. When the UI control unit 201 receives knowledge from the response support system 10, it displays that knowledge on the response support screen. The UI control unit 201 also accepts the operator's various input operations on the response support screen (e.g., button presses and character input).
<Response support processing>
In the following, assuming that a voice call is in progress between a customer and an operator, the response support processing that supports this operator's telephone answering work is described with reference to FIG. 3. It is assumed that the UI control unit 201 is displaying the response support screen on the display of the operator terminal 20 used by this operator. Note that while the voice call between the customer and the operator is in progress, the voice packets exchanged between the customer terminal 70 used by the customer and the operator terminal 20 used by the operator are transmitted (forwarded) by the NW switch 60 to the response support system 10.
The speech recognition unit 101 of the response support system 10 performs speech recognition on the audio represented by the voice data contained in the voice packets received from the NW switch 60, and creates utterance text with time information for each speaker (step S101).
Next, the speech recognition unit 101 of the response support system 10 stores the utterance text created in step S101 in the utterance text storage unit 103 (step S102).
Next, the response support unit 102 of the response support system 10 transmits an utterance component representing the utterance text created in step S101 to the operator terminal 20 (step S103).
When the UI control unit 201 of the operator terminal 20 receives the utterance component from the response support system 10, it displays the utterance component on the response support screen (step S104).
FIG. 4 shows an example of the response support screen displayed by the UI control unit 201. The response support screen 1000 shown in FIG. 4 includes an utterance content display field 1100, in which the utterance components received from the response support system 10 are displayed for each speaker in chronological order. In the example shown in FIG. 4, utterance components 1111 to 1113 representing the operator's utterance texts and utterance components 1121 to 1123 representing the customer's utterance texts are displayed in chronological order. This allows the operator to review both the operator's own utterances and the customer's utterances in chronological order.
Each utterance component displayed in the utterance content display field 1100 may be associated with a UI component (hereinafter also called an utterance evaluation component) for marking the utterance text represented by that utterance component as useful for narrowing down knowledge during a knowledge search. This utterance evaluation component may be called, for example, a "Like" button or a "GOOD" button for the utterance text. In the example shown in FIG. 4, utterance components 1111 to 1113 are associated with utterance evaluation components 1131 to 1133, respectively, and utterance components 1121 to 1123 are likewise associated with utterance evaluation components 1141 to 1143. The operator can perform a selection operation on the utterance evaluation components displayed in the utterance content display field 1100 on the response support screen 1000 shown in FIG. 4; for example, the operator can use an input device such as a pointing device, a touch panel, or a keyboard to select the desired one from among the utterance evaluation components 1131 to 1133 and 1141 to 1143. The example in FIG. 4 shows the case where a selection operation has been performed on utterance evaluation component 1141.
This allows the operator to mark utterance texts that are useful for narrowing down knowledge during a knowledge search and, as described later, makes it possible to narrow down the knowledge by the keywords contained in those utterance texts. Note that in the example shown in FIG. 4 an utterance text is evaluated as either useful or not useful for narrowing down knowledge, but the evaluation is not limited to this and may, for example, be a graded numerical score.
The operator can perform a selection operation on the utterance components displayed in the utterance content display field 1100 on the response support screen 1000 shown in FIG. 4. For example, the operator can use an input device such as a pointing device, a touch panel, or a keyboard to select the desired utterance component from among the utterance components 1111 to 1113 and 1121 to 1123 displayed in the utterance content display field 1100. The following description assumes that a selection operation has been performed on one of the utterance components displayed in the utterance content display field 1100 on the response support screen 1000 shown in FIG. 4.
The following description assumes the case where the utterance text represented by the utterance component contains one keyword. However, the utterance text may contain no keyword, or may contain multiple keywords. If the utterance text contains no keyword, no keyword is extracted in step S106 described later. If the utterance text contains multiple keywords, then in step S106, for example, all of the keywords may be extracted, or each time a selection operation is performed on the utterance component, the keywords may be extracted one at a time in order of their first appearance in the utterance text.
When a selection operation is performed on the utterance component, the UI control unit 201 of the operator terminal 20 transmits operation information indicating that the utterance component has been selected to the response support system 10 (step S105). The operation information includes, for example, the identification information (e.g., a component ID) of the utterance component on which the selection operation was performed. Alternatively, instead of the operation information, the UI control unit 201 of the operator terminal 20 may transmit to the response support system 10 the utterance text represented by the utterance component, or the identification information (e.g., an utterance text ID) of that utterance text.
When the response support unit 102 of the response support system 10 receives the operation information from the operator terminal 20, it extracts keywords (including related keywords) from the utterance text represented by the utterance component whose component ID is included in the operation information (step S106). That is, the response support unit 102 refers to the keyword information stored in the keyword storage unit 104, extracts a keyword from the utterance text, and also extracts the keywords related to that keyword. Note that the collection consisting of a keyword and its related keywords may be called a "preset" or a "keyword preset".
For example, suppose that the keyword information (cashing, {contract, bank}) is stored in the keyword storage unit 104 and that a selection operation is performed on the utterance component 1122 displayed in the utterance content display field 1100 on the response support screen 1000 shown in FIG. 4. In this case, the response support unit 102 extracts the keyword "cashing" from the utterance text represented by utterance component 1122, "I'd like to confirm something about that cashing service.", and also extracts the keywords "contract" and "bank" related to the keyword "cashing".
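Step S106 can be sketched as a scan of the utterance text against the registered keywords, expanding each hit with its related keywords to form the keyword preset (the function name and matching-by-substring are illustrative assumptions; a real system might use morphological analysis instead):

```python
def extract_preset(utterance_text: str, keyword_info: dict) -> list:
    """Find registered keywords appearing in the utterance text and
    expand each with its related keywords ("keyword preset")."""
    preset = []
    for keyword, related in keyword_info.items():
        if keyword in utterance_text:
            preset.append(keyword)
            preset.extend(related)
    return preset

# Example mirroring the text: the utterance mentions the cashing service.
keyword_info = {"cashing": ("contract", "bank")}
print(extract_preset("I'd like to confirm something about that cashing service.",
                     keyword_info))
# ['cashing', 'contract', 'bank']
```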
Next, the response support unit 102 of the response support system 10 transmits search components for searching knowledge with the keywords extracted in step S106 to the operator terminal 20 (step S107).
For example, if the keywords extracted in step S106 are "cashing", "contract", and "bank", then a search component for searching knowledge with "cashing", a search component for searching knowledge with "contract", and a search component for searching knowledge with "bank" are transmitted to the operator terminal 20.
When the UI control unit 201 of the operator terminal 20 receives the search components from the response support system 10, it displays them on the response support screen (step S108). These search components are displayed, for example, in the search component display field 1200 of the response support screen 1000 shown in FIG. 4. At this time, for example, the search component corresponding to the keyword extracted from the utterance text is displayed leftmost (the highest-priority display position). The search components corresponding to the related keywords may be displayed from left to right in descending order of relevance to the keyword, or from left to right in the order in which the related keywords were extracted. In the example shown in FIG. 4, the search component 1210 corresponding to the keyword "cashing" is displayed leftmost (the highest-priority display position), followed by the search component 1220 corresponding to the related keyword "contract" (the second-highest-priority display position) and then the search component 1230 corresponding to the related keyword "bank" (the third-highest-priority display position). The higher the priority of a search component's display position, the more likely it is considered that searching knowledge with the keyword corresponding to that search component will yield appropriate knowledge.
The search component corresponding to the keyword and the search components corresponding to the related keywords may have different display appearances (e.g., different UI component colors). The example shown in FIG. 4 shows the case where the search component 1210 corresponding to the keyword and the search components 1220 to 1230 corresponding to the related keywords have different display appearances.
The operator can perform a selection operation on the search components displayed in the search component display field 1200 on the response support screen 1000 shown in FIG. 4. For example, the operator can use an input device such as a pointing device, a touch panel, or a keyboard to select the desired search component from among the search components 1210 to 1230 displayed in the search component display field 1200. The following description assumes that a selection operation has been performed on one of the search components displayed in the search component display field 1200 on the response support screen 1000 shown in FIG. 4.
When a selection operation is performed on the search component, the UI control unit 201 of the operator terminal 20 transmits operation information indicating that the search component has been selected to the response support system 10 (step S109). The operation information includes, for example, the identification information (e.g., a component ID) of the search component on which the selection operation was performed. Alternatively, instead of the operation information, the UI control unit 201 of the operator terminal 20 may transmit to the response support system 10 the keyword corresponding to the search component, or the identification information (e.g., a keyword ID) of that keyword.
When the response support unit 102 of the response support system 10 receives the operation information from the operator terminal 20, it searches for knowledge with the keyword corresponding to the search component whose component ID is included in the operation information (step S110). That is, the response support unit 102 searches the knowledge information stored in the knowledge storage unit 105 using that keyword as the search key, and obtains the knowledge information related to that keyword as the search result.
For example, when the operator performs a selection operation on the search component 1210, the knowledge information stored in the knowledge storage unit 105 is searched using the keyword "cashing" corresponding to search component 1210 as the search key, and knowledge information related to the keyword "cashing" is thereby retrieved. Any known method may be used for searching knowledge.
If an utterance evaluation component has been selected by the operator, then in step S110 the knowledge search is performed with the keyword contained in the utterance text associated with that utterance evaluation component added as an AND condition. For example, if the utterance evaluation component 1141 has been selected on the response support screen 1000 shown in FIG. 4, the knowledge search is performed with the keyword "ABC card" contained in the utterance text associated with utterance evaluation component 1141 as an AND condition. Concretely, if the keyword corresponding to the search component selected by the operator is "cashing", knowledge information is searched using "cashing" and "ABC card" as the search keys. In this way, when an utterance evaluation component has been selected by the operator, the knowledge search can be narrowed down by the keyword contained in the utterance text associated with that utterance evaluation component.
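The AND-condition narrowing in step S110 can be sketched as follows (the entry format, with each knowledge entry indexed by a set of keywords, is an illustrative assumption):

```python
def search_with_and(entries: list, search_key: str, and_keyword: str = None) -> list:
    """Search knowledge entries by keyword; when an utterance evaluation
    component has been selected, its keyword is ANDed in to narrow the hits."""
    hits = [e for e in entries if search_key in e["keywords"]]
    if and_keyword is not None:
        hits = [e for e in hits if and_keyword in e["keywords"]]
    return hits

entries = [
    {"keywords": {"cashing"}, "knowledge": "General cashing FAQ"},
    {"keywords": {"cashing", "ABC card"}, "knowledge": "Cashing with the ABC card"},
]
# "cashing" alone matches both entries; ANDing "ABC card" narrows to one.
```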
Next, the response support unit 102 of the response support system 10 transmits the knowledge contained in the knowledge information retrieved in step S110, together with its accompanying information, to the operator terminal 20 as the search result (step S111). Examples of information accompanying the knowledge include the number of times the knowledge has been referenced and the knowledge's evaluation information.
When the UI control unit 201 of the operator terminal 20 receives the search result from the response support system 10, it displays the search result on the response support screen (step S112). The search result is displayed, for example, in the search result display field 1300 of the response support screen 1000 shown in FIG. 4, where knowledge 1301 to 1302 is displayed as the search result. By selecting the desired knowledge from the knowledge displayed in the search result display field 1300, more detailed information on the selected knowledge may be displayed.
At this time, the search result display field 1300 may display the retrieved knowledge in a display order that reflects the information accompanying each piece of knowledge. For example, the knowledge may be displayed in descending order of reference count, or in descending order of the evaluation represented by the evaluation information (e.g., in descending order of the total number of "Like" or "GOOD" marks, or of the average graded numerical score).
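The two orderings described above can be sketched as a sort over the accompanying fields (the `refs` and `ratings` field names are illustrative assumptions carried over from the knowledge-information format):

```python
def order_results(results: list, mode: str = "refs") -> list:
    """Order retrieved knowledge for display: most-referenced first,
    or best-rated first (average of graded scores)."""
    if mode == "refs":
        key = lambda r: r["refs"]
    else:  # mode == "rating"
        key = lambda r: sum(r["ratings"]) / len(r["ratings"]) if r["ratings"] else 0.0
    return sorted(results, key=key, reverse=True)

results = [
    {"knowledge": "A", "refs": 3, "ratings": [5, 5]},   # rarely referenced, rated 5.0
    {"knowledge": "B", "refs": 10, "ratings": [2]},     # often referenced, rated 2.0
]
```

Note that the two modes can rank the same entries differently, which is why the choice of accompanying information matters for what the operator sees first.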
The knowledge displayed in the search result display field 1300 may be associated with a UI component (hereinafter also called a knowledge evaluation component) for marking that knowledge as useful for answering the inquiry. This knowledge evaluation component may be called, for example, a "Like" button or a "GOOD" button for the knowledge. In the example shown in FIG. 4, knowledge 1301 to 1302 is associated with knowledge evaluation components 1311 to 1312, respectively. The operator can perform a selection operation on the knowledge evaluation components displayed in the search result display field 1300 on the response support screen 1000 shown in FIG. 4; for example, the operator can use an input device such as a pointing device, a touch panel, or a keyboard to select the desired one from among the knowledge evaluation components 1311 to 1312. When a selection operation is performed on a knowledge evaluation component, operation information indicating that the knowledge evaluation component has been selected is transmitted to the response support system 10, and as a result, among the knowledge information stored in the knowledge storage unit 105, the evaluation information of the knowledge information containing the knowledge associated with that knowledge evaluation component is updated.
This makes it possible, for the knowledge obtained as search results, to place knowledge that is useful for answering inquiries (i.e., high-quality knowledge) higher in the display order and less useful knowledge lower.
As described above, in the contact center system 1 according to this embodiment, simply by performing a selection operation on an utterance component representing the customer's or the operator's own utterance text, the operator can cause search components for searching knowledge with the keyword contained in that utterance text and its related keywords to be displayed on the response support screen. The operator can then search for knowledge with the keyword corresponding to a desired search component simply by performing a selection operation on that search component on the response support screen. The operator can therefore search for knowledge in a short time. Moreover, since search components corresponding not only to the keyword but also to its related keywords are displayed, and the operator can select the search component corresponding to the appropriate keyword from among them, the operator can obtain appropriate knowledge.
Therefore, according to the contact center system 1 of this embodiment, when responding to an inquiry from a customer, the operator can provide an appropriate answer without keeping the customer waiting, which in turn makes it possible, for example, to improve customer satisfaction.
As before, the operator can also search for knowledge related to a desired keyword by entering that keyword in the search area 1400 on the response support screen 1000 shown in FIG. 4 and pressing the search button 1500.
<Modifications>
Modifications of this embodiment are described below.
≪Modification 1≫
In the embodiment above, knowledge is searched when a selection operation is performed on a search component. Alternatively, a knowledge search may be performed in advance for the keywords (including the related keywords) extracted from the utterance component, and when the mouse is placed over a search component, the search result for the keyword corresponding to that search component (or the top few pieces of knowledge included in that search result) may be displayed as a pop-up. For example, when the mouse is placed over the search component 1210 on the response support screen 1000 shown in FIG. 5, the search result for the keyword corresponding to search component 1210 may be displayed on a pop-up screen 2100.
 This allows the operator, simply by hovering the mouse over a search component, to easily check what kind of search results would be obtained by a knowledge search using the keyword corresponding to that search component.
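The pre-search-and-popup behavior of Modification 1 can be sketched as follows. `search_knowledge`, `TOP_N`, and the example keywords are illustrative assumptions, not part of the disclosed embodiment:

```python
# Sketch of Modification 1: knowledge is searched in advance for every
# extracted keyword, so a mouseover can show cached results instantly.
TOP_N = 3  # how many top results the pop-up shows (illustrative)

def search_knowledge(keyword):
    # Stand-in for the real knowledge search (knowledge storage unit 105).
    return [f"{keyword}: answer {i}" for i in range(1, 6)]

def prefetch(keywords):
    """Run the search ahead of time for each keyword/related keyword."""
    return {kw: search_knowledge(kw) for kw in keywords}

def on_mouseover(cache, keyword):
    """Return the top few cached results for the pop-up display."""
    return cache.get(keyword, [])[:TOP_N]

cache = prefetch(["cashing", "repayment"])
popup = on_mouseover(cache, "cashing")
```

Because the searches run at extraction time, the pop-up incurs no search latency on hover.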
 <<Modification 2>>
 In the above embodiment, depending on the keyword extracted from the utterance text represented by an utterance component, appropriate knowledge may not be obtained. For example, when the extracted keyword is an abstract term or a term with multiple meanings, a large amount of knowledge may be returned as search results, and the narrowing down may be insufficient. For this reason, when the keyword extracted from the utterance text represented by an utterance component is a specific keyword (for example, one representing an abstract term or a term with multiple meanings), a suggestion for further narrowing down the knowledge may be presented to the operator at the time of the knowledge search.
 For example, on the response support screen 1000 shown in FIG. 6, a suggestion 3100 for further narrowing down the knowledge is presented to the operator when knowledge is searched using the keyword "cashing" (キャッシング, a cash-advance service) corresponding to the search component 1210. This suggestion 3100 displays the search results obtained when the knowledge related to the keyword "cashing" is searched with further narrowing applied.
 This allows the operator to obtain appropriate knowledge even when the keyword extracted from the utterance text represented by an utterance component represents an abstract term or a term with multiple meanings.
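One minimal way to realize Modification 2 is a lookup table that maps known ambiguous keywords to narrower candidate queries. The table contents below are hypothetical examples only:

```python
# Sketch of Modification 2: when an extracted keyword is known to be abstract
# or ambiguous, narrowing suggestions are offered instead of a raw search.
AMBIGUOUS = {
    "cashing": ["cashing repayment date", "cashing interest rate", "cashing limit"],
}

def suggest_refinements(keyword):
    """Return narrower queries for an ambiguous keyword, or [] if it is specific."""
    return AMBIGUOUS.get(keyword, [])

suggestions = suggest_refinements("cashing")
```

A richer system might derive the refinements from search-log statistics rather than a fixed table; the table simply makes the suggestion mechanism concrete.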
 <<Modification 3>>
 When multiple keywords are contained in the utterance text represented by an utterance component, in step S106 above, the keyword located closest to the position at which the selection operation was performed on that utterance component (the operation position) may be extracted.
 For example, assume that a selection operation is performed on the utterance component 4100 shown in FIG. 7, and that the utterance text represented by the utterance component 4100 contains a keyword 4101 representing "ABC service" and a keyword 4102 representing "XX function." In this case, if the selection operation is performed at operation position 4103, the keyword 4101 is extracted in step S106 above; on the other hand, if the selection operation is performed at operation position 4104, the keyword 4102 is extracted.
 This allows the operator to extract only the desired keyword from the utterance text represented by an utterance component, according to the position of the operator's own operation.
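The nearest-keyword rule of Modification 3 can be sketched as below. The operation position is modeled as a character offset; a real UI would first map the click's pixel coordinates to a character index. All names are illustrative:

```python
# Sketch of Modification 3: when an utterance contains several keywords,
# pick the one closest to the operation (click) position.
def nearest_keyword(text, keywords, click_pos):
    best, best_dist = None, None
    for kw in keywords:
        start = text.find(kw)
        if start == -1:
            continue
        end = start + len(kw)
        # Distance is 0 if the click falls inside the keyword's span.
        dist = max(start - click_pos, 0, click_pos - end)
        if best_dist is None or dist < best_dist:
            best, best_dist = kw, dist
    return best

text = "Question about the ABC service and its XX function."
kw_left = nearest_keyword(text, ["ABC service", "XX function"], 20)   # click near "ABC service"
kw_right = nearest_keyword(text, ["ABC service", "XX function"], 45)  # click near "XX function"
```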
 <<Modification 4>>
 In step S106 above, both keywords and related keywords are extracted; alternatively, for example, only the keywords may be extracted, without extracting the related keywords. Moreover, when only a keyword is extracted, knowledge may be searched automatically using that keyword, and the search results may be presented to the operator. This allows the operator to search for knowledge using a keyword contained in the utterance text represented by an utterance component simply by performing a selection operation on that utterance component, making it possible to search for knowledge in an even shorter time.
 <<Modification 5>>
 When displaying the search components corresponding to related keywords on the response support screen, the priority (display order) of those search components may be determined according to their relationship with the topic of the conversation between the customer and the operator. For example, after determining the topic of the conversation between the customer and the operator, the likelihood (for example, a probability) that each related keyword matches that topic may be calculated, and the priority (display order) of the corresponding search components may be determined in descending order of that likelihood. The topic may be determined using a known topic determination technique.
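The topic-based ordering of Modification 5 can be sketched as follows. `estimate_topic_probs` is a hypothetical stand-in for whatever known topic determination technique is used; the fixed probabilities are for illustration only:

```python
# Sketch of Modification 5: related keywords are ordered by the likelihood
# that they match the conversation's topic.
def estimate_topic_probs(conversation, keywords):
    # A real system would run topic classification on the conversation here;
    # fixed values are used purely for illustration.
    fake = {"loan": 0.7, "insurance": 0.1, "repayment": 0.2}
    return {kw: fake.get(kw, 0.0) for kw in keywords}

def order_by_topic(conversation, related_keywords):
    """Return related keywords sorted by descending topic likelihood."""
    probs = estimate_topic_probs(conversation, related_keywords)
    return sorted(related_keywords, key=lambda kw: probs[kw], reverse=True)

ordered = order_by_topic("...", ["insurance", "loan", "repayment"])
```

The UI would then render the search components in the order of `ordered`, so the most topic-relevant keyword appears first.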
 <<Modification 6>>
 The search components displayed on the response support screen may, for example, be deleted each time a selection operation is performed on an utterance component, or may be added each time a selection operation is performed on an utterance component. When they are added on each selection operation, the oldest search components may be deleted first once the maximum number of search components that can be displayed on the response support screen is exceeded.
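The oldest-first deletion policy of Modification 6 is exactly the behavior of a bounded FIFO queue; a minimal sketch (the maximum count and keyword names are illustrative assumptions):

```python
# Sketch of Modification 6: each selection operation adds new search
# components; once the on-screen maximum is exceeded, the oldest are
# dropped first. collections.deque with maxlen implements this policy.
from collections import deque

MAX_COMPONENTS = 4  # illustrative display limit
displayed = deque(maxlen=MAX_COMPONENTS)

def on_selection(new_keywords):
    """Append search components for a newly selected utterance."""
    for kw in new_keywords:
        displayed.append(kw)  # deque evicts from the left when full
    return list(displayed)

on_selection(["kw1", "kw2", "kw3"])
state = on_selection(["kw4", "kw5"])  # "kw1" is evicted as the oldest
```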
 The present invention is not limited to the specifically disclosed embodiments above, and various modifications, changes, and combinations with known technologies are possible without departing from the scope of the claims.
Reference Signs List
1 Contact center system
10 Response support system
20 Operator terminal
30 Supervisor terminal
40 Telephone
50 PBX
60 NW switch
70 Customer terminal
80 Communication network
101 Voice recognition unit
102 Response support unit
103 Utterance text storage unit
104 Keyword storage unit
105 Knowledge storage unit
201 UI control unit
E Contact center environment

Claims (9)

  1.  An information processing device comprising:
     an extraction unit configured to extract, in response to a selection operation on a first display component representing text in units of utterances in a conversation between multiple people, a predetermined first character string from the text represented by the first display component that is the target of the selection operation; and
     a search unit configured to search for knowledge representing information for supporting responses to inquiries in the conversation, using the extracted first character string as a search key.
  2.  The information processing device according to claim 1, wherein:
     the extraction unit is configured to extract the first character string and a second character string related to the first character string; and
     the search unit is configured to, in response to a selection operation on a second display component for performing the search using the first character string or the second character string as a search key, search for the knowledge using, as a search key, the first character string or the second character string corresponding to the second display component that is the target of the selection operation.
  3.  The information processing device according to claim 2, wherein the second display components corresponding to the second character strings are displayed on a UI in descending order of the relevance between the second character string and the first character string, or in descending order of the likelihood that the topic of the second character string is the same as the topic of the conversation.
  4.  The information processing device according to claim 1, wherein:
     the extraction unit is configured to extract the first character string and a second character string related to the first character string; and
     the search unit is configured to perform the search using each of the first character string and the second character string as a search key, and, in response to an operation of placing a pointer over a second display component for performing the search using the first character string or the second character string as a search key, display on a UI the knowledge obtained as the search result of the search performed using, as a search key, the first character string or the second character string corresponding to the second display component that is the target of the pointer-over operation, or a predetermined number of top pieces of knowledge included in that search result.
  5.  The information processing device according to any one of claims 1 to 4, wherein the search unit is configured to search for the knowledge also using, as a search key, a predetermined character string contained in the text represented by a first display component that has been evaluated by a user as representing a useful utterance, among the first display components.
  6.  The information processing device according to claim 1, wherein the search unit is configured to display the knowledge obtained as a search result on a UI in an order based on evaluation information for the knowledge.
  7.  The information processing device according to claim 1, wherein the extraction unit is configured to, when a plurality of the first character strings are contained in the text represented by the first display component that is the target of the selection operation, extract the first character string located closest to the position at which the selection operation was performed.
  8.  An information processing method executed by a computer, the method comprising:
     an extraction step of extracting, in response to a selection operation on a first display component representing text in units of utterances in a conversation between multiple people, a predetermined first character string from the text represented by the first display component that is the target of the selection operation; and
     a search step of searching for knowledge representing information for supporting responses to inquiries in the conversation, using the extracted first character string as a search key.
  9.  A program causing a computer to execute:
     an extraction step of extracting, in response to a selection operation on a first display component representing text in units of utterances in a conversation between multiple people, a predetermined first character string from the text represented by the first display component that is the target of the selection operation; and
     a search step of searching for knowledge representing information for supporting responses to inquiries in the conversation, using the extracted first character string as a search key.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/037722 WO2024075302A1 (en) 2022-10-07 2022-10-07 Information processing device, information processing method, and program


Publications (1)

Publication Number Publication Date
WO2024075302A1 true WO2024075302A1 (en) 2024-04-11


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007323558A (en) * 2006-06-05 2007-12-13 Nippon Telegr & Teleph Corp <Ntt> Keyword generator, and document retrieval device, method and program
JP2018128869A (en) * 2017-02-08 2018-08-16 日本電信電話株式会社 Search result display device, search result display method, and program
WO2020036194A1 (en) * 2018-08-15 2020-02-20 日本電信電話株式会社 Search result display device, search result display method, and program

