US20210111915A1 - Guiding a presenter in a collaborative session on word choice
Guiding a presenter in a collaborative session on word choice
- Publication number
- US20210111915A1 (U.S. application Ser. No. 17/128,187)
- Authority
- United States (US)
- Prior art keywords
- interest
- subject domain
- presenting
- communication session
- real
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1822—Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
Definitions
- a method for suggesting words includes, during a collaborative session involving a plurality of participants, determining a first subject domain of interest for at least one participant of the collaborative session that is not a presenter and selecting, using a processor, at least one word from the first subject domain. The method further includes providing the word to a communication device of the participant designated as the presenter and not to any other communication device of a participant.
- a system for suggesting words includes a processor programmed to initiate executable operations.
- the executable operations include, during a collaborative session involving a plurality of participants, determining a first subject domain of interest for at least one participant of the collaborative session that is not a presenter and selecting at least one word from the first subject domain.
- the executable operations further include providing the word to a communication device of the participant designated as the presenter and not to any other communication device of a participant.
- a computer program product for suggesting words includes a computer readable storage medium having program code embodied therewith.
- the program code is executable by a processor to perform a method including, during a collaborative session involving a plurality of participants, determining a first subject domain of interest for at least one participant of the collaborative session that is not a presenter using the processor, selecting, using the processor, at least one word from the first subject domain, and providing, using the processor, the word to a communication device of the participant designated as the presenter and not to any other communication device of a participant.
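- As a concrete, non-limiting illustration of the claimed flow, the Python sketch below routes a suggested word only to the presenter's device. All names (`determine_interest_domain`, `select_word`, `send_to_device`) and the example domains and words are hypothetical placeholders, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    device_id: str
    is_presenter: bool = False

def determine_interest_domain(participant: Participant) -> str:
    """Hypothetical stand-in: a real system would consult interest sources
    (blogs, user profiles, social networks) for this participant."""
    return {"B": "nursing", "D": "swimming"}.get(participant.name, "general")

def select_word(domain: str) -> str:
    """Hypothetical stand-in: a real system would pick from a corpus
    structured by subject domain."""
    return {"nursing": "triage", "swimming": "dive in"}.get(domain, "focus")

def send_to_device(device_id: str, word: str) -> None:
    print(f"suggesting '{word}' on device {device_id}")

def suggest_words(participants: list[Participant]) -> None:
    presenter = next(p for p in participants if p.is_presenter)
    non_presenters = [p for p in participants if not p.is_presenter]
    # Determine a subject domain of interest for a non-presenting participant,
    # select a word from that domain, and provide it to the presenter only.
    domain = determine_interest_domain(non_presenters[0])
    word = select_word(domain)
    send_to_device(presenter.device_id, word)

suggest_words([Participant("A", "device-145", is_presenter=True),
               Participant("B", "device-150"),
               Participant("C", "device-155")])
```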
- FIG. 1 is a block diagram illustrating an example of a communication system.
- FIG. 2 is a block diagram illustrating an exemplary implementation of a guidance system as shown in FIG. 1 .
- FIG. 3 is a flow chart illustrating an exemplary method of suggesting words to a presenter within a collaborative session.
- FIG. 4 is an exemplary view displayed upon a display of a communication device of a presenter during a collaborative session.
- FIG. 5 is an exemplary view displayed upon a display of a communication device of a non-presenting participant during a collaborative session.
- aspects of the present invention may be embodied as a system, method or computer program product.
- aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.”
- aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer-readable program code embodied, e.g., stored, thereon.
- the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
- a computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk drive (HDD), a solid state drive (SSD), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible, e.g., non-transitory, medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java™, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products. Each block of the flowchart illustrations and/or block diagrams, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- One or more embodiments disclosed within this specification relate to providing guidance as to word choice to a presenter within a collaborative session.
- one or more subject domains of interest to participants in the collaborative session can be determined.
- One or more words from the one or more of the subject domains of interest are selected and provided to a presenter within the collaborative session.
- the presenter receives the words and is able to incorporate the words within the presentation. By using, e.g., speaking, words from the selected subject domain, the presenter increases the likelihood that participants in the collaborative session remain focused and attentive.
- FIG. 1 is a block diagram illustrating an example of a communication system 100 .
- Communication system 100 includes a collaboration system 105 , a word guidance system (guidance system) 110 , one or more interest sources 125 , and a plurality of communication devices 145 - 165 communicatively linked through a network 170 .
- Network 170 can be implemented as, or include, any of a variety of different networks such as a WAN, a LAN, a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, a Public Switched Telephone Network (PSTN), or the like.
- collaboration system 105 is implemented as a data processing system, e.g., a server, executing suitable operational and/or application software.
- Collaboration system 105 implements one or more collaborative sessions, such as collaborative session 140 , of which communication devices 145 - 165 are participants.
- a “collaborative session,” as used herein, refers to a communication session in which two or more users participate and communicate with one another concurrently through appropriate communication devices. Examples of a collaborative session include, but are not limited to, a conference call, a screen cast, a Web-meeting, or the like. Typically, within the collaborative session, one of the participants is designated as a presenter.
- collaboration system 105 is implemented as a data processing system. In other cases, collaboration system 105 is implemented as specialized hardware or a combination of specialized hardware operating in cooperation with a data processing system.
- for example, in the case where collaborative session 140 is a conference call conducted over the PSTN, collaboration system 105 can include, or be implemented as, a telephony switch. The telephony switch can operate in place of, or in cooperation with, a data processing system.
- collaboration system 105 can be implemented in any of a variety of forms using any of a variety of specialized hardware.
- Each of users A, B, C, D, and E is taking part in collaborative session 140 through a respective one of communication devices 145 - 165 .
- User A corresponds to communication device 145 .
- User B corresponds to communication device 150 .
- User C corresponds to communication device 155 .
- User D corresponds to communication device 160 .
- User E corresponds to communication device 165 .
- Communication devices 145 - 165 represent any of a variety of different communication devices such as mobile communication devices (e.g., smart phones, Internet-enabled phones, tablet devices, etc.), computers, or other information processing or communication appliances. Appreciably, the particular type of communication device used by a user will vary according to the type of collaborative session 140 .
- reference to a communication device may also refer to the particular user of that communication device.
- a “user” refers to a human being that operates or uses a particular communication device.
- reference to a user may also refer to the communication device utilized by that user.
- reference to user A can refer to communication device 145 .
- reference to communication device 145 can refer to user A or an identity maintained by user A on communication device 145 .
- a statement that a shared artifact within a collaborative session is made available from user A to user B is understood to mean that the artifact is made available from communication device 145 to communication device 150.
- the term “participant” and the term “presenter” each refers to a user, or communication device of the user, that is taking part in, or has joined, collaborative session 140 .
- a presenter is a participant of collaborative session 140 that is designated, e.g., by collaboration system 105 , as the presenter.
- a presenter typically has one or more rights and/or privileges that are not possessed by other participants of collaborative session 140 that are not designated as presenter.
- user A is a presenter. Referring to the example of FIG. 1 , user A has an ability to share content from communication device 145 , e.g., screen share or share a presentation to be viewed by users B-E within collaborative session 140 . Speech of the user A, as the presenter, is provided to users B-E.
- other participants, e.g., users B, C, D, and/or E, may speak or obtain permission to speak to one or more or all other participants, but they are not considered presenters, as that term confers a particular status upon a participant within collaboration system 105 and, more particularly, within collaborative session 140.
- Guidance system 110 is implemented as a data processing system, e.g., a server, executing suitable operational software and a guidance module 120 .
- Guidance system 110 is configured to interact with collaboration system 105 to determine one or more subject domains of interest to one or more of the participants B-E of collaborative session 140 .
- a “subject domain” refers to an area or department of knowledge or learning. Subject domains can be ordered with respect to one another. For example, subject domains can be classified into a structured relationship and/or hierarchy.
- guidance system 110 determines subject domains that are of interest to participants of collaborative session 140 by accessing one or more interest sources 125.
- interest sources 125 include, but are not limited to, blogs 130 belonging to a participant, user profiles 135 (e.g., company user profiles specifying areas of expertise, interests, etc.) of participants, social network Websites to which participants belong or publish data, or the like.
- having determined subject domains of interest for one or more of users B-E, guidance system 110 selects one or more words from one or more of the subject domains.
- Guidance system 110 provides the selected word or words to communication device 145 .
- the word or words that are selected can be presented or displayed to the presenter, i.e., user A, through communication device 145 .
- the presenter can choose to utilize the word or words, e.g., speak the word or words, during the collaborative session to raise the level of interest and/or attentiveness of users B-E that have an interest in the particular subject domain from which the word or words were selected.
- guidance system 110 selects words in view of the current subject domain of collaborative session 140 , which can be determined prior to the start of collaborative session 140 or in real time as collaborative session 140 continues.
- the phrase “real time” means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables the processor to keep up with some external process.
- the word or words selected by guidance system 110 may be common to both a selected subject domain of interest for at least one participant and the subject domain of the collaborative session.
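- A minimal sketch of that selection policy, assuming hypothetical per-domain word sets (the vocabularies below are invented for illustration):

```python
def words_common_to_both(interest_domain_words: set[str],
                         session_domain_words: set[str],
                         limit: int = 3) -> list[str]:
    """Return up to `limit` words that belong to both the selected subject
    domain of interest and the current subject domain of the session."""
    return sorted(interest_domain_words & session_domain_words)[:limit]

interest_words = {"drive", "gear", "brake", "engine"}
session_words = {"roadmap", "drive", "milestone", "brake"}
print(words_common_to_both(interest_words, session_words))  # ['brake', 'drive']
```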
- FIG. 2 is a block diagram illustrating an exemplary implementation of guidance system 110 of FIG. 1 .
- Guidance system 110 includes at least one processor 205 , e.g., a central processing unit (CPU), coupled to memory elements 210 through a system bus 215 or other suitable circuitry. As such, guidance system 110 stores program code within memory elements 210 . Processor 205 executes the program code accessed from memory elements 210 via system bus 215 .
- guidance system 110 can be implemented as a computer or a programmable data processing apparatus that is suitable for storing and/or executing program code. It should be appreciated, however, that guidance system 110 can be implemented in the form of any system including a processor and memory that is capable of performing the functions and/or operations described within this specification.
- Memory elements 210 can include one or more physical memory devices such as, for example, local memory 220 and one or more bulk storage devices 225 .
- Local memory 220 refers to RAM or other non-persistent memory device(s) generally used during actual execution of the program code.
- Bulk storage device(s) 225 can be implemented as a hard disk drive (HDD), solid state drive (SSD), or other persistent data storage device.
- Guidance system 110 also can include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from bulk storage device 225 during execution.
- I/O devices such as a keyboard 230 , a display 235 , and a pointing device 240 optionally can be coupled to guidance system 110 .
- the I/O devices can be coupled to guidance system 110 either directly or through intervening I/O controllers.
- One or more network adapters 245 also can be coupled to guidance system 110 to enable guidance system 110 to become coupled to other systems, computer systems, remote printers, and/or remote storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapters 245 that can be used with guidance system 110 .
- Memory elements 210 can store operational software such as an operating system (not shown) in addition to guidance module 120 .
- the operational software and guidance module 120, being implemented in the form of program code, are executed by guidance system 110 and, as such, are considered an integrated part of guidance system 110.
- guidance module 120 optionally includes a plurality of components such as a speech processor 250 , a natural language processor (NLP) 255 , and a controller 260 .
- the operational software, guidance module 120 , and any components included therein, e.g., speech processor 250 , NLP 255 , and controller 260 , as well as any other data relied upon in performing the functions described herein are functional data structures that impart functionality when employed as part of the system of FIG. 2 .
- Speech processor 250 converts speech in the form of digital audio into text for analysis.
- NLP 255 is configured to perform any of a variety of known NLP functions. Examples of NLP functions that can be performed include, but are not limited to, automatic summarization, co-reference resolution, discourse analysis, named entity recognition, part-of-speech tagging, keyword spotting, relationship extraction, sentiment analysis, topic segmentation (e.g., subject domain segmentation), or the like.
- Controller 260 coordinates operation of speech processor 250 and NLP 255 . Controller 260 is also configured to interact with collaboration system 105 to determine the participants and presenter of a collaborative session. In addition, controller 260 is configured to interact with communication device 145 , for example, to deliver word usage, e.g., selected word(s), to the presenter of collaborative session 140 .
- speech processor 250 can be applied to audio channels of a collaborative session to generate text that is provided to NLP 255 to determine a current subject domain of the collaborative session.
- text from interest sources can be provided to NLP 255 for analysis to determine subject domains of interest for the participants of the collaborative session.
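- The sketch below shows one way a controller might wire those components together; `SpeechProcessor`, `SimpleNLP`, and the keyword-counting topic matcher are simplified placeholders rather than the actual components of guidance module 120.

```python
import re
from collections import Counter

class SpeechProcessor:
    """Placeholder: a real speech processor would transcribe digital audio."""
    def to_text(self, audio_chunk: bytes) -> str:
        return audio_chunk.decode("utf-8", errors="ignore")

class SimpleNLP:
    """Toy topic matcher: scores text against keyword sets per subject domain."""
    def __init__(self, domain_keywords: dict[str, set[str]]):
        self.domain_keywords = domain_keywords

    def subject_domain(self, text: str) -> str:
        tokens = re.findall(r"[a-z']+", text.lower())
        scores = Counter({domain: sum(t in keywords for t in tokens)
                          for domain, keywords in self.domain_keywords.items()})
        return scores.most_common(1)[0][0]

class Controller:
    """Coordinates speech-to-text and topic analysis."""
    def __init__(self, speech: SpeechProcessor, nlp: SimpleNLP):
        self.speech, self.nlp = speech, nlp

    def current_session_domain(self, audio_chunk: bytes) -> str:
        return self.nlp.subject_domain(self.speech.to_text(audio_chunk))

    def participant_domain(self, interest_source_text: str) -> str:
        return self.nlp.subject_domain(interest_source_text)

nlp = SimpleNLP({"nursing": {"patient", "clinic", "nurse"},
                 "swimming": {"pool", "lap", "stroke"}})
controller = Controller(SpeechProcessor(), nlp)
print(controller.current_session_domain(b"the patient intake clinic schedule"))
print(controller.participant_domain("I blog about lap swimming and pool workouts"))
```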
- while FIG. 2 is described with reference to guidance system 110, the architecture illustrated in FIG. 2 can be utilized to implement one or more other devices such as, for example, collaboration system 105 and one or more of communication devices 145 - 165 .
- Each such device can include operational software (e.g., an operating system) and suitable application software, whether a communication client such as a browser or another type of client software or server side software executing therein as appropriate.
- FIG. 3 is a flow chart illustrating an exemplary method 300 of suggesting words to a presenter within a collaborative session.
- Method 300 is performed by a system such as the guidance system described within this specification with reference to FIGS. 1 and 2 .
- the guidance system functions in a communication (and computing) environment as illustrated with reference to FIG. 1 .
- method 300 begins in a state in which a collaborative session has started.
- the collaborative session includes a plurality of participants and one participant is designated as the presenter.
- the guidance system determines the participants of the collaborative session that has started and is ongoing.
- the guidance system queries the collaborative system for a list of participants in the collaborative session.
- a collaborative session, whether Web-based or a conference call conducted over conventional telephone lines, etc., is associated with a uniform resource locator (URL). From the URL, users and/or other automated computer processes can determine or access information about the collaborative session, including the participants.
- the collaborative system identifies the particular participant that is speaking at any given time.
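- A sketch of block 305 under the assumption that the collaboration system can be queried for the roster of a session identified by its URL; `CollaborationClient` and the URL are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SessionRoster:
    presenter: str
    participants: list[str] = field(default_factory=list)

class CollaborationClient:
    """Hypothetical client; a real one would query the collaboration system."""
    def __init__(self, rosters: dict[str, SessionRoster]):
        self._rosters = rosters

    def roster(self, session_url: str) -> SessionRoster:
        return self._rosters[session_url]

client = CollaborationClient({
    "https://collab.example.com/session/140": SessionRoster(
        presenter="A", participants=["A", "B", "C", "D", "E"])})
roster = client.roster("https://collab.example.com/session/140")
non_presenters = [p for p in roster.participants if p != roster.presenter]
print(roster.presenter, non_presenters)  # A ['B', 'C', 'D', 'E']
```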
- the guidance system determines one or more subject domains that are of interest to the participants.
- the guidance system, having a list of participants in the collaborative session, queries one or more interest sources to determine particular subject domain(s) of interest for one or more or each participant.
- subject domains of interest, including expertise of a user, are explicitly listed within a profile, e.g., a company profile, a biography, a social media Website, and/or a blog of a participant.
- NLP is applied to the interest sources to determine subject domains of interest when not explicitly stated.
- the guidance system determines interests of participants from the sources previously noted.
- the interests, as expressed in text obtained from the interest sources, are compared with a formal set of subject domains that can be expressed as a taxonomy, a list, a hierarchy, or other formal structured data specifying subject domains maintained within the guidance system.
- the guidance system determines or derives one or more subject domains of interest using a correlation or matching process.
- the guidance system determines the subject domain(s) that match, or most closely match, the interests of participants, which can exclude those of the presenter.
- the guidance system has a list of the different subject domains of interest to participants of the collaborative session.
- interests of the presenter can be excluded from discovery and/or consideration.
- the guidance system, having a list of subject domains of interest to participants, knows the number of participants, and the particular participants, interested in each respective subject domain on the list for the collaborative session.
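- A toy version of that correlation and tally, with invented interest-source text and a naive substring match standing in for NLP (the nursing/swimming counts mirror the example used later in this description):

```python
from collections import defaultdict

FORMAL_DOMAINS = ["nursing", "swimming", "project management"]

def match_domains(interest_text: str) -> set[str]:
    """Naive correlation: a domain matches if its name appears in the text."""
    text = interest_text.lower()
    return {d for d in FORMAL_DOMAINS if d in text}

def domains_of_interest(interests_by_participant: dict[str, str],
                        presenter: str) -> dict[str, set[str]]:
    """Map each subject domain to the set of interested non-presenters."""
    interested = defaultdict(set)
    for participant, text in interests_by_participant.items():
        if participant == presenter:
            continue  # interests of the presenter can be excluded
        for domain in match_domains(text):
            interested[domain].add(participant)
    return dict(interested)

interests = {"A": "public speaking", "B": "nursing informatics blog",
             "C": "nursing and community health", "D": "masters swimming",
             "E": "swimming and nursing"}
print(domains_of_interest(interests, presenter="A"))
```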
- the guidance system optionally determines the subject domain of the collaborative session. It should be appreciated that this determination is time specific as the particular subject domain discussed within the collaborative session can change at any given time. As such, the subject domain of the collaborative session is a “current” subject domain.
- the guidance system performs an analysis of audio via speech processing and applies NLP to the generated text.
- the NLP can be used to process text and, further, to correlate text derived from speech within the collaborative session with one or more subject domains from the formal set of subject domains maintained in the guidance system. As such, the subject domain(s) determined to match, or most closely match, the text is the current subject domain of the collaborative session.
- the content of a digital artifact being shared, e.g., a file such as a presentation or word processing document, in the collaborative session can be evaluated using NLP.
- Such an artifact can be provided to the guidance system either directly from a communication device of the presenter or from the collaboration system upon request of the guidance system.
- Content, e.g., text from the artifact, can be compared with the formal set of subject domains to determine a matching, or most closely matching, subject domain as the current subject domain of the collaborative session.
- the particular location within the artifact that is displayed or shared at a particular point in time is tracked as the collaborative session continues.
- the particular page or slide being shown and/or the particular point or topic on the page (slide) can be determined by the guidance system.
- the presenter can provide input through his or her communication device (e.g., using a pointer or cursor) to navigate through the artifact, e.g., indicate page number, lines, and/or points currently discussed.
- This information can be provided to the collaborative system and obtained by the guidance system.
- the current subject domain of the collaborative session can be correlated with a particular slide, point on the slide, line of text, etc., and, as such, can change and be updated as the collaborative session continues.
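- A compressed sketch of block 315: text derived from the session audio and/or from the portion of the shared artifact currently displayed is matched against the formal domain list, so the current subject domain can change as the presenter navigates the artifact. The slide contents and matching rule below are invented.

```python
def current_domain(text: str, formal_domains: list[str]) -> str:
    """Pick the formal subject domain that most closely matches the text.
    A real system would use NLP topic analysis; counting domain-name
    occurrences is only a stand-in."""
    lowered = text.lower()
    return max(formal_domains, key=lambda domain: lowered.count(domain))

slides = {1: "Quarterly staffing plan for the nursing unit",
          2: "Facility upgrades: pool hours and swimming programs"}
formal_domains = ["nursing", "swimming"]

shown_slide = 2  # tracked from the presenter's navigation input
spoken = "so for swimming we will extend evening pool hours"
print(current_domain(slides[shown_slide] + " " + spoken, formal_domains))  # swimming
```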
- the guidance system selects a subject domain.
- the selected subject domain is the one from which one or more words are selected and made available to the presenter in the form of word choice guidance.
- the guidance system selects the particular subject domain in which the largest number of participants has an interest.
- the guidance system tracks which subject domains have been used to provide guidance during the collaborative session. In that case, the guidance system can select the particular subject domain in which the largest number of participants has an interest that has not yet been used as a source from which words are selected and provided to the presenter for the current collaborative session.
- the guidance system, having also determined the current subject domain of the collaborative session, compares the current subject domain of the collaborative session with the subject domains determined in block 310.
- the guidance system selects the subject domain from those determined in block 310 that matches, or most closely matches, the current subject domain of the collaborative session.
- the discussion taking place within the collaborative session may briefly move to one or more other and tangential topics that can be detected and used to determine a level of interest in such tangential topics. This allows the presenter to continue presenting materials and relate the subject matter of the presentation to such side or tangential topics determined to be of interest to participants.
- the guidance system determines the number of participants that have an interest in the current subject domain of the collaborative session. For example, a particular subset of the participants of the collaborative session may have an interest, e.g., an expertise, in the current subject domain of the collaborative session as determined from the interest sources. While participants, in general, may be considered to be interested in the collaborative session by virtue of attendance, having an interest, as used within this specification, refers to determining that a participant has an interest from an outside source, e.g., from the interest sources noted. Accordingly, the guidance system selects a subject domain from those determined in block 310 for which a largest number of participants of the subset of participants of the collaborative session has an interest.
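- One possible block 320 policy, sketched with illustrative data: prefer the not-yet-used subject domain with the largest number of interested participants.

```python
def select_domain(interested: dict[str, set[str]],
                  already_used: set[str]) -> str | None:
    """Pick the not-yet-used subject domain with the most interested participants."""
    candidates = {d: members for d, members in interested.items()
                  if d not in already_used}
    if not candidates:
        return None
    return max(candidates, key=lambda d: len(candidates[d]))

interested = {"nursing": {"B", "C", "E"}, "swimming": {"D", "E"}}
print(select_domain(interested, already_used=set()))        # nursing
print(select_domain(interested, already_used={"nursing"}))  # swimming
```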
- the guidance system selects one or more words from the selected subject domain.
- the guidance system can include and/or access a list or corpus of text that is subdivided or structured according to subject domain.
- the word(s) can be selected from the corpus of text for the selected subject domain.
- Subject domain-specific words can include common phrases, expressions, action words, etc. for the selected subject domain.
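- The corpus lookup of block 325 might be sketched as follows; the corpus contents are invented, since the disclosure only requires a corpus subdivided by subject domain.

```python
import random

# Hypothetical domain-specific corpus of words, phrases, and expressions.
CORPUS = {
    "nursing": ["triage", "care plan", "bedside manner"],
    "swimming": ["dive in", "make a splash", "stay in your lane"],
}

def select_words(domain: str, count: int = 2) -> list[str]:
    """Return up to `count` words or phrases from the selected subject domain."""
    pool = CORPUS.get(domain, [])
    return random.sample(pool, k=min(count, len(pool)))

print(select_words("swimming"))
```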
- the guidance system determines whether the presenter is accepting guidance.
- the presenter, working through his or her communication client, can set a parameter indicating whether the presenter would like to receive word choice guidance.
- the parameter is communicated to the guidance system. Accordingly, when the guidance system determines that the presenter does want word choice guidance as indicated by the parameter, method 300 proceeds to block 335 .
- when the guidance system determines that the presenter does not want word choice guidance, the selected word(s) are not provided to the presenter. In that case, method 300 continues to block 340.
- the guidance system provides the word(s) selected in block 325 to the presenter's communication device.
- the selected word(s) can be presented concurrently to the presenter with any material or artifacts for the collaborative session.
- the word(s) are not distributed, displayed, or otherwise made available to any non-presenting participant of the collaborative session.
- the words can be placed or located on the display of the presenter in association with the particular items being discussed.
- the guidance system determines whether the collaborative session has ended. For example, the guidance system can receive a notification from the collaboration server, can query the collaboration server for a status, receive a notification from the communication device of the presenter, etc., indicating whether the collaborative session has ended. If the collaborative session has ended, method 300 ends. If not, method 300 loops back to block 305 to continue processing.
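- Blocks 330 through 340 can be pictured as the skeletal loop below; the stub classes stand in for the interactions with the collaboration system and the presenter's communication device described above and are not part of the disclosure.

```python
class StubSession:
    """Minimal stand-in for the collaboration/guidance interaction."""
    def __init__(self, rounds: int):
        self._rounds_left = rounds

    def has_ended(self) -> bool:
        self._rounds_left -= 1
        return self._rounds_left < 0

    def presenter_accepts_guidance(self) -> bool:
        return True  # parameter set through the presenter's communication client

    def select_words_for_current_state(self) -> list[str]:
        return ["stay in your lane"]  # blocks 305-325, condensed

class StubPresenterDevice:
    def show_suggestions(self, words: list[str]) -> None:
        print("presenter sees:", words)

def guidance_loop(session, presenter_device) -> None:
    # Blocks 330-340: gate on the presenter's opt-in, deliver only to the
    # presenter's device, and loop until the collaborative session ends.
    while not session.has_ended():
        words = session.select_words_for_current_state()
        if session.presenter_accepts_guidance():
            presenter_device.show_suggestions(words)
        # Suggestions are never distributed to non-presenter participants.

guidance_loop(StubSession(rounds=2), StubPresenterDevice())
```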
- FIG. 3 illustrates a process that can be performed on a continuous basis (e.g., iterate), periodically, or responsive to particular events.
- events that can trigger or start the process described with reference to FIG. 3 include, but are not limited to, receiving a user input from the presenter requesting word choice guidance, participant(s) either joining or leaving the collaborative session, the particular artifact that is being shared changing state, e.g., a page turn or moving to a new bullet point on the same page, the sharing of a different artifact entirely, etc.
- the guidance system can continually monitor the subject domain of the collaborative session and select words as described responsive to determining that the subject domain has changed from a first subject domain to a second and different subject domain or from a first subject domain to a subject domain that is considered or classified as a sub-topic of the first subject domain.
- FIG. 4 is an exemplary view 400 as displayed upon a display of a communication device of a presenter during a collaborative session.
- View 400 includes a first window 405 and a second window 410 .
- Window 405 shows content of an artifact that is being shared with, and as such, is viewable by, other participants in the collaborative session.
- the text “Point 1,” “Point a,” “Point b,” and “Point 2” is part of the original artifact.
- the artifact is a slide show and one page, or slide, is in view by the presenter and the participants.
- the guidance system which has been engaged and is operational, is providing guidance for word choice illustrated as text blocks 415 and 420 .
- within text blocks 415 and 420, one or more words selected by the guidance system from a selected subject domain are shown to the presenter.
- the particular words that are selected are shown in association with, e.g., next to, a particular point or portion of text of the artifact that is being shared.
- text block 415 is next to “Point a” indicating that the presenter should use the word “drive” when speaking about point a.
- text box 420 is displayed in association with, e.g., next to, “Point 2.” This indicates that the presenter should attempt to speak the text “apply the brakes” when discussing point 2. For example, the presenter can state “this ⁇ text of point 2> is becoming a problem and we need to apply the brakes.”
- the word choice guidance in the form of text blocks 415 and 420 is visually distinguished from the original content of the artifact that is being shared on the display viewed by the presenter.
- the presenter is provided with an indicator 425 , e.g., a list, specifying one or more subject domains of interest to participants of the collaborative session and an indication of the number of participants determined to have an interest in each respective subject domain.
- in the example shown, four participants, e.g., corresponding to four shaded blocks, have an interest in a first subject domain.
- Three participants have an interest in nursing.
- Two participants have an interest in swimming.
- Indicator 425 can be configured to present each subject domain of interest determined for the participants, the top “N” subject domains of interest in which “N” is an integer number, or the subject domains of interest with more than a minimum number of interested participants.
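- Assembling the indicator entries could be as simple as sorting the per-domain interest counts and applying a top-“N” or minimum-count cutoff; the counts below reuse the nursing/swimming example from this description, while the third entry is invented.

```python
def indicator_entries(interest_counts: dict[str, int],
                      top_n: int | None = None,
                      min_count: int = 1) -> list[tuple[str, int]]:
    """Return (subject domain, interested-participant count) pairs for display,
    most popular first, filtered by a minimum count and/or a top-N limit."""
    entries = sorted(((domain, count) for domain, count in interest_counts.items()
                      if count >= min_count),
                     key=lambda entry: entry[1], reverse=True)
    return entries[:top_n] if top_n is not None else entries

counts = {"nursing": 3, "swimming": 2, "gardening": 1}
print(indicator_entries(counts, min_count=2))  # [('nursing', 3), ('swimming', 2)]
print(indicator_entries(counts, top_n=1))      # [('nursing', 3)]
```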
- Window 410 illustrates that the guidance system can present any of a variety of information as determined and described herein to the presenter thereby allowing the presenter to utilize language from subject domains other than the subject domain selected.
- the presenter is told of the particular subject domains of interest to participants and provided with immediate means for holding the attention of the participants.
- a “guide me” button or control 430 is shown within window 410 .
- the presenter can select or activate control 430 .
- the communication device of the presenter can submit a request to the guidance system for word choice guidance.
- the guidance system can perform one or more steps as described herein and update indicator 425 , provide one or more suggested words (e.g., text blocks 415 and 420 ), or perform both functions.
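- The “guide me” exchange can be thought of as a simple request/response with the guidance system; the handler and payload shapes below are hypothetical, with the suggestion text taken from the FIG. 4 example above.

```python
def handle_guide_me(request: dict) -> dict:
    """Hypothetical handler for a 'guide me' request from the presenter's
    communication device: refresh the interest indicator, provide suggested
    words, or both."""
    response = {}
    if request.get("update_indicator", True):
        response["indicator"] = [("nursing", 3), ("swimming", 2)]  # illustrative counts
    if request.get("suggest_words", True):
        response["suggestions"] = [{"anchor": "Point a", "text": "drive"},
                                   {"anchor": "Point 2", "text": "apply the brakes"}]
    return response

print(handle_guide_me({"update_indicator": True, "suggest_words": True}))
```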
- FIG. 5 is an exemplary view 500 as displayed upon a display of a communication device of a non-presenting participant.
- View 500 illustrates that each participant is able to see the same portion of the artifact as seen by the presenter, as illustrated in FIG. 4.
- One or more participants within the collaborative session have view 500 displayed upon the display of their communication device concurrently with the presenter having view 400 of FIG. 4 displayed upon the display of the presenter's communication device. As illustrated, however, the participants are not provided with suggested words, selected subject domains, etc.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- the term “plurality,” as used herein, is defined as two or more than two.
- the term “another,” as used herein, is defined as at least a second or more.
- the term “coupled,” as used herein, is defined as connected, whether directly without any intervening elements or indirectly with one or more intervening elements, unless otherwise indicated. Two elements also can be coupled mechanically, electrically, or communicatively linked through a communication channel, pathway, network, or system.
- the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context indicates otherwise.
- the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context.
- similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
Description
- Maintaining the attention of an audience while speaking can be difficult. Such is the case for both the skilled and unskilled speaker. Despite the best of intentions, one or more listeners are likely to lose focus at some point during the presentation. When the attention of a listener drifts away from the speaker, the information that the speaker is attempting to convey is lost on that individual. Further, opportunities for collaboration between the listener and other members of the audience and/or the speaker are diminished or entirely lost. The likelihood of the listener contributing to the meeting is also reduced. The inability of a speaker to maintain the attention of an audience, when considered over time, can result in a loss of productivity and effectiveness for an organization.
- A method for suggesting words includes, during a collaborative session involving a plurality of participants, determining a first subject domain of interest for at least one participant of the collaborative session that is not a presenter and selecting, using a processor, at least one word from the first subject domain. The method further includes providing the word to a communication device of the participant designated as the presenter and not to any other communication device of a participant.
- A system for suggesting words includes a processor programmed to initiate executable operations. The executable operations include, during a collaborative session involving a plurality of participants, determining a first subject domain of interest for at least one participant of the collaborative session that is not a presenter and selecting at least one word from the first subject domain. The executable operations further include providing the word to a communication device of the participant designated as the presenter and not to any other communication device of a participant.
- A computer program product for suggesting words includes a computer readable storage medium having program code embodied therewith. The program code is executable by a processor to perform a method including, during a collaborative session involving a plurality of participants, determining a first subject domain of interest for at least one participant of the collaborative session that is not a presenter using the processor, selecting, using the processor, at least one word from the first subject domain, and providing, using the processor, the word to a communication device of the participant designated as the presenter and not to any other communication device of a participant.
-
FIG. 1 is a block diagram illustrating an example of a communication system. -
FIG. 2 is a block diagram illustrating an exemplary implementation of a guidance system as shown inFIG. 1 . -
FIG. 3 is a flow chart illustrating an exemplary method of suggesting words to a presenter within a collaborative session. -
FIG. 4 is an exemplary view displayed upon a display of a communication device of a presenter during a collaborative session. -
FIG. 5 is an exemplary view displayed upon a display of a communication device of a participant, non-presenter during a collaborative session. - As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product.
- Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable medium(s) having computer-readable program code embodied, e.g., stored, thereon.
- Any combination of one or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk drive (HDD), a solid state drive (SSD), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible, e.g., non-transitory, medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java™, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- One or more embodiments disclosed within this specification relate to providing guidance as to word choice to a presenter within a collaborative session. In accordance with the inventive arrangements disclosed within this specification, one or more subject domains of interest to participants in the collaborative session can be determined. One or more words from the one or more of the subject domains of interest are selected and provided to a presenter within the collaborative session. The presenter receives the words and is able to incorporate the words within the presentation. By using, e.g., speaking, words from the selected subject domain to the participants, the likelihood that participants in the collaborative session remain focused and attentive on the presenter is increased.
- For purposes of simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numbers are repeated among the figures to indicate corresponding, analogous, or like features.
-
FIG. 1 is a block diagram illustrating an example of acommunication system 100.Communication system 100 includes acollaboration system 105, a word guidance system (guidance system) 110, one ormore interest sources 125, and a plurality of communication devices 145-165 communicatively linked through anetwork 170. Network 170 can be implemented as, or include, any of a variety of different networks such as a WAN, a LAN, a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, a Public Switched Telephone Network (PSTN), or the like. - In one aspect,
collaboration system 105 is implemented as a data processing system, e.g., a server, executing suitable operational and/or application software.Collaboration system 105 implements one or more collaborative sessions, such ascollaborative session 140, of which communication devices 145-165 are participants. A “collaborative session,” as used herein, refers to a communication session in which a two or more users participate and communicate with one another concurrently through appropriate communication devices. Examples of a collaborative session include, but are not limited to, a conference call, a screen cast, a Web-meeting, or the like. Typically, within the collaborative session, one of the participants is designated as a presenter. - In some cases,
collaboration system 105 is implemented as a data processing system. In other cases,collaboration system 105 is implemented as specialized hardware or a combination of specialized hardware operating in cooperation with a data processing system. For example, in the case wherecollaborative session 140 is a conference call conducted over the PSTN,collaboration system 105 can include or be implemented as, a telephony switch. The telephony switch can operate in place of, or in cooperation with, a data processing system. Depending upon the type ofcollaborative session 140,collaboration system 105 can be implemented in any of a variety of forms using any of a variety of specialized hardware. - Each of users A, B, C, D, and E is taking part in
collaborative session 140 through a respective one of communication devices 145-165. User A corresponds tocommunication device 145. User B corresponds tocommunication device 150. User C corresponds tocommunication device 155. User D corresponds tocommunication device 160. User E corresponds tocommunication device 165. - Communication devices 145-165 represent any of a variety of different communication devices such as mobile communication devices (e.g., smart phones, Internet-enabled phones, tablet devices, etc.), computers, or other information processing or communication appliances. Appreciably, the particular type of communication device used by a user will vary according to the type of
collaborative session 140. - From time-to-time within this specification, reference to a communication device, such as any of communication devices 145-165, may also refer to the particular user of that communication device. A “user” refers to a human being that operates or uses a particular communication device. Similarly, reference to a user may also refer to the communication device utilized by that user. For example, reference to user A can refer to
communication device 145. Similarly, reference tocommunication device 145 can refer to user A or an identity maintained by user A oncommunication device 145. In illustration, a shared artifact within a collaborative session can be made available from user A to user B is understood to mean that the artifact is made available fromcommunication device 145 tocommunication device 150. Further, in reference to a collaborative session, the term “participant” and the term “presenter” each refers to a user, or communication device of the user, that is taking part in, or has joined,collaborative session 140. - A presenter is a participant of
collaborative session 140 that is designated, e.g., bycollaboration system 105, as the presenter. A presenter typically has one or more rights and/or privileges that are not possessed by other participants ofcollaborative session 140 that are not designated as presenter. For purposes of discussion, user A is a presenter. Referring to the example ofFIG. 1 , user A has an ability to share content fromcommunication device 145, e.g., screen share or share a presentation to be viewed by users B-E withincollaborative session 140. Speech of the user A, as the presenter, is provided to users B-E. In some cases, other participants, e.g., users B, C, D, and/or E, may speak or obtain permission to speak to one or more or all other participants, but are not considered presenters as that term confers a particular status upon a participant withincollaboration system 105 and, more particularly, withincollaborative session 140. -
Guidance system 110 is implemented as a data processing system, e.g., a server, executing suitable operational software and aguidance module 120.Guidance system 110 is configured to interact withcollaboration system 105 to determine one or more subject domains of interest to one or more of the participants B-E ofcollaborative session 140. A “subject domain” refers to an area or department of knowledge or learning. Subject domains can be ordered with respect to one another. For example, subject domains can be classified into a structured relationship and/or hierarchy. - In one aspect,
guidance system 105 determines subject domains that are of interest to participants ofcollaborative session 140 by accessing one ormore interest sources 125. Examples ofinterest sources 125 include, but are not limited to,blogs 130 belonging to a participant, user profiles 135 (e.g., company user profiles specifying areas of expertise, interests, etc.) of participants, social network Websites to which participants belong or publish data, or the like. - Having determined subject domains of interest for one or more of users B-E,
guidance system 110 selects one or more words from one or more of the subject domains.Guidance system 110 provides the selected word or words tocommunication device 145. The word or words that are selected can be presented or displayed to the presenter, i.e., user A, throughcommunication device 145. Having the word or words generated byguidance system 110, the presenter can choose to utilize the word or words, e.g., speak the word or words, during the collaborative session to raise the level of interest and/or attentiveness of users B-E that have an interest in the particular subject domain from which the word or words were selected. - In another example,
guidance system 110 selects words in view of the current subject domain ofcollaborative session 140, which can be determined prior to the start ofcollaborative session 140 or in real time ascollaborative session 140 continues. As used herein, the phrase “real time” means a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables the processor to keep up with some external process. As an illustrative example, the word or words selected byguidance system 110 may be common to both a selected subject domain of interest for at least one participant and the subject domain of the collaborative session. -
FIG. 2 is a block diagram illustrating an exemplary implementation ofguidance system 110 ofFIG. 1 .Guidance system 110 includes at least oneprocessor 205, e.g., a central processing unit (CPU), coupled tomemory elements 210 through asystem bus 215 or other suitable circuitry. As such,guidance system 110 stores program code withinmemory elements 210.Processor 205 executes the program code accessed frommemory elements 210 viasystem bus 215. In one aspect, for example,guidance system 110 can be implemented as a computer or a programmable data processing apparatus that is suitable for storing and/or executing program code. It should be appreciated, however, thatguidance system 110 can be implemented in the form of any system including a processor and memory that is capable of performing the functions and/or operations described within this specification. -
Memory elements 210 can include one or more physical memory devices such as, for example, local memory 220 and one or more bulk storage devices 225. Local memory 220 refers to RAM or other non-persistent memory device(s) generally used during actual execution of the program code. Bulk storage device(s) 225 can be implemented as a hard disk drive (HDD), solid state drive (SSD), or other persistent data storage device. Guidance system 110 also can include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from bulk storage device 225 during execution. - Input/output (I/O) devices such as a
keyboard 230, a display 235, and a pointing device 240 optionally can be coupled to guidance system 110. The I/O devices can be coupled to guidance system 110 either directly or through intervening I/O controllers. One or more network adapters 245 also can be coupled to guidance system 110 to enable guidance system 110 to become coupled to other systems, computer systems, remote printers, and/or remote storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapters 245 that can be used with guidance system 110. -
Memory elements 210 can store operational software such as an operating system (not shown) in addition to guidance module 120. The operational software and guidance module 120, being implemented in the form of program code, are executed by guidance system 110 and, as such, are considered an integrated part of guidance system 110. - In one aspect,
guidance module 120 optionally includes a plurality of components such as a speech processor 250, a natural language processor (NLP) 255, and a controller 260. The operational software, guidance module 120, and any components included therein, e.g., speech processor 250, NLP 255, and controller 260, as well as any other data relied upon in performing the functions described herein are functional data structures that impart functionality when employed as part of the system of FIG. 2. -
Speech processor 250 converts speech in the form of digital audio into text for analysis. NLP 255 is configured to perform any of a variety of known NLP functions. Examples of NLP functions that can be performed include, but are not limited to, automatic summarization, co-reference resolution, discourse analysis, named entity recognition, part-of-speech tagging, keyword spotting, relationship extraction, sentiment analysis, topic segmentation (e.g., subject domain segmentation), or the like. -
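As a purely illustrative aside, keyword spotting against a set of subject domains, one of the NLP functions listed above, could be approximated as below; the keyword sets and function name are assumptions and stand in for a far richer NLP pipeline.

```python
import re
from collections import Counter

# Illustrative keyword spotting: count how often terms associated with each
# subject domain occur in a piece of text.
DOMAIN_KEYWORDS = {
    "cars": {"engine", "brake", "drive", "torque"},
    "nursing": {"patient", "clinic", "triage", "care"},
}

def spot_domains(text: str) -> Counter:
    """Return a per-domain count of keyword hits found in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = Counter()
    for domain, keywords in DOMAIN_KEYWORDS.items():
        hits[domain] = sum(1 for tok in tokens if tok in keywords)
    return hits

print(spot_domains("The patient care plan will drive the new triage process."))
# e.g., Counter({'nursing': 3, 'cars': 1})
```
-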
Controller 260 coordinates operation of speech processor 250 and NLP 255. Controller 260 is also configured to interact with collaboration system 105 to determine the participants and presenter of a collaborative session. In addition, controller 260 is configured to interact with communication device 145, for example, to deliver word usage, e.g., selected word(s), to the presenter of collaborative session 140. - Under control of
controller 260, for example, speech processor 250 can be applied to audio channels of a collaborative session to generate text that is provided to NLP 255 to determine a current subject domain of the collaborative session. Through controller 260, for example, text from interest sources can be provided to NLP 255 for analysis to determine subject domains of interest for the participants of the collaborative session. - It should be appreciated that while
FIG. 2 is described with reference to guidance system 110, the architecture illustrated in FIG. 2 can be utilized to implement one or more other devices such as, for example, collaboration system 105 and one or more of communication devices 145-165. Each such device can include operational software (e.g., an operating system) and suitable application software, whether a communication client such as a browser or another type of client software or server side software executing therein as appropriate. -
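Before turning to the method of FIG. 3, the interplay of speech processor 250, NLP 255, and controller 260 described above can be pictured with a short, hypothetical sketch; the class names merely echo the reference numerals for readability, and none of the stubbed logic is asserted to be the disclosed implementation.

```python
from dataclasses import dataclass, field

class SpeechProcessor:                      # cf. speech processor 250
    def to_text(self, audio_chunk: bytes) -> str:
        # Placeholder: a real system would invoke a speech-to-text engine here.
        return audio_chunk.decode("utf-8", errors="ignore")

class NLP:                                  # cf. NLP 255
    def __init__(self, domain_keywords: dict[str, set[str]]):
        self.domain_keywords = domain_keywords

    def best_domain(self, text: str) -> str:
        """Return the subject domain whose keywords best overlap the text."""
        words = set(text.lower().split())
        return max(self.domain_keywords,
                   key=lambda d: len(words & self.domain_keywords[d]))

@dataclass
class Controller:                           # cf. controller 260
    speech: SpeechProcessor
    nlp: NLP
    history: list[str] = field(default_factory=list)

    def on_audio(self, audio_chunk: bytes) -> str:
        """Convert session audio to text and report the current subject domain."""
        text = self.speech.to_text(audio_chunk)
        domain = self.nlp.best_domain(text)
        self.history.append(domain)
        return domain

controller = Controller(SpeechProcessor(),
                        NLP({"cars": {"brake", "engine"}, "golf": {"tee", "par"}}))
print(controller.on_audio(b"the engine and brake assembly"))  # cars
```
-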
FIG. 3 is a flow chart illustrating an exemplary method 300 of suggesting words to a presenter within a collaborative session. Method 300 is performed by a system such as the guidance system described within this specification with reference to FIGS. 1 and 2. The guidance system functions in a communication (and computing) environment as illustrated with reference to FIG. 1. For purposes of discussion, method 300 begins in a state in which a collaborative session has started. The collaborative session includes a plurality of participants and one participant is designated as the presenter. - In
block 305, the guidance system determines the participants of the collaborative session that has started and is ongoing. The guidance system, for example, being accessible and/or activated by the presenter of the collaborative session, queries the collaborative system for a list of participants in the collaborative session. Typically, a collaborative session, whether Web-based or a conference call conducted over conventional telephone lines, etc., is associated with a uniform resource locator (URL). From the URL, users and/or other automated computer processes can determine or access information about the collaborative session, including the participants. In some cases, the collaborative system identifies the particular participant that is speaking at any given time. - In
block 310, the guidance system determines one or more subject domains that are of interest to the participants. The guidance system, having a list of participants in the collaborative session, queries one or more interest sources to determine particular subject domain(s) of interest for one or more or each participant. In some cases, subject domains of interest, including expertise of a user, are explicitly listed within a profile, e.g., a company profile, a biography, a social media Website, and/or a blog of a participant. In other cases, NLP is applied to the interest sources to determine subject domains of interest when not explicitly stated. - For example, the guidance system determines interests of participants from the sources previously noted. The interests, as expressed in text obtained from the interest sources, are compared with a formal set of subject domains that can be expressed as a taxonomy, a list, a hierarchy, or other formal structured data specifying subject domains maintained within the guidance system. As such, from the text expressing interests for each participant, the guidance system determines or derives one or more subject domains of interest using a correlation or matching process. The guidance system, for example, determines the subject domain(s) that match, or most closely match, the interests of participants, which can exclude those of the presenter.
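One way, offered only as an editor's hedged sketch and not as the claimed correlation process, to score each formally defined subject domain against the text gathered from a participant's interest sources is a simple term-overlap comparison:

```python
# Assumed formal set of subject domains (the taxonomy terms are illustrative).
FORMAL_DOMAINS = {
    "cars": {"car", "engine", "racing", "automotive"},
    "nursing": {"nurse", "nursing", "patient", "hospital"},
    "swimming": {"swim", "swimming", "pool", "freestyle"},
}

def domains_of_interest(interest_text: str, min_overlap: int = 1) -> list[str]:
    """Return formal subject domains whose terms overlap the interest text."""
    terms = set(interest_text.lower().split())
    scored = [(len(terms & vocab), domain)
              for domain, vocab in FORMAL_DOMAINS.items()]
    return [d for score, d in sorted(scored, reverse=True) if score >= min_overlap]

# Interest text as might be pulled from a participant's blog or company profile.
print(domains_of_interest("weekend automotive racing and a bit of swimming"))
# ['cars', 'swimming']
```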
- After
block 310, the guidance system has a list of the different subject domains of interest to participants of the collaborative session. In one aspect, interests of the presenter can be excluded from discovery and/or consideration. In any case, the guidance system has a list of subject domains of interest to participants, knows the number of participants and particular ones of the participants that are interested in each respective subject domain on the list for the collaborative session. - In
block 315, the guidance system optionally determines the subject domain of the collaborative session. It should be appreciated that this determination is time specific as the particular subject domain discussed within the collaborative session can change at any given time. As such, the subject domain of the collaborative session is a “current” subject domain. - As discussed, in one aspect, the guidance system performs an analysis of audio via speech processing and applies NLP to the generated text. The NLP can be used to process text and, further, to correlate text derived from speech within the collaborative session with one or more subject domains from the formal set of subject domains maintained in the guidance system. As such, the subject domain(s) determined to match, or most closely match, the text is the current subject domain of the collaborative session.
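However the session text is obtained, whether from speech processing of the session audio, from a shared artifact, or both, the "current" determination can be pictured with the following hedged sketch; the domain sets and function name are assumptions, not the disclosed implementation.

```python
# Assumed formal set of subject domains used for matching.
FORMAL_DOMAINS = {
    "cars": {"engine", "brake", "horsepower", "dealership"},
    "finance": {"budget", "forecast", "revenue", "quarter"},
}

def current_subject_domain(recent_transcript: str, artifact_text: str = "") -> str:
    """Pick the formal subject domain that most closely matches the session text."""
    terms = set((recent_transcript + " " + artifact_text).lower().split())
    return max(FORMAL_DOMAINS, key=lambda d: len(terms & FORMAL_DOMAINS[d]))

slide_text = "Q3 revenue forecast and budget review"
spoken_text = "let's walk through the quarter numbers"
print(current_subject_domain(spoken_text, slide_text))  # finance
```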
- In another example, the content of a digital artifact being shared, e.g., a file such as a presentation or word processing document, in the collaborative session can be evaluated using NLP. Such an artifact, for example, can be provided to the guidance system either directly from a communication device of the presenter or from the collaboration system upon request of the guidance system. Content, e.g., text from the artifact, can be compared with the formal set of subject domains to determine a matching, or most closely matching, subject domain as the current subject domain of the collaborative session.
- In another aspect, while sharing an artifact within the collaborative session, the particular location within the artifact that is displayed or shared at a particular point in time is tracked as the collaborative session continues. In the case of a slide presentation, for example, the particular page of the slide being shown and/or the particular point or topic on the page (slide) can be determined by the guidance system. The presenter can provide input through his or her communication device (e.g., using a pointer or cursor) to navigate through the artifact, e.g., indicate page number, lines, and/or points currently discussed. This information can be provided to the collaborative system and obtained by the guidance system. In this regard, the current subject domain of the collaborative session can be correlated with a particular slide, point on the slide, line of text, etc., and, as such, can change and be updated as the collaborative session continues.
- In
block 320, the guidance system selects a subject domain. The selected subject domain is the one from which one or more words are selected and made available to the presenter in the form of word choice guidance. In one aspect, the guidance system selects the particular subject domain in which the largest number of participants has an interest. - In another aspect, the guidance system tracks which subject domains have been used to provide guidance during the collaborative session. In that case, the guidance system can select the particular subject domain in which the largest number of participants has an interest that has not yet been used as a source from which words are selected and provided to the presenter for the current collaborative session.
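One way to realize the selection of block 320, shown here only as an editor's illustrative sketch with assumed data shapes and tie-breaking, is to count interested participants per candidate subject domain, optionally skipping domains already used for guidance and preferring the session's current domain when it is of interest to anyone:

```python
from collections import Counter
from typing import Iterable, Optional

def select_subject_domain(interest_by_participant: dict[str, set[str]],
                          already_used: Iterable[str] = (),
                          current_domain: Optional[str] = None) -> str:
    """Pick the subject domain of interest shared by the most participants."""
    used = set(already_used)
    counts = Counter(d for domains in interest_by_participant.values()
                     for d in domains if d not in used)
    # The session's current domain wins outright when someone is interested in it.
    if current_domain and counts.get(current_domain):
        return current_domain
    if not counts:
        raise ValueError("no candidate subject domains")
    return counts.most_common(1)[0][0]

interests = {"B": {"cars", "nursing"}, "C": {"cars"}, "D": {"nursing"}, "E": {"cars"}}
print(select_subject_domain(interests))                         # cars
print(select_subject_domain(interests, already_used={"cars"}))  # nursing
```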
- In another aspect, the guidance system, having also determined the current subject domain of the collaborative session, compares the current subject domain of the collaborative session with the subject domains determined in
block 310. The guidance system selects the subject domain from those determined inblock 310 that matches, or most closely matches, the current subject domain of the collaborative session. In illustration, the discussion taking place within the collaborative session may briefly move to one or more other and tangential topics that can be detected and used to determine a level of interest in such tangential topics. This allows the presenter to continue presenting materials and relate the subject matter of the presentation to such side or tangential topics determined to be of interest to participants. - In still another aspect, the guidance system determines the number of participants that have an interest in the current subject domain of the collaborative session. For example, a particular subset of the participants of the collaborative session may have an interest, e.g., an expertise, in the current subject domain of the collaborative session as determined from the interest sources. While participants, in general, may be considered to be interested in the collaborative session by virtue of attendance, having an interest, as used within this specification, refers to determining that a participant has an interest from an outside source, e.g., from the interest sources noted. Accordingly, the guidance system selects a subject domain from those determined in
block 310 for which a largest number of participants of the subset of participants of the collaborative session has an interest. - In
block 325, the guidance system selects one or more words from the selected subject domain. For example, the guidance system can include and/or access a list or corpus of text that is subdivided or structured according to subject domain. The word(s) can be selected from the corpus of text for the selected subject domain. Subject domain-specific words can include common phrases, expressions, action words, etc. for the selected subject domain. - In
block 330, the guidance system determines whether the presenter is accepting guidance. In one aspect, the presenter, working through his or her communication client, can set a parameter indicating whether the presenter would like to receive word choice guidance. The parameter is communicated to the guidance system. Accordingly, when the guidance system determines that the presenter does want word choice guidance as indicated by the parameter, method 300 proceeds to block 335. When the guidance system determines that the presenter does not want word choice guidance, the selected word(s) are not provided to the presenter. In that case, method 300 continues to block 340. - In
block 335, the guidance system provides the word(s) selected in block 325 to the presenter's communication device. The selected word(s) can be presented concurrently to the presenter with any material or artifacts for the collaborative session. The word(s) are not distributed, displayed, or otherwise made available to non-presenter participants of the collaborative session. In one aspect, the words can be placed or located on the display of the presenter in association with the particular items being discussed. - In
block 340, the guidance system determines whether the collaborative session has ended. For example, the guidance system can receive a notification from the collaboration server, can query the collaboration server for a status, receive a notification from the communication device of the presenter, etc., indicating whether the collaborative session has ended. If the collaborative session has ended, method 300 ends. If not, method 300 loops back to block 305 to continue processing. -
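Read together, blocks 305 through 340 amount to a simple control loop. The following is only an editor's sketch of that flow; every stubbed function is a hypothetical stand-in for the corresponding operation described above.

```python
import time

# Hypothetical stubs standing in for the operations of blocks 305-340.
def get_participants(): return ["B", "C", "D", "E"]
def domains_of_interest(participants): return {"cars": 3, "nursing": 2}
def select_words(domain): return ["drive", "apply the brakes"]
def presenter_accepts_guidance(): return True
def deliver_to_presenter(words): print("suggest:", words)
def session_ended(): return True  # a real system would poll the collaboration server

def guidance_loop(poll_seconds: float = 5.0) -> None:
    while True:
        participants = get_participants()                # block 305
        interest = domains_of_interest(participants)     # block 310
        domain = max(interest, key=interest.get)         # block 320
        words = select_words(domain)                     # block 325
        if presenter_accepts_guidance():                 # block 330
            deliver_to_presenter(words)                  # block 335
        if session_ended():                              # block 340
            break
        time.sleep(poll_seconds)

guidance_loop()  # suggest: ['drive', 'apply the brakes']
```
-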
FIG. 3 illustrates a process that can be performed on a continuous basis (e.g., iterate), periodically, or responsive to particular events. Examples of events that can trigger or start the process described with reference to FIG. 3 include, but are not limited to, receiving a user input from the presenter requesting word choice guidance, participant(s) either joining or leaving the collaborative session, the particular artifact that is being shared changing state, e.g., a page turn or moving to a new bullet point on the same page, the sharing of a different artifact entirely, etc. In another aspect, the guidance system can continually monitor the subject domain of the collaborative session and select words as described responsive to determining that the subject domain has changed from a first subject domain to a second and different subject domain or from a first subject domain to a subject domain that is considered or classified as a sub-topic of the first subject domain. -
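A transition of that kind can be detected with very little state; the parent/child map below is an assumed, illustrative hierarchy rather than any taxonomy from the disclosure.

```python
# Assumed hierarchy: sub-topic (child) subject domain -> parent subject domain.
PARENT = {"electric cars": "cars", "motorsport": "cars"}

def classify_transition(previous: str, current: str) -> str:
    """Describe how the session's subject domain moved since the last check."""
    if current == previous:
        return "unchanged"
    if PARENT.get(current) == previous:
        return "sub-topic"   # narrowed within the same broad subject domain
    return "changed"         # moved to a different subject domain

# New word selection would be triggered whenever the result is not "unchanged".
for prev, cur in [("cars", "cars"), ("cars", "electric cars"), ("cars", "finance")]:
    print(prev, "->", cur, ":", classify_transition(prev, cur))
```
-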
FIG. 4 is an exemplary view 400 as displayed upon a display of a communication device of a presenter during a collaborative session. View 400 includes a first window 405 and a second window 410. Window 405 shows content of an artifact that is being shared with, and as such, is viewable by, other participants in the collaborative session. The text "Point 1," "Point a," "Point b," and "Point 2" is part of the original artifact. In this example, the artifact is a slide show and one page, or slide, is in view by the presenter and the participants. - As illustrated, the guidance system, which has been engaged and is operational, is providing guidance for word choice illustrated as text blocks 415 and 420. Within each of
text blocks 415 and 420, a suggested word or phrase selected by the guidance system is displayed. Text block 415 is next to "Point a" indicating that the presenter should use the word "drive" when speaking about point a. As an illustration, the presenter could state that "we need to drive <text of point a> through the roof!" Similarly, text block 420 is displayed in association with, e.g., next to, "Point 2." This indicates that the presenter should attempt to speak the text "apply the brakes" when discussing point 2. For example, the presenter can state "this <text of point 2> is becoming a problem and we need to apply the brakes." - As illustrated, the word choice guidance in the form of text blocks 415 and 420 is visually distinguished from the original content of the artifact that is being shared on the display viewed by the presenter. - Within
window 410, the presenter is provided with an indicator 425, e.g., a list, specifying one or more subject domains of interest to participants of the collaborative session and an indication of the number of participants determined to have an interest in each respective subject domain. In this example, four participants (e.g., corresponding to four shaded blocks) have an interest in cars. Three participants have an interest in nursing. Two participants have an interest in swimming. Indicator 425 can be configured to present each subject domain of interest determined for the participants, the top "N" subject domains of interest in which "N" is an integer number, or the subject domains of interest with more than a minimum number of interested participants. -
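The per-domain counts behind indicator 425 follow directly from the interests determined in block 310; a minimal sketch, assuming a simple participant-to-interests mapping, is shown below (the sample data reproduces the cars/nursing/swimming example above).

```python
from collections import Counter

# Interests per participant as determined from the interest sources (block 310).
interests = {
    "B": {"cars", "nursing"},
    "C": {"cars", "swimming"},
    "D": {"cars", "nursing", "swimming"},
    "E": {"cars", "nursing"},
}

def indicator_counts(interest_by_participant: dict[str, set[str]],
                     top_n: int = 3) -> list[tuple[str, int]]:
    """Number of interested participants per subject domain, largest first."""
    counts = Counter(d for domains in interest_by_participant.values()
                     for d in domains)
    return counts.most_common(top_n)

print(indicator_counts(interests))
# [('cars', 4), ('nursing', 3), ('swimming', 2)]
```
-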
Window 410 illustrates that the guidance system can present any of a variety of information as determined and described herein to the presenter thereby allowing the presenter to utilize language from subject domains other than the subject domain selected. The presenter is told of the particular subject domains of interest to participants and provided with immediate means for holding the attention of the participants. - Within
window 410, a "guide me" button or control 430 is shown. In one aspect, the presenter can select or activate control 430. Responsive to control 430, the communication device of the presenter can submit a request to the guidance system for word choice guidance. Accordingly, the guidance system can perform one or more steps as described herein and update indicator 425, provide one or more suggested words (e.g., text blocks 415 and 420), or perform both functions. -
FIG. 5 is anexemplary view 500 as displayed upon a display of a communication device of a participant, non-presenter. View 500 illustrates that each participant is able to see the same portion of the artifact as seen by the presenter illustrated inFIG. 4 . One or more participants within the collaborative session haveview 500 displayed upon the display of their communication device concurrently with thepresenter having view 400 ofFIG. 4 displayed upon the display of the presenter's communication device. As illustrated, however, the participants are not provided with suggested words, selected subject domains, etc. - The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment disclosed within this specification. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
- The term “plurality,” as used herein, is defined as two or more than two. The term “another,” as used herein, is defined as at least a second or more. The term “coupled,” as used herein, is defined as connected, whether directly without any intervening elements or indirectly with one or more intervening elements, unless otherwise indicated. Two elements also can be coupled mechanically, electrically, or communicatively linked through a communication channel, pathway, network, or system. The term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context indicates otherwise.
- The term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
- The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the embodiments disclosed within this specification have been presented for purposes of illustration and description, but are not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the embodiments of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the inventive arrangements for various embodiments with various modifications as are suited to the particular use contemplated.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/128,187 US20210111915A1 (en) | 2012-10-22 | 2020-12-20 | Guiding a presenter in a collaborative session on word choice |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/657,025 US10897369B2 (en) | 2012-10-22 | 2012-10-22 | Guiding a presenter in a collaborative session on word choice |
US17/128,187 US20210111915A1 (en) | 2012-10-22 | 2020-12-20 | Guiding a presenter in a collaborative session on word choice |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/657,025 Continuation US10897369B2 (en) | 2012-10-22 | 2012-10-22 | Guiding a presenter in a collaborative session on word choice |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210111915A1 true US20210111915A1 (en) | 2021-04-15 |
Family
ID=50486342
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/657,025 Active US10897369B2 (en) | 2012-10-22 | 2012-10-22 | Guiding a presenter in a collaborative session on word choice |
US17/128,187 Abandoned US20210111915A1 (en) | 2012-10-22 | 2020-12-20 | Guiding a presenter in a collaborative session on word choice |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/657,025 Active US10897369B2 (en) | 2012-10-22 | 2012-10-22 | Guiding a presenter in a collaborative session on word choice |
Country Status (1)
Country | Link |
---|---|
US (2) | US10897369B2 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8886782B2 (en) * | 2012-05-03 | 2014-11-11 | Nokia Corporation | Method and apparatus for binding devices into one or more groups |
US10951771B2 (en) * | 2013-05-31 | 2021-03-16 | Vonage Business Inc. | Method and apparatus for call handling control |
US20210327416A1 (en) * | 2017-07-28 | 2021-10-21 | Hewlett-Packard Development Company, L.P. | Voice data capture |
US10770069B2 (en) * | 2018-06-07 | 2020-09-08 | International Business Machines Corporation | Speech processing and context-based language prompting |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080240379A1 (en) * | 2006-08-03 | 2008-10-02 | Pudding Ltd. | Automatic retrieval and presentation of information relevant to the context of a user's conversation |
US20080275701A1 (en) * | 2007-04-25 | 2008-11-06 | Xiaotao Wu | System and method for retrieving data based on topics of conversation |
US20110274260A1 (en) * | 2010-05-05 | 2011-11-10 | Vaananen Mikko | Caller id surfing |
US8594292B1 (en) * | 2012-04-20 | 2013-11-26 | West Corporation | Conference call information sharing via interaction with social networking data |
Family Cites Families (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5437555A (en) * | 1991-05-02 | 1995-08-01 | Discourse Technologies, Inc. | Remote teaching system |
US6346952B1 (en) * | 1999-12-01 | 2002-02-12 | Genesys Telecommunications Laboratories, Inc. | Method and apparatus for summarizing previous threads in a communication-center chat session |
US7698360B2 (en) * | 2002-02-26 | 2010-04-13 | Novell, Inc. | System and method for distance learning |
US7680820B2 (en) | 2002-04-19 | 2010-03-16 | Fuji Xerox Co., Ltd. | Systems and methods for displaying text recommendations during collaborative note taking |
US7454460B2 (en) * | 2003-05-16 | 2008-11-18 | Seiko Epson Corporation | Method and system for delivering produced content to passive participants of a videoconference |
US20120179981A1 (en) * | 2011-01-07 | 2012-07-12 | Meetup, Inc. | Collaboration Meeting Management in a Web-Based Interactive Meeting Facility |
US10298410B2 (en) * | 2003-06-16 | 2019-05-21 | Meetup, Inc. | Progressive announcements in a web-based interactive meeting facility |
US7530021B2 (en) * | 2004-04-01 | 2009-05-05 | Microsoft Corporation | Instant meeting preparation architecture |
US7996462B2 (en) * | 2004-07-30 | 2011-08-09 | Sap Ag | Collaborative agent for a work environment |
US20060080432A1 (en) * | 2004-09-03 | 2006-04-13 | Spataro Jared M | Systems and methods for collaboration |
US7707249B2 (en) * | 2004-09-03 | 2010-04-27 | Open Text Corporation | Systems and methods for collaboration |
US7974569B2 (en) * | 2004-11-17 | 2011-07-05 | The New England Center For Children, Inc. | Method and apparatus for customizing lesson plans |
US8271574B1 (en) * | 2004-12-22 | 2012-09-18 | Hewlett-Packard Development Company, L.P. | Content sharing and collaboration |
US7475340B2 (en) * | 2005-03-24 | 2009-01-06 | International Business Machines Corporation | Differential dynamic content delivery with indications of interest from non-participants |
US7493556B2 (en) * | 2005-03-31 | 2009-02-17 | International Business Machines Corporation | Differential dynamic content delivery with a session document recreated in dependence upon an interest of an identified user participant |
US7765257B2 (en) * | 2005-06-29 | 2010-07-27 | Cisco Technology, Inc. | Methods and apparatuses for selectively providing privacy through a dynamic social network system |
US8539027B1 (en) * | 2005-06-29 | 2013-09-17 | Cisco Technology, Inc. | System and method for suggesting additional participants for a collaboration session |
US20070005698A1 (en) * | 2005-06-29 | 2007-01-04 | Manish Kumar | Method and apparatuses for locating an expert during a collaboration session |
US8046410B1 (en) * | 2005-06-29 | 2011-10-25 | Weidong Chen | System and method for attribute detection in user profile creation and update |
US7467947B2 (en) * | 2005-10-24 | 2008-12-23 | Sap Aktiengesellschaft | External course catalog updates |
US7925716B2 (en) * | 2005-12-05 | 2011-04-12 | Yahoo! Inc. | Facilitating retrieval of information within a messaging environment |
US8121269B1 (en) * | 2006-03-31 | 2012-02-21 | Rockstar Bidco Lp | System and method for automatically managing participation at a meeting |
WO2007130400A2 (en) * | 2006-05-01 | 2007-11-15 | Zingdom Communications, Inc. | Web-based system and method of establishing an on-line meeting or teleconference |
US7925673B2 (en) * | 2006-10-16 | 2011-04-12 | Jon Beard | Method and system for knowledge based community solutions |
US8769006B2 (en) * | 2006-11-28 | 2014-07-01 | International Business Machines Corporation | Role-based display of document renditions for web conferencing |
US8699939B2 (en) * | 2008-12-19 | 2014-04-15 | Xerox Corporation | System and method for recommending educational resources |
CN101689365B (en) * | 2007-09-13 | 2012-05-30 | 阿尔卡特朗讯 | Method of controlling a video conference |
US20090192845A1 (en) * | 2008-01-30 | 2009-07-30 | Microsoft Corporation | Integrated real time collaboration experiences with online workspace |
US7974940B2 (en) * | 2008-05-22 | 2011-07-05 | Yahoo! Inc. | Real time expert dialog service |
US7739333B2 (en) * | 2008-06-27 | 2010-06-15 | Microsoft Corporation | Management of organizational boundaries in unified communications systems |
US8250141B2 (en) * | 2008-07-07 | 2012-08-21 | Cisco Technology, Inc. | Real-time event notification for collaborative computing sessions |
US9195739B2 (en) * | 2009-02-20 | 2015-11-24 | Microsoft Technology Licensing, Llc | Identifying a discussion topic based on user interest information |
US8185828B2 (en) * | 2009-04-08 | 2012-05-22 | Cisco Technology, Inc. | Efficiently sharing windows during online collaborative computing sessions |
US8589806B1 (en) * | 2009-08-28 | 2013-11-19 | Adobe Systems Incorporated | Online meeting systems and methods for handheld machines |
US9461834B2 (en) * | 2010-04-22 | 2016-10-04 | Sharp Laboratories Of America, Inc. | Electronic document provision to an online meeting |
US8510399B1 (en) * | 2010-05-18 | 2013-08-13 | Google Inc. | Automated participants for hosted conversations |
US9832423B2 (en) * | 2010-06-30 | 2017-11-28 | International Business Machines Corporation | Displaying concurrently presented versions in web conferences |
US8843832B2 (en) * | 2010-07-23 | 2014-09-23 | Reh Hat, Inc. | Architecture, system and method for a real-time collaboration interface |
US9602670B2 (en) * | 2010-10-29 | 2017-03-21 | Avaya Inc. | Methods and systems for selectively sharing content |
US9031839B2 (en) * | 2010-12-01 | 2015-05-12 | Cisco Technology, Inc. | Conference transcription based on conference data |
US20120144319A1 (en) * | 2010-12-03 | 2012-06-07 | Razer (Asia-Pacific) Pte Ltd | Collaboration Management System |
US8627214B2 (en) * | 2010-12-15 | 2014-01-07 | International Business Machines Corporation | Inviting temporary participants to a virtual meeting or other communication session for a fixed duration |
US8698872B2 (en) * | 2011-03-02 | 2014-04-15 | At&T Intellectual Property I, Lp | System and method for notification of events of interest during a video conference |
US8886797B2 (en) * | 2011-07-14 | 2014-11-11 | Cisco Technology, Inc. | System and method for deriving user expertise based on data propagating in a network environment |
US9043350B2 (en) * | 2011-09-22 | 2015-05-26 | Microsoft Technology Licensing, Llc | Providing topic based search guidance |
US8731454B2 (en) * | 2011-11-21 | 2014-05-20 | Age Of Learning, Inc. | E-learning lesson delivery platform |
KR20130096978A (en) * | 2012-02-23 | 2013-09-02 | 삼성전자주식회사 | User terminal device, server, information providing system based on situation and method thereof |
US8914452B2 (en) * | 2012-05-31 | 2014-12-16 | International Business Machines Corporation | Automatically generating a personalized digest of meetings |
US9071659B2 (en) * | 2012-11-29 | 2015-06-30 | Citrix Systems, Inc. | Systems and methods for automatically identifying and sharing a file presented during a meeting |
- 2012-10-22 US US13/657,025 patent/US10897369B2/en active Active
- 2020-12-20 US US17/128,187 patent/US20210111915A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080240379A1 (en) * | 2006-08-03 | 2008-10-02 | Pudding Ltd. | Automatic retrieval and presentation of information relevant to the context of a user's conversation |
US20080275701A1 (en) * | 2007-04-25 | 2008-11-06 | Xiaotao Wu | System and method for retrieving data based on topics of conversation |
US20110274260A1 (en) * | 2010-05-05 | 2011-11-10 | Vaananen Mikko | Caller id surfing |
US8594292B1 (en) * | 2012-04-20 | 2013-11-26 | West Corporation | Conference call information sharing via interaction with social networking data |
Also Published As
Publication number | Publication date |
---|---|
US10897369B2 (en) | 2021-01-19 |
US20140115065A1 (en) | 2014-04-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210111915A1 (en) | Guiding a presenter in a collaborative session on word choice | |
KR101667220B1 (en) | Methods and systems for generation of flexible sentences in a social networking system | |
US20180260783A1 (en) | Auto-calendaring | |
US10003557B2 (en) | Preserving collaboration history with relevant contextual information | |
US9961162B2 (en) | Disambiguating online identities | |
CN110019934B (en) | Identifying relevance of video | |
US10409901B2 (en) | Providing collaboration communication tools within document editor | |
US20130227020A1 (en) | Methods and systems for recommending a context based on content interaction | |
US11222029B2 (en) | Prioritizing items based on user activity | |
US9313282B2 (en) | Intelligently detecting the leader of a co-browsing session | |
AU2016423749A1 (en) | Video keyframes display on online social networks | |
JP2017504992A (en) | Collaborative video editing in a cloud environment | |
US9582167B2 (en) | Real-time management of presentation delivery | |
US20180293306A1 (en) | Customized data feeds for online social networks | |
US11871150B2 (en) | Apparatuses, computer-implemented methods, and computer program products for generating a collaborative contextual summary interface in association with an audio-video conferencing interface service | |
US9684657B2 (en) | Dynamically updating content in a live presentation | |
US9473742B2 (en) | Moment capture in a collaborative teleconference | |
US10534826B2 (en) | Guided search via content analytics and ontology | |
Schwarzenegger | Exploring digital yesterdays–Reflections on new media and the future of communication history | |
US20140280531A1 (en) | Object ranking and recommendations within a social network | |
US20170149724A1 (en) | Automatic generation of social media messages regarding a presentation | |
US9843546B2 (en) | Access predictions for determining whether to share content | |
US20170180279A1 (en) | Providing interest based navigation of communications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARPUR, LIAM;LYLE, RUTHIE D.;O'SULLIVAN, PATRICK J.;AND OTHERS;SIGNING DATES FROM 20121016 TO 20121018;REEL/FRAME:054703/0809 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |