US20210365896A1 - Machine learning (ML) model for participants - Google Patents
- Publication number
- US20210365896A1 (U.S. application Ser. No. 17/308,772)
- Authority
- US
- United States
- Prior art keywords
- meeting
- participant
- processor
- model
- transcript
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/34—Browsing; Visualisation therefor
- G06F16/345—Summarisation for human users
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/954—Navigation, e.g. using categorised browsing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/1093—Calendar-based scheduling for persons or groups
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/1097—Time management, e.g. calendars, reminders, meetings or time accounting using calendar-based scheduling for task assignment
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/57—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1818—Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference, booking network resources, notifying involved parties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1831—Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1069—Session establishment or de-establishment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1096—Supplementary features, e.g. call forwarding or call holding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/401—Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
- H04L65/4015—Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/155—Conference systems involving storage of or access to video conference sessions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
Description
- the presently disclosed embodiments relate, in general, to meetings. More particularly, the presently disclosed embodiments relate to an ML model for participants of a meeting.
- Meetings conducted over a communication network involve participants joining the meeting through computing devices connected to the communication network.
- a plurality of participants of the meeting may generate meeting data during the course of the meeting.
- the meeting data may include, but is not limited to, audio content such as a participant's voice, video content such as the participant's video feed and/or other videos, meeting notes input by the plurality of participants, presentation content, and/or the like.
- the meeting data may be utilized to predict future meeting recommendations for the plurality of participants.
- a system and method to generate an ML model for participants is provided substantially as shown in, and/or described in connection with, at least one of the figures, as set forth more completely in the claims.
- FIG. 1 is a block diagram that illustrates a system environment for training an ML model, in accordance with an embodiment of the disclosure.
- FIG. 2 is a block diagram of a central server, in accordance with an embodiment of the disclosure.
- FIG. 3 is a diagram that illustrates an example meeting transcript, in accordance with an embodiment of the disclosure.
- FIG. 4 is a diagram that illustrates an exemplary scenario of the meeting, in accordance with an embodiment of the disclosure.
- FIG. 5 is a diagram of another exemplary scenario illustrating generation of the one or more meeting recommendations, in accordance with an embodiment of the disclosure.
- FIG. 6 is a flowchart illustrating a method for training the ML model, in accordance with an embodiment of the disclosure.
- FIG. 7 is a flowchart illustrating another method for training the ML model, in accordance with an embodiment of the disclosure.
- FIG. 8 is a flowchart illustrating a method for generating one or more meeting recommendations, in accordance with an embodiment of the disclosure.
- the illustrated embodiments describe a method that includes identifying, by a processor in real time, a trigger event initiated by at least one participant of the meeting.
- the trigger event is indicative of at least a reference to meeting metadata associated with the meeting.
- the meeting data associated with the at least one participant is recorded for a determined duration to generate a meeting snippet based on the identification of the trigger event.
- the method includes training a machine learning (ML) model associated with the at least one participant based on the meeting snippet associated with the at least one participant.
- the method includes generating one or more meeting recommendations by utilizing the trained ML model, wherein the one or more meeting recommendations include meeting metadata and/or meeting data for one or more meetings.
- the various embodiments describe a central server comprising a memory device that stores a set of instructions. Further, the central server includes a processor communicatively coupled to the memory device, wherein the processor is configured to identify, in real time, a trigger event initiated by at least one participant of the meeting, wherein the trigger event is indicative of at least a reference to meeting metadata associated with the meeting. The processor is further configured to record meeting data associated with the at least one participant of the meeting for a determined duration to generate a meeting snippet based on the identification of the trigger event. Furthermore, the processor is configured to train a machine learning (ML) model associated with the at least one participant based on the meeting snippet associated with the at least one participant. Additionally, the processor is configured to generate one or more meeting recommendations by utilizing the trained ML model, wherein the one or more meeting recommendations include meeting metadata for another meeting.
- the various embodiments describe a non-transitory computer-readable medium having stored thereon computer-readable instructions which, when executed by a computer, cause a processor in the computer to execute operations.
- the operations include identifying, in real time, a trigger event initiated by at least one participant of the meeting, wherein the trigger event is indicative of at least a reference to meeting metadata associated with the meeting.
- the operations further include recording meeting data associated with the at least one participant of the meeting for a determined duration to generate a meeting snippet, wherein the recording is based on the identified trigger event.
- the operations include training a machine learning (ML) model associated with the at least one participant based on the meeting snippet associated with the at least one participant.
- the operations further include generating one or more meeting recommendations by utilizing the trained ML model, wherein the one or more meeting recommendations include meeting metadata for another meeting.
- FIG. 1 is a block diagram that illustrates a system environment for training a ML model, in accordance with an embodiment of the disclosure.
- a system environment 100 which includes a central server 102 , one or more computing devices 104 a , 104 b , and 104 c collectively referenced as computing devices 104 , and a communication network 106 .
- the central server 102 and the computing devices 104 may be communicatively coupled with each other through the communication network 106 .
- the central server 102 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to create a meeting session through which the computing devices 104 may communicate with each other.
- the computing devices 104 may share content (referred to as meeting data) amongst each other via the meeting session.
- the central server 102 may receive the meeting data from each of the computing devices 104 . Thereafter, the central server 102 may be configured to monitor the meeting data received from each of the computing devices 104 .
- the monitoring of the meeting data may comprise identifying a trigger event during the meeting.
- the central server 102 may be configured to capture a plurality of meeting snippets for each of the plurality of participants based on the identification of the trigger event.
- the central server 102 may be configured to train a Machine Learning (ML) model for each of the plurality of participants based on the plurality of meeting snippets.
- the central server 102 may be configured to train the ML model for each of the plurality of participants, directly, based on the meeting data received from the each of the computing devices 104 .
- the central server 102 may be configured to utilize the ML model to generate one or more meeting recommendations for each of the plurality of participants.
- Examples of the central server 102 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, a computing device coupled to the computing devices 104 over a local network, an edge computing device, a cloud server, or any other computing device. Notwithstanding, the disclosure may not be so limited and other embodiments may be included without limiting the scope of the disclosure.
- the computing devices 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to connect to the meeting session, created by the central server 102 .
- the computing devices 104 may be associated with the plurality of participants of the meeting.
- the plurality of participants may provide one or more inputs during the meeting that may cause the computing devices 104 to generate the meeting data during the meeting.
- the meeting data may correspond to the content shared amongst the computing devices 104 during the meeting.
- the meeting data may comprise, but is not limited to, audio content that is generated by the plurality of participants as they speak during the meeting, video content that may include video feeds of the plurality of participants, meeting notes input by the plurality of participants during the meeting, presentation content, screen sharing content, file sharing content, and/or any other content shared during the meeting.
- the computing devices 104 may be configured to transmit the meeting data to the central server 102 . Additionally, or alternatively, the computing devices 104 may be configured to receive an input, indicative of the trigger event, from the plurality of participants. Upon receiving the input, the computing devices 104 may be configured to transmit the input to the central server 102 . Examples of the computing devices 104 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, or any other computing device.
- the communication network 106 may include a communication medium through which each of the computing devices 104 associated with the plurality of participants may communicate with each other and/or with the central server 102 .
- a communication may be performed, in accordance with various wired and wireless communication protocols.
- wired and wireless communication protocols include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, infrared (IR), IEEE 802.11, 802.16, 2G, 3G, 4G, 5G, 6G cellular communication protocols, and/or Bluetooth (BT) communication protocols.
- the communication network 106 may include, but is not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a telephone line (POTS), and/or a Metropolitan Area Network (MAN).
- the central server 102 may receive a request, from a computing device 104 a , to generate the meeting session for a meeting.
- the request may include meeting metadata associated with the meeting that is to be scheduled.
- the meeting metadata may include, but is not limited to, an agenda of the meeting, one or more topics to be discussed during the meeting, a time duration of the meeting, a schedule of the meeting, meeting notes carried forward from previous meetings, and/or the like.
- the central server 102 may create the meeting session.
- the meeting session may correspond to a communication session that allows the computing devices 104 to communicate with each other.
- as part of the meeting session, the central server 102 may share unique keys (public and private keys) with the computing devices 104 , which allow the computing devices 104 to communicate with each other.
- the unique keys corresponding to the meeting session may ensure that any other computing devices (other than the computing devices 104 ) are not allowed to join the meeting session.
- the central server 102 may send a notification to the computing devices 104 pertaining to the scheduled meeting.
- the notification may include the details of the meeting session.
- the central server 102 may transmit the unique keys and/or the meeting metadata to the computing devices 104 .
- the computing devices 104 may join the meeting through the meeting session.
- the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to join the meeting session.
- joining the meeting session has been interchangeably referred to as joining the meeting.
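- As a minimal sketch of the key-based admission described above, the following hypothetical Python snippet issues a unique key per invited device and rejects any other device. The patent mentions public and private keys; this sketch substitutes per-device shared secrets for brevity, so it is an illustration of the admission check rather than the patent's exact key exchange:

```python
import secrets

def create_meeting_session(invited_device_ids: list[str]) -> dict[str, str]:
    # Issue one unique key per invited computing device; only holders of a
    # key issued here can join the meeting session.
    return {device_id: secrets.token_urlsafe(32) for device_id in invited_device_ids}

def authorize_join(session_keys: dict[str, str], device_id: str, presented_key: str) -> bool:
    # Any computing device that was not issued a key is not allowed to join.
    expected = session_keys.get(device_id, "")
    return bool(expected) and secrets.compare_digest(expected, presented_key)

keys = create_meeting_session(["104a", "104b", "104c"])
print(authorize_join(keys, "104a", keys["104a"]))   # True: invited device
print(authorize_join(keys, "104d", "forged-key"))   # False: not an invited device
```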
- the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to share content amongst each other.
- the plurality of participants may provide input to the computing devices 104 to cause the computing devices 104 to share the content amongst each other.
- the plurality of participants may speak during the meeting.
- the computing devices 104 may capture voice of the plurality of participants through one or more microphones to generate audio content.
- the computing devices 104 may transmit the audio content over the communication network 106 (i.e., the meeting session). Additionally, or alternatively, the plurality of participants may share respective video feeds amongst each other by utilizing an image capturing device (e.g., a camera) associated with the computing devices 104 . Additionally, or alternatively, a participant of the plurality of participants may present content saved on the computing device (for example, the computing device 104 a ) through a screen sharing capability. For example, the participant may present content to other participants (of the plurality of participants) through a PowerPoint presentation application installed on the computing device 104 a . In some examples, the participant may share content through other applications installed on the computing device 104 a .
- the participant may share content through the word processor application installed on the computing device 104 a .
- the participant may take meeting notes during the meeting.
- the meeting data may include the audio content, the video content, the meeting notes, and/or the screen sharing content (e.g., through applications installed on the computing device 104 a ).
- the computing device 104 a may generate the meeting data during the meeting.
- other computing devices 104 b and 104 c may also generate the meeting data during the meeting.
- the computing devices 104 may transmit the meeting data to the central server 102 over the meeting session.
- the computing devices 104 may transmit the meeting data in near real time. To this end, the computing devices 104 may be configured to transmit the meeting data as and when the computing devices 104 generate the meeting data.
- the central server 102 may receive the meeting data from each of the computing devices 104 . Thereafter, the central server 102 may be configured to utilize the meeting data, received from each of the computing devices 104 , to train a ML model for each of the plurality of participants. For example, the central server 102 receives the meeting data from the computing device 104 a , associated with the participant- 1 . Further, the central server 102 receives the meeting data from the computing device 104 b , associated with the participant- 2 . Accordingly, the central server 102 may train a ML model for the participant- 1 based on meeting data received from the computing device 104 a . Additionally, the central server 102 may train another ML model for the participant- 2 based on the meeting data received from the computing device 104 b . Accordingly, the central server 102 may be configured to train the ML model for each of the plurality of participants.
- the scope of the disclosure is not limited to the central server 102 utilizing the complete meeting data to train the ML model for each of the plurality of participants.
- the central server 102 may be configured to train the ML model based on a portion of the meeting data received from the computing device 104 a .
- the central server 102 may compare the meeting data (received from each of the computing devices 104 ) with the meeting metadata to identify a trigger event in the meeting data.
- the central server 102 may compare the meeting data received from the computing device 104 a with the meeting metadata to identify the trigger event initiated by the participant associated with the computing device 104 a .
- the trigger event may be indicative of a timestamp at which the participant discussed or referred to a topic corresponding to the meeting metadata. For example, the participant discussed a topic mentioned in the agenda of the meeting.
- the central server 102 may generate a meeting snippet by recording the meeting data, received from a computing device (e.g., computing device 104 a ) for a determined duration.
- the central server 102 may be configured to associate the meeting snippet with the participant associated with the computing device (e.g., computing device 104 a ).
- the central server 102 may be configured to generate a plurality of meeting snippets associated with each of the plurality of participants. Thereafter, the central server 102 may be configured to train the ML model for each of the plurality of participants based on the plurality of meeting snippets associated with each of the plurality of participants.
- the central server 102 may be configured to utilize the ML model to generate one or more meeting recommendations for each of the plurality of participants.
- the one or more meeting recommendations may include, but are not limited to, suggesting meeting metadata for another meeting to be scheduled with the plurality of participants.
- FIG. 2 is a block diagram of the central server, in accordance with an embodiment of the disclosure.
- a central server 102 comprises a processor 202 , a non-transitory computer readable medium 203 , a memory device 204 , a transceiver 206 , a meeting data monitoring unit 208 , a trigger event identification unit 210 , a recording unit 212 , a training unit 214 , and a recommendation unit 216 .
- the processor 202 may be embodied as one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an application specific integrated circuit (ASIC) or field programmable gate array (FPGA), or some combination thereof.
- the processor 202 may include a plurality of processors and signal processing modules.
- the plurality of processors may be embodied on a single electronic device or may be distributed across a plurality of electronic devices collectively configured to function as the circuitry of the central server 102 .
- the plurality of processors may be in communication with each other and may be collectively configured to perform one or more functionalities of the circuitry of the central server 102 , as described herein.
- the processor 202 may be configured to execute instructions stored in the memory device 204 or otherwise accessible to the processor 202 . These instructions, when executed by the processor 202 , may cause the circuitry of the central server 102 to perform one or more of the functionalities, as described herein.
- the processor 202 may include an entity capable of performing operations according to embodiments of the present disclosure while configured accordingly.
- the processor 202 when the processor 202 is embodied as an ASIC, FPGA or the like, the processor 202 may include specifically configured hardware for conducting one or more operations described herein.
- the processor 202 when the processor 202 is embodied as an executor of instructions, such as may be stored in the memory device 204 , the instructions may specifically configure the processor 202 to perform one or more algorithms and operations described herein.
- the processor 202 used herein may refer to a programmable microprocessor, microcomputer or multiple processor chip or chips that may be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described above.
- multiple processors may be provided that may be dedicated to wireless communication functions and one processor may be dedicated to running other applications.
- Software applications may be stored in the internal memory before they are accessed and loaded into the processors.
- the processors may include internal memory sufficient to store the application software instructions.
- the internal memory may be a volatile or non-volatile memory, such as flash memory, or a mixture of both.
- the memory can also be located internal to another computing resource (e.g., enabling computer readable instructions to be downloaded over the Internet or another wired or wireless connection).
- the non-transitory computer readable medium 203 may include any tangible or non-transitory storage media or memory media such as electronic, magnetic, or optical media—e.g., disk or CD/DVD-ROM coupled to processor 202 .
- the memory device 204 may include suitable logic, circuitry, and/or interfaces that are adapted to store a set of instructions that is executable by the processor 202 to perform predetermined operations.
- Some of the commonly known memory implementations include, but are not limited to, a hard disk, random access memory, cache memory, read-only memory (ROM), erasable programmable read-only memory (EPROM) & electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof.
- the memory device 204 may be integrated with the processor 202 on a single chip, without departing from the scope of the disclosure.
- the transceiver 206 may correspond to a communication interface that may facilitate transmission and reception of messages and data to and from various devices (e.g., computing devices 104 ). Examples of the transceiver 206 may include, but are not limited to, an antenna, an Ethernet port, a USB port, a serial port, or any other port that can be adapted to receive and transmit data.
- the transceiver 206 transmits and receives data and/or messages in accordance with various communication protocols, such as Bluetooth®, infrared, I2C, TCP/IP, UDP, and 2G, 3G, 4G, or 5G communication protocols.
- the meeting data monitoring unit 208 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to receive the meeting data from each of the computing devices 104 .
- the meeting data monitoring unit 208 may be configured to generate a transcript from the meeting data using one or more known techniques. Some examples of the one or more known techniques may include Speech to Text (STT), Optical Character Recognition (OCR), and/or the like.
- the meeting data monitoring unit 208 may be configured to individually generate a transcript for the meeting data received from each of the computing devices 104 .
- the meeting data monitoring unit 208 may be configured to timestamp the transcript generated from the meeting data received from each of the computing devices 104 , in accordance with a time instant at which the central server 102 received the meeting data (from which the transcript was generated).
- the meeting data monitoring unit 208 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- the trigger event identification unit 210 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to compare the transcript of the meeting data with the meeting metadata. Based on the comparison between the meeting metadata and the transcript of the meeting data, the trigger event identification unit 210 may be configured to identify the trigger event. In an example embodiment, the trigger event identification unit 210 may be configured to individually identify the trigger event in the meeting data received from each of the computing devices 104 . The trigger event identification unit 210 may be configured to associate the trigger event with a timestamp. In an example embodiment, the timestamp may correspond to a time instant at which the at least one participant mentioned or referred to the meeting metadata.
- the trigger event identification unit 210 may be configured to receive an input from a computing device (e.g., the computing device 104 a ) of the computing devices 104 .
- the trigger event identification unit 210 may identify the received input as the trigger event for the computing device 104 a .
- the trigger event identification unit 210 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- the recording unit 212 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to generate a meeting snippet based on the identification of the trigger event.
- the recording unit 212 may be configured to record the meeting data (in which the trigger event is identified) for a determined duration in order to generate the meeting snippet.
- the recording unit 212 may be configured to record the meeting data, received from the computing device 104 a , to generate a meeting snippet.
- the recording unit 212 may be configured to generate a plurality of meeting snippets by recording the meeting data received from a computing device (e.g., 104 a ).
- the recording unit 212 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- the training unit 214 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to train the ML model for each of the plurality of participants based on the meeting data received from the respective computing devices 104 .
- the training unit 214 may be configured to train the ML model for the participant- 1 based on the meeting data received from the computing device 104 a (being used by the participant- 1 ).
- the training unit 214 may be configured to train another ML model for the participant- 2 based on the meeting data received from the computing device 104 b (being used by the participant- 2 ).
- the training unit 214 may be configured to train the ML model for each of the plurality of participants based on the plurality of meeting snippets.
- the training unit 214 may be configured to train the ML model based on other information obtained from other sources such as, but not limited to, one or more project tracking tools, and/or meeting metadata.
- the training unit 214 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- the recommendation unit 216 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to generate the one or more meeting recommendations for each of the plurality of participants.
- the recommendation unit 216 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- the processor 202 may receive the request to schedule the meeting from at least one computing device 104 a of the computing devices 104 .
- the request to schedule the meeting includes meeting metadata.
- the meeting metadata includes the agenda of the meeting, the one or more topics to be discussed during the meeting, the time duration of the meeting, the schedule of the meeting, the meeting notes carried forward from previous meetings, and/or the like. The following table illustrates example meeting metadata:
| Agenda | Topics to be discussed | Schedule | Meeting notes |
| --- | --- | --- | --- |
| To design the User Interface (UI) | 1. … 2. Fields to be displayed in UI 3. Current status of project | … 2020; 9 PM to 10 PM | 1. UI to include feature 1, feature 2 2. Feature 1 defined as a portion depicting participants 3. … |
- the processor 202 may be configured to store the meeting metadata in the memory device 204 . Additionally, based on receiving the request to schedule the meeting, the processor 202 may be configured to create the meeting session. As discussed, the meeting session corresponds to a communication session that allows the computing devices 104 to connect to the central server 102 . Further, the meeting session allows the computing devices 104 to communicate amongst each other. For example, over the meeting session, the computing devices 104 may share content (e.g., audio content and/or video content) amongst each other. In an exemplary embodiment, the processor 202 may be configured to transmit a message to each of the computing devices 104 comprising the details of the meeting session. For example, the message may include a link to connect to the meeting session.
- the plurality of participants may cause the respective computing devices 104 to join the meeting session. For example, the participant may click on the link (received in the message from the central server 102 ) to cause the computing devices 104 to join the meeting session.
- the central server 102 may transmit a User Interface (UI) to each of the computing devices 104 .
- the UI may allow the plurality of participants to access one or more features.
- the UI may allow the plurality of participants to share audio content and/or video content.
- the UI may provide control to the plurality of participants to enable/disable an image capturing device and/or an audio capturing device in the computing devices 104 .
- the UI may enable the plurality of participants to share other content.
- the UI may provide a feature to the plurality of participants that would allow the plurality of participants to cause the computing devices 104 to share content/applications being displayed on a display device associated with the computing devices 104 .
- the plurality of participants may cause the computing devices 104 to share a PowerPoint presentation being displayed on the computing devices 104 .
- the UI may present a notes feature to the plurality of participants on the respective computing devices 104 .
- the notes feature may enable the plurality of participants to input notes or keep track of important points discussed during the meeting.
- the notes feature of the UI may correspond to a space on the UI in which the plurality of participants may input text for his/her reference.
- the text input by the plurality of participants may correspond to the notes taken by the plurality of participants during the meeting.
- the computing devices 104 may be configured to transmit the text input by the plurality of participants to the central server 102 .
- the central server 102 may be configured to share the text input by the plurality of participants amongst each of the computing devices 104 . In an alternative embodiment, the central server 102 may not share the text input by the plurality of participants amongst each of the computing devices 104 .
- each of the computing devices 104 may generate meeting data during the meeting.
- the meeting data may include, but is not limited to, the audio content generated by the plurality of participants as they speak during the meeting, the video content that includes video feeds of the plurality of participants, the meeting notes input by the plurality of participants during the meeting, the presentation content, the screen sharing content, the file sharing content, and/or any other content shared during the meeting.
- the processor 202 may receive the meeting data from each of the computing devices 104 in real time.
- the meeting data received from each of the computing devices 104 are associated with the respective participants using the computing devices 104 .
- the meeting data received from the computing device 104 a is associated with the participant- 1 using the computing device 104 a .
- the foregoing description has been described in conjunction with the meeting data received from the computing device 104 a .
- those skilled in the art would appreciate that the foregoing description is also applicable on the meeting data received from the other computing devices 104 .
- the meeting data monitoring unit 208 may be configured to generate, in real time, a transcript of the meeting data received from the computing device 104 a .
- the meeting data monitoring unit 208 may be configured to convert the audio content (received from computing devices 104 ) to text using known Speech to Text (STT) techniques. The text (obtained from the audio content) may constitute the transcript.
- the meeting data monitoring unit 208 may be configured to generate the transcript from the video content.
- the meeting data monitoring unit 208 may perform optical character recognition (OCR) on the video content to generate the transcript.
- OCR optical character recognition
- the meeting data monitoring unit 208 may be configured to consider the meeting notes (input by the participant associated with the computing device 104 a ) as the transcript.
- the meeting data monitoring unit 208 may be configured to perform OCR on the content shared via the screen sharing feature to generate the transcript. Additionally, or alternatively, the meeting data monitoring unit 208 may be configured to timestamp the transcript in accordance with a time instant of the reception of the meeting data from the computing device 104 a .
- the processor 202 receives the meeting data at time instant T 1 .
- the meeting data monitoring unit 208 may generate the transcript from the meeting data received at the time instant T 1 , and may timestamp the transcript with time instant T 1 .
- An example transcript is further illustrated and described in FIG. 3 .
- the meeting data monitoring unit 208 may be configured to generate multiple transcripts of the meeting data received from the computing device 104 a based on the time instant at which the central server 102 receives the corresponding meeting data. For example, the meeting data monitoring unit 208 may generate another transcript at the time instant T 2 based on the meeting data received at the time instant T 2 . To this end, the meeting data monitoring unit 208 may be configured to generate the transcripts as and when the central server 102 receives the meeting data from the computing device 104 a.
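- As a minimal sketch of this per-timestamp transcript generation, the following hypothetical Python snippet timestamps each transcript with the instant the corresponding meeting data was received; the `stt` callable stands in for any Speech to Text backend and is an assumption, not the patent's implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List, Tuple

@dataclass
class MeetingTranscript:
    """Timestamped transcript entries for one participant's meeting data stream."""
    entries: List[Tuple[datetime, str]] = field(default_factory=list)

    def add_meeting_data(self, received_at: datetime, audio_chunk: bytes,
                         stt: Callable[[bytes], str]) -> None:
        # Generate the transcript as and when the meeting data arrives, and
        # timestamp it with the reception instant (the T1, T2, ... above).
        text = stt(audio_chunk)
        if text:
            self.entries.append((received_at, text))

transcript = MeetingTranscript()
transcript.add_meeting_data(datetime(2021, 5, 5, 21, 0), b"...",
                            stt=lambda chunk: "agenda 1: to create UI")
print(transcript.entries)
```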
- the meeting data monitoring unit 208 may be configured to include the meeting metadata (generated during scheduling of the meeting) in the transcript. Additionally, or alternatively, the meeting data monitoring unit 208 may be configured to retrieve task metadata associated with one or more tasks assigned to the participant- 1 from the one or more project tracking tools. Some examples of the project tracking tools may include, but are not limited to, Salesforce®, Jira®, and/or the like.
- the task metadata may include, but is not limited to, a task description, a task outcome, tools to be used to complete the task, a planned completion date associated with the task, and/or a current status of the task.
- the participant- 1 may be working on more than one project in parallel. Accordingly, the participant- 1 may be assigned multiple tasks.
- the task metadata associated with such multiple tasks is usually stored on the one or more project tracking tools.
- the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with the one or more tasks, assigned to the participant 1 , from the project tracking tools.
- the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with a set of tasks, of the one or more tasks assigned to the participant 1 , that are relevant to the meeting.
- the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with the set of tasks based on the meeting metadata.
- the meeting data monitoring unit 208 may be configured to query the Application Programming Interface (API) of the one or more project tracking tools using the meeting metadata to retrieve the task metadata associated with the set of tasks.
- the meeting metadata may include, for example, the agenda "UI design". Accordingly, the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with the set of tasks assigned to the participant- 1 pertaining to the UI design. Further, the meeting data monitoring unit 208 may be configured to add the task metadata to the transcript, as sketched below.
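- A minimal sketch of such an API query, assuming a generic REST-style project tracking tool; the endpoint path, query parameters, and token handling below are hypothetical placeholders, not the API of any specific tool:

```python
import requests

def fetch_task_metadata(base_url: str, api_token: str,
                        participant_id: str, agenda_terms: list[str]) -> list[dict]:
    # Ask the project tracking tool for tasks assigned to the participant
    # that match the meeting agenda (e.g., "UI design"); the returned task
    # metadata can then be added to the meeting transcript.
    response = requests.get(
        f"{base_url}/api/tasks",  # hypothetical endpoint
        params={"assignee": participant_id, "query": " ".join(agenda_terms)},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example call (placeholder URL and token):
# tasks = fetch_task_metadata("https://tracker.example.com", "TOKEN",
#                             "participant-1", ["UI", "design"])
```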
- FIG. 3 is a diagram that illustrates an example meeting transcript, in accordance with an embodiment of the disclosure.
- a meeting transcript 300 that includes a transcript "agenda 1: to create UI" (depicted by 302 ) received at the time instant T 1 (depicted by 304 ) from the computing device 104 a .
- the meeting transcript 300 includes another transcript “UI to include feature 1 and feature 2 ” (depicted by 306 ) received at the time instant T 2 (depicted by 308 ) from the computing device 104 a .
- the meeting transcript 300 includes the task metadata 310 associated with the set of tasks assigned to the participant- 1 associated with the computing device 104 a .
- the meeting transcript 300 includes the meeting metadata 312 .
- the training unit 214 may be configured to train an ML model for the participant- 1 associated with the computing device 104 a based on the transcript (generated from the meeting data received from the computing device 104 a ), the task metadata associated with the set of tasks assigned to the participant- 1 , and the meeting metadata.
- the ML model may be indicative of a profile of the participant 1 .
- the profile of a participant may be deterministic of one or more topics which are relevant and/or of interest to the participant. Additionally, or alternatively, the profile may be indicative of one or more skills of the participant 1 .
- the training unit 214 may be configured to remove unwanted words and/or phrases from the transcript to generate a clean transcript. Such unwanted words and/or phrases may be referred to as stop words.
- the stop words may include words that are insignificant and do not add meaning to the transcript. Some examples of the stop words may include, but are not limited to, “is”, “are”, “and” “at least”, and/or the like.
- the training unit 214 may be configured to identify n-grams in the clean transcript, where an n-gram corresponds to a combination of two or more words that are used in conjunction in the transcript. For example, the terms "user" and "interface" are often used together. Accordingly, the training unit 214 may be configured to identify the term "user interface" as an n-gram. In an exemplary embodiment, the training unit 214 may be configured to add the identified n-grams to the clean transcript to create a training corpus, as sketched below.
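- A minimal sketch of this corpus preparation, assuming simple whitespace tokenization and a tiny illustrative stop-word list (a real system would use a fuller list and a proper tokenizer):

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

STOP_WORDS = {"is", "are", "and", "the", "to", "a", "at", "least"}  # illustrative subset

def clean_transcript(transcript: str) -> list[str]:
    # Remove stop words that do not add meaning to the transcript.
    return [word for word in transcript.lower().split() if word not in STOP_WORDS]

def add_ngrams(tokens: list[str], min_count: int = 2) -> list[str]:
    # Treat adjacent word pairs that co-occur repeatedly (e.g., "user interface")
    # as single n-gram tokens and append them to the training corpus.
    pair_counts = Counter(pairwise(tokens))
    ngrams = [f"{a}_{b}" for (a, b), count in pair_counts.items() if count >= min_count]
    return tokens + ngrams

tokens = clean_transcript("the user interface is to include feature 1 and the user interface fields")
print(add_ngrams(tokens))  # 'user_interface' appears as an n-gram token
```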
- the training unit 214 may be configured to train the ML model using the training corpus.
- training the ML model using the training corpus may include converting the words in the training corpus into one or more vectors.
- the training unit 214 may be configured to train a neural network using the one or more vectors.
- the trained neural network corresponds to the ML model.
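- The patent does not name a specific embedding technique; as one plausible realization of converting the words into vectors and training a neural network, the sketch below trains a gensim Word2Vec model (a shallow neural network over the corpus) as the per-participant ML model. The corpus and hyperparameters are illustrative only:

```python
from gensim.models import Word2Vec

# Each inner list is a cleaned per-timestamp transcript (with n-grams appended),
# as produced by the corpus-preparation step above.
training_corpus = [
    ["user_interface", "design", "include", "feature", "1"],
    ["current", "status", "project", "user_interface", "fields"],
]

# Train word vectors for the participant; the trained model acts as the
# participant's profile of relevant and/or interesting topics.
participant_model = Word2Vec(sentences=training_corpus, vector_size=50,
                             window=5, min_count=1, epochs=50)

print(participant_model.wv.most_similar("user_interface", topn=3))
```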
- the ML model may be realized using other techniques such as, but not limited to, logistic regression, Bayesian regression, random forest regression, and/or the like.
- the ML model is associated with the participant 1 .
- the training unit 214 may be configured to train other ML models for other participants.
- the scope of the disclosure is not limited to training the ML model using the training corpus generated from the complete meeting data.
- the training unit 214 may be configured to generate the training corpus based on an identification of a trigger event in the meeting data.
- the trigger event identification unit 210 may be configured to compare the meeting metadata and the transcript.
- the trigger event identification unit 210 may compare the transcript at each timestamp (in the meeting transcript) with the meeting metadata using one or more known text comparison techniques. Some examples of the text comparison techniques may include, but are not limited to, cosine similarity, Euclidean distance, the Pearson coefficient, and/or the like.
- the trigger event identification unit 210 may be configured to convert the transcript at each timestamp into a transcript vector using one or more known transformation techniques such as, but not limited to, term frequency-inverse document frequency (TF-IDF), Word2Vec, and/or the like.
- the transcript vector may correspond to an array of integers, in which each integer corresponds to a term in the transcript.
- the value of the integer may be deterministic of the characteristic of the term within the transcript. For example, the integer may be deterministic of a count of times a term has appeared in the transcript.
- the trigger event identification unit 210 may be configured to convert the meeting metadata to a metadata vector.
- the trigger event identification unit 210 may utilize the one or more text comparison techniques to compare the metadata vector and the transcript vector and determine a similarity score between the metadata vector and the transcript vector. For example, the trigger event identification unit 210 may determine a Cosine similarity score between the metadata vector and the transcript vector.
- the trigger event identification unit 210 may be configured to determine whether the similarity score is greater than or equal to a similarity score threshold. If the trigger event identification unit 210 determines that the similarity score is less than the similarity score threshold, the trigger event identification unit 210 may be configured to determine that the transcript is dissimilar from the meeting metadata. However, if the trigger event identification unit 210 determines that the similarity score is greater than or equal to the similarity score threshold, the trigger event identification unit 210 may be configured to determine that the transcript is similar to the meeting metadata. Accordingly, the trigger event identification unit 210 may determine that the participant- 1 mentioned or presented content related to the meeting metadata. To this end, the trigger event identification unit 210 may identify the trigger event, as sketched below.
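- A minimal sketch of this thresholded comparison using scikit-learn TF-IDF vectors and a cosine similarity score; the threshold value is illustrative and would be tuned during configuration of the central server:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SIMILARITY_SCORE_THRESHOLD = 0.35  # illustrative; tuned in practice

def is_trigger_event(transcript_at_timestamp: str, meeting_metadata: str) -> bool:
    # Convert the transcript and the meeting metadata into TF-IDF vectors,
    # then compare them with a cosine similarity score.
    vectors = TfidfVectorizer().fit_transform([transcript_at_timestamp, meeting_metadata])
    score = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return score >= SIMILARITY_SCORE_THRESHOLD

print(is_trigger_event("let us review the user interface design",
                       "agenda: to design the user interface; fields to be displayed"))
```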
- the scope of the disclosure is not limited to the trigger event identification unit 210 identifying the trigger event based on the comparison between the meeting data and the meeting metadata.
- the trigger event identification unit 210 may be configured to receive an input from a computing device (e.g., 104 a ) of the computing devices 104 .
- the input may indicate that a participant may want to record a portion of the meeting for later reference. For example, during the meeting, the participant may find the discussion and/or the content being presented to be interesting. Accordingly, in some examples, the participant may provide an input on the UI to record the portion of the meeting that includes the discussion that the participant found interesting.
- the computing device 104 a may transmit the input (received from the participant through UI) to the central server 102 .
- the trigger event identification unit 210 may identify the input as the trigger event.
- the processor 202 may be configured to categorize the transcript at each timestamp in one or more categories.
- the one or more categories may include an action category, a schedule category, a work status category, and/or the like.
- the action category may correspond to a category that may comprise transcripts which are indicative of an action item for the plurality of participants.
- the schedule category may correspond to a category that may comprise transcripts indicative of schedule of a subsequent meeting.
- the work status category may correspond to a category that may include transcripts indicative of the status of a task or a work item.
- the processor 202 may be configured to utilize a classifier to categorize the transcript at each timestamp in the one or more categories.
- the classifier may correspond to a machine learning (ML) model that is capable of categorizing the transcript at each timestamp based on the semantics of the transcripts.
- the ML model may be capable of transforming the transcript into the transcript vector.
- the ML model may be configured to utilize known classification techniques to classify the transcript at each timestamp into the one or more categories.
- Some examples of the classification techniques may include, but are not limited to, the naïve Bayes classification technique, logistic regression, a hierarchical classifier, a random forest classifier, and/or the like.
- the processor 202 may be configured to train the classifier based on training data.
- the training data may include one or more features and one or more labels.
- the one or more features may include training transcripts, while the one or more labels may include the one or more categories.
- each of the training transcripts is associated with a category of the one or more categories.
- Training the classifier may include the processor 202 defining a mathematical relationship between the transcript vectors and the one or more categories. Thereafter, the processor 202 utilizes the classifier to classify the transcript to the one or more categories.
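- The categorization and training steps described above can be sketched with an off-the-shelf text classifier. The pipeline below, the tiny training set, and the category labels are illustrative assumptions; the disclosure only requires that a classifier map transcript vectors to the one or more categories, for example via the naïve Bayes technique named above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: the features are training transcripts and
# the labels are the one or more categories.
training_transcripts = [
    "participant 2 will prepare the design document",  # action item
    "let's meet again next Tuesday at 9 AM",           # schedule of a subsequent meeting
    "feature 1 of the UI is work in progress",         # work status
]
labels = ["action", "schedule", "work_status"]

# Vectorize each transcript (the transcript vector) and fit a naive
# Bayes classifier over the resulting features.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(training_transcripts, labels)

# Categorize the transcript observed at a given timestamp.
print(classifier.predict(["feature 2 of the UI is complete"]))  # e.g., ['work_status']
```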
- the trigger event identification unit 210 may be configured to identify the trigger event based on the classification of the transcript in the one or more categories. Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the categorization of the transcript in the one or more categories and the reception of the input from the computing device (e.g., 104 a ). Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the categorization of the transcript in the one or more categories and the similarity score. Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the similarity score and the reception of the input from the computing device (e.g., 104 a ).
- the trigger event identification unit 210 may be configured to identify the trigger event based on the categorization of the transcript in the one or more categories, reception of the input from the computing device (e.g., 104 a ), and the similarity score.
- the recording unit 212 may be configured to record the meeting data received from the computing device 104 a , for the determined duration.
- a length of the determined duration may be defined during configuration of the central server 102 .
- the determined duration may be defined based on the timestamp associated with the transcript corresponding to the trigger event (i.e., the transcript that is similar to the meeting metadata).
- the determined duration may be defined based on the timestamp of the reception of the input from the computing device 104 a .
- the determined duration is defined to include a first determined duration chronologically prior to the timestamp and a second determined duration chronologically after the timestamp.
- a length of the first determined duration is the same as a length of the second determined duration. In another example, the length of the first determined duration is different from the length of the second determined duration. For instance, the length of the first determined duration is greater than the length of the second determined duration. In another instance, the length of the second determined duration is greater than the length of the first determined duration.
- the recording unit 212 may be configured to contiguously record the meeting data for the first determined duration prior to the timestamp and for the second determined duration after the timestamp. Accordingly, the recording of the meeting data includes the recording of the audio content, the video content, the screen sharing content, the meeting notes, the presentation content, and/or the like, received during the determined duration. In some examples, the recorded meeting data may correspond to the meeting snippet.
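- A minimal sketch of the recording window follows, assuming the two durations are simple configuration values; the variable names and the example lengths are illustrative only.

```python
from datetime import datetime, timedelta

# Assumed configuration: window lengths before and after the trigger
# timestamp (as noted above, the two lengths may differ).
FIRST_DETERMINED_DURATION = timedelta(minutes=5)   # prior to the timestamp
SECOND_DETERMINED_DURATION = timedelta(minutes=3)  # after the timestamp

def snippet_window(trigger_timestamp: datetime):
    """Return the (start, end) interval of the meeting snippet to record."""
    start = trigger_timestamp - FIRST_DETERMINED_DURATION
    end = trigger_timestamp + SECOND_DETERMINED_DURATION
    return start, end

start, end = snippet_window(datetime(2020, 11, 15, 21, 30))
```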
- the recording unit 212 may be configured to record the meeting data for the determined duration after the timestamp. In another example, the recording unit 212 may be configured to record the meeting data for the determined duration prior to the timestamp. In an exemplary embodiment, using the methodology described herein, the recording unit 212 may be configured to record a plurality of meeting snippets from the meeting data received from the computing device 104 a . Thereafter, the meeting data monitoring unit 208 may be configured to generate a plurality of transcripts for each of the plurality of meeting snippets. Further, the meeting data monitoring unit 208 may be configured to aggregate the plurality of transcripts to generate a summary transcript.
- the meeting data monitoring unit 208 may be configured to aggregate the plurality of transcripts based on the chronological order of the timestamp associated with each of the respective meeting snippets to generate a summary transcript.
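- The chronological aggregation can be as simple as sorting the snippet transcripts by their timestamps, as in this sketch; the (timestamp, text) pair structure is an assumed shape, not one defined by the disclosure.

```python
from datetime import datetime

# Hypothetical (timestamp, transcript) pairs, one per meeting snippet.
snippet_transcripts = [
    (datetime(2020, 11, 15, 21, 40), "feature 2 of UI is complete"),
    (datetime(2020, 11, 15, 21, 10), "design feature for UI"),
]

# Aggregate the plurality of transcripts in chronological order of their
# timestamps to generate the summary transcript.
summary_transcript = " ".join(
    text for _, text in sorted(snippet_transcripts, key=lambda pair: pair[0])
)
```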
- the summary transcript may capture moments in the meeting in which the participant- 1 caused the identification of the trigger event.
- the recording unit 212 may be configured to record the meeting data received from the other computing devices 104 for the determined duration based on the identification of the trigger event to generate a plurality of additional meeting snippets.
- the meeting data monitoring unit 208 may be configured to generate additional meeting transcripts based on the plurality of additional meeting snippets.
- a computing device 104 c of the computing devices 104 may not generate the meeting data.
- the participant associated with the computing device 104 c may only be listening to the meeting and may be providing inputs to record meeting snippets.
- the central server 102 may be configured to record the meeting data received from other computing devices 104 for a determined duration to generate the meeting snippet, based on the reception of the input from the computing device 104 c .
- the central server 102 may be configured to convert the meeting snippet to a transcript, where the transcript is associated with the computing device 104 c .
- the central server 102 may be configured to train the ML model for the participant associated with the computing device 104 c based on the transcript obtained from the meeting snippet.
- the training unit 214 may be configured to generate a training corpus from the summary transcript and/or the additional transcript using the methodology described above. Further, the training unit may be configured to train the ML model using the training corpus generated from the summary transcript and/or the additional transcript. Similarly, the training unit 214 may be configured to train other ML models for the other participants. Further, the training unit 214 may be configured to store the ML models, trained for each of the plurality of participants, in the memory device 204 . In some examples, where the ML model for a participant in the meeting is already stored on the memory device 204 , the training unit 214 may be configured to update the existing ML model. In such an embodiment, the training unit 214 may be configured to update the existing ML model based on the training corpus generated from the transcript of the meeting data associated with the participant.
- the processor 202 may be configured to receive another input from the computing device 104 a (associated with the participant 1 ) to schedule another meeting.
- the input may further include details pertaining to the other participants that the participant-1 intends to include in the meeting.
- the processor 202 may be configured to retrieve the ML model associated with the participant- 1 and the other participants from the memory device 204 . Thereafter, the processor 202 may be configured to generate one or more meeting recommendations for the other meetings based on the ML model associated with the participant- 1 and the other participants.
- the processor 202 may be configured to determine one or more topics that are common to the participant- 1 and the other participants based on the respective ML models.
- the processor 202 may be configured to utilize the one or more topics as the one or more meeting recommendations.
- the ML model associated with the plurality of participants may enable the central server 102 to capture a plurality of meeting snippets that may be of interest to the plurality of participants.
- the central server 102 may be configured to identify trigger events during the other meeting.
- the central server 102 may be configured to identify (during the other meeting) time instants at which the plurality of participants referred to the one or more topics, as the trigger events.
- the central server 102 may be configured to record the meeting for the determined duration to generate a plurality of meeting snippets.
- the processor 202 may be configured to capture the plurality of meeting snippets of one or more non-real time meeting data shared amongst the plurality of participants.
- the one or more non-real time meeting data may include meeting data that is shared amongst the plurality of participants outside the meeting.
- the one or more non-real time meeting data may include text messages shared amongst the plurality of participants and one or more audio messages shared amongst the plurality of participants.
- the processor 202 may be configured to record the plurality of meeting snippets of the one or more non-real time meeting data using a similar methodology as described above.
- FIG. 4 is a diagram that illustrates an exemplary scenario of the meeting, in accordance with an embodiment of the disclosure.
- the exemplary scenario 400 illustrates that each of the computing devices 104 generates the meeting data. Additionally, or alternatively, each of the computing devices 104 transmit the meeting data to the central server 102 .
- the meeting data 402 transmitted by the computing device 104 a comprises text corresponding to the audio content spoken by the participant- 1 associated with the computing device 104 a .
- the text indicates “referring to topic 1 , participant 2 will provide the details”.
- the timestamp associated with the meeting data, transmitted by the computing device 104 a is T 1 .
- At time instant T 2 , the computing device 104 b generates the meeting data 404 that includes text obtained from the presentation content (by performing OCR). The text indicates “with reference to topic- 1 , the UI includes feature- 1 , feature- 2 , and feature- 3 ”. Further, at time instant T 2 , the exemplary scenario 400 illustrates that the computing device 104 c transmits an input 405 to the central server 102 .
- the meeting data monitoring unit 208 appends the task metadata 406 associated with the set of tasks assigned to the participant- 1 to the meeting data received from the computing device 104 a .
- the task metadata for the participant- 1 indicates “Design feature of UI” (depicted by 408 ).
- the meeting data monitoring unit 208 appends the meeting metadata 410 to the meeting data 402 received from the computing device 104 a and the meeting data 404 received from the computing device 104 b .
- the recording unit 212 may be configured to record the meeting data 402 received from the computing device 104 a and the meeting data 404 received from the computing device 104 b for the determined duration to generate a meeting snippet- 1 (depicted by 412 ) and a meeting snippet- 2 (depicted by 414 ), respectively.
- the meeting data monitoring unit 208 may be configured to consider the meeting snippet- 1 (depicted by 412 ) and the meeting snippet- 2 (depicted by 414 ) as the meeting data 416 for the computing device 104 c.
- the meeting data monitoring unit 208 may be configured to generate the transcript 418 from the meeting data 402 (received from the computing device 104 a ), the transcript 420 from the meeting data 404 (received from the computing device 104 b ), and the transcript 422 from the meeting data 416 associated with the computing device 104 c .
- the transcript 418 includes “Design feature for UI, UI development, feature 1 of UI is WIP”.
- the transcript 420 includes “Color scheme of UI, UI development, feature 2 of UI is complete”.
- the transcript 422 includes “Design feature for UI, UI development, feature 1 of UI is WIP, Color scheme of UI, UI development, feature 2 of UI is complete”.
- the training unit 214 may be configured to generate the training corpuses 424 , 426 , and 428 based on the transcript 418 , the transcript 420 , and the transcript 422 , respectively.
- the training corpuses 424 , 426 , and 428 are associated with the participant- 1 , participant- 2 , and participant- 3 , respectively.
- the training unit 214 may be configured to train the ML model- 1 430 , ML model- 2 432 , and ML model- 3 434 .
- the ML model- 1 430 , ML model- 2 432 , and ML model- 3 434 are associated with the participant- 1 , participant- 2 , and participant- 3 , respectively.
- the ML model is indicative of one or more topics and/or skills associated with a participant.
- the ML model- 1 430 includes “UI”, “feature- 1 ”, and “C++” as the one or more topics and/or skills associated with the participant- 1 .
- the ML model- 2 432 includes “UI”, “feature- 2 ”, and “Java” as the one or more topics and/or skills associated with the participant- 2 .
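- One plausible reading of such a topic-and-skill profile is the set of highest-weighted terms in the participant's training corpus. The sketch below uses TF-IDF weights for this purpose; it is an assumption about the model's internals, which the disclosure leaves open.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def participant_profile(training_corpus, top_k=3):
    """Return the top-k weighted terms as the participant's topics and/or skills."""
    vectorizer = TfidfVectorizer()
    # Sum the TF-IDF weight of each term across the participant's corpus.
    weights = vectorizer.fit_transform(training_corpus).sum(axis=0).A1
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms, weights), key=lambda tw: tw[1], reverse=True)
    return {term for term, _ in ranked[:top_k]}

# Hypothetical corpus for participant-1 drawn from their meeting snippets.
print(participant_profile(["UI design of feature 1", "implementation of the UI"]))
```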
- FIG. 5 is a diagram of another exemplary scenario illustrating generation of the one or more meeting recommendations, in accordance with an embodiment of the disclosure.
- the exemplary scenario 500 includes the ML model- 1 430 , the ML model- 2 432 , and the ML model- 3 434 .
- the exemplary scenario 500 illustrates the one or more topics 502 , 504 , and 506 represented by each of the ML model- 1 430 , the ML model- 2 432 , and the ML model- 3 434 , respectively.
- the one or more topics 502 associated with the ML model- 1 430 include “UI”, “feature- 1 ”, and “C++”.
- the one or more topics 504 associated with the ML model- 2 432 include “UI”, “feature- 2 ”, and “Java”.
- the one or more topics 506 associated with the ML model- 3 434 include “UI”, “feature- 1 ”, and “Color scheme”.
- the exemplary scenario 500 illustrates that the central server 102 receives an input from the computing device 104 a pertaining to scheduling a meeting.
- the input may further include the details pertaining to the plurality of participants of the meeting.
- the details pertaining to the plurality of participants include the participant- 1 and the participant- 2 .
- the recommendation unit 216 may be configured to utilize the ML model- 1 430 and the ML model- 2 432 (associated with the participant- 1 and the participant- 2 , respectively) to determine the one or more topics associated with the participant- 1 and the participant- 2 . Further, the recommendation unit 216 may be configured to determine an intersection between the one or more topics associated with participant- 1 and the one or more topics associated with the participant- 2 (depicted by 508 ).
- the recommendation unit 216 determines that the intersection between the one or more topics associated with the participant- 1 and the participant- 2 is “UI” (depicted by 510 ). Accordingly, the recommendation unit 216 may be configured to generate the meeting recommendation “UI” (depicted by 510 ).
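- Using the FIG. 5 values, the recommendation step reduces to a set intersection. The sketch below assumes each trained ML model can expose its one or more topics as a set; the variable names are hypothetical.

```python
# Topics exposed by each participant's trained ML model (FIG. 5 values).
topics_participant_1 = {"UI", "feature-1", "C++"}   # ML model-1 430
topics_participant_2 = {"UI", "feature-2", "Java"}  # ML model-2 432

# The meeting recommendation is the intersection of the participants' topics.
meeting_recommendations = topics_participant_1 & topics_participant_2
print(meeting_recommendations)  # {'UI'}
```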
- FIG. 6 is a flowchart illustrating a method for training the ML model, in accordance with an embodiment of the disclosure.
- the meeting data is received from the computing devices 104 .
- the processor 202 may be configured to receive the meeting data from each of the computing devices 104 during the meeting.
- the transcript is created based on the meeting data.
- the meeting data monitoring unit 208 may be configured to transform the meeting data to the transcript.
- the ML model is trained based on the meeting data.
- the training unit 214 may be configured to train the ML model based on the meeting data.
- the training unit 214 may be configured to train the ML model for each of the plurality of participants.
- FIG. 7 is a flowchart illustrating another method for training the ML model, in accordance with an embodiment of the disclosure.
- the meeting data is received from the computing devices 104 .
- the processor 202 may be configured to receive the meeting data from each of the computing devices 104 during the meeting.
- a trigger event is identified in the meeting data.
- the trigger event identification unit 210 may be configured to identify the trigger event in the meeting data.
- the meeting data is recorded for the determined duration.
- the recording unit 212 may be configured to record the meeting data to generate the meeting snippet.
- the transcript is created based on the meeting snippet.
- the meeting data monitoring unit 208 may be configured to generate the transcript.
- the ML model is trained based on the meeting data.
- the training unit 214 may be configured to train the ML model based on the meeting data.
- FIG. 8 is a flowchart 800 illustrating a method for generating one or more meeting recommendations, in accordance with an embodiment of the disclosure.
- an input to schedule a meeting is received from a participant.
- the processor 202 may be configured to receive the input.
- the input includes the details of other participants of the meeting.
- the one or more topics associated with each of the participants are determined based on the respective ML models.
- the recommendation unit 216 may be configured to determine the one or more topics for each of the participants.
- the intersection of the one or more topics associated with each of the participants is determined.
- the recommendation unit 216 is configured to determine the intersection.
- the one or more meeting recommendations are generated.
- the recommendation unit 216 may be configured to determine the one or more meeting recommendations based on the intersection.
- the hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may include a general purpose processor, a digital signal processor (DSP), a special-purpose processor such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA), a programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
- a general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, or in addition, some operations or methods may be performed by circuitry that is specific to a given function.
- the functions described herein may be implemented by special-purpose hardware or a combination of hardware programmed by firmware or other software. In implementations relying on firmware or other software, the functions may be performed as a result of execution of one or more instructions stored on one or more non-transitory computer-readable media and/or one or more non-transitory processor-readable media. These instructions may be embodied by one or more processor-executable software modules that reside on the one or more non-transitory computer-readable or processor-readable storage media.
- Non-transitory computer-readable or processor-readable storage media may in this regard comprise any storage media that may be accessed by a computer or a processor.
- non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, disk storage, magnetic storage devices, or the like.
- Disk storage includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™, as well as other storage devices that store data magnetically or optically with lasers. Combinations of the above types of media are also included within the scope of the terms non-transitory computer-readable and processor-readable media. Additionally, any combination of instructions stored on the one or more non-transitory processor-readable or computer-readable media may be referred to herein as a computer program product.
Abstract
Description
- This Application makes reference to, claims priority to, and claims benefit from U.S. Provisional Application Ser. No. 63/028,123, which was filed on May 21, 2020.
- The above referenced Application is hereby incorporated herein by reference in its entirety.
- The presently disclosed embodiments are related, in general, to a meeting. More particularly, the presently disclosed embodiments are related to a ML model for participants of the meeting.
- Meetings, conducted over a communication network, involve participants joining the meeting through computing devices connected to the communication network. In some examples, a plurality of participants of the meeting may generate meeting data during a course of the meeting. Some examples of the meeting data may include, but are not limited to, audio content which may include a participant's voice/audio, video content which may include a participant's video and/or other videos, meeting notes input by the plurality of participants, presentation content, and/or the like. In some examples, the meeting data may be utilized to predict future meeting recommendations for the plurality of the participants.
- Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of described systems with some aspects of the present disclosure, as set forth in the remainder of the present application and with reference to the drawings.
- A system and method to generate a ML model for participants is provided substantially as shown in, and/or described in connection with, at least one of the figures, as set forth more completely in the claims.
- These and other features and advantages of the present disclosure may be appreciated from a review of the following detailed description of the present disclosure, along with the accompanying figures in which like reference numerals refer to like parts throughout.
- FIG. 1 is a block diagram that illustrates a system environment for training a ML model, in accordance with an embodiment of the disclosure;
- FIG. 2 is a block diagram of a central server, in accordance with an embodiment of the disclosure;
- FIG. 3 is a diagram that illustrates an example meeting transcript, in accordance with an embodiment of the disclosure;
- FIG. 4 is a diagram that illustrates an exemplary scenario of the meeting, in accordance with an embodiment of the disclosure;
- FIG. 5 is a diagram of another exemplary scenario illustrating generation of the one or more meeting recommendations, in accordance with an embodiment of the disclosure;
- FIG. 6 is a flowchart illustrating a method for training the ML model, in accordance with an embodiment of the disclosure;
- FIG. 7 is a flowchart illustrating another method for training the ML model, in accordance with an embodiment of the disclosure; and
- FIG. 8 is a flowchart illustrating a method for generating one or more meeting recommendations, in accordance with an embodiment of the disclosure.
- The illustrated embodiments describe a method that includes identifying, by a processor in real time, a trigger event initiated by at least one participant of the meeting. The trigger event is indicative of at least a reference to meeting metadata associated with the meeting. The meeting data associated with at least one participant is recorded for a determined duration to generate meeting snippets based on identification of the trigger event. Further, the method includes training a machine learning (ML) model associated with the at least one participant based on the meeting snippet associated with the at least one participant. Additionally, the method includes generating one or more meeting recommendations by utilizing the trained ML model, wherein the one or more meeting recommendations include meeting metadata and/or meeting data for one or more meetings.
- The various embodiments describe a central server comprising a memory device that stores a set of instructions. Further, the central server includes a processor communicatively coupled to the memory device, wherein the processor is configured to identify, in real time, a trigger event initiated by at least one participant of the meeting, wherein the trigger event is indicative of at least a reference to meeting metadata associated with the meeting. The processor is further configured to record meeting data associated with the at least one participant of the meeting for a determined duration to generate a meeting snippet based on the identification of the trigger event. Furthermore, the processor is configured to train a machine learning (ML) model associated with the at least one participant based on the meeting snippet associated with the at least one participant. Additionally, the processor is configured to generate one or more meeting recommendations by utilizing the trained ML model, wherein the one or more meeting recommendations include meeting metadata for another meeting.
- The various embodiments describe a non-transitory computer-readable medium having stored thereon computer-readable instructions which, when executed by a computer, cause a processor in the computer to execute operations. The operations include identifying, in real time, a trigger event initiated by at least one participant of the meeting, wherein the trigger event is indicative of at least a reference to meeting metadata associated with the meeting. The operations further include recording meeting data associated with the at least one participant of the meeting for a determined duration to generate a meeting snippet, wherein the recording is based on the identified trigger event. Additionally, the operations include training a machine learning (ML) model associated with the at least one participant based on the meeting snippet associated with the at least one participant. The operations further include generating one or more meeting recommendations by utilizing the trained ML model, wherein the one or more meeting recommendations include meeting metadata for another meeting.
- FIG. 1 is a block diagram that illustrates a system environment for training a ML model, in accordance with an embodiment of the disclosure. Referring to FIG. 1, there is shown a system environment 100, which includes a central server 102, one or more computing devices (collectively referred to as the computing devices 104), and a communication network 106. The central server 102 and the computing devices 104 may be communicatively coupled with each other through the communication network 106.
- The central server 102 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to create a meeting session through which the computing devices 104 may communicate with each other. For example, the computing devices 104 may share content (referred to as meeting data) amongst each other via the meeting session. For example, the central server 102 may receive the meeting data from each of the computing devices 104. Thereafter, the central server 102 may be configured to monitor the meeting data received from each of the computing devices 104. The monitoring of the meeting data may comprise identifying a trigger event during the meeting. The central server 102 may be configured to capture a plurality of meeting snippets for each of the plurality of participants based on the identification of the trigger event. Additionally, or alternatively, the central server 102 may be configured to train a Machine Learning (ML) model for each of the plurality of participants based on the plurality of meeting snippets. In an alternative embodiment, the central server 102 may be configured to train the ML model for each of the plurality of participants, directly, based on the meeting data received from each of the computing devices 104. Further, the central server 102 may be configured to utilize the ML model to generate one or more meeting recommendations for each of the plurality of participants. Examples of the central server 102 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, a computing device coupled to the computing devices 104 over a local network, an edge computing device, a cloud server, or any other computing device. Notwithstanding, the disclosure may not be so limited, and other embodiments may be included without limiting the scope of the disclosure.
- The computing devices 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to connect to the meeting session, created by the central server 102. In an exemplary embodiment, the computing devices 104 may be associated with the plurality of participants of the meeting. The plurality of participants may provide one or more inputs during the meeting that may cause the computing devices 104 to generate the meeting data during the meeting. In an exemplary embodiment, the meeting data may correspond to the content shared amongst the computing devices 104 during the meeting. In some examples, the meeting data may comprise, but is not limited to, audio content that is generated by the plurality of participants as the plurality of participants speak during the meeting, video content that may include video feeds of the plurality of participants, meeting notes input by the plurality of participants during the meeting, presentation content, screen sharing content, file sharing content, and/or any other content shared during the meeting. In some examples, the computing devices 104 may be configured to transmit the meeting data to the central server 102. Additionally, or alternatively, the computing devices 104 may be configured to receive an input, indicative of the trigger event, from the plurality of participants. Upon receiving the input, the computing devices 104 may be configured to transmit the input to the central server 102. Examples of the computing devices 104 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, or any other computing device.
- In an embodiment, the communication network 106 may include a communication medium through which each of the computing devices 104 associated with the plurality of participants may communicate with each other and/or with the central server 102. Such communication may be performed in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, infrared (IR), IEEE 802.11, 802.16, 2G, 3G, 4G, 5G, 6G cellular communication protocols, and/or Bluetooth (BT) communication protocols. The communication network 106 may include, but is not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a telephone line (POTS), and/or a Metropolitan Area Network (MAN).
- In operation, the central server 102 may receive a request, from a computing device 104 a, to generate the meeting session for a meeting. In an exemplary embodiment, the request may include meeting metadata associated with the meeting that is to be scheduled. In an exemplary embodiment, the meeting metadata may include, but is not limited to, an agenda of the meeting, one or more topics to be discussed during the meeting, a time duration of the meeting, a schedule of the meeting, meeting notes carried forward from previous meetings, and/or the like. Upon receiving the request, the central server 102 may create the meeting session. In an exemplary embodiment, the meeting session may correspond to a communication session that allows the computing devices 104 to communicate with each other. The meeting session may share unique keys (public and private keys) with the computing devices 104, which allow the computing devices 104 to communicate with each other. In some examples, the unique keys corresponding to the meeting session may ensure that any other computing devices (other than the computing devices 104) are not allowed to join the meeting session. Additionally, or alternatively, the central server 102 may send a notification to the computing devices 104 pertaining to the scheduled meeting. The notification may include the details of the meeting session. For example, the central server 102 may transmit the unique keys and/or the meeting metadata to the computing devices 104.
- The computing devices 104 may join the meeting through the meeting session. In an exemplary embodiment, the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to join the meeting session. In an exemplary embodiment, joining the meeting session has been interchangeably referred to as joining the meeting. Thereafter, the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to share content amongst each other. For instance, the plurality of participants may provide input to the computing devices 104 to cause the computing devices 104 to share the content amongst each other. For example, the plurality of participants may speak during the meeting. The computing devices 104 may capture the voices of the plurality of participants through one or more microphones to generate audio content. Further, the computing devices 104 may transmit the audio content over the communication network 106 (i.e., the meeting session). Additionally, or alternatively, the plurality of participants may share respective video feeds amongst each other by utilizing an image capturing device (e.g., a camera) associated with the computing devices 104. Additionally, or alternatively, a participant of the plurality of participants may present content saved on the computing device (for example, the computing device 104 a) through a screen sharing capability. For example, the participant may present content to other participants (of the plurality of participants) through the power point presentation application installed on the computing device 104 a. In some examples, the participant may share content through other applications installed on the computing device 104 a. For example, the participant may share content through the word processor application installed on the computing device 104 a. Additionally, or alternatively, the participant may take meeting notes during the meeting. In an exemplary embodiment, the meeting data may include the audio content, the video content, the meeting notes, and/or the screen sharing content (e.g., through applications installed on the computing device 104 a). Accordingly, in some examples, the computing device 104 a may generate the meeting data during the meeting. Similarly, the other computing devices of the computing devices 104 may transmit the meeting data to the central server 102 over the meeting session. In an exemplary embodiment, the computing devices 104 may transmit the meeting data in near real time. To this end, the computing devices 104 may be configured to transmit the meeting data as and when the computing devices 104 generate the meeting data.
- In an exemplary embodiment, the central server 102 may receive the meeting data from each of the computing devices 104. Thereafter, the central server 102 may be configured to utilize the meeting data, received from each of the computing devices 104, to train a ML model for each of the plurality of participants. For example, the central server 102 receives the meeting data from the computing device 104 a, associated with the participant-1. Further, the central server 102 receives the meeting data from the computing device 104 b, associated with the participant-2. Accordingly, the central server 102 may train a ML model for the participant-1 based on the meeting data received from the computing device 104 a. Additionally, the central server 102 may train another ML model for the participant-2 based on the meeting data received from the computing device 104 b. Accordingly, the central server 102 may be configured to train the ML model for each of the plurality of participants.
- In some examples, the scope of the disclosure is not limited to the central server 102 utilizing the complete meeting data to train the ML model for each of the plurality of participants. In some examples, the central server 102 may be configured to train the ML model based on a portion of the meeting data received from the computing device 104 a. In such an embodiment, prior to training the ML model, the central server 102 may compare the meeting data (received from each of the computing devices 104) with the meeting metadata to identify a trigger event in the meeting data. For example, the central server 102 may compare the meeting data received from the computing device 104 a with the meeting metadata to identify the trigger event initiated by the participant associated with the computing device 104 a. In an exemplary embodiment, the trigger event may be indicative of a timestamp at which the participant discussed or referred to a topic corresponding to the meeting metadata. For example, the participant discussed a topic mentioned in the agenda of the meeting.
- Based on the identification of the trigger event, the central server 102 may generate a meeting snippet by recording the meeting data, received from a computing device (e.g., the computing device 104 a), for a determined duration. In an example embodiment, the central server 102 may be configured to associate the meeting snippet with the participant associated with the computing device (e.g., the computing device 104 a). Similarly, during the meeting, the central server 102 may be configured to generate a plurality of meeting snippets associated with each of the plurality of participants. Thereafter, the central server 102 may be configured to train the ML model for each of the plurality of participants based on the plurality of meeting snippets associated with each of the plurality of participants.
- In an exemplary embodiment, the central server 102 may be configured to utilize the ML model to generate one or more meeting recommendations for each of the plurality of participants. In an example embodiment, the one or more meeting recommendations may include, but are not limited to, suggesting meeting metadata for another meeting to be scheduled with the plurality of participants.
- FIG. 2 is a block diagram of the central server, in accordance with an embodiment of the disclosure. Referring to FIG. 2, there is shown a central server 102 that comprises a processor 202, a non-transitory computer readable medium 203, a memory device 204, a transceiver 206, a meeting data monitoring unit 208, a trigger event identification unit 210, a recording unit 212, a training unit 214, and a recommendation unit 216.
- The processor 202 may be embodied as one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an application specific integrated circuit (ASIC) or field programmable gate array (FPGA), or some combination thereof.
- Accordingly, although illustrated in FIG. 2 as a single controller, in an exemplary embodiment, the processor 202 may include a plurality of processors and signal processing modules. The plurality of processors may be embodied on a single electronic device or may be distributed across a plurality of electronic devices collectively configured to function as the circuitry of the central server 102. The plurality of processors may be in communication with each other and may be collectively configured to perform one or more functionalities of the circuitry of the central server 102, as described herein. In an exemplary embodiment, the processor 202 may be configured to execute instructions stored in the memory device 204 or otherwise accessible to the processor 202. These instructions, when executed by the processor 202, may cause the circuitry of the central server 102 to perform one or more of the functionalities, as described herein.
- Whether configured by hardware, firmware/software methods, or by a combination thereof, the processor 202 may include an entity capable of performing operations according to embodiments of the present disclosure while configured accordingly. Thus, for example, when the processor 202 is embodied as an ASIC, FPGA, or the like, the processor 202 may include specifically configured hardware for conducting one or more operations described herein. Alternatively, as another example, when the processor 202 is embodied as an executor of instructions, such as may be stored in the memory device 204, the instructions may specifically configure the processor 202 to perform one or more algorithms and operations described herein.
- Thus, the processor 202 used herein may refer to a programmable microprocessor, microcomputer, or multiple processor chip or chips that may be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described above. In some devices, multiple processors may be provided that may be dedicated to wireless communication functions, and one processor may be dedicated to running other applications. Software applications may be stored in the internal memory before they are accessed and loaded into the processors. The processors may include internal memory sufficient to store the application software instructions. In many devices, the internal memory may be a volatile or non-volatile memory, such as flash memory, or a mixture of both. The memory can also be located internal to another computing resource (e.g., enabling computer readable instructions to be downloaded over the Internet or another wired or wireless connection).
- The non-transitory computer readable medium 203 may include any tangible or non-transitory storage media or memory media such as electronic, magnetic, or optical media (e.g., a disk or CD/DVD-ROM) coupled to the processor 202.
- The memory device 204 may include suitable logic, circuitry, and/or interfaces that are adapted to store a set of instructions that is executable by the processor 202 to perform predetermined operations. Some of the commonly known memory implementations include, but are not limited to, a hard disk, random access memory, cache memory, read-only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, a compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof. In an exemplary embodiment, the memory device 204 may be integrated with the processor 202 on a single chip, without departing from the scope of the disclosure.
- The transceiver 206 may correspond to a communication interface that may facilitate transmission and reception of messages and data to and from various devices (e.g., the computing devices 104). Examples of the transceiver 206 may include, but are not limited to, an antenna, an Ethernet port, a USB port, a serial port, or any other port that can be adapted to receive and transmit data. The transceiver 206 transmits and receives data and/or messages in accordance with various communication protocols, such as Bluetooth®, Infra-Red, I2C, TCP/IP, UDP, and 2G, 3G, 4G, or 5G communication protocols.
- The meeting data monitoring unit 208 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to receive the meeting data from each of the computing devices 104. In an exemplary embodiment, the meeting data monitoring unit 208 may be configured to generate a transcript from the meeting data using one or more known techniques. Some examples of the one or more known techniques may include Speech to Text (STT), Optical Character Recognition (OCR), and/or the like. In an example embodiment, the meeting data monitoring unit 208 may be configured to individually generate a transcript for the meeting data received from each of the computing devices 104. Additionally, or alternatively, the meeting data monitoring unit 208 may be configured to timestamp the transcript, received from each of the computing devices 104, in accordance with a time instant at which the central server 102 received the meeting data (from which the transcript was generated). The meeting data monitoring unit 208 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- The trigger event identification unit 210 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to compare the transcript of the meeting data with the meeting metadata. Based on the comparison between the meeting metadata and the transcript of the meeting data, the trigger event identification unit 210 may be configured to identify the trigger event. In an example embodiment, the trigger event identification unit 210 may be configured to individually identify the trigger event in the meeting data received from each of the computing devices 104. The trigger event identification unit 210 may be configured to associate the trigger event with a timestamp. In an example embodiment, the timestamp may correspond to a time instant at which the at least one participant mentioned or referred to the meeting metadata. In an exemplary embodiment, the trigger event identification unit 210 may be configured to receive an input from a computing device (e.g., the computing device 104 a) of the computing devices 104. The trigger event identification unit 210 may identify the received input as the trigger event for the computing device 104 a. The trigger event identification unit 210 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- The recording unit 212 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to generate a meeting snippet based on the identification of the trigger event. In an exemplary embodiment, the recording unit 212 may be configured to record the meeting data (in which the trigger event is identified) for a determined duration in order to generate the meeting snippet. For example, the recording unit 212 may be configured to record the meeting data, received from the computing device 104 a, to generate the meeting snippet. In an exemplary embodiment, the recording unit 212 may be configured to generate a plurality of meeting snippets by recording the meeting data received from a computing device (e.g., 104 a). The recording unit 212 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- The training unit 214 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to train the ML model for each of the plurality of participants based on the meeting data received from the respective computing devices 104. For example, the training unit 214 may be configured to train the ML model for the participant-1 based on the meeting data received from the computing device 104 a (being used by the participant-1). Similarly, the training unit 214 may be configured to train another ML model for the participant-2 based on the meeting data received from the computing device 104 b (being used by the participant-2). In another example, the training unit 214 may be configured to train the ML model for each of the plurality of participants based on the plurality of meeting snippets. Additionally, or alternatively, the training unit 214 may be configured to train the ML model based on other information obtained from other sources such as, but not limited to, one or more project tracking tools and/or the meeting metadata. The training unit 214 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- The recommendation unit 216 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to generate the one or more meeting recommendations for each of the plurality of participants. The recommendation unit 216 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
- In operation, the processor 202 may receive the request to schedule the meeting from at least one computing device 104 a of the computing devices 104. In an exemplary embodiment, the request to schedule the meeting includes the meeting metadata. As discussed, the meeting metadata includes the agenda of the meeting, the one or more topics to be discussed during the meeting, the time duration of the meeting, the schedule of the meeting, the meeting notes carried forward from previous meetings, and/or the like. The following table illustrates example meeting metadata:
- TABLE 1: Example meeting metadata
- Agenda: To discuss design of the User Interface (UI)
- One or more topics: 1. Layout; 2. Fields to be displayed in the UI; 3. Current status of the project
- Time duration: 1 hour
- Schedule of the meeting: 15th Nov. 2020; 9 PM to 10 PM
- Meeting notes from previous meetings: 1. UI to include feature 1 and feature 2; 2. Feature 1 defined as a portion depicting participants; 3. Feature 2 depicting a chat box
- In an exemplary embodiment, the processor 202 may be configured to store the meeting metadata in the memory device 204. Additionally, based on receiving the request to schedule the meeting, the processor 202 may be configured to create the meeting session. As discussed, the meeting session corresponds to a communication session that allows the computing devices 104 to connect to the central server 102. Further, the meeting session allows the computing devices 104 to communicate amongst each other. For example, over the meeting session, the computing devices 104 may share content (e.g., audio content and/or video content) amongst each other. In an exemplary embodiment, the processor 202 may be configured to transmit a message to each of the computing devices 104 comprising the details of the meeting session. For example, the message may include a link to connect to the meeting session.
- At the scheduled time, the plurality of participants may cause the respective computing devices 104 to join the meeting session. For example, a participant may click on the link (received in the message from the central server 102) to cause the computing device to join the meeting session. Based on the computing devices 104 joining the meeting session, the central server 102 may transmit a User Interface (UI) to each of the computing devices 104. In an exemplary embodiment, the UI may allow the plurality of participants to access one or more features. For example, the UI may allow the plurality of participants to share audio content and/or video content. To this end, the UI may provide control to the plurality of participants to enable/disable an image capturing device and/or an audio capturing device in the computing devices 104. Additionally, or alternatively, the UI may enable the plurality of participants to share other content. For example, the UI may provide a feature to the plurality of participants that would allow the plurality of participants to cause the computing devices 104 to share content/applications being displayed on a display device associated with the computing devices 104. For instance, through the UI, the plurality of participants may cause the computing devices 104 to share a power point presentation being displayed on the computing devices 104. Additionally, or alternatively, the UI may present a notes feature to the plurality of participants on the respective computing devices 104. The notes feature may enable the plurality of participants to input notes or keep track of important points discussed during the meeting. For example, the notes feature of the UI may correspond to a space on the UI in which a participant may input text for his/her reference. Further, the text input by the plurality of participants may correspond to the notes taken by the plurality of participants during the meeting. Additionally, or alternatively, the computing devices 104 may be configured to transmit the text input by the plurality of participants to the central server 102. Further, in one embodiment, the central server 102 may be configured to share the text input by the plurality of participants amongst each of the computing devices 104. In an alternative embodiment, the central server 102 may not share the text input by the plurality of participants amongst each of the computing devices 104.
- The plurality of participants may utilize the one or more features presented on the UI to interact and/or share content amongst each other. Accordingly, each of the computing devices 104 may generate meeting data during the meeting. As discussed, the meeting data may include, but is not limited to, the audio content generated by the plurality of participants as the plurality of participants speak during the meeting, the video content that includes the video feeds of the plurality of participants, the meeting notes input by the plurality of participants during the meeting, the presentation content, the screen sharing content, the file sharing content, and/or any other content shared during the meeting. To this end, in an exemplary embodiment, the processor 202 may receive the meeting data from each of the computing devices 104 in real time.
- In some examples, since the computing devices 104 generate the meeting data based on the inputs provided by the plurality of participants, the meeting data received from each of the computing devices 104 is associated with the respective participant using that computing device. For example, the meeting data received from the computing device 104 a is associated with the participant-1 using the computing device 104 a. For the purpose of brevity, the foregoing description has been described in conjunction with the meeting data received from the computing device 104 a. However, those skilled in the art would appreciate that the foregoing description is also applicable to the meeting data received from the other computing devices 104.
- In an exemplary embodiment, the meeting data monitoring unit 208 may be configured to generate, in real time, a transcript of the meeting data received from the computing device 104 a. For example, the meeting data monitoring unit 208 may be configured to convert the audio content (received from the computing devices 104) to text using known Speech to Text (STT) techniques. The text (obtained from the audio content) may constitute the transcript. In another example, the meeting data monitoring unit 208 may be configured to generate the transcript from the video content. For instance, the meeting data monitoring unit 208 may perform optical character recognition (OCR) on the video content to generate the transcript. In yet another example, the meeting data monitoring unit 208 may be configured to consider the meeting notes (input by the participant associated with the computing device 104 a) as the transcript. In yet another example, the meeting data monitoring unit 208 may be configured to perform OCR on the content shared via the screen sharing feature to generate the transcript. Additionally, or alternatively, the meeting data monitoring unit 208 may be configured to timestamp the transcript in accordance with a time instant of the reception of the meeting data from the computing device 104 a. For example, the processor 202 receives the meeting data at time instant T1. To this end, the meeting data monitoring unit 208 may generate the transcript from the meeting data received at the time instant T1 and may timestamp the transcript with the time instant T1. An example of the transcript is further illustrated and described in FIG. 3. Similarly, during the meeting, the meeting data monitoring unit 208 may be configured to generate multiple transcripts of the meeting data received from the computing device 104 a based on the time instant at which the central server 102 receives the corresponding meeting data. For example, the meeting data monitoring unit 208 may generate another transcript at the time instant T2 based on the meeting data received at the time instant T2. To this end, the meeting data monitoring unit 208 may be configured to generate the transcripts as and when the central server 102 receives the meeting data from the computing device 104 a.
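- A minimal sketch of this per-device, timestamped transcript generation follows. The `speech_to_text` and `ocr` callables stand in for whatever STT and OCR engines are used, and the transcript record shape is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable

@dataclass
class TranscriptEntry:
    device_id: str       # e.g., "104a"
    timestamp: datetime  # time instant at which the meeting data was received
    text: str            # transcript generated from the meeting data

def transcribe(payload: bytes, kind: str,
               speech_to_text: Callable[[bytes], str],
               ocr: Callable[[bytes], str]) -> str:
    """Route the meeting data to the appropriate known technique."""
    if kind == "audio":
        return speech_to_text(payload)   # STT for audio content
    if kind in ("video", "screen_share"):
        return ocr(payload)              # OCR for visual content
    return payload.decode("utf-8")       # meeting notes are already text

def on_meeting_data(device_id: str, kind: str, payload: bytes,
                    speech_to_text, ocr) -> TranscriptEntry:
    """Generate and timestamp a transcript as and when meeting data arrives."""
    text = transcribe(payload, kind, speech_to_text, ocr)
    return TranscriptEntry(device_id, datetime.now(), text)
```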
- Additionally, or alternatively, the meeting data monitoring unit 208 may be configured to include the meeting metadata (generated during scheduling of the meeting) in the transcript. Additionally, or alternatively, the meeting data monitoring unit 208 may be configured to retrieve task metadata associated with one or more tasks assigned to the participant-1 from the one or more project tracking tools. Some examples of the project tracking tools may include, but are not limited to, Salesforce®, Era®, and/or the like. In an exemplary embodiment, the task metadata may include, but is not limited to, a task description, a task outcome, tools to be used to complete the task, a planned completion date associated with the task, and/or a current status of the task. In some examples, the participant-1 may be working on more than one project in parallel. Accordingly, the participant-1 may be assigned multiple tasks. The task metadata associated with such multiple tasks is usually stored on the one or more project tracking tools. The meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with the one or more tasks, assigned to the participant-1, from the project tracking tools. In an alternative embodiment, the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with a set of tasks, of the one or more tasks assigned to the participant-1, that are relevant to the meeting. For example, the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with the set of tasks based on the meeting metadata. To this end, the meeting data monitoring unit 208 may be configured to query the Application Programming Interface (API) of the one or more project tracking tools using the meeting metadata to retrieve the task metadata associated with the set of tasks. For example, the meeting metadata includes the agenda “UI design”. Accordingly, the meeting data monitoring unit 208 may be configured to retrieve the task metadata associated with the set of tasks assigned to the participant-1 pertaining to the UI design. Further, the meeting data monitoring unit 208 may be configured to add the task metadata to the transcript.
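The task-metadata retrieval may be sketched as a simple API query. The endpoint, parameter names, and response shape below are hypothetical placeholders (each project tracking tool exposes its own API); only the flow of filtering a participant's tasks by the meeting agenda follows the description above:

```python
import requests

def fetch_relevant_task_metadata(base_url: str, token: str,
                                 participant_id: str, agenda: str) -> list:
    """Query a project tracking tool for tasks assigned to a participant that
    are relevant to the meeting agenda. Endpoint and parameters are hypothetical."""
    response = requests.get(
        f"{base_url}/tasks",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        params={"assignee": participant_id, "query": agenda},
        timeout=10,
    )
    response.raise_for_status()
    # Per the description, each task record may carry: description, outcome,
    # tools to be used, planned completion date, and current status.
    return response.json()
```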
FIG. 3 is a diagram that illustrates an example meeting transcript, in accordance with an embodiment of the disclosure. Referring to FIG. 3, there is shown a meeting transcript 300 that includes a transcript “agenda1: to create UI” (depicted by 302) received at the time instant T1 (depicted by 304) from the computing device 104 a. Similarly, the meeting transcript 300 includes another transcript “UI to include feature 1 and feature 2” (depicted by 306) received at the time instant T2 (depicted by 308) from the computing device 104 a. Additionally, the meeting transcript 300 includes the task metadata 310 associated with the set of tasks assigned to the participant-1 associated with the computing device 104 a. Additionally, or alternatively, the meeting transcript 300 includes the meeting metadata 312. - In an exemplary embodiment, the training unit 214 may be configured to train an ML model for the participant-1 associated with the
computing device 104 a based on the transcript (generated from the meeting data received from the computing device 104 a), the task metadata associated with the set of tasks assigned to the participant-1, and the meeting metadata. In some examples, the ML model may be indicative of a profile of the participant-1. In an exemplary embodiment, the profile of a participant may be deterministic of one or more topics which are relevant and/or of interest to the participant. Additionally, or alternatively, the profile may be indicative of one or more skills of the participant-1. - To train the ML model, the training unit 214 may be configured to remove unwanted words and/or phrases from the transcript to generate a clean transcript. Such unwanted words and/or phrases may be referred to as stop words. In some examples, the stop words may include words that are insignificant and do not add meaning to the transcript. Some examples of the stop words may include, but are not limited to, “is”, “are”, “and”, “at least”, and/or the like. Thereafter, in some examples, the training unit 214 may be configured to identify n-grams in the clean transcript, where an n-gram corresponds to a combination of two or more words that are used in conjunction in the transcript. For example, the terms “user” and “interface” are often used together. Accordingly, the training unit 214 may be configured to identify the term “user interface” as an n-gram. In an exemplary embodiment, the training unit 214 may be configured to add the identified n-grams to the clean transcript to create a training corpus.
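A minimal sketch of this corpus-cleaning step, assuming an illustrative stop-word list and a simple adjacency count for identifying n-grams (the disclosure fixes neither):

```python
from collections import Counter

STOP_WORDS = {"is", "are", "and", "at", "least", "the", "to", "of"}  # illustrative subset

def build_training_corpus(transcript_text: str, min_pair_count: int = 2) -> list:
    """Remove stop words, then append word pairs that are used in conjunction
    often enough to count as n-grams (e.g., "user" + "interface")."""
    tokens = [w for w in transcript_text.lower().split() if w not in STOP_WORDS]
    pair_counts = Counter(zip(tokens, tokens[1:]))  # adjacent-pair frequencies
    ngrams = [f"{a}_{b}" for (a, b), n in pair_counts.items() if n >= min_pair_count]
    return tokens + ngrams  # clean transcript plus identified n-grams
```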
- Thereafter, the training unit 214 may be configured to train the ML model using the training corpus. In some examples, training the ML model using the training corpus may include converting the words in the training corpus into one or more vectors. Thereafter, the training unit 214 may be configured to train a neural network using the one or more vectors. The trained neural network corresponds to the ML model. Those skilled in the art would appreciate that the scope of the disclosure is not limited to using the neural network as the ML model. In an exemplary embodiment, the ML model may be realized using other techniques such as, but not limited to, logistic regression, Bayesian regression, random forest regression, and/or the like.
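The disclosure leaves the vectorization and the network unspecified; one plausible realization, sketched here with gensim, is a Word2Vec embedding, i.e., a shallow neural network trained on the corpus whose learned term vectors then serve as the participant's profile:

```python
from gensim.models import Word2Vec

# One token list per transcript; e.g., the output of build_training_corpus().
training_corpus = [["user_interface", "design", "feature-1", "c++"]]

# A shallow neural network mapping each term to a dense vector.
ml_model = Word2Vec(
    sentences=training_corpus,
    vector_size=64,   # dimensionality of the term vectors
    window=3,         # context window within a transcript
    min_count=1,      # keep rare terms; meeting transcripts are short
)

# Terms close in vector space indicate related topics/skills for the participant.
print(ml_model.wv.most_similar("user_interface", topn=3))
```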
- In an exemplary embodiment, as discussed, the ML model is associated with the
participant-1. Similarly, the training unit 214 may be configured to train other ML models for other participants. - In some examples, the scope of the disclosure is not limited to training the ML model using the training corpus generated from the meeting data. In an exemplary embodiment, the training unit 214 may be configured to generate the training corpus based on an identification of a trigger event in the meeting data. To this end, in an exemplary embodiment, the trigger event identification unit 210 may be configured to compare the meeting metadata and the transcript. In an exemplary embodiment, the trigger event identification unit 210 may compare the transcript at each timestamp (in the meeting transcript) with the meeting metadata using one or more known text comparison techniques. Some examples of the text comparison techniques may include, but are not limited to, cosine similarity, Euclidean distance, the Pearson coefficient, and/or the like. In order to utilize the text comparison techniques, the trigger event identification unit 210 may be configured to convert the transcript at each timestamp into a transcript vector using one or more known transformation techniques such as, but not limited to, term frequency-inverse document frequency (TF-IDF), Word2Vec, and/or the like. In an exemplary embodiment, the transcript vector may correspond to an array of integers, in which each integer corresponds to a term in the transcript. Further, the value of the integer may be deterministic of a characteristic of the term within the transcript. For example, the integer may be deterministic of a count of the times a term appears in the transcript. Similarly, the trigger event identification unit 210 may be configured to convert the meeting metadata to a metadata vector. Thereafter, the trigger event identification unit 210 may utilize the one or more text comparison techniques to compare the metadata vector and the transcript vector and determine a similarity score between the metadata vector and the transcript vector. For example, the trigger event identification unit 210 may determine a cosine similarity score between the metadata vector and the transcript vector.
- In some embodiments, the trigger event identification unit 210 may be configured to determine whether the similarity score is greater than or equal to a similarity score threshold. If the trigger event identification unit 210 determines that the similarity score is less than the similarity score threshold, the trigger event identification unit 210 may be configured to determine that the transcript is dissimilar from the meeting metadata. However, if the trigger event identification unit 210 determines that the similarity score is greater than or equal to the similarity score threshold, the trigger event identification unit 210 may be configured to determine that the transcript is similar to the meeting metadata. Accordingly, the trigger event identification unit 210 may determine that the participant-1 mentioned or presented content related to the meeting metadata. To this end, the trigger event identification unit 210 may identify the trigger event.
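A compact sketch of this similarity-based trigger check, using TF-IDF vectors and cosine similarity from scikit-learn; the threshold value is illustrative, since the disclosure does not fix one:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SIMILARITY_THRESHOLD = 0.35  # illustrative; configured on the central server

def is_trigger_event(transcript_text: str, meeting_metadata_text: str) -> bool:
    """Vectorize the transcript and the meeting metadata, then flag a trigger
    event when their cosine similarity meets the threshold."""
    vectors = TfidfVectorizer().fit_transform([transcript_text, meeting_metadata_text])
    score = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return score >= SIMILARITY_THRESHOLD

# A transcript that echoes the agenda should score noticeably above zero.
print(is_trigger_event("agenda1: to create UI", "agenda: UI design"))
```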
- In some embodiments, the scope of the disclosure is not limited to the trigger event identification unit 210 identifying the trigger event based on the comparison between the transcript and the meeting metadata. In an exemplary embodiment, the trigger event identification unit 210 may be configured to receive an input from a computing device (e.g., 104 a) of the
computing devices 104. The input may indicate that a participant may want to record a portion of the meeting for later reference. For example, during the meeting, the participant may find the discussion and/or the content being presented to be interesting. Accordingly, in some examples, the participant may provide an input on the UI to record the portion of the meeting that includes the discussion that the participant found interesting. In such an embodiment, the computing device 104 a may transmit the input (received from the participant through the UI) to the central server 102. Upon receiving the input from the computing device 104 a, the trigger event identification unit 210 may identify the input as the trigger event. - Additionally, or alternatively, the
processor 202 may be configured to categorize the transcript at each timestamp into one or more categories. In an exemplary embodiment, the one or more categories may include an action category, a schedule category, a work status category, and/or the like. In an exemplary embodiment, the action category may correspond to a category that may comprise transcripts which are indicative of an action item for the plurality of participants. In an exemplary embodiment, the schedule category may correspond to a category that may comprise transcripts indicative of a schedule of a subsequent meeting. In yet another embodiment, the work status category may correspond to a category that may include transcripts indicative of a status of a task or a work item. - In an exemplary embodiment, the
processor 202 may be configured to utilize a classifier to categorize the transcript at each timestamp into the one or more categories. In some examples, the classifier may correspond to a machine learning (ML) model that is capable of categorizing the transcript at each timestamp based on the semantics of the transcripts. For example, the ML model may be capable of transforming the transcript into the transcript vector. Thereafter, the ML model may be configured to utilize known classification techniques to classify the transcript at each timestamp into the one or more categories. Some examples of the classification techniques may include, but are not limited to, the naïve Bayes classification technique, logistic regression, a hierarchical classifier, a random forest classifier, and/or the like. In some examples, prior to utilizing the classifier to classify the transcripts into the one or more categories, the processor 202 may be configured to train the classifier based on training data. The training data may include one or more features and one or more labels. The one or more features may include training transcripts, while the one or more labels may include the one or more categories. In the training data, each of the transcripts is associated with a category of the one or more categories. Training the classifier may include the processor 202 defining a mathematical relationship between the transcript vectors and the one or more categories. Thereafter, the processor 202 utilizes the classifier to classify the transcript into the one or more categories. - In some examples, the trigger event identification unit 210 may be configured to identify the trigger event based on the classification of the transcript in the one or more categories. Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the categorization of the transcript in the one or more categories and the reception of the input from the computing device (e.g., 104 a). Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the categorization of the transcript in the one or more categories and the similarity score. Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the similarity score and the reception of the input from the computing device (e.g., 104 a). Additionally, or alternatively, the trigger event identification unit 210 may be configured to identify the trigger event based on the categorization of the transcript in the one or more categories, the reception of the input from the computing device (e.g., 104 a), and the similarity score.
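The classifier described above may be sketched as a standard text-classification pipeline; the labeled examples and the TF-IDF/naïve Bayes pairing are illustrative choices among the techniques the description names:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative training data: features are training transcripts, labels are categories.
train_transcripts = [
    "participant-2 will provide the design details",   # action item
    "let us schedule the follow-up for next Tuesday",  # schedule
    "feature 1 of UI is WIP",                          # work status
]
train_labels = ["action", "schedule", "work_status"]

# TF-IDF turns each transcript into a transcript vector; naïve Bayes learns
# the mapping from vectors to the one or more categories.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(train_transcripts, train_labels)

print(classifier.predict(["feature 2 of UI is complete"]))
```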
- In an exemplary embodiment, based on the identification of the trigger event, the
recording unit 212 may be configured to record the meeting data received from the computing device 104 a for the determined duration. In an exemplary embodiment, a length of the determined duration may be defined during configuration of the central server 102. Further, the determined duration may be defined based on the timestamp associated with the transcript corresponding to the trigger event (i.e., the transcript that is similar to the meeting metadata). In an alternate embodiment, the determined duration may be defined based on the timestamp of the reception of the input from the computing device 104 a. In an exemplary embodiment, the determined duration is defined to include a first determined duration chronologically prior to the timestamp and a second determined duration chronologically after the timestamp. In some examples, a length of the first determined duration is the same as a length of the second determined duration. In another example, the length of the first determined duration is different from the length of the second determined duration. For instance, the length of the first determined duration is greater than the length of the second determined duration. In another instance, the length of the second determined duration is greater than the length of the first determined duration. - In view of the foregoing, the
recording unit 212 may be configured to contiguously record the meeting data for the first determined duration prior to the timestamp and for the second determined duration after the timestamp. Accordingly, the recording of the meeting data includes the recording of the audio content, the video content, the screen sharing content, the meeting notes, the presentation content, and/or the like, received during the determined duration. In some examples, the recorded meeting data may correspond to the meeting snippet. - In some examples, the
recording unit 212 may be configured to record the meeting data for the determined duration after the timestamp. In another example, the recording unit 212 may be configured to record the meeting data for the determined duration prior to the timestamp. In an exemplary embodiment, using the methodology described herein, the recording unit 212 may be configured to record a plurality of meeting snippets from the meeting data received from the computing device 104 a. Thereafter, the meeting data monitoring unit 208 may be configured to generate a plurality of transcripts, one for each of the plurality of meeting snippets. Further, the meeting data monitoring unit 208 may be configured to aggregate the plurality of transcripts to generate a summary transcript. In an exemplary embodiment, the meeting data monitoring unit 208 may be configured to aggregate the plurality of transcripts in the chronological order of the timestamps associated with the respective meeting snippets. In some examples, the summary transcript may capture moments in the meeting in which the participant-1 caused the identification of the trigger event.
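A minimal sketch of the snippet-recording and aggregation flow, assuming a rolling buffer of recent meeting data (buffer horizon and durations are illustrative; the post-trigger portion fills in as data keeps arriving):

```python
from collections import deque

class RecordingUnit:
    """Buffer timestamped meeting data so that, on a trigger at time T, a
    snippet spanning [T - before, T + after] can be assembled."""

    def __init__(self, before=30.0, after=30.0, horizon=300.0):
        self.before, self.after, self.horizon = before, after, horizon
        self.buffer = deque()  # (timestamp, data) pairs

    def ingest(self, timestamp, data):
        self.buffer.append((timestamp, data))
        while self.buffer and timestamp - self.buffer[0][0] > self.horizon:
            self.buffer.popleft()  # drop data older than the retention horizon

    def snippet(self, trigger_ts):
        """Meeting data falling within the determined duration around the trigger."""
        return [(ts, d) for ts, d in self.buffer
                if trigger_ts - self.before <= ts <= trigger_ts + self.after]

def summary_transcript(snippet_transcripts):
    """Aggregate per-snippet transcripts in the chronological order of their timestamps."""
    return " ".join(text for _, text in sorted(snippet_transcripts))
```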
- Additionally, or alternatively, the recording unit 212 may be configured to record the meeting data received from the other computing devices 104 for the determined duration based on the identification of the trigger event to generate a plurality of additional meeting snippets. The meeting data monitoring unit 208 may be configured to generate additional meeting transcripts based on the plurality of additional meeting snippets. - In some examples, a
computing device 104 c of the computing devices 104 may not generate the meeting data. For example, in such an embodiment, the participant associated with the computing device may only be listening to the meeting and may be providing inputs to record meeting snippets. In such an embodiment, the central server 102 may be configured to record the meeting data received from the other computing devices 104 for a determined duration to generate the meeting snippet, based on the reception of the input from the computing device 104 c. Further, the central server 102 may be configured to convert the meeting snippet to a transcript, where the transcript is associated with the computing device 104 c. Furthermore, the central server 102 may be configured to train the ML model for the participant associated with the computing device 104 c based on the transcript obtained from the meeting snippet. - In an exemplary embodiment, the training unit 214 may be configured to generate a training corpus from the summary transcript and/or the additional transcripts using the methodology described above. Further, the training unit 214 may be configured to train the ML model using the training corpus generated from the summary transcript and/or the additional transcripts. Similarly, the training unit 214 may be configured to train other ML models for the other participants. Further, the training unit 214 may be configured to store the ML models, trained for each of the plurality of participants, in the
memory device 204. In some examples, where the ML model for a participant in the meeting is already stored on the memory device 204, the training unit 214 may be configured to update the existing ML model. In such an embodiment, the training unit 214 may be configured to update the existing ML model based on the training corpus generated from the transcript of the meeting data associated with the participant. - In an exemplary embodiment, the
processor 202 may be configured to receive another input from the computing device 104 a (associated with the participant-1) to schedule another meeting. In some examples, the input may further include details pertaining to the other participants that the participant-1 intends to be part of the other meeting. In such an embodiment, the processor 202 may be configured to retrieve the ML models associated with the participant-1 and the other participants from the memory device 204. Thereafter, the processor 202 may be configured to generate one or more meeting recommendations for the other meeting based on the ML models associated with the participant-1 and the other participants. For example, the processor 202 may be configured to determine one or more topics that are common to the participant-1 and the other participants based on the respective ML models. The processor 202 may be configured to utilize the one or more topics as the one or more meeting recommendations.
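Reduced to its essentials, this recommendation step is an intersection over the topic sets indicated by each invited participant's ML model; the dictionary below mirrors the FIG. 5 example described later:

```python
def meeting_recommendations(participant_topics: dict) -> list:
    """Recommend the topics common to all invited participants, where each
    participant's ML model has been reduced to its set of topics/skills."""
    topic_sets = [set(t) for t in participant_topics.values()]
    common = set.intersection(*topic_sets) if topic_sets else set()
    return sorted(common)

topics = {
    "participant-1": ["UI", "feature-1", "C++"],
    "participant-2": ["UI", "feature-2", "Java"],
}
print(meeting_recommendations(topics))  # ['UI']
```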
- Further, during the other meeting, the ML models associated with the plurality of participants may enable the central server 102 to capture a plurality of meeting snippets that may be of interest to the plurality of participants. For example, based on the one or more topics associated with each of the plurality of participants (determined from the ML model associated with each of the plurality of participants), the central server 102 may be configured to identify trigger events during the other meeting. For example, in such an embodiment, the central server 102 may be configured to identify (during the other meeting) the time instants at which the plurality of participants referred to the one or more topics, as the trigger events. Accordingly, based on the identification of the trigger events, the central server 102 may be configured to record the meeting for the determined duration to generate a plurality of meeting snippets. - The scope of the disclosure is not limited to capturing the plurality of snippets during the meeting. In an exemplary embodiment, the
processor 202 may be configured to capture the plurality of meeting snippets of one or more non-real time meeting data shared amongst the plurality of participants. The one or more non-real time meeting data may include meeting data that is shared amongst the plurality of participants outside the meeting. For example, the one or more non-real time meeting data may include text messages and/or audio messages shared amongst the plurality of participants. In some examples, the processor 202 may be configured to record the plurality of meeting snippets of the one or more non-real time meeting data using a similar methodology as is described above.
FIG. 4 is a diagram that illustrates an exemplary scenario of the meeting, in accordance with an embodiment of the disclosure. Referring to FIG. 4, the exemplary scenario 400 illustrates that each of the computing devices 104 generates the meeting data. Additionally, or alternatively, each of the computing devices 104 transmits the meeting data to the central server 102. The meeting data 402 transmitted by the computing device 104 a comprises text corresponding to the audio content spoken by the participant-1 associated with the computing device 104 a. The text indicates “referring to topic 1, participant 2 will provide the details”. Further, the timestamp associated with the meeting data, transmitted by the computing device 104 a, is T1. At time instant T2, the computing device 104 b generates the meeting data 404 that includes text obtained from the presentation content (by performing OCR). The text indicates “with reference to topic-1, the UI includes feature-1, feature-2, and feature-3”. Further, at the time instant T2, the exemplary scenario 400 illustrates that the computing device 104 c transmits an input 405 to the central server 102. - In an exemplary embodiment, the meeting
data monitoring unit 208 appends the task metadata 406 associated with the set of tasks assigned to the participant-1 to the meeting data received from the computing device 104 a. As illustrated, the task metadata for the participant-1 indicates “Design feature of UI” (depicted by 408). Additionally, or alternatively, the meeting data monitoring unit 208 appends the meeting metadata 410 to the meeting data 402 received from the computing device 104 a and the meeting data 404 received from the computing device 104 b. Since the computing device 104 c does not generate the meeting data, based on receiving the input from the computing device 104 c, the recording unit 212 may be configured to record the meeting data 402 received from the computing device 104 a and the meeting data 404 received from the computing device 104 b for the determined duration to generate a meeting snippet-1 (depicted by 412) and a meeting snippet-2 (depicted by 414), respectively. Further, the meeting data monitoring unit 208 may be configured to consider the meeting snippet-1 (depicted by 412) and the meeting snippet-2 (depicted by 414) as the meeting data 416 for the computing device 104 c. - In an exemplary embodiment, the meeting
data monitoring unit 208 may be configured to generate the transcript 418 from the meeting data 402 (received from the computing device 104 a), the transcript 420 from the meeting data 404 (received from the computing device 104 b), and the transcript 422 from the meeting data 416 associated with the computing device 104 c. The transcript 418 includes “Design feature for UI, UI development, feature 1 of UI is WIP”. The transcript 420 includes “Color scheme of UI, UI development, feature 2 of UI is complete”. The transcript 422 includes “Design feature for UI, UI development, feature 1 of UI is WIP, Color scheme of UI, UI development, feature 2 of UI is complete”. - The training unit 214 may be configured to generate the training corpuses 424, 426, and 428 based on the
transcript 418, the transcript 420, and the transcript 422, respectively. The training corpuses 424, 426, and 428 are associated with the participant-1, participant-2, and participant-3, respectively. Based on the training corpuses 424, 426, and 428, the training unit 214 may be configured to train the ML model-1 430, the ML model-2 432, and the ML model-3 434. The ML model-1 430, the ML model-2 432, and the ML model-3 434 are associated with the participant-1, participant-2, and participant-3, respectively. As discussed, the ML model is indicative of one or more topics and/or skills associated with a participant. For example, the ML model-1 430 includes “UI”, “feature-1”, and “C++” as the one or more topics and/or skills associated with the participant-1. In another example, the ML model-2 432 includes “UI”, “feature-2”, and “Java” as the one or more topics and/or skills associated with the participant-2. -
FIG. 5 is a diagram that illustrates another exemplary scenario illustrating generation of the one or more meeting recommendations, in accordance with an embodiment of the disclosure. Referring to FIG. 5, the exemplary scenario 500 includes the ML model-1 430, the ML model-2 432, and the ML model-3 434. Further, the exemplary scenario 500 illustrates the one or more topics 502, 504, and 506 associated with the respective ML models. The one or more topics 502 associated with the ML model-1 430 include “UI”, “feature-1”, and “C++”. Similarly, the one or more topics 504 associated with the ML model-2 432 include “UI”, “feature-2”, and “Java”. Further, the one or more topics 506 associated with the ML model-3 434 include “UI”, “feature 1”, and “Color scheme”. - Further, the
exemplary scenario 500 illustrates that the central server 102 receives an input from the computing device 104 a pertaining to scheduling a meeting. The input may further include the details pertaining to the plurality of participants of the meeting. For example, the details pertaining to the plurality of participants include the participant-1 and the participant-2. Thereafter, the recommendation unit 216 may be configured to utilize the ML model-1 430 and the ML model-2 432 (associated with the participant-1 and the participant-2, respectively) to determine the one or more topics associated with the participant-1 and the participant-2. Further, the recommendation unit 216 may be configured to determine an intersection between the one or more topics associated with the participant-1 and the one or more topics associated with the participant-2 (depicted by 508). For example, the recommendation unit 216 determines that the intersection between the one or more topics associated with the participant-1 and the participant-2 is “UI”. Accordingly, the recommendation unit 216 may be configured to generate the meeting recommendation “UI” (depicted by 510). -
FIG. 6 is a flowchart illustrating a method for training the ML model, in accordance with an embodiment of the disclosure. Referring to FIG. 6, at 602, the meeting data is received from the computing devices 104. In an exemplary embodiment, the processor 202 may be configured to receive the meeting data from each of the computing devices 104 during the meeting. At 604, the transcript is created based on the meeting data. In an exemplary embodiment, the meeting data monitoring unit 208 may be configured to transform the meeting data into the transcript. At 606, the ML model is trained based on the meeting data. In an exemplary embodiment, the training unit 214 may be configured to train the ML model based on the meeting data. In some examples, the training unit 214 may be configured to train the ML model for each of the plurality of participants. -
FIG. 7 is a flowchart illustrating another method for training the ML model, in accordance with an embodiment of the disclosure. Referring to FIG. 7, at 702, the meeting data is received from the computing devices 104. In an exemplary embodiment, the processor 202 may be configured to receive the meeting data from each of the computing devices 104 during the meeting. At 704, a trigger event is identified in the meeting data. In an exemplary embodiment, the trigger event identification unit 210 may be configured to identify the trigger event in the meeting data. At 706, the meeting data is recorded for the determined duration. In an exemplary embodiment, the recording unit 212 may be configured to record the meeting data to generate the meeting snippet. At 708, the transcript is created based on the meeting snippet. In an exemplary embodiment, the meeting data monitoring unit 208 may be configured to generate the transcript. At 710, the ML model is trained based on the meeting data. In an exemplary embodiment, the training unit 214 may be configured to train the ML model based on the meeting data. -
FIG. 8 is a flowchart 800 illustrating a method for generating one or more meeting recommendations, in accordance with an embodiment of the disclosure. Referring to FIG. 8, at 802, an input to schedule a meeting is received from a participant. In an exemplary embodiment, the processor 202 may be configured to receive the input. In an exemplary embodiment, the input includes the details of the other participants of the meeting. At 804, the one or more topics associated with each of the participants are determined based on the respective ML models. In an exemplary embodiment, the recommendation unit 216 may be configured to determine the one or more topics for each of the participants. At 806, the intersection of the one or more topics associated with each of the participants is determined. In an exemplary embodiment, the recommendation unit 216 is configured to determine the intersection. At 808, the one or more meeting recommendations are generated. In an exemplary embodiment, the recommendation unit 216 may be configured to determine the one or more meeting recommendations based on the intersection. - The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the operations may be performed in one or more different orders without departing from the various embodiments of the disclosure.
- The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may include a general purpose processor, a digital signal processor (DSP), a special-purpose processor such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA), a programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, or in addition, some operations or methods may be performed by circuitry that is specific to a given function.
- In one or more exemplary embodiments, the functions described herein may be implemented by special-purpose hardware or a combination of hardware programmed by firmware or other software. In implementations relying on firmware or other software, the functions may be performed as a result of execution of one or more instructions stored on one or more non-transitory computer-readable media and/or one or more non-transitory processor-readable media. These instructions may be embodied by one or more processor-executable software modules that reside on the one or more non-transitory computer-readable or processor-readable storage media. Non-transitory computer-readable or processor-readable storage media may in this regard comprise any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, disk storage, magnetic storage devices, or the like. Disk storage, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™, or other storage devices that store data magnetically or optically with lasers. Combinations of the above types of media are also included within the scope of the terms non-transitory computer-readable and processor-readable media. Additionally, any combination of instructions stored on the one or more non-transitory processor-readable or computer-readable media may be referred to herein as a computer program product.
- Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Although the figures only show certain components of the apparatus and systems described herein, it is understood that various other components may be used in conjunction with the systems described herein. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, the operations in the methods described above may not necessarily occur in the order depicted in the accompanying diagrams, and in some cases one or more of the operations depicted may occur substantially simultaneously, or additional operations may be involved. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/308,772 US20210365896A1 (en) | 2020-05-21 | 2021-05-05 | Machine learning (ml) model for participants |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063028123P | 2020-05-21 | 2020-05-21 | |
US17/308,772 US20210365896A1 (en) | 2020-05-21 | 2021-05-05 | Machine learning (ml) model for participants |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210365896A1 true US20210365896A1 (en) | 2021-11-25 |
Family
ID=78607913
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/308,887 Abandoned US20210367984A1 (en) | 2020-05-21 | 2021-05-05 | Meeting experience management |
US17/308,623 Active US11488116B2 (en) | 2020-05-21 | 2021-05-05 | Dynamically generated news feed |
US17/308,916 Abandoned US20210367986A1 (en) | 2020-05-21 | 2021-05-05 | Enabling Collaboration Between Users |
US17/308,264 Active US11537998B2 (en) | 2020-05-21 | 2021-05-05 | Capturing meeting snippets |
US17/308,329 Active US11416831B2 (en) | 2020-05-21 | 2021-05-05 | Dynamic video layout in video conference meeting |
US17/308,586 Abandoned US20210365893A1 (en) | 2020-05-21 | 2021-05-05 | Recommendation unit for generating meeting recommendations |
US17/308,640 Abandoned US20210367802A1 (en) | 2020-05-21 | 2021-05-05 | Meeting summary generation |
US17/308,772 Abandoned US20210365896A1 (en) | 2020-05-21 | 2021-05-05 | Machine learning (ml) model for participants |
Family Applications Before (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/308,887 Abandoned US20210367984A1 (en) | 2020-05-21 | 2021-05-05 | Meeting experience management |
US17/308,623 Active US11488116B2 (en) | 2020-05-21 | 2021-05-05 | Dynamically generated news feed |
US17/308,916 Abandoned US20210367986A1 (en) | 2020-05-21 | 2021-05-05 | Enabling Collaboration Between Users |
US17/308,264 Active US11537998B2 (en) | 2020-05-21 | 2021-05-05 | Capturing meeting snippets |
US17/308,329 Active US11416831B2 (en) | 2020-05-21 | 2021-05-05 | Dynamic video layout in video conference meeting |
US17/308,586 Abandoned US20210365893A1 (en) | 2020-05-21 | 2021-05-05 | Recommendation unit for generating meeting recommendations |
US17/308,640 Abandoned US20210367802A1 (en) | 2020-05-21 | 2021-05-05 | Meeting summary generation |
Country Status (1)
Country | Link |
---|---|
US (8) | US20210367984A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220173921A1 (en) * | 2018-08-29 | 2022-06-02 | Capital One Services, Llc | Managing meeting data |
US12175968B1 (en) * | 2021-03-26 | 2024-12-24 | Amazon Technologies, Inc. | Skill selection for responding to natural language inputs |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12400661B2 (en) | 2017-07-09 | 2025-08-26 | Otter.ai, Inc. | Systems and methods for capturing, processing, and rendering one or more context-aware moment-associating elements |
US11423911B1 (en) | 2018-10-17 | 2022-08-23 | Otter.ai, Inc. | Systems and methods for live broadcasting of context-aware transcription and/or other elements related to conversations and/or speeches |
US11765213B2 (en) * | 2019-06-11 | 2023-09-19 | Nextiva, Inc. | Mixing and transmitting multiplex audiovisual information |
US11595447B2 (en) | 2020-08-05 | 2023-02-28 | Toucan Events Inc. | Alteration of event user interfaces of an online conferencing service |
US12023116B2 (en) * | 2020-12-21 | 2024-07-02 | Cilag Gmbh International | Dynamic trocar positioning for robotic surgical system |
US11676623B1 (en) | 2021-02-26 | 2023-06-13 | Otter.ai, Inc. | Systems and methods for automatic joining as a virtual meeting participant for transcription |
US11937016B2 (en) * | 2021-05-26 | 2024-03-19 | International Business Machines Corporation | System and method for real-time, event-driven video conference analytics |
US11894938B2 (en) | 2021-06-21 | 2024-02-06 | Toucan Events Inc. | Executing scripting for events of an online conferencing service |
US11916687B2 (en) | 2021-07-28 | 2024-02-27 | Zoom Video Communications, Inc. | Topic relevance detection using automated speech recognition |
US11330229B1 (en) * | 2021-09-28 | 2022-05-10 | Atlassian Pty Ltd. | Apparatuses, computer-implemented methods, and computer program products for generating a collaborative contextual summary interface in association with an audio-video conferencing interface service |
US20230098137A1 (en) * | 2021-09-30 | 2023-03-30 | C/o Uniphore Technologies Inc. | Method and apparatus for redacting sensitive information from audio |
US11985180B2 (en) * | 2021-11-16 | 2024-05-14 | Microsoft Technology Licensing, Llc | Meeting-video management engine for a meeting-video management system |
US11722536B2 (en) | 2021-12-27 | 2023-08-08 | Atlassian Pty Ltd. | Apparatuses, computer-implemented methods, and computer program products for managing a shared dynamic collaborative presentation progression interface in association with an audio-video conferencing interface service |
WO2023158330A1 (en) * | 2022-02-16 | 2023-08-24 | Ringcentral, Inc., | System and method for rearranging conference recordings |
US20230297208A1 (en) * | 2022-03-16 | 2023-09-21 | Figma, Inc. | Collaborative widget state synchronization |
US12155729B2 (en) * | 2022-03-18 | 2024-11-26 | Zoom Video Communications, Inc. | App pinning in video conferences |
JP7459890B2 (en) * | 2022-03-23 | 2024-04-02 | セイコーエプソン株式会社 | Display methods, display systems and programs |
US12182502B1 (en) | 2022-03-28 | 2024-12-31 | Otter.ai, Inc. | Systems and methods for automatically generating conversation outlines and annotation summaries |
US20230401497A1 (en) * | 2022-06-09 | 2023-12-14 | Vmware, Inc. | Event recommendations using machine learning |
CN117459673A (en) * | 2022-07-19 | 2024-01-26 | 奥图码股份有限公司 | Electronic device and method for video conferencing |
US12095580B2 (en) * | 2022-10-31 | 2024-09-17 | Docusign, Inc. | Conferencing platform integration with agenda generation |
US11838139B1 (en) | 2022-10-31 | 2023-12-05 | Docusign, Inc. | Conferencing platform integration with assent tracking |
US12373644B2 (en) * | 2022-12-13 | 2025-07-29 | Calabrio, Inc. | Evaluating transcripts through repetitive statement analysis |
US20240395254A1 (en) * | 2023-05-24 | 2024-11-28 | Otter.ai, Inc. | Systems and methods for live summarization |
US20250119507A1 (en) * | 2023-10-09 | 2025-04-10 | Dell Products, L.P. | Handling conference room boundaries and/or context |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140188541A1 (en) * | 2012-12-30 | 2014-07-03 | David Goldsmith | Situational and global context aware calendar, communications, and relationship management |
US20180046957A1 (en) * | 2016-08-09 | 2018-02-15 | Microsoft Technology Licensing, Llc | Online Meetings Optimization |
US20180204128A1 (en) * | 2017-01-13 | 2018-07-19 | Fuji Xerox Co., Ltd. | Systems and methods for context aware redirection based on machine-learning |
US20190108834A1 (en) * | 2017-10-09 | 2019-04-11 | Ricoh Company, Ltd. | Speech-to-Text Conversion for Interactive Whiteboard Appliances Using Multiple Services |
WO2019183195A1 (en) * | 2018-03-22 | 2019-09-26 | Siemens Corporation | System and method for collaborative decentralized planning using deep reinforcement learning agents in an asynchronous environment |
US20190332994A1 (en) * | 2015-10-03 | 2019-10-31 | WeWork Companies Inc. | Generating insights about meetings in an organization |
US11119985B1 (en) * | 2021-03-19 | 2021-09-14 | Atlassian Pty Ltd. | Apparatuses, methods, and computer program products for the programmatic documentation of extrinsic event based data objects in a collaborative documentation service |
US20210319408A1 (en) * | 2020-04-09 | 2021-10-14 | Science House LLC | Platform for electronic management of meetings |
Family Cites Families (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10742433B2 (en) * | 2003-06-16 | 2020-08-11 | Meetup, Inc. | Web-based interactive meeting facility, such as for progressive announcements |
US6963352B2 (en) | 2003-06-30 | 2005-11-08 | Nortel Networks Limited | Apparatus, method, and computer program for supporting video conferencing in a communication system |
US7634540B2 (en) | 2006-10-12 | 2009-12-15 | Seiko Epson Corporation | Presenter view control system and method |
US8180029B2 (en) | 2007-06-28 | 2012-05-15 | Voxer Ip Llc | Telecommunication and multimedia management method and apparatus |
US8073922B2 (en) * | 2007-07-27 | 2011-12-06 | Twinstrata, Inc | System and method for remote asynchronous data replication |
US20090210933A1 (en) * | 2008-02-15 | 2009-08-20 | Shear Jeffrey A | System and Method for Online Content Production |
US9824333B2 (en) * | 2008-02-29 | 2017-11-21 | Microsoft Technology Licensing, Llc | Collaborative management of activities occurring during the lifecycle of a meeting |
US7912901B2 (en) * | 2008-08-12 | 2011-03-22 | International Business Machines Corporation | Automating application state of a set of computing devices responsive to scheduled events based on historical data |
US8214748B2 (en) | 2009-09-22 | 2012-07-03 | International Business Machines Corporation | Meeting agenda management |
US9615056B2 (en) | 2010-02-10 | 2017-04-04 | Oovoo, Llc | System and method for video communication on mobile devices |
US9264659B2 (en) | 2010-04-07 | 2016-02-16 | Apple Inc. | Video conference network management for a mobile device |
US8514263B2 (en) * | 2010-05-12 | 2013-08-20 | Blue Jeans Network, Inc. | Systems and methods for scalable distributed global infrastructure for real-time multimedia communication |
US20130191299A1 (en) | 2010-10-28 | 2013-07-25 | Talentcircles, Inc. | Methods and apparatus for a social recruiting network |
US20120144320A1 (en) | 2010-12-03 | 2012-06-07 | Avaya Inc. | System and method for enhancing video conference breaks |
US20120192080A1 (en) * | 2011-01-21 | 2012-07-26 | Google Inc. | Tailoring content based on available bandwidth |
US9210213B2 (en) * | 2011-03-03 | 2015-12-08 | Citrix Systems, Inc. | Reverse seamless integration between local and remote computing environments |
US9113032B1 (en) | 2011-05-31 | 2015-08-18 | Google Inc. | Selecting participants in a video conference |
US8941708B2 (en) | 2011-07-29 | 2015-01-27 | Cisco Technology, Inc. | Method, computer-readable storage medium, and apparatus for modifying the layout used by a video composing unit to generate a composite video signal |
JP2015507246A (en) | 2011-12-06 | 2015-03-05 | アグリーヤ モビリティ インコーポレーテッド | Seamless collaboration and communication |
US20130282820A1 (en) | 2012-04-23 | 2013-10-24 | Onmobile Global Limited | Method and System for an Optimized Multimedia Communications System |
US8914452B2 (en) * | 2012-05-31 | 2014-12-16 | International Business Machines Corporation | Automatically generating a personalized digest of meetings |
US9141504B2 (en) | 2012-06-28 | 2015-09-22 | Apple Inc. | Presenting status data received from multiple devices |
US10075676B2 (en) | 2013-06-26 | 2018-09-11 | Touchcast LLC | Intelligent virtual assistant system and method |
US9723075B2 (en) * | 2013-09-13 | 2017-08-01 | Incontact, Inc. | Systems and methods for data synchronization management between call centers and CRM systems |
US10484189B2 (en) * | 2013-11-13 | 2019-11-19 | Microsoft Technology Licensing, Llc | Enhanced collaboration services |
US9400833B2 (en) | 2013-11-15 | 2016-07-26 | Citrix Systems, Inc. | Generating electronic summaries of online meetings |
US20150358810A1 (en) | 2014-06-10 | 2015-12-10 | Qualcomm Incorporated | Software Configurations for Mobile Devices in a Collaborative Environment |
US10990620B2 (en) * | 2014-07-14 | 2021-04-27 | Verizon Media Inc. | Aiding composition of themed articles about popular and novel topics and offering users a navigable experience of associated content |
US20160117624A1 (en) | 2014-10-23 | 2016-04-28 | International Business Machines Incorporated | Intelligent meeting enhancement system |
US9939983B2 (en) * | 2014-12-17 | 2018-04-10 | Fuji Xerox Co., Ltd. | Systems and methods for plan-based hypervideo playback |
US9846528B2 (en) * | 2015-03-02 | 2017-12-19 | Dropbox, Inc. | Native application collaboration |
US20160307165A1 (en) | 2015-04-20 | 2016-10-20 | Cisco Technology, Inc. | Authorizing Participant Access To A Meeting Resource |
US20160350720A1 (en) | 2015-05-29 | 2016-12-01 | Citrix Systems, Inc. | Recommending meeting times based on previous meeting acceptance history |
US10255946B1 (en) | 2015-06-25 | 2019-04-09 | Amazon Technologies, Inc. | Generating tags during video upload |
DE112016003352T5 (en) | 2015-07-24 | 2018-04-12 | Max Andaker | Smooth user interface for virtual collaboration, communication and cloud computing |
US10620811B2 (en) * | 2015-12-30 | 2020-04-14 | Dropbox, Inc. | Native application collaboration |
US10572961B2 (en) * | 2016-03-15 | 2020-02-25 | Global Tel*Link Corporation | Detection and prevention of inmate to inmate message relay |
US20170308866A1 (en) * | 2016-04-22 | 2017-10-26 | Microsoft Technology Licensing, Llc | Meeting Scheduling Resource Efficiency |
US20180077092A1 (en) | 2016-09-09 | 2018-03-15 | Tariq JALIL | Method and system for facilitating user collaboration |
US10572858B2 (en) * | 2016-10-11 | 2020-02-25 | Ricoh Company, Ltd. | Managing electronic meetings using artificial intelligence and meeting rules templates |
US10510051B2 (en) | 2016-10-11 | 2019-12-17 | Ricoh Company, Ltd. | Real-time (intra-meeting) processing using artificial intelligence |
US20180101760A1 (en) * | 2016-10-11 | 2018-04-12 | Ricoh Company, Ltd. | Selecting Meeting Participants for Electronic Meetings Using Artificial Intelligence |
US9699410B1 (en) | 2016-10-28 | 2017-07-04 | Wipro Limited | Method and system for dynamic layout generation in video conferencing system |
TWI644565B (en) | 2017-02-17 | 2018-12-11 | 陳延祚 | Video image processing method and system using the same |
US20180270452A1 (en) | 2017-03-15 | 2018-09-20 | Electronics And Telecommunications Research Institute | Multi-point connection control apparatus and method for video conference service |
US10838396B2 (en) | 2017-04-18 | 2020-11-17 | Cisco Technology, Inc. | Connecting robotic moving smart building furnishings |
US20180331842A1 (en) | 2017-05-15 | 2018-11-15 | Microsoft Technology Licensing, Llc | Generating a transcript to capture activity of a conference session |
CN107342932B (en) * | 2017-05-23 | 2020-12-04 | 华为技术有限公司 | An information interaction method and terminal |
US9967520B1 (en) | 2017-06-30 | 2018-05-08 | Ringcentral, Inc. | Method and system for enhanced conference management |
US11412012B2 (en) * | 2017-08-24 | 2022-08-09 | Re Mago Holding Ltd | Method, apparatus, and computer-readable medium for desktop sharing over a web socket connection in a networked collaboration workspace |
US20200106735A1 (en) * | 2018-09-27 | 2020-04-02 | Salvatore Guerrieri | Systems and Methods for Communications & Commerce Between System Users and Non-System Users |
US20190172017A1 (en) * | 2017-12-04 | 2019-06-06 | Microsoft Technology Licensing, Llc | Tagging meeting invitees to automatically create tasks |
US20190205839A1 (en) * | 2017-12-29 | 2019-07-04 | Microsoft Technology Licensing, Llc | Enhanced computer experience from personal activity pattern |
TWI656942B (en) * | 2018-01-12 | 2019-04-21 | 財團法人工業技術研究院 | Machine tool collision avoidance method and system |
US11120199B1 (en) * | 2018-02-09 | 2021-09-14 | Voicebase, Inc. | Systems for transcribing, anonymizing and scoring audio content |
US10757148B2 (en) * | 2018-03-02 | 2020-08-25 | Ricoh Company, Ltd. | Conducting electronic meetings over computer networks using interactive whiteboard appliances and mobile devices |
US20190312917A1 (en) | 2018-04-05 | 2019-10-10 | Microsoft Technology Licensing, Llc | Resource collaboration with co-presence indicators |
CN108595645B (en) * | 2018-04-26 | 2020-10-30 | 深圳市鹰硕技术有限公司 | Conference speech management method and device |
US10735211B2 (en) * | 2018-05-04 | 2020-08-04 | Microsoft Technology Licensing, Llc | Meeting insight computing system |
JP2019215727A (en) * | 2018-06-13 | 2019-12-19 | レノボ・シンガポール・プライベート・リミテッド | Conference apparatus, conference apparatus control method, program, and conference system |
US11367095B2 (en) * | 2018-10-16 | 2022-06-21 | Igt | Unlockable electronic incentives |
US10606576B1 (en) * | 2018-10-26 | 2020-03-31 | Salesforce.Com, Inc. | Developer experience for live applications in a cloud collaboration platform |
US11016993B2 (en) * | 2018-11-27 | 2021-05-25 | Slack Technologies, Inc. | Dynamic and selective object update for local storage copy based on network connectivity characteristics |
CN111586674B (en) * | 2019-02-18 | 2022-01-14 | 华为技术有限公司 | Communication method, device and system |
US20200341625A1 (en) | 2019-04-26 | 2020-10-29 | Microsoft Technology Licensing, Llc | Automated conference modality setting application |
US20200374146A1 (en) | 2019-05-24 | 2020-11-26 | Microsoft Technology Licensing, Llc | Generation of intelligent summaries of shared content based on a contextual analysis of user engagement |
US11689379B2 (en) | 2019-06-24 | 2023-06-27 | Dropbox, Inc. | Generating customized meeting insights based on user interactions and meeting media |
US11262886B2 (en) * | 2019-10-22 | 2022-03-01 | Microsoft Technology Licensing, Llc | Structured arrangements for tracking content items on a shared user interface |
US11049511B1 (en) | 2019-12-26 | 2021-06-29 | Lenovo (Singapore) Pte. Ltd. | Systems and methods to determine whether to unmute microphone based on camera input |
US11049077B1 (en) | 2019-12-31 | 2021-06-29 | Capital One Services, Llc | Computer-based systems configured for automated electronic calendar management and work task scheduling and methods of use thereof |
US10999346B1 (en) | 2020-01-06 | 2021-05-04 | Dialogic Corporation | Dynamically changing characteristics of simulcast video streams in selective forwarding units |
US11989696B2 (en) * | 2020-01-16 | 2024-05-21 | Capital One Services, Llc | Computer-based systems configured for automated electronic calendar management with meeting room locating and methods of use thereof |
US10735212B1 (en) | 2020-01-21 | 2020-08-04 | Capital One Services, Llc | Computer-implemented systems configured for automated electronic calendar item predictions and methods of use thereof |
US11288636B2 (en) | 2020-01-23 | 2022-03-29 | Capital One Services, Llc | Computer-implemented systems configured for automated electronic calendar item predictions for calendar item rescheduling and methods of use thereof |
US11438841B2 (en) | 2020-01-31 | 2022-09-06 | Dell Products, Lp | Energy savings system based machine learning of wireless performance activity for mobile information handling system connected to plural wireless networks |
US11393176B2 (en) * | 2020-02-07 | 2022-07-19 | Krikey, Inc. | Video tools for mobile rendered augmented reality game |
US11095468B1 (en) | 2020-02-13 | 2021-08-17 | Amazon Technologies, Inc. | Meeting summary service |
US11488114B2 (en) | 2020-02-20 | 2022-11-01 | Sap Se | Shared collaborative electronic events for calendar services |
US11080356B1 (en) * | 2020-02-27 | 2021-08-03 | International Business Machines Corporation | Enhancing online remote meeting/training experience using machine learning |
WO2021194372A1 (en) * | 2020-03-26 | 2021-09-30 | Ringcentral, Inc. | Methods and systems for managing meeting notes |
US11470014B2 (en) | 2020-04-30 | 2022-10-11 | Dell Products, Lp | System and method of managing data connections to a communication network using tiered devices and telemetry data |
US11570219B2 (en) * | 2020-05-07 | 2023-01-31 | Re Mago Holding Ltd | Method, apparatus, and computer readable medium for virtual conferencing with embedded collaboration tools |
US11184560B1 (en) | 2020-12-16 | 2021-11-23 | Lenovo (Singapore) Pte. Ltd. | Use of sensor input to determine video feed to provide as part of video conference |
-
2021
- 2021-05-05 US US17/308,887 patent/US20210367984A1/en not_active Abandoned
- 2021-05-05 US US17/308,623 patent/US11488116B2/en active Active
- 2021-05-05 US US17/308,916 patent/US20210367986A1/en not_active Abandoned
- 2021-05-05 US US17/308,264 patent/US11537998B2/en active Active
- 2021-05-05 US US17/308,329 patent/US11416831B2/en active Active
- 2021-05-05 US US17/308,586 patent/US20210365893A1/en not_active Abandoned
- 2021-05-05 US US17/308,640 patent/US20210367802A1/en not_active Abandoned
- 2021-05-05 US US17/308,772 patent/US20210365896A1/en not_active Abandoned
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140188541A1 (en) * | 2012-12-30 | 2014-07-03 | David Goldsmith | Situational and global context aware calendar, communications, and relationship management |
US20190332994A1 (en) * | 2015-10-03 | 2019-10-31 | WeWork Companies Inc. | Generating insights about meetings in an organization |
US20180046957A1 (en) * | 2016-08-09 | 2018-02-15 | Microsoft Technology Licensing, Llc | Online Meetings Optimization |
US20180204128A1 (en) * | 2017-01-13 | 2018-07-19 | Fuji Xerox Co., Ltd. | Systems and methods for context aware redirection based on machine-learning |
US20190108834A1 (en) * | 2017-10-09 | 2019-04-11 | Ricoh Company, Ltd. | Speech-to-Text Conversion for Interactive Whiteboard Appliances Using Multiple Services |
WO2019183195A1 (en) * | 2018-03-22 | 2019-09-26 | Siemens Corporation | System and method for collaborative decentralized planning using deep reinforcement learning agents in an asynchronous environment |
US20210319408A1 (en) * | 2020-04-09 | 2021-10-14 | Science House LLC | Platform for electronic management of meetings |
US11119985B1 (en) * | 2021-03-19 | 2021-09-14 | Atlassian Pty Ltd. | Apparatuses, methods, and computer program products for the programmatic documentation of extrinsic event based data objects in a collaborative documentation service |
Non-Patent Citations (1)
Title |
---|
Nguyen, V. (2015). Guided probabilistic topic models for agenda-setting and framing (Order No. 3711770). Available from ProQuest Dissertations and Theses Professional. (1707358055). (Year: 2015) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220173921A1 (en) * | 2018-08-29 | 2022-06-02 | Capital One Services, Llc | Managing meeting data |
US11546183B2 (en) * | 2018-08-29 | 2023-01-03 | Capital One Services, Llc | Managing meeting data |
US11838142B2 (en) | 2018-08-29 | 2023-12-05 | Capital One Services, Llc | Managing meeting data |
US12175968B1 (en) * | 2021-03-26 | 2024-12-24 | Amazon Technologies, Inc. | Skill selection for responding to natural language inputs |
Also Published As
Publication number | Publication date |
---|---|
US20210367800A1 (en) | 2021-11-25 |
US20210367801A1 (en) | 2021-11-25 |
US11488116B2 (en) | 2022-11-01 |
US20210368134A1 (en) | 2021-11-25 |
US11416831B2 (en) | 2022-08-16 |
US20210365893A1 (en) | 2021-11-25 |
US11537998B2 (en) | 2022-12-27 |
US20210367986A1 (en) | 2021-11-25 |
US20210367984A1 (en) | 2021-11-25 |
US20210367802A1 (en) | 2021-11-25 |
Similar Documents
Publication | Title |
---|---|
US20210365896A1 (en) | Machine learning (ml) model for participants |
US11095468B1 (en) | Meeting summary service |
US10915570B2 (en) | Personalized meeting summaries |
US20210398028A1 (en) | Automatic reservation of a conference |
US11228542B2 (en) | Systems and methods for communication channel recommendations using machine learning |
US20190057698A1 (en) | In-call virtual assistant |
US10891436B2 (en) | Device and method for voice-driven ideation session management |
US20250258862A1 (en) | Suggested queries for transcript search |
US20160117624A1 (en) | Intelligent meeting enhancement system |
US20180101760A1 (en) | Selecting Meeting Participants for Electronic Meetings Using Artificial Intelligence |
CN111258528B (en) | Display method of voice user interface and conference terminal |
CN106686339A (en) | Electronic meeting intelligence |
CN106685916A (en) | Electronic meeting intelligence |
US20220027396A1 (en) | Customizing a data discovery user interface based on artificial intelligence |
US11665010B2 (en) | Intelligent meeting recording using artificial intelligence algorithms |
US20220321698A1 (en) | Emergency communication system with contextual snippets |
US20240395254A1 (en) | Systems and methods for live summarization |
CN114008621A (en) | Determining observations about a topic in a meeting |
US20230252809A1 (en) | Systems and methods for dynamically providing notary sessions |
WO2024167488A1 (en) | Systems and methods for dynamically providing notary sessions |
US20250165529A1 (en) | Interactive real-time video search based on knowledge graph |
US20240020463A1 (en) | Text based contextual audio annotation |
US20250111278A1 (en) | Method, apparatus, device and storage medium for processing information |
US20240414019A1 (en) | Methods and systems for monitoring and regulating various means of communication using artificial intelligence technology |
CN119324967A (en) | Method, apparatus, device and storage medium for information processing |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: HUDDL INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DAVULURI, NAVA;RAJAMANI, HARISH;YARLAGADDA, KRISHNA;AND OTHERS;REEL/FRAME:056242/0257. Effective date: 20210426 |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION COUNTED, NOT YET MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |