US20210367984A1 - Meeting experience management - Google Patents

Meeting experience management

Info

Publication number
US20210367984A1
Authority
US
United States
Prior art keywords
meeting
participant
processor
network parameters
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/308,887
Inventor
Harish Rajamani
Nava DAVULURI
Krishna Yarlagadda
Kirankumar Ravuri
Mallikarjuna Kamarthi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huddl Inc
Original Assignee
Huddl Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huddl Inc filed Critical Huddl Inc
Priority to US17/308,887
Assigned to HUDDL Inc. reassignment HUDDL Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: YARLAGADDA, KRISHNA, RAVURI, KiranKumar, KAMARTHI, MALLIKARJUNA, DAVULURI, NAVA, RAJAMANI, HARISH
Publication of US20210367984A1
Legal status: Abandoned (current)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/34 - Browsing; Visualisation therefor
    • G06F 16/345 - Summarisation for human users
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/954 - Navigation, e.g. using categorised browsing
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/10 - Office automation; Time management
    • G06Q 10/109 - Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q 10/1093 - Calendar-based scheduling for persons or groups
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/10 - Office automation; Time management
    • G06Q 10/109 - Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q 10/1097 - Time management, e.g. calendars, reminders, meetings or time accounting using calendar-based scheduling for task assignment
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G10L 15/18 - Speech classification or search using natural language modelling
    • G10L 15/1815 - Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/02 - Details
    • H04L 12/16 - Arrangements for providing special services to substations
    • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1818 - Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference, booking network resources, notifying involved parties
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/02 - Details
    • H04L 12/16 - Arrangements for providing special services to substations
    • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 - Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1831 - Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066 - Session management
    • H04L 65/1069 - Session establishment or de-establishment
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066 - Session management
    • H04L 65/1096 - Supplementary features, e.g. call forwarding or call holding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
    • H04L 65/4015 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/535 - Tracking the activity of the user
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/15 - Conference systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working
    • H04N 7/15 - Conference systems
    • H04N 7/155 - Conference systems involving storage of or access to video conference sessions
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 - Assembly of content; Generation of multimedia applications
    • H04N 21/854 - Content authoring
    • H04N 21/8549 - Creating video summaries, e.g. movie trailer

Definitions

  • the presently disclosed embodiments are related, in general, to a meeting. More particularly, the presently disclosed embodiments are related to meeting experience management during the meeting.
  • Meetings conducted over a communication network involve a plurality of participants who join the meeting through computing devices connected to the communication network.
  • the plurality of participants of the meeting may generate meeting data during the course of the meeting.
  • Some examples of the meeting data may include, but are not limited to, audio content, which may include a participant's voice/audio; video content, which may include the participant's video and/or other videos; meeting notes input by the plurality of participants; presentation content; and/or the like.
  • the plurality of participants may access the meeting data through their respective computing devices. Further, seamless reception of the meeting data on a computing device depends on one or more network parameters associated with the communication network to which the computing device is connected.
  • the one or more network parameters may include, but are not limited to, a network bandwidth, a platform through which the computing device is connected to the communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, a time of the call, and/or the like.
  • a recommendation unit for generating meeting recommendations is provided substantially as shown in, and/or described in connection with, at least one of the figures, as set forth more completely in the claims.
  • FIG. 1 is a block diagram that illustrates a system environment for training an ML model, in accordance with an embodiment of the disclosure.
  • FIG. 2 is a block diagram of a central server, in accordance with an embodiment of the disclosure.
  • FIG. 3 is a diagram that illustrates an exemplary scenario of predicting the one or more features of the meeting, in accordance with an embodiment of the disclosure.
  • FIG. 4 is a flowchart illustrating a method for training the ML model, in accordance with an embodiment of the disclosure.
  • FIG. 5 is a flowchart illustrating a method for predicting one or more features of the meeting to be enabled/disabled, in accordance with an embodiment of the disclosure.
  • the illustrated embodiments describe a method that includes receiving one or more current network parameters associated with at least one participant of a plurality of participants in a meeting.
  • the method further includes predicting one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant.
  • the ML model is trained based on the one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant.
  • the method further includes modifying a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting.
  • the UI enables participation of the at least one participant in the meeting.
  • the various embodiments illustrate a central server comprising a memory device comprising a set of instructions.
  • the central server further includes a processor communicatively coupled to the memory device.
  • the processor is configured to receive one or more current network parameters associated with at least one participant of a plurality of participants in a meeting.
  • the processor is further configured to predict one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant.
  • the ML model is trained based on the one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant.
  • the processor is further configured to modify a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting.
  • the UI enables participation of the at least one participant in the meeting.
  • the various embodiments describe a non-transitory computer-readable medium having stored thereon computer-readable instructions which, when executed by a computer, cause a processor in the computer to execute operations.
  • the operations comprise receiving, by a processor, one or more current network parameters associated with at least one participant of a plurality of participants in a meeting.
  • the operations comprise predicting, by the processor, one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant.
  • the ML model is trained based on the one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant.
  • the operations comprise modifying, by the processor, a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting.
  • the UI enables participation of the at least one participant in the meeting.
  • FIG. 1 is a block diagram that illustrates a system environment for generating one or more meeting recommendations, in accordance with an embodiment of the disclosure.
  • FIG. 1 shows a system environment 100 , which includes a central server 102 , one or more computing devices 104 a , 104 b , and 104 c , collectively referenced as computing devices 104 , and a communication network 106 .
  • the central server 102 and the computing devices 104 may be communicatively coupled with each other through the communication network 106 .
  • the central server 102 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to create a meeting session through which the computing devices 104 may communicate with each other.
  • the computing devices 104 may share content (referred to as meeting data) amongst each other via the meeting session.
  • the central server 102 may receive the meeting data from each of the computing devices 104 .
  • the central server 102 may be configured to monitor one or more network parameters associated with the communication link of each of the computing devices 104 .
  • the one or more network parameters may include, but are not limited to, a network bandwidth, a platform through which the computing device is connected to the communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, a time of the call, and/or the like.
  • the central server 102 may be configured to monitor input received from each of the computing devices 104 pertaining to enabling/disabling of one or more features of the meeting. Based on the one or more features of the meeting enabled/disabled and the one or more network parameters, the central server 102 may be configured to train an ML model for each of the computing devices 104 .
  • the central server 102 may be configured to utilize the ML model for each of the computing devices 104 to predict one or more features of the meeting that are to be enabled/disabled during a future meeting.
  • Examples of the central server 102 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, a computing device coupled to the computing devices 104 over a local network, an edge computing device, a cloud server, or any other computing device.
  • the disclosure may not be so limited and other embodiments may be included without limiting the scope of the disclosure.
  • the computing devices 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to connect to the meeting session, created by the central server 102 .
  • the computing devices 104 may be associated with the plurality of participants of the meeting.
  • the plurality of participants may provide one or more inputs during the meeting that may cause the computing devices 104 to generate the meeting data during the meeting.
  • the meeting data may correspond to the content shared amongst the computing devices 104 during the meeting.
  • the meeting data may comprise, but is not limited to, audio content that is generated by the plurality of participants as the plurality of participants speak during the meeting, video content that may include video feeds of the plurality of participants, meeting notes input by the plurality of participants during the meeting, presentation content, screen sharing content, file sharing content, and/or any other content shared during the meeting.
  • the computing devices 104 may be configured to transmit the meeting data to the central server 102 . Examples of the computing devices 104 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, or any other computing device.
  • the communication network 106 may include a communication medium through which each of the computing devices 104 associated with the plurality of participants may communicate with each other and/or with the central server 102 .
  • communication may be performed in accordance with various wired and wireless communication protocols.
  • Examples of such wired and wireless communication protocols include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, infrared (IR), IEEE 802.11, 802.16, 2G, 3G, 4G, 5G, and 6G cellular communication protocols, and/or Bluetooth (BT) communication protocols.
  • the communication network 106 may include, but is not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a telephone line (POTS), and/or a Metropolitan Area Network (MAN).
  • the central server 102 may receive a request, from a computing device 104 a , to generate the meeting session for a meeting.
  • the request may include meeting metadata associated with the meeting that is to be scheduled.
  • the meeting metadata may include, but is not limited to, an agenda of the meeting, one or more topics to be discussed during the meeting, a time duration of the meeting, a schedule of the meeting, meeting notes carried forward from previous meetings, a plurality of participants to attend the meeting, and/or the like.
  • the central server 102 may create the meeting session.
  • the meeting session may correspond to a communication session that allows the computing devices 104 to communicate with each other.
  • the meeting session may share unique keys (public and private keys) with the computing devices 104 , which allow the computing devices 104 to communicate with each other.
  • the unique keys corresponding to the meeting session may ensure that any other computing devices (other than the computing devices 104 ) are not allowed to join the meeting session.
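As an illustration of this key-based gating, a minimal sketch follows; the HMAC-token scheme, the function names, and the device identifiers are assumptions chosen for illustration and are not prescribed by the disclosure.

```python
import hashlib
import hmac
import secrets

def create_meeting_session(device_ids):
    """Create a session secret and one join token per invited device."""
    session_id = secrets.token_hex(8)          # public session identifier
    session_secret = secrets.token_bytes(32)   # server-held private key
    tokens = {
        dev: hmac.new(session_secret, dev.encode(), hashlib.sha256).hexdigest()
        for dev in device_ids
    }
    return session_id, session_secret, tokens

def verify_join(session_secret, device_id, presented_token):
    """Only devices holding a token derived from the session secret may join."""
    expected = hmac.new(session_secret, device_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, presented_token)

session_id, secret, tokens = create_meeting_session(["104a", "104b", "104c"])
assert verify_join(secret, "104a", tokens["104a"])      # invited device admitted
assert not verify_join(secret, "104d", "forged-token")  # uninvited device rejected
```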
  • the central server 102 may send a notification to the computing devices 104 pertaining to the scheduled meeting.
  • the notification may include the details of the meeting session.
  • the central server 102 may transmit the unique keys and/or the meeting metadata to the computing devices 104 .
  • the computing devices 104 may join the meeting through the meeting session.
  • the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to join the meeting session.
  • joining the meeting session has been interchangeably referred to as joining the meeting.
  • the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to share content amongst each other.
  • the plurality of participants may provide the one or more inputs to the computing devices 104 to cause the computing devices 104 to share the content amongst each other.
  • the plurality of participants may speak during the meeting.
  • the computing devices 104 may capture voice of the plurality of participants through one or more microphones to generate audio content.
  • the computing devices 104 may transmit the audio content over the communication network 106 (i.e., the meeting session). Additionally, or alternatively, the plurality of participants may share respective video feeds amongst each other by utilizing an image capturing device (e.g., a camera) associated with the computing devices 104 . Additionally, or alternatively, a participant-1 of the plurality of participants may present content saved on the computing device (for example, the computing device 104 a ) through a screen sharing capability. For example, the participant-1 may present content to other participants (of the plurality of participants) through the PowerPoint presentation application installed on the computing device 104 a . In some examples, the participant-1 may share content through other applications installed on the computing device 104 a .
  • the participant-1 may share content through the word processor application installed on the computing device 104 a . Additionally, or alternatively, the participant-1 may take meeting notes during the meeting.
  • the audio content, the video content, the meeting notes, and/or the screen sharing content may constitute the meeting data. Therefore, in some examples, the computing device 104 a may generate the meeting data during the meeting. Similarly, other computing devices 104 b and 104 c may also generate the meeting data during the meeting. Additionally, or alternatively, the computing devices 104 may transmit the meeting data to the central server 102 over the meeting session. In an exemplary embodiment, the computing devices 104 may transmit the meeting data in near real time of respective generation of the meeting data. To this end, the computing devices 104 may be configured to transmit the meeting data as and when the computing devices 104 generate the meeting data.
  • the central server 102 may be configured to monitor the one or more network parameters associated with the communication link of each of the computing devices 104 . Additionally, or alternatively, the central server 102 may be configured to record the inputs corresponding to enabling/disabling of the one or more features of the meeting.
  • the one or more features of the meeting may include, but are not limited to, transmission/reception of the audio content, transmission/reception of the video content, and/or the like. Additionally or alternatively, the one or more features may include, but are not limited to, modifying a layout of the user interface (UI) presented on each of the computing devices 104 .
  • the central server 102 may be configured to generate training data based on the one or more features of the meeting, and the one or more network parameters.
  • the central server 102 may be further configured to train an ML model for each of the computing devices 104 based on the training data. During a future meeting, the central server 102 may be configured to utilize the ML model to predict the one or more features of the meeting that are to be enabled/disabled based on the one or more network parameters recorded during the future meeting. Additionally or alternatively, the central server 102 may be configured to transmit a notification to the computing devices 104 pertaining to the one or more features of the meeting to be enabled/disabled.
  • FIG. 2 is a block diagram of the central server, in accordance with an embodiment of the disclosure.
  • a central server 102 comprises a processor 202 , a non-transitory computer readable medium 203 , a memory device 204 , a transceiver 206 , a network monitoring unit 208 , a training unit 210 , and a meeting experience management unit 212 .
  • the processor 202 may be embodied as one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an application specific integrated circuit (ASIC) or field programmable gate array (FPGA), or some combination thereof.
  • the processor 202 may include a plurality of processors and signal processing modules.
  • the plurality of processors may be embodied on a single electronic device or may be distributed across a plurality of electronic devices collectively configured to function as the circuitry of the central server 102 .
  • the plurality of processors may be in communication with each other and may be collectively configured to perform one or more functionalities of the circuitry of the central server 102 , as described herein.
  • the processor 202 may be configured to execute instructions stored in the memory device 204 or otherwise accessible to the processor 202 . These instructions, when executed by the processor 202 , may cause the circuitry of the central server 102 to perform one or more of the functionalities, as described herein.
  • the processor 202 may include an entity capable of performing operations according to embodiments of the present disclosure while configured accordingly.
  • the processor 202 when the processor 202 is embodied as an ASIC, FPGA or the like, the processor 202 may include specifically configured hardware for conducting one or more operations described herein.
  • the processor 202 when the processor 202 is embodied as an executor of instructions, such as may be stored in the memory device 204 , the instructions may specifically configure the processor 202 to perform one or more algorithms and operations described herein.
  • the processor 202 used herein may refer to a programmable microprocessor, microcomputer or multiple processor chip or chips that may be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described above.
  • multiple processors may be provided that may be dedicated to wireless communication functions and one processor may be dedicated to running other applications.
  • Software applications may be stored in the internal memory before they are accessed and loaded into the processors.
  • the processors may include internal memory sufficient to store the application software instructions.
  • the internal memory may be a volatile or non-volatile memory, such as flash memory, or a mixture of both.
  • the memory can also be located internal to another computing resource (e.g., enabling computer readable instructions to be downloaded over the Internet or another wired or wireless connection).
  • the non-transitory computer readable medium 203 may include any tangible or non-transitory storage media or memory media, such as electronic, magnetic, or optical media (e.g., a disk or CD/DVD-ROM), coupled to the processor 202 .
  • the memory device 204 may include suitable logic, circuitry, and/or interfaces that are adapted to store a set of instructions that is executable by the processor 202 to perform predetermined operations.
  • Some of the commonly known memory implementations include, but are not limited to, a hard disk, random access memory, cache memory, read only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof.
  • the memory device 204 may be integrated with the processor 202 on a single chip, without departing from the scope of the disclosure.
  • the transceiver 206 may correspond to a communication interface that may facilitate transmission and reception of messages and data to and from various devices (e.g., computing devices 104 ). Examples of the transceiver 206 may include, but are not limited to, an antenna, an Ethernet port, a USB port, a serial port, or any other port that can be adapted to receive and transmit data.
  • the transceiver 206 transmits and receives data and/or messages in accordance with various communication protocols, such as Bluetooth®, Infra-Red, I2C, TCP/IP, UDP, and 2G, 3G, 4G or 5G communication protocols.
  • the network monitoring unit 208 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to determine one or more network parameters associated with each of the computing devices 104 .
  • the one or more network parameters associated with a computing device may be deterministic of a quality of the communication link through which the computing device (e.g., computing device 104 a ) is connected to the central server 102 .
  • the one or more network parameters may include, but are not limited to, a network bandwidth of the communication link, a platform through which the computing device is connected to the communication network, an availability of audio bandwidth, an audio opinion score, an availability of video bandwidth, a video opinion score, a location of the computing device (e.g., the computing device 104 a ) and/or a time of the call.
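For concreteness, the per-device record of network parameters enumerated above could be represented as a simple structure; the field names, types, and units below are assumptions made for illustration, not the disclosure's data model.

```python
from dataclasses import dataclass

@dataclass
class NetworkParameters:
    """One observation of link quality for a participant's computing device."""
    bandwidth_kbps: float        # measured bandwidth of the communication link
    platform: str                # e.g., "wifi", "ethernet", "cellular"
    audio_bandwidth_kbps: float  # bandwidth available for audio
    audio_opinion_score: float   # audio MOS, typically 1.0-5.0
    video_bandwidth_kbps: float  # bandwidth available for video
    video_opinion_score: float   # video MOS, typically 1.0-5.0
    location: str                # e.g., derived from the device's IP address
    call_time: str               # e.g., "20:00"
```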
  • the network monitoring unit 208 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
  • the training unit 210 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to train a machine learning (ML) model for each of the computing devices 104 .
  • the ML model associated with each of the computing devices 104 may correspond to historical data that comprises the one or more network parameters associated with historical communication links through which the computing device (e.g., computing device 104 a ) connected to the central server 102 during previous meetings. Additionally or alternatively, the historical data may include information pertaining to one or more features of the UI that were enabled during the previous meetings.
  • the one or more features of the UI may include, but are not limited to, enabling/disabling video sharing, enabling/disabling audio, enabling/disabling screen sharing capability, and/or the like.
  • the training unit 210 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
  • the meeting experience management unit 212 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to utilize the ML model associated with the computing device (e.g., the computing device 104 a ) of the computing devices 104 to predict one or more UI features that are to be enabled and/or disabled based on the one or more network parameters associated with the communication link through which the computing device 104 a is connected to the central server 102 .
  • the meeting experience management unit 212 may be configured to generate and transmit one or more recommendations, pertaining to enabling/disabling the one or more UI features, to each of the computing devices 104 .
  • the meeting experience management unit 212 may be configured to automatically enable/disable the one or more UI features on each of the computing devices 104 .
  • the meeting experience management unit 212 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
  • the processor 202 may receive the request to schedule the meeting from at least one computing device 104 a of the computing devices 104 .
  • the request to schedule the meeting includes meeting metadata.
  • the meeting metadata includes the agenda of the meeting, the one or more topics to be discussed during the meeting, the time duration of the meeting, the schedule of the meeting, the meeting notes carried from previous meetings, the plurality of participants to attend the meeting, and/or the like.
  • Table 1 illustrates example meeting metadata, including an agenda (e.g., design of the UI), one or more topics to be discussed (e.g., the fields to be displayed in the UI and the current status of the project), a schedule (e.g., 2020, 9 PM to 10 PM), and one or more features of the meeting (e.g., feature 1, defined as a portion of the UI depicting participants, and feature 2).
  • the processor 202 may be configured to create the meeting session.
  • the meeting session corresponds to a communication session that allows the computing devices 104 to connect to the central server 102 through the communication link.
  • the meeting session allows the computing devices 104 to communicate amongst each other.
  • the computing devices 104 may share content (e.g., audio content and/or video content) amongst each other.
  • the processor 202 may be configured to transmit a message to each of the computing devices 104 comprising the details of the meeting session.
  • the message may include a link to connect to the meeting session.
  • the plurality of participants may cause the respective computing devices 104 to join the meeting session. For example, the participant may click on the link (received in the message from the central server 102 ) to cause the computing devices 104 to join the meeting session.
  • the central server 102 may transmit a User Interface (UI) to each of the computing devices 104 .
  • the UI may allow the plurality of participants to access one or more features of the meeting.
  • the UI may allow the plurality of participants to share audio content and/or video content.
  • the UI may provide control to the plurality of participants to enable/disable one or more peripherals coupled to the computing device 104 a .
  • the UI may allow the plurality of participants to enable/disable an image capturing device and/or an audio capturing device (examples of the one or more peripherals) in the computing devices 104 .
  • the UI may enable the plurality of participants to share other content.
  • the UI may provide a feature to the plurality of participants that would allow the plurality of participants to cause the computing devices 104 to share content/applications being displayed on a display device associated with the computing devices 104 .
  • the plurality of participants may cause the computing devices 104 to share a PowerPoint presentation being displayed on the computing devices 104 .
  • the UI may present a notes feature to the plurality of participants on the respective computing devices 104 .
  • the notes feature may enable the plurality of participants to input meeting notes or keep track of important points discussed during the meeting.
  • the notes feature of the UI may correspond to a space on the UI in which the plurality of participants may input text for his/her reference.
  • the text input by the plurality of participants may correspond to the meeting notes taken by the plurality of participants during the meeting.
  • the computing devices 104 may be configured to transmit the meeting notes to the central server 102 .
  • the central server 102 may be configured to share the meeting notes input by the plurality of participants amongst each of the computing devices 104 .
  • the central server 102 may not share the meeting notes input by the plurality of participants amongst each of the computing devices 104 .
  • the meeting notes are available locally to a participant who inputs the text corresponding to the meeting notes.
  • each of the computing devices 104 may generate meeting data during the meeting.
  • the meeting data may include, but is not limited to, the audio content generated by the plurality of participants as the plurality of participants speak during the meeting, the video content including video feeds of the plurality of participants, the meeting notes input by the plurality of participants during the meeting, the presentation content, the screen sharing content, the file sharing content, and/or any other content shared during the meeting.
  • the processor 202 may receive the meeting data from each of the computing devices 104 in real time.
  • the computing device 104 a may be configured to display the meeting data received from the other computing devices 104 through the UI.
  • the UI may be configured to present the video content received from each of the other computing devices 104 .
  • the video content received from each of the other computing devices 104 may be presented in a grid layout.
  • the grid layout may correspond to a matrix layout.
  • the processor 202 may be configured to transmit the meeting data to the computing devices 104 over the respective communication link.
  • the network monitoring unit 208 may be configured to determine the one or more network parameters associated with the communication link associated with each of the computing devices 104 .
  • the one or more network parameters may include, but are not limited to, a network bandwidth, a platform through which the computing device is connected to the communication network, an availability of audio bandwidth, an audio opinion score, an availability of video bandwidth, a video opinion score, a location of the computing device, a time of the call, and/or the like.
  • the network monitoring unit 208 may be configured to utilize known technologies to determine the one or more network parameters.
  • the network monitoring unit 208 may be configured to determine an IP address of the computing device 104 a . Thereafter, based on the IP address, the network monitoring unit 208 may be configured to determine the location of the computing device 104 a . It may be understood by a person having ordinary skill in the art that IP addresses are assigned to predetermined locations. Accordingly, the network monitoring unit 208 may be configured to determine the location of the computing device 104 a based on the IP address associated with the computing device 104 a . Additionally or alternatively, the network monitoring unit 208 may be configured to utilize the Global Positioning System (GPS) and/or triangulation techniques to determine the location of the computing device 104 a.
  • the network monitoring unit 208 may be configured to determine the network bandwidth.
  • the network monitoring unit 208 may be configured to determine a latency measure associated with the communication link of the computing device 104 a .
  • the network monitoring unit 208 may be configured to transmit an internet control message protocol (ICMP) message to the computing device 104 a to determine the latency associated with the communication link (through which the computing device 104 a is coupled to the central server 102 ).
  • the latency measure of the communication link is indicative of the network bandwidth of the communication link.
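A minimal sketch of such a latency probe follows, assuming a Unix-like system `ping` utility is available (raw ICMP sockets typically require elevated privileges); the host address and output parsing are illustrative, not the disclosure's prescribed method.

```python
import re
import subprocess
from typing import Optional

def estimate_latency_ms(host: str, count: int = 4) -> Optional[float]:
    """Send ICMP echo requests via the system ping utility and return the
    average round-trip time in milliseconds, or None if unreachable."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True,
    )
    # Parse lines such as: "64 bytes from ...: icmp_seq=1 ttl=57 time=12.3 ms"
    times = [float(t) for t in re.findall(r"time=([\d.]+) ms", result.stdout)]
    return sum(times) / len(times) if times else None

print(estimate_latency_ms("192.0.2.10"))  # documentation address; likely None
```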
  • the network monitoring unit 208 may be configured to download a predetermined file from the computing device 104 a to determine an upload bandwidth associated with the communication link through which the computing device 104 a is coupled to the central server 102 .
  • the network monitoring unit 208 may be configured to monitor a speed (e.g., in bits/second) at which the predetermined file is being downloaded by the central server 102 .
  • the speed at which the central server 102 downloads the predetermined file corresponds to the upload bandwidth of the communication link.
  • the network monitoring unit 208 may be configured to upload the predetermined file to the computing device 104 a to determine a download bandwidth associated with the communication link through which the computing device 104 a is coupled to the central server 102 .
  • the network monitoring unit 208 may be configured to monitor a speed (e.g., in bits/second) at which the computing device 104 a receives the predetermined file.
  • the speed at which the computing device 104 a receives the predetermined file corresponds to the download bandwidth of the communication link.
  • the upload bandwidth and the download bandwidth associated with the communication link correspond to the network bandwidth associated with the communication link.
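Either direction's estimate reduces to bytes transferred over elapsed time; the sketch below times a caller-supplied transfer callback standing in for the predetermined-file exchange described above (the callback and payload size are assumptions).

```python
import time

def measure_bandwidth_kbps(transfer_fn, payload_size_bytes: int) -> float:
    """Time a transfer of a payload of known size and return kbit/s.

    `transfer_fn` is any callable that moves the payload: uploading the
    predetermined file to the device measures its download bandwidth, and
    downloading the file from the device measures its upload bandwidth.
    """
    start = time.monotonic()
    transfer_fn()
    elapsed = time.monotonic() - start
    return (payload_size_bytes * 8 / 1000.0) / elapsed

# Stand-in transfer that pretends 1 MB took about 0.5 s:
print(measure_bandwidth_kbps(lambda: time.sleep(0.5), 1_000_000))  # ~16000 kbit/s
```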
  • the network monitoring unit 208 may be configured to determine a bitrate at which the audio content and/or the video content are to be transmitted to the computing device 104 a .
  • the bitrate at which the audio content and/or the video content (a portion of the meeting data) is to be transmitted to the computing device 104 a is predetermined.
  • the network monitoring unit 208 may be configured to determine the bitrate based on the network bandwidth associated with the communication link (through which the computing device 104 a is connected to the central server 102 ). More particularly, the network monitoring unit 208 may be configured to determine the audio bitrate and the video bitrate based on the audio bandwidth and the video bandwidth associated with the communication link, respectively.
  • the network monitoring unit 208 may be configured to determine the audio bitrate and the video bitrate based on a look-up table. The following table, Table 2, illustrates an example look-up table for determining the audio bitrate and the video bitrate:
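Table 2 itself is not reproduced in this text, so the sketch below uses invented placeholder thresholds and bitrates purely to show the shape of such a bandwidth-to-bitrate look-up.

```python
# (available_bandwidth_kbps lower bound) -> (audio_kbps, video_kbps)
# All threshold and bitrate values here are hypothetical placeholders.
BITRATE_LOOKUP = [
    (5000, (128, 2500)),
    (2000, (96, 1200)),
    (500, (64, 400)),
    (0, (32, 0)),  # audio only on very constrained links
]

def lookup_bitrates(bandwidth_kbps: float) -> tuple[int, int]:
    for floor, rates in BITRATE_LOOKUP:
        if bandwidth_kbps >= floor:
            return rates
    return BITRATE_LOOKUP[-1][1]

print(lookup_bitrates(800))  # (64, 400)
```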
  • the network monitoring unit 208 may be configured to automatically determine the audio bitrate and the video bitrate based on a known mathematical relation between the network bandwidth and the audio bitrate and the video bitrate.
  • bitrate = 2 * bandwidth * log2(M) (1)
  • where M is the number of discrete signal levels. However, the scope of the disclosure is not limited to determining the bitrate based on the aforementioned equation.
  • the network monitoring unit 208 may be configured to utilize another mathematical relation to determine the audio bitrate and the video bitrate.
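Equation (1) is the Nyquist capacity relation; a direct transcription:

```python
import math

def max_bitrate_bps(bandwidth_hz: float, signal_levels: int) -> float:
    """Equation (1): bitrate = 2 * bandwidth * log2(M), the Nyquist rate
    for a noiseless channel using M discrete signal levels."""
    return 2 * bandwidth_hz * math.log2(signal_levels)

print(max_bitrate_bps(3100, 4))  # 12400.0 bps for a 3100 Hz channel, M = 4
```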
  • the network monitoring unit 208 may be configured to determine the audio opinion score and the video opinion score.
  • the audio opinion score and the video opinion score correspond to a mean opinion score (MOS) that is indicative of a quality of the audio and/or video being experienced by the participant through the computing device 104 a .
  • to determine the MOS, the network monitoring unit 208 may be configured to utilize known protocols defined by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), such as ITU-T Recommendation P.800.
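The disclosure does not spell out the MOS computation; one common choice is the ITU-T G.107 E-model, whose R-factor-to-MOS mapping is sketched below. The latency and loss penalties in `estimate_audio_mos` are simplified assumptions, not the full E-model.

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 E-model mapping from transmission rating factor R to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

def estimate_audio_mos(latency_ms: float, packet_loss_pct: float) -> float:
    """Start from a default R of ~93 and subtract simplified, assumed
    penalties for one-way delay and packet loss (not the full E-model)."""
    r = 93.2 - 0.024 * latency_ms - 2.5 * packet_loss_pct
    return r_to_mos(r)

print(round(estimate_audio_mos(latency_ms=150, packet_loss_pct=1.0), 2))  # ~4.26
```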
  • the network bandwidth of the communication link may vary because of one or more other network services, other than the meeting, using the network bandwidth.
  • for example, another network device connected to the communication link may be streaming video on a video streaming service. Accordingly, the network bandwidth available for the meeting may be reduced.
  • in such a scenario, the experience of the meeting for the participant may worsen.
  • the computing device 104 a may not be able to receive video content and/or audio content.
  • the participant may experience choppy video content and/or audio content.
  • the participant may provide input through the UI to enable/disable one or more features of the meeting.
  • the participant may disable the reception of the video content.
  • the participant may disable transmission of the video content. Such disabling of reception/transmission of the video content may allow the network bandwidth to be available for seamless reception of the audio content.
  • the meeting experience management unit 212 may be configured to record the modifications made to the one or more features of the meeting, based on the inputs provided by the participant to modify the one or more features of the meeting.
  • the processor 202 may be configured to receive information pertaining to one or more peripherals attached to the computing device 104 a from the computing device 104 a .
  • the one or more peripherals may correspond to devices through which the participant may transmit/receive the audio content and/or video content.
  • examples of the one or more peripherals include, but are not limited to, an audio capturing device, an audio generating device, an image capturing device, and/or the like.
  • the information pertaining to the one or more peripherals may include a make of the one or more peripherals coupled to the computing device 104 a .
  • the processor 202 may be configured to determine a quality score of the meeting data generated by the one or more peripherals.
  • the processor 202 may be configured to determine a background noise in the audio content generated by the audio capturing device. Accordingly, based on the background noise, the processor 202 may be configured to determine the quality score associated with the audio content generated by the one or more peripherals. In another example, the processor 202 may be configured to determine pixelation in the video content generated by the image capturing device. Accordingly, based on the pixelation, the processor 202 may be configured to determine the quality score associated with the video content generated by the one or more peripherals. In an exemplary embodiment, the meeting experience management unit 212 may be configured to correlate the quality score associated with the meeting data and the information pertaining to the one or more peripherals coupled to the computing device 104 a.
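How the quality score is derived from background noise is left open in the text; one simple proxy, sketched below under stated assumptions, scores audio by the energy ratio between its loudest and quietest frames (the percentiles, the 40 dB range, and the 0-1 mapping are all invented for illustration).

```python
import numpy as np

def audio_quality_score(samples: np.ndarray, frame_len: int = 1024) -> float:
    """Crude 0-1 quality proxy: high when the quietest frames (background
    noise) are much weaker than the loudest frames (speech)."""
    usable = len(samples) // frame_len * frame_len
    frames = samples[:usable].reshape(-1, frame_len)
    frame_rms = np.sqrt((frames ** 2).mean(axis=1))
    noise_floor = np.percentile(frame_rms, 10)   # quietest 10% ~= background
    signal_level = np.percentile(frame_rms, 90)  # loudest 10% ~= speech
    snr_db = 20 * np.log10((signal_level + 1e-9) / (noise_floor + 1e-9))
    return float(np.clip(snr_db / 40.0, 0.0, 1.0))  # map ~0-40 dB onto 0-1

rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(24000)             # background-only stretch
speech = np.sin(np.linspace(0, 3000, 24000)) + quiet  # tone over the same noise
print(audio_quality_score(np.concatenate([quiet, speech])))  # ~0.9: clean audio
```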
  • the training unit 210 may be configured to generate historical data based on the one or more network parameters recorded by the network monitoring unit 208 and the one or more features of the meeting recorded by the meeting experience management unit 212 . Additionally or alternatively, the training unit 210 may be configured to include the correlation between the quality score associated with the meeting data and the information pertaining to the one or more peripherals coupled to the computing device 104 a .
  • the historical data may correspond to training data.
  • the one or more network parameters, the information pertaining to the one or more peripherals, and the quality score of the meeting data generated by the computing device 104 a may correspond to one or more features of the training data, and the one or more features of the meeting correspond to the one or more labels of the training data.
  • the one or more features of the training data may correspond to inputs that may be provided to an ML model (trained using the training data), and the one or more labels may correspond to the expected output of the ML model. The following table illustrates example historical data:
  • the participant may have provided the input to disable the screen sharing option and the video content transmission and/or reception for seamless reception of the audio content.
  • the training unit 210 may be configured to train the ML model based on the training data.
  • the training unit 210 may be configured to utilize known techniques, such as neural networks, the multi-dimensional Bayes theorem, Gaussian copulas, and/or the like, to train the ML model.
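A minimal sketch of this training step under stated assumptions: hand-rolled numeric encodings of the network parameters serve as features, and per-feature enable/disable decisions form a multi-label target; a scikit-learn decision tree stands in for the neural-network or Bayesian options named above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features per past observation: [bandwidth_kbps, hour_of_day, location_code]
# (location_code is an assumed encoding: 0 = office, 1 = home.)
X = np.array([
    [5000, 10, 0],
    [4500, 11, 0],
    [600, 20, 1],
    [550, 21, 1],
    [300, 22, 1],
])
# Labels per observation: [video_enabled, screen_share_enabled], recorded from
# what the participant actually toggled during previous meetings.
y = np.array([
    [1, 1],
    [1, 1],
    [0, 1],
    [0, 0],
    [0, 0],
])

# One model per participant, as in the disclosure.
model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[400, 20, 1]]))  # e.g., [[0 0]]: disable video and sharing
```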
  • the scope of the disclosure is not limited to the training unit 210 training the ML model for only the participant associated with the computing device 104 a .
  • the training unit 210 may be configured to train the ML model for other participants of the meeting, associated with the other computing devices (e.g., 104 b and 104 c ). To this end, the training unit 210 may be configured to train the ML model associated with the other participants using the one or more network parameters associated with the respective communication link through which the respective computing devices 104 connect to the central server 102 .
  • the meeting experience management unit 212 may be configured to utilize the ML model in future meetings to predict the one or more features of the meeting that are to be enabled/disabled for seamless reception of the meeting data.
  • the network monitoring unit 208 may be configured to determine one or more current network parameters associated with the communication link through which the computing device 104 a is coupled to the central server 102 . Thereafter, the meeting experience management unit 212 may be configured to utilize the ML model to predict the one or more features of the meeting that are to be enabled/disabled based on the one or more current network parameters.
  • the meeting experience management unit 212 may be configured to transmit a notification to the computing device 104 a pertaining to the one or more features of the meeting that are to be enabled/disabled. In another embodiment, the meeting experience management unit 212 may be configured to automatically enable/disable the one or more features of the meeting. For example, the meeting experience management unit 212 may be configured to disable the image capturing device of the computing device 104 a for disabling the transmission of the video content on the computing device 104 a . To this end, the meeting experience management unit 212 may be configured to halt the transmission of the video content to the computing device 104 a.
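Continuing the sketch above, the prediction and the notify-versus-auto-apply choice might look as follows; this reuses the `model` trained in the previous sketch, and the feature names are assumed.

```python
FEATURE_NAMES = ["video", "screen_share"]

def apply_predictions(model, current_params, auto_apply: bool = False):
    """Predict per-feature enable/disable flags for the current link, then
    either build a recommendation notification or apply the change directly."""
    flags = model.predict([current_params])[0]
    actions = {}
    for name, enabled in zip(FEATURE_NAMES, flags):
        if enabled:
            actions[name] = "keep enabled"
        elif auto_apply:
            actions[name] = "disabled automatically"
        else:
            actions[name] = f"recommend disabling {name} for seamless audio"
    return actions

print(apply_predictions(model, [350, 21, 1]))
# e.g., {'video': 'recommend disabling video for seamless audio', ...}
```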
  • the meeting experience management unit 212 may be configured to halt transmission of a portion of the video content.
  • the meeting experience management unit 212 may be configured to transmit the video content from the computing device 104 b and halt the transmission of the video content from the computing device 104 c , to the computing device 104 a .
  • the meeting experience management unit 212 may be configured to modify the UI itself, based on the one or more features of the meeting predicted using the ML model.
  • the meeting experience management unit 212 may be configured to modify the UI to display video content received from only the computing device 104 b . In such an embodiment, the grid layout of the UI is disabled.
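The UI change itself can be as simple as swapping a layout descriptor; the dictionary-based representation below is an assumption made for illustration, not the disclosure's UI model.

```python
def build_layout(video_sources, predicted):
    """Return a UI layout descriptor: a full grid when video is predicted to
    be sustainable, a single spotlight tile when only one feed should be
    received, and audio-only when video reception is predicted disabled."""
    if not predicted.get("video", True):
        return {"mode": "audio_only", "tiles": []}
    if predicted.get("single_tile_only", False):
        return {"mode": "spotlight", "tiles": video_sources[:1]}  # e.g., only 104b
    return {"mode": "grid", "tiles": video_sources}

print(build_layout(["104b", "104c"], {"video": True, "single_tile_only": True}))
# {'mode': 'spotlight', 'tiles': ['104b']}
```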
  • the scope of the disclosure is not limited to the enabling/disabling the one or more features of the meeting during the meeting.
  • the meeting experience management unit 212 may be configured to transmit the notification related to the one or more features of the meeting prior to the meeting.
  • the meeting experience management unit 212 may be configured to determine the one or more current network parameters associated with the communication link of the computing device 104 a at a predetermined time period prior to a scheduled time of the meeting. Thereafter, the meeting experience management unit 212 may be configured to utilize the ML model to predict the one or more features of the meeting that may have to be enabled/disabled during the meeting for seamless reception of the meeting data (e.g., the audio content).
  • the meeting experience management unit 212 may be configured to transmit the notification pertaining to enabling/disabling of the one or more features, to the computing device 104 a prior to the meeting.
  • the meeting experience management unit 212 may be configured to transmit the notification “Since you are joining from location X, disable the reception of the video content for seamless reception of the audio content.”
  • the meeting experience management unit 212 may be configured to transmit the notification “Since you are joining from location X at time 8:00 PM, disable the reception of the video content for seamless reception of the audio content.”
  • FIG. 3 is a diagram illustrating an exemplary scenario of predicting the one or more features of the meeting, in accordance with an embodiment of the disclosure.
  • the exemplary scenario 300 include the ML model 302 associated with the participant-1.
  • the participant-1 is using the computing device 104 a for joining the meeting.
  • the ML model has been created using historical data 304 .
  • the historical data 304 includes the one or more previous network parameters associated with the communication link, previously used to connect to the meeting.
  • the historical data includes the one or more network parameters, are illustrated in table 3.
  • the exemplary scenario 300 further illustrates that the central server 102 determines the one or more current network parameters 306 .
  • the one or more current network parameters 306 may include that the participant-1 is connecting to the meeting from home at 8:00 PM.
  • the meeting experience management unit 212 may be configured to utilize the ML model 302 to predict the one or more features 308 of the meeting based on the one or more current network parameters 306 .
  • the one or more features 308 includes disabling the reception of the video content. Accordingly, the meeting experience management unit 212 transmits a notification to the computing device 104 a pertaining to disabling the video content (depicted by 310 ). Alternatively, the meeting experience management unit 212 may modify the UI (depicted by 312 ) presented to the computing device 104 a .
  • the meeting experience management unit 212 may disable grid layout which may cause disabling of the reception of the video content from the other computing devices 104 .
  • the meeting experience management unit 212 may enable reception of the video content from a single computing device of the other computing devices 104 (e.g., computing device 104 b ).
  • FIG. 4 is a flowchart illustrating a method for training the ML model associated with the participant-1, in accordance with an embodiment of the disclosure.
  • the one or more network parameters are determined.
  • the network monitoring unit 208 may be configured to determine the one or more network parameters during the meeting.
  • inputs to enable/disable the one or more features of the meeting is recorded.
  • the processor 202 may be configured to record the input to enable/disable the one or more features of the meeting.
  • the training data is generated based on the one or more features of the meeting enabled/disabled, and the one or more network parameters associated with the communication link through which the computing device 104 a is connected to the central server 102 .
  • the training unit 210 may be configured to generate the training data.
  • the ML model associated with the participant-1 is trained.
  • the training unit 210 may be configured to train the ML model.
  • FIG. 5 is a flowchart illustrating a method for predicting the one or more features of the meeting to be enabled/disabled, in accordance with an embodiment of the disclosure.
  • the network monitoring unit 208 may be configured to determine the one or more current network parameters during the meeting.
  • the one or more features of the meeting are predicted using the ML model, based on the one or more current network parameters.
  • the meeting experience management unit 212 may be configured to predict the one or more features of the meeting based on the one or more current network parameters and the ML model.
  • the notification is generated based on the one or more features of the meeting predicted using the ML model.
  • the meeting experience management unit 212 may be configured to generate the notification.
  • the hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may include a general purpose processor, a digital signal processor (DSP), a special-purpose processor such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA), a programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • a general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, or in addition, some operations or methods may be performed by circuitry that is specific to a given function.
  • the functions described herein may be implemented by special-purpose hardware or a combination of hardware programmed by firmware or other software. In implementations relying on firmware or other software, the functions may be performed as a result of execution of one or more instructions stored on one or more non-transitory computer-readable media and/or one or more non-transitory processor-readable media. These instructions may be embodied by one or more processor-executable software modules that reside on the one or more non-transitory computer-readable or processor-readable storage media.
  • Non-transitory computer-readable or processor-readable storage media may in this regard comprise any storage media that may be accessed by a computer or a processor.
  • non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, disk storage, magnetic storage devices, or the like.
  • Disk storage includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™, or other storage devices that store data magnetically or optically with lasers. Combinations of the above types of media are also included within the scope of the terms non-transitory computer-readable and processor-readable media. Additionally, any combination of instructions stored on the one or more non-transitory processor-readable or computer-readable media may be referred to herein as a computer program product.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Computer Interaction (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Provided is a method that includes receiving one or more current network parameters associated with at least one participant of a plurality of participants in a meeting. The method further includes predicting one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant, wherein the ML model is trained based on one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant. The method further includes modifying a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting, wherein the UI enables participation of the at least one participant in the meeting.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE
  • This Application makes reference to, claims priority to, and claims benefit from U.S. Provisional Application Ser. No. 63/028,123, which was filed on May 21, 2020.
  • The above referenced Application is hereby incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • The presently disclosed embodiments are related, in general, to a meeting. More particularly, the presently disclosed embodiments are related to meeting experience management during the meeting.
  • BACKGROUND
  • Meetings, conducted over a communication network, involve a plurality of participants joining the meeting through computing devices connected to the communication network. In some examples, the plurality of participants of the meeting may generate meeting data during a course of the meeting. Some examples of the meeting data may include, but are not limited to, audio content which may include a participant's voice/audio, video content which may include a participant's video and/or other videos, meeting notes input by the plurality of participants, presentation content, and/or the like. In some examples, the plurality of participants may access the meeting data through the respective computing devices. Further, seamless reception of the meeting data on a computing device depends on one or more network parameters associated with the communication network to which the computing device is connected. The one or more network parameters may include, but are not limited to, a network bandwidth, a platform through which the computing device is connected to the communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, a time of the call, and/or the like.
  • Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of described systems with some aspects of the present disclosure, as set forth in the remainder of the present application and with reference to the drawings.
  • SUMMARY
  • A recommendation unit for generating meeting recommendations is provided substantially as shown in, and/or described in connection with, at least one of the figures, as set forth more completely in the claims.
  • These and other features and advantages of the present disclosure may be appreciated from a review of the following detailed description of the present disclosure, along with the accompanying figures in which like reference numerals refer to like parts throughout.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram that illustrates a system environment for training an ML model, in accordance with an embodiment of the disclosure;
  • FIG. 2 is a block diagram of a central server, in accordance with an embodiment of the disclosure;
  • FIG. 3 is a diagram that illustrates an exemplary scenario of predicting the one or more features of the meeting, in accordance with an embodiment of the disclosure;
  • FIG. 4 is a flowchart illustrating a method for training the ML model, in accordance with an embodiment of the disclosure; and
  • FIG. 5 is a flowchart illustrating a method for predicting one or more features of the meeting to be enabled/disabled, in accordance with an embodiment of the disclosure.
  • DETAILED DESCRIPTION
  • The illustrated embodiments describe a method that includes receiving one or more current network parameters associated with at least one participant of a plurality of participants in a meeting. The method further includes predicting one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant. The ML model is trained based on the one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant. The method further includes modifying a user interface of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting. The UI enables participation of the at least one participant in the meeting.
  • The various embodiments illustrate a central server comprising a memory device comprising a set of instructions. The central server further includes a processor communicatively coupled to the memory device. The processor is configured to receive one or more current network parameters associated with at least one participant of a plurality of participants in a meeting. The processor is further configured to predict one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant. The ML model is trained based on the one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant. The processor is further configured to modify a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting. The UI enables participation of the at least one participant in the meeting.
  • The various embodiments describe a non-transitory computer-readable medium having stored thereon computer-readable instructions which, when executed by a computer, cause a processor in the computer to execute operations. The operations comprise receiving, by a processor, one or more current network parameters associated with at least one participant of a plurality of participants in a meeting. The operations further comprise predicting, by the processor, one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant. The ML model is trained based on the one or more previous network parameters, associated with the at least one participant, received during a previous meeting attended by the at least one participant. The operations further comprise modifying, by the processor, a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting. The UI enables participation of the at least one participant in the meeting.
  • FIG. 1 is a block diagram that illustrates a system environment for generating one or more meeting recommendations, in accordance with an embodiment of the disclosure. Referring to FIG. 1, there is shown a system environment 100, which includes a central server 102, one or more computing devices 104 a, 104 b, and 104 c collectively referenced as computing devices 104, and a communication network 106. The central server 102 and the computing devices 104 may be communicatively coupled with each other through the communication network 106.
  • The central server 102 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to create a meeting session through which the computing devices 104 may communicate with each other. For example, the computing devices 104 may share content (referred to as meeting data) amongst each other via the meeting session. For example, the central server 102 may receive the meeting data from each of the computing devices 104. During the meeting, the central server 102 may be configured to monitor one or more network parameters associated with the communication link of each of the computing devices 104. The one or more network parameters may include, but are not limited to, a network bandwidth, a platform through which the computing device is connected to the communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, a time of the call, and/or the like. Further, the central server 102 may be configured to monitor input received from each of the computing devices 104 pertaining to enabling/disabling of one or more features of the meeting. Based on the one or more features of the meeting enabled/disabled and the one or more network parameters, the central server 102 may be configured to train an ML model for each of the computing devices 104. The central server 102 may be configured to utilize the ML model for each of the computing devices 104 to predict one or more features of the meeting that are to be enabled/disabled during a future meeting. Examples of the central server 102 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, a computing device coupled to the computing devices 104 over a local network, an edge computing device, a cloud server, or any other computing device. Notwithstanding, the disclosure may not be so limited and other embodiments may be included without limiting the scope of the disclosure.
  • The computing devices 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to connect to the meeting session, created by the central server 102. In an exemplary embodiment, the computing devices 104 may be associated with the plurality of participants of the meeting. The plurality of participants may provide one or more inputs during the meeting that may cause the computing devices 104 to generate the meeting data during the meeting. In an exemplary embodiment, the meeting data may correspond to the content shared amongst the computing devices 104 during the meeting. In some examples, the meeting data may include, but is not limited to, audio content that is generated by the plurality of participants as the plurality of participants speak during the meeting, video content that may include video feeds of the plurality of participants, meeting notes input by the plurality of participants during the meeting, presentation content, screen sharing content, file sharing content, and/or any other content shared during the meeting. In some examples, the computing devices 104 may be configured to transmit the meeting data to the central server 102. Examples of the computing devices 104 may include, but are not limited to, a personal computer, a laptop, a personal digital assistant (PDA), a mobile device, a tablet, or any other computing device.
  • In an embodiment, the communication network 106 may include a communication medium through which each of the computing devices 104 associated with the plurality of participants may communicate with each other and/or with the central server 102. Such a communication may be performed, in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, infrared (IR), IEEE 802.11, 802.16, 2G, 3G, 4G, 5G, 6G cellular communication protocols, and/or Bluetooth (BT) communication protocols. The communication network 106 may include, but is not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a telephone line (POTS), and/or a Metropolitan Area Network (MAN).
  • In operation, the central server 102 may receive a request, from a computing device 104 a, to generate the meeting session for a meeting. In an exemplary embodiment, the request may include meeting metadata associated with the meeting that is to be scheduled. In an exemplary embodiment, the meeting metadata may include, but not limited to, an agenda of the meeting, one or more topics to be discussed during the meeting, a time duration of the meeting, a schedule of the meeting, meeting notes carried forward from previous meetings, a plurality of participants to attend the meeting, and/or the like. Upon receiving the request, the central server 102 may create the meeting session. In an exemplary embodiment, the meeting session may correspond to a communication session that allows the computing devices 104 to communicate with each other. The meeting session may share unique keys (public and private keys) with the computing devices 104, which allows the computing devices 104 to communicate with each other. In some examples, the unique keys corresponding to the meeting session may ensure that any other computing devices (other than the computing devices 104) are not allowed to join the meeting session. Additionally, or alternatively, the central server 102 may send a notification to the computing devices 104 pertaining to the scheduled meeting. The notification may include the details of the meeting session. For example, the central server 102 may transmit the unique keys and/or the meeting metadata to the computing devices 104.
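For illustration, the keyed-admission idea can be sketched as follows; the class and method names are assumptions rather than the disclosure's API:

```python
# Hypothetical sketch of the keyed-admission idea: the session issues a
# private key per invited device and admits only devices presenting the
# key issued to them. Names here are illustrative assumptions.
import secrets
from dataclasses import dataclass, field

@dataclass
class MeetingSession:
    meeting_id: str
    public_key: str = field(default_factory=lambda: secrets.token_hex(16))
    private_keys: dict = field(default_factory=dict)

    def invite(self, device_id: str) -> str:
        """Issue and remember a private key for an invited device."""
        key = secrets.token_hex(32)
        self.private_keys[device_id] = key
        return key

    def admit(self, device_id: str, presented_key: str) -> bool:
        """Admit a device only if it presents the key issued to it."""
        issued = self.private_keys.get(device_id, "")
        return bool(issued) and secrets.compare_digest(issued, presented_key)

session = MeetingSession("meeting-42")
key_a = session.invite("computing-device-104a")
assert session.admit("computing-device-104a", key_a)    # invited device joins
assert not session.admit("other-device", "forged-key")  # others are rejected
```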
  • The computing devices 104 may join the meeting through the meeting session. In an exemplary embodiment, the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to join the meeting session. In an exemplary embodiment, joining the meeting session has been interchangeably referred to as joining the meeting. Thereafter, the plurality of participants associated with the computing devices 104 may cause the computing devices 104 to share content amongst each other. For instance, the plurality of participants may provide the one or more inputs to the computing devices 104 to cause the computing devices 104 to share the content amongst each other. For example, the plurality of participants may speak during the meeting. The computing devices 104 may capture voice of the plurality of participants through one or more microphones to generate audio content. Further, the computing devices 104 may transmit the audio content over the communication network 106 (i.e., meeting session). Additionally, or alternatively, the plurality of participants may share respective video feeds amongst each other by utilizing image capturing device (e.g., camera) associated with the computing devices 104. Additionally, or alternatively, a participant-1 of the plurality of participants may present content saved on the computing device (for example, the computing device 104 a) through screen sharing capability. For example, the participant-1 may present content to other participants (of the plurality of participants) through the power point presentation application installed on the computing device 104 a. In some examples, the participant-1 may share content through other applications installed on the computing device 104 a. For example, the participant-1 may share content through the word processor application installed on the computing device 104 a. Additionally, or alternatively, the participant-1 may take meeting notes during the meeting. In an exemplary embodiment, the audio content, the video content, the meeting notes, and/or the screen sharing content (e.g., through applications installed on the computing device 104 a) may constitute the meeting data. Therefore, in some examples, the computing device 104 a may generate the meeting data during the meeting. Similarly, other computing devices 104 b and 104 c may also generate the meeting data during the meeting. Additionally, or alternatively, the computing devices 104 may transmit the meeting data to the central server 102 over the meeting session. In an exemplary embodiment, the computing devices 104 may transmit the meeting data in near real time of respective generation of the meeting data. To this end, the computing devices 104 may be configured to transmit the meeting data as and when the computing devices 104 generate the meeting data.
  • In an exemplary embodiment, the central server 102 may be configured to monitor the one or more network parameters associated with the communication link of each of the computing devices 104. Additionally, or alternatively, the central server 102 may be configured to record the inputs corresponding to enabling/disabling of the one or more features of the meeting. In an exemplary embodiment, the one or more features of the meeting may include, but are not limited to, transmission/reception of the audio content, transmission/reception of the video content, and/or the like. Additionally or alternatively, the one or more features may include, but are not limited to, modifying a layout of a user interface (UI) presented on each of the computing devices 104. In an exemplary embodiment, the central server 102 may be configured to generate training data based on the one or more features of the meeting and the one or more network parameters.
  • In an exemplary embodiment, the central server 102 may be further configured to train an ML model for each of the computing devices 104 based on the training data. During a future meeting, the central server 102 may be configured to utilize the ML model to predict the one or more features of the meeting that are to be enabled/disabled based on the one or more network parameters recorded during the future meeting. Additionally or alternatively, the central server 102 may be configured to transmit a notification to the computing devices 104 pertaining to the one or more features of the meeting to be enabled/disabled.
  • FIG. 2 is a block diagram of the central server, in accordance with an embodiment of the disclosure. Referring to FIG. 2, there is shown a central server 102 that comprises a processor 202, a non-transitory computer readable medium 203, a memory device 204, a transceiver 206, a network monitoring unit 208, a training unit 210, and a meeting experience management unit 212.
  • The processor 202 may be embodied as one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an application specific integrated circuit (ASIC) or field programmable gate array (FPGA), or some combination thereof.
  • Accordingly, although illustrated in FIG. 2 as a single controller, in an exemplary embodiment, the processor 202 may include a plurality of processors and signal processing modules. The plurality of processors may be embodied on a single electronic device or may be distributed across a plurality of electronic devices collectively configured to function as the circuitry of the central server 102. The plurality of processors may be in communication with each other and may be collectively configured to perform one or more functionalities of the circuitry of the central server 102, as described herein. In an exemplary embodiment, the processor 202 may be configured to execute instructions stored in the memory device 204 or otherwise accessible to the processor 202. These instructions, when executed by the processor 202, may cause the circuitry of the central server 102 to perform one or more of the functionalities, as described herein.
  • Whether configured by hardware, firmware/software methods, or by a combination thereof, the processor 202 may include an entity capable of performing operations according to embodiments of the present disclosure while configured accordingly. Thus, for example, when the processor 202 is embodied as an ASIC, FPGA or the like, the processor 202 may include specifically configured hardware for conducting one or more operations described herein. Alternatively, as another example, when the processor 202 is embodied as an executor of instructions, such as may be stored in the memory device 204, the instructions may specifically configure the processor 202 to perform one or more algorithms and operations described herein.
  • The processor 202 used herein may refer to a programmable microprocessor, microcomputer or multiple processor chip or chips that may be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described above. In some devices, multiple processors may be provided that may be dedicated to wireless communication functions and one processor may be dedicated to running other applications. Software applications may be stored in the internal memory before they are accessed and loaded into the processors. The processors may include internal memory sufficient to store the application software instructions. In many devices, the internal memory may be a volatile or non-volatile memory, such as flash memory, or a mixture of both. The memory can also be located internal to another computing resource (e.g., enabling computer readable instructions to be downloaded over the Internet or another wired or wireless connection).
  • The non-transitory computer readable medium 203 may include any tangible or non-transitory storage media or memory media such as electronic, magnetic, or optical media e.g., disk or CD/DVD-ROM coupled to processor 202.
  • The memory device 204 may include suitable logic, circuitry, and/or interfaces that are adapted to store a set of instructions that is executable by the processor 202 to perform predetermined operations. Some of the commonly known memory implementations include, but are not limited to, a hard disk, random access memory, cache memory, read only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, a compact disc read only memory (CD-ROM), digital versatile disc read only memory (DVD-ROM), an optical disc, circuitry configured to store information, or some combination thereof. In an exemplary embodiment, the memory device 204 may be integrated with the processor 202 on a single chip, without departing from the scope of the disclosure.
  • The transceiver 206 may correspond to a communication interface that may facilitate transmission and reception of messages and data to and from various devices (e.g., computing devices 104). Examples of the transceiver 206 may include, but are not limited to, an antenna, an Ethernet port, a USB port, a serial port, or any other port that can be adapted to receive and transmit data. The transceiver 206 transmits and receives data and/or messages in accordance with the various communication protocols, such as, Bluetooth®, Infra-Red, I2C, TCP/IP, UDP, and 2G, 3G, 4G or 5G communication protocols.
  • The network monitoring unit 208 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to determine one or more network parameters associated with each of the computing devices 104. In some examples, the one or more network parameters associated with a computing device (e.g., computing device 104 a) may be deterministic of a quality of the communication link through which the computing device (e.g., computing device 104 a) is connected to the central server 102. In an example embodiment, the one or more network parameters may include, but are not limited to, a network bandwidth of the communication link, a platform through which the computing device is connected to the communication network, an availability of audio bandwidth, an audio opinion score, an availability of video bandwidth, a video opinion score, a location of the computing device (e.g., the computing device 104 a), and/or a time of the call. The network monitoring unit 208 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
  • The training unit 210 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to train a machine learning (ML) model for each of the computing devices 104. In an example embodiment, the ML model associated with each of the computing devices 104 may correspond to historical data that comprises the one or more network parameters associated with historical communication links through which the computing device (e.g., computing device 104 a) was connected to the central server 102 during previous meetings. Additionally or alternatively, the historical data may include information pertaining to one or more features of the UI that were enabled during the previous meetings. As discussed, the one or more features of the UI may include, but are not limited to, enabling/disabling video sharing, enabling/disabling audio, enabling/disabling screen sharing capability, and/or the like. The training unit 210 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
  • The meeting experience management unit 212 may comprise suitable logic, circuitry, interfaces, and/or code that may configure the central server 102 to utilize the ML model associated with the computing device (e.g., the computing device 104 a) of the computing devices 104 to predict one or more UI features that are to be enabled and/or disabled based on the one or more network parameters associated with the communication link through which the computing device 104 a is connected to the central server 102. In one embodiment, the meeting experience management unit 212 may be configured to generate and transmit one or more recommendations, pertaining to enabling/disabling the one or more UI features, to each of the computing devices 104. In another embodiment, the meeting experience management unit 212 may be configured to automatically enable/disable the one or more UI features on each of the computing devices 104. The meeting experience management unit 212 may be implemented using a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC).
  • In operation, the processor 202 may receive the request to schedule the meeting from at least one computing device 104 a of the computing devices 104. In an exemplary embodiment, the request to schedule the meeting includes meeting metadata. As discussed, the meeting metadata includes the agenda of the meeting, the one or more topics to be discussed during the meeting, the time duration of the meeting, the schedule of the meeting, the meeting notes carried from previous meetings, the plurality of participants to attend the meeting, and/or the like. The following table, Table 1, illustrates an example meeting metadata:
  • TABLE 1
    Example meeting metadata
    Agenda: To discuss design of the User Interface (UI)
    One or more topics: 1. Layout; 2. Fields to be displayed in UI; 3. Current status of project
    Time duration: 1 hour
    Schedule of the meeting: 15th Nov. 2020; 9 PM to 10 PM
    Meeting notes from previous meetings: 1. UI to include feature 1, feature 2; 2. Feature 1 defined as a portion depicting participants; 3. Feature 2 depicting chat box
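Purely for illustration, the metadata of Table 1 could travel inside the scheduling request as a simple mapping; the field names below are assumptions, not defined by the disclosure:

```python
# Illustrative only: the meeting metadata of Table 1 expressed as a
# mapping that could be carried in the scheduling request.
meeting_metadata = {
    "agenda": "To discuss design of the User Interface (UI)",
    "topics": [
        "Layout",
        "Fields to be displayed in UI",
        "Current status of project",
    ],
    "duration_minutes": 60,
    "schedule": "15th Nov. 2020; 9 PM to 10 PM",
    "notes_from_previous_meetings": [
        "UI to include feature 1, feature 2",
        "Feature 1 defined as a portion depicting participants",
        "Feature 2 depicting chat box",
    ],
}
```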
  • In an exemplary embodiment, based on reception of the request to schedule the meeting, the processor 202 may be configured to create the meeting session. As discussed, the meeting session corresponds to a communication session that allows the computing devices 104 to connect to the central server 102 through the communication link. Further, the meeting session allows the computing devices 104 to communicate amongst each other. For example, over the meeting session, the computing devices 104 may share content (e.g., audio content and/or video content) amongst each other. In an exemplary embodiment, the processor 202 may be configured to transmit a message to each of the computing devices 104 comprising the details of the meeting session. For example, the message may include a link to connect to the meeting session.
  • At the scheduled time, the plurality of participants may cause the respective computing devices 104 to join the meeting session. For example, the participant may click on the link (received in the message from the central server 102) to cause the computing devices 104 to join the meeting session. Based on the computing devices 104 joining the meeting session, the central server 102 may transmit a User Interface (UI) to each of the computing devices 104. In an exemplary embodiment, the UI may allow the plurality of participants to access one or more features of the meeting. For example, the UI may allow the plurality of participants to share audio content and/or video content. To this end, the UI may provide control to the plurality of participants to enable/disable one or more peripherals coupled to the computing device 104 a. For example, the UI may allow the plurality of participants to enable/disable an image capturing device and/or an audio capturing device (examples of the one or more peripherals) in the computing devices 104. Additionally, or alternatively, the UI may enable the plurality of participants to share other content. For example, the UI may provide a feature to the plurality of participants that would allow the plurality of participants to cause the computing devices 104 to share content/applications being displayed on a display device associated with the computing devices 104. For instance, through the UI, the plurality of participants may cause the computing devices 104 to share a PowerPoint presentation being displayed on the computing devices 104. Additionally, or alternatively, the UI may present a notes feature to the plurality of participants on respective computing devices 104. The notes feature may enable the plurality of participants to input meeting notes or keep track of important points discussed during the meeting. For example, the notes feature of the UI may correspond to a space on the UI in which the plurality of participants may input text for his/her reference. Further, the text input by the plurality of participants may correspond to the meeting notes taken by the plurality of participants during the meeting. Additionally, or alternatively, the computing devices 104 may be configured to transmit the meeting notes to the central server 102. Further, in one embodiment, the central server 102 may be configured to share the meeting notes input by the plurality of participants amongst each of the computing devices 104. In an alternative embodiment, the central server 102 may not share the meeting notes input by the plurality of participants amongst each of the computing devices 104. In such an embodiment, the meeting notes are available locally to a participant who inputs the text corresponding to the meeting notes.
  • The plurality of participants may utilize the UI to access the one or more features of the meeting to interact and/or share content amongst each other. Accordingly, each of the computing devices 104 may generate meeting data during the meeting. As discussed, the meeting data may include, but is not limited to, the audio content generated by the plurality of participants as the plurality of participants speak during the meeting, the video content including video feeds of the plurality of participants, the meeting notes input by the plurality of participants during the meeting, the presentation content, the screen sharing content, the file sharing content, and/or any other content shared during the meeting. To this end, in an exemplary embodiment, the processor 202 may receive the meeting data from each of the computing devices 104 in real time. Further, the computing device 104 a may be configured to display the meeting data received from the other computing devices 104 through the UI. For example, the UI may be configured to present the video content received from each of the other computing devices 104. In such an embodiment, the video content received from each of the other computing devices 104 may be presented in a grid layout. In some examples, the grid layout may correspond to a matrix layout. For example, if the video content is received from 9 other computing devices 104, the UI may be configured to present the video content in a 3×3 matrix layout, as sketched below. Additionally or alternatively, the processor 202 may be configured to transmit the meeting data to the computing devices 104 over the respective communication link.
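For concreteness, a minimal sketch of how such a near-square grid could be computed from the number of incoming feeds (an illustration, not the disclosure's layout algorithm):

```python
import math

def grid_dimensions(n_feeds):
    """Smallest near-square grid that fits n video feeds:
    9 feeds -> (3, 3); 10 feeds -> (3 rows, 4 columns)."""
    if n_feeds <= 0:
        return (0, 0)
    cols = math.ceil(math.sqrt(n_feeds))
    rows = math.ceil(n_feeds / cols)
    return (rows, cols)

assert grid_dimensions(9) == (3, 3)  # the 3x3 matrix layout described above
```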
  • During the meeting, the network monitoring unit 208 may be configured to determine the one or more network parameters associated with the communication link associated with each of the computing devices 104. As discussed, the one or more network parameters may include, but are not limited to, a network bandwidth, a platform through which the computing device is connected to the communication network, an availability of audio bandwidth, an audio opinion score, an availability of video bandwidth, a video opinion score, a location of the computing device, a time of the call, and/or the like. In some examples, the network monitoring unit 208 may be configured to utilize known technologies to determine the one or more network parameters.
  • For example, to determine the location of a computing device (e.g., the computing device 104 a), the network monitoring unit 208 may be configured to determine an IP address of the computing device 104 a. Thereafter, based on the IP address, the network monitoring unit 208 may be configured to determine the location of the computing device 104 a. It may be understood by a person having ordinary skill in the art that each IP address is assigned to a predetermined location. Accordingly, the network monitoring unit 208 may be configured to determine the location of the computing device 104 a based on the IP address associated with the computing device 104 a. Additionally or alternatively, the network monitoring unit 208 may be configured to utilize the Global Positioning System (GPS) and/or triangulation techniques to determine the location of the computing device 104 a.
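A minimal sketch of the IP-to-location idea, assuming access to some IP-to-location table; the prefix table below uses fabricated documentation-range addresses purely for illustration, and a real system would query a geolocation database or service instead:

```python
import ipaddress

# Fabricated example mapping; a production system would consult a
# geolocation database rather than a hard-coded table.
IP_LOCATION_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "Delhi",
    ipaddress.ip_network("198.51.100.0/24"): "New York",
}

def lookup_location(ip):
    """Return the location assigned to the given IP address, if known."""
    addr = ipaddress.ip_address(ip)
    for network, location in IP_LOCATION_TABLE.items():
        if addr in network:
            return location
    return "unknown"

print(lookup_location("203.0.113.7"))  # -> Delhi
```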
  • In another example, the network monitoring unit 208 may be configured to determine the network bandwidth. For example, the network monitoring unit 208 may be configured to determine a latency measure associated with the communication link of the computing device 104 a. To this end, the network monitoring unit 208 may be configured to transmit an internet control message protocol (ICMP) message to the computing device 104 a to determine the latency associated with the communication link (through which the computing device 104 a is coupled to the central server 102). The latency measure of the communication link is indicative of the network bandwidth of the communication link. Additionally or alternatively, the network monitoring unit 208 may be configured to download a predetermined file from the computing device 104 a to determine an upload bandwidth associated with the communication link through which the computing device 104 a is coupled to the central server 102. For example, the network monitoring unit 208 may be configured to monitor a speed (e.g., in bits/second) at which the predetermined file is being downloaded by the central server 102. The speed at which the central server 102 downloads the predetermined file corresponds to the upload bandwidth of the communication link. Additionally or alternatively, the network monitoring unit 208 may be configured to upload the predetermined file to the computing device 104 a to determine a download bandwidth associated with the communication link through which the computing device 104 a is coupled to the central server 102. For example, the network monitoring unit 208 may be configured to monitor a speed (e.g., in bits/second) at which the central server 102 receives the predetermined file. The speed at which the central server 102 receives the predetermined file corresponds to the download bandwidth of the communication link. In an exemplary embodiment, the upload bandwidth and the download bandwidth associated with the communication link correspond to the network bandwidth associated with the communication link.
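A hedged sketch of the timed-transfer measurement described above: the bits transferred divided by the elapsed time approximates the available bandwidth. The probe URL is a placeholder, not part of the disclosure:

```python
import time
import urllib.request

def measure_download_bandwidth(url, chunk_size=65536):
    """Time the transfer of a predetermined file and return the observed
    bandwidth in bits per second."""
    total_bytes = 0
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / elapsed if elapsed > 0 else 0.0

# Example (placeholder URL):
# bandwidth_bps = measure_download_bandwidth("https://example.com/probe.bin")
```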
  • Additionally or alternatively, the network monitoring unit 208 may be configured to determine a bitrate at which the audio content and/or the video content are to be transmitted to the computing device 104 a. In one example, the bitrate at which the audio content and/or the video content (a portion of the meeting data) is to be transmitted to the computing device 104 a is predetermined. In another example, the network monitoring unit 208 may be configured to determine the bitrate based on the network bandwidth associated with the communication link (through which the computing device 104 a is connected to the central server 102). More particularly, the network monitoring unit 208 may be configured to determine the audio bitrate and the video bitrate based on the audio bandwidth and the video bandwidth associated with the communication link, respectively. For example, the network monitoring unit 208 may be configured to determine the audio bitrate and the video bitrate based on a look-up table. The following table, Table 2, illustrates an example look-up table to determine the audio bitrate and the video bitrate:
  • TABLE 2
    Look-up table to determine the audio bitrate and the video bitrate
    Network Bandwidth    Audio Bitrate    Video Bitrate
    10 Mb/s              96 Kbps          4 Mbps
    20 Mb/s              120 Kbps         7.5 Mbps
    100 Mb/s             320 Kbps         24 Mbps
  • In yet another example, the network monitoring unit 208 may be configured to automatically determine the audio bitrate and the video bitrate based on a known mathematical relation between the network bandwidth and the bitrate, such as equation (1) below:

  • bitrate = 2 × bandwidth × log2(M)  (1)
  • where M is the modulation level (e.g., M = 4 for QPSK).
    The scope of the disclosure is not limited to determining the bitrate based on the aforementioned equation. In an exemplary embodiment, the network monitoring unit 208 may be configured to utilize another mathematical relation to determine the audio bitrate and the video bitrate.
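For illustration, both bitrate-selection routes can be sketched together, treating Table 2 as data (reading each row as a minimum-bandwidth threshold, which is an assumed interpretation) and equation (1) as the analytic alternative:

```python
import math

# Table 2 as data: (minimum network bandwidth Mb/s, audio Kbps, video Mbps).
BITRATE_TABLE = [
    (100, 320, 24.0),
    (20, 120, 7.5),
    (10, 96, 4.0),
]

def bitrates_from_table(bandwidth_mbps):
    """Pick the highest Table 2 row the measured bandwidth satisfies;
    below the lowest row, fall back to the most conservative bitrates."""
    for min_bw, audio_kbps, video_mbps in BITRATE_TABLE:
        if bandwidth_mbps >= min_bw:
            return audio_kbps, video_mbps
    return BITRATE_TABLE[-1][1], BITRATE_TABLE[-1][2]

def nyquist_bitrate(bandwidth_hz, modulation_level):
    """Equation (1): bitrate = 2 * bandwidth * log2(M), e.g. M = 4 for QPSK."""
    return 2 * bandwidth_hz * math.log2(modulation_level)

print(bitrates_from_table(20))        # -> (120, 7.5)
print(nyquist_bitrate(1_000_000, 4))  # -> 4000000.0 bits/s
```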
  • Additionally or alternatively, the network monitoring unit 208 may be configured to determine the audio opinion score and the video opinion score. In an exemplary embodiment, the audio opinion score and the video opinion score correspond to a mean opinion score (MOS) that is indicative of a quality of the audio and/or video being experienced by the participant through the computing device 104 a. In some examples, the network monitoring unit 208 may be configured to utilize known methods defined in International Telecommunication Union (ITU-T) recommendations to determine these scores, as sketched below.
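The disclosure does not fix a particular scoring method. One widely used ITU-T option is the G.107 E-model, whose rating-to-MOS conversion can be sketched as follows; treating this conversion as the opinion score here is an assumption:

```python
def r_factor_to_mos(r):
    """ITU-T G.107 E-model mapping from transmission rating R to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(round(r_factor_to_mos(80), 2))  # -> 4.02, i.e. "good" quality
```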
  • During the meeting, the network bandwidth of the communication link may vary because one or more other network services, other than the meeting, may be using the network bandwidth. For example, during the meeting, another network device connected to the communication link may be streaming video on a video streaming service. Accordingly, the network bandwidth available for the meeting may be reduced, and the experience of the meeting for the participant may worsen. For example, the computing device 104 a may not be able to seamlessly receive the video content and/or the audio content, and the participant may experience choppy video content and/or audio content. Accordingly, to improve the meeting experience, the participant may provide input through the UI to enable/disable one or more features of the meeting. For example, the participant may disable the reception of the video content. In another example, the participant may disable transmission of the video content. Such disabling of reception/transmission of the video content may allow the network bandwidth to be available for seamless reception of the audio content.
  • The meeting experience management unit 212 may be configured to record the modifications made to the one or more features of the meeting, based on the inputs provided by the participant to modify the one or more features of the meeting.
  • Additionally or alternatively, the processor 202 may be configured to receive information pertaining to one or more peripherals attached to the computing device 104 a from the computing device 104 a. In some examples, the one or more peripherals may correspond to devices through which the participant may transmit/receive the audio content and/or video content. As discussed, examples of the one or more peripherals include, but are not limited to, an audio capturing device, an audio generating device, an image capturing device, and/or the like. In an exemplary embodiment, the information pertaining to the one or more peripherals may include a make of the one or more peripherals coupled to the computing device 104 a. Further, the processor 202 may be configured to determine a quality score of the meeting data generated by the one or more peripherals. For example, the processor 202 may be configured to determine a background noise in the audio content generated by the audio capturing device. Accordingly, based on the background noise, the processor 202 may be configured to determine the quality score associated with the audio content generated by the one or more peripherals. In another example, the processor 202 may be configured to determine pixelation in the video content generated by the image capturing device. Accordingly, based on the pixelation, the processor 202 may be configured to determine the quality score associated with the video content generated by the one or more peripherals. In an exemplary embodiment, the meeting experience management unit 212 may be configured to correlate the quality score associated with the meeting data and the information pertaining to the one or more peripherals coupled to the computing device 104 a.
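The disclosure leaves the peripheral quality scoring open. Purely as an illustrative heuristic, a background-noise-based score for an audio capturing device might map a frame's signal-to-noise ratio onto a 1-5 scale (the mapping below is an assumption, not the disclosure's method):

```python
import math

def audio_quality_score(samples, noise_floor):
    """Map a frame's signal-to-noise ratio (dB) to a 1.0-5.0 score."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    snr_db = 20 * math.log10(max(rms, 1e-9) / max(noise_floor, 1e-9))
    # <= 0 dB reads as very noisy (1.0); >= 40 dB reads as clean (5.0).
    return max(1.0, min(5.0, 1.0 + snr_db / 10.0))

print(audio_quality_score([0.30, -0.28, 0.31, -0.29], noise_floor=0.01))  # ~3.9
```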
  • In an exemplary embodiment, the training unit 210 may be configured to generate historical data based on the one or more network parameters recorded by the network monitoring unit 208 and the one or more features of the meeting recorded by the meeting experience management unit 212. Additionally or alternatively, the training unit 210 may be configured to include the correlation between the quality score associated with the meeting data and the information pertaining to the one or more peripherals coupled to the computing device 104 a. In some examples, the historical data may correspond to training data. To this end, the one or more network parameters, the information pertaining to the one or more peripherals, and the quality score of the meeting data generated by the computing device 104 a may correspond to the one or more features of the training data, and the one or more features of the meeting correspond to the one or more labels of the training data. In an exemplary embodiment, the one or more features of the training data may correspond to inputs that may be provided to an ML model (trained using the training data), and the one or more labels may correspond to the expected output of the ML model. The following table, Table 3, illustrates an example of the historical data:
  • TABLE 3
    An example of historical data
    One or more network parameters    One or more features of the meeting
    Location: Delhi                   Video disabled;
    Time: 8:00 PM                     screen sharing disabled
    Network bandwidth: 10 Mb/s
    Audio bitrate: 24 Kbps
    Audio opinion score: 2.5
    Video opinion score: 2.5

    Location: Delhi                   None of the features of the meeting disabled
    Time: 10:00 AM
    Network bandwidth: 10 Mb/s
    Audio bitrate: 24 Kbps
    Audio opinion score: 4.0
    Video opinion score: 4.0
  • Referring to Table 3, it can be observed that the network bandwidth remains constant while the meeting time changes. Further, at 8:00 PM, the audio opinion score and the video opinion score are lower than the audio opinion score and the video opinion score at 10:00 AM. Accordingly, the participant may have provided the input to disable the screen sharing option and the video content transmission and/or reception for seamless reception of the audio content. A sketch of how such rows could be encoded as training examples follows.
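One plausible encoding of the Table 3 rows into features (X) and multi-label targets (y); the field names and numeric encoding are assumptions, and the constant location column is omitted for brevity:

```python
def encode_parameters(params):
    """Turn a network-parameter record into a numeric feature vector."""
    return [
        float(params["hour"]),        # time of the call (0-23)
        params["bandwidth_mbps"],
        params["audio_bitrate_kbps"],
        params["audio_opinion_score"],
        params["video_opinion_score"],
    ]

historical_rows = [
    # 8:00 PM row of Table 3 -> video and screen sharing were disabled
    ({"hour": 20, "bandwidth_mbps": 10, "audio_bitrate_kbps": 24,
      "audio_opinion_score": 2.5, "video_opinion_score": 2.5}, [1, 1]),
    # 10:00 AM row of Table 3 -> nothing was disabled
    ({"hour": 10, "bandwidth_mbps": 10, "audio_bitrate_kbps": 24,
      "audio_opinion_score": 4.0, "video_opinion_score": 4.0}, [0, 0]),
]

X = [encode_parameters(p) for p, _ in historical_rows]
y = [labels for _, labels in historical_rows]
```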
  • In an exemplary embodiment, the training unit 210 may be configured to train the ML model based on the training data. In some examples, the training unit 210 may be configured to utilize known techniques, such as neural networks, multi-dimensional Bayes' theorem, Gaussian copulas, and/or the like, to train the ML model; one possible realization is sketched below. In some examples, the scope of the disclosure is not limited to the training unit 210 training the ML model for only the participant associated with the computing device 104 a. In an exemplary embodiment, the training unit 210 may be configured to train the ML model for other participants of the meeting, associated with the other computing devices (e.g., 104 b and 104 c). To this end, the training unit 210 may be configured to train the ML model associated with the other participants using the one or more network parameters associated with the respective communication links through which the respective computing devices 104 connect to the central server 102.
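A minimal training sketch under those assumptions, with scikit-learn's decision tree standing in for the techniques named above (the disclosure does not prescribe a specific learner); X and y follow the encoding just shown, with one 0/1 label per meeting feature:

```python
from sklearn.tree import DecisionTreeClassifier

X = [
    [20, 10, 24, 2.5, 2.5],   # 8:00 PM row of Table 3
    [10, 10, 24, 4.0, 4.0],   # 10:00 AM row of Table 3
]
y = [
    [1, 1],   # video disabled, screen sharing disabled
    [0, 0],   # none of the features disabled
]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# An evening connection with poor opinion scores is predicted to need
# video and screen sharing disabled.
print(model.predict([[20, 10, 24, 2.6, 2.4]]))  # -> [[1 1]]
```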
  • In some examples, the meeting experience management unit 212 may be configured to utilize the ML model in future meetings to predict the one or more features of the meeting that are to be enabled/disabled for seamless reception of the meeting data. For example, during the future meeting, the network monitoring unit 208 may be configured to determine the one or more current network parameters associated with the communication link through which the computing device 104 a is coupled to the central server 102. Thereafter, the meeting experience management unit 212 may be configured to utilize the ML model to predict the one or more features of the meeting that are to be enabled/disabled based on the one or more current network parameters. In some examples, the meeting experience management unit 212 may be configured to transmit a notification to the computing device 104 a pertaining to the one or more features of the meeting that are to be enabled/disabled. In another embodiment, the meeting experience management unit 212 may be configured to automatically enable/disable the one or more features of the meeting. For example, the meeting experience management unit 212 may be configured to disable the image capturing device of the computing device 104 a for disabling the transmission of the video content on the computing device 104 a. To this end, the meeting experience management unit 212 may be configured to halt the transmission of the video content to the computing device 104 a.
  • Additionally or alternatively, the meeting experience management unit 212 may be configured to halt transmission of a portion of the video content. For example, the meeting experience management unit 212 may be configured to transmit the video content from the computing device 104 b and halt the transmission of the video content from the computing device 104 c, to the computing device 104 a. Additionally or alternatively, the meeting experience management unit 212 may be configured to modify the UI itself, based on the one or more features of the meeting predicted using the ML model. For example, the meeting experience management unit 212 may be configured to modify the UI to display video content received from only the computing device 104 b. In such an embodiment, the grid layout of the UI is disabled.
  • In some examples, the scope of the disclosure is not limited to enabling/disabling the one or more features of the meeting during the meeting. Additionally or alternatively, the meeting experience management unit 212 may be configured to transmit the notification related to the one or more features of the meeting prior to the meeting. For example, the meeting experience management unit 212 may be configured to determine the one or more current network parameters associated with the communication link of the computing device 104 a at a predetermined time period prior to a scheduled time of the meeting. Thereafter, the meeting experience management unit 212 may be configured to utilize the ML model to predict the one or more features of the meeting that may have to be enabled/disabled during the meeting for seamless reception of the meeting data (e.g., the audio content). To this end, the meeting experience management unit 212 may be configured to transmit the notification pertaining to enabling/disabling of the one or more features to the computing device 104 a prior to the meeting, as sketched below. For example, the meeting experience management unit 212 may be configured to transmit the notification "Since you are joining from location X, disable the reception of the video content for seamless reception of the audio content." In another example, the meeting experience management unit 212 may be configured to transmit the notification "Since you are joining from location X at time 8:00 PM, disable the reception of the video content for seamless reception of the audio content."
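A hedged sketch of this pre-meeting step, reusing the model and feature encoding from the earlier sketches; `model` and the parameter names are assumptions carried over from those sketches:

```python
def pre_meeting_notification(model, params):
    """Predict features to disable shortly before the scheduled time and
    phrase the notification the way the examples above do."""
    x = [[params["hour"], params["bandwidth_mbps"],
          params["audio_bitrate_kbps"],
          params["audio_opinion_score"],
          params["video_opinion_score"]]]
    video_off, _screen_off = model.predict(x)[0]
    if video_off:
        return ("Since you are joining from location {} at time {}, "
                "disable the reception of the video content for seamless "
                "reception of the audio content.").format(
                    params["location"], params["time_label"])
    return "No feature changes are suggested for this meeting."
```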
  • FIG. 3 is a diagram illustrating an exemplary scenario of predicting the one or more features of the meeting, in accordance with an embodiment of the disclosure. Referring to FIG. 3, the exemplary scenario 300 includes the ML model 302 associated with the participant-1. The participant-1 uses the computing device 104 a to join the meeting. It can be observed that the ML model 302 has been created using the historical data 304. The historical data 304 includes the one or more previous network parameters associated with the communication link previously used to connect to the meeting. For example, the historical data 304 may include the one or more network parameters illustrated in Table 3.
  • The exemplary scenario 300 further illustrates that the central server 102 determines the one or more current network parameters 306. The one or more current network parameters 306 may indicate that the participant-1 is connecting to the meeting from home at 8:00 PM. The meeting experience management unit 212 may be configured to utilize the ML model 302 to predict the one or more features 308 of the meeting based on the one or more current network parameters 306. The one or more features 308 include disabling the reception of the video content. Accordingly, the meeting experience management unit 212 transmits a notification to the computing device 104 a pertaining to disabling the video content (depicted by 310). Alternatively, the meeting experience management unit 212 may modify the UI (depicted by 312) presented on the computing device 104 a. For example, the meeting experience management unit 212 may disable the grid layout, which may cause the reception of the video content from the other computing devices 104 to be disabled. In yet another embodiment, the meeting experience management unit 212 may enable reception of the video content from a single computing device of the other computing devices 104 (e.g., the computing device 104 b).
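The three alternatives in scenario 300 (transmit the notification 310, modify the UI 312, or keep a single sender's video) can be summarized as a small dispatch; the mode strings and the returned action descriptions are editorial assumptions:

```python
# Sketch of the three alternatives in exemplary scenario 300; the mode strings
# and the action descriptions are illustrative assumptions.

def act_on_prediction(disable_video: bool, mode: str) -> str:
    """Choose how to act once the ML model 302 predicts the features 308."""
    if not disable_video:
        return "no change"
    if mode == "notify":
        return "transmit notification 310 to computing device 104a"
    if mode == "modify_ui":
        return "disable grid layout 312; halt video from other devices 104"
    return "keep video from a single device, e.g., computing device 104b"

print(act_on_prediction(True, "modify_ui"))
```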
  • FIG. 4 is a flowchart illustrating a method for training the ML model associated with the participant-1, in accordance with an embodiment of the disclosure. Referring to FIG. 4, at 402, the one or more network parameters are determined. In an exemplary embodiment, the network monitoring unit 208 may be configured to determine the one or more network parameters during the meeting. At 404, the inputs to enable/disable the one or more features of the meeting are recorded. In an exemplary embodiment, the processor 202 may be configured to record the inputs to enable/disable the one or more features of the meeting. At 406, the training data is generated based on the one or more features of the meeting that were enabled/disabled and the one or more network parameters associated with the communication link through which the computing device 104 a is connected to the central server 102. In an exemplary embodiment, the training unit 210 may be configured to generate the training data. At 408, the ML model associated with the participant-1 is trained. In an exemplary embodiment, the training unit 210 may be configured to train the ML model.
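A compact training sketch following 402-408 is shown below. The choice of a scikit-learn decision tree, the feature columns, and the labels are assumptions; the disclosure does not fix a particular model type:

```python
# Sketch of FIG. 4 (402-408), assuming a scikit-learn decision tree as the
# per-participant ML model; feature columns and labels are assumptions.

from sklearn.tree import DecisionTreeClassifier

# 402/404: network parameters observed during past meetings, paired with the
# recorded participant inputs to enable/disable the video feature.
observations = [
    # [bandwidth_kbps, video_bitrate_kbps, at_home, hour_of_day], label
    ([2500, 600, 0, 10], "keep_video"),
    ([400, 600, 1, 20], "disable_video"),
    ([350, 600, 1, 21], "disable_video"),
    ([3000, 600, 0, 14], "keep_video"),
]

# 406: generate the training data from the recorded observations.
X = [features for features, _ in observations]
y = [label for _, label in observations]

# 408: train the ML model associated with the participant-1.
model = DecisionTreeClassifier(random_state=0).fit(X, y)
```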
  • FIG. 5 is a flowchart illustrating a method for predicting the one or more features of the meeting to be enabled/disabled, in accordance with an embodiment of the disclosure. Referring to FIG. 5, at 502, the one or more current network parameters associated with the communication link of the computing device 104 a are determined. In an exemplary embodiment, the network monitoring unit 208 may be configured to determine the one or more current network parameters during the meeting. At 504, the one or more features of the meeting are predicted using the ML model, based on the one or more current network parameters. In an exemplary embodiment, the meeting experience management unit 212 may be configured to predict the one or more features of the meeting based on the one or more current network parameters and the ML model. At 506, the notification is generated based on the one or more features of the meeting predicted using the ML model. In an exemplary embodiment, the meeting experience management unit 212 may be configured to generate the notification.
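The FIG. 5 flow maps onto a few lines, sketched below with a rule-based stand-in predictor so the example is self-contained; in the disclosure the prediction would come from the trained per-participant ML model, and the bandwidth threshold here is an assumption:

```python
# Sketch of FIG. 5 (502-506) with a stand-in predictor.

def predict(params: dict) -> str:
    # Stand-in for model.predict(): disable video when the measured bandwidth
    # cannot sustain the required video bitrate (threshold is an assumption).
    low = params["bandwidth_kbps"] < params["video_bitrate_kbps"]
    return "disable_video" if low else "keep_video"

current = {"bandwidth_kbps": 450, "video_bitrate_kbps": 600}   # 502: determine
feature = predict(current)                                     # 504: predict
if feature == "disable_video":                                 # 506: notify
    print("Disable the reception of the video content for "
          "seamless reception of the audio content.")
```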
  • The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the operations may be performed in one or more different orders without departing from the various embodiments of the disclosure.
  • The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may include a general-purpose processor, a digital signal processor (DSP), a special-purpose processor such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA), a programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, or in addition, some operations or methods may be performed by circuitry that is specific to a given function.
  • In one or more exemplary embodiments, the functions described herein may be implemented by special-purpose hardware or a combination of hardware programmed by firmware or other software. In implementations relying on firmware or other software, the functions may be performed as a result of execution of one or more instructions stored on one or more non-transitory computer-readable media and/or one or more non-transitory processor-readable media. These instructions may be embodied by one or more processor-executable software modules that reside on the one or more non-transitory computer-readable or processor-readable storage media. Non-transitory computer-readable or processor-readable storage media may in this regard comprise any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, disk storage, magnetic storage devices, or the like. Disk storage, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray Disc™, as well as other storage devices that store data magnetically or optically with lasers. Combinations of the above types of media are also included within the scope of the terms non-transitory computer-readable and processor-readable media. Additionally, any combination of instructions stored on the one or more non-transitory processor-readable or computer-readable media may be referred to herein as a computer program product.
  • Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Although the figures only show certain components of the apparatus and systems described herein, it is understood that various other components may be used in conjunction with the meeting experience management system. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, the operations in the method described above may not necessarily occur in the order depicted in the accompanying diagrams, and in some cases one or more of the operations depicted may occur substantially simultaneously, or additional operations may be involved. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (20)

What is claimed is:
1. A method, comprising:
receiving, by a processor, one or more current network parameters associated with at least one participant of a plurality of participants in a meeting;
predicting, by the processor, one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant, wherein the ML model is trained based on one or more previous network parameters, associated with the at least one participant, that were received during a previous meeting attended by the at least one participant; and
modifying, by the processor, a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting, wherein the UI enables participation of the at least one participant in the meeting.
2. The method of claim 1, wherein the one or more current network parameters comprise a network bandwidth, a platform through which a computing device of the at least one participant is connected to a communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, and/or a time of the meeting.
3. The method of claim 1, wherein modifying the user interface comprises enabling or disabling the one or more features of the meeting for the at least one participant.
4. The method of claim 1, further comprising generating, by the processor, a notification based on the one or more current network parameters and the ML model.
5. The method of claim 4, wherein the notification is indicative of enabling or disabling the one or more features of the meeting.
6. The method of claim 1, wherein modifying the user interface comprises modifying a layout of the UI.
7. The method of claim 6, wherein the layout of the UI comprises a grid layout and a single layout.
8. The method of claim 7, wherein the grid layout allows presentation of meeting data from other participants of the meeting.
9. The method of claim 7, wherein the single layout allows presentation of meeting data from one of other participants of the meeting.
10. A central server, comprising:
a memory device comprising a set of instructions; and
a processor communicatively coupled to the memory device, the processor configured to:
receive one or more current network parameters associated with at least one participant of a plurality of participants in a meeting;
predict one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant, wherein the ML model is trained based on one or more previous network parameters, associated with the at least one participant, that were received during a previous meeting attended by the at least one participant; and
modify a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting, wherein the UI enables participation of the at least one participant in the meeting.
11. The central server of claim 10, wherein the one or more current network parameters comprise a network bandwidth, a platform through which a computing device of the at least one participant is connected to a communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, and/or a time of the meeting.
12. The central server of claim 10, wherein modifying the user interface comprises enabling or disabling the one or more features of the meeting for the at least one participant.
13. The central server of claim 10, wherein the processor is configured to generate a notification based on the one or more current network parameters and the ML model.
14. The central server of claim 13, wherein the notification is indicative of enabling or disabling the one or more features of the meeting.
15. The central server of claim 10, wherein modifying the user interface comprises modifying a layout of the UI.
16. The central server of claim 15, wherein the layout of the UI comprises a grid layout and a single layout.
17. The central server of claim 16, wherein the grid layout allows presentation of meeting data from other participants of the meeting.
18. The central server of claim 16, wherein the single layout allows presentation of meeting data from one of other participants of the meeting.
19. A non-transitory computer-readable medium having stored thereon computer-readable instructions which, when executed by a computer, cause a processor in the computer to execute operations, the operations comprising:
receiving, by a processor, one or more current network parameters associated with at least one participant of a plurality of participants in a meeting;
predicting, by the processor, one or more features of the meeting to be enabled and/or disabled for the at least one participant based on the one or more current network parameters and a machine learning (ML) model associated with the at least one participant, wherein the ML model is trained based on one or more previous network parameters, associated with the at least one participant, that were received during a previous meeting attended by the at least one participant; and
modifying, by the processor, a user interface (UI) of the meeting being presented to the at least one participant based on the one or more predicted features of the meeting, wherein the UI enables participation of the at least one participant in the meeting.
20. The non-transitory computer-readable medium of claim 19, wherein the one or more current network parameters comprise a network bandwidth, a platform through which a computing device of the at least one participant is connected to a communication network, an audio bitrate, an audio opinion score, a video bitrate, a video opinion score, a location of the computing device, and/or a time of the meeting.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/308,887 US20210367984A1 (en) 2020-05-21 2021-05-05 Meeting experience management

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063028123P 2020-05-21 2020-05-21
US17/308,887 US20210367984A1 (en) 2020-05-21 2021-05-05 Meeting experience management

Publications (1)

Publication Number Publication Date
US20210367984A1 true US20210367984A1 (en) 2021-11-25

Family

ID=78607913

Family Applications (8)

Application Number Title Priority Date Filing Date
US17/308,887 Abandoned US20210367984A1 (en) 2020-05-21 2021-05-05 Meeting experience management
US17/308,623 Active US11488116B2 (en) 2020-05-21 2021-05-05 Dynamically generated news feed
US17/308,916 Abandoned US20210367986A1 (en) 2020-05-21 2021-05-05 Enabling Collaboration Between Users
US17/308,264 Active US11537998B2 (en) 2020-05-21 2021-05-05 Capturing meeting snippets
US17/308,329 Active US11416831B2 (en) 2020-05-21 2021-05-05 Dynamic video layout in video conference meeting
US17/308,586 Abandoned US20210365893A1 (en) 2020-05-21 2021-05-05 Recommendation unit for generating meeting recommendations
US17/308,640 Abandoned US20210367802A1 (en) 2020-05-21 2021-05-05 Meeting summary generation
US17/308,772 Abandoned US20210365896A1 (en) 2020-05-21 2021-05-05 Machine learning (ml) model for participants

Country Status (1)

Country Link
US (8) US20210367984A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120192080A1 (en) * 2011-01-21 2012-07-26 Google Inc. Tailoring content based on available bandwidth
US20160307165A1 (en) * 2015-04-20 2016-10-20 Cisco Technology, Inc. Authorizing Participant Access To A Meeting Resource
US20170308866A1 (en) * 2016-04-22 2017-10-26 Microsoft Technology Licensing, Llc Meeting Scheduling Resource Efficiency
US10735211B2 (en) * 2018-05-04 2020-08-04 Microsoft Technology Licensing, Llc Meeting insight computing system
US20210224754A1 (en) * 2020-01-16 2021-07-22 Capital One Services, Llc Computer-based systems configured for automated electronic calendar management with meeting room locating and methods of use thereof
US20210245043A1 (en) * 2020-02-07 2021-08-12 Krikey, Inc. Video tools for mobile rendered augmented reality game

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220192767A1 (en) * 2020-12-21 2022-06-23 Ethicon Llc Dynamic trocar positioning for robotic surgical system
US12023116B2 (en) * 2020-12-21 2024-07-02 Cilag Gmbh International Dynamic trocar positioning for robotic surgical system

Also Published As

Publication number Publication date
US20210367800A1 (en) 2021-11-25
US20210365896A1 (en) 2021-11-25
US20210367801A1 (en) 2021-11-25
US11488116B2 (en) 2022-11-01
US20210368134A1 (en) 2021-11-25
US11416831B2 (en) 2022-08-16
US20210365893A1 (en) 2021-11-25
US11537998B2 (en) 2022-12-27
US20210367986A1 (en) 2021-11-25
US20210367802A1 (en) 2021-11-25

Legal Events

Code Description
AS Assignment: Owner name: HUDDL INC., CALIFORNIA. Assignment of assignors interest; Assignors: Rajamani, Harish; Davuluri, Nava; Yarlagadda, Krishna; and others; signing dates from 2021-04-26 to 2021-05-03; reel/frame: 056297/0298
STPP Docketed new case - ready for examination
STPP Non-final action mailed
STPP Response to non-final office action entered and forwarded to examiner
STPP Final rejection mailed
STPP Advisory action mailed
STPP Docketed new case - ready for examination
STPP Non-final action mailed
STPP Non-final action mailed
STCB Application discontinuation: abandoned - failure to respond to an office action