US20230239436A1 - Enhanced virtual and/or augmented communications interface - Google Patents

Enhanced virtual and/or augmented communications interface

Info

Publication number
US20230239436A1
Authority
US
United States
Prior art keywords
component
users
video
communication
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/190,693
Inventor
John Jurrius
David Brand
Stephen Brand
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intracom Systems LLC
Original Assignee
Intracom Systems LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intracom Systems LLC filed Critical Intracom Systems LLC
Priority to US18/190,693 priority Critical patent/US20230239436A1/en
Assigned to INTRACOM SYSTEMS, LLC reassignment INTRACOM SYSTEMS, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BRAND, DAVID, BRAND, STEPHEN, JURRIUS, JOHN
Publication of US20230239436A1 publication Critical patent/US20230239436A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/10Architectures or entities
    • H04L65/1053IP private branch exchange [PBX] functionality entities or arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40Support for services or applications
    • H04L65/403Arrangements for multi-party communication, e.g. for conferences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Definitions

  • the present invention provides systems and methods employing a conferencing system for facilitating enhanced communication between users.
  • the conferencing system comprises a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication.
  • the communication interface is in communication with at least one additional component selected from: a video component, a data component (e.g., that provides non-audio data to one or more of said users), an audio/video ambience component, and a whiteboard component.
  • a conferencing system for facilitating enhanced communication between users, the conferencing system comprising: a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication.
  • the communication interface is in communication with a video component that provides video to one or more of said users (e.g., allowing a 180 degree . . . 270 degree . . . or 360 degree video view for the user).
  • the communication interface is in communication with a data component that provides non-audio data to one or more of said users (e.g., allowing a 180 degree . . . 270 degree . . . or 360 degree data view for the user).
  • the communication interface is in communication with an audio/video ambience component that provides audio/video to one or more of said users.
  • the communication interface is in communication with a whiteboard component that provides a whiteboard function to one or more of said users (e.g., allowing a 180 degree . . . 270 degree . . . or 360 degree whiteboard view for the user).
  • the communication interface is configured to allow a user to navigate by swiping, pinching, zooming in, or zooming out on the content being viewed (e.g., the video, data presented, whiteboard, etc.).
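The navigation behavior described above can be sketched as a small model. This is illustrative only, not code from the patent; the class and method names (`ViewSurface`, `pinch`, `swipe`) and the zoom limits are assumptions.

```python
# Minimal sketch (hypothetical API) of mapping pinch/swipe gestures to a
# zoom level and pan offset on a viewed surface (video, data, whiteboard).
class ViewSurface:
    def __init__(self, zoom=1.0, min_zoom=0.25, max_zoom=8.0):
        self.zoom = zoom
        self.min_zoom = min_zoom
        self.max_zoom = max_zoom
        self.pan = (0.0, 0.0)  # normalized offset accumulated by swiping

    def pinch(self, start_dist, end_dist):
        """Scale zoom by the ratio of finger distances, clamped to limits."""
        factor = end_dist / start_dist
        self.zoom = max(self.min_zoom, min(self.max_zoom, self.zoom * factor))

    def swipe(self, dx, dy):
        """Pan the view; at higher zoom the same swipe moves less content."""
        x, y = self.pan
        self.pan = (x + dx / self.zoom, y + dy / self.zoom)

surface = ViewSurface()
surface.pinch(100, 200)   # fingers spread apart -> zoom in 2x
surface.swipe(10, 0)      # swipe right pans the content
```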
  • the communication interface is in communication with an audio analysis component that analyzes incoming human speech audio data for: 1) substantive content and/or 2) human emotion content.
  • the audio analysis component comprises artificial intelligence software and/or machine learning software.
  • the substantive content comprises situational context, wherein the situational context comprises at least one situation selected from the group consisting of: a medical emergency, a product or service complaint, a financial inquiry, a product or service order, a product or service review, a credit card inquiry, and a request to display a whiteboard.
  • the human emotion content comprises at least one human emotion selected from the group consisting of: distress, pain, anger, frustration, happiness, satisfaction, annoyance, and panicking.
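As a toy illustration of the audio analysis component's two outputs, the sketch below scans an already-transcribed utterance with keyword heuristics. A real embodiment would use the AI/ML software the patent describes; the keyword tables, categories, and function name here are invented for illustration.

```python
# Illustrative only: keyword heuristics standing in for the patent's
# AI/ML audio analysis. Assumes speech has already been transcribed.
SITUATION_KEYWORDS = {
    "medical emergency": {"ambulance", "bleeding", "heart attack"},
    "financial inquiry": {"balance", "statement", "interest rate"},
    "request to display a whiteboard": {"whiteboard"},
}
EMOTION_KEYWORDS = {
    "anger": {"furious", "outraged"},
    "distress": {"help", "scared"},
    "satisfaction": {"great", "thank you"},
}

def analyze_speech(transcript):
    """Return detected situational contexts and emotions for a transcript."""
    text = transcript.lower()
    situations = [s for s, kws in SITUATION_KEYWORDS.items()
                  if any(kw in text for kw in kws)]
    emotions = [e for e, kws in EMOTION_KEYWORDS.items()
                if any(kw in text for kw in kws)]
    return {"substantive": situations, "emotion": emotions}

result = analyze_speech("Help, I think he's having a heart attack!")
```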
  • a non-transitory computer readable storage media having instructions stored thereon that, when executed by a conferencing system, direct the conferencing system to perform a method for facilitating enhanced communication between users, the method comprising: during a conference session, providing a virtual and/or augmented conference with multi-channel, multi-access, always-on, and non-blocking communication between a plurality of users.
  • a client communication interface for a conference session comprising: a) a computer processor, and b) non-transitory computer memory comprising one or more computer programs and a database, wherein said one or more computer programs comprises virtual and/or augmented reality client communication interface software, and wherein said one or more computer programs, in conjunction with said computer processor, is/are configured to generate a client communication interface for a conference session with a plurality of users by sending and receiving information from the following first components: i) a com system component, wherein said com system component is configured to, during a conference session, provide a multi-channel, multi-access, always-on, and non-blocking communication; ii) a video system component, wherein said video component provides video to said plurality of users; and iii) a data component, wherein said data component provides non-audio data to said plurality of users.
  • the one or more computer programs, in conjunction with said computer processor, is/are further configured to generate said client communication interface for a conference session with a plurality of users by sending and receiving information from at least one of the following second components: i) audio/video ambience component that provides audio/video to said plurality of users, and ii) a whiteboard component that provides a whiteboard function to said plurality of users.
  • the virtual and/or augmented reality client communication interface software comprises centrally located SaaS software.
  • the SaaS software is hosted on the Internet (e.g., and users log in over the Internet to access a client communication interface).
  • a client communications interface (e.g., a virtual reality/augmented reality communication interface) is provided for users (e.g., for collaboration and/or training) that is in communication with: 1) a com component (e.g., Matrix & PBX communications) that enables complex, multi-channel voice collaboration, audio routing, and monitoring; 2) a data LAN component for real-time telemetry data monitoring/interfacing; 3) a video system component for multi-channel video monitoring/interfacing; and 4) a control console component providing a control surface for controlling at least one of: lighting, audio switching, audio mixing consoles, video switching, local or remote audio and video sources, and local and remote telemetry data, from an intuitive UI.
  • At least three, at least four, at least five, or all six of such components are in communication with the client communication interface.
  • one or both of the following additional components are in communication with the client communication interface: an audio/visual ambience component that, for example, creates the feeling of “real life” application-specific experiences, and/or a geo-positioning (e.g., 2D or 3D) and mapping component.
  • processor and “central processing unit” or “CPU” are used interchangeably and refer to a device that is able to read a program from a computer memory (e.g., ROM or other computer memory) and perform a set of steps according to the program.
  • computer memory and “computer memory device” refer to any storage media readable by a computer processor.
  • Examples of computer memory include, but are not limited to, RAM, ROM, computer chips, digital video disc (DVDs), compact discs (CDs), hard disk drives (HDD), and magnetic tape.
  • computer readable medium refers to any device or system for storing and providing information (e.g., data and instructions) to a computer processor.
  • Examples of computer readable media include, but are not limited to, DVDs, CDs, hard disk drives, magnetic tape and servers for streaming media over networks.
  • Multimedia information and “media information” are used interchangeably to refer to information (e.g., digitized and analog information) encoding or representing audio, video, and/or text. Multimedia information may further carry information not corresponding to audio or video. Multimedia information may be transmitted from one location or device to a second location or device by methods including, but not limited to, electrical, optical, and satellite transmission, and the like.
  • audio information refers to information (e.g., digitized and analog information) encoding or representing audio.
  • audio information may comprise encoded spoken language with or without additional audio.
  • Audio information includes, but is not limited to, audio captured by a microphone and synthesized audio (e.g., computer generated digital audio).
  • video information refers to information (e.g., digitized and analog information) encoding or representing video.
  • Video information includes, but is not limited to video captured by a video camera, images captured by a camera, and synthetic video (e.g., computer generated digital video).
  • text information refers to information (e.g., analog or digital information) encoding or representing written language or other material capable of being represented in text format (e.g., corresponding to spoken audio).
  • the term “configured to receive multimedia information” refers to a device that is capable of receiving multimedia information. Such devices contain one or more components that can receive signal carrying multimedia information.
  • encode refers to the process of converting one type of information or signal into a different type of information or signal to, for example, facilitate the transmission and/or interpretability of the information or signal.
  • audio sound waves can be converted into (i.e., encoded into) electrical or digital information.
  • the term “in electronic communication” refers to electrical devices (e.g., computers, processors, conference bridges, communications equipment) that are configured to communicate with one another through direct or indirect signaling.
  • a conference bridge and a processor that are connected through a cable or wire, such that information can pass between them, are in electronic communication with one another.
  • a computer configured to transmit (e.g., through cables, wires, infrared signals, telephone lines, etc) information to another computer or device, is in electronic communication with the other computer or device.
  • transmitting refers to the movement of information (e.g., data) from one location to another (e.g., from one device to another) using any suitable means.
  • FIG. 1 shows an exemplary system architecture for generating a client communication interface, that is in communication with a number of systems (e.g., audio system, com system, video system, data LAN), that is hosted by virtual and/or augmented control room SaaS software, and that can be linked to any given number of users (e.g., so that they may participate in a virtual and/or augmented reality conference together by, for example, wearing a VR headset that provides a visual and audio depiction of the client communication interface).
  • a client communications interface (e.g., a virtual reality/augmented reality communication interface) is provided for users (e.g., for collaboration and/or training) that is in communication with: 1) a com component (e.g., Matrix & PBX communications) that enables complex, multi-channel voice collaboration, audio routing, and monitoring; 2) a data LAN component for real-time telemetry data monitoring/interfacing; 3) a video system component for multi-channel video monitoring/interfacing; 4) a control console component providing a control surface for controlling at least one of: lighting, audio switching, audio mixing consoles, video switching, local or remote audio and video sources, and local and remote telemetry data, from an intuitive UI; 5) a whiteboard component for 3D whiteboarding; and 6) an audio system component for providing sound.
  • At least three, at least four, at least five, or all six of such components are provided together in communication with the client communications interface.
  • one or both of the following additional components are provided in communication with the communications interface: an audio/visual ambience component that, for example, creates the feeling of “real life” application-specific experiences, and/or a geo-positioning (e.g., 2D or 3D) and mapping component.
  • the com component (e.g., a Matrix & PBX communications component) enables multi-channel, multi-access, always-on, non-blocking communications between users, meaning anyone can listen and/or speak to one or more users, in any complexity and without limitation.
  • Such a component, in certain embodiments, enables audio routing, monitoring of virtually unlimited audio channels, and interfacing with source feeds via four-wire audio or SIP.
  • this component incorporates an integrated, fully functional SIP PBX, to add all standard PBX capabilities, and enable seamless communications between matrix and PBX users.
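The multi-channel, multi-access, always-on, non-blocking behavior can be modeled as a crosspoint matrix of independent talk and listen paths. The sketch below is a hypothetical model (all names invented), not the patent's implementation:

```python
# Sketch of a non-blocking intercom matrix: any user may hold open any
# number of talk/listen crosspoints simultaneously, with no path
# contention between users (hence "always-on" and "non-blocking").
class IntercomMatrix:
    def __init__(self):
        self.talk = set()     # (user, channel) talk crosspoints
        self.listen = set()   # (user, channel) listen crosspoints

    def set_talk(self, user, channel, on=True):
        (self.talk.add if on else self.talk.discard)((user, channel))

    def set_listen(self, user, channel, on=True):
        (self.listen.add if on else self.listen.discard)((user, channel))

    def hears(self, listener, channel):
        """Listener hears everyone currently talking on a monitored channel."""
        if (listener, channel) not in self.listen:
            return set()
        return {u for (u, ch) in self.talk if ch == channel and u != listener}

m = IntercomMatrix()
m.set_talk("alice", "hoot-1")
m.set_talk("bob", "hoot-1")
m.set_listen("carol", "hoot-1")
m.set_listen("carol", "hoot-2")   # non-blocking: many channels at once
```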
  • a gaze-to-talk (GTT) capability enables activation of voice or audio channels by looking or putting focus on (gazing) towards a selector, object, or form associated with any given channel.
  • a “tap” or click of a game control button can latch the channel.
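Gaze-to-talk with tap-to-latch can be sketched as a small state machine: gazing at a channel selector opens its talk path after a dwell threshold, and a tap latches it so it stays open when the gaze moves away. The dwell threshold and every identifier below are assumptions for illustration.

```python
# Hypothetical sketch of gaze-to-talk (GTT) with tap-to-latch.
class GazeToTalk:
    DWELL_SECONDS = 0.5  # assumed dwell threshold before a channel opens

    def __init__(self):
        self.gazed_channel = None
        self.gaze_time = 0.0
        self.latched = set()

    def update_gaze(self, channel, dwell_seconds):
        self.gazed_channel = channel
        self.gaze_time = dwell_seconds

    def active_channel(self):
        """Channel opened by the current gaze, if dwell is long enough."""
        if self.gazed_channel and self.gaze_time >= self.DWELL_SECONDS:
            return self.gazed_channel
        return None

    def tap(self):
        """Latch (or unlatch) the channel currently opened by gaze."""
        if self.active_channel():
            self.latched.symmetric_difference_update({self.gazed_channel})

    def open_channels(self):
        return self.latched | ({self.active_channel()} - {None})

gtt = GazeToTalk()
gtt.update_gaze("launch-ops", 0.8)  # dwell long enough -> channel opens
gtt.tap()                           # latch it open
gtt.update_gaze("weather", 0.1)     # glance elsewhere; latched stays open
```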
  • the com component, in particular embodiments, is highly interoperable and readily interfaces with real-world IP PBXs, hardware intercom systems, and two-way radios for seamless communications between VR control rooms, physical control rooms, and field operations.
  • the data component is a real-time telemetry data monitoring/interfacing component that allows, for example, users to monitor real-time telemetry data on one or more viewing surfaces that can be sized and placed anywhere within the three-dimensional space of the client communications interface. The data component may also provide software to interface with telemetry data from local or remote sources.
  • the video system component is a multi-channel video monitoring/interfacing component that allows, for example, users to monitor one or more video surfaces that can be sized and placed anywhere in three dimensional space created in the client communications interface.
  • Such a component, in certain embodiments, provides software to interface with remote video sources.
  • the data component comprises control surfaces that allow, for example, users to control both physical and soft systems in the client communication interface (e.g., in the virtual reality and augmented reality session that is generated), including communications, lighting, audio switchers and mixing consoles, machine/automation control, and more.
  • such components support industry-standard protocols for controlling smart systems via web-based applications and also make the control APIs available to third parties.
  • the data component provides three-dimensional (3D) whiteboarding, which enables users, for example, to mark up, write, capture notes, and draw, in collaboration with others, in three-dimensional space in the client communications interface.
  • the data component (e.g., that provides non-audio data) provides a geo-positioning (e.g., 2D or 3D) and mapping component.
  • the systems and methods herein collect geo-position data (e.g., 2D or 3D geo-positioning data) from one or more users (e.g., users in the field) and then display such data as part of the conferencing systems described herein.
  • the audio/visual ambience component allows users, for example, to upload images and sounds, stream video, and select from a repository of audio and visual content that becomes the basis of the “backdrop” of their virtual and/or augmented experiences in the client communications interface.
  • Ambience may include, for example, one or more live cameras and microphones placed in a physical control room to enhance the feeling of “being there” for remote operators using only VR or AR headsets. This is especially useful in limited-space control rooms such as video trucks and aircraft-testing mobile units, and is also suitable for remote training and instructional classes.
  • the Geo-positioning (e.g., 2D or 3D) and mapping component allows, for example, the use of Google Earth (or similar program) in VR, which allows users to walk through any location on earth in 3D virtual and/or augmented reality space with voice and/or audio monitor channels tied to the geo position of each user, both in VR and in the “real world”, and appearing on the map. Users are able to activate talk/listen paths by “gazing” at a location/selector and patch channels by drawing an actual line between two or more. In command and control applications, central command has an actual bird's eye view of all their assets and is able to better coordinate and communicate in real-time.
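The geo-positioned channels and line-drawn patches described above might be modeled roughly as follows. This is a sketch with invented names; a real system would tie positions to live GPS feeds and the mapping layer rather than manual updates.

```python
import math

# Illustrative model: each user carries a position in 2D/3D space, and a
# "patch" drawn between two users on the map joins their voice channels.
class GeoComms:
    def __init__(self):
        self.positions = {}   # user -> (x, y, z)
        self.patches = set()  # frozenset({user_a, user_b})

    def update_position(self, user, x, y, z=0.0):
        self.positions[user] = (x, y, z)

    def draw_patch(self, a, b):
        """Join two users' channels, as if drawing a line between them."""
        self.patches.add(frozenset((a, b)))

    def distance(self, a, b):
        return math.dist(self.positions[a], self.positions[b])

    def patched_with(self, user):
        return {u for pair in self.patches if user in pair
                for u in pair if u != user}

geo = GeoComms()
geo.update_position("command", 0, 0)
geo.update_position("field-1", 3, 4)
geo.draw_patch("command", "field-1")
```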
  • the above-mentioned components can be adjusted and placed by users into the desired location within the 3D virtual and/or augmented reality space of the generated client communications interface. Moreover, users have the ability to select certain components in AR or VR only, to better experience their physical environments.
  • a stock exchange, in communication with the various components, can be created in VR in the client communications interface.
  • For example, this application can extend a stock exchange with a physical presence, such as the NYSE, to traders in different geographic locations.
  • a purely virtual and/or augmented stock exchange could be created as well with no physical presence.
  • the customer may be a bank (e.g., a major bank) that wants to provide its many traders (e.g., one thousand traders) located around the world an “on floor” trading experience.
  • the customer creates an account online from the hosting website (see, e.g., FIG. 1 ).
  • the system administrator can then login to a private and secure web-based utility that enables the creation of custom virtual and/or augmented reality experiences.
  • To set up a virtual and/or augmented stock exchange, the administrator selects a menu option to create a new experience and name it appropriately.
  • the administrator designates, for example, one thousand users, pricing is calculated, and the customer agrees to the fees, billed monthly until discontinued.
  • the administrator can then import or input user information and contact information and later in the process send out invites.
  • all users may or may not be assigned to the same virtual and/or augmented experience(s) set up by the administrator and within each virtual and/or augmented experience of the client communications interface, may or may not be assigned all components.
  • the administrator can determine, based on criteria such as job function and rank, which experiences and components are appropriate or needed on a per user basis.
  • the administrator is then presented with a menu of components that can be used to create the desired experience.
  • Initial components include, for example, and can be listed as: “Add Comms”, “Add Data Feed”, “Add Video”, “Add Control Surface”, “Add White Boarding”, and “Add Audio/Video Ambience.”
  • the administrator first selects “Add Comms” to create non-blocking, always-on conferencing channels (“Hoots”), used by traders to facilitate buying and selling of stocks.
  • One or more Hoots may be set up, named, and assigned to various users.
  • the administrator may also interface to existing Hoots via SIP or four-wire audio interface.
  • the administrator may set up one or more communications channels to stream live financial news for traders to monitor. These sources may be interfaced via four-wire audio, SIP, or chosen from online Internet news audio sources.
  • the Hoots can be displayed in different shapes, sizes, and color combinations and moved in virtual and/or augmented reality space by users.
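The Hoot set-up flow above (create, name, style, and assign channels to users) can be sketched as a minimal admin utility. All class names, defaults, and styles below are hypothetical, not from the patent.

```python
# Hypothetical sketch of an admin utility that creates named Hoots
# (always-on conferencing channels) and assigns them to users.
class Hoot:
    def __init__(self, name, shape="rectangle", color="green"):
        self.name = name
        self.shape = shape    # display shape in VR/AR space
        self.color = color
        self.members = set()

class AdminConsole:
    def __init__(self):
        self.hoots = {}

    def add_hoot(self, name, **style):
        self.hoots[name] = Hoot(name, **style)
        return self.hoots[name]

    def assign(self, hoot_name, *users):
        self.hoots[hoot_name].members.update(users)

    def hoots_for(self, user):
        return sorted(h.name for h in self.hoots.values() if user in h.members)

admin = AdminConsole()
admin.add_hoot("equities-desk", color="blue")
admin.add_hoot("news-monitor")
admin.assign("equities-desk", "trader-1", "trader-2")
admin.assign("news-monitor", "trader-1")
```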
  • the administrator selects “Add Data” to add real-time trader metrics to the client communications interface, either interfaced from the source at the physical exchange, through a service such as Bloomberg, or from another third party.
  • the generated client communication interface provides a number of tools, options, and APIs to enable users to interface real-time data streams and bring them into the VR domain.
  • Once the telemetry data feed is set up, named, and saved, it can be assigned to one or more users.
  • the data feed appears in virtual and/or augmented reality space to users as a screen that can be sized and moved to the desired location.
  • the administrator selects “Add Video” to add a live financial news TV channel.
  • source options and strategies for acquisition vary.
  • the source is cable TV and the feed acquired via a video card on a client side PC running software that acquires the feed and brings it into the VR client communications interface for distribution to users.
  • the video feed appears as a screen that can be sized and moved around in virtual and/or augmented space to the desired location.
  • the administrator selects “Add Ambience” and chooses from a library of high-resolution 360 photos to simulate a stock exchange, or uploads a 360 photo of their own, perhaps taken from an actual stock exchange.
  • a previously recorded, looping 360 video can also be used, or a 360 live stream.
  • the backdrop appears as the created world around them.
  • the administrator can test the VR experience, select to “Go Live”, and send out invites via email or text to assigned users.
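The walkthrough above (create an experience, add components, agree to per-user fees, go live, send invites) can be condensed into a small sketch. The per-user rate and every identifier here are invented for illustration only.

```python
# Illustrative provisioning sketch; names and pricing are assumptions.
class Experience:
    PRICE_PER_USER = 10.0  # assumed monthly per-user rate, for illustration

    def __init__(self, name):
        self.name = name
        self.components = []
        self.users = []
        self.live = False

    def add_component(self, component):
        """E.g. "Comms", "Data Feed", "Video", "Ambience"."""
        self.components.append(component)

    def monthly_fee(self, user_count):
        return user_count * self.PRICE_PER_USER

    def go_live(self, users):
        """Activate the experience and produce invites for assigned users."""
        self.users = list(users)
        self.live = True
        return [f"invite:{u}" for u in users]   # stand-in for email/text

exchange = Experience("Virtual Stock Exchange")
for comp in ["Comms", "Data Feed", "Video", "Ambience"]:
    exchange.add_component(comp)
invites = exchange.go_live(["trader-1", "trader-2"])
```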
  • a rocket launch control room can be created in VR as the client communications interface.
  • In this application, one can extend a rocket launch control room with a physical presence to remote participants.
  • a “stand alone” virtual control room (client interface) with no physical presence can be created around this application as well, or most other applications. Training, simulation, design, and education purposes are possible applications for purely virtual control room experiences, as well as production applications that do not require a physical presence.
  • the customer creates an account online from a hosting website that provides the SaaS software for generating the client communications interface.
  • the system administrator can then login to a private and secure web-based utility that enables the creation of custom virtual reality experiences.
  • To set up a virtual rocket launch control room, the administrator selects a menu option to create a new experience and name it appropriately.
  • the administrator designates the number of users that require access to the experience, pricing is calculated, and the customer agrees to the fees, billed monthly until discontinued.
  • the administrator can then import or input user information and contact information and later in the process send out invites.
  • all users may or may not be assigned to the same virtual experience(s) set up by the administrator and within each virtual experience, may or may not be assigned all components.
  • Initial components could include, and can be listed as: “Add Comms”, “Add Data Feed”, “Add Video”, “Add Control Surface”, “Add White Boarding”, and “Add Audio/Video Ambience.”
  • the administrator first selects “Add Comms” to create non-blocking, always-on conferencing channels used by launch personnel to facilitate rocket launches.
  • One or more channels may be set up, named, and assigned to various users.
  • the administrator may also interface to existing communications systems via SIP or four-wire audio interface.
  • the administrator may set up one or more communications channels to live stream audio monitor sources, such as weather reports. These sources may be interfaced via four-wire audio, SIP, or chosen from online Internet news/audio sources.
  • the channels can be displayed in different shapes, sizes, and color combinations and moved in virtual reality space by users.
  • the channels can also be associated with images and objects in virtual reality space.
  • the administrator selects “Add Data” to add one or more real-time telemetry feeds, interfaced from the source at the physical rocket launch facility.
  • the client communications interface generating software provides a number of tools, options, and APIs to enable users to interface real-time data streams and bring them into the VR domain.
  • Once the telemetry data feed is set up, named, and saved, it can be assigned to one or more users.
  • the data feed appears in VR space to users as a screen that can be sized and moved to the desired location.
  • the administrator selects “Add Video” to add one or more live video monitoring feeds showing certain views of the rocket.
  • source options and strategies for acquisition vary.
  • the source is cameras placed at the physical launch site, and the feed is acquired via a video card on a client-side PC running software from VR Control Rooms.
  • the software acquires the feed(s) and brings them into the hosting server for distribution to users via the client communications interface.
  • the video feed(s) appear as screens that can be sized and moved around in virtual space to the desired location.
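The capture-and-distribute path described above (client-side acquisition, hosting server, fan-out to conference users) can be sketched as follows. All names are hypothetical, and strings stand in for real video frames.

```python
# Illustrative sketch of the feed path: a client-side capture process
# forwards frames to a hosting server, which fans them out to subscribers.
class HostingServer:
    def __init__(self):
        self.subscribers = {}   # user -> list of delivered frames

    def subscribe(self, user):
        self.subscribers[user] = []

    def publish(self, frame):
        """Fan a frame out to every subscribed user."""
        for frames in self.subscribers.values():
            frames.append(frame)

class CaptureClient:
    def __init__(self, server):
        self.server = server

    def acquire_and_forward(self, frame):
        # In a real system this would read from a video capture card.
        self.server.publish(frame)

server = HostingServer()
server.subscribe("operator-1")
server.subscribe("operator-2")
client = CaptureClient(server)
client.acquire_and_forward("frame-001")
```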
  • the administrator selects “Add Ambience” and chooses from a library of high resolution 360 photos, or uploads a 360 photo of their own, perhaps taken from the actual rocket launch site. A previously recorded, looping 360 video can also be used, or a 360 live stream.
  • the ambience selected appears as the backdrop of the virtually created world around them.
  • the administrator can test the VR experience, select to “Go Live”, and send out invites via email or text to assigned users.
  • a television remote truck (OB Van) control room can be created in VR and/or augmented reality.
  • the customer creates an account online from the hosting website.
  • the system administrator can then login to a private and secure web-based utility that enables the creation of custom virtual and/or augmented reality experiences.
  • To set-up a virtual and/or augmented television control room the administrator selects a menu option to create a new experience and name it appropriately.
  • the administrator designates the number of users that require access to the experience, pricing is calculated, and the customer agrees to the fees, billed monthly until discontinued.
  • the administrator can then import or input user information and contact information and later in the process send out invites.
  • all users may or may not be assigned to the same virtual and/or augmented experience(s) set up by the administrator and within each virtual and/or augmented experience, may or may not be assigned all components.
  • Initial components include and can be listed as: “Add Comms”, “Add Data Feed”, “Add Video”, “Add Control Surface”, “Add White Boarding”, and “Add Audio/Video Ambience.”
  • the administrator first selects “Add Comms” to create non-blocking, always-on conferencing channels used by production personnel to facilitate television productions.
  • One or more channels may be set up, named, and assigned to various users.
  • the administrator may also interface to existing communications systems via SIP or four-wire audio interface.
  • the administrator may set up one or more monitor channels to live stream program audio sources. These sources may be interfaced via four-wire audio or SIP.
  • talk/listen selector images can be displayed in different shapes, sizes, and color combinations and moved in virtual and/or augmented reality space by users.
  • the channels can also be associated with images and objects in VR space.
  • the administrator selects “Add Video” to add one or more live video monitoring feeds showing various views of the production location.
  • source options and strategies for acquisition vary.
  • the source is cameras placed at the physical production site and the feed is acquired via a video card on a client-side PC running software from the hosting software company.
  • the software acquires the feed(s) and brings it into the hosting server for distribution to users via the client communications interface.
  • the video feed(s) appear as screens that can be sized and moved around in 3D VR space to the desired location.
  • the administrator selects “Add Ambience” and chooses from a library of high resolution 360 photos, or uploads a 360 photo of their own, perhaps taken from the actual production site as well as the truck control room.
  • a previously recorded, looping 360 video can also be used, or a 360 live stream.
  • the ambience selected appears as the backdrop of the virtually created world around them.
  • the administrator can test the VR experience, select to “Go Live”, and send out invites via email or text to assigned users.

Abstract

The present invention provides systems and methods employing a conferencing system for facilitating enhanced communication between users. In certain embodiments, the conferencing system comprises a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication. In particular embodiments, the communication interface is in communication with at least one additional component selected from: a video component, a data component (e.g., that provides non-audio data to one or more of said users), an audio/video ambience component, and a whiteboard component.

Description

  • The present application is a continuation of U.S. application Ser. No. 16/916,332, filed Jun. 30, 2020, which is a continuation of U.S. application Ser. No. 15/833,241, filed Dec. 6, 2017, now U.S. Pat. No. 10,701,319, which claims priority to U.S. Provisional application Ser. No. 62/436,892, filed Dec. 20, 2016, each of which are herein incorporated by reference in their entireties.
  • FIELD OF THE INVENTION
  • The present invention provides systems and methods employing a conferencing system for facilitating enhanced communication between users. In certain embodiments, the conferencing system comprises a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication. In particular embodiments, the communication interface is in communication with at least one additional component selected from: a video component, a data component (e.g., that provides non-audio data to one or more of said users), an audio/video ambience component, and a whiteboard component.
  • BACKGROUND
  • According to IDC, total revenue for virtual reality and augmented reality is projected to increase from $5.2 billion in 2016 to over $162 billion by 2020. While initial market growth is being driven by entertainment and gaming, Deloitte Digital says the largest market opportunities lie within enterprises and the greatest level of adoption will be around collaboration and training applications. Teams that are not in the same physical environment will be able to enter virtual environments to exchange information and ideas in ways that surpass two-dimensional video and conferencing. Virtualized reality collaboration will take how we communicate and share ideas and concepts to a completely new level. However, the underlying technology for maximizing the potential of virtualized or augmented reality collaboration is lacking.
  • SUMMARY OF THE INVENTION
  • The present invention provides systems and methods employing a conferencing system for facilitating enhanced communication between users. In certain embodiments, the conferencing system comprises a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication. In particular embodiments, the communication interface is in communication with at least one additional component selected from: a video component, a data component (e.g., that provides non-audio data to one or more of said users), an audio/video ambience component, and a whiteboard component.
  • In some embodiments, provided herein is a conferencing system for facilitating enhanced communication between users, the conferencing system comprising: a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication.
  • In certain embodiments, the communication interface is in communication with a video component that provides video to one or more of said users (e.g., allowing a 180 degree . . . 270 degree . . . or 360 degree video view for the user). In particular embodiments, the communication interface is in communication with a data component that provides non-audio data to one or more of said users (e.g., allowing a 180 degree . . . 270 degree . . . or 360 degree data view for the user). In some embodiments, the communication interface is in communication with an audio/video ambience component that provides audio/video to one or more of said users. In further embodiments, the communication interface is in communication with a whiteboard component that provides a whiteboard function to one or more of said users (e.g., allowing a 180 degree . . . 270 degree . . . or 360 degree whiteboard view for the user). In certain embodiments, the communication interface is configured to allow a user to navigate by swiping, pinching, zooming in, or zooming out on the content being viewed (e.g., the video, data presented, whiteboard, etc.).
  • In some embodiments, the communication interface is in communication with an audio analysis component that analyzes incoming human speech audio data for: 1) substantive content and/or 2) human emotion content. In certain embodiments, the audio analysis component comprises artificial intelligence software and/or machine learning software. In further embodiments, the substantive content comprises situational context, wherein the situational context comprises at least one situation selected from the group consisting of: a medical emergency, a product or service complaint, a financial inquiry, a product or service order, a product or service review, a credit card inquiry, and a request to display a whiteboard. In further embodiments, the human emotion content comprises at least one human emotion selected from the group consisting of: distress, pain, anger, frustration, happiness, satisfaction, annoyance, and panicking.
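For illustration, the audio analysis step described above might be sketched as follows. This is a minimal, keyword-based stand-in for the artificial intelligence/machine learning software the embodiment contemplates; the situation and emotion category names come from the text, but the keyword lists, function name, and the assumption that speech has already been transcribed to text are illustrative only.

```python
# Hypothetical sketch of the audio analysis component. A real system would
# run speech-to-text plus trained classifiers; a keyword heuristic stands in
# for both here.

SITUATION_KEYWORDS = {
    "medical emergency": {"ambulance", "unconscious", "bleeding"},
    "product or service complaint": {"broken", "refund", "defective"},
    "request to display a whiteboard": {"whiteboard", "sketch", "diagram"},
}

EMOTION_KEYWORDS = {
    "anger": {"furious", "outraged"},
    "distress": {"help", "scared"},
    "satisfaction": {"thanks", "great", "perfect"},
}

def analyze_transcript(text):
    """Return (situations, emotions) detected in a speech transcript."""
    words = set(text.lower().split())
    situations = [s for s, kws in SITUATION_KEYWORDS.items() if words & kws]
    emotions = [e for e, kws in EMOTION_KEYWORDS.items() if words & kws]
    return situations, emotions
```

In a deployed embodiment, the detected situational context could then trigger interface actions (for example, a detected whiteboard request could open the whiteboard component).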
  • In some embodiments, provided herein is a non-transitory computer readable storage media having instructions stored thereon that, when executed by a conferencing system, direct the conferencing system to perform a method for facilitating enhanced communication between users, the method comprising: during a conference session, providing a virtual and/or augmented conference with multi-channel, multi-access, always-on, and non-blocking communication between a plurality of users.
  • In some embodiments, provided herein are systems for generating a client communication interface for a conference session comprising: a) a computer processor, and b) non-transitory computer memory comprising one or more computer programs and a database, wherein said one or more computer programs comprises virtual and/or augmented reality client communication interface software, and wherein said one or more computer programs, in conjunction with said computer processor, is/are configured to generate a client communication interface for a conference session with a plurality of users by sending and receiving information from the following first components: i) a com system component, wherein said com system component is configured to, during a conference session, provide a multi-channel, multi-access, always-on, and non-blocking communication; ii) a video system component, wherein said video component provides video to said plurality of users; and iii) a data component, wherein said data component provides non-audio data to said plurality of users.
  • In particular embodiments, the one or more computer programs, in conjunction with said computer processor, is/are further configured to generate said client communication interface for a conference session with a plurality of users by sending and receiving information from at least one of the following second components: i) audio/video ambience component that provides audio/video to said plurality of users, and ii) a whiteboard component that provides a whiteboard function to said plurality of users. In certain embodiments, the virtual and/or augmented reality client communication interface software comprises centrally located SaaS software. In other embodiments, the SaaS software is hosted on the Internet (e.g., and the users log into the internet to access a client communication interface).
  • In some embodiments, provided herein are systems and methods for generating a client communications interface (e.g., a virtual reality/augmented reality communication interface) for users, such as for collaboration and/or training, which is generated by being in communication with a com component, wherein said com component enables complex, multi-channel voice collaboration, audio routing, and monitoring. In certain embodiments, provided herein are systems and methods for generating a client communication interface (e.g., for collaboration and/or training), that is in communication with at least two of the following components: 1) a com component (e.g., Matrix & PBX communications), which enables complex, multi-channel voice collaboration, audio routing, and monitoring, seamlessly integrated with standard PBX capabilities, and interoperable with communication systems outside VR; 2) a data LAN component for real-time telemetry data monitoring/interfacing; 3) a video system component, for multi-channel video monitoring/interfacing; 4) a control console component, providing a control surface, for controlling at least one of: lighting, audio switching, audio mixing consoles, video switching, and local or remote audio and video sources, local and remote telemetry data, and more from an intuitive UI; 5) a white board component for 3D white boarding, and 6) an audio system component for providing sound. In certain embodiments, at least three, at least four, at least five, or all six of such components are in communication with the client communication interface. In some embodiments, one or both of the following additional components are in communication with the client communication interface: 7) an audio/visual ambience component that, for example, creates the feeling of “real life” application specific experiences, and/or 8) a geo-positioning (e.g., 3D or 2D) and mapping component.
  • As used herein the terms “processor” and “central processing unit” or “CPU” are used interchangeably and refer to a device that is able to read a program from a computer memory (e.g., ROM or other computer memory) and perform a set of steps according to the program.
  • As used herein, the terms “computer memory” and “computer memory device” refer to any storage media readable by a computer processor. Examples of computer memory include, but are not limited to, RAM, ROM, computer chips, digital video disc (DVDs), compact discs (CDs), hard disk drives (HDD), and magnetic tape.
  • As used herein, the term “computer readable medium” refers to any device or system for storing and providing information (e.g., data and instructions) to a computer processor. Examples of computer readable media include, but are not limited to, DVDs, CDs, hard disk drives, magnetic tape and servers for streaming media over networks.
  • As used herein the terms “multimedia information” and “media information” are used interchangeably to refer to information (e.g., digitized and analog information) encoding or representing audio, video, and/or text. Multimedia information may further carry information not corresponding to audio or video. Multimedia information may be transmitted from one location or device to a second location or device by methods including, but not limited to, electrical, optical, and satellite transmission, and the like.
  • As used herein the term “audio information” refers to information (e.g., digitized and analog information) encoding or representing audio. For example, audio information may comprise encoded spoken language with or without additional audio. Audio information includes, but is not limited to, audio captured by a microphone and synthesized audio (e.g., computer generated digital audio).
  • As used herein the term “video information” refers to information (e.g., digitized and analog information) encoding or representing video. Video information includes, but is not limited to video captured by a video camera, images captured by a camera, and synthetic video (e.g., computer generated digital video).
  • As used herein the term “text information” refers to information (e.g., analog or digital information) encoding or representing written language or other material capable of being represented in text format (e.g., corresponding to spoken audio).
  • As used herein the term “configured to receive multimedia information” refers to a device that is capable of receiving multimedia information. Such devices contain one or more components that can receive signal carrying multimedia information.
  • As used herein the term “encode” refers to the process of converting one type of information or signal into a different type of information or signal to, for example, facilitate the transmission and/or interpretability of the information or signal. For example, audio sound waves can be converted into (i.e., encoded into) electrical or digital information.
  • As used herein the term “in electronic communication” refers to electrical devices (e.g., computers, processors, conference bridges, communications equipment) that are configured to communicate with one another through direct or indirect signaling. For example, a conference bridge that is connected to a processor through a cable or wire, such that information can pass between the conference bridge and the processor, are in electronic communication with one another. Likewise, a computer configured to transmit (e.g., through cables, wires, infrared signals, telephone lines, etc) information to another computer or device, is in electronic communication with the other computer or device.
  • As used herein the term “transmitting” refers to the movement of information (e.g., data) from one location to another (e.g., from one device to another) using any suitable means.
  • DESCRIPTION OF THE DRAWING
  • FIG. 1 shows an exemplary system architecture for generating a client communication interface, that is in communication with a number of systems (e.g., audio system, com system, video system, data LAN), that is hosted by a virtual and/or augmented control room SaaS software, and that can be linked to any given number of users (e.g., so that they may participate in a virtual and/or augmented reality conference together by, for example, wearing a VR headset that provides a visual and audio depiction of the client communication interface).
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention provides systems and methods employing a conferencing system for facilitating enhanced communication between users. In certain embodiments, the conferencing system comprises a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication. In particular embodiments, the communication interface is in communication with at least one additional component selected from: a video component, a data component (e.g., that provides non-audio data to one or more of said users), an audio/video ambience component, and a whiteboard component.
  • In some embodiments, provided herein are systems and methods for generating a client communications interface (e.g., a virtual reality/augmented reality communication interface) for users, such as for collaboration and/or training, which is in communication with a com component, wherein said com component enables complex, multi-channel voice collaboration, audio routing, and monitoring. In certain embodiments, provided herein are systems and methods for generating a client communication interface that is in communication with at least two of the following components: 1) a com component (e.g., Matrix & PBX communications), which enables complex, multi-channel voice collaboration, audio routing, and monitoring, seamlessly integrated with standard PBX capabilities, and interoperable with communication systems outside VR; 2) a data LAN component for real-time telemetry data monitoring/interfacing; 3) a video system component, for multi-channel video monitoring/interfacing; 4) a control console component, providing a control surface, for controlling at least one of: lighting, audio switching, audio mixing consoles, video switching, and local or remote audio and video sources, local and remote telemetry data, and more from an intuitive UI; 5) a white board component for 3D white boarding, and 6) an audio system component for providing sound. In certain embodiments, at least three, at least four, at least five, or all six of such components are provided together in communication with the client communications interface. In some embodiments, one or both of the following additional components are provided in communication with the communications interface: 7) an audio/visual ambience component that, for example, creates the feeling of “real life” application specific experiences, and/or 8) a geo-positioning (e.g., 2D or 3D) and mapping component.
  • In particular embodiments, the com component (e.g., a Matrix & PBX communications component) enables multi-channel, multi-access, always-on, non-blocking communications between users, meaning anyone can listen and/or speak to one or more users, in any complexity and without limitation. Such a component, in certain embodiments, enables audio routing, monitoring of virtually unlimited audio channels, and interfacing with source feeds via four-wire audio or SIP. Moreover, this component, in certain embodiments, incorporates an integrated, fully functional SIP PBX, to add all standard PBX capabilities, and enable seamless communications between matrix and PBX users. In certain embodiments, a gaze-to-talk (GTT) capability enables activation of voice or audio channels by looking or putting focus on (gazing) towards a selector, object, or form associated with any given channel. Depending on the VR platform and configuration, a “tap” or click of a game control button can latch the channel. The com component, in particular embodiments, is highly interoperable and readily interfaces with real world IP PBXs, hardware intercom systems, and two-way radios for seamless communications between VR control rooms, physical control rooms, and field operations.
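The gaze-to-talk behavior described above can be sketched as a per-frame dwell timer: holding gaze on a channel selector opens the channel, and a controller tap latches it. The 0.5-second dwell threshold, class name, and latch-toggle semantics are illustrative assumptions, not details taken from the specification.

```python
# Sketch of gaze-to-talk (GTT) channel activation. Assumes a per-frame
# update loop that reports which channel selector (if any) is under gaze.

class GazeToTalk:
    DWELL_SECONDS = 0.5  # assumed dwell time before the channel opens

    def __init__(self):
        self.active_channel = None
        self.latched = False
        self._gaze_target = None
        self._gaze_time = 0.0

    def update(self, gazed_channel, dt):
        """Call every frame with the selector under gaze (or None)."""
        if gazed_channel != self._gaze_target:
            # Gaze moved: restart the dwell timer for the new target.
            self._gaze_target = gazed_channel
            self._gaze_time = 0.0
            if not self.latched:
                self.active_channel = None  # unlatched channels close
        elif gazed_channel is not None:
            self._gaze_time += dt
            if self._gaze_time >= self.DWELL_SECONDS:
                self.active_channel = gazed_channel

    def tap(self):
        """A controller tap toggles the latch on the open channel."""
        if self.active_channel is not None:
            self.latched = not self.latched
```

A latched channel stays open when the user looks away, mirroring the "tap or click of a game control button can latch the channel" behavior.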
  • In some embodiments, the data component is a real-time telemetry data monitoring/interfacing component that allows, for example, users to monitor real-time telemetry data on one or more viewing surfaces that can be sized and placed anywhere within the three dimensional space of the client communications interface. The data component may also provide software to interface with telemetry data from local or remote sources.
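A telemetry viewing surface of the kind described above might be modeled as follows. The class, field names, and default placement values are assumptions for illustration and are not part of the patented implementation.

```python
# Illustrative sketch of a telemetry viewing surface: a named screen that
# can be resized and repositioned in the 3D space of the client interface
# and that holds the most recent sample from a real-time feed.

class TelemetryScreen:
    def __init__(self, name, position=(0.0, 1.5, -2.0), size=(1.0, 0.6)):
        self.name = name
        self.position = position   # (x, y, z) placement in VR space
        self.size = size           # (width, height) of the surface
        self.latest = {}           # most recent telemetry sample

    def move_to(self, x, y, z):
        """Place the screen at the desired location in 3D space."""
        self.position = (x, y, z)

    def resize(self, width, height):
        self.size = (width, height)

    def on_sample(self, sample):
        """Merge a real-time telemetry sample (a dict) into the display."""
        self.latest.update(sample)

screen = TelemetryScreen("booster-telemetry")
screen.on_sample({"altitude_m": 1200, "velocity_ms": 340})
screen.move_to(0.5, 1.2, -1.5)
```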
  • In particular embodiments, the video system component is a multi-channel video monitoring/interfacing component that allows, for example, users to monitor one or more video surfaces that can be sized and placed anywhere in three dimensional space created in the client communications interface. Such component, in certain embodiments, provides software to interface with remote video sources.
  • In some embodiments, the data component comprises control surfaces that allow, for example, users to control both physical and soft systems in the client communication interface (e.g., in the virtual reality and augmented reality session that is generated), including communications, lighting, audio switchers and mixing consoles, machine/automation control, and more. In particular embodiments, such components support industry standard protocols for controlling smart systems via web-based applications and also make the control APIs available to third parties.
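One way to picture such a control surface is as a thin dispatch layer: a virtual fader or button maps a user action to a command sent to a physical or soft system over some transport. Everything here (class name, target names, command strings, the callback-based transport) is a hypothetical sketch; the actual protocols and APIs are not specified in this text.

```python
# Hypothetical control-surface dispatch. The protocol layer (e.g. a
# web-based or message-bus transport) is stubbed out as a plain callback
# taking (target, command, value).

class ControlSurface:
    def __init__(self, send):
        self._send = send  # transport callback: (target, command, value)

    def set_light_level(self, fixture, level):
        """Clamp the level to [0, 1] and forward it to the lighting system."""
        self._send("lighting", f"set:{fixture}", max(0.0, min(1.0, level)))

    def switch_video(self, source):
        """Cut the video switcher to the named source."""
        self._send("video-switcher", "take", source)

sent = []
surface = ControlSurface(lambda *msg: sent.append(msg))
surface.set_light_level("house", 0.8)
surface.switch_video("cam-2")
```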
  • In further embodiments, the data component provides three-dimensional (3D) white boarding, which enables users, for example, to mark up, write, capture notes, and draw, in collaboration with others, in three dimensional space in the client communications interface.
  • In some embodiments, the data component (e.g., that provides non-audio data) provides a geo-positioning (e.g., 2D or 3D) and mapping component. In particular embodiments, the systems and methods herein collect geo-position data (e.g., 2D or 3D geo-positioning data) from one or more users (e.g., users in the field) and then display such data as part of the conferencing systems described herein.
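The geo-position collection and display described above can be sketched as a tracker that records the latest fix per user and exposes the set of current positions for the map overlay. The class, field names, and coordinate convention are illustrative assumptions.

```python
# Hedged sketch of the geo-positioning component: collect (lat, lon, alt)
# fixes reported by field users and expose them for display on a shared map.

from dataclasses import dataclass

@dataclass
class GeoFix:
    user: str
    lat: float
    lon: float
    alt_m: float = 0.0  # 2D fixes default to zero altitude

class GeoTracker:
    def __init__(self):
        self._fixes = {}  # user -> most recent GeoFix

    def report(self, fix: GeoFix):
        """Record the latest position fix for a user, replacing older ones."""
        self._fixes[fix.user] = fix

    def positions(self):
        """Current map overlay: user -> (lat, lon, alt)."""
        return {u: (f.lat, f.lon, f.alt_m) for u, f in self._fixes.items()}
```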
  • In certain embodiments, the audio/visual ambience component allows, for example, users to be able to upload images and sounds, stream video, and select from a repository of audio and visual content that become the basis of the “backdrop” of their virtual and/or augmented experiences in the client communications interface. Ambience may include, for example, one or more live cameras and microphones placed in a physical control room to enhance the feeling of “being there” for remote operators using only VR or AR headsets. This is especially useful in limited space control rooms such as video trucks and aircraft testing mobile units, and also suitable for remote training and instructional classes.
  • In further embodiments, the geo-positioning (e.g., 2D or 3D) and mapping component allows, for example, the use of Google Earth (or similar program) in VR, which allows users to walk through any location on earth in 3D virtual and/or augmented reality space with voice and/or audio monitor channels tied to the geo position of each user, both in VR and in the “real world”, and appearing on the map. Users are able to activate talk/listen paths by “gazing” at a location/selector and patch channels by drawing an actual line between two or more. In command and control applications, central command has an actual bird's eye view of all their assets and is able to better coordinate and communicate in real-time.
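The "draw a line to patch" interaction can be sketched as a store of unordered user pairs: drawing a line between two positions on the map creates a two-way audio patch between their channels. Class and method names are illustrative assumptions.

```python
# Sketch of map-based channel patching: each drawn line becomes an
# unordered pair, so a patch is symmetric (A-B is the same as B-A).

class PatchPanel:
    def __init__(self):
        self._patches = set()

    def draw_line(self, user_a, user_b):
        """Drawing a line between two map positions patches their channels."""
        self._patches.add(frozenset((user_a, user_b)))

    def erase_line(self, user_a, user_b):
        """Removing the line drops the patch."""
        self._patches.discard(frozenset((user_a, user_b)))

    def is_patched(self, user_a, user_b):
        return frozenset((user_a, user_b)) in self._patches
```

Using a `frozenset` for each pair is one simple way to make the patch direction-agnostic, matching the symmetric talk/listen path the text describes.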
  • In certain embodiments, the above mentioned components can be adjusted and placed by users into the desired location within 3D, virtual and/or augmented reality space of the generated client communications interface. Moreover, users have the ability to select certain components in AR or VR only, to better experience their physical environments.
  • EXEMPLARY APPLICATIONS
  • Provided below are a number of exemplary applications of the systems and methods.
  • 1. VR Stock Exchange
  • Using the SaaS platform software that generates the client communications interface, in communications with the various components, a stock exchange can be created in VR in the client communications interface. In this application, one can extend a stock exchange with a physical presence, such as the NYSE, to traders in different geographic locations. In certain embodiments, a purely virtual and/or augmented stock exchange could be created as well with no physical presence. The customer may be, for example, a major bank that wants to provide its many traders (e.g., one thousand traders) located around the world an “on floor” trading experience.
  • For example, to begin, the customer creates an account online from the hosting website (see, e.g., FIG. 1 ). The system administrator can then login to a private and secure web-based utility that enables the creation of custom virtual and/or augmented reality experiences. To set-up a virtual and/or augmented stock exchange the administrator selects a menu option to create a new experience and name it appropriately. The administrator then designates, for example, one thousand users, pricing is calculated, and the customer agrees to the fees, billed monthly until discontinued. The administrator can then import or input user information and contact information and later in the process send out invites. Note, all users may or may not be assigned to the same virtual and/or augmented experience(s) set up by the administrator and within each virtual and/or augmented experience of the client communications interface, may or may not be assigned all components. In this fashion the administrator can determine, based on criteria such as job function and rank, which experiences and components are appropriate or needed on a per user basis. The administrator is then presented with a menu of components that can be used to create the desired experience. Initial components include, for example, and can be listed as: “Add Comms”, “Add Data Feed”, “Add Video”, “Add Control Surface”, “Add White Boarding”, and “Add Audio/Video Ambience.”
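The per-user assignment described above, where the administrator grants each user only the components appropriate to their job function and rank, can be sketched as a small data model. The component names come from the menu in the text; the class and method names are illustrative assumptions.

```python
# Illustrative model of the administrator's setup flow: an experience holds
# per-user grants drawn from the fixed component menu.

COMPONENT_MENU = {
    "Add Comms", "Add Data Feed", "Add Video",
    "Add Control Surface", "Add White Boarding", "Add Audio/Video Ambience",
}

class Experience:
    def __init__(self, name):
        self.name = name
        self.assignments = {}  # user -> set of granted components

    def assign(self, user, components):
        """Grant a user a subset of the component menu."""
        unknown = set(components) - COMPONENT_MENU
        if unknown:
            raise ValueError(f"not on the component menu: {unknown}")
        self.assignments.setdefault(user, set()).update(components)

    def components_for(self, user):
        """Components this user will see on entering the experience."""
        return self.assignments.get(user, set())
```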
  • The administrator first selects “Add Comms” to create non-blocking, always-on conferencing channels (“Hoots”), used by traders to facilitate buying and selling of stocks. One or more Hoots may be set up, named, and assigned to various users. The administrator may also interface to existing Hoots via SIP or four-wire audio interface. Also using the “Add Comms” option the administrator may set up one or more communications channels to stream live financial news for traders to monitor. These sources may be interfaced via four-wire audio, SIP, or chosen from online Internet news audio sources. Once created, the Hoots can be displayed in different shapes, sizes, and color combinations and moved in virtual and/or augmented reality space by users.
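The non-blocking, always-on property of such channels means every active talker is mixed for every listener rather than any speaker seizing the channel. A minimal per-frame mixing sketch, with frames as lists of float samples and a naive hard limiter (all details assumed for illustration):

```python
# Minimal sketch of non-blocking channel mixing: one audio frame from each
# active talker on a channel is summed, so no speaker blocks another.

def mix_channel(talker_frames):
    """Sum one frame per active talker; returns the mixed frame."""
    if not talker_frames:
        return []
    length = len(talker_frames[0])
    mixed = [0.0] * length
    for frame in talker_frames:
        for i in range(length):
            mixed[i] += frame[i]
    # Naive hard limiter so the sum stays within [-1.0, 1.0].
    return [max(-1.0, min(1.0, s)) for s in mixed]
```

A production mixer would use per-listener minus-one mixes and proper gain control, but the summation is what makes the channel multi-access and non-blocking.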
  • Next the administrator selects “Add Data” to add real-time trader metrics to the client communications interface, either interfaced from the source at the physical exchange, through a service such as Bloomberg, or from another third party. In this example, it is assumed that the bank has an existing real-time data feed from the exchange. The generated client communication interface provides a number of tools, options, and APIs to enable users to interface real-time data streams and bring them into the VR domain. Once the telemetry data feed is set-up, named, and saved, it can be assigned to one or more users. The data feed appears in virtual and/or augmented reality space to users as a screen that can be sized and moved to the desired location.
  • Next the administrator selects “Add Video” to add a live financial news TV channel. As with the other components, source options and strategies for acquisition vary. In this example the source is cable TV and the feed is acquired via a video card on a client-side PC running software that acquires the feed and brings it into the VR client communications interface for distribution to users. When users enter the virtual and/or augmented reality experience, the video feed appears as a screen that can be sized and moved around in virtual and/or augmented space to the desired location.
  • Next the administrator selects “Add Ambience” and chooses from a library of high resolution 360 photos to simulate a stock exchange, or uploads a 360 photo of their own, perhaps taken from an actual stock exchange. A previously recorded, looping 360 video can also be used, or a 360 live stream. When users enter the VR experience of the client communications interface, the backdrop appears as the created world around them.
  • Once all required components are created the administrator can test the VR experience, select to “Go Live”, and send out invites via email or text to assigned users.
  • 2. VR Rocket Launch
  • Using the SaaS platform software and system architecture (e.g., FIG. 1 ), a rocket launch control room can be created in VR as the client communications interface. In this application one can extend a rocket launch control room with a physical presence to remote participants. Note, a “stand alone” virtual control room (client interface) with no physical presence can be created around this application as well, or most other applications. Training, simulation, design, and education purposes are possible applications for purely virtual control room experiences, as well as production applications that do not require a physical presence.
  • To begin, the customer creates an account online from a hosting website that provides the SaaS software for generating the client communications interface. The system administrator can then login to a private and secure web-based utility that enables the creation of custom virtual reality experiences. To set-up a virtual rocket launch control room the administrator selects a menu option to create a new experience and name it appropriately. The administrator then designates the number of users that require access to the experience, pricing is calculated, and the customer agrees to the fees, billed monthly until discontinued. The administrator can then import or input user information and contact information and later in the process send out invites. Note, all users may or may not be assigned to the same virtual experience(s) set up by the administrator and within each virtual experience, may or may not be assigned all components. In this fashion the administrator can determine, based on criteria such as job function and rank, which experiences and components are appropriate or needed on a per user basis. The administrator is then presented with a menu of components that can be used to create the desired experience in the client communications interface. Initial components could include and can be listed as: “Add Comms”, “Add Data Feed”, “Add Video”, “Add Control Surface”, “Add White Boarding”, and “Add Audio/Video Ambience.”
  • The administrator first selects “Add Comms” to create the non-blocking, always-on conferencing channels used by launch personnel to facilitate rocket launches. One or more channels may be set up, named, and assigned to various users. The administrator may also interface to existing communications systems via SIP or a four-wire audio interface. Also using the “Add Comms” option, the administrator may set up one or more communications channels to live stream audio monitor sources, such as weather reports. These sources may be interfaced via four-wire audio or SIP, or chosen from online Internet news/audio sources. Once created, the channels can be displayed in different shapes, sizes, and color combinations and moved in virtual reality space by users. The channels can also be associated with images and objects in virtual reality space.
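The non-blocking, always-on behavior described above can be sketched as matrix-style routing: any number of users may talk and listen on any channel at the same time, and no talker ever seizes or blocks a channel. The class and names below are hypothetical, chosen only to illustrate the routing idea.

```python
# Minimal sketch of non-blocking channel routing: each listener's mix is the
# set of all active talkers on every channel that listener monitors.
from collections import defaultdict

class ChannelMatrix:
    def __init__(self):
        self.listeners = defaultdict(set)   # channel -> users monitoring it
        self.talkers = defaultdict(set)     # channel -> users currently talking

    def listen(self, user, channel):
        self.listeners[channel].add(user)

    def talk(self, user, channel):
        # Talking never blocks: any number of simultaneous talkers per channel.
        self.talkers[channel].add(user)

    def mix_for(self, user):
        """All talkers this user hears across every channel they monitor."""
        heard = set()
        for ch, users in self.listeners.items():
            if user in users:
                heard |= self.talkers[ch] - {user}   # exclude self (no echo)
        return heard

m = ChannelMatrix()
for u in ("launch_dir", "range_safety", "weather"):
    m.listen(u, "net1")
m.talk("launch_dir", "net1")
m.talk("range_safety", "net1")   # two simultaneous talkers; nothing blocks
assert m.mix_for("weather") == {"launch_dir", "range_safety"}
```

A real system would mix audio streams rather than sets of names, but the routing rule, every listener hears every talker on a monitored channel, is the essence of "non-blocking."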
  • Next the administrator selects “Add Data Feed” to add one or more real-time telemetry feeds, interfaced from the source at the physical rocket launch facility. The client communications interface generating software provides a number of tools, options, and APIs that enable users to interface real-time data streams and bring them into the VR domain. Once the telemetry data feed is set up, named, and saved, it can be assigned to one or more users. The data feed appears in VR space to users as a screen that can be sized and moved to the desired location.
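The telemetry path described above, a real-time stream brought into the VR domain and shown on an in-VR data screen, can be sketched as follows. The field names and the `TelemetryFeed` API are assumptions for illustration; the actual interfacing tools and APIs are not specified in this form.

```python
# Hedged sketch: a feed object consumes timestamped telemetry samples and
# renders the most recent values for display on a user's in-VR data screen.
import time

class TelemetryFeed:
    def __init__(self, name):
        self.name = name
        self.latest = {}

    def ingest(self, sample: dict) -> None:
        """Called by the acquisition layer for each incoming telemetry frame."""
        self.latest.update(sample)            # newer values overwrite older ones
        self.latest["received_at"] = time.time()

    def render_panel(self) -> str:
        """Text the VR data screen would display for this feed."""
        rows = [f"{k}: {v}" for k, v in self.latest.items() if k != "received_at"]
        return f"[{self.name}]\n" + "\n".join(rows)

feed = TelemetryFeed("ascent-telemetry")
feed.ingest({"altitude_m": 1200, "velocity_mps": 340})
feed.ingest({"altitude_m": 1450})             # partial update; velocity persists
assert "altitude_m: 1450" in feed.render_panel()
```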
  • Next the administrator selects “Add Video” to add one or more live video monitoring feeds showing certain views of the rocket. As with the other components, source options and strategies for acquisition vary. In this example the sources are cameras placed at the physical launch site, and the feed is acquired via a video card on a client-side PC running software from VR Control Rooms. The software acquires the feed(s) and brings them into the hosting server for distribution to users via the client communications interface. When users enter the virtual reality experience, the video feed(s) appear as screens that can be sized and moved around in virtual space to the desired location.
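The capture-to-distribution path, a client-side acquisition process pushing frames to the hosting server, which fans them out to every subscribed user, can be sketched as below. All class and method names are hypothetical; a production system would use a streaming transport rather than in-memory queues.

```python
# Illustrative sketch of server-side fan-out: the capture client pushes frames,
# and every user subscribed to that feed's virtual screen receives a copy.
from collections import defaultdict

class VideoRelay:
    def __init__(self):
        self.subscribers = defaultdict(list)   # feed name -> list of user queues
        self.queues = {}                       # (feed, user) -> that user's queue

    def subscribe(self, feed: str, user: str) -> list:
        """Register a user's in-VR screen for a feed; returns their frame queue."""
        q = self.queues.setdefault((feed, user), [])
        self.subscribers[feed].append(q)
        return q

    def push_frame(self, feed: str, frame: bytes) -> None:
        """Called by the capture client for each acquired frame."""
        for q in self.subscribers[feed]:
            q.append(frame)

relay = VideoRelay()
pad_view = relay.subscribe("pad-cam-1", "launch_dir")
relay.push_frame("pad-cam-1", b"<frame-0>")
assert pad_view == [b"<frame-0>"]
```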
  • Next the administrator selects “Add Control Surface” to allow control of both physical and soft systems used in rocket launches while in VR via the client communications interface. In this application, lighting, audio switching, and machine/automation controls are set up and appear as UI surfaces in VR that can be sized and moved around in 3D space.
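A control surface of this kind amounts to a mapping from in-VR UI events to commands dispatched to physical or software systems. The sketch below illustrates that mapping; the device names and command vocabulary are invented for the example.

```python
# Minimal sketch of a control surface component: a UI control activated in
# VR space is translated into a command for an external system.
class ControlSurface:
    def __init__(self, name):
        self.name = name
        self.bindings = {}   # UI control id -> (target device, command)
        self.log = []        # record of dispatched commands

    def bind(self, control_id, device, command):
        self.bindings[control_id] = (device, command)

    def press(self, control_id):
        """Invoked when a user activates a control on the surface in VR."""
        device, command = self.bindings[control_id]
        self.log.append((device, command))   # stand-in for a real dispatch call
        return device, command

surface = ControlSurface("lighting")
surface.bind("pad_floods_on", device="lighting_rig", command="floods:on")
assert surface.press("pad_floods_on") == ("lighting_rig", "floods:on")
```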
  • Next the administrator selects “Add Ambience” and chooses from a library of high resolution 360 photos, or uploads a 360 photo of their own, perhaps taken from the actual rocket launch site. A previously recorded, looping 360 video can also be used, or a 360 live stream. When users enter the VR experience, the ambience selected appears as the backdrop of the virtually created world around them.
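The three ambience source options named above, a static 360 photo, a previously recorded looping 360 video, or a live 360 stream, can be modeled as a small configuration type. The kind labels and URI scheme here are illustrative assumptions.

```python
# Sketch of the ambience component's source selection: any of the three kinds
# becomes the backdrop of the virtually created world.
from dataclasses import dataclass

VALID_KINDS = {"photo_360", "video_360_loop", "stream_360_live"}

@dataclass
class Ambience:
    kind: str   # one of VALID_KINDS
    uri: str    # library item, uploaded asset, or live stream endpoint

    def __post_init__(self):
        if self.kind not in VALID_KINDS:
            raise ValueError(f"unsupported ambience kind: {self.kind}")

# A library photo, e.g. one taken at the actual launch site.
backdrop = Ambience("photo_360", "library://launch-pad-dawn")
assert backdrop.kind in VALID_KINDS
```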
  • Once all required components are created the administrator can test the VR experience, select to “Go Live”, and send out invites via email or text to assigned users.
  • 3. VR TV Broadcast Truck
  • Using the SaaS platform software for generating a client communications interface, a television remote truck (OB Van) control room can be created in VR and/or augmented reality. In this application, one can extend a live television event control room with a physical presence to remote participants.
  • To begin, the customer creates an account online from the hosting website. The system administrator can then log in to a private and secure web-based utility that enables the creation of custom virtual and/or augmented reality experiences. To set up a virtual and/or augmented television control room, the administrator selects a menu option to create a new experience and names it appropriately. The administrator then designates the number of users that require access to the experience, pricing is calculated, and the customer agrees to the fees, billed monthly until discontinued. The administrator can then import or input user and contact information and later in the process send out invites. Note that users may or may not all be assigned to the same virtual and/or augmented experience(s) set up by the administrator, and within each virtual and/or augmented experience may or may not be assigned all components. In this fashion the administrator can determine, based on criteria such as job function and rank, which experiences and components are appropriate and needed on a per-user basis. The administrator is then presented with a menu of components that can be used to create the desired experience. Initial components may include: “Add Comms”, “Add Data Feed”, “Add Video”, “Add Control Surface”, “Add White Boarding”, and “Audio/Video Ambience.”
  • The administrator first selects “Add Comms” to create non-blocking, always-on conferencing channels used by production personnel to facilitate television productions. One or more channels may be set up, named, and assigned to various users. The administrator may also interface to existing communications systems via SIP or four-wire audio interface. Also using the “Add Comms” option the administrator may set up one or more monitor channels to live stream program audio sources. These sources may be interfaced via four-wire audio or SIP. Once the channels are created, talk/listen selector images can be displayed in different shapes, sizes, and color combinations and moved in virtual and/or augmented reality space by users. The channels can also be associated with images and objects in VR space.
  • Next the administrator selects “Add Video” to add one or more live video monitoring feeds showing various views of the production location. As with the other components, source options and strategies for acquisition vary. In this example the source is cameras placed at the physical production site and the feed is acquired via a video card on a client-side PC running software from the hosting software company. The software acquires the feed(s) and brings it into the hosting server for distribution to users via the client communications interface. When users enter the VR experience, the video feed(s) appear as screens that can be sized and moved around in 3D VR space to the desired location.
  • Next the administrator selects “Add Control Surface” to allow control of both physical and soft systems used in television production while in VR. In this application lighting, audio switching, video switching, graphics control, and machine/automation controls are set up and appear as UI surfaces in VR that can be sized and moved around in 3D space.
  • Next the administrator selects “Add Ambience” and chooses from a library of high resolution 360 photos, or uploads a 360 photo of their own, perhaps taken from the actual production site as well as the truck control room. A previously recorded, looping 360 video can also be used, or a 360 live stream. When users enter the VR experience (or augmented reality experience), the ambience selected appears as the backdrop of the virtually created world around them.
  • Once all required components are created the administrator can test the VR experience, select to “Go Live”, and send out invites via email or text to assigned users.
  • 4. VR Trainer
  • Many possibilities exist to create VR experiences outside mission-critical control rooms with the SaaS client communications interface generating platform. One such application extends to athletic training, coaching, and teaching. Using such a platform, especially its capabilities for interfacing to the “real world”, a trainer can be brought to students participating within VR. The A/V ambience feature is used to bring a live stream of a trainer and the associated real-world background into VR. In this manner the students feel as though they are in the same physical room as the trainer. The matrix communications platform is used so that the trainer can communicate in real time with one or more students simultaneously.
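The trainer scenario above, one trainer addressing any chosen subset of students at once over the matrix communications layer, can be sketched as a simple fan-out. The class and names are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of the trainer-to-students matrix fan-out: the trainer may address
# all joined students, or any selected subset, simultaneously.
class TrainerSession:
    def __init__(self, trainer):
        self.trainer = trainer
        self.students = set()

    def join(self, student):
        self.students.add(student)

    def address(self, targets=None):
        """Return who hears the trainer: everyone, or a chosen subset."""
        if targets is None:
            return set(self.students)
        # Only students actually in the session can be addressed.
        return self.students & set(targets)

session = TrainerSession("coach")
for s in ("ana", "ben", "chloe"):
    session.join(s)
assert session.address() == {"ana", "ben", "chloe"}
assert session.address(["ana", "dana"]) == {"ana"}
```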

Claims (20)

We claim:
1. A conferencing system for facilitating enhanced communication between users, the conferencing system comprising: a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to a multi-channel, multi-access, always-on, and non-blocking communication.
2. The system of claim 1, wherein said communication interface is in communication with a video component that provides video to one or more of said users.
3. The system of claim 1, wherein said communication interface is in communication with a data component that provides non-audio data to one or more of said users.
4. The system of claim 3, wherein said data component comprises a geo-positioning and mapping component.
5. The system of claim 4, wherein said geo-positioning and mapping component is configured to collect geo-position data from one or more users.
6. The system of claim 1, wherein said communication interface is in communication with an audio/video ambience component that provides audio/video to one or more of said users.
7. The system of claim 1, wherein said communication interface is in communication with a whiteboard component that provides a whiteboard function to one or more of said users.
8. The system of claim 1, wherein said communication interface is generated by centrally hosted SaaS software.
9. The system of claim 1, wherein said communication interface is in communication with an audio analysis component that analyzes incoming human speech audio data for: i) substantive content and/or ii) human emotion content.
10. The system of claim 9, wherein said audio analysis component comprises artificial intelligence software and/or machine learning software.
11. The system of claim 9, wherein said substantive content comprises situational context, wherein said situational context comprises at least one situation selected from the group consisting of: a medical emergency, a product or service complaint, a financial inquiry, a product or service order, a product or service review, a credit card inquiry, and a request to display a whiteboard.
12. The system of claim 9, wherein said human emotion content comprises at least one human emotion selected from the group consisting of: distress, pain, anger, frustration, happiness, satisfaction, annoyance, and panicking.
13. A non-transitory computer readable storage media having instructions stored thereon that, when executed by a conferencing system, direct the conferencing system to perform a method for facilitating enhanced communication between users, the method comprising: during a conference session, providing a virtual and/or augmented conference with multi-channel, multi-access, always-on, and non-blocking communication between a plurality of users.
14. A system for generating a client communication interface for a conference session comprising:
a) a computer processor, and
b) non-transitory computer memory comprising one or more computer programs and a database, wherein said one or more computer programs comprises virtual and/or augmented reality client communication interface software, and
wherein said one or more computer programs, in conjunction with said computer processor, is/are configured to generate a client communication interface for a conference session with a plurality of users by sending and receiving information from the following first components:
i) a com system component, wherein said com system component is configured to, during a conference session, provide a multi-channel, multi-access, always-on, and non-blocking communication;
ii) a video system component, wherein said video system component provides video to said plurality of users; and
iii) a data component, wherein said data component provides non-audio data to said plurality of users.
15. The system of claim 14, wherein said one or more computer programs, in conjunction with said computer processor, is/are further configured to generate said client communication interface for a conference session with a plurality of users by sending and receiving information from at least one of the following second components:
i) audio/video ambience component that provides audio/video to said plurality of users, and
ii) a whiteboard component that provides a whiteboard function to said plurality of users.
16. The system of claim 14, wherein said virtual and/or augmented reality client communication interface software comprises centrally located SaaS software.
17. The system of claim 16, wherein said SaaS software is hosted on the internet.
18. The system of claim 14, wherein said data component comprises a geo-positioning and mapping component.
19. The system of claim 18, wherein said geo-positioning and mapping component is configured to collect geo-position data from one or more users.
20. The system of claim 14, wherein said communication interface is in communication with an audio analysis component that analyzes incoming human speech audio data for: i) substantive content and/or ii) human emotion content.
US18/190,693 2016-12-20 2023-03-27 Enhanced virtual and/or augmented communications interface Pending US20230239436A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/190,693 US20230239436A1 (en) 2016-12-20 2023-03-27 Enhanced virtual and/or augmented communications interface

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662436892P 2016-12-20 2016-12-20
US15/833,241 US10701319B2 (en) 2016-12-20 2017-12-06 Enhanced virtual and/or augmented communications interface
US16/916,332 US20200336702A1 (en) 2016-12-20 2020-06-30 Enhanced virtual and/or augmented communications interface
US18/190,693 US20230239436A1 (en) 2016-12-20 2023-03-27 Enhanced virtual and/or augmented communications interface

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/916,332 Continuation US20200336702A1 (en) 2016-12-20 2020-06-30 Enhanced virtual and/or augmented communications interface

Publications (1)

Publication Number Publication Date
US20230239436A1 true US20230239436A1 (en) 2023-07-27

Family

ID=62562216

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/833,241 Active US10701319B2 (en) 2016-12-20 2017-12-06 Enhanced virtual and/or augmented communications interface
US16/916,332 Abandoned US20200336702A1 (en) 2016-12-20 2020-06-30 Enhanced virtual and/or augmented communications interface
US18/190,693 Pending US20230239436A1 (en) 2016-12-20 2023-03-27 Enhanced virtual and/or augmented communications interface

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US15/833,241 Active US10701319B2 (en) 2016-12-20 2017-12-06 Enhanced virtual and/or augmented communications interface
US16/916,332 Abandoned US20200336702A1 (en) 2016-12-20 2020-06-30 Enhanced virtual and/or augmented communications interface

Country Status (1)

Country Link
US (3) US10701319B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10554931B1 (en) 2018-10-01 2020-02-04 At&T Intellectual Property I, L.P. Method and apparatus for contextual inclusion of objects in a conference
CN111107664B * 2018-10-26 2022-04-01 Datang Mobile Communications Equipment Co., Ltd. Resource management method, session management function entity and equipment
US11463499B1 (en) * 2020-12-18 2022-10-04 Vr Edu Llc Storage and retrieval of virtual reality sessions state based upon participants
US11979244B2 (en) * 2021-09-30 2024-05-07 Snap Inc. Configuring 360-degree video within a virtual conferencing system

Citations (2)

Publication number Priority date Publication date Assignee Title
US20120131089A1 (en) * 2010-11-24 2012-05-24 C/O Ipc Systems, Inc. Communication services and application launch tool
US20160373584A1 (en) * 2015-06-18 2016-12-22 Ipc Systems, Inc. Systems, methods and computer program products for performing call swap

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9369587B2 (en) * 2013-02-28 2016-06-14 Avaya Inc. System and method for software turret phone capabilities
US9524588B2 (en) * 2014-01-24 2016-12-20 Avaya Inc. Enhanced communication between remote participants using augmented and virtual reality
CA2941333A1 (en) * 2015-10-30 2017-04-30 Wal-Mart Stores, Inc. Virtual conference room
US9785741B2 (en) * 2015-12-30 2017-10-10 International Business Machines Corporation Immersive virtual telepresence in a smart environment
US10957306B2 (en) * 2016-11-16 2021-03-23 International Business Machines Corporation Predicting personality traits based on text-speech hybrid data

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20120131089A1 (en) * 2010-11-24 2012-05-24 C/O Ipc Systems, Inc. Communication services and application launch tool
US20160373584A1 (en) * 2015-06-18 2016-12-22 Ipc Systems, Inc. Systems, methods and computer program products for performing call swap

Also Published As

Publication number Publication date
US20180176511A1 (en) 2018-06-21
US10701319B2 (en) 2020-06-30
US20200336702A1 (en) 2020-10-22

Similar Documents

Publication Publication Date Title
US20230239436A1 (en) Enhanced virtual and/or augmented communications interface
CN114125523B (en) Data processing system and method
US9935987B2 (en) Participation queue system and method for online video conferencing
CA2757847C (en) System and method for hybrid course instruction
Ziegler et al. Present? Remote? Remotely present! New technological approaches to remote simultaneous conference interpreting
CN101939989B (en) Virtual table
Gunkel et al. Social VR platform: Building 360-degree shared VR spaces
CN114115519B (en) System and method for delivering applications in a virtual environment
CN114201037B Graphical representation-based user authentication system and method
CN108322474B (en) Virtual reality system based on shared desktop, related device and method
KR20220029467A (en) Ad hoc virtual communication between approaching user graphical representations
KR20220029454A (en) System and method for virtually broadcasting from within a virtual environment
Porwol et al. VR-Participation: The feasibility of the Virtual Reality-driven multi-modal communication technology facilitating e-Participation
Farouk et al. Using HoloLens for remote collaboration in extended data visualization
KR20220030178A (en) System and method to provision cloud computing-based virtual computing resources within a virtual environment
JP2005055846A (en) Remote educational communication system
US11825026B1 (en) Spatial audio virtualization for conference call applications
KR101687901B1 (en) Method and system for sharing screen writing between devices connected to network
KR20220029471A (en) Spatial video-based presence
US20180160078A1 (en) System and Method for Producing Three-Dimensional Images from a Live Video Production that Appear to Project Forward of or Vertically Above an Electronic Display
Pfeiffer et al. Ubiquitous Virtual Reality: Accessing Shared Virtual Environments through Videoconferencing Technology.
US20230419624A1 (en) System for hybrid virtual and physical experiences
Støckert et al. Hybrid Learning Spaces with Spatial Audio
Aguilera et al. Spatial audio for audioconferencing in mobile devices: Investigating the importance of virtual mobility and private communication and optimizations
Nassani et al. Designing, Prototyping and Testing of 360° Spatial Audio Conferencing for Virtual Tours

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTRACOM SYSTEMS, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JURRIUS, JOHN;BRAND, DAVID;BRAND, STEPHEN;SIGNING DATES FROM 20180520 TO 20180621;REEL/FRAME:063284/0199

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED