WO2018126134A1 - Unified, browser-based enterprise collaboration platform - Google Patents

Unified, browser-based enterprise collaboration platform

Info

Publication number
WO2018126134A1
WO2018126134A1 (PCT/US2017/068958)
Authority
WO
WIPO (PCT)
Prior art keywords
stream
room
collaboration
client
component
Prior art date
Application number
PCT/US2017/068958
Other languages
English (en)
Inventor
Charles E. GERO
Ahbijit C. MEHTA
Thomas HOUMAN
Brandon O. WILLIAMS
Martin Lohner
Dana Burd
Vladmir SHTOKMAN
Original Assignee
Akamai Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/854,393 external-priority patent/US10250849B2/en
Priority claimed from US15/856,652 external-priority patent/US10542057B2/en
Priority claimed from US15/857,020 external-priority patent/US10291783B2/en
Application filed by Akamai Technologies, Inc. filed Critical Akamai Technologies, Inc.
Priority to EP17888103.3A priority Critical patent/EP3563248B1/fr
Priority claimed from US15/857,694 external-priority patent/US10834514B2/en
Priority claimed from US15/857,781 external-priority patent/US10812598B2/en
Publication of WO2018126134A1 publication Critical patent/WO2018126134A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/103Workflow collaboration or project management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/16Arrangements for providing special services to substations
    • H04L12/18Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1822Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/152Multipoint control units therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/02Details
    • H04L12/16Arrangements for providing special services to substations
    • H04L12/18Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1818Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference, booking network resources, notifying involved parties

Definitions

  • This application relates generally to cloud-based collaboration among users of computing machines.
  • Real-time communications (e.g., videoconferencing, shared document editing, screen sharing, and the like) remain difficult to deliver well over the public Internet.
  • many of the existing technical solutions are not interoperable, and there are still difficult technical problems (e.g., NAT traversal) that can stymie direct peer- to-peer connections, thus dictating the use of relays to ensure connectivity.
  • When relays become overloaded, call quality suffers.
  • multi-party video conferencing typically requires a separate connection for each pair of users, and this approach does not scale.
  • WebRTC (Web Real Time Communications), an Internet standard, was created to make videoconferencing and point-to-point data transfer easier to implement.
  • WebRTC seeks to take the most critical elements of video chat and move them to one of the most commonly used tools for accessing the Internet, namely, a web browser.
  • WebRTC is supported natively (without plugins) by both Google Chrome and Mozilla Firefox. It allows the browser to access the client machine's camera and microphone, provides a method for establishing a direct connection between two users' browsers and for using that connection to send audio and video, and it provides a method for sending arbitrary data streams across a connection. WebRTC also mandates that all data is encrypted.
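  • The following is a minimal, illustrative browser-side sketch of the WebRTC primitives mentioned above (camera/microphone capture, a direct peer connection, and an arbitrary data channel). The signaling transport (sendToPeer) and the STUN server URL are placeholders, not part of the disclosed platform.

```javascript
// Minimal WebRTC sketch; sendToPeer() is a placeholder for whatever signaling
// channel (HTTPS, WebSocket, etc.) carries the SDP and ICE candidates.
async function startCall(sendToPeer) {
  // Access the client machine's camera and microphone.
  const localStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

  // Direct, encrypted connection between two users' browsers.
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.net:3478' }] });
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));

  // An arbitrary data stream alongside audio and video.
  const dataChannel = pc.createDataChannel('collab');

  // Exchange the SDP offer and ICE candidates over the signaling channel.
  pc.onicecandidate = e => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });

  return { pc, dataChannel };
}
```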
  • While WebRTC provides significant advantages, it does not itself address the scaling challenges associated with connectivity across NAT and multi-party conferencing.
  • Connectivity across NAT typically requires a relay infrastructure using TURN (Traversal Using Relays around NAT).
  • multi-user video conferencing over WebRTC requires full mesh connectivity between all users; that is, a separate connection must be established between each pair of users.
  • Each user needs to upload their video (and other data) multiple times - once for each peer - and the resources required grow in a way proportional to the square of the number of users, which does not scale.
  • These issues are not limited to WebRTC; indeed, existing, dedicated video conferencing solutions struggle with the same problems. For example, Microsoft's Skype relays are often overloaded, significantly impacting the quality of Skype calls that cannot use a direct peer-to-peer connection.
  • a system for enterprise collaboration is associated with an overlay network, such as a content delivery network (CDN) or other cloud-accessible architecture.
  • the overlay network comprises machines capable of ingress, forwarding and broadcasting traffic, together with a mapping infrastructure.
  • the system comprises a front-end application, a back-end application, and set of one or more APIs through which the front-end application interacts with the back-end application.
  • the front-end application is a web or mobile application component that provides one or more collaboration functions.
  • the back-end application comprises a signaling component that maintains state information about each participant in a collaboration, a connectivity component that manages connections routed through the overlay network, and a multiplexing component that manages a multi-peer collaboration session to enable an end user peer to access other peers' media streams through the overlay network rather than directly from another peer.
  • Peers preferably communicate with the platform using WebRTC.
  • a collaboration manager component enables users to configure, manage and control their collaboration sessions.
  • FIG. 1 is a block diagram illustrating a known distributed computer system configured as a content delivery network (CDN);
  • FIG. 2 is a representative CDN edge machine configuration
  • FIG. 3 depicts the various components of a web-based collaboration solution according to this disclosure.
  • FIG. 4 illustrates a multi-party videoconference setup that is enabled by associating the web-based solution of this disclosure with an overlay network.
  • a distributed computer system 100 is configured as a content delivery network (CDN) and is assumed to have a set of machines 102a-n distributed around the Internet.
  • Typically, most of the machines are servers located near the edge of the Internet, i.e., at or adjacent to end user access networks.
  • a network operations command center (NOCC) 104 manages operations of the various machines in the system.
  • Third party sites such as web site 106, offload delivery of content (e.g., HTML, embedded page objects, streaming media, software downloads, and the like) to the distributed computer system 100 and, in particular, to "edge" servers.
  • content providers offload their content delivery by aliasing (e.g., by a DNS CNAME) given content provider domains or sub-domains to domains that are managed by the service provider's authoritative domain name service. End users that desire the content are directed to the distributed computer system to obtain that content more reliably and efficiently.
  • the distributed computer system may also include other infrastructure, such as a distributed data collection system 108 that collects usage and other data from the edge servers, aggregates that data across a region or set of regions, and passes that data to other back-end systems 110, 112, 114 and 116 to facilitate monitoring, logging, alerts, billing, management and other operational and administrative functions.
  • Distributed network agents 118 monitor the network as well as the server loads and provide network, traffic and load data to a DNS query handling mechanism 115, which is authoritative for content domains being managed by the CDN.
  • a distributed data transport mechanism 120 may be used to distribute control information (e.g., metadata to manage content, to facilitate load balancing, and the like) to the edge servers.
  • a given machine 200 in the content delivery network comprises commodity hardware (e.g., an Intel Pentium processor) 202 running an operating system kernel (such as Linux or variant) 204 that supports one or more applications 206a-n.
  • given machines typically run a set of applications, such as an HTTP proxy 207 (sometimes referred to as a "global host" or "ghost" process), a name server 208, a local monitoring process 210, a distributed data collection process 212, and the like.
  • the machine may include one or more media servers, such as a Windows Media Server (WMS) or Flash server, as required by the supported media formats, or it may utilize HTTP-based delivery of chunked content fragments that constitute a stream.
  • a CDN edge server is configured to provide one or more extended content delivery features, preferably on a domain- specific, customer-specific basis, preferably using configuration files that are distributed to the edge servers using a configuration system.
  • a given configuration file preferably is XML-based and includes a set of content handling rules and directives that facilitate one or more advanced content handling features.
  • the configuration file may be delivered to the CDN edge server via the data transport mechanism.
  • U.S. Patent No. 7,111,057 illustrates a useful infrastructure for delivering and managing edge server content control information, and this and other edge server control information can be provisioned by the CDN service provider itself, or (via an extranet or the like) the content provider customer who operates the origin server.
  • the CDN may include a storage subsystem, such as described in U.S. Patent No. 7,472,178, the disclosure of which is incorporated herein by reference.
  • the CDN may operate a server cache hierarchy to provide intermediate caching of customer content; one such cache hierarchy subsystem is described in U.S. Patent No. 7,376,716, the disclosure of which is incorporated herein by reference.
  • the CDN may provide secure content delivery among a client browser, edge server and customer origin server in the manner described in U.S. Publication No. 20040093419. Secure content delivery as described therein enforces SSL-based links between the client and the edge server process, on the one hand, and between the edge server process and an origin server process, on the other hand. This enables an SSL-protected web page and/or components thereof to be delivered via the edge server.
  • a content provider identifies a content provider domain or sub-domain that it desires to have served by the CDN.
  • the CDN service provider associates (e.g., via a canonical name, or CNAME) the content provider domain with an edge network (CDN) hostname, and the CDN provider then provides that edge network hostname to the content provider.
  • When DNS queries for the content provider domain or sub-domain are received at the content provider's name servers, those servers respond by returning the edge network hostname.
  • the edge network hostname points to the CDN, and that edge network hostname is then resolved through the CDN name service. To that end, the CDN name service returns one or more IP addresses.
  • the requesting client browser then makes a content request (e.g., via HTTP or HTTPS) to an edge server associated with the IP address.
  • the request includes a host header that includes the original content provider domain or sub-domain.
  • the edge server Upon receipt of the request with the host header, the edge server checks its configuration file to determine whether the content domain or sub-domain requested is actually being handled by the CDN. If so, the edge server applies its content handling rules and directives for that domain or sub-domain as specified in the configuration. These content handling rules and directives may be located within an XML-based "metadata" configuration file.
  • an overlay network fabric, such as a CDN platform (as described above), is used to provide a unified browser-based enterprise collaboration platform.
  • a solution that facilitates multi-user collaboration is provided, but without requiring full mesh connectivity.
  • although a primary use case as described below is high-quality video conferencing that is scalable to large numbers of users, this is not a limitation, as the cloud-supported multiplexing and relay techniques herein may be used to provide other multiuser collaboration, such as chat, document sharing, and desktop sharing, all in a seamless and scalable manner.
  • the overlay network can also provide additional functions and features to support a collaboration session; as described below, these may include, without limitation, persistent storage and recording of sessions and documents, integration with existing videoconferencing and telecommunications infrastructure (LifeSize rooms, PSTN, etc.), management, and others.
  • FIG. 3 depicts a representative architecture 300 for an enterprise collaboration platform using an overlay network according to an aspect of this disclosure.
  • the front-end application 300 preferably is built on a number of components (described below) that are preferably accessed through the one or more RESTful APIs 302.
  • the platform components 304 include signaling 306, connectivity 308, multiplexing 310, storage 312, and PSTN integration 314.
  • the platform 304 comprises part of an overlay network (or leverages elements thereof), but this is not a requirement, as the solution herein may be provided as a standalone architecture. Further, the notion of a "component" herein may involve multiple machines, whether co-located or distributed, as well as the processes and programs executing thereon.
  • the signaling component 306 preferably is a distributed signaling system that keeps track of users' state (e.g., "Online”, “Away”, “Busy”, etc.), and it is used to transmit the information (i.e., SDP) necessary to initiate an RTCPeerConnection.
  • the signaling component 306 preferably integrates with various user authentication and identity management solutions, although this is not a requirement.
  • the connectivity component 308 manages video, voice and data connections routed through the overlay network platform to handle Network Address Translation (NAT) traversal, as well as to provide enhanced performance and security.
  • the multiplexing component 310 comprises multiplexing machines to allow for scalable, multi-peer sessions. This component makes it so that each peer only needs to upload its media stream once. Other peers are then able to access peers' media streams through overlay network edge machines (rather than by direct connections to peers).
  • the multiplexing component provides for multiplexing in the cloud to significantly reduce the edge bandwidth that would otherwise be required to support WebRTC (which otherwise dictates that a new connection be set up for each pair of peers in a multi-user session).
  • the multiplexing component 310 intelligently adjusts the quality of different users' streams to enhance performance - e.g., only deliver HD streams for people who are currently speaking, deliver lower-quality streams to mobile devices, etc.
  • the storage component 312 allows overlay network customers to (optionally) store data from a collaboration session (e.g., record a meeting, save work on a collaborative document, etc.).
  • the PSTN integration component 314 allows users to join sessions from the PSTN and legacy telecommunications equipment, and it allows users to call out over the PSTN.
  • the platform may include a transcoding component that allows for communications between browsers that do not have the same video codecs implemented, and for one-way broadcasting to browsers that do not support WebRTC.
  • the front-end components 300 interact with the back-end platform 304 using an application programming interface, such as RESTful APIs 302.
  • These APIs 302 provide methods for exchanging SDPs to set up calls, and for determining which chat rooms are available, which media streams are available in each chat room, which user media streams in a given chat room are most "relevant" at any given moment, and so forth.
  • the APIs preferably also provide methods for interacting with other parts of the back-end, e.g., verifying users' identities, accessing storage (saving data, retrieving data, searching), and the like.
  • the APIs also preferably include a JavaScript (JS) API 303, referred to herein as "iris.js," which is a thin layer on top of the base WebRTC API and other HTML5 components.
  • the iris.js API 303 preferably uses the other RESTful APIs to integrate with the overlay network fabric.
  • the iris.js API allows applications to establish and use video, voice, and data channels.
  • the front-end web app is built on the JavaScript API, and third party applications may use this API to build apps that seamlessly integrate with the platform.
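  • The following hypothetical sketch shows how an application built on such a thin JavaScript layer might join a room, publish its media once, and render remote streams; the method names (connect, joinRoom, publish, onStream) are illustrative assumptions, since the actual iris.js surface is not reproduced in this document.

```javascript
// Hypothetical API surface: connect/joinRoom/publish/onStream are assumed names,
// not documented iris.js methods.
import Iris from 'iris.js';

async function startCollaboration(authToken) {
  const iris = await Iris.connect({ apiBase: 'https://collab.example.com/api', token: authToken });
  const room = await iris.joinRoom('design-review');

  // Publish the local camera/mic stream once; the overlay fans it out to other peers.
  const local = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  await room.publish(local);

  // Render each remote stream the platform makes available.
  room.onStream(remote => {
    const video = document.createElement('video');
    video.srcObject = remote.mediaStream;
    video.autoplay = true;
    document.body.appendChild(video);
  });
}
```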
  • the front-end components 300 comprise a web application (or web app) 316, which is a unified communication tool built on iris.js.
  • the web app 316 routes video, voice, and data through the overlay network fabric.
  • the web app also provides (or interfaces to) one or more collaboration functions or technologies, such as video chat, collaborative document editing, desktop sharing, and the like. Because the web app 316 preferably is built on an API (such as iris.js 303, which can support several data channels), it is easily extensible.
  • the web app 316 is skinnable so it can be rebranded and used by enterprise customers.
  • Because iris.js is built on top of the WebRTC APIs, third parties are able to easily adapt existing WebRTC applications to use the solution described herein.
  • the third party applications 318 are depicted here as part of the front-end, but they may be separate and distinct.
  • the RESTful API 302 also makes integration with other collaboration tools possible.
  • the front end may include or have associated therewith legacy on-premises equipment 320, such as LifeSize rooms. Further, the front-end may include or have associated therewith native mobile apps 322, such as devices and tablets that run native iOS and Android apps (as opposed to HTML5 apps in mobile browsers, which are also supported).
  • the API layer 302 enables a service provider or third parties to easily build native mobile applications for the solution.
  • the above-described solution provides a multi-party voice and video chat system.
  • FIG. 4 depicts further implementation details of a multi-party solution implemented within an overlay network 400, such as the Akamai content delivery network (CDN).
  • each peer is associated (e.g., using conventional CDN DNS mapping operations) to respective edge servers 406 and 408.
  • Each peer also establishes a WebRTC connection to a media server 410 that hosts the videoconference (in this example scenario).
  • a signaling back-end is powered by a distributed data store 412.
  • the platform is implemented using a combination of Node.js, PHP, Apache, Cassandra, and Kurento Media Server running on Ubuntu Linux machines. Cassandra data is accessed via the RESTful API, which is powered by Node.js running behind an Apache proxy 414.
  • signaling information is exchanged via HTTPS interactions using the RESTful API.
  • Multiplexing is accomplished using the Kurento Media Server (KMS) running on cloud Ubuntu VMs running in geographically-distributed locations.
  • the Node.js signaling application performs a DNS lookup to the CDN mapping to determine an optimal (in terms of one or more factors such as latency, loss, load, availability, reachability, etc.) media server to which a client should connect.
  • Clients upload their live media stream via WebRTC to the chosen media server.
  • the connection is set up by the signaling layer through the RESTful API.
  • Other clients who wish to subscribe to that media stream connect to the same media server (via the signaling layer) and receive the stream.
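  • The publish and subscribe steps just described might look roughly as follows; the REST endpoints and payload field names are hypothetical stand-ins for the signaling API, not documented paths.

```javascript
// Hypothetical signaling endpoints; the media server answers the WebRTC offer
// that the signaling layer relays on the client's behalf.
async function publishToMediaServer(apiBase, roomId, localStream) {
  const pc = new RTCPeerConnection();
  localStream.getTracks().forEach(t => pc.addTrack(t, localStream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const res = await fetch(`${apiBase}/rooms/${roomId}/streams`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sdpOffer: pc.localDescription.sdp })
  });
  const { sdpAnswer, streamId } = await res.json();
  await pc.setRemoteDescription({ type: 'answer', sdp: sdpAnswer });
  return { pc, streamId };
}

async function subscribeToStream(apiBase, roomId, streamId) {
  const pc = new RTCPeerConnection();
  pc.addTransceiver('audio', { direction: 'recvonly' });
  pc.addTransceiver('video', { direction: 'recvonly' });
  pc.ontrack = e => console.log('received remote track:', e.track.kind);

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const res = await fetch(`${apiBase}/rooms/${roomId}/streams/${streamId}/subscribers`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sdpOffer: pc.localDescription.sdp })
  });
  const { sdpAnswer } = await res.json();
  await pc.setRemoteDescription({ type: 'answer', sdp: sdpAnswer });
  return pc;
}
```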
  • the underlying network environment may allow for direct connectivity between peers. This requirement is met among users, for example, as long as peers are connected to an enterprise VPN.
  • STUN and TURN servers (such as coturn) running on a cloud virtual machine (VM) may be used to facilitate connectivity when direct peer-to-peer connections are not possible.
  • a TURN-compliant version of a relay network for peer-to-peer connectivity may be used.
  • STUN and TURN are not needed because it is assumed that clients can connect directly to multiplexing servers.
  • Still another approach to connectivity may involve a multicast overlay network to distribute streams.
  • the API is powered by a Node.js web application.
  • the Node.js application interacts with Kurento Media Server and Cassandra to orchestrate calls.
  • the "iris.js" JavaScript API is a client-side ECMAScript 6 library that allows web applications to interact with the system via the Iris RESTful API. It contains functionality that allows for easy WebRTC connection management, call orchestration, and automatic, dynamic quality switching, e.g., as the relevancy of different participants in a room changes.
  • the web application is an HTML5 Web App written on top of iris.js. The views are powered by a PHP application.
  • the overlay network provides various support services to the conferencing platform.
  • these services provide one or more of: deployment, versioning, integration with back-end overlay network infrastructure (key management), load balancing, monitoring, single sign-on, auto-scaling, and so forth.
  • underlying media sessions preferably are end-to-end encrypted.
  • media sessions are encrypted between users' clients and the overlay network.
  • Any Internet-accessible client may be used in a conference provided it has a video camera and microphone/speaker.
  • the solution is a video and audio conversation platform that does not require any special equipment other than a client having a browser, a webcam, and a microphone.
  • the service provider (e.g., a CDN) preferably provides (e.g., from a web page) a "lobby" or index/directory from which a user can identify or start a conference.
  • By opening his or her browser to the lobby page, a user can create a room (conference), join the room, see who is already in the room, change his or her relevancy (make one's own video bigger relative to others), mute others, and mute oneself.
  • the user may be provided the ability to communicate with other users that are within the same domain, set download quality, set upload quality, update a room, delete a room, leave feedback (when the user leaves the room), use a non-standard camera, see who created each room, and provide room deep linking (using SSO).
  • a further feature is to enable the user to create a presentation.
  • rooms may be organized by type, and they may be created programmatically.
  • the person who creates the room may have his or her identity recorded, and this information may be captured from a user authentication token.
  • a room may be modified by a person who creates the room.
  • the person may whitelist or blacklist users in a room.
  • a default scenario is to allow no one to join a room except for persons that are explicitly allowed by the room creator via a whitelist.
  • Presentations are another room type. These types of rooms allow 1-to-many communications. In a standard multi-party room, every participant sees and hears everyone else. In contrast, typically a presentation has one presenter, one moderator, and a number of participants. A participant can raise a question, typically first to the moderator, who may then pass the question (if approved) on to the presenter.
  • a room control panel may be provided to the presenter in his or her display.
  • a participant control panel may be provided to the participant in his or her display.
  • Client-side JavaScript code may be subject to tampering; thus, to improve security, preferably all server-side inputs (including URL parameters) are scrubbed and validated. To discourage brute-force attacks, a server-side delay is added to failed authentication attempts. To prevent an attacker from inferring that an extended delay indicates a failed request, even successful authentication attempts are subjected to short delays. Some randomness may be added to both delay types, as in the sketch below.
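  • A minimal Node.js sketch of that delay-and-jitter idea, assuming illustrative base delays; the actual values and where the wait sits in the authentication path are implementation choices.

```javascript
// Illustrative only: both outcomes wait, so response timing alone does not
// reveal whether authentication succeeded.
const crypto = require('crypto');

function randomDelayMs(baseMs, jitterMs) {
  return baseMs + crypto.randomInt(0, jitterMs); // jitter discourages timing analysis
}

async function delayAuthResponse(succeeded) {
  // Failed attempts get a longer base delay to slow brute forcing; successful
  // attempts also wait briefly so the difference is not obvious.
  const delay = succeeded ? randomDelayMs(150, 100) : randomDelayMs(750, 500);
  await new Promise(resolve => setTimeout(resolve, delay));
}
```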
  • user passwords are salted, and user passwords are encrypted before leaving the clients. Content is secured by virtue of the WebRTC transport.
  • Authentication preferably is handled via token-based authentication.
  • Client JavaScript preferably is minified (removing unnecessary white spaces, etc.) and uglified (renames variables and functions).
  • Scaling involves multiple media servers, such as the KMS.
  • Enhancements involve uploading each stream to its own media server, and sending streams from one media server to another.
  • the latter technique enables the provider to insert an intermediary stream layer to facilitate fan-out.
  • collaboration session management functions described above may be accessed by an authenticated and authorized user (e.g., an administrator) via a secure web-based portal that is provided by the overlay network service provider.
  • the collaboration management functions are configured and managed from one or more SSL-secured web pages that comprise a secure collaboration session management portal.
  • the technique described herein assumes that the overlay network provides a network of machines capable of ingress, forwarding, and broadcasting traffic, together with a mapping infrastructure that keeps track of the load, connectivity, location, etc., of each machine and can hand this information back to clients using DNS or HTTPS.
  • An approach of this type is described in U.S. Patent Nos. 6,665,726 and 6,751,673, assigned to Akamai Technologies, Inc., the disclosures of which are incorporated herein.
  • the technique described there provides for an application-layer-over-IP routing solution (or "OIP routing").
  • the multiplexing component implements or facilitates multicast OIP to distribute individuals' video streams in a multiparty videoconference.
  • Multicast OIP may also be used as a generic real-time publish-subscribe overlay network or for broadcast of video in real-time.
  • a publisher (which may be just an individual user) sends data to the multicast network.
  • Clients (e.g., end user peers running mobile devices, laptops, etc.) subscribe to the stream.
  • the overlay network handles intelligently routing and fanning-out the data stream to all subscribers.
  • the forwarding network may use multiple paths, forward error correction, and the like to ensure the reliability and performance of the stream.
  • the intermediate communications also are encrypted.
  • a publisher makes a DNS (or HTTPS) request to a load balancer operated by the overlay network service provider (e.g., Akamai global traffic manager service).
  • the request preferably contains a unique identifier for the publisher's data stream.
  • the load balancer finds an ingress node on the network that has available bandwidth, CPU, and other resources, and that will have good connectivity to the publisher (close by from a network perspective), and hands back an IP address (or URI) corresponding to that node.
  • the overlay network handles distributing the video stream to subscribers.
  • subscribers make a DNS (or HTTPS) request to mapping (overlay network DNS).
  • This request contains the unique identifier of the data stream which the subscriber wants to consume.
  • the mapping system finds an egress node that can deliver the stream to the subscriber, and hands back an IP address (or URI) for that egress node.
  • the system builds a fan-out tree by assigning forwarding nodes between the ingress and egress nodes. The system forwards data through the forwarding nodes to the egress nodes. The subscriber then connects to the IP/URI it got in the first step, and consumes the data stream.
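  • A sketch of the mapping lookups described above, assuming hypothetical HTTPS mapping endpoints; in practice the same lookup could equally be made over DNS, and the hostnames shown are illustrative.

```javascript
// The stream's unique identifier is carried in the request; the mapping system
// hands back the node (IP address or URI) the caller should connect to.
async function findIngressNode(streamId) {
  const res = await fetch(`https://mapping.overlay.example.net/ingress?stream=${encodeURIComponent(streamId)}`);
  const { node } = await res.json();
  return node; // e.g. "https://ingress-17.overlay.example.net" for the publisher
}

async function findEgressNode(streamId) {
  const res = await fetch(`https://mapping.overlay.example.net/egress?stream=${encodeURIComponent(streamId)}`);
  const { node } = await res.json();
  return node; // the subscriber connects here and consumes the fanned-out stream
}
```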
  • a typical use case is WebRTC.
  • the ingress and egress nodes handle WebRTC PeerConnections. Subscribers to a given stream have individual WebRTC PeerConnections to individual egress nodes; the overlay system takes care of distributing the stream from the ingress nodes to the individual egress nodes.
  • the HTML5 Web Audio API is used to have a client browser render different participants' audio at particular positions in 3D space.
  • the 3D position of a speaker might depend on one or more factors, such as a position of that person's video window in the screen, whether that person is speaking, the location of the person, when the person joined the call, or simply by hashing the speaker's ID to a position on a ring.
  • Different speakers' relative positions may remain stable to communicate contextual information. Users move “forward” and “backward,” or “up” and “down” in space, as their relevancy (as described below) changes.
  • the implementation is in client-side JavaScript running in a web browser, and preferably it functions as follows:
  • Given a set of audio streams (via WebRTC or HTTPS), the client computes a 3D position for each audio stream by carrying out a set of operations.
  • a "target area" region of space is identified. Typically, this region is in front of the listener (e.g., within a 45° cone in front of the viewer, and between, say, 30 cm and 3 m away from the listener.
  • each audio stream is assigned a 3D position within that target region as follows.
  • “left,” “right,” “up,” “down,” “forward” and “backward” are from the listener's perspective. If applicable (i.e., if video corresponding to the audio stream is displayed in the browser window), the 3D position corresponds to the point on the screen where the video is displayed; otherwise, the stream is given a position based on contextual information related to the content of the stream.
  • a hash function takes a unique identifier for each stream (e.g., each participant's name), and maps that to a left/right position within the target area.
  • each individual speaker will maintain a stable position relative to other speakers, even if other speakers join or leave the conference.
  • a hash function maps each stream's unique identifier to a number in the range [0,1], and then each stream is assigned a position within the left/right target region such that the positions are evenly (or otherwise deterministically) spaced within the target region.
  • the up/down and forward/backward position are then based on "relevancy" and other contextual information for each stream. For example, if one person is giving a presentation, their position may be "up” relative to other participants (who may be asking questions). Or, someone who just joins the room may be "back” compared to someone who has recently spoken.
  • the browser renders each audio stream at the 3D position associated with it in step (2).
  • the audio is played over the browser's stereo (or surround sound) speakers.
  • the effect to the end user is that it will sound like each stream originates at its associated 3D position.
  • steps (2) and (3) are performed continuously, so, for example, as an individual stream's associated 3D position changes, the end-user experiences this as that stream "moving" in the stereophonic soundscape.
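  • A sketch of this rendering using the standard HTML5 Web Audio API: each stream is hashed to a stable left/right position, nudged up/forward based on relevancy, and played through a PannerNode. The target-area geometry, hash function, and relevancy fields are illustrative choices.

```javascript
const audioCtx = new AudioContext();

// Map a stream's unique identifier (e.g., a participant's name) to a stable value in [0, 1].
function hashToUnit(id) {
  let h = 0;
  for (const ch of id) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return (h % 1000) / 1000;
}

function renderStreamAt3DPosition(mediaStream, id, relevancy) {
  const source = audioCtx.createMediaStreamSource(mediaStream);
  const panner = new PannerNode(audioCtx, { panningModel: 'HRTF', distanceModel: 'inverse' });

  // Left/right position within the target area, stable per participant.
  panner.positionX.value = (hashToUnit(id) - 0.5) * 2;               // roughly -1 m .. +1 m
  // Up/down and forward/backward derived from contextual "relevancy".
  panner.positionY.value = relevancy.isPresenter ? 0.3 : 0;
  panner.positionZ.value = -(0.3 + 2.7 * (1 - relevancy.summary));   // 0.3 m .. 3 m in front of the listener
  source.connect(panner).connect(audioCtx.destination);

  return panner; // reposition later (steps (2)-(3) run continuously) as relevancy changes
}
```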
  • Dynamic speaker selection and live stream delivery for multi-party video conferencing involve gathering an audio/video stream from each individual end user's client, and distributing each user's stream to all other end users. The delivery of these streams is usually facilitated by either a full-mesh topology or a centralized multiplexing server. That approach does not scale to large numbers of users.
  • this disclosure describes a method comprising server- and client-based logic for intelligently and dynamically determining which streams are most important - i.e., which streams correspond to users who are currently speaking, have spoken recently, or are about to speak - and delivering those streams at higher quality. Remaining users' streams preferably are delivered at lower video quality (or audio-only). This approach saves bandwidth and enables scalable, real-time multi-party conferencing.
  • the platform maintains a set of variables, which preferably are continuously updated for each participant, and which are then used to determine an importance or "relevancy" of each user's audiovisual (a/v) stream.
  • these variables are derived from multiple sources, including audio filters, video filters, user input, and other measures of individual and group behavior.
  • Example variables/filters include, without limitation: speaking (audio), crowd noise (audio), face detection (video), group detection (video), microphone mute (user input), raise hand (user input), and so forth.
  • one or more of such variables are used to construct a probability function and, in particular, a probability density function (PDF), across several possible stream "attributes," such as whether a stream represents an individual speaking, a small group, an individual who is participating in a conversation, audience members who are reacting to a main presenter, and so forth.
  • heuristics are built by leveraging statistical modeling and/or machine learning (ML) techniques (e.g., using a training set of users in a multi-party context) to construct PDFs for each attribute, e.g., from (raw or smoothed) measurements of the variables.
  • these PDFs are then combined using relative weighting techniques to drive both client and server behavior.
  • the nature of the relative weighting techniques may vary. There may be a predefined set of techniques, or a set of best practices, a default set, or some combination.
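  • An illustrative sketch of combining per-attribute probabilities into a single relevancy summary; the attribute names, weights, and smoothing constant are assumptions, not the disclosed heuristics.

```javascript
// Each attribute's probability (in [0, 1]) is produced upstream by the audio/video
// filters and statistical/ML models; the weights here are illustrative.
const ATTRIBUTE_WEIGHTS = {
  speaking: 0.45,        // stream is an individual currently speaking
  recentSpeaker: 0.25,   // spoke recently / participating in a conversation
  handRaised: 0.20,      // explicit user input
  groupReaction: 0.10    // audience or crowd-noise attribute
};

function summaryRelevancy(pdfs) {
  let score = 0;
  for (const [attr, weight] of Object.entries(ATTRIBUTE_WEIGHTS)) {
    score += weight * (pdfs[attr] ?? 0);
  }
  return score; // higher means more bandwidth and more screen prominence
}

// Simple exponential smoothing (a decay function) dampens oscillation when two people alternate speaking.
function smoothed(previousScore, currentScore, alpha = 0.2) {
  return previousScore + alpha * (currentScore - previousScore);
}
```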
  • the PDFs are used to drive the end-user experience.
  • the video images of different speakers on the screen and the loudness of different speakers' audio streams may be given different prominence based on the relevance of each speaker, and on the capabilities of the client.
  • a client with a high-bandwidth connectivity displays the two most "relevant" speakers in High Definition (HD), N other speakers in small video windows, and only audio for everyone else, whereas a mobile client with low connectivity displays only one low-quality video, and audio for everyone else.
  • the PDFs preferably are used to guide routing and multiplexing of individual user streams.
  • Streams that are more relevant may be afforded increased bandwidth and resource allocation.
  • the PDFs may also be used to guide assignments of streams to different media servers.
  • one or more additional transforms, such as decay functions, are used to dampen oscillations in relevancy changes. For example, if two people are speaking, the decay function prevents constant switching between each person's video.
  • the clients are browsers and the servers are cloud machines.
  • Logic is implemented in client-side JavaScript, and in the server-side in Node.js JavaScript.
  • client-server API calls are via RESTful HTTPS requests
  • media flows are via the HTML5 WebRTC API.
  • a client makes a RESTful API call via HTTPS to the server indicating that it wants to join a conference.
  • the server adds the client to the conference.
  • the client makes another API request indicating what type of media it has to offer (audio, video, etc.) and what capabilities it has (e.g., total bandwidth available, type of device, number of video streams it can handle, number of HD video streams it can handle, how many lower quality streams, etc.).
  • the information on client capabilities typically comes from several sources: information that the client explicitly sends in the API request; information (e.g., on the client's hardware capabilities and on the client's connectivity performance) that is collected by client-side JavaScript code (using HTML5 APIs such as the Navigation Timing API, the Web Performance API, the Network Information API, and WebRTC); and information that is inferred from server-side code, such as the client's User-Agent, the network performance of the ISP/network that the client is in, and the actual measured bandwidth, throughput, latency, etc., to the client. A sketch of client-side capability collection follows.
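  • The following sketch assembles such a capability report from standard browser APIs (Network Information, Navigation Timing, hardwareConcurrency); the report's field names and the reporting endpoint are illustrative assumptions.

```javascript
function collectClientCapabilities() {
  const conn = navigator.connection || {};                       // Network Information API, where supported
  const [nav] = performance.getEntriesByType('navigation');      // Navigation Timing Level 2

  return {
    userAgent: navigator.userAgent,
    cpuCores: navigator.hardwareConcurrency || 1,
    downlinkMbps: conn.downlink,                                 // undefined if the API is unavailable
    effectiveType: conn.effectiveType,                           // e.g. '4g'
    pageLoadMs: nav ? nav.loadEventEnd - nav.startTime : undefined,
    maxHdStreams: 1,                                             // example policy values supplied by the app
    maxSdStreams: 4
  };
}

async function reportCapabilities(apiBase, roomId) {
  await fetch(`${apiBase}/rooms/${roomId}/capabilities`, {       // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(collectClientCapabilities())
  });
}
```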
  • the server allocates resources for the media stream that the client will upload. Typically, this step involves making a DNS request to a load balancer to find a free media server, communicating the offer/answer SDP information needed to create a WebRTC connection between the client and the media server, and recording information about the connection in the database.
  • the server adds the client to the list of participants in the room, and assigns an initial relevancy.
  • relevancy is preferably a multi-dimensional data structure that comprises one or more variables derived from multiple sources, including audio filters, video filters, user input, and other measures of individual and group behavior.
  • Example variables/filters include: speaking (audio), crowd noise (audio), face detection (video), group detection (video), mic mute (user input), raise hand (user input), and the like.
  • the server compiles a list of all participants in the room along with each client's full multidimensional relevancy, as well as a summary score.
  • clients can determine an ordered ranking of which participant in the conference is "most relevant” based on the summary score.
  • the "summary" can comprise a single variable (e.g., time since hand last-raised), a weighted average of variables, the PDFs as described above, or some combination.
  • clients periodically poll the server for changes to the participant list and the relevancy of each participant. In lieu of polling, clients may also be notified via push notifications or via a publisher/subscriber system.
  • clients also periodically send information back to the server that is used to build their relevancy object and summary scores. The server combines this information with information that it collects (for example, information from server-side a/v filters, information on how long a client has been in a room, etc.) and continuously updates the relevancy object and summary score for each participant.
  • clients change which WebRTC media streams they subscribe to, preferably via the following mechanism: (i) first, given the client capabilities (step (3)), the client knows what kinds of streams it can handle (e.g., N high-quality streams, M low-quality streams, S audio streams, and so forth); (ii) the client JavaScript (iris.js) sorts the other participants in order of "summary" relevancy score; (iii) the client then associates the most relevant participants with the highest quality streams, preferably in order; and (iv) the client updates the associations as needed: if a participant is now associated with a different quality stream than the one to which the client is currently subscribed, the client unsubscribes from the old quality stream and subscribes to the newly-associated quality. In other words, the client JavaScript seamlessly swaps the different qualities so that an interruption is not visible to the end user. Subscription preferably is via API calls, as described in step (4).
  • if a client can handle one (1) HD stream, four (4) low-quality streams, and one hundred (100) audio streams, then the participant with the highest summary relevancy gets the HD stream, the next four highest summary scores get low-quality streams, and the remaining participants are associated with audio-only streams; a sketch of this assignment step follows.
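  • A client-side sketch of that assignment step: participants are sorted by summary relevancy and mapped onto the quality tiers the client reported it can handle; the tier names and capability fields are illustrative.

```javascript
// caps would come from the capability report, e.g. { maxHdStreams: 1, maxSdStreams: 4 }.
function assignStreamTiers(participants, caps) {
  const ranked = [...participants].sort((a, b) => b.summaryRelevancy - a.summaryRelevancy);

  return ranked.map((p, i) => {
    let tier;
    if (i < caps.maxHdStreams) tier = 'hd';
    else if (i < caps.maxHdStreams + caps.maxSdStreams) tier = 'sd';
    else tier = 'audio';
    return { participantId: p.id, tier };
  });
}
// With { maxHdStreams: 1, maxSdStreams: 4 }, the most relevant participant gets HD,
// the next four get low-quality video, and everyone else is audio-only, as above.
```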
  • a third client may be subscribed to an HD stream for Alice and an SD stream for Bob. If Bob starts speaking, then the client will drop the HD connection to Alice and switch to an SD connection. The client will then subscribe to an HD connection for Bob and swap that in for the old SD connection.
  • the client makes a series of API calls to the server to allocate media server resources and to broker a new connection in a way that is analogous to step (4) above.
  • An alternative method in step (9) involves having each client subscribe to only one WebRTC media stream per other participant.
  • the server performs the above steps of sorting participants based on summary relevancy score, and the server takes care of adjusting the quality of each stream when summary relevancy score changes.
  • the client does not have to subscribe/unsubscribe to a different quality WebRTC stream at any time; clients only need to subscribe/unsubscribe to WebRTC streams when other participants join/leave the conference.
  • the server keeps track of a maximum total bandwidth that a given client can handle, and it makes sure that the aggregate quality of all WebRTC streams delivered to that client is below the maximum total bandwidth threshold. In this case, the client only needs to consume the relevancy to handle the following step.
  • JavaScript on the client preferably uses the full multi-dimensional relevancy object to determine how to display the other participants' video in the browser window.
  • For example, current speakers are given a position of prominence, people with hands raised are highlighted, etc.
  • each client preferably always consumes an amount of bandwidth that is below a fixed maximum but, within this constraint, the bandwidth is dynamically apportioned (by the above-described technique) such that the most-relevant participants are delivered at the highest quality, resulting in an enhanced experience for the end-user.
  • relevancy is a property of each media stream; preferably, it is a multi-dimensional object that contains information such as whether the user is speaking, whether the user has muted his or her microphone, how recently the user spoke, if the user is raising his or her hand, and so forth, and that is used to determine which streams are most important.
  • server-side switching is used so that only the most relevant streams are delivered, say, at high quality.
  • switching is done by the client; in particular, the server lets the client know the relevancy of all participants' streams, and the client (based on its capabilities), dynamically subscribes to the high-quality version of the most relevant streams and low-quality versions of other streams.
  • a conference such as a videoconference (or even just an audio conference)
  • the notion here is to collect audio (and perhaps video) from laptops (or other client devices) of different users in the same room, and to use processing in the cloud to create an ad-hoc, high quality microphone array. This is accomplished, preferably as follows.
  • the cloud platform first recognizes when multiple laptops (or other client devices) are in the same room using information such as IP address, the HTML5 Location API, and even by correlating audio feeds (i.e., if two users' microphones pick up the same sound, one can say that they are in the same room). Then, by comparing the audio signals from the different microphones in the same room (perhaps with the aid of having laptops/other devices actually make ultrasonic sounds through their speakers), the cloud platform reconstructs their relative positions. Finally, the cloud platform combines the audio feeds of the different clients to create a high-fidelity room audio feed. The end result is similar to that of using a microphone array and conferencing equipment, but without a "smart" client.
  • the "clients" are browsers that are participating in a real-time multiparty WebRTC call.
  • Each browser is sending its real-time audio feed to a cloud platform such as described above via WebRTC.
  • the following operations are then carried out, preferably in the cloud:
  • the cloud platform identifies a set of clients in a multiparty call who are all in the same physical room.
  • this determination is made using one or more of the following: the clients' IP addresses; the clients' physical locations (collected with client-side JavaScript using the HTML5 Geolocation API (if available) and reported back to the platform via RESTful HTTPS requests); explicit client input (e.g., client-side JavaScript exposes a UI element to the end user, the end user can explicitly specify what room they are in, and the client reports this back to the platform via RESTful HTTPS requests); and passive correlation of the content of each individual audio stream. For example, if one person is speaking in a room, all microphones in that room will pick up that person's speech.
  • the cloud platform uses both expert-system and machine learning techniques to determine that all of the microphones are picking up the same audio source. This may be accomplished using algorithms that determine that the streams are highly correlated, as in the sketch below.
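  • An illustrative sketch of that passive-correlation test; the correlation threshold is an assumption, and a real implementation would also search over small time offsets to absorb clock skew between clients.

```javascript
// Two microphones in the same room should produce highly correlated audio samples.
function normalizedCrossCorrelation(a, b) {
  const n = Math.min(a.length, b.length);
  let dot = 0, energyA = 0, energyB = 0;
  for (let i = 0; i < n; i++) {
    dot += a[i] * b[i];
    energyA += a[i] * a[i];
    energyB += b[i] * b[i];
  }
  return energyA && energyB ? dot / Math.sqrt(energyA * energyB) : 0;
}

function likelySameRoom(pcmA, pcmB, threshold = 0.6) {
  return normalizedCrossCorrelation(pcmA, pcmB) > threshold;
}
```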
  • the training set for a machine learning algorithm comes from audio streams that have been verified to be in the same room by explicit client input and by active correlation.
  • client-side JavaScript makes the speakers on each individual client machine emit a sound at an ultrasonic frequency inaudible to humans. Each client will emit a uniquely identifiable sound. If this sound is present in any other client audio stream, that other client is in the same room.
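  • A browser-side sketch of that active probe using the Web Audio API; the frequency, gain, and duration shown are illustrative, and per-client uniqueness would in practice come from assigning each client its own frequency or modulation pattern.

```javascript
// Plays a short, quiet, near-ultrasonic tone through the client's speakers.
// Other clients' uploaded audio is then scanned server-side for energy at this
// frequency; if the tone is present, those clients are in the same room.
function emitBeacon(audioCtx, frequencyHz = 19000, durationSec = 0.5) {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = frequencyHz;   // near-ultrasonic, effectively inaudible to most listeners
  gain.gain.value = 0.05;              // keep the probe quiet
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + durationSec);
}
```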
  • the cloud platform compares "landmark" events in each audio stream - these are events that correspond to the same sound.
  • a landmark event may be a person saying a particular word, or the uniquely identifiable sounds from the active correlation method.
  • the platform uses triangulation to infer the relative position and orientation of each microphone.
  • the platform combines the individual audio streams from each microphone and uses the relative position information to render a single (stereo or multichannel surround) audio stream which represents a high-quality 3D recording of the audio in the room.
  • the platform mixes audio from the different microphones to enhance the audibility of the person currently speaking.
  • the platform delivers to the other clients the high-quality stereophonic/surround audio stream computed from step (503).
  • Other clients play back this high quality stream to get a high-fidelity 3D representation of the audio in the target room.
  • Each above-described process preferably is implemented in computer software as a set of program instructions executable in one or more processors, as a special-purpose machine.
  • Representative machines on which the subject matter herein is provided may be Intel Pentium-based computers running a Linux or Linux-variant operating system and one or more applications to carry out the described functionality.
  • One or more of the processes described above are implemented as computer programs, namely, as a set of computer instructions, for performing the functionality described.
  • This apparatus may be a particular machine that is specially constructed for the required purposes, or it may comprise a computer otherwise selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including an optical disk, a CD-ROM, and a magneto-optical disk, a read-only memory (ROM), a random access memory (RAM), a magnetic or optical card, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
  • a given implementation of the present invention is software written in a given programming language that runs in conjunction with a DNS-compliant name server (e.g., BIND) on a standard Intel hardware platform running an operating system such as Linux.
  • the functionality may be built into the name server code, or it may be executed as an adjunct to that code.
  • a machine implementing the techniques herein comprises a processor, computer memory holding instructions that are executed by the processor to perform the above-described methods.
  • the techniques herein generally provide for the above-described improvements to a technology or technical field, as well as the specific technological improvements to various fields including collaboration technologies including videoconferencing, chat, document sharing and the like, distributed networking, Internet-based overlays, WAN-based networking, efficient utilization of Internet links, and the like, all as described above.

Abstract

According to the invention, a system for enterprise collaboration is associated with an overlay network, such as a content delivery network (CDN). The overlay network comprises machines capable of ingressing, forwarding and broadcasting traffic, together with a mapping infrastructure. The system comprises a front-end application, a back-end application, and a set of one or more APIs through which the front-end application interacts with the back-end application. The front-end application is a web or mobile application component that provides one or more collaboration functions. The back-end application comprises a signaling component that maintains state information about each participant in a collaboration, a connectivity component that manages connections routed through the overlay network, and a multiplexing component that manages a multi-peer collaboration session to enable an end user peer to access other peers' media streams through the overlay network rather than directly from another peer. Peers preferably communicate with the platform using WebRTC. A collaboration manager component enables users to configure, manage and control their collaboration sessions.
PCT/US2017/068958 2016-12-30 2017-12-29 Unified, browser-based enterprise collaboration platform WO2018126134A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP17888103.3A EP3563248B1 (fr) 2016-12-30 2017-12-29 Unified, browser-based enterprise collaboration platform

Applications Claiming Priority (20)

Application Number Priority Date Filing Date Title
US201662440473P 2016-12-30 2016-12-30
US201662440424P 2016-12-30 2016-12-30
US201662440509P 2016-12-30 2016-12-30
US201662440626P 2016-12-30 2016-12-30
US201662440437P 2016-12-30 2016-12-30
US62/440,626 2016-12-30
US62/440,509 2016-12-30
US62/440,424 2016-12-30
US62/440,437 2016-12-30
US62/440,473 2016-12-30
US15/854,393 2017-12-26
US15/854,393 US10250849B2 (en) 2016-12-30 2017-12-26 Dynamic speaker selection and live stream delivery for multi-party conferencing
US15/857,020 2017-12-28
US15/856,652 US10542057B2 (en) 2016-12-30 2017-12-28 Multicast overlay network for delivery of real-time video
US15/857,020 US10291783B2 (en) 2016-12-30 2017-12-28 Collecting and correlating microphone data from multiple co-located clients, and constructing 3D sound profile of a room
US15/856,652 2017-12-28
US15/857,694 2017-12-29
US15/857,694 US10834514B2 (en) 2016-12-30 2017-12-29 Representation of contextual information by projecting different participants' audio from different positions in a 3D soundscape
US15/857,781 2017-12-29
US15/857,781 US10812598B2 (en) 2016-12-30 2017-12-29 Unified, browser-based enterprise collaboration platform

Publications (1)

Publication Number Publication Date
WO2018126134A1 true WO2018126134A1 (fr) 2018-07-05

Family

ID=62710903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/068958 WO2018126134A1 (fr) 2016-12-30 2017-12-29 Unified, browser-based enterprise collaboration platform

Country Status (1)

Country Link
WO (1) WO2018126134A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111030921A (zh) * 2019-12-17 2020-04-17 杭州涂鸦信息技术有限公司 Multi-window communication method and system based on web page instant messaging
WO2021069065A1 (fr) * 2019-10-08 2021-04-15 Unify Patente Gmbh & Co. Kg Computer-implemented method for carrying out a real-time collaboration session, and web collaboration system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6665726B1 (en) 2000-01-06 2003-12-16 Akamai Technologies, Inc. Method and system for fault tolerant media streaming over the internet
US7111057B1 (en) 2000-10-31 2006-09-19 Akamai Technologies, Inc. Method and system for purging content from a content delivery network
US6751673B2 (en) 2001-01-03 2004-06-15 Akamai Technologies, Inc. Streaming media subscription mechanism for a content delivery network
US7472178B2 (en) 2001-04-02 2008-12-30 Akamai Technologies, Inc. Scalable, high performance and highly available distributed storage system for Internet content
US7376716B2 (en) 2002-04-09 2008-05-20 Akamai Technologies, Inc. Method and system for tiered distribution in a content delivery network
EP1381237A2 (fr) 2002-07-10 2004-01-14 Seiko Epson Corporation Multi-participant conference system with controllable content and delivery via back-channel video interface
US20040093419A1 (en) 2002-10-23 2004-05-13 Weihl William E. Method and system for secure content delivery
US20090089379A1 (en) * 2007-09-27 2009-04-02 Adobe Systems Incorporated Application and data agnostic collaboration services
US20160164968A1 (en) * 2008-11-12 2016-06-09 Adobe Systems Incorporated Adaptive connectivity in network-based collaboration background information
US20120284638A1 (en) * 2011-05-06 2012-11-08 Kibits Corp. System and method for social interaction, sharing and collaboration
US20140324942A1 (en) * 2013-04-24 2014-10-30 Linkedin Corporation Method and system to update a front end client
WO2015080734A1 (fr) * 2013-11-27 2015-06-04 Citrix Systems, Inc. Collaborative online document editing
US20160171090A1 (en) 2014-12-11 2016-06-16 University Of Connecticut Systems and Methods for Collaborative Project Analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JANG-JACCARD JULIAN ET AL., COMPUTING, SPRINGER, VIENNA, AT, vol. 98, no. 1, 25 September 2014 (2014-09-25), pages 169 - 193
See also references of EP3563248A4

Similar Documents

Publication Publication Date Title
US10623700B2 (en) Dynamic speaker selection and live stream delivery for multi-party conferencing
US11575753B2 (en) Unified, browser-based enterprise collaboration platform
US10587756B2 (en) Collecting and correlating microphone data from multiple co-located clients, and constructing 3D sound profile of a room
US10869001B2 (en) Provision of video conferencing services using a micro pop to extend media processing into enterprise networks
US20210084425A1 (en) Representation of contextual information by projecting different participants' audio from different positions in a 3D soundscape
US10447795B2 (en) System and method for collaborative telepresence amongst non-homogeneous endpoints
US9794201B2 (en) Messaging based signaling for communications sessions
US9402054B2 (en) Provision of video conference services
US8739214B2 (en) Methods, computer program products, and virtual servers for a virtual collaborative environment
US9565396B2 (en) Methods, systems and program products for initiating a process on data network
US20060244818A1 (en) Web-based conferencing system
US11716368B2 (en) Multicast overlay network for delivery of real-time video
US9774824B1 (en) System, method, and logic for managing virtual conferences involving multiple endpoints
US11323660B2 (en) Provision of video conferencing services using a micro pop to extend media processing into enterprise networks
JP2007329917A (ja) Video conference system, method for enabling a plurality of video conference attendees to see and hear each other, and graphical user interface for a video conference system
US10701116B2 (en) Method, computer-readable storage device and apparatus for establishing persistent messaging sessions
CN104348700B (zh) 用于发布微博的方法和系统
WO2018126134A1 (fr) Unified, browser-based enterprise collaboration platform
EP3563248B1 (fr) Unified, browser-based enterprise collaboration platform
Kasetwar et al. A WebRTC based video conferencing system with screen sharing
Sakomaa Analysis of a web conferencing system: development and customisation
Neervan et al. IMPLEMENTATION OF A WEBRTC VIDEO CONFERENCING AND STREAMING APPLICATION
US20110093590A1 (en) Event Management System
Patne et al. Security Implementation in Media Streaming Applications using Open Network Adapter

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17888103

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017888103

Country of ref document: EP

Effective date: 20190730