DE102017110431A1 - Method for transmitting information - Google Patents

Method for transmitting information

Info

Publication number
DE102017110431A1
Authority
DE
Germany
Prior art keywords
data
audio
mcu
programming interface
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
DE102017110431.3A
Other languages
German (de)
Inventor
Andreas Kröpfl
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eyeson GmbH
Original Assignee
Eyeson GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyeson GmbH
Priority to DE102017110431.3A
Publication of DE102017110431A1
Application status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H04N 7/152 Multipoint control units therefor
    • H04N 7/155 Conference systems involving storage of or access to video conference sessions

Abstract

The invention relates to a method for transmitting information by means of streaming technology via communication channels, wherein at least one endpoint (2, 3, 8, 9) is provided which has a transmitting and/or receiving device (22, 32) for audio and/or video streams (AV1, AV2, GS), and wherein at least one audio and/or video stream (AV1, AV2) is sent from an endpoint (2, 9) with a transmitting device (22) via a communication channel to a preferably cloud-based MCU (15) (Multipoint Control Unit), which creates from this and possibly other audio and/or video streams (AV1, AV2) transmitted to the MCU (15) a combined audio and/or video stream (GS) and sends it to at least one endpoint (2, 3). According to the invention, data (D1, D2, D3) are sent from at least one data source (5, 6, 7) to a preferably standardized programming interface (10); the programming interface (10) processes these data (D1, D2, D3) and transmits at least one data stream (DS) resulting from this processing via at least one communication channel to the MCU (15); and the MCU (15) combines said data (D1, D2, D3) of the at least one data stream (DS) and said audio and/or video streams (AV1, AV2) into a total stream (GS) and transmits this total stream (GS) via communication channels to one or more endpoints (2, 3, 8).

Description

  • The invention relates to a method for transmitting information by means of streaming technology via communication channels according to the preamble of claim 1.
  • Such methods are known and are used in particular in video conferencing, in which the participants transmit their audio and/or video streams to an MCU, in particular via the Internet. Such audio and/or video streams are, in particular, the audio and video recordings captured directly at the active participant, that is to say generally the spoken contributions and the face of the respective active participant, which are transmitted to the MCU. An MCU is a central hub, in particular for video conferencing, which can be implemented in hardware or software. An MCU receives the audio and/or video streams of several conference participants, processes them according to its configuration and sends them back to all participants. In a processing mode called "Continuous Presence", all audio and/or video streams are aggregated by the MCU and sent back to all participants, so that the participants see each other at the same time. In addition, an MCU may also aggregate the audio and/or video streams of some or all of the participants into a new stream. For example, the current speaker may fill most of the image while the other participants are shown as thumbnails at the edge of the screen. An MCU always works with a gatekeeper, which is responsible for managing the incoming connections that can be made from the IP or telephone network.
  • The endpoints associated with the active participants are designed, for example, as smartphones (the most widely used operating systems currently being iOS and Android), as computers in the form of a PC or laptop, or as dedicated all-in-one systems with installed video content management systems. For connecting computers, a variety of video protocols are known, for example the open standard WebRTC (Web Real-Time Communication), which defines a collection of communication protocols and programming interfaces that enable real-time communication over computer-to-computer connections, which in turn enables applications such as video conferencing, file transfer, chat and desktop sharing. WebRTC is a free, open project that provides real-time communication (RTC) capabilities to browsers and mobile applications. In popular browsers these functions are already included; software stored and/or installed specifically at the endpoint is not necessary. On the other hand, video conferencing systems are known which are accessed via an Internet site. In many cases the participant then has to download a so-called add-in software in order to control the local camera and microphone and to be able to connect to the conference.
  • The endpoints of purely passive participants, which only receive but do not transmit audio and/or video streams, can also work with traditional browsers; the WebRTC standard is not required for them. The endpoints of partially active participants are those that switch from "passive" to "active" as needed and then count as active participants. Accordingly, these endpoints also have transmit and receive functions.
  • The "Continuous Presence" mode of operation mentioned above is used in many known videoconferencing systems, although it does not meet all the needs of subscribers for incorporating further sources. Although it is known for example from the so-called. Screencasting that an active participant can integrate data from its end point in his video stream before it transmits this video stream to the MCU, but also this method offers only limited possibilities.
  • It is an object of the present invention to provide a method for transmitting information according to the preamble of claim 1 which considerably extends the application possibilities, in particular in video conferences, but not limited thereto. It is a further object of the invention to provide corresponding devices for carrying out such a method.
  • This object is achieved by the features of the independent claims.
  • In the method according to the invention, data are sent, preferably via the Internet, from at least one data source to a preferably standardized programming interface (API, application programming interface). The programming interface processes these data and compiles from them a data stream, which it sends to the MCU via at least one communication channel. The MCU combines said data stream and said audio and/or video streams into a total stream and sends this total stream via communication channels to one or more endpoints, in particular for audio and/or video output on output devices at those endpoints. The total stream consists of audio and video information; in other words, the said data sources are present in the total stream, like the audio and video streams, in the form of audio and video information.
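The topology described above (data → programming interface → data stream → MCU → total stream) can be modelled in a few lines. The following Python sketch is purely illustrative; class names such as `ProgrammingInterface` and `MCU` and all field names are assumptions and do not come from the patent or any real product:

```python
from dataclasses import dataclass, field

@dataclass
class ProgrammingInterface:
    """Receives raw data items and compiles them into a data stream."""
    buffer: list = field(default_factory=list)

    def receive(self, item: dict) -> None:
        self.buffer.append(item)

    def compile_data_stream(self) -> dict:
        # The data stream bundles all processed items for transport to the MCU.
        return {"type": "data-stream", "items": list(self.buffer)}

@dataclass
class MCU:
    """Combines AV streams and the data stream into one total stream (GS)."""
    def combine(self, av_streams: list, data_stream: dict) -> dict:
        return {
            "type": "total-stream",
            "av": av_streams,              # streams sent directly to the MCU
            "data": data_stream["items"],  # data routed via the interface
        }

api = ProgrammingInterface()
api.receive({"source": "D1", "payload": "logo.png"})
mcu = MCU()
total = mcu.combine(["AV1", "AV2"], api.compile_data_stream())
print(total["type"])  # → total-stream
```

The key design point the sketch reflects is that data never reach the MCU directly: they always pass through the programming interface, which turns them into a data stream first.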
  • According to the invention, data are thus sent via the programming interface to the MCU and are combined there with the audio and/or video streams sent directly to the MCU into a total stream, which is then itself an audio and/or video stream. This total stream is then transmitted from the MCU to at least one endpoint.
  • According to a particularly preferred embodiment of the invention, at least one audio and/or video stream sent to the MCU originates from an active participant in a video conference. However, such a stream can also be sent, for example, outside of a video conference from an IP camera designed as an endpoint to the MCU, where it is combined with data into a total stream and sent to one or more endpoints.
  • "Endpoints" in the sense of the present invention are in principle all possible sources or sources and / or receivers of audio and / or video streams. Endpoints may also be associated, in particular, with active participants of a videoconferencing, who according to one embodiment only transmit audio and / or video streams to the MCU (for example live recordings of voice and face and / or recorded movies and / or stored audio streams). In addition, endpoints of active participants are possible, which can additionally cause to send data from a data source to the programming interface, the data being for example pictures or tables. Also, endpoints of passive participants in a videoconference receiving the overall stream may preferentially cause data to be transferred from a data source to the programming interface. An electronic recording device for recording or storing the total current is also to be regarded as an end point, which in this regard is considered a passive subscriber, since the recording device only receives the total current, but does not itself transmit audio and / or video streams. Furthermore, it is possible that audio and video sources such as IP cameras or a YouTube channel are designed as endpoints, in which case a transmission of their audio and video streams via control of the programming interface by, for example, an active or passive participants a video conference or by the occurrence of a particular event, the programming interface of the MCU notifies the associated access data (such as IP address, password and protocol) which then causes the IP camera to send its audio and video stream to the MCU. Such endpoints naturally receive no audio and / or video streams from the MCU.
  • A programming interface is a part of a program that is made available by a software system to other programs for connection to the system. The programming interface preferably acquires access to the MCU via a programming framework known per se. The use of a standardized programming interface ensures source code compatibility. The programming interface is preferably implemented on a different server than the MCU.
  • The MCU is preferably cloud-based and in this case preferably realized in the form of a server. However, the MCU can also be implemented on an intranet server, for example.
  • The data sent to the programming interface are, for example, images, texts, tables, measured values and/or other static data. These are transmitted according to the invention from a data source to the programming interface, to be sent from there within a data stream to the MCU. Unlike the audio and/or video streams sent directly from endpoints to the MCU, data are first transferred to the programming interface.
  • Data that are sent to the programming interface come, as stated above, from at least one data source to which, for example, an active and/or a passive participant of a video conference has immediate access. For example, the data source is a file on the computer with which an active or passive participant takes part in the video conference. In another example, an active or passive participant of a video conference is stationed in a control center and may, e.g., cause the feeding-in of work plan files or parts thereof (these are then the data sources in the sense of the invention). Other data sources are, for example, external data sources, which in particular send their data to the programming interface automatically, permanently or triggered by an event. Such data sources, independent of one or more participants, are then endpoints for the purposes of this invention.
  • Audio and video streams, however, do not fall under the concept of "data" in the context of the present invention. Rather, such streams are sent not to the programming interface but to the MCU.
  • The data transferred from the programming interface to the MCU are also represented visually in the total stream, in the form of video information on the output devices, for example as tables, location data, integrated logos, counter data of the most diverse applications, images of all kinds, texts, etc.
  • As already mentioned, the method according to the invention is set up and designed in particular for carrying out a video conference with active and possibly passive participants. At least one of the active participants, and preferably all active participants, is then assigned a respective endpoint, with the endpoints of active participants sending audio and/or video streams to the MCU and receiving the total stream from the MCU. Passive participants, on the other hand, do not send audio and/or video streams to the MCU, but receive the total stream.
  • It should be noted that, furthermore, passive participants in a video conference can, according to an advantageous embodiment, also become active participants; such participants can be referred to as partially or temporarily active participants, who in this state can also transmit audio and/or video streams to the MCU. A passive participant who becomes such a partially or temporarily active participant is also referred to as an active participant in the context of this invention during his active time.
  • The access from the endpoints of active participants of a video conference to the programming interface gives these participants maximum freedom not only in the integration of data but also in the control of the video conference. Particularly preferably, endpoints of active or passive participants can quite generally send control commands to the programming interface, which are then preferably integrated into the said data stream to the MCU. These control commands relate, for example, to the addition and/or exclusion of participants in the video conference and/or data sources, or to the manner of reproduction of participants and/or data (size, position, sharpness, temporal behavior) on the output devices. In this way, the representations on the output devices of the endpoints of the participants can advantageously be controlled. Particularly preferably, the layout of the information contained in the total stream can be controlled by an active and/or passive participant by means of the programming interface. The data reproduced on the output devices of the endpoints preferably lie in different layers of the video, which is reproduced identically on all endpoints.
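Such control commands could be packaged as small structured messages before being folded into the data stream. The following sketch assumes a hypothetical command vocabulary (`add_participant`, `remove_participant`, `set_layout`) and parameter names; none of these are specified in the patent:

```python
# Illustrative control-command builder for the programming interface.
# The action names and parameters are assumptions for illustration.

ALLOWED_ACTIONS = {"add_participant", "remove_participant", "set_layout"}

def build_control_command(action: str, **params) -> dict:
    """Validate and package a control command for the data stream to the MCU."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return {"action": action, "params": params}

# An active participant asks for one source to be shown large,
# in a higher video layer than the thumbnails.
cmd = build_control_command(
    "set_layout",
    target="participant-2",
    size=(1280, 720),
    position=(0, 0),
    layer=1,
)
print(cmd["action"])  # → set_layout
```

Validating the action name at the interface, rather than at the MCU, matches the document's division of labor: the interface processes commands and the MCU only consumes the resulting data stream.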
  • A further variant envisages that the said programming interface accesses predefined stored modes and transmits them to the MCU. These modes specify, for example, the manner in which the information of the audio and/or video streams and the data of the data sources contained in the said total stream is reproduced on output devices, in particular on the endpoints of active and passive participants.
  • By inputting corresponding control commands at the relevant endpoint of an active (or even passive) participant via the said programming interface, it is thus possible to define how and/or where the data are to be placed by the MCU within the total stream. This makes it possible for a participant in the video conference to influence the output devices via the programming interface, for example the type of presentation (positioning) of the active participants.
  • Access by endpoints of active and/or passive participants to the programming interface preferably takes place via an input interface of the endpoint by means of program parts with the aid of which the endpoints can communicate, preferably via standardized protocols. If the endpoint is a smartphone, for example, the programming interface can be accessed via an appropriately configured and designed app installed on the smartphone. The same applies, of course, if instead of a smartphone a computer or a dedicated video conferencing system is used on which the corresponding access programs are installed or through which they are accessible via, for example, the Internet.
  • The audio and/or video streams from the endpoints are most preferably sent directly to the MCU. This constellation offers the advantage that the audio and/or video streams are sent from the endpoints to the MCU as in the prior art; no modifications to the known methods are necessary. By contrast, on the basis of the above-mentioned control commands, preferably sent by an active or passive participant to the programming interface, the integration of data and their presentation in the total stream can be influenced and controlled in various ways.
  • The total stream sent by the MCU is particularly preferably a single audio and/or video stream (so-called single stream) consisting of audio and/or video information in which the individual audio and/or video streams of at least one endpoint and the data of the data source(s) are combined. Accordingly, the data from the data sources are also embedded in the total stream in the form of audio and/or video information and ultimately displayed on the output devices of the endpoints of the active and possibly passive participants.
  • The total stream can be sent from the MCU to the endpoints via various communication or transmission channels in different bandwidths, which may even change dynamically depending on design and availability.
  • Preferably, at least one of the said data sources is an automatic data source which transmits data to the programming interface, for example, without being prompted by an endpoint assigned to an active or passive participant. For example, location data and/or a company logo may be sent from a data source to the programming interface continuously or at particular times, and then forwarded to the MCU, where they are embedded in the total stream. The occurrence of a predefined event can also automatically cause the sending of data from a data source. If, for example, a measured value of a measured variable of a system is recorded continuously or at intervals, the data source can cause the transmission of these measured values, or of a message indicating that a limit value has been exceeded or undershot, to the programming interface. Such an exceeding or undershooting of a limit value can even trigger the automatic initiation of a video conference with a previously defined group of persons.
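The event-triggered variant amounts to a simple threshold check on each reading. The sketch below is a minimal assumption of how such an automatic data source might decide what to push to the programming interface; the limit value and message format are invented for illustration:

```python
# Sketch of an automatic data source that only pushes to the programming
# interface when a limit value is crossed. LIMIT and the message fields
# are hypothetical.

LIMIT = 180  # assumed upper limit for the monitored value

def monitor(readings, send):
    """Send each out-of-range reading (plus a note) via the send callback."""
    events = 0
    for value in readings:
        if value > LIMIT:
            send({"value": value, "note": "limit exceeded"})
            events += 1
    return events

received = []
n = monitor([120, 150, 210, 175, 190], received.append)
print(n)  # → 2  (readings 210 and 190 exceed the limit)
```

In a full system the `send` callback would transmit to the programming interface over the network, and a crossing could additionally trigger the conference-initiation path described above.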
  • Alternatively or additionally, at least one of the data sources is a computer-based data source. In this case a computer, which may be embodied for example as a client or server, accesses one or more data sources with a computer program, for example an app or another software service or process. This access can be automatic and/or permanent. Access particularly preferably takes place without prompting or intervention by a person, i.e. without being caused by an endpoint associated with an active or passive participant of a video conference. Event-triggered access is also possible, for example when a limit value, such as a measurement result in the monitoring of a patient or an air pollution value or the like, is exceeded. The computer can access these data sources, for example, through specialized software protocols and styles such as Representational State Transfer (REST), Simple Object Access Protocol (SOAP), Internet of Things (IoT) protocols, Internet Protocol (IP) and others. The computer then processes the data and prepares it for subsequent transmission to the programming interface.
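A REST-style poll of an external data source might look as follows. To keep the example self-contained, the network call is stubbed with a canned JSON string; the URL, station name and field names are hypothetical:

```python
import json

# Sketch of a computer-based data source polling a REST endpoint and
# repackaging the result for the programming interface.

def fetch(url: str) -> str:
    # Stand-in for a real HTTP GET, e.g. urllib.request.urlopen(url).read()
    return json.dumps({"station": "air-q-1", "pm10": 62.5})

def prepare_for_interface(url: str) -> dict:
    """Fetch, parse and repackage data for transmission to the interface."""
    record = json.loads(fetch(url))
    return {"source": record["station"], "payload": {"pm10": record["pm10"]}}

msg = prepare_for_interface("https://example.invalid/measurements/latest")
print(msg["source"])  # → air-q-1
```

The "processing and preparation" step the text mentions corresponds here to `prepare_for_interface`: raw protocol output is normalized into whatever message shape the programming interface expects.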
  • Preferably, the MCU transmits the total stream directly to endpoints of active participants. On the other hand, it can be advantageous if the MCU transmits the total stream to a media streaming server for the endpoints of passive participants, and the media streaming server distributes this total stream to the passive participants.
  • The invention also relates to devices having one or more processors with instructions stored thereon which correspond to the application programming interface (API) described above, the instructions, when executed, causing the processor(s) to perform the following operations:
    • - receiving and processing data from data sources,
    • - processing control commands, wherein the control commands are predefined commands stored in the programming interface or are commands entered at the endpoint of an active and/or passive participant and received by the programming interface, the control commands preferably relating to the reproduction of audio and video information on the output devices of the endpoints,
    • - processing the data and/or the control commands and creating at least one data stream resulting from this processing, and
    • - sending this at least one data stream to an MCU (Multipoint Control Unit), preferably an MCU as described above.
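The four operations above can be sketched as one small processing loop. Every name in this sketch (`run_interface`, the `kind` field, and so on) is an illustrative assumption, not the claimed implementation:

```python
# Minimal sketch of the four claimed operations of the interface device:
# receive data, process control commands, build a data stream, send to MCU.

def run_interface(inputs, send_to_mcu):
    data, commands = [], []
    for item in inputs:                        # 1) receive and process input
        if item.get("kind") == "command":      # 2) classify control commands
            commands.append(item["command"])
        else:
            data.append(item["payload"])
    stream = {"data": data, "commands": commands}  # 3) create the data stream
    send_to_mcu(stream)                            # 4) send it to the MCU
    return stream

sent = []
stream = run_interface(
    [{"kind": "data", "payload": "table.csv"},
     {"kind": "command", "command": "set_layout"}],
    sent.append,
)
print(len(sent))  # → 1
```

Note that data and control commands travel in the same data stream, which is exactly what lets the MCU apply layout commands to the data it embeds.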
  • The said device is particularly preferably implemented on a hardware server which comprises the processor or processors and on which the programming interface, implemented in software, is installed. Endpoints, in particular of active and/or passive participants, have access to the said programming interface.
  • Further instructions of the device, received as control commands from at least one of the endpoints, when executed preferably cause the processor(s) to embed in the data stream information about the type and/or position of the presentation of audio and/or video streams and/or data, for example images and/or texts.
  • Alternatively or additionally, further instructions, when executed, cause the processor(s) to access predefined modes for communicating predetermined presentation types and positions of the audio and/or video streams transmitted to the MCU by the endpoints and/or of the data fed to the MCU through the programming interface. These presentation types and positions define the type and location of playback of these videos and/or data on the screens of the endpoints of all participants.
  • The invention further relates to a system having one or more processors with instructions stored thereon which correspond to the program functions of a multipoint control unit (MCU) as described above, the instructions, when executed, causing the processor (s) to perform the following operations:
    • - receiving audio and/or video streams from endpoints and/or at least one data stream in which data and preferably also control commands are integrated, originating from a device with a programming interface as discussed above,
    • - generating a total stream in the form of a single stream from the one or more audio and/or video streams and/or the at least one data stream, and
    • - sending this total stream to endpoints for output on output devices of the endpoints and/or for storage.
  • The said system is particularly preferably implemented on a hardware server which comprises the processor(s) and on which the MCU, implemented in software, is installed. The MCU is preferably implemented in the cloud.
  • In this case, the system is particularly preferably able to generate the said total stream as a single stream in which the audio and/or video streams as well as the data are combined. Single streams have the advantage of extreme simplicity, since all participants receive the same stream, possibly adjusted only in terms of its resolution in order to take account of the respectively available bandwidth. Otherwise, however, no differentiation is made: all sources (audio and/or video streams of participants and the data integrated via the programming interface) are merged, and the same total stream is then transmitted to all participants in the form of a single stream.
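The only per-participant adjustment mentioned is resolution versus available bandwidth, which reduces to a lookup in a bitrate ladder. The thresholds and labels below are a hypothetical example, not values from the patent:

```python
# Sketch of choosing a resolution for the single stream per endpoint.
# The (min_kbps, resolution) ladder is an assumed example.

LADDER = [(5000, "1080p"), (2500, "720p"), (1000, "480p"), (0, "240p")]

def pick_resolution(kbps: int) -> str:
    """Return the highest resolution whose bandwidth floor is met."""
    for minimum, label in LADDER:
        if kbps >= minimum:
            return label
    return "240p"

print(pick_resolution(3000), pick_resolution(800))  # → 720p 240p
```

Because the content of the stream is identical for everyone, this selection can happen at send time without re-running the composition step.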
  • The said control commands, which the MCU receives from the programming interface and processes, may relate in particular to:
    • - the reproduction of audio and/or video contents of the total stream on the said output devices, preferably with regard to video layer, position and/or size;
    • - enabling and preventing the integration of data from data sources;
    • - adding and removing endpoints;
    • - displaying selected sources in full-screen mode; and/or
    • - triggering, managing and/or recording a video conference.
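On the MCU side, the listed command categories suggest a small dispatch over conference state. The handler registry and state fields below are an illustrative assumption of one way to realize this, not the patent's design:

```python
# Sketch of an MCU dispatching the listed control-command categories.

state = {"endpoints": set(), "fullscreen": None, "recording": False}

def handle(command: dict) -> None:
    action = command["action"]
    if action == "add_endpoint":
        state["endpoints"].add(command["id"])
    elif action == "remove_endpoint":
        state["endpoints"].discard(command["id"])
    elif action == "fullscreen":
        state["fullscreen"] = command["id"]  # show a selected source full screen
    elif action == "record":
        state["recording"] = command["on"]   # start/stop recording the conference
    else:
        raise ValueError(f"unknown action: {action}")

for cmd in [{"action": "add_endpoint", "id": "ep-1"},
            {"action": "add_endpoint", "id": "ep-2"},
            {"action": "fullscreen", "id": "ep-1"},
            {"action": "record", "on": True}]:
    handle(cmd)
print(sorted(state["endpoints"]))  # → ['ep-1', 'ep-2']
```

Since the commands arrive inside the data stream from the programming interface, the MCU can apply them in arrival order before composing the next frames of the total stream.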
  • In addition, the invention relates to an end point according to claim 16, a video conferencing system according to claim 17 and a computer program product according to claim 18.
  • It should be noted that a person skilled in the art knows the various options for realizing the transmission of the audio and/or video streams, the data, the data streams and the total streams. These possibilities, known to the person skilled in the art, are therefore not discussed in more detail here. The corresponding communication channels can be made available in particular via the Internet, local wired or wireless networks (LAN or WLAN), an intranet or other types of networks. Preferably, wireless transmissions are used to and from endpoints, while communication between the programming interface and the MCU is preferably wired. It goes without saying that the corresponding transmitting and receiving devices are present for this purpose, even if this is not always explicitly stated here. In other words, wherever sending, transmitting and/or receiving of data and streams is mentioned, corresponding transmitting and/or receiving devices are always implicitly disclosed.
  • It will also be appreciated by those skilled in the art that software (e.g. in the form of the programming interface or the MCU) may be installed on one or more computers and/or servers. The same applies to data sources. Computer program products can be present, for example, as electronic memories including internal memory, as a USB stick, as a DVD or as stored on another medium.
  • The invention will be explained below with reference to the drawings, in which:
    • Fig. 1 shows a video conferencing system of the prior art;
    • Fig. 2 shows a video conferencing system according to the invention; and
    • Fig. 3 shows a layer structure of a video played on output devices of endpoints.
  • In Fig. 1, a video conferencing system 100 known in the art is shown. Two endpoints 102, each associated with an active participant in the video conference, each comprise a device 121 for recording and creating audio and/or video streams, which usually reproduce the voice and the face of the corresponding active participant and/or, for example, a recorded film and/or an audio file. In addition, the endpoints 102 each comprise a transmitting and receiving device 122 for transmitting or receiving the audio and/or video streams and an output device 123, for example in the form of a screen and a speaker.
  • While one of the two endpoints 102 (the left of the two) only sends audio and/or video streams AV, the other endpoint 102 (the right of the two) is additionally designed and arranged to integrate data into the audio and/or video stream, so that a common stream AVD is formed and sent. The audio and/or video streams AV and AVD sent by the transmitting and receiving devices 122 are transmitted to an MCU 115 (Multipoint Control Unit), which combines all audio and/or video streams AV, AVD by means of a device 116 into a common total stream GS. This total stream GS, which then consists of audio and/or video information, is sent to the endpoints 102 of the active participants as well as to the endpoints 103 of the passive participants, which for this purpose comprise a receiving device 132 and an output device 133 (in Fig. 1, only a single endpoint 103 of a passive participant is shown as an example). In addition, in the present case, the total stream GS is sent to an electronic memory 108, which receives and stores the total stream.
  • The endpoints 103 of the passive participants of the video conference can be designed and set up to change from passive status to active status. It goes without saying that in this case the receiving device 132 also has a transmitting function.
  • Fig. 2 shows a video conferencing system 1 according to the invention, to illustrate the method according to the invention for transmitting information. As in the embodiment according to Fig. 1, two endpoints 2, each assigned to one active participant, are provided, which in turn each comprise devices 21 for recording and creating audio and/or video streams AV1, transmitting and receiving devices 22, and output devices 23 for reproducing sound and image. The audio and/or video streams AV1, which in turn can reproduce the voice and face of the active participant, but also, for example, videos called up and recorded by the active participant, are transmitted by the respective transmitting and receiving device 22, as in the prior art, directly to an MCU 15 (Multipoint Control Unit).
  • The endpoints 3 assigned to passive participants (only one such endpoint 3 is shown as an example) comprise at least one receiving device 32, which if necessary also has a transmitting function, in particular for the case that the passive participant becomes a partially or temporarily active participant.
  • Comparable to the memory 108 according to the embodiment of Fig. 1, the video conferencing system 1 comprises an electronic memory 8 for recording and storing audio and/or video streams. This memory 8 is an endpoint in the sense of the invention, since endpoints are defined as those with which audio and/or video streams can be sent and/or received.
  • According to the invention, data D1, D2, D3, which do not represent the said audio and/or video streams, are transmitted from respective data sources to a programming interface 10. Three examples of the occasion for the transmission of such data D1, D2, D3 are shown in Fig. 2. The data are in particular static data, i.e. images, texts, tables, result series such as measurement results, etc., but no audio and/or video streams.
  • First, according to the illustrated embodiment, one of the two endpoints 2 is provided with an internal data source 6 which outputs data D1, for example in the form of text or image data present in a stored file. The endpoint 2 can now cause the data D1 to be transmitted to a standardized programming interface 10 (also called API, Application Programming Interface), which is programmed as software and is implemented on a server (not shown). This server is preferably physically different from the server on which the MCU 15 is implemented.
  • Alternatively or additionally, data from other data sources can be sent to the programming interface 10. Another example is shown for the other endpoint 2 of the second active participant. Here, the endpoint 2 accesses an external data source 7 by means of a control command BF1 and causes data D2 to be sent from this data source 7 to the programming interface 10. For this purpose, for example, the active participant enters a corresponding command at the endpoint 2 via an input device (e.g. keyboard or microphone) of the endpoint 2, which is then processed into the control command BF1. The data source 7 may be, for example, a monitoring device whose monitoring data can, if necessary, be retrieved by means of the command BF1 for transmission to the programming interface 10.
  • Of course, endpoints 2 associated with active participants of the videoconference may also be designed to access both data sources 6 at the endpoint 2 itself and external data sources 7, and to transfer data from these data sources 6, 7 to the programming interface.
  • An endpoint 3 of a passive participant can likewise cause the transmission of data D2 from the data source 7 to the programming interface 10 by means of such a control command BF2. It is likewise possible for the endpoint 3 of a passive participant to transfer data from an internal data source (not shown) at this endpoint 3 to the programming interface 10.
  • The endpoints 2, 3 associated with the active and/or passive participants may in particular be endpoints configured for WebRTC or other video protocols and included via browser invocation, such as personal computers and laptops with built-in or added camera and microphone, which then provide the said devices 21 for recording and creating audio and/or video streams. Alternatively, these endpoints 2, 3 can be designed as mobile endpoints with applications installed on them, for example smartphones whose apps access and control the programming interface 10. According to another alternative, the endpoints 2, 3 are dedicated endpoints with video content management systems installed on them.
  • Furthermore, FIG. 2 shows a data source 5 which transmits data D3 to the programming interface 10 independently of an endpoint 2 or 3 of an active or passive participant. An example of this is a data source 5 designed as an automatic or computer-based data source. Such a data source 5 may, for example, be designed as a measuring instrument connected to a patient (for example, an implanted blood glucose meter). Through an event e, for example a reading on said meter exceeding a limit, the measured data D3 are sent from the data source 5 to the said programming interface 10. It is also possible that, on such an event e, a videoconference is automatically started or initiated, or that the desired participants of such a videoconference are prompted by a message (via SMS, WhatsApp message, phone call, specific signal on a smartphone, etc.) to dial into the videoconference or to initiate one.
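The event-triggered behavior just described can be sketched as follows. The threshold value and all names are hypothetical; the patent specifies only that an event e (e.g. an exceeded reading) triggers the transmission of D3 and may start a conference:

```python
THRESHOLD = 180  # hypothetical limit triggering event e

def on_reading(value, api_inbox, conference):
    """Automatic data source 5: a reading above the limit raises event e,
    sends data D3 to the programming interface and may start a conference."""
    if value > THRESHOLD:
        api_inbox.append({"type": "measurement", "value": value})  # data D3
        if not conference["running"]:
            conference["running"] = True  # auto-initiate the videoconference
            conference["notify"] = ["doctor", "patient"]  # prompt participants

inbox, conf = [], {"running": False}
on_reading(150, inbox, conf)  # below threshold: nothing is sent
on_reading(204, inbox, conf)  # event e: D3 sent, conference initiated
```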
  • In the above example, the data source 5 is an automatic data source, characterized in that the data D3 are transmitted to the programming interface 10 without being caused by an endpoint 2 or 3 associated with an active or passive participant. The data source 5 can be designed as an automatic data source in various ways. Apart from triggering by a certain event (see above), for example, a certain logo (present in the form of image data D3) can be sent from an automatic data source 5 to the programming interface 10 when a videoconference is started. The logo is then ultimately displayed on all output devices 23, 33 of the endpoints 2, 3 of the participants.
  • Data sources 5 which are distinguished by the fact that no endpoint 2, 3 of an active or passive participant causes the sending of data D3 to the programming interface 10 can also be designed as computer-based data sources. In this case, the data source 5 comprises a computer, which may in particular be designed as a client or as a server on which a computer program is installed (for example an app, a service, a process, etc.). The computer program is designed to access a data source, to process its data, and then to transfer this processed data D3 to the programming interface 10.
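What distinguishes the computer-based source is the processing step before forwarding. A minimal sketch, assuming (hypothetically) that the program condenses raw readings into summary values before sending them as D3:

```python
def computer_based_source(raw_values, api_inbox):
    """Sketch of data source 5 as a computer-based source: a program
    processes raw data first, then forwards only the result as D3."""
    processed = {"mean": sum(raw_values) / len(raw_values),
                 "max": max(raw_values)}
    api_inbox.append({"type": "processed", "data": processed})  # data D3
    return processed

inbox = []
result = computer_based_source([4, 6, 8], inbox)
```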
  • The data D1, D2 and/or D3 are preferably combined by the programming interface 10 into a data stream DS and transferred to the MCU 15. The MCU 15 has a device 16 for combining the data stream DS and the one or more audio and/or video streams AV1 into a total stream GS, which is preferably formed as a single stream. This total stream GS, which consists of audio and/or video information, is sent to the endpoints 2 of active participants, preferably all of them, and to the endpoints 3 of passive participants, preferably all of them, and is reproduced in sound and image on their respective output devices 23, 33.
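The two aggregation stages — interface 10 bundling data into DS, then device 16 of the MCU 15 merging DS with the audio/video streams into the single total stream GS — can be modeled abstractly (the dictionary shapes are illustrative assumptions, not a wire format from the patent):

```python
def build_data_stream(items):
    """Programming interface 10: bundle data D1..D3 into one data stream DS."""
    return {"kind": "DS", "items": list(items)}

def combine(data_stream, av_streams):
    """Device 16 of the MCU 15: merge DS and the audio/video streams
    into a single total stream GS (single-stream output)."""
    return {"kind": "GS", "av": list(av_streams), "data": data_stream["items"]}

ds = build_data_stream([{"id": "D1"}, {"id": "D2"}])
gs = combine(ds, ["AV1", "AV2"])
```

Because GS is a single stream, every receiving endpoint plays back the same composed result regardless of how many sources contributed.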
  • FIG. 2 also schematically shows a further endpoint 9, additionally or alternatively to the endpoints 2, 3, 8. This endpoint 9 is likewise a source of audio and/or video streams AV2, but without its own output device. The endpoint 9 can be designed, for example, as an IP camera or a YouTube channel. To receive the audio and/or video stream AV2, the programming interface 10 first sends the access data K1 to the MCU 15, which then sends them, possibly after preparation, to the endpoint 9. The endpoint 9 is thereby caused, in a known manner, to send the audio and/or video stream AV2 to the MCU 15. The MCU 15 then integrates this audio and/or video stream AV2 into the total stream GS.
  • FIG. 2 further shows that control commands S1 or S2 can be entered at an endpoint 2 or 3 assigned to an active or passive participant, which are then sent to the programming interface 10. The programming interface 10 prepares these control commands and embeds them in the data stream DS. The MCU 15 then processes these control commands; by means of such control commands S1, S2, for example, the layout or playback of the audio and/or video information in the audio and/or video streams AV1, AV2 or of the data D1, D2, D3 on the output devices 23, 33 can be influenced. As another example, data from data sources 5, 6, 7 can be requested by means of such control commands S1, S2, or data of such data sources can be excluded from integration into the data stream. A further example is the addition or deactivation of active (and/or passive) participants of a videoconference, whereby their audio and/or video streams AV1 are either included in the total stream GS or not. By transmitting such control commands from, for example, the endpoint 2 of an active participant, this participant can exert extensive influence on the conduct and/or the acoustic and/or visual reproduction of the videoconference.
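The in-band control path — commands embedded into DS by the interface, then evaluated by the MCU when building GS — can be sketched as below. The command vocabulary (`"mute_av"`) is a hypothetical example of excluding a participant's stream:

```python
def embed_commands(data_stream, commands):
    # Programming interface 10 prepares commands S1/S2 and embeds them in DS.
    return {**data_stream, "commands": list(commands)}

def apply_commands(total_stream, data_stream):
    # MCU 15 evaluates the embedded commands when composing the total stream GS,
    # e.g. excluding a participant's audio/video stream on request.
    ts = dict(total_stream)
    for cmd in data_stream.get("commands", []):
        if cmd["op"] == "mute_av":
            ts["av"] = [s for s in ts["av"] if s != cmd["stream"]]
    return ts

ds = embed_commands({"kind": "DS", "items": []},
                    [{"op": "mute_av", "stream": "AV2"}])
gs = apply_commands({"kind": "GS", "av": ["AV1", "AV2"]}, ds)
```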
  • FIG. 3 schematically shows different levels or layers "Z: -M" to "Z: N" of a video defined by a total stream, which is played on the output devices 23, 33 of endpoints 2, 3 of active and passive participants. M and N are natural numbers; the sum M + N + 1 (including the layer Z: 0) is the total number of levels. The M levels are the lower levels, the N levels are the upper levels, and level 0 is the conference level.
  • The arrangement and content of each layer are determined by the endpoints 2, 3, in particular by input of control commands as described above via an input interface, such as a keyboard, of an endpoint 2 or 3 of an active or passive participant. These commands S1, S2 are, as described, sent to the programming interface 10, which processes them and embeds them into the data stream DS to the MCU 15, so that the MCU 15 can determine the layout of the videoconference on the output devices of the endpoints 2, 3 in the total stream GS.
  • According to FIG. 3, for example, the position (see "Pos (x/y/z)") and/or the size of the reproduction of texts (see "TEXT -2" in level Z: -2), images (see "IMAGE -3" in level Z: -3) or other content such as videos (see "CONTENT 2" in level Z: 2), which correspond to the data D1, D2 and/or D3, can be specified on the output devices 23, 33, in particular screens, of the endpoints 2, 3. The said layout of the video components of the total stream reproduced on the output devices 23, 33 can also be influenced by color choice, degree of transparency of individual images or videos, and other commands entered at the endpoint 2 or 3 and sent to the programming interface 10. The video information corresponding to the data D1, D2 and/or D3 can be placed above (Z > 0) or below (Z < 0) the conference level (Z: 0).
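The layer model of FIG. 3 amounts to a compositing order: lower levels (Z < 0) are drawn first, then the conference level (Z: 0), then the overlays (Z > 0). A small sketch using the example labels from the figure (positions are illustrative):

```python
def render_order(elements):
    """Sort layout elements by z level: lower levels (Z < 0) are drawn first,
    then the conference level (Z = 0), then the overlay levels (Z > 0)."""
    return [e["name"] for e in sorted(elements, key=lambda e: e["z"])]

layout = [
    {"name": "CONTENT 2",  "z": 2,  "pos": (10, 10)},
    {"name": "CONFERENCE", "z": 0,  "pos": (0, 0)},
    {"name": "IMAGE -3",   "z": -3, "pos": (5, 80)},
    {"name": "TEXT -2",    "z": -2, "pos": (5, 60)},
]
order = render_order(layout)
```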
  • Furthermore, it is possible for the programming interface 10 to access predefined modes, preferably but not necessarily after appropriate command input at one of the endpoints 2, 3, and to provide the MCU 15 with these modes, which contain information about the desired representations and positions of the information corresponding to the data D1, D2 and/or D3 as well as of the audio and/or video streams AV1, AV2. This information is then embedded in the total stream GS leaving the MCU, so that the nature and location of the reproduction of these data and/or video data on the said output devices 23, 33 of the endpoints 2, 3 are fixed. The said modes can in particular be chosen by the endpoints 2, 3 of the active and passive participants.
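Such predefined modes can be thought of as a preset table stored in the interface 10 and handed to the MCU 15. The mode names and layout fields below are hypothetical examples, not from the patent:

```python
# Hypothetical preset table: each mode maps to layout information the
# programming interface 10 could pass on to the MCU 15.
MODES = {
    "presentation": {"conference": "thumbnail", "data": "fullscreen"},
    "discussion":   {"conference": "grid",      "data": "hidden"},
}

def select_mode(name, default="discussion"):
    # Fall back to a default mode when an endpoint requests an unknown one.
    return MODES.get(name, MODES[default])

layout_info = select_mode("presentation")
```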
  • The invention can be used in many areas for a wide variety of applications, e.g. in so-called unified communications, the Internet of Things, and in the health sector. For example, in an Internet of Things use case, streams flow from different endpoints to the MCU 15, and data from multiple interconnected devices or data sources 5, 6, 7 (e.g. static images and texts) are sent to the programming interface 10. According to the invention, the streams originating from these different endpoints are sent to the MCU 15 directly, while the data are sent indirectly, via the programming interface 10 in the form of a data stream DS. Upon the occurrence of a particular event, a videoconference may then be initiated automatically, especially if this event requires direct intervention of, for example, an active participant in the videoconference.
  • In the health sector, for example, the invention can be used to remotely monitor the health of a patient and, e.g., to start a videoconference between doctor and patient.
  • Incidentally, the method according to the invention can easily be combined with video content management systems such as Kaltura and YouTube in order to record, store, make searchable, and distribute the total stream GS.
  • The individual elements, as shown by way of example in FIG. 2, can also be used in various combinations for carrying out the method according to the invention. For example, a videoconference can be realized with only one active participant and many passive participants, without additional internal or external data sources 6, 7 or automatic or computer-based data sources 5 having to be present. Other endpoints 9 such as IP cameras could also be addressed solely through the programming interface 10 and the MCU 15, for example driven by an endpoint 2, and the audio and/or video streams AV2 of the endpoint 9 sent to the endpoints 2, 3 of active and/or passive participants. It is also possible to make only a recording in the electronic memory 8, for later evaluation purposes or merely for archiving. In the case of patient monitoring as described above, the measured values of a measuring device can also be integrated into a videoconference between doctor and patient, triggered by an event (e), e.g. the exceeding of a measured value. A variety of other applications are readily conceivable.
  • LIST OF REFERENCE NUMBERS
  • 1
    Video conferencing system
    2
    Active participant
    3
    Passive participant
    5
    Endpoint in the form of a data source
    6
    Internal data source at the endpoint of the active participant
    7
    External data source with access by participants
    8
    Memory
    9
    Controllable audio/video source (IP camera, YouTube, ...)
    10
    Programming Interface (API)
    15
    MCU
    16
    Device for combining in single stream
    21
    Device for recording and creating audio and / or video streams
    22
    Transmitting and receiving device
    23
    output device
    32
    Receiving device
    33
    output device
    100
    Video conference system according to the prior art
    102
    Endpoint of active participant
    103
    Endpoint of passive participant
    108
    Memory
    115
    MCU (Multipoint Control Unit)
    116
    Means to summarize
    121
    Device for recording and creating audio and / or video streams
    122
    Transmitting and receiving device
    123
    output device
    132
    receiver
    133
    output device
    AV1
    Audio and / or video stream
    AV2
    Audio and / or video stream
    AVD
    Audio and / or video stream with integrated data
    BF1
    command
    BF2
    command
    D1
    Data
    D2
    Data
    D3
    Data
    S1
    command
    S2
    command
    DS
    data stream
    K1
    Transmission of access data
    e
    event
    GS
    Total stream

Claims (18)

  1. A method for transmitting information by means of streaming technology via communication channels, wherein at least one endpoint (2, 3, 8, 9) is provided which has transmitting and/or receiving devices (22, 32) for audio and/or video streams (AV1, AV2, GS), and at least one audio and/or video stream (AV1, AV2) is transmitted from an endpoint (2, 9) with a transmitting device (22) via a communication channel to a preferably cloud-based MCU (15) (Multipoint Control Unit), which creates a combined audio and/or video stream (GS) from this and possibly further audio and/or video streams (AV1, AV2) transmitted to the MCU (15) and transmits it to at least one endpoint (2, 3), characterized in that - data (D1, D2, D3) are sent from at least one data source (5, 6, 7) to a preferably standardized programming interface (10), and the programming interface (10) processes these data (D1, D2, D3) and transmits at least one data stream (DS) resulting from this processing via at least one communication channel to the MCU (15), and - the MCU (15) combines said data (D1, D2, D3) of the at least one data stream (DS) and said audio and/or video streams (AV1, AV2) into a total stream (GS) and transmits this total stream (GS) via communication channels to one or more endpoints (2, 3, 8).
  2. Method according to claim 1, characterized in that data (D1, D2, D3) transmitted to the programming interface (10) comprise images, texts, tables, measured values and/or other static data.
  3. Method according to claim 1 or 2, characterized in that it is designed as a method for conducting a videoconference with active participants, wherein at least one of the active participants, and preferably all active participants, is each assigned a said endpoint (2), wherein the endpoints (2) of active participants send audio and video streams (AV1) to the MCU (15) and receive the total stream (GS) from the MCU (15).
  4. Method according to claim 3, characterized in that it is designed as a method for conducting a videoconference with passive participants, wherein at least one of the passive participants, and preferably all passive participants, is each assigned a said endpoint (3), wherein the endpoints (3) of passive participants receive the total stream (GS) from the MCU (15).
  5. Method according to claim 3 or 4, characterized in that control commands (S1, S2) are sent from at least one endpoint (2, 3) associated with an active participant or a passive participant to the programming interface (10), which are preferably incorporated into the said data stream (DS), in order - to control the reproduction of the information in the total stream (GS) on output devices (23, 33) of the endpoints (2, 3) of active and/or passive participants, and/or - to additionally include or exclude audio and/or video streams (AV1) of endpoints (2, 3, 8, 9), and/or - to additionally include or exclude data (D1, D2, D3) of data sources (5, 6, 7).
  6. Method according to claim 5, characterized in that the control commands (S1, S2), which relate, for example, to the manner of reproduction of the information contained in said total stream (GS) of the audio and/or video streams (AV1, AV2) and/or data (D1, D2, D3) of the data sources (5, 6, 7) on the said output devices (23, 33) of the endpoints (2, 3), are embedded into the at least one data stream (DS) sent to the MCU (15).
  7. Method according to at least one of the preceding claims, characterized in that the said programming interface (10) accesses predefined modes stored in the programming interface (10) and sends them to the MCU (15), for example with regard to the manner of reproduction of the information contained in said total stream (GS) of the audio and/or video streams (AV1, AV2) and/or data sources (5, 6, 7) on said output devices (23, 33) of the endpoints (2, 3).
  8. Method according to at least one of the preceding claims, characterized in that at least one endpoint (2, 3) assigned to an active or passive participant accesses a data source (6) of the endpoint (2, 3) or an external data source (7) and causes data (D1, D2) to be sent from this data source (6, 7) to the programming interface (10).
  9. Method according to at least one of the preceding claims, characterized in that at least one data source (5) is an automatic data source which sends data (D3) to the programming interface (10) without initiation or intervention by a person, such as without being caused by an endpoint (2, 3) associated with an active or passive participant of a videoconference.
  10. Method according to at least one of the preceding claims, characterized in that the transmission of data (D) from a data source (5) to the programming interface (10) is triggered by a predefined event (e).
  11. Method according to at least one of the preceding claims, characterized in that at least one data source (5) is a computer-based data source, wherein a computer, by means of a software program and without initiation or intervention by a person, such as without being caused by an endpoint (2, 3) associated with an active or passive participant of a videoconference, accesses this data source (5), processes data from this data source (5), and then sends them to the programming interface (10).
  12. Method according to at least one of the preceding claims, characterized in that endpoints (2, 3) are selected from the following group: - endpoints set up for WebRTC or other video protocols and included via browser call, for example PCs and laptops, with built-in or added (add-on) camera and microphone as devices for recording and creating audio and/or video streams (21), - mobile endpoints with applications (apps) installed on them, for example smartphones, and/or - dedicated endpoints with video content management systems installed on them.
  13. Device having one or more processors with instructions stored thereon which correspond to the program functions of the programming interface (10) according to any one of the preceding claims, wherein the instructions, when executed, cause the processor(s) to perform the following operations in the said order: - receiving and processing data (D1, D2, D3) from data sources (5, 6, 7), - processing control commands (S1, S2), wherein the control commands are predefined commands stored in the programming interface (10) or commands entered at the endpoint (2, 3) of an active and/or passive participant and received by the programming interface (10), wherein the control commands (S1, S2) preferably each relate to the reproduction of audio and video information on the output devices (23, 33) of the endpoints (2, 3), - processing the data and/or the control commands (D1, D2, D3, S1, S2) and creating at least one data stream (DS) resulting from this processing, and - sending this at least one data stream (DS) to an MCU (15) (Multipoint Control Unit), preferably an MCU according to one of the preceding claims.
  14. System having one or more processors with instructions stored thereon which correspond to the program functions of a multipoint control unit (MCU) according to any one of the preceding claims, wherein the instructions, when executed, cause the processor(s) to perform: - receiving audio and/or video streams (AV1, AV2) of endpoints (2, 9) and/or at least one data stream (DS) in which data (D1, D2, D3) and preferably also control commands (S1, S2) are integrated, and which is generated by a device with a programming interface (10) according to claim 13, - generating a total stream (GS) in the form of a single stream from the one or more audio and/or video streams (AV1, AV2) and/or the at least one data stream (DS), and - transmitting this total stream (GS) to endpoints (2, 3, 8) for output on the output devices (23, 33) and/or for storage.
  15. System according to claim 14, characterized in that the said control commands (S1, S2) relate to one or more elements of the following group: - reproduction of audio and/or video content of the total stream (GS) on the said output devices (23, 33), preferably with regard to video level, position and/or size; - enabling and preventing the integration of data (D1, D2, D3) from data sources (5, 6, 7); - adding and removing audio and/or video streams (AV1, AV2) of endpoints (2, 3, 8, 9) to and from the total stream (GS); - displaying selected sources in full-screen mode; - triggering, managing and/or recording a videoconference.
  16. Endpoint (2, 3) for participation in a videoconference by an active or passive participant, the endpoint (2, 3) comprising devices (21) for recording and creating audio and/or video streams (AV1, AV2) and a transmitting and/or receiving device (22) for transmitting said streams to an MCU (15) (Multipoint Control Unit) and/or a receiving device (22, 32) for receiving audio and video streams (GS), characterized in that the endpoint (2, 3) is further configured and arranged to access, preferably via the Internet, at least one standardized programming interface (10) of a device according to claim 13 and to send control commands (S1, S2) to this programming interface (10), in particular for controlling the playback of the videoconference on output devices (23, 33).
  17. Videoconferencing system with a device according to claim 13, a system according to claim 14 or 15, and at least one endpoint (2) according to claim 16.
  18. Computer program product comprising at least one program code stored on a computer-readable medium or loaded directly into an internal memory, the internal memory being part of an endpoint (2, 3) or server unit configured to participate in a videoconference, said program code being designed, when executed, to carry out the method steps according to one or more of the preceding claims.
DE102017110431.3A 2017-05-12 2017-05-12 Method for transmitting information Pending DE102017110431A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE102017110431.3A DE102017110431A1 (en) 2017-05-12 2017-05-12 Method for transmitting information

Publications (1)

Publication Number Publication Date
DE102017110431A1 true DE102017110431A1 (en) 2018-11-15

Family

ID=63962327

Country Status (1)

Country Link
DE (1) DE102017110431A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080059581A1 (en) * 2006-09-05 2008-03-06 Andrew Pepperell Viewing data as part of a video conference
US20160127508A1 (en) * 2013-06-17 2016-05-05 Square Enix Holdings Co., Ltd. Image processing apparatus, image processing system, image processing method and storage medium



Legal Events

Date Code Title Description
R163 Identified publications notified
R082 Change of representative

Representative's name: PATENTANWAELTE CANZLER & BERGMEIER PARTNERSCHA, DE