WO2023017459A1 - System for improving an online meeting - Google Patents


Info

Publication number
WO2023017459A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
previous
settings
studio
camera
Prior art date
Application number
PCT/IB2022/057502
Other languages
French (fr)
Inventor
Yoeri Bertha J RENDERS
Tom Das
Original Assignee
3-Sixtie Bv
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3-Sixtie Bv filed Critical 3-Sixtie Bv
Publication of WO2023017459A1 publication Critical patent/WO2023017459A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring

Definitions

  • the present invention relates to a compact and user-friendly system for improving an online meeting, video conference or live presentation.
  • Online meetings, video conferences, live presentations, streamings, broadcastings or other similar communications are hereinafter generally referred to by the term video calls.
  • Video and audio are to be interpreted as any camera and sound technology. Any connection can be used for this, for example the internet.
  • a disadvantage of current video calls is the static approach. Participants communicate via a web camera on a laptop or smartphone. Sharing other information, such as, for example, an Excel sheet or PowerPoint presentation, is done by calling up said information on the computer screen and subsequently sharing the screen in the video call.
  • Editing and staging certain information in advance is known .
  • a presentation is then made, for example, whereby the different sources of information are edited into a multi-information flow.
  • a disadvantage of such systems is that the presentation or information flow is determined beforehand.
  • Another known system, for live streaming of what happens on a desktop, is OBS Studio. Via Open Broadcaster Software (OBS), live or recorded videos can be made and streamed to the most popular platforms, such as YouTube, Facebook Live, Twitch, or other streaming services. OBS Studio allows direct recordings to be made with a webcam and microphone. In addition, pictures can be added, such as recorded video, as well as existing visual material, data from games or photographs. During a live stream, several pictures can be shown simultaneously, or it is possible to switch to other pictures.
  • OBS uses scenes and sources .
  • a source is information that can be shown in a live video, such as, for example, an image, but also a recording of a camera.
  • Several sources or pieces of information can be added and shown in a scene.
  • a new scene is created which uses other sources. In this way, several scenes are built, between which it is possible to switch with a mouse click by clicking an icon.
  • a disadvantage of such a system like OBS is that the content is shown in a scene as such, in the form of, for example, an uploaded text or photograph, or a video shot with a certain camera.
  • a video call system is needed which radiates the professionalism and quality of a multi-camera video production environment such as a direction room, where a team of technicians and specialists, controlled by a director, is needed to show a specific picture in a shot and to broadcast live under perfect sound and light circumstances.
  • Video is a complex interaction of shooting, direction, script, light and editing. This is largely a manual process.
  • a video production environment consists of many types of video production devices, such as video cameras, microphones, video recorders, video switching devices, audio mixers, digital video effect devices, teleprompters, video graphic overlay devices, etc.
  • a standard production team may, for example, consist of camera people, a video technician who operates the camera operating units for a set of cameras, a teleprompter operator, a lighting director who operates the studio lights, a technical director who operates the video switcher, an audio technician who operates an audio mixer, operator(s) who operate(s) a series of recorders and playback units, etc.
  • a system is also needed whereby a moderator of a video call can respond to the specific questions of the participant(s) and call up and show the requested information in real time.
  • Such interaction occurs in a video production environment of, for example, a live talk show, where the spectators can ask questions and the relevant information is brought into view by the director.
  • the purpose of the present invention is to provide a solution to at least one of the aforementioned and other disadvantages .
  • the invention relates to a system for improving an online meeting, video conference, live streaming or multi-camera broadcasting via a live communication network, the system comprising one or several scenes which are saved by the user, the system further comprising a studio setup with one or several cameras, one or several screens, lighting, audio in/out and a control, whereby a moderator is brought into view via at least one camera, whereby the system comprises one or several sources of content, or for generating content, which are linked to the aforementioned one or several scenes, whereby the one or several scenes are randomly called up and uploaded in a live communication session, such that the relevant content of the linked sources is presented or generated live, and whereby, in addition to the sources, the studio settings for lighting and audio in/out and all required adjustable studio parameters, including a scene title and memo bar with reminder text, are also preset and saved in each of the one or several scenes.
  • the system according to the invention eliminates the need for an extensive direction team and allows a moderator to simply take over the direction and set up a professional configuration with a minimum of technical knowledge.
  • a scene is a storage of information or a pointer to information coming from one or several sources of information.
  • the information saved in a scene can be called up and shown at all times.
  • the actual information is generated live and shown when calling up the relevant scene.
  • a scene is a storage of information or a pointer to information coming from one or several sources of information, together with all settings for correctly calling up and showing the information of the relevant scene.
  • a source of information is information that can be shown in a live video, such as, for example, an image, but also a recording of a camera.
  • sources or information can be added and shown, and the settings are preset and saved for a correct showing of the scene.
  • a new scene is created that uses other sources and other settings.
  • a scene can consist of all possible data and is a predefined input.
  • a scene can, for example, be a picture-in-picture, a camera point of view (position pan-tilt-zoom coordinates), a presentation, a fixed camera or microscope point of view, a link to a webpage, etc. Every scene is a unique snapshot that is linked to a button, and every scene consists of different parameters that can be called up as soon as the button is pressed. Further, specific sound, light and memo bar parameters or information can be linked to every visual scene connected to a button.
  • a scene with one user action is called up in the live communication session such that the content is uploaded and shown and the linked preset studio settings are also uploaded and configured as such.
  • Via a studio control one figurative "press of a button" can call up a scene in a live session.
  • the purpose of the invention is transforming the static approach of current video calls into a dynamic and instantly responding system, and this with a plurality of inputs, including existing content such as a website, photograph, video or media, but also dynamic content generated by video equipment whose settings, such as camera positions, light settings, sound level and many other parameters, can be configured and saved, all integrated under "one button", whereby the multiple content can be called up and shown during the video call as one scene with "one press of the button", either by uploading existing content, or by generating dynamic content with the video equipment that is configured according to the set parameters.
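The "one button" concept described above can be pictured as a small data model: one saved object that bundles the content sources, a camera preset and the studio settings, all recalled with a single action. The following Python sketch is purely illustrative; all class names, fields and values are assumptions and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class StudioSettings:
    # Hypothetical studio parameters saved per scene (illustrative names)
    light_on: bool = True
    light_dim: int = 100              # DIM value, 0-100
    colour_temperature_k: int = 5600  # colour temperature in kelvin
    microphone: str = "table"
    audio_out: str = "speaker"
    title: str = ""
    memo: str = ""                    # reminder text shown in the memo bar

@dataclass
class Scene:
    name: str
    sources: list = field(default_factory=list)   # e.g. "CAM2", "website", "HDMI"
    camera_ptz: tuple = (0, 0, 1.0)               # pan, tilt, zoom preset
    settings: StudioSettings = field(default_factory=StudioSettings)

def recall(scene: Scene) -> dict:
    """Simulate calling up a scene with one user action: everything the
    scene stored is returned as one configuration to be applied at once."""
    return {
        "sources": scene.sources,
        "ptz": scene.camera_ptz,
        "light": (scene.settings.light_on, scene.settings.light_dim),
        "memo": scene.settings.memo,
    }

scene1 = Scene("Scene 1", sources=["CAM2", "background screen"],
               camera_ptz=(10, -5, 2.0),
               settings=StudioSettings(memo="introduce the product"))
config = recall(scene1)
```

The key design point mirrored here is that the studio settings live inside the scene object itself, so recalling the scene never requires a separate configuration step.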
  • the system of the invention is compact, centrally controlled and user-friendly. Cameras, microphones, speakers, headphones and screens are simply connected via wires or wirelessly. The system can be set up by every user.
  • the exchanged information is very diverse and comprises several types of information.
  • the purpose of the present invention is to provide a solution for this.
  • KR 2021 0020293 relates to a service system for providing a rental studio for personal broadcasting, and more specifically for providing a rental studio to a user who has difficulty configuring equipment for personal broadcasting, supporting the user in transmitting, in a streaming manner, broadcast content generated with the equipment configured in the rental studio.
  • the invention provides, in a rental manner, a user with a studio in which the user can produce and transmit broadcast content for personal broadcasting without a need to purchase equipment for personal broadcasting, thereby reducing the cost burden of a user.
  • the invention supports the setting of each kind of equipment configured in the studio to be set automatically according to a user's desired broadcasting type, such that the user can easily create high-quality broadcast content and subsequently transmit the content through a broadcasting service server for personal broadcasting.
  • the system has an effect of enhancing the convenience of personal broadcasting for a user who broadcasts broadcast content.
  • the setting is not scene-specific. If the equipment is used in several scenes, but for example with a different setting, the setting needs to be adapted for the relevant scene.
  • the automatic setting is not saved in a scene either.
  • US 2020/177965 describes methods, devices and systems to provide network services, including to manage and/or produce a media stream, including by a party who is in a location that is remote compared to the live media streaming device, and including to provide the network services to users of a local area network ("LAN"), including users of the LAN who may not have an internet connection, as well as to users who connect to the network service over a wide area network, such as the Internet.
  • LAN local area network
  • KR 101 790 280 relates to a real-time 3D virtual studio broadcast editing and transmission device and method. To be able to edit and transmit 3D broadcasts in virtual studios in real time, a lot of data processing is required. This relates to the real-time setting of a broadcast recording whereby the amount of calculation can be reduced.
  • None of the above known systems provides, nor hints at, the system of the present invention for presetting and saving the complete studio settings in and for the one or several scenes.
  • the system of the present invention can call up and show a scene, and consequently also the relevant preset studio settings, with one press of a button, without having to update a single setting after calling up the scene in real time.
  • Scenes can be called up one after the other and shown without having to update a single setting after calling up each scene in real time.
  • the advantage of the system according to the invention is the possible integration in one scene of content coming from data sources containing statically saved content on the one hand, and data sources for the live generation of dynamic content on the other hand, whereby said latter data sources are configured and controlled live by setting and saving the recording parameters for video, sound and lighting, and whereby the scene can be called up live at all times with one user action, such as, for example, a mouse click.
  • the term content relates to both static and dynamic information.
  • Static information is existing content, such as, for example, a text, an image, a PowerPoint presentation or a recorded video.
  • Dynamic information is content that is generated real-time or live, such as a camera image, lighting and sound.
  • Dynamic content is typically generated by configurable sources, such as a camera, microphones or lighting equipment.
  • the scene relates to a staging such as in the direction room of a video production environment, whereby the scene is called up live in its entirety with one user action in a video call by the moderator/director, whereby the video production devices are configured according to the set recording parameters and, in said configuration, record or generate dynamic content and can subsequently broadcast it live.
  • a direction room is a room from where something is controlled. In the world of television, a live broadcast is carried out from the direction room. In the direction room there are people who take care of the subtitles, change the camera points of view such that the subject is always well framed, and set the lighting and the sound optimally.
  • the system according to the invention relates to carrying out a live broadcast whereby, by calling up one scene, for example, camera points of view are adjusted, subtitles are provided, and the lighting and the sound are optimally set live.
  • a scene is created and saved comprising a source, such as, for example, an (e-)PTZ camera, which is preset in pan, tilt and/or zoom settings such that the camera is directed at an item that needs to be brought into view, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo, and possibly other scene-specific studio parameters for bringing the relevant item into view.
  • a scene is created and saved comprising two sources: a first source, such as, for example, an (e-)PTZ camera, which is preset in pan, tilt and/or zoom settings such that the camera is directed at an external monitor or screen on which a background picture is preset via a second source, such as, for example, an HDMI input, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo, and possibly other scene-specific studio parameters.
  • a scene is created and saved comprising a source for generating dynamic content, whereby the source is preset according to choice, and whereby the scene further comprises studio settings such as light settings, audio settings, title and memo, and possibly other scene-specific studio parameters for bringing into view the content generated by the relevant source.
  • a scene is created and saved comprising a source of static content, such as, for example, media or an image or document, whereby the source is preset according to choice, and whereby the scene further comprises studio settings such as light settings, audio settings, title and memo, and possibly other scene-specific studio parameters.
  • the system of the invention further comprises the characteristics of the dependent claims .
  • the studio settings comprise, among others: settings for the lighting, colour temperature, microphone configuration, speakers or headset configuration, title, and memo or reminder text.
  • the studio setup can be provided with a monitor or screen for presenting a background picture .
  • the one or several sources can be video sources or any other data source, such as, for example, a camera, HDMI input, hard disk or USB stick, and possibly other connected (wired or wireless) electronic equipment for generating video or any other data, linked either directly physically, wirelessly, or via one or another protocol for linking data or pictures.
  • said external sources are linked to the system via a dedicated processor hardware module provided with a memory and processor.
  • the data sources can also be saved in the memory of the module.
  • Said module is provided with several multifunctional connection slots or wireless connections.
  • the module is also provided with a memory for the system and for saving the scenes.
  • the module provides all functionalities of the system.
  • the module can be linked, via a connection slot or wirelessly, to a stream deck or other operating module for selecting a random scene with one action.
  • the aforementioned studio settings, as well as the content of the sources, are uploaded, whereby the lighting and/or audio in/out are configured according to the preset parameters.
  • Specific studio settings can be preset for every scene.
  • the studio settings are preset in the same phase as the preset or selection of the sources and their content .
  • the studio settings are saved in the same phase as the storage of the sources and their content .
  • the sources, content and studio settings are saved together in one scene .
  • settings can be assigned and saved for several scenes .
  • Parameters and settings that can be linked to a scene and can be called up together with the scene are understood to mean, among others, the variables from the non-exhaustive list of studio settings below:
  • - scene title, such as a logo, text or ID;
    - memo, reminder text or "cue";
    - light setting: on/off, DIM value, colour temperature, CCT value (0-10 volt), via Bluetooth, WiFi, DMX data signal, or future connection protocols;
    - sound setting: speaker, in-ear, headphones, locally or on the person's body, via Bluetooth, wire, USB and future connection protocols;
    - microphone setting: table microphone, pinned-on microphone, microphone integrated in ears or headphones, wireless or wired, Bluetooth, USB and future signals and protocols;
    - Pan-Tilt-Zoom (PTZ) position of a camera, via Visca protocol, RS485, network, wireless signal, USB or any future signal, and/or auto tracking/framing activation on the camera;
    - on/off signal to physically activate or deactivate a hardware device.
  • PTZ Pan-Tilt-Zoom
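One way to picture how the saved variables above could be called up together with a scene is a dispatch table that routes each stored parameter group to the matching device driver. The driver functions below are hypothetical placeholders standing in for real transport protocols (DMX for light, VISCA/RS485 for PTZ, Bluetooth/USB for audio); none of them is an actual library API.

```python
# Record of what the (simulated) hardware received, for inspection.
applied = []

def set_light(dim, cct):
    # Placeholder for a DMX/0-10V light driver
    applied.append(("light", dim, cct))

def set_audio(out, mic):
    # Placeholder for a speaker/microphone configuration driver
    applied.append(("audio", out, mic))

def set_ptz(pan, tilt, zoom):
    # Placeholder for a VISCA/RS485 camera-position driver
    applied.append(("ptz", pan, tilt, zoom))

# Dispatch table: saved parameter group -> driver call
DRIVERS = {
    "light": lambda p: set_light(p["dim"], p["cct"]),
    "audio": lambda p: set_audio(p["out"], p["mic"]),
    "ptz":   lambda p: set_ptz(p["pan"], p["tilt"], p["zoom"]),
}

def apply_settings(saved: dict) -> None:
    """Walk the scene's saved settings and configure each device in turn."""
    for device, params in saved.items():
        DRIVERS[device](params)

apply_settings({
    "light": {"dim": 80, "cct": 4500},
    "audio": {"out": "in-ear", "mic": "pinned-on"},
    "ptz":   {"pan": 12, "tilt": -3, "zoom": 2.5},
})
```

A dispatch table keeps the recall path uniform: adding a new studio parameter (e.g. a teleprompter) would only mean registering one more driver, not changing the recall logic.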
  • the (external) sources of content or for generating content comprise, among others:
    - HDMI network input signal;
    - analogue signal;
    - wireless camera;
    - static camera;
    - micro/macro images;
    - high-speed camera;
    - AR (augmented reality) or VR (virtual reality);
    - AI presentation or experience.
  • a secondary picture is uploaded in the main picture of a scene, for example as a background picture .
  • a (PTZ) camera is directed at a background screen.
  • setting the PTZ camera in a scene is combined with setting the background screen.
  • One or several presentations can be provided, created, saved and/or uploaded in the system, each comprising one or several scenes, whereby the scenes are called up randomly in a live communication session.
  • Each created and saved scene can, for example, be shown as a tile in a gallery or as a digitally selectable icon or button.
  • the tile/icon/button shows either a text, a photograph or a camera image of the relevant scene.
  • a created and saved scene in a presentation can be moved, switched, adapted or removed.
  • as many scenes as necessary can be created in a presentation.
  • the scenes from a presentation can be selected and uploaded at random, as well as created and saved at random.
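The gallery behaviour described above, tiles that can be moved, adapted or removed, and called up in any order during a live session, can be sketched as a minimal container class. This is an illustrative Python sketch; the class and method names are assumptions, not terms from the patent.

```python
class Presentation:
    """Ordered gallery of scene tiles; any tile can be called up at random."""

    def __init__(self):
        self.scenes = []            # ordered tiles in the gallery

    def add(self, name):
        self.scenes.append(name)

    def move(self, name, new_index):
        # corresponds to dragging a tile to a new position in the gallery
        self.scenes.remove(name)
        self.scenes.insert(new_index, name)

    def remove(self, name):
        self.scenes.remove(name)

    def call_up(self, name):
        # any tile may be selected at any moment, independent of its order
        if name not in self.scenes:
            raise KeyError(name)
        return f"live: {name}"

p = Presentation()
for n in ["Scene 1", "Scene 2", "Scene 3"]:
    p.add(n)
p.move("Scene 3", 0)   # reorder the gallery without affecting recall
```

Note that `call_up` ignores the tile order entirely, which is the point the text makes: gallery order is for the moderator's convenience, while recall is random-access.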
  • a scene is called up during a live session via one user action or function activation, in a digital or analogue way, for example via a mouse click or press of a button on a keypad or foot pedal, via a touch on a touchscreen, or any other means with a "one touch" function.
  • a physical keypad or touchscreen can be used, such as, for example, an "Elgato" stream deck or similar.
  • the system connects via existing online meeting tools, such as, for example, Skype, Teams and Zoom, and a live video call is established.
  • the hardware sources linked or connected to the system, i.e. physically wired or wireless, are recognised as such and are selectable.
  • the system comprises at least the following linked equipment: a PTZ camera, a mouse or keypad or other digital or analogue selection device, lighting and audio equipment.
  • the system further comprises a background screen on which the PTZ camera is directed.
  • the system is further provided with a laptop, internal memory or USB device.
  • both the content and the studio settings of a scene are called up and shown with one user action in a live communication session.
  • One or several sources can generate personalised content live on demand of, and/or customised to, a user .
  • the system is processed in and executable from a dedicated processor hardware module.
  • the sources and other components are linked to the dedicated processor hardware module in a wired or wireless way, or via another protocol.
  • the system comprises a communication interface, and at least one processor configured to be connected to the communication interface, and further a memory that is connected to the at least one processor, in which the memory saves instructions which instruct the at least one processor to upload and play a certain scene .
  • the system comprises a program that is uploaded in the memory, and is configured for creating, adapting, importing and exporting scenes .
  • the system is connected to a communication system that consists of a communication network for jointly connecting a number of users and a communication server for realising a live communication session between at least a first user and a second user of said number of users, whereby the first user is the moderator of the system and the second user a participant .
  • the system is executed as a console provided with multiple inputs and at least one output interface, a controlling computer provided with the software- or hardware-based codec, an entry system (keyboard or similar), a visual system and a selection system (keyboard, foot pedal, digital button, touchscreen, etc.).
  • omnidirectional microphones, screens, lighting, loudspeakers, cameras and other configurable peripheral equipment.
  • the system comprises one or several control units, such as a RAM (random access memory), a ROM (read-only memory), a CPU (central processing unit) or a GPU, for executing control functions via programs and data saved in a storage unit and connected via a control bus.
  • the user action can be executed by a stream deck, keyboard or computer mouse, foot pedal or an active floor sensor, or any other selection means.
  • the system comprises a scene editor application/software for creating, adapting, saving, uploading, importing and exporting one or several scenes.
  • the scene editor is integrated in the dedicated processor module.
  • the scene editor is applied, used, operated or controlled as application software from, for example, a computer, laptop, tablet or smartphone.
  • the scene editor comprises an informative and/or a function field, for showing information and possible functions, and an editor field for creating, adapting, importing and exporting the one or several scenes .
  • the one or several scenes in the editor field are shown as tiles in a gallery.
  • the scene editor software is able to run on a server, cloud, laptop, tablet, smartphone, dedicated system or the like .
  • the system comprises a console desk with an integrated scene editor in a configuration with a stream deck, a microphone and a pan-tilt-zoom camera .
  • the system according to the invention can be applied in a sales, educational or medical concept for example .
  • the system according to the invention relates to creating, setting and saving a scene and its desired parameters such as picture, position, sound, light etc. , under one digital button, said button uploading the scene and its parameters during a live presentation or online meeting, configuring the recording and display equipment according to the preset parameters and generating and/or showing content .
  • This provides the advantage that, according to the invention, a scene can be composed via the system very simply, including all required studio settings of the equipment, which can thereafter be called up by the moderator with one press of a button or one click of the mouse, and this in an existing stream or video call application.
  • Another advantage of the system according to the invention is the scene editor where a scene can be edited live on the spot by a moderator .
  • Another advantage of the system according to the invention is calling up a scene in an online meeting by means of one press of a button when wanted by the moderator of the online meeting, whereby the presets of certain parameters are also taken into account, such as, for example, a scene whereby the active images of a camera are shown, including camera positions, the appropriate audio in and out, light intensity and light colour.
  • the system comprises an Elgato stream deck whereby the keypad comprises visually active buttons of the scene.
  • the system comprises a blue or green screening option whereby the moderator is also shown on the presentation screen.
  • the pictures of a drone can be integrated in a scene, whereby the drone parameters for flying and recording are preset in the scene .
  • the user can point in the picture of the scene during a live interaction, whereby a certain item can be interactively marked or highlighted in a scene, such as augmented reality.
  • autoframing is applied in the scene via an AI function whereby the camera will automatically focus on and track an object. Consequently, the camera obtains an interactive function.
  • a scene comprises, in addition to content, the combination of parameters and settings that are preset and linked in the scene and can be called up with "one press of a button", a mouse click, a foot pedal, a pressure-sensitive pressure mat or the like.
  • scene-specific parameters and hardware settings are linked to one signal or "one press of a button" for calling up the scene in a session whereby both the content and the parameters and settings are uploaded, which further means that the preset hardware is configured according to the uploaded parameters and settings linked to the scene and will generate dynamic content in this specific configuration .
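The "one signal" behaviour described above can be sketched as a single handler that uploads the scene's static content, configures the preset hardware, and only then lets that hardware generate dynamic content in its new configuration. All hooks below are hypothetical placeholders, not an actual device or streaming API.

```python
# Ordered record of what happens when the one signal arrives.
log = []

def upload_content(content):
    # Placeholder: push static content (picture, document, webpage) live
    log.append(f"uploaded {content}")

def configure_hardware(settings):
    # Placeholder: apply the scene's preset light/audio/PTZ parameters
    log.append(f"configured {sorted(settings)}")

def generate_dynamic():
    # Placeholder: hardware now produces dynamic content as configured
    log.append("camera live in preset configuration")

def on_button_press(scene):
    """Handler bound to the scene's button, tile, foot pedal or mouse click:
    one signal uploads content, configures hardware, then goes live."""
    upload_content(scene["content"])
    configure_hardware(scene["settings"])
    generate_dynamic()

on_button_press({
    "content": "background picture",
    "settings": {"light": 80, "ptz": (10, -5, 2.0)},
})
```

The ordering matters: configuration happens before dynamic content is generated, which is why no setting needs to be touched after the scene is called up.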
  • the corresponding hardware is configured .
  • Parameters and settings that can be linked to a scene and can be called up together with the scene are understood to mean, among others, the variables from the aforementioned non-exhaustive list of studio settings .
  • a scene comprises a main picture comprising, for example, content and/or hardware sources, parameters and settings from the aforementioned non-exhaustive list .
  • a secondary scene can be uploaded in a main picture, for example as background picture, comprising content and/or hardware sources, parameters and settings from the aforementioned non-exhaustive list .
  • the system of the invention comprises a so-called scene editor software or interface for creating scenes .
  • An example is provided below of the creation of a number of scenes via the scene editor of the system.
  • the scene editor application is linked to a hardware device performing said functions .
  • the scene editor application is also able to run on a server, cloud, laptop, tablet, smartphone or the like, for example .
  • For the sake of simplicity, these are hereinafter referred to as "computer".
  • the scene editor is split in two panels, whereby the one panel comprises an info field and function field, for informative statements and functions to be selected, and the other panel comprises an editor field. Subsequently, a first scene (Scene 1) can be created. By clicking on "new slide” in the editor field, a new scene is created.
  • the system of the invention is connected to one or several hardware components such as cameras, sound installations, lighting installations and the like .
  • the connected hardware components are identified or recognised by the system and are selectable as sources in the scene editor.
  • CAM2 is selected as an example and is added to Scene 1 for editing.
  • the real-time camera image of CAM2 appears in the editor field and now the camera image can be adapted in the "pan" and "tilt" position via the arrows to the left, right, top and bottom, and in the "zoom" position via the + and - symbols shown in the camera image.
  • CAM2 is directed on a screen that is now visible in the camera image. The screen is also linked to the system.
  • the "background picture" of said background screen can be set by clicking the "set background picture" function in the function field of the scene editor. Said setting determines which content must be shown on the screen in the scene, i.e. when said content is called up in a live session.
  • the background screen is activated as a background scene in the first scene (Scene 1).
  • the studio settings of Scene 1 can be edited, such as the settings for lighting, colour temperature, microphone configuration, speakers or headset configuration, title, memo or reminder text, and this in a simple and user-friendly manner.
  • the audio settings are preset.
  • the tile can either show a text or a photograph or camera image of the camera in the relevant scene.
  • in Scene 1 this could be a recording of CAM2.
  • the Scene 1 tile comprises symbols for moving, switching, adapting or removing the scene.
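The scene concept assembled above, a content source plus preset camera position, background picture, studio settings, title and memo, can be sketched as a simple data structure. All names and default values below are illustrative assumptions, not the implementation of the application:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CameraPreset:
    """Saved pan-tilt-zoom position of an (e-)PTZ camera."""
    pan: int = 0
    tilt: int = 0
    zoom: int = 0

@dataclass
class StudioSettings:
    """Per-scene studio parameters that are preset and saved with the scene."""
    light_level: int = 100            # lighting intensity, percent (assumed unit)
    colour_temperature_k: int = 5600  # colour temperature in kelvin (assumed unit)
    microphone: str = "headset"
    speakers: str = "headset"

@dataclass
class Scene:
    """One callable preset: content source + studio settings + title + memo."""
    title: str
    source: str                                # e.g. "CAM2", "Media", "Document", "Laptop"
    studio: StudioSettings = field(default_factory=StudioSettings)
    camera: Optional[CameraPreset] = None
    background_picture: Optional[str] = None   # content shown on the background screen
    memo: str = ""                             # reminder text, visible only to the moderator

# Scene 1 from the example: CAM2 directed at the background screen
scene1 = Scene(
    title="Scene 1",
    source="CAM2",
    camera=CameraPreset(pan=10, tilt=-5, zoom=200),
    background_picture="intro.png",
    memo="Welcome the participants",
)
```

Recalling such a scene later means applying every saved field at once, which is what makes the "one press of a button" behaviour possible.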
  • a (connected) source can be selected in the editor field, in this example: three PTZ cameras, "CAM1", "CAM2" and "CAM3", "Media", "Document" or "Laptop".
  • the info field shows the connection of the source to the system, in this case it relates to an HDMI input.
  • a website is selected and added to Scene 2.
  • the studio settings for the scene can be preset, analogously to the first scene.
  • the light and audio settings are adapted.
  • the scene is given a title and the memo bar is entered with a reminder text that the moderator can use when the scene goes live.
  • a third slide or scene can be created. In this way it is possible to build the presentation further comprising different scenes and content. Every scene is a slide in the scene editor.
  • the different scenes can be individually called up during a live session via one press of the scene tile, for example via a mouse click.
  • an Elgato Stream Deck or physical keypad is used. Every scene is hereby shown on a separate button with a name or photograph or other picture.
  • the physical digital buttons also show the settings of the scenes. In this case it is possible to see that button 1 is Scene 1 and is active (light blue) whereas button 2 is Scene 2 and still needs to be configured (orange).
  • the control panel/keypad also provides buttons to scroll through the different scenes, as well as shortcuts for audio on/mute and buttons for navigation to the "Previous" and "Next" scene.
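The button behaviour described above, one button per scene with a colour indicating its state (light blue for active, orange for still to be configured), can be sketched as a small mapping function. This is a generic illustration; the actual Elgato Stream Deck SDK is not shown and the idle colour is an assumption:

```python
def button_states(scenes, active_index):
    """Map a list of scenes to (label, colour) pairs for a physical keypad.

    scenes: list of dicts with "title" and "configured" keys (illustrative).
    active_index: index of the scene currently live, or None.
    """
    states = []
    for i, scene in enumerate(scenes):
        if not scene["configured"]:
            colour = "orange"        # scene still needs to be configured
        elif i == active_index:
            colour = "light blue"    # scene is currently active/live
        else:
            colour = "grey"          # configured but idle (assumed default)
        states.append((scene["title"], colour))
    return states

scenes = [
    {"title": "Scene 1", "configured": True},
    {"title": "Scene 2", "configured": False},
]
print(button_states(scenes, active_index=0))
# → [('Scene 1', 'light blue'), ('Scene 2', 'orange')]
```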
  • a "Media" slide is created as the third scene (Scene 3) in the example.
  • the media can be saved locally on (the internal memory of) the system, but can also be called up via USB, SD card or hard disk via a list from which subsequently the correct media (in this case a photograph) can be selected and said scene can be further configured.
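Building the list from which the correct media is then selected, whether from internal memory, USB, SD card or hard disk, amounts to filtering a directory listing by media type. A minimal illustrative sketch (the extension set is an assumption):

```python
import os

# Assumed set of selectable media types; a real system would support more.
MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".mp4", ".mov", ".mp3", ".wav"}

def media_files(filenames):
    """Return the sorted subset of filenames that are selectable media."""
    return sorted(
        name for name in filenames
        if os.path.splitext(name)[1].lower() in MEDIA_EXTENSIONS
    )

listing = ["photo.JPG", "notes.txt", "demo.mp4", "quote.docx"]
print(media_files(listing))  # → ['demo.mp4', 'photo.JPG']
```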
  • the editor field now shows the three configured slides as tiles in a gallery, and the system requests to create a fourth slide or scene if necessary.
  • a fourth scene (Scene 4) is created and "CAM3" is selected as source.
  • Camera 3 comes into view and the position of the picture is configured.
  • the system communicates in this example via the VISCA protocol over the network of the camera, whereby the zoom position is set on the camera itself and said setting parameters are linked to this scene. As soon as the desired parameters are set, said parameters are confirmed in the system. No background picture is required in this scene.
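For a camera controlled via the VISCA protocol over the network, as in this example, a zoom position saved in a scene can be replayed by sending the standard VISCA "Zoom Direct" command. The byte layout below follows the commonly documented VISCA command set; the surrounding framing (serial vs. VISCA-over-IP) and the valid zoom range should be checked against the camera's manual:

```python
def visca_zoom_direct(position: int, address: int = 1) -> bytes:
    """Build a VISCA Zoom Direct command for a camera at the given address.

    position: zoom position 0x0000..0xFFFF, encoded as four nibbles 0p 0q 0r 0s
    (the usable range varies per camera model).
    """
    if not 0 <= position <= 0xFFFF:
        raise ValueError("zoom position out of range")
    p = (position >> 12) & 0x0F
    q = (position >> 8) & 0x0F
    r = (position >> 4) & 0x0F
    s = position & 0x0F
    # Command layout: 8x 01 04 47 0p 0q 0r 0s FF, where x = 8 + camera address
    return bytes([0x80 + address, 0x01, 0x04, 0x47, p, q, r, s, 0xFF])

# Replaying the zoom preset saved in the scene:
cmd = visca_zoom_direct(0x1234)
print(cmd.hex())  # → 8101044701020304ff
```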
  • the studio settings for this scene, such as light and audio settings, are set and the scene receives a title.
  • the settings are subsequently saved in Scene 4.
  • a memo note can subsequently be entered that one wants to see when calling up the scene.
  • the memo is a mnemonic device that is only visible to the operator/speaker/presenter/moderator.
  • the settings are subsequently saved by clicking "save".
  • the follower who views the video call, stream or recording only gets to see the pictures as soon as the operator calls up the scenes and consequently transmits the pictures.
  • This can be very varied.
  • a speaker can respond very quickly with scenes, for example in a scenario of different scenes prepared by the speaker in response to questions of followers, such that the video call has highly dynamic, interactive content and detailed information can be shared much more clearly. In the current known configurations this is difficult, complex or not possible, which sometimes causes video communication to be inefficient and unnatural.
  • a fifth scene (Scene 5) is created.
  • the selected source is "Laptop" (external HDMI), whereby a wireless camera (on batteries) with built-in lighting is connected to the system via Wifi.
  • the lamp in the Wifi camera can be switched on or off remotely and by operating the dimmer the light intensity can be set.
  • the camera is filming a pump on an electrical turntable.
  • as soon as the camera source is selected and confirmed, it comes into view.
  • the lighting and audio settings are set and the title and memo are entered. The settings are saved.
  • a sixth scene (Scene 6) is created with "Document" as the selected source.
  • a sales quote in MS Word is uploaded, for example.
  • the parameters for the microphone are set and a memo and title are assigned to the scene.
  • the presentation now comprises six scenes which can be called up randomly during a stream, video call or recording.
  • a presentation is a grouping of scenes.
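A presentation as a grouping of scenes, any of which can be recalled in any order with one action, can be sketched as follows. The names are illustrative and the actual configuration of light, audio and content hardware is abstracted into callback functions:

```python
class Presentation:
    """A grouping of scenes that can be recalled in any order, one action each."""

    def __init__(self, scenes, apply_studio, show_content):
        self.scenes = {scene["title"]: scene for scene in scenes}
        self.apply_studio = apply_studio  # configures light/audio hardware
        self.show_content = show_content  # uploads/shows the scene's content
        self.active = None

    def recall(self, title):
        """One user action: configure the studio settings AND show the content."""
        scene = self.scenes[title]
        self.apply_studio(scene["studio"])
        self.show_content(scene["source"])
        self.active = title

log = []
pres = Presentation(
    scenes=[
        {"title": "Scene 1", "source": "CAM2", "studio": {"light": 80}},
        {"title": "Scene 3", "source": "Media", "studio": {"light": 40}},
    ],
    apply_studio=lambda s: log.append(("studio", s)),
    show_content=lambda src: log.append(("content", src)),
)
pres.recall("Scene 3")  # scenes can be called up in any order
```

The point of the sketch is that settings and content are applied together in one call, mirroring the "one press of a button" behaviour described in the text.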
  • the settings of the system can be modified.
  • a session, video call or stream can be recorded live.
  • the system can be connected via existing e-meeting tools, such as for example Skype, Teams and Zoom, and a live video call can be set up.
  • a video call with Zoom is launched by way of an example.
  • the people participating in the video call are visible on the screen. Consequently, the moderator is able to communicate or respond interactively by adapting or switching scenes live with other scenes depending on the questions. Meanwhile, the moderator sees the associated memo bar of the actively selected scene.
  • the zoom and pan-tilt position of the camera can be preset and saved in a scene.
  • the moderator can still walk around dynamically in the picture of the camera and indicate something on the screen which at the same time shows a live active background picture.
  • the moderator can have other information shown on the background screen via a laptop.
  • HDMI data is transmitted to the external background screen which the operator, user or moderator can put to good use to explain something in detail. Consequently, an integration/combination is produced of the dynamic content which is shown on the screen and the moderator himself/herself, who actively takes part in the video call and is actively integrated in a scene as a person. The moderator is then part of this scene and the content (extra information) can then be followed on the inserted (picture in picture) scene.
  • This setup uses a monitor/TV screen but this can also be an interactive monitor or a smartboard or a projector which is integrated in the system to be able to make it even more dynamic and interactive.
  • a pan-tilt-zoom position can be set for an e-PTZ webcam with the remote control and saved in a button. Selecting and calling up different pictures via a switcher or video mixer is also known.
  • the principle of "Picture in Picture" and multi-camera for sharing a screen is known. It is not known to preset the studio settings per scene and to link them to the scene, whereby both the content and the studio settings can be called up with "one press of a button" in a live session.
  • the advantage of the system of the present invention is that both the content and the settings are saved in a preset in "one button" and can be called up on demand during a video call, a live stream or a video recording by one user action or "one press of a button".
  • the system is more efficient and can be operated by one person. Accordingly, the cost is low and the user-friendliness very high.
  • the system is compact and less complex and can be linked to a standard computer or be provided as a dedicated system.
  • the system of the invention comprises a collection of scenes and their preset studio settings which are called up one after the other and shown in real-time.
  • the invention further relates to an encoding device or computer provided to execute the system as described herein, whereby the encoding device consists of a processor and memory, a network module and a live communication module which are saved in the memory and executed by the computer processor.
  • the invention also relates to one or several computer-readable media containing instructions which, in response to the execution of the instructions by a processor of a computer device, prompt the computer device to execute the system as described herein.
  • figure 1 shows a first example of a studio setup or configuration of a system according to the invention
  • figure 2 shows a second example of a sales concept configuration of a system according to the invention
  • figure 3 shows an example of a configuration of a medical concept of a system according to the invention.
  • Figure 1 shows a first example of a studio setup for a system according to the invention. Other setups and configurations are also possible.
  • the system of figure 1 is executed as a dedicated system in the form of a desk console 1, whereby all required components, such as video and audio connections, etc. are compactly grouped in one single installation: a remotely operable video camera or a PTZ camera or webcam 3, a keypad (stream deck) 4 for calling up a scene, a screen 5, mouse 6 and keyboard 7 and an external laptop 8.
  • the system is linked to a second PTZ camera 2, a headset 9' and speaker 9" and an external screen 10 for background pictures.
  • the operator 11 walks around freely in the scene and can operate the system (create scenes, adapt and save settings, call up scenes, show live images on the background screen, etc.) via the keyboard and the mouse, the keypad and the laptop.
  • the system can just as well be built from the individual components as indicated above and any extra components, without the dedicated desk console.
  • the system runs on the laptop and is linked to a dedicated processor hardware module integrated in the desk console 1.
  • the hardware of a system consists of a core computer or server on which the software or the operating system runs.
  • the system comprises a dedicated processor hardware module to which the cameras and other components are linked.
  • the hardware is processed in a system desk.
  • monitors can be linked to this, both as input and output.
  • the configuration can be expanded with extra hardware means such as for example a push button, a foot pedal, a stream deck, a presenter button, to switch scenes.
  • Various physical and/or wireless cameras can be linked, from a static position or with a pan-tilt-zoom function, but also a microscope for example.
  • the system can be executed as completely mobile or be static.
  • the system can also be executed as a desktop version.
  • a laptop and/or USB stick or other input can be linked to the desk, via HDMI, wireless, a network, or another way. This also applies to an omnidirectional microphone and speakers or a headset with voice and listening function.
  • the electrical interfaces, the dedicated processor or controlling computer, and the software or hardware which allows encoding or decoding or zipping or unzipping data are located in the console.
  • Omnidirectional microphones are linked to the console in the setup, as well as a TV-monitor with loudspeakers and a screen.
  • Figure 2 shows a similar example of a sales concept configuration of a system according to the invention.
  • the studio setup or configuration comprises an exhibited car 13. During a live session the moderator/sales agent is able to hold a sales pitch and present the car in all its details as if the buyer were present in person.
  • the studio setup comprises the necessary lighting 12 to bring the car 13 properly into view.
  • Various wireless cameras can be set up in the car to bring the components and details of the car into view. Said cameras are then also integrated in a scene.
  • a scene can be expanded with other types of data. Via the laptop 8 on the desk 1 a sales quote can be created live for the customer and on the background screen 10 it is possible to scroll through the optional extras of the car live, for example.
  • the car can be replaced by, for example, an electronic microscope and a study object.
  • the teacher/operator and the students take part in the live session in a two-way communication.
  • Figure 3 shows an example of a configuration of a medical concept of a system according to the invention.
  • Special peripheral equipment such as for example digital cameras, microscopes, video endoscopes, medical ultrasound imaging devices can be added to the system.
  • This configuration uses a foot pedal 14 to dynamically switch between different scenes. This allows for hands-free dynamic switching of the scenes.


Abstract

The invention relates to a system (1) for improving an online meeting, video conference, live streaming or multi-camera broadcasting via a live communication network from a studio setup comprising one or several cameras (2,3), one or several screens (5,10), lighting (12), audio in/out (9', 9") and a control (4,6), whereby a moderator (11) is brought into view via at least one camera (3), whereby the system comprises one or several sources of content or for generating content which are linked to one or several scenes, whereby the one or several scenes are randomly called up and uploaded in a live communication session, such that the relevant content of the linked sources is presented or generated live, whereby in addition to the sources the studio settings for lighting and audio in/out and possible other adjustable studio parameters are also preset and saved in each of the one or several scenes, such as the light and/or audio settings, scene title and memo bar.

Description

System for improving an online meeting.
The present invention relates to a compact and user-friendly system for improving an online meeting, video conference or live presentation.
Online meetings, video conferences, live presentations, streamings, broadcastings or other similar communications are hereinafter generally referred to with the term video calls.
Exchange of information via sound and pictures is made possible in a video call via a live video and audio connection.
Video and audio are to be interpreted as any camera and sound technology. Any connection can be used for this, for example the internet.
Online meetings and video conferences, for example, typically relate to the exchange of information between two or more locations and are realised by interactive technologies which ensure at least simultaneous two-way video and audio transmissions. Live presentations, streamings and broadcastings, for example, are usually focused on one-way transmissions.
A disadvantage of current video calls is the static approach. Participants communicate via a webcamera on a laptop or smartphone . Sharing other information, such as, for example, an Excel sheet or PowerPoint presentation, is done by calling up said information on the computer screen and subsequently sharing the screen in the video call .
This is a cumbersome process that requires many actions and is limited to information available or saved on the computer . Further, it is difficult to highlight or emphasise certain detailed information.
Usually, one static position of a camera, for example a webcam, combined with sharing a screen is not sufficient to express a situation. This is why in such cases the preference is a physical meeting to be able to discuss multiple pieces of information more easily. This has many disadvantages, such as, among others, being time-consuming and not ecological.
Editing and staging certain information in advance is known. A presentation is then made, for example, whereby the different sources of information are edited into a multi-information flow.
A disadvantage of such systems is that the presentation or information flow is determined beforehand.
Another known system for live streaming of what happens on a desktop is OBS Studio. Via the Open Broadcaster Software (OBS), live or recorded videos can be made and streamed to the most popular platforms, such as YouTube, Facebook Live, Twitch, or other streaming services. OBS Studio allows direct recordings to be made with a webcam and microphone. In addition, pictures can be added such as recorded video, as well as existing visual material, data from games or photographs. During a live stream, several pictures can be shown simultaneously, or it is possible to switch to other pictures.
OBS uses scenes and sources. A source is information that can be shown in a live video, such as for example an image but also a recording of a camera. Several sources or information can be added and shown in a scene. To show other information, a new scene is created which uses other sources. In this way, several scenes are built between which it is possible to switch with a mouse click by clicking an icon.
A disadvantage of a system such as OBS is that the content is shown in a scene as such, in the form of, for example, an uploaded text or photograph, or a video shot with a certain camera.
A video call system is needed which radiates the professionality and quality of a multi-camera video production environment such as a direction room, whereby a team of technicians and specialists is needed, controlled by a director, to show a specific picture in a shot and to broadcast live under the perfect sound and light circumstances. Video is a complex interaction of shooting, direction, script, light and editing. This is largely a manual process. A video production environment consists of many types of video production devices, such as video cameras, microphones, video recorders, video switching devices, audio mixers, digital video effect devices, teleprompters, video graphic overlay devices, etc.
In the conventional production environment, most video production devices are manually operated by a production team of artistic and technical personnel who cooperate under the direction of a director. A standard production team may, for example, consist of camera people, a video technician who operates the camera operating units for a set of cameras, a teleprompter operator, a lighting director who operates the studio lights, a technical director who operates the video switcher, an audio technician who operates an audio mixer, operator(s) who operate(s) a series of recorders and playback units, etc.
Consequently, there are still many laborious steps between the first recording of media content and the eventual production.
A system whereby a moderator of a video call can respond to the specific questions of the participant(s) and call up and show the requested information in real-time is also needed. Such interaction occurs in a video production environment of, for example, a live talk show where the spectators can ask questions and the relevant information is brought into view by the director. The purpose of the present invention is to provide a solution to at least one of the aforementioned and other disadvantages.
To this end, the invention relates to a system for improving an online meeting, video conference, live streaming or multi-camera broadcasting via a live communication network, the system comprising one or several scenes which are saved by the user, the system further comprising a studio setup with one or several cameras, one or several screens, lighting, audio in/out and a control, whereby a moderator is brought into view via at least one camera, whereby the system comprises one or several sources of content or for generating content which are linked to the aforementioned one or several scenes, whereby the one or several scenes are randomly called up and uploaded in a live communication session, such that the relevant content of the linked sources is presented or generated live, whereby in addition to the sources the studio settings for lighting and audio in/out and all required adjustable studio parameters, including a scene title and memo bar with reminder text, are also preset and saved in each of the one or several scenes.
The system according to the invention eliminates the need for an extensive direction team and allows a moderator to simply take over the direction and set up a professional configuration with a minimum of technical knowledge.
A scene is a storage of information or a pointer to information coming from one or several sources of information. The information saved in a scene can be called up and shown at all times. In the case of a pointer to information, the actual information is generated live and shown when calling up the relevant scene.
Specific to the present invention is that in a scene, in addition to the information (or the pointer to information), all settings are also saved for correctly calling up and showing the information of a relevant scene. This concerns at least the settings for light and sound, scene title and memo bar with reminder text.
As such, a scene is a storage of information or a pointer to information coming from one or several sources of information and all settings for correctly calling up and showing the information of the relevant scene.
This is important in a live communication whereby a scene is preset in advance and is called up and shown live. No further settings are required before or after calling up the relevant scene.
Special about the present invention is that with one press of a button a correctly set live scene can be called up and shown. Indeed, in a live communication session there is no time for settings and the information must be shown quickly, efficiently and professionally, correct and complete.
A source of information is information that can be shown in a live video, such as for example an image but also a recording of a camera. Within a scene, several sources or information can be added and shown and the settings are preset and saved for a correct showing of the scene. To show other information, a new scene is created that uses other sources and other settings. Several scenes are built in this way between which it is possible to switch with a mouse click by clicking an icon.
A scene can consist of all possible data and is a predefined input. A scene can, for example, be a picture-in-picture, a camera point of view (position: pan-tilt-zoom coordinates), a presentation or a fixed camera or microscope point of view, a link to a webpage, etc. Every scene is a unique snapshot that is linked to a button and every scene consists of different parameters that can be called up as soon as the button is pressed. Further, specific sound, light and memo bar parameters or information can be linked to every visual scene connected to a button.
In a preferred embodiment of a system according to the invention, a scene is called up with one user action in the live communication session, such that the content is uploaded and shown and the linked preset studio settings are also uploaded and configured as such.
This can be done, for example, via one mouse click or via a touchscreen of a smartphone or via an analogue or digital keypad or foot pedal, or other such known systems. Via a studio control, one figurative "press of a button" can call up a scene in a live session.
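Binding several kinds of one-action triggers (a mouse click on a tile, a keypad button, a foot pedal) to the same scene recall can be sketched as a simple dispatch table; all device and event names are hypothetical:

```python
def make_dispatcher(recall):
    """Map input events from different control devices to one scene recall."""
    bindings = {}

    def bind(event, scene_title):
        bindings[event] = scene_title    # e.g. ("keypad", 3) -> "Scene 3"

    def on_event(event):
        if event in bindings:
            recall(bindings[event])      # one user action calls up the scene
            return True
        return False                     # unbound events are ignored

    return bind, on_event

recalled = []
bind, on_event = make_dispatcher(recalled.append)
bind(("mouse", "tile-1"), "Scene 1")
bind(("pedal", 1), "Scene 1")            # the foot pedal triggers the same scene
bind(("keypad", 2), "Scene 2")
on_event(("pedal", 1))
on_event(("keypad", 2))
```

Several triggers can point at the same scene, so the moderator can use whichever control is closest at hand.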
The purpose of the invention is transforming the static approach of current video calls into a dynamic and instantly responding system, and this with a plurality of inputs, including existing content such as websites, photographs, video and media, but also dynamic content generated by video equipment the settings of which, such as camera positions, light settings, sound level and many parameters, can be configured and saved, and all this integrated under "one button", whereby the multiple content can be called up and shown during the video call as one scene with "one press of the button", either by uploading existing content, or by generating dynamic content with the video equipment that is configured according to the set parameters.
The system of the invention is compact, centrally controlled and user-friendly. Cameras, microphones, speakers, headphones and screens are simply connected via wires or wirelessly. The system can be set up by every user.
Existing systems are too complex to create integrated scenes or are built for multi-camera direction with different operators. This implies a moderator must stick to a script agreed in advance and is thus not directing himself/herself.
Within one video call the exchanged information is very diverse and comprises several types of information.
It is not straightforward to call up such a diversity of information quickly, efficiently and professionally in a video call, let alone present it professionally as a whole.
The purpose of the present invention is to provide a solution for this.
KR 2021 0020293 relates to a service system for providing a rental studio for personal broadcasting, and more specifically for providing a rental studio for a user who has difficulty in configuring equipment for personal broadcasting and supports the user to transmit, in a streaming manner, broadcast content generated through equipment configured in the rental studio. The invention provides, in a rental manner, a user with a studio in which the user can produce and transmit broadcast content for personal broadcasting without a need to purchase equipment for personal broadcasting, thereby reducing the cost burden of a user. The invention supports the setting of each kind of equipment configured in the studio to be automatically set according to a user's desired broadcasting type such that the user can easily create high-quality broadcast content and subsequently transmit the content through a broadcasting service server for personal broadcasting. The system has an effect of enhancing the convenience in personal broadcasting for a user who broadcasts broadcast content.
This concerns the mere automatic setting of equipment. The setting is not scene-specific. If the equipment is used in several scenes but for example with a different setting, the setting needs to be adapted for the relevant scene. The automatic setting is not saved in a scene either.
US 2020/177965 describes methods, devices and systems to provide network services, including to manage and/or produce a media stream, including by a party who is in a location that is remote compared to the live media streaming device, and including to provide the network services to users of a local area network ("LAN"), including users of the LAN who may not have an internet connection, as well as to users who connect to the network service over a wide area network, such as the "Internet".
KR 101 790 280 relates to a real-time 3D virtual studio broadcast editing and transmission device and method. To be able to edit and transmit 3D broadcasts in virtual studios in real time, a lot of data processing is required. This relates to the real-time setting of a broadcast recording whereby the amount of calculations can be reduced.
None of the above known systems provides, nor hints at, the system of the present invention for presetting and saving the complete studio settings in and for the one or several scenes. Furthermore, the system of the present invention can call up and show a scene, and consequently also the relevant preset studio settings, with one press of a button, without having to update a single setting after calling up the scene in real-time. Scenes can be called up one after the other and shown in the same way. The advantage of the system according to the invention is the possible integration in one scene of content coming from data sources containing statically saved content on the one hand and data sources for the live generating of dynamic content on the other hand, whereby said latter data sources are configured and controlled live by setting and saving the recording parameters for video, sound and lighting, whereby the scene can be called up live at all times with one user action, such as for example a mouse click.
The term content relates to both static and dynamic information. Static information is existing content, such as for example a text, an image, a PowerPoint presentation or a recorded video. Dynamic information is content that is generated real-time or live, such as a camera image, lighting and sound.
Dynamic content is typically generated by configurable sources, such as a camera, microphones and lighting equipment.
Another advantage is that the scene relates to a staging such as in the direction room of a video production environment, whereby the scene is called up live in its entirety with one user action in a video call by the moderator/director, whereby the video production devices are configured according to the set recording parameters and in said configuration record or generate dynamic content and can subsequently broadcast it live. A direction room is a room from where something is controlled. In the world of television a live broadcast is carried out from the direction room. In the direction room there are people who take care of the subtitles, change the camera points of view such that the subject is always well framed, and set the lighting and the sound optimally.
The system according to the invention relates to carrying out a live broadcast whereby by calling up one scene, for example, camera points of view are adjusted, subtitles are provided, and the lighting and the sound are optimally set live.
In a specific embodiment of the system according to the invention, a scene is created and saved comprising a source, such as for example an (e-)PTZ camera, which is preset in position, tilt and/or zoom settings such that the camera is directed at an item that needs to be brought into view, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene-specific studio parameters for bringing into view the relevant item.
In another embodiment of the system according to the invention, a scene is created and saved comprising two sources, a first source, such as for example an (e-)PTZ camera, which is preset in position, tilt and/or zoom settings such that the camera is directed at an external monitor or screen on which a background picture is preset via a second source, such as for example an HDMI input, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene-specific studio parameters.
In a specific embodiment of a system according to the invention, a scene is created and saved comprising a source for generating dynamic content, whereby the source is preset according to choice, whereby the scene further comprises studio settings such aass light settings, audio settings, title and memo and possibly other scene specific studio parameters for bringing into view the content generated by the relevant source .
In another embodiment of a system according to the invention, a scene is created and saved comprising a source of static content, such as for example media or an image or document, whereby the source is preset according to choice, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene-specific studio parameters.
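The scene structure described in the embodiments above — one or more content sources plus the studio settings that are preset and saved with them — can be sketched as a small data model. This is a minimal illustration only; the class and field names are hypothetical and not the patent's actual format.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Hypothetical model of a "scene": one or more content sources plus the
# studio settings (light, audio, title, memo) saved together with it.
@dataclass
class Source:
    kind: str                                             # e.g. "ptz_camera", "hdmi", "media"
    preset: Dict[str, Any] = field(default_factory=dict)  # e.g. pan/tilt/zoom values

@dataclass
class Scene:
    title: str
    memo: str = ""                # reminder text, visible only to the moderator
    sources: List[Source] = field(default_factory=list)
    studio: Dict[str, Any] = field(default_factory=dict)  # light, audio, etc.

# Example: a PTZ camera aimed at a background screen fed over HDMI.
scene = Scene(
    title="Product close-up",
    memo="Mention warranty terms",
    sources=[
        Source("ptz_camera", {"pan": 12, "tilt": -3, "zoom": 40}),
        Source("hdmi", {"input": 1}),
    ],
    studio={"light_dim": 80, "cct_kelvin": 5600, "mic": "headset"},
)
print(scene.title, len(scene.sources))
```

Saving such a scene then amounts to persisting one object; calling it up means reading the same object back and configuring the hardware from it.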
The system of the invention further comprises the characteristics of the dependent claims .
The studio settings comprise, among others: settings for the lighting, colour temperature, microphone configuration, speakers or headset configuration, title and memo or reminder text.
The studio setup can be provided with a monitor or screen for presenting a background picture. The one or several sources can be video sources or any other data sources, such as for example a camera, HDMI input, hard disk or USB stick, and possibly other connected (wired or wireless) electronic equipment for generating video or any other data, linked either directly physically or wirelessly or via one or another protocol for linking data or pictures.
Preferably, said external sources are linked to the system via a dedicated processor hardware module provided with a memory and processor. The data sources can also be saved in the memory of the module .
Said module is provided with several multifunctional connection-slots or wireless connections . The module is also provided with a memory for the system and for saving the scenes . The module provides all functionalities of the system. The module can be linked, via a connection slot or wirelessly, to a stream deck or other operating module for selecting a random scene with one action .
When calling up a scene in the live communication session, the aforementioned studio settings as well as the content of the sources are uploaded, whereby the lighting and/or audio in/out are configured according to the preset parameters.
Specific studio settings can be preset for every scene .
Preferably, the studio settings are preset in the same phase as the preset or selection of the sources and their content . Preferably, the studio settings are saved in the same phase as the storage of the sources and their content .
Preferably, the sources, content and studio settings are saved together in one scene .
In a specific embodiment of the system, settings can be assigned and saved for several scenes.
Parameters and settings that can be linked to a scene and can be called up together with the scene are understood to mean, among others, the variables from the non-exhaustive list of studio settings below:
- description of a scene, such as a logo, text or ID;
- memo, reminder text or "cue";
- light setting: on/off, DIM value, colour temperature, CCT value (0-10 volt), via bluetooth, WiFi, DMX data signal, or future connection protocols;
- sound setting: speaker, in-ear, headphones, locally or on the person's body, via bluetooth, wire, USB and future connection protocols;
- microphone setting: table microphone, pinned-on microphone, microphone integrated in ears or headphones, wireless or wired, bluetooth, USB and future signals and protocols;
- Pan-Tilt-Zoom (PTZ) position of a camera and/or auto tracking/framing activation on the camera, via Visca protocol, RS485, network, wireless signal, USB or any future signal;
- on/off signal to physically activate or deactivate a hardware device;
- position, coordinates, start signal of a drone or movable vehicle.
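The light setting above mentions a CCT value driven over a 0-10 volt control signal. As an illustration only, the colour temperature saved in a scene could be translated to that control voltage by linear interpolation over the fixture's tunable range; the 2700 K to 6500 K bounds below are assumptions, not values from the text.

```python
def cct_to_control_volts(kelvin, lo_k=2700, hi_k=6500):
    """Map a colour temperature (K) to a 0-10 V control value by linear
    interpolation; the fixture's tunable range (lo_k..hi_k) is an assumption."""
    kelvin = max(lo_k, min(hi_k, kelvin))       # clamp to the fixture's range
    return 10.0 * (kelvin - lo_k) / (hi_k - lo_k)

print(cct_to_control_volts(2700))               # 0.0  (warm end)
print(cct_to_control_volts(6500))               # 10.0 (cool end)
print(round(cct_to_control_volts(4600), 2))     # 5.0  (midpoint of the range)
```

A scene would store the kelvin value; the conversion to the wire-level 0-10 V signal happens when the scene is called up.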
The (external) sources of content or for generating content comprise, among others :
- PTZ or ePTZ camera and (pan/tilt/zoom) settings, position settings with or without auto tracking and framing function;
- webcam;
- powerpoint (ppt) or other presentation form;
- photo;
- movie or video (media);
- file (pdf or other format);
- microphone;
- local website via network;
- a link with another computer;
- internet website;
- local network;
- HDMI, network input signal, analogue signal;
- wireless camera;
- static camera;
- micro/macro images, high-speed camera;
- AR (augmented reality) or VR (virtual reality);
- AI presentation or experience.
In a specific embodiment, a secondary picture is uploaded in the main picture of a scene, for example as a background picture. To this end a (PTZ) camera is directed at a background screen.
Preferably, setting the PTZ camera in a scene is combined with setting the background screen.
One or several presentations can be provided, created, saved and/or uploaded in the system, each comprising one or several scenes, whereby the scenes are called up randomly in a live communication session.
Each created and saved scene can, for example, be shown as a tile in a gallery or as a selectable icon or digital button.
Preferably, the tile/icon/button either shows a text or a photograph or camera image of the relevant scene .
A created and saved scene in a presentation can be moved, switched, adapted or removed.
As many scenes as necessary can be created in a presentation. The scenes from a presentation can be selected and uploaded at random as well as be created and saved at random.
In a preferred embodiment, a scene is called up during a live session via one user action or function activation, in a digital or analogue way, for example via a mouse click or press of a button of a keypad or foot pedal, via a touch on a touchscreen, or any other means with a "one touch" function. To this end, a physical keypad or touchscreen can be used, such as for example an "Elegato" stream deck or "Elegato" mobile smartphone app. Every scene is hereby shown on a separate button with an associated name or photograph or other picture.
Preferably, the system connects via existing online meeting tools, such as for example Skype, Teams and Zoom, and a live video call is established.
The hardware sources linked or connected to the system, i.e. physically wired or wireless, are recognised as such and are selectable.
In a preferred embodiment the system comprises at least the following linked equipment : a PTZ camera, a mouse or keypad or other digital or analogue selection device, lighting and audio equipment .
In a preferred embodiment, the system further comprises a background screen on which the PTZ camera is directed.
In a specific embodiment , the system is further provided with a laptop, internal memory or USB device .
In a special embodiment of a system according to the invention, both the content and the studio settings of a scene are called up and shown with one user action in a live communication session. One or several sources can generate personalised content live on demand of, and/or customised to, a user .
In a preferred embodiment, the system is processed in and executable from a dedicated processor hardware module .
Preferably, the sources and other components are linked to the dedicated processor hardware module , in a wired or wireless way or via another protocol .
In a specific embodiment , the system comprises a communication interface, and at least one processor configured to be connected to the communication interface, and further a memory that is connected to the at least one processor, in which the memory saves instructions which instruct the at least one processor to upload and play a certain scene .
In a specific embodiment, the system comprises a program that is uploaded in the memory, and is configured for creating, adapting, importing and exporting scenes .
In a specific embodiment, the system is connected to a communication system that consists of a communication network for jointly connecting a number of users and a communication server for realising a live communication session between at least a first user and a second user of said number of users, whereby the first user is the moderator of the system and the second user a participant . In a specific embodiment, the system is executed as a console provided with multiple input and at least one output interface, a controlling computer provided with the software- or hardware-based codec, an entry system
(keyboard or similar), a visual system and a selection system (keypad, foot pedal, digital button, touchscreen, etc.), and further one or several: omnidirectional microphones, screens, lighting, loudspeakers, cameras and other configurable peripheral equipment.
The system comprises one or several control units such as a
RAM, ROM, CPU, GPU for executing control functions via programs and data saved in a storage unit and connected via a control bus.
The user action can be executed by a stream deck, keyboard or computer mouse, foot pedal or an active floor sensor, or any other selection means.
In a specific embodiment, the system comprises a scene editor application/software for creating, adapting, saving, uploading, importing and exporting one or several scenes.
In a specific embodiment, the scene editor is integrated in the dedicated processor module. In another embodiment, the scene editor is applied, used, operated or controlled as application software from, for example, a computer, laptop, tablet or smartphone.
In a specific embodiment, the scene editor comprises an informative and/or a function field, for showing information and possible functions, and an editor field for creating, adapting, importing and exporting the one or several scenes .
Preferably, the one or several scenes in the editor field are shown as tiles in a gallery.
The scene editor software is able to run on a server, cloud, laptop, tablet, smartphone, dedicated system or the like .
In a specific embodiment, the system comprises a console desk with an integrated scene editor in a configuration with a stream deck, a microphone and a pan-tilt-zoom camera .
The system according to the invention can be applied in a sales, educational or medical concept for example .
The system of the invention is explained in more detail below.
The system according to the invention relates to creating, setting and saving a scene and its desired parameters such as picture, position, sound, light etc., under one digital button, said button uploading the scene and its parameters during a live presentation or online meeting, configuring the recording and display equipment according to the preset parameters and generating and/or showing content. This provides the advantage that, according to the invention, a scene can be composed via the system very simply, including all required studio settings of the equipment, which can thereafter be called up by the moderator with one press of a button or one click of the mouse, and this in an existing stream or video call application.
The static approach of the current video calls can be transformed into a dynamic and instantly responding mechanism in the call.
Another advantage of the system according to the invention is the scene editor where a scene can be edited live on the spot by a moderator .
Another advantage of the system according to the invention is calling up a scene in an online meeting by means of one press of a button when wanted by the moderator of the online meeting, whereby the presets of certain parameters are also taken into account, such as for example a scene whereby the active images of a camera are shown including camera positions, the appropriate audio in and out, light intensity and light colour.
It is possible to determine in advance, or live, in a scene how a camera must record, for example what camera position to take (pan, tilt, zoom), and to preset the light intensity and colour temperature, but other optional parameters can also be set, such as for example the omnidirectional microphone or the microphone of the headphones. Another advantage is the multitude of input data and parameter settings that can be applied by the system: website, photograph, video, media, and parameters such as camera pan-tilt-zoom positions, light settings, sound levels, memo bar, title and many more parameters that can be determined and saved under one digital button to be called up and executed during the call.
In a specific embodiment of a system according to the invention, the system comprises an Elegato stream deck whereby the keypad comprises visually active buttons of the scene.
In yet another embodiment of a system according to the invention, the system comprises a blue or green screening option whereby the moderator is also shown on the presentation screen.
In yet another embodiment of a system according to the invention, the pictures of a drone can be integrated in a scene, whereby the drone parameters for flying and recording are preset in the scene.
In yet another embodiment of a system according to the invention the user can point in the picture of the scene during a live interaction, whereby a certain item can be interactively marked or highlighted in a scene, such as augmented reality.
In yet another embodiment of a system according to the invention, autoframing is applied in the scene via an Al function whereby the camera will automatically focus and track an object. Consequently the camera obtains an interactive function.
A scene comprises, in addition to content, the combination of parameters and settings that are preset and linked in the scene and can be called up with "one press of a button", a mouse click, a foot pedal, a pressure-sensitive pressure mat or the like.
Thus, scene-specific parameters and hardware settings are linked to one signal or "one press of a button" for calling up the scene in a session whereby both the content and the parameters and settings are uploaded, which further means that the preset hardware is configured according to the uploaded parameters and settings linked to the scene and will generate dynamic content in this specific configuration .
By activating the aforementioned signal, the corresponding hardware is configured .
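A hedged sketch of that activation step: one signal looks up the settings linked to the scene and pushes each to its device driver. The driver interface and device names below are invented for illustration and are not the system's actual API.

```python
# Hypothetical dispatcher: when a scene's signal fires, every piece of
# linked hardware is configured from the settings saved in that scene.
def apply_scene(scene_settings, drivers):
    """scene_settings: {device_name: params}; drivers: {device_name: callable}."""
    applied = []
    for device, params in scene_settings.items():
        driver = drivers.get(device)
        if driver is None:
            continue  # device not linked to the system -> skip it
        driver(params)
        applied.append(device)
    return applied

configured = {}
drivers = {
    "camera": lambda p: configured.setdefault("camera", p),
    "light":  lambda p: configured.setdefault("light", p),
}
scene = {"camera": {"pan": 10, "zoom": 30}, "light": {"dim": 75}}
print(apply_scene(scene, drivers))  # ['camera', 'light']
```

Devices whose driver is absent are simply skipped, matching the idea that only recognised, linked hardware is selectable and configurable.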
Parameters and settings that can be linked to a scene and can be called up together with the scene are understood to mean, among others, the variables from the aforementioned non-exhaustive list of studio settings .
An example of a scene in a certain configuration is shown below. A scene comprises a main picture comprising, for example, content and/or hardware sources, parameters and settings from the aforementioned non-exhaustive list .
A secondary scene can be uploaded in a main picture, for example as background picture, comprising content and/or hardware sources, parameters and settings from the aforementioned non-exhaustive list .
The system of the invention comprises a so-called scene editor software or interface for creating scenes . An example is provided below of the creation of a number of scenes via the scene editor of the system.
Preferably, the scene editor application is linked to a hardware device performing said functions . The scene editor application is also able to run on a server, cloud, laptop, tablet, smartphone or the like, for example . For the sake of simplicity it is referred to as "computer" .
After logging on and entering the user name and password, the basic screen of the scene editor is shown . The user is now moderator for creating scenes .
The scene editor is split in two panels, whereby the one panel comprises an info field and function field, for informative statements and functions to be selected, and the other panel comprises an editor field. Subsequently, a first scene (Scene 1) can be created. By clicking on "new slide" in the editor field, a new scene is created.
The system of the invention is connected to one or several hardware components such as cameras, sound installations, lighting installations and the like . The connected hardware components are identified or recognised by the system and are selectable as sources in the scene editor.
Other sources that are recognised in the scene editor are, for example, an internal memory, linked laptops, a USB device, an HDMI input (external video) and the like.
In this example the following sources are recognised and shown in the editor field of the scene editor: three PTZ cameras, "CAM1", "CAM2" and "CAM3", "Media", "Document" or the external link to a "Laptop" via HDMI (external content) .
By selecting a source, it is added to the scene . This implies that the content of said source can be called up or generated in the scene .
"CAM2" is selected as an example and is added to Scene 1 for editing. The real-time camera image of CAM2 appears in the editor field and now the camera image can be adapted in the "pan" and "tilt" position via the arrows to the left, right, top and bottom, and in the "zoom" position via the + and - symbols shown in the camera image . CAM2 is directed on a screen that is now visible in the camera image . The screen is also linked to the system.
Subsequently, the "background picture" of said background screen can be set by clicking the "set background picture" function in the function field of the scene editor. Said setting determines which content must be shown on the screen in the scene, i.e. when said content is called up in a live session.
In the editor field, a selection can be made from possible background pictures such as in this example, "Media", "Document", "Laptop" or "None" .
By clicking "Laptop" an external image is shown coming from an external laptop, in said configuration said laptop being connected via an HDMI input .
The background screen is activated as a background scene in the first scene (Scene 1) .
Subsequently, the studio settings of Scene 1 can be edited, such as the settings for lighting, colour temperature, microphone configuration, speakers or headset configuration, title, memo or reminder text, and this in a simple and user-friendly manner.
By clicking on the "Memo Bar" a memo can be entered. By clicking on "Title" you can give the scene a name. This is shown in the info field of the scene editor. In this phase the CCT values (colour temperature values) of the lamps are also adapted; this is visible in real time in the camera image that is still shown in the editor field.
Further, the audio settings are preset .
As soon as the parameters and settings for the desired configuration are set, said presets are saved or linked to the scene (Scene 1) after clicking on "Save" .
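Saving a scene's presets, as in the step above, can be pictured as a simple serialisation round trip: everything linked to the scene is written out once and read back when the scene is called up. The JSON layout and field names here are assumptions for illustration, not the system's real storage format.

```python
import json

# Illustrative round trip: a scene's presets serialised so they can be
# re-uploaded later with one action (field names are assumptions).
def scene_to_json(scene):
    return json.dumps(scene, indent=2)

def scene_from_json(text):
    return json.loads(text)

scene1 = {
    "title": "Scene 1",
    "memo": "Intro the product line",
    "sources": [{"kind": "ptz_camera", "pan": 5, "tilt": 0, "zoom": 25}],
    "studio": {"cct_kelvin": 4000, "mic": "table"},
}
restored = scene_from_json(scene_to_json(scene1))
print(restored["title"])  # Scene 1
```

The same serialised form could back the export/import of whole presentations (groupings of scenes) mentioned later in the text.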
Subsequently, a "new slide" can be selected in the editor field next to the slide 1 with the just created first scene
(Scene 1 ) which is also shown in the editor field as a tile in a gallery or selectable icon or digital button.
The tile can either show a text or a photograph or camera image of the camera in the relevant scene . In the case of
Scene 1 this could be a recording of CAM2.
The Scene 1 tile comprises symbols for moving, switching, adapting or removing the scene .
After clicking "new slide" a second scene (Scene 2) can be created.
Again, a (connected) source can be selected in the editor field, in this example: three PTZ cameras, "CAM1", "CAM2" and "CAM3", "Media", "Document" or "Laptop".
"Laptop" is selected now. The info field shows the connection of the source to the system, in this case it relates to an HDMI input . On the external picture of the HDMI input, a website is selected and added to Scene 2.
Again, the studio settings for the scene can be preset, analogously to the first scene. The light and audio settings are adapted. The scene is given a title and the memo bar is entered with a reminder text that the moderator can use when the scene goes live.
Slide/Scene 2 is configured and confirmed. The settings for Scene 2 are saved.
Subsequently, a third slide or scene can be created. In this way it is possible to build the presentation further comprising different scenes and content . Every scene is a slide in the scene editor.
The different scenes can be individually called up during a live session via one press of the scene tile, for example via a mouse click.
In this example an Elegato Stream Deck or physical keypad is used. Every scene is hereby shown on a separate button with a name or photograph or other picture. The physical digital buttons also show the settings of the scenes. In this case it is possible to see that button 1 is Scene 1 and is active (light blue) whereas button 2 is Scene 2 and still needs to be configured (orange). The control panel/keypad also provides buttons to scroll through the different scenes, as well as shortcuts for audio on/mute and buttons for navigation to the "Previous" and "Next" slide.
A "Media" slide is created as the third scene (Scene 3) in the example .
The media can be saved locally on (the internal memory of) the system, but can also be called up via USB, SD card or hard disk via a list from which subsequently the correct media (in this case a photograph) can be selected and said scene can be further configured.
The studio settings, such as, among others, the light and audio settings, can be configured, and the title and memo are also linked to the third scene, after which the scene is saved.
The editor field now shows the three configured slides as tiles in a gallery, and the system requests to create a fourth slide or scene if necessary.
A fourth scene (Scene 4 ) is created and "CAM3" is selected as source .
Camera 3 comes into view and the position of the picture is configured. The system communicates in this example via the VISCA protocol over the network of the camera, whereby the zoom position is set on the camera itself and said setting parameters are linked to this scene . As soon as the desired parameters are set, said parameters are confirmed in the system. No background picture is required in this scene .
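The text mentions the VISCA protocol for setting the camera over the network. As a hedged illustration — command layout per commonly published VISCA references, with transport framing such as VISCA-over-IP headers omitted — a "Zoom Direct" packet for camera address 1 splits the zoom position into four nibbles:

```python
# Sketch of a VISCA "Zoom Direct" command as commonly documented for PTZ
# cameras: 8x 01 04 47 0p 0q 0r 0s FF, where p,q,r,s are the four nibbles
# of the zoom position. The 0x4000 upper bound is a typical value, assumed.
def visca_zoom_direct(position, address=1):
    if not 0 <= position <= 0x4000:
        raise ValueError("zoom position out of range")
    nibbles = [(position >> shift) & 0x0F for shift in (12, 8, 4, 0)]
    return bytes([0x80 | address, 0x01, 0x04, 0x47, *nibbles, 0xFF])

pkt = visca_zoom_direct(0x1234)
print(pkt.hex())  # 8101044701020304ff
```

A scene would store only the zoom position; the packet is built and sent to the camera when the scene is called up, over whichever transport (RS485, network, USB) the camera is linked by.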
After confirmation of the camera settings and not adding a background scene in this scene (picture in picture) the studio settings for this scene, such as light and audio settings, are set and the scene receives a title . The settings are subsequently saved in Scene 4 .
A memo note can subsequently be entered that one wants to see when calling up the scene . The memo is a mnemonic device that is only visible to the operator/speaker/presenter/moderator . The settings are subsequently saved by clicking "save" .
The follower who views the video call, stream or recording only gets to see the pictures as soon as the operator calls up the scenes and consequently transmits the pictures. This can be very varied. In an example of a video call, a speaker can react and respond with scenes very quickly - in a scenario of different scenes prepared by the speaker in response to questions of followers, for example - such that the video call has highly dynamic content, is interactive, and detailed information can be shared much more clearly, whereas in the current known configurations this is difficult, complex or not possible, which causes video communication to sometimes be inefficient and unnatural.
Thus, endless numbers of slides or scenes can be created in the system to prepare a scenario of a video call . The more complex, technical or visual a streaming or video call is, the more images, camera points of view, videos and content are necessary to convey a complex story simply and clearly because one picture says more than a thousand words .
A fifth scene (Scene 5) is created. The selected source is
"Laptop" (external HDMI) whereby a wireless camera (on batteries) with built-in lighting is connected to the system via WiFi.
The lamp in the WiFi camera can be switched on or off remotely and by operating the dimmer the light intensity can be set. The camera is filming a pump on an electrical turntable.
As soon as the camera source is selected and confirmed, it comes into view . In a next step the lighting and audio settings are set and the slide and memo are entered. The settings are saved.
Finally, a sixth scene (Scene 6) is created with "Document" as the selected source. Here a sales quote in MS Word is uploaded, for example. The parameters for the microphone are set and a memo and title are assigned to the scene.
The presentation now comprises six scenes which can be called up randomly during a stream, video call or recording.
The following selection buttons are available in the function field: export presentation, import presentation, settings, record sessions and online meeting tools . A presentation is a grouping of scenes .
The settings of the system can be modified.
A session, video call or stream can be recorded live .
The system can be connected via existing e-meeting tools, such as for example Skype, Teams and Zoom, and a live video call can be set up.
It is possible to log out from one's own account or an account calling up a scene and to log in to another account afterwards or to shut down the system.
A video call with Zoom is launched by way of an example. The people participating in the video call are visible on the screen. Consequently, the moderator is able to communicate or respond interactively by adapting or switching scenes live with other scenes depending on the questions. Meanwhile, the moderator sees the associated memo bar of the actively selected scene.
For a scene comprising a picture in picture (scene in scene) (camera and background picture) it does not matter whether the camera is set first or the background picture. The order of the configuration is not important.
In an example of an active picture in picture by means of the camera and a background screen, the zoom and pan-tilt position of the camera can be preset and saved in a scene . When calling up the scene in a video call the moderator can still walk around dynamically in the picture of the camera and indicate something on the screen which at the same time shows a live active background picture . During the video call (live) , the moderator can have other information shown on the background screen via a laptop.
In this configuration HDMI data is transmitted to the external background screen, which the operator, user or moderator can put to good use to explain something in detail. Consequently, a combination is produced of dynamic content shown on the screen and the moderator himself/herself, who actively takes part in the video call and is actively integrated in a scene as a person. The moderator is then part of this scene and the content (extra information) can then be followed on the inserted (picture in picture) scene.
This setup uses a monitor/TV screen, but this can also be an interactive monitor, a smartboard or a projector which is integrated in the system to make it even more dynamic and interactive.
For example, it is known that a pan-tilt-zoom position can be set for an e-PTZ webcam with the remote control and saved in a button. Selecting and calling up different pictures via a switcher or video mixer is also known. The principle of "Picture in Picture" and multi-camera for sharing a screen is known. It is not known to preset the studio settings per scene and to link them to the scene, whereby both the content and the studio settings can be called up with "one press of a button" in a live session.
It is complex to set up and link such a configuration for one person without technical knowledge or expertise with the existing systems. Several operators are required to set and operate each of the units separately. The system relieves the operator such that he can focus on the content of the video call. The advantage of the system of the present invention is that both the content and the settings are saved in a preset under "one button" and can be called up on demand during a video call, a live stream or a video recording by one user action or "one press of a button".
Consequently, the system is more efficient and can be operated by one person. Accordingly, the cost is low and the user-friendliness very high. The system is compact and less complex and can be linked to a standard computer or be provided as a dedicated system.
The system of the invention comprises a collection of scenes and their preset studio settings which are called up one after the other and shown in real-time .
The invention further relates to an encoding device or computer provided to execute the system as described herein, whereby the encoding device consists of: a processor and memory, a network module and a live communication module which are saved in the memory and executed by the computer processor. The invention also relates to one or several computer-readable media containing instructions which, in response to the execution of the instructions by a processor of a computer device, prompt the computer device to execute the system as described herein.
With the intention of better showing the characteristics of the invention, an embodiment is described hereinafter, by way of an example without any limiting nature, of a system and method according to the invention with reference to the accompanying drawings wherein:
figure 1 shows a first example of a studio setup or configuration of a system according to the invention;
figure 2 shows a second example of a sales concept configuration of a system according to the invention;
figure 3 shows an example of a configuration of a medical concept of a system according to the invention.
Figure 1 shows a first example of a studio setup for a system according to the invention. Other setups and configurations are also possible .
The system of figure 1 is executed as a dedicated system in the form of a desk console 1, whereby all required components, such as video and audio connections, etc., are compactly grouped in one single installation: a remotely operatable video camera or a PTZ camera or webcam 3, a keypad (stream deck) 4 for calling up a scene, a screen 5, mouse 6 and keyboard 7 and an external laptop 8.
Further, the system is linked to a second PTZ camera 2, a headset 9' and speaker 9" and an external screen 10 for background pictures.
The operator 11 walks around freely in the scene and can operate the system (create scenes, adapt and save settings, call up scenes, show live images on the background screen, etc . ) via the keyboard and the mouse, the keypad and the laptop.
The system can just as well be built from the individual components as indicated above and any extra components, without the dedicated desk console .
The system runs on the laptop and is linked to a dedicated processor hardware module integrated in the desk console 1.
The hardware of a system according to the invention consists of a core computer or server on which the software or the operating system runs. Preferably, the system comprises a dedicated processor hardware module to which the cameras and other components are linked.
In the example, the hardware is processed in a system desk.
Various monitors can be linked to this, both as input and output .
The configuration can be expanded with extra hardware means such as for example a push button, a foot pedal, a stream deck, or a presenter button, to switch scenes. Various physical and/or wireless cameras can be linked, from a static position or with a pan-tilt-zoom function, but also a microscope for example.
The system can be executed as completely mobile or be static . The system can also be executed as a desktop version.
Further, a laptop and/or USB stick or other input can be linked to the desk, via HDMI, wireless, a network, or another way. This also applies to an omnidirectional microphone and speakers or a headset with voice and listening function.
The electrical interfaces, the dedicated processor or controlling computer, and the software or hardware which allows encoding or decoding or zipping or unzipping data are located in the console .
Omnidirectional microphones are linked to the console in the setup, as well as a TV-monitor with loudspeakers and a screen.
Figure 2 shows a similar example of a sales concept configuration of a system according to the invention.
The studio setup or configuration comprises an exhibited car 13. During a live session the moderator/sales agent is able to hold a sales pitch and present the car in all its details as if the buyer were present in person. The studio setup comprises the necessary lighting 12 to bring the car 13 properly into view.
Various wireless cameras (not shown) can be set up in the car to bring the components and details of the car into view. Said cameras are then also integrated in a scene.
A scene can be expanded with other types of data. Via the laptop 8 on the desk 1, a sales quote can be created live for the customer and on the background screen 10 it is possible to scroll through the optional extras of the car live, for example.
In an example of an educational concept or coaching or distance learning, the car can be replaced by, for example, an electronic microscope and a study object. The teacher/operator and the students take part in the live session in a two-way communication.
In a possible scenario a course or class is taught remotely, whereby in the video call various scenes are prepared and the various physical educational items can be brought into view.
For example, in a configuration for a skeleton of an animal, the teacher is able to bring all perspectives into view in detail by means of different camera positions, or is able to zoom in on a detail by using a wireless zoom camera.

Figure 3 shows an example of a configuration of a medical concept of a system according to the invention.
Special peripheral equipment such as for example digital cameras, microscopes, video endoscopes, medical ultrasound imaging devices can be added to the system.
This configuration uses a foot pedal 14 to dynamically switch between different scenes. This allows for hands-free dynamic switching of the scenes.
The present invention is by no means limited to the embodiments described as an example and shown in the drawings, but a system according to the invention as defined by the claims can be realised according to all kinds of variants without departing from the scope of the invention.

Claims

1. System (1) for improving an online meeting, video conference, live streaming or multi-camera broadcasting via a live communication network, the system comprising one or several scenes which are saved by the user, the system further comprising a studio setup comprising one or several cameras (2, 3), one or several screens (5, 10), lighting (12), audio in/out (9', 9") and a control (4, 6), whereby a moderator (11) is brought into view via at least one camera (3), whereby the system comprises one or several sources of content or for generating content which are linked to the one or several scenes, whereby the one or several scenes are randomly called up and uploaded in a live communication session, such that the relevant content of the linked sources is presented or generated live, characterised in that in addition to the sources the studio settings for lighting (12) and audio in/out (9', 9") and all required adjustable studio parameters, including a scene title and memo bar with reminder text, are also preset and saved in each of the one or several scenes.
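The central idea of claim 1, content sources and studio presets saved together in a single scene record so that one recall restores everything, can be sketched as a simple data structure. The field names and default values below are assumptions for illustration only; the claim does not prescribe any representation.

```python
# A minimal sketch (not from the patent) of a scene record that bundles
# content sources with preset studio settings as described in claim 1:
# lighting, audio in/out, scene title and memo text are saved together
# with the linked sources. All field names and defaults are invented.

from dataclasses import dataclass, field

@dataclass
class StudioSettings:
    light_on: bool = True
    light_dim: int = 80               # DIM value, 0-100 (assumed range)
    colour_temperature_k: int = 5600
    microphone: str = "omnidirectional"
    audio_out: str = "headset"
    title: str = ""
    memo: str = ""                    # reminder text for the moderator

@dataclass
class Scene:
    name: str
    sources: list = field(default_factory=list)   # e.g. ["ptz-cam-1", "hdmi-in"]
    settings: StudioSettings = field(default_factory=StudioSettings)

scene = Scene(
    name="car-interior",
    sources=["wireless-cam-2"],
    settings=StudioSettings(light_dim=60, title="Interior",
                            memo="mention trim options"),
)
```

Because the settings live inside the scene object, calling up the scene can apply lighting, audio and the memo bar in the same step that loads the content sources.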
2. System according to any one of the previous claims, characterised in that a scene is called up with one user action (4, 6) in the live communication session such that the content is uploaded and shown as well as the linked preset studio settings are uploaded and configured as such.
3. System according to claim 1, characterised in that the studio settings comprise, among others: settings for the lighting (12), colour temperature, microphone configuration, speakers (9") or headset (9') configuration, title and memo or reminder text.
4. System according to any one of the previous claims, characterised in that the studio setup is further provided with a monitor or screen (10) for presenting a background picture.
5. System according to any one of the previous claims, characterised in that the one or several sources are external video sources or any other data sources, such as for example a camera, HDMI input, hard disk or USB stick, and possibly other connected (wired or wireless) electronic equipment for generating video or any other data, linked either directly physically, wirelessly or via one or other protocol for linking data or video.
6. System according to any one of the previous claims, characterised in that when calling up a scene in the live communication session, the aforementioned studio settings as well as the content of the sources are uploaded, whereby the lighting and/or audio in/out are configured as such according to the preset parameters.
7. System according to any one of the previous claims, characterised in that for every scene specific studio settings are preset.
8. System according to any one of the previous claims, characterised in that the studio settings are preset in the same phase as the preset or selection of the sources and their content.
9. System according to any one of the previous claims, characterised in that the studio settings are saved in the same phase as the storage of the sources and their content.
10. System according to any one of the previous claims, characterised in that the sources, content and studio settings are saved together in one scene.
11. System according to any one of the previous claims, characterised in that the system settings are assigned and saved for several scenes.
12. System according to any one of the previous claims, characterised in that a scene is created and saved comprising a source, such as for example an (e-)PTZ camera (2, 3), which is preset in position, tilt and/or zoom settings such that the camera is directed at an item (13) that needs to be brought into view, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene specific studio parameters for bringing into view the relevant item (13).
13. System according to any one of the previous claims, characterised in that a scene is created and saved comprising two sources, a first source, such as for example an (e-)PTZ camera (2, 3), which is preset in position, tilt and/or zoom settings such that the camera is directed at an external monitor or screen (10) on which a background picture is preset via a second source, such as for example an HDMI input, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene specific studio parameters.
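The two-source scene of claim 13, a PTZ camera preset framing an external background screen plus an HDMI input feeding that screen, could be recorded as follows. Every key, name and value here is invented for illustration.

```python
# Illustrative sketch of the two-source scene of claim 13: a PTZ preset
# that frames an external background screen, plus an HDMI source feeding
# the picture shown on that screen. All keys and names are assumptions.

def make_background_scene(pan, tilt, zoom, background_content):
    return {
        "name": "background-demo",
        "sources": [
            {"type": "ptz-camera", "pan": pan, "tilt": tilt, "zoom": zoom},
            {"type": "hdmi-in", "content": background_content},
        ],
        # per-scene studio presets, as the claim requires
        "settings": {"light_dim": 75, "title": "Demo", "memo": "check framing"},
    }

scene_13 = make_background_scene(0.0, 2.0, 1.8, "brochure.png")
```

Recalling such a scene would move the camera to the stored pan/tilt/zoom and simultaneously push the stored content to the HDMI output, so camera framing and background picture always match.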
14. System according to any one of the previous claims, characterised in that a scene is created and saved comprising a source for generating dynamic content, whereby the source is preset according to choice, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene specific studio parameters for bringing into view the content generated by the relevant source.
15. System according to any one of the previous claims, characterised in that a scene is created and saved comprising a source of static content, such as for example media or an image or document, whereby the source is preset according to choice, whereby the scene further comprises studio settings such as light settings, audio settings, title and memo and possibly other scene specific studio parameters.
16. System according to any one of the previous claims, characterised in that the studio settings comprise the following variables, among others:
- description of a scene, such as a logo, text or ID;
- memo, reminder text or "cue";
- light setting: on/off, DIM value, colour temperature, CCT value (0-10 volt), via bluetooth, WiFi, DMX data signal, or future connection protocols;
- sound setting: speaker, in ear, headphones, locally or on the person's body, via bluetooth, wire, USB and future connection protocols;
- microphone setting: table microphone, pinned-on microphone, microphone integrated in ears or headphones, wireless or wired, bluetooth, USB and future signals and protocols;
- Pan-Tilt-Zoom (PTZ) position of a camera and/or auto tracking/framing activation on the camera, via Visca protocol, RS485, network, wireless signal, USB or any future signal;
- on/off signal to physically activate or deactivate a hardware device;
- position, coordinates, start signal of a drone or movable vehicle.
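The variables enumerated in claim 16 could be persisted per scene in a simple structured document. The sketch below uses JSON; every key and value range is an assumption for illustration, since the patent only lists the kinds of variables that are saved, not a format.

```python
# Hypothetical serialisation of one scene's studio settings (claim 16) as
# JSON. All keys and value ranges are invented for this example.

import json

scene_settings = {
    "description": "Skeleton close-up",                  # scene description / ID
    "memo": "cue: zoom in on the skull",                  # reminder text
    "light": {"on": True, "dim": 70, "cct_volt": 6.5},    # 0-10 V CCT control
    "sound": {"output": "in-ear", "transport": "bluetooth"},
    "microphone": {"type": "pinned-on", "transport": "wireless"},
    "ptz": {"pan": 12.0, "tilt": -4.5, "zoom": 3.2, "auto_tracking": False},
    "hardware_power": {"background_screen": True},        # on/off signal
}

blob = json.dumps(scene_settings)   # saved together with the scene
restored = json.loads(blob)         # re-applied when the scene is called up
```

Storing the settings as one document per scene keeps the recall path simple: a single load restores lighting, audio, microphone, PTZ position and device power states together.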
17. System according to any one of the previous claims, characterised in that the external sources of content or for generating content comprise, among others:
- PTZ or ePTZ camera and (pan/tilt/zoom) settings, position settings with or without auto tracking and framing function;
- webcam;
- powerpoint (ppt) or other presentation form;
- photo;
- movie - video (media) file;
- file (pdf or other format);
- microphone;
- local website via network;
- a link with another computer - internet - website - local network;
- HDMI, network input signal, analogue signal;
- wireless camera;
- static camera;
- micro/macro images, high-speed camera;
- AR (augmented reality) or VR (virtual reality) or AI presentation or experience.
18. System according to any one of the previous claims, characterised in that a secondary picture can be uploaded in the main picture of a scene, for example as a background picture.
19. System according to any one of the previous claims, characterised in that a PTZ camera (2, 3) is directed at a background screen (10).
20. System according to any one of the previous claims, characterised in that setting a PTZ camera (2, 3) is combined with setting a background screen (10).
21. System according to any one of the previous claims, characterised in that a presentation comprises one or several scenes, whereby the scenes are called up randomly in a live communication session.
22. System according to any one of the previous claims, characterised in that each created and saved scene is shown as a tile in a gallery or a selectable icon or digital selectable button.
23. System according to any one of the previous claims, characterised in that the tile/icon/button either shows a text or a photograph or camera image of the relevant scene.
24. System according to any one of the previous claims, characterised in that a created and saved scene in a presentation is moved, switched, adapted or removed.
25. System according to any one of the previous claims, characterised in that as many scenes as necessary are created in a presentation.
26. System according to any one of the previous claims, characterised in that a scene is called up during a live session via one user action or function-activation, in a digital or analogue way, for example via a mouse click or press of a button of a keypad or foot pedal (14), via a touch on a touchscreen, or any other means with "one press" function.
27. System according to any one of the previous claims, characterised in that a physical keypad or touchscreen is used, such as for example an "Elgato" stream deck or "Elgato" mobile smartphone app, for calling up a scene during a live session, and every scene is hereby shown on a separate button with an associated name or photograph or other picture.
28. System according to any one of the previous claims, characterised in that the system connects via existing online meeting tools, such as for example Skype, Teams and Zoom, and a live video call is established.
29. System according to any one of the previous claims, characterised in that the hardware sources that are linked to the system, physically wired or wireless, are recognised as such and are selectable.
30. System according to any one of the previous claims, characterised in that the system comprises at least the following linked equipment: a PTZ camera (2, 3), a mouse (6) or keypad (4) or other digital or analogue selection device, lighting and audio equipment.
31. System according to claim 30, characterised in that the system further comprises a background screen (10) on which the PTZ camera (2, 3) is directed.
32. System according to any one of the claims 30 or 31, characterised in that the system is further provided with a laptop (8), internal memory or USB device.
33. System according to any one of the previous claims, characterised in that both the content and the studio settings of a scene are called up and shown with one user action in a live communication session.
34. System according to any one of the previous claims, characterised in that one or several sources generate personalised content live on demand of, and/or customised to, a user.
35. System according to any one of the previous claims, characterised in that the system is processed in, and executable from, a dedicated processor hardware module.
36. System according to any one of the previous claims, characterised in that the sources and other components are linked to the dedicated processor hardware module using wires or wirelessly or via another protocol.
37. System according to any one of the previous claims, characterised in that the system comprises a communication interface, and at least one processor configured to be connected to the communication interface, and further a memory that is connected to the at least one processor, in which the memory saves instructions which instruct the at least one processor to upload and play a certain scene.
38. System according to any one of the previous claims, characterised in that the system comprises a program that is uploaded in the memory, and is provided with instructions or configured for creating, adapting, importing and exporting scenes.
39. System according to any one of the previous claims, characterised in that the system is connected to a communication system that consists of a communication network for mutually connecting a number of users and a communication server for realising a live communication session between at least a first user and a second user of said number of users, whereby the first user is the moderator of the system and the second user a participant.
40. System according to any one of the previous claims, characterised in that the system is executed as a console (1) provided with multiple input and at least one output interface, a controlling computer provided with the software- or hardware-based codec, an entry system (7) (keyboard or similar), a visual system (5) and a selection system (4, 6) (keypad, foot pedal, digital button, touchscreen, etc.), and further one or several: omnidirectional microphones, screens (10), lighting (12), loudspeakers, cameras (2, 3) and other configurable peripheral equipment.
41. System according to any one of the previous claims, characterised in that the user action is executed by an analogue or digital stream deck, keyboard (7) or computer mouse (6), keypad (4), foot pedal (14) or an active floor sensor.
42. System according to any one of the previous claims, characterised in that the system comprises a scene editor for creating, adapting, saving, uploading, importing and exporting one or several scenes.
43. System according to claim 42, characterised in that the scene editor comprises an informative and/or a function field, for showing information and possible functions, and an editor field for creating, adapting, importing and exporting the one or several scenes .
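The import and export side of the scene editor described in claims 42 and 43 could amount to serialising scene records to a file so a prepared presentation can be moved between consoles. The sketch below is a hedged illustration; the file format and function names are assumptions, not part of the disclosure.

```python
# Hedged sketch of scene import/export (claims 42-43): scenes serialised
# to a JSON file for transfer between consoles. Format is an assumption.

import json
import os
import tempfile

def export_scenes(scenes, path):
    """Write a list of scene records to disk."""
    with open(path, "w") as f:
        json.dump(scenes, f, indent=2)

def import_scenes(path):
    """Read scene records back, e.g. on another console desk."""
    with open(path) as f:
        return json.load(f)

# Round trip: export a one-scene presentation and import it again.
demo_path = os.path.join(tempfile.gettempdir(), "scenes_demo.json")
export_scenes([{"name": "intro", "sources": ["ptz-cam-1"]}], demo_path)
restored = import_scenes(demo_path)
```

A text-based format of this kind would also let the editor field show each imported scene as a tile in the gallery, as claim 44 describes.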
44. System according to claim 43, characterised in that the one or several scenes in the editor field are shown as tiles in a gallery.
45. System according to any one of the claims 42 to 44, characterised in that the scene editor software is able to run on a server, cloud, laptop, tablet, smartphone, dedicated system or other system.
46. System according to any one of the claims 42 to 45, characterised in that the system comprises a console desk with an integrated scene editor in a configuration with a stream deck, a microphone and a pan-tilt-zoom camera.
47. System according to any one of the previous claims, characterised in that the system is applied in a sales, educational or medical concept.
48. System according to any one of the previous claims, characterised in that a collection of scenes, and also their preset studio settings, are called up and shown in real-time one after the other.
49. An encoding device or computer provided to execute the system according to any one of the previous claims, whereby the encoding device consists of: a processor and memory, a network module and a live communication module which are saved in the memory and executed by the computer processor.
50. One or several computer-readable media containing instructions which, in response to the execution of the instructions by a processor of a computer device, prompt the computer device to execute the system according to 1 to
PCT/IB2022/057502 2021-08-11 2022-08-11 System for improving an online meeting WO2023017459A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
BE2021/5641 2021-08-11
BE20215641A BE1029675B1 (en) 2021-08-11 2021-08-11 System for enriching an online meeting

Publications (1)

Publication Number Publication Date
WO2023017459A1 true WO2023017459A1 (en) 2023-02-16

Family

ID=77447652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/057502 WO2023017459A1 (en) 2021-08-11 2022-08-11 System for improving an online meeting

Country Status (2)

Country Link
BE (1) BE1029675B1 (en)
WO (1) WO2023017459A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101790208B1 (en) * 2017-05-26 2017-10-25 주식회사 대경바스컴 Device and method for real time 3d virtual studio broadcasting
US20200177965A1 (en) * 2018-11-14 2020-06-04 Eventstream, Inc. Network services platform systems, methods, and apparatus
US20200371677A1 (en) * 2019-05-20 2020-11-26 Microsoft Technology Licensing, Llc Providing consistent interaction models in communication sessions
KR20210020293A (en) * 2019-08-14 2021-02-24 주식회사 백그라운드 Service system for providing rental studio for personal broadcasting


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Question / Help - Save Scenes in OBS", OBS FORUMS, 21 April 2020 (2020-04-21), XP055912054, Retrieved from the Internet <URL:https://obsproject.com/forum/threads/save-scenes-in-obs.119831/> [retrieved on 20220412] *
RICHARDS PAUL WILLIAM: "The Unofficial Guide to Open Broadcaster Software OBS: The World's Most Popular Free Live-Streaming Application", 8 January 2020 (2020-01-08), XP055979975, Retrieved from the Internet <URL:http://therez.ca/wp-content/uploads/2021/05/The-Unofficial-Guide-to-Open-Broadcaster-Software-PDF.pdf> [retrieved on 20221110] *
SANDE STEVE: "OBS Chapter 3: Sources and Scenes", ROCKET YARD APP REVIEW, 22 June 2020 (2020-06-22), XP055912073, Retrieved from the Internet <URL:https://eshop.macsales.com/blog/63040-obs-chapter-3-sources-and-scenes/> [retrieved on 20220412] *

Also Published As

Publication number Publication date
BE1029675A1 (en) 2023-03-07
BE1029675B1 (en) 2023-03-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22768952

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22768952

Country of ref document: EP

Kind code of ref document: A1