GB2606131A - Communication platform - Google Patents

Communication platform

Info

Publication number
GB2606131A
Authority
GB
United Kingdom
Prior art keywords
audio
data
video data
communication platform
graphical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2103459.0A
Other versions
GB202103459D0 (en)
Inventor
Rosinski Martin
Atkinson David
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Palringo Ltd
Original Assignee
Palringo Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Palringo Ltd filed Critical Palringo Ltd
Priority to GB2103459.0A priority Critical patent/GB2606131A/en
Publication of GB202103459D0 publication Critical patent/GB202103459D0/en
Priority to PCT/GB2022/050624 priority patent/WO2022189795A1/en
Publication of GB2606131A publication Critical patent/GB2606131A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046 Interoperability with other network applications or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1827 Network arrangements for conference optimisation or adaptation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10 Multimedia information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/024 Multi-user, collaborative environment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1831 Tracking arrangements for later retrieval, e.g. recording contents, participants activities or behavior, network status

Abstract

A communication platform system for enabling at least a first user to perform in a performance session comprises a first computing device 103a and a second computing device 102. The first computing device runs communication platform software to control the first computing device to receive audio performance data from a performing user of the first device and to communicate 111 the audio performance data to the second computing device. The second computing device runs communication platform software configured to process the audio performance data in accordance with a rendering process to generate an audio/video data stream 113 corresponding to a representation of the performance. The rendering process is configured to receive first graphical data 114 associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user. The rendering process generates the audio/video data stream such that the video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.

Description

Intellectual Property Office Application No. GB2103459.0 RTM Date: 26 August 2022. The following terms are registered trade marks and should be read as such wherever they occur in this document: World's Online Festival (page 1); Wi-Fi (page 19). Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
COMMUNICATION PLATFORM
Technical Field
The present invention relates to techniques for implementing communication platform systems for enabling performing users to perform to other users.
Background
Software communication platforms, such as "chat" platforms that enable users to exchange messages, often "instant messages", in real time are well known and increasingly widely used.
Such platforms are used in many settings, for example in social settings to enable users who know each other or have a common interest to communicate with each other, and in workplace settings to enable colleagues to communicate with each other.
More specialist chat platforms are known that provide further functionality. For example, integrated audio/messaging platforms are known, such as the "World's Online Festival" platform that combines an audio function with a messaging function. Specially configured "chat rooms" are provided that enable one or more users to perform via a suitable audio stream (for example by talking, singing or rapping) whilst other users can, in real time, engage with the performance using an instant messaging interface.
An advantage of such an arrangement over, for example, video conferencing software (which could provide the same functionality but by default typically provides an audio, video, and messaging stream for each user) is that it is particularly efficient from a data transmission and processing perspective. This is because the only data that is transmitted is audio stream data and instant messaging data. Minimising data transmission is particularly attractive to certain users, for example those using battery powered devices such as smartphones and who may be connected to a cellular data network on which charges are incurred if data consumption exceeds a set level.
However, whilst such integrated audio/messaging platforms are efficient, they may be considered to have restricted user appeal because visual elements tend to be limited to static avatar representations of the users, for example those users performing via the audio stream.
It is desirable to enhance the user appeal of audio/messaging platforms but doing so without a prohibitive increase in data transmission and processing requirements (as would be required if video streams of users were added) poses a technical problem.
Summary of the Invention
In accordance with a first aspect of the invention, a communication platform system for enabling at least a first user to perform in a performance session is provided. The system comprises a first computing device and a second computing device. The first computing device has running thereon communication platform software configured to control the first computing device to receive audio performance data from a performing user of the first device corresponding to a performance by the performing user and to communicate the audio performance data to the second computing device. The second computing device has running thereon communication platform software configured to process the audio performance data in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance. The rendering process is configured to receive first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user. The rendering process is configured to generate the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
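The rendering process of this first aspect can be pictured, very loosely, as a function that takes the audio performance data together with the two sets of graphical data and emits a combined audio/video stream. The Python sketch below is purely illustrative: all names, structures and placeholder frame data are invented for clarity and do not represent the patented implementation.

```python
from dataclasses import dataclass

# Illustrative container only: the patent does not prescribe data formats.
@dataclass
class AVStream:
    video_frames: list  # animation frames of the rendered scene
    audio: bytes        # the audio performance data, carried through unchanged

def render_performance(audio_data: bytes, environment_gfx: dict,
                       avatar_gfx: dict, n_frames: int = 3) -> AVStream:
    """Compose the performing user's avatar into the 3D environment for
    each video frame; the audio track is the received performance audio."""
    frames = [{"scene": environment_gfx["name"],
               "avatar": avatar_gfx["name"],
               "frame": i} for i in range(n_frames)]
    return AVStream(video_frames=frames, audio=audio_data)
```

The point of the sketch is the data-flow asymmetry: only audio is received from the performing user, while the video is synthesised server-side from pre-stored graphical assets.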
Optionally, the first computing device is a first client device, and the second computing device is a server device.
Optionally, the system further comprises a plurality of further client devices, each further client device associated with a further user. The communication platform software running on the server device is configured to control the server device to communicate the audio/video data stream to the plurality of further client devices, said plurality of further client devices having running thereon communication platform software for producing audio and video output corresponding to the audio/video data stream.
Optionally, the rendering process is configured to generate the audio/video data stream as a plurality of audio/video data segments, each segment associated with a predetermined length of audio performance data.
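Segmenting the stream into pieces of a predetermined length might be sketched as follows. This is an assumed, byte-based illustration only (a real implementation would more likely segment by playback duration):

```python
def segment_audio(audio: bytes, bytes_per_segment: int) -> list:
    """Split audio performance data into fixed-length segments, one per
    audio/video segment to be rendered; the final segment may be shorter."""
    return [audio[i:i + bytes_per_segment]
            for i in range(0, len(audio), bytes_per_segment)]
```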
Optionally, the rendering process is configured to sequentially generate the audio/video data segments.
Optionally, the rendering process is configured to generate video data of the audio/video data stream in accordance with one or more virtual camera operations, wherein each virtual camera operation dynamically changes a perspective of the three-dimensional environment.
Optionally, the rendering process is configured to receive third graphical data associated with a plurality of further avatars, each further avatar associated with each further user, said rendering process configured to generate video data of the audio/video data stream in accordance with the third graphical data such that the video data comprises a plurality of further graphical representations, each further graphical representation of a further avatar associated with one of the further users.
Optionally, the rendering process is configured to position the first graphical representation of the avatar associated with the performing user in a first stage area location within the representation of the three-dimensional environment, and to position the further graphical representations associated with the further users in a second audience area location.
Optionally, the position of each further avatar within the second audience area location is dependent on attribute data associated with the further user with whom the further avatar is associated.
Optionally, the attribute data is an engagement score indicative of a degree to which each further user has engaged with the communication platform.
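One plausible reading of this option is that higher-scoring users are seated nearer the stage. The sketch below assumes exactly that policy (the mapping of score to position is not specified by the source):

```python
def assign_audience_positions(engagement: dict) -> dict:
    """Map each user id to an audience row, with the most engaged users
    in the front rows. `engagement` maps user id -> engagement score."""
    ordered = sorted(engagement, key=engagement.get, reverse=True)
    return {uid: row for row, uid in enumerate(ordered)}  # row 0 = front row
```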
Optionally, the communication platform software running on each further client device provides an interface enabling a further user to generate messaging data and to communicate the messaging data to the server device, said rendering process configured to generate video data of the audio/video data stream in accordance with the messaging data such that the video data comprises a first further graphical representation of the messaging data relative to the representation of the three-dimensional environment.
Optionally, the interface enables a first further user to generate interaction data relating to a second further user and to communicate the interaction data to the server device, said rendering process configured to generate the video data in accordance with the interaction data such that the video data of the audio/video data stream comprises a second further graphical representation of the interaction data displayed relative to a graphical representation of the avatar associated with the first further user and the graphical representation of the avatar associated with the second further user.
Optionally, the interaction data relates to an emoticon selected by the first further user, and the second further graphical representation of the interaction data is a graphical representation of the emoticon.
Optionally, the second further graphical representation of the interaction data is displayed relative to the graphical representation of the avatar associated with the first further user and the graphical representation of the avatar associated with the second further user by transitioning along a transition path which starts at a first point and terminates at a second point, wherein the first point is adjacent to a graphical representation of the avatar associated with the first further user and the second point is adjacent to the graphical representation of the avatar associated with the second further user.
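The transition path described above could, in the simplest case, be a straight line sampled once per animation frame. The following sketch assumes linear interpolation in 2D screen coordinates (the source does not limit the path to a straight line):

```python
def transition_path(start, end, steps):
    """Points along a straight path from a point adjacent to the sending
    user's avatar to a point adjacent to the recipient's avatar."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]
```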
Optionally, the second computing device is configured to generate the audio/video data stream contemporaneously with receiving the audio performance data.
Optionally, the system further comprises a storage means for storing the audio performance data, first graphical data and second graphical data for subsequent rendering of a subsequent audio/video data stream.
In accordance with a second aspect of the invention, there is provided a method of implementing a communication platform system. The method comprises: controlling a first computing device to receive audio performance data from a performing user of the first device corresponding to a performance by the performing user; communicating the audio performance data to a second computing device; processing, at the second computing device, the audio performance data in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance. The rendering process comprises receiving first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user, said rendering process comprising generating the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
In accordance with a third aspect of the invention, there is provided a computing device for use in a communication platform system according to the first aspect. The computing device has running thereon communication platform software configured to process audio performance data received from a first computing device and relating to a performance by a performing user in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance. The rendering process is configured to receive first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user, said rendering process configured to generate the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
In accordance with a fourth aspect of the invention, there is provided a method of implementing a rendering process for use in a communication platform system. The method comprises: receiving audio performance data from a computing device relating to a performance by a performing user and processing the audio performance data in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance. The rendering process comprises receiving first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user. Said rendering process further comprising generating the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
In accordance with a fifth aspect of the invention, there is provided a computer program comprising instructions which, when implemented on a suitable computing device, controls the computing device to perform a method according to the fourth aspect.
In accordance with certain embodiments of the invention, a technique is provided that enables a communication platform, which is configured to enable one or more users to perform to one or more further users, to be implemented, that enhances user appeal by providing a visual rendering of users of the platform in a three-dimensional environment, but at a relatively low data transmission and processing cost because the only data that need be transmitted from the device associated with the performing user is audio data.
Various further features and aspects of the invention are defined in the claims.
Brief Description of the Drawings
Embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings where like parts are provided with corresponding reference numerals and in which: Figure 1 provides a simplified schematic diagram depicting a communication platform system arranged in accordance with certain embodiments of the invention; Figure 2 provides a simplified schematic diagram depicting certain graphical elements displayed on a communication platform interface in accordance with certain embodiments of the invention; Figure 3 provides a simplified schematic diagram depicting the operation of server side software for implementing a communication platform system in accordance with certain embodiments of the invention; and Figure 4 provides a simplified schematic diagram depicting certain graphical elements displayed on a communication platform interface in accordance with certain embodiments of the invention and in particular the display of "emoticons".
Detailed Description
Figure 1 provides a simplified schematic diagram of a communication platform system 101 arranged in accordance with certain embodiments of the invention.
The communication platform system 101 comprises a server 102, a plurality of client devices 103a, 103b, 103c and an administrator device 104.
The server 102, plurality of client devices 103a, 103b, 103c and administrator device 104 are connected via a data network 105.
The server 102 has running thereon server-side communication platform software 106 and each of the client devices 103a, 103b, 103c has running thereon corresponding client-side communication platform software 107.
Each of the client devices 103a, 103b, 103c is typically provided by a suitable personal computing user device such as a smartphone, tablet or personal computer providing a display 108. The communication platform software 107 running on each of the client devices 103a, 103b, 103c controls each client device 103a, 103b, 103c to display, on the display 108, a communication platform interface 109 to a user of the client device 103. Each client device 103a, 103b, 103c is typically associated with an individual user.
In use, the communication platform system 101 provides a "chat" platform that enables one or more users to perform in a virtual environment in front of an audience comprising other users of the platform. The users forming the audience can communicate with each other and with the performing user or users via instant messaging. An example is explained in further detail with reference to Figure 2. An instance of one or more users performing to an audience of other users is referred to as a "performance session". The communication platform system 101 typically enables multiple performance sessions to be conducted concurrently. A user can select a performance session to view and change between performance sessions via a suitable control provided on the interface 109.
The administrator device 104 has running thereon administrator software allowing an administrator to undertake administration functions, for example controlling aspects of operation of the server-side communication platform software 106 and generally monitoring usage of the communication platform system 101 by the users.
Figure 2 provides a simplified schematic diagram depicting parts of a communication platform interface 201 displayed on the display of a user device when a performance session is being viewed. Figure 2 shows, in particular, graphical elements forming an animation of a scene displayed on the communication platform interface 201.
The animation of the scene shown on the communication platform interface 201 includes a first avatar graphic 202 associated with a first user (a performing user) who is performing (i.e., providing a performance). Audio from this first user (for example of the user talking or singing) is communicated to the user device and output from the user device in conjunction with the display of the communication platform interface 201. That is, the graphical elements of the communication platform interface 201 are displayed substantially simultaneously as the audio from the first user is output.
The first avatar graphic 202 is positioned within a three-dimensional representation of a predetermined environment, including a three-dimensional representation of a stage area graphic 203. The three-dimensional stage area graphic 203 may include further graphics such as a first three-dimensional prop graphic 204 (of a microphone) and a second three-dimensional prop graphic 205 (of a chair). The three-dimensional stage area graphic 203 may be configured to look like a particular setting, for example the stage of a stand-up comedy club. Other settings could include a theatre, beach bar, stadium and so on.
The animation of the scene shown on the communication platform interface 201 comprises a plurality of further avatar graphics 206 associated with each user who is currently watching the performance session of the first user performing. The further avatar graphics 206 are positioned within a three-dimensional audience area graphic 207.
The communication platform interface 201 further comprises an instant messaging graphic 208 which displays instant messages generated by the users in the audience. The instant messaging graphic 208 comprises a plurality of user messages 210, typically presented in a "time-line" fashion and in which the author of each user message 210 is identified by a corresponding user identifier graphic 209.
Because graphical elements of the communication platform interface 201 are three-dimensional, their representation (animation) can be readily manipulated in accordance with virtual camera operations. For example, the representation of the three-dimensional stage area graphic 203 and three-dimensional audience area graphic 207 may change periodically to mimic the effect of changing camera angles or, for example, camera operations such as panning or zooming.
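As a loose illustration of one such virtual camera operation, the sketch below interpolates a camera's field of view across frames to mimic a zoom. The function and its parameters are invented for illustration; the document does not specify how camera operations are implemented:

```python
def camera_zoom(start_fov, end_fov, frame, total_frames):
    """Field of view for a given frame of a zoom operation: linearly
    narrowing the FOV mimics the virtual camera zooming in on the stage."""
    t = frame / total_frames
    return start_fov + (end_fov - start_fov) * t
```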
As described above, the audience shown by the further avatar graphics 206 in the three-dimensional audience area graphic 207 includes avatars associated with the users currently watching the performance session. Accordingly, a user of the device is represented as an avatar amongst the further avatar graphics 206 as a user avatar graphic 211.
Returning to Figure 1, operation of the communication platform system 101 will be further explained.
A user of a first client device 103a (a performing user) performs by, for example, speaking or singing. This performance is detected as an audio signal (for example via a microphone associated with the first client device 103a) and, under the control of the communication platform software 107 running on the first client device 103a, corresponding audio data 111 (audio performance data) is generated which is communicated to the communication platform software 106 running on the server 102 via the data network 105.
Typically, the user of the first client device 103a is able to select, via the interface 109 provided by the communication platform software 107, a three-dimensional environment which forms the graphical backdrop for their performance (for example, selecting the three-dimensional stage area graphic 203 showing a stage at a stand-up comedy club described with reference to Figure 2).
The communication platform software 107 generates environment selection data 114 corresponding to this selection which is also communicated to the server-side communication platform software 106 running on the server 102.
Meanwhile, via the interface 109 on a second client device 103b, the communication platform software 107 enables a user of the second client device 103b to select a performance session associated with the user of the first client device 103a. The communication platform software 107 running on the second client device 103b generates streaming request data 112 corresponding to this performance session selection which is then communicated to the communication platform software 106 running on the server 102 via the data network 105.
The server-side communication platform software 106 is configured to process the audio data 111 and the environment selection data 114 received from the first client device 103a and generate/encode (render) audio/video data 113 comprising a three-dimensional animation of a scene of the user of the first client device 103a performing in a three-dimensional representation of the environment specified in the environment selection data 114 along with audio provided in the audio data 111.
This audio/video data 113 is then communicated, typically as an audio/video data stream divided into a series of discrete sequential segments, to the client-side communication platform software 107 running on the second client device 103b. The audio and video of the audio/video data stream are then reproduced by the second client device 103b via the interface 109.
As described above, the interface 109, displaying the performance session, typically includes an instant messaging graphic 208 displaying instant messages from users viewing the performance session.
The interface provided by the communication platform software 107 running on each client device typically enables users to generate message data for displaying on the instant messaging graphic 208. With reference to the example shown in Figure 1, should the user of the second client device 103b input a message, the communication platform software 107 running on the client device 103b is configured to generate corresponding message data 110 which is then communicated from the client device 103b to the server-side communication platform software 106 running on the server 102 via the data network 105.
Similarly, the interface provided by the communication platform software 107 typically enables users to generate interaction data for displaying non-text data such as emoticons. As will be understood, the term "emoticon" refers generally to small graphical symbols/pictographs/ideograms such as "emojis". With reference to the example shown in Figure 1, should the user of the second client device 103b generate such data (for example by selecting an emoticon), the communication platform software 107 running on the client device 103b is configured to generate corresponding interaction data 115 which is then communicated from the client device 103b to the server-side communication platform software 106 running on the server 102 via the data network 105.
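The document does not specify a wire format for the interaction data 115; the sketch below assumes a simple JSON payload recording the sender, the targeted user and the selected emoticon, purely as an illustration:

```python
import json

def make_interaction_data(sender_id, target_id, emoticon):
    """Hypothetical wire format for interaction data: which emoticon the
    sending user selected and which user it is directed at."""
    return json.dumps({"type": "interaction", "from": sender_id,
                       "to": target_id, "emoticon": emoticon})
```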
The server-side communication platform software 106 is configured to process the message data 110 and interaction data 115 and incorporate them in the video data of the audio/video data 113.
Figure 3 provides a simplified schematic diagram depicting more detailed operations of the server-side communication platform software 106 in accordance with certain embodiments of the invention.
In particular, Figure 3 provides a diagram depicting components of the server-side communication platform software 301 and their interaction with a client device 302. As will be understood, the client device 302 is typically one of a plurality of such client devices.
The server-side communication platform software 301 includes a playlist service function 303 which is configured to receive streaming request data from the various client devices relating to particular performance sessions that users of the client devices wish to view on their devices.
The playlist service function 303 passes these streaming requests to an orchestrator function 304. The orchestrator function 304 is configured to process and organise these requests and generate a corresponding sequence of job requests that are then passed to a job recorder function 305.
The job recorder function 305 is configured to gather data from the relevant performance session to which the job request relates and convert this into corresponding data to pass to a renderer function 306. The renderer function 306 then undertakes a rendering operation and generates an audio/video data stream in the form of a sequence of encoded audio/video data segments which are sequentially communicated back to the client device 302. This operation can be undertaken by any suitable streaming protocol as is known in the art, for example "HTTP Live Streaming" (HLS).
The audio/video data segments comprise data which, when processed by the client-side communication platform software 107 on the client device 302, generates the graphical elements of the communication platform interface 201 as described with reference to Figure 2 along with reproducing the audio data. The audio/video data segments typically comprise individual data files in a suitable format, for example .mp4 data files.
More specifically, when a user wishes to view a performance session, the client-side communication platform software 107 running on the client device 302 generates corresponding streaming request data 307. This is communicated to the server-side communication platform software 106 and received by the playlist service function 303. The playlist service function 303 passes the streaming request data 307 to the orchestrator function 304. The orchestrator function 304 comprises an active requests store 308 which stores the streaming request data 307 along with streaming request data from other client devices.
Under the control of a job queuing logic function 309, these requests are ordered in a job store 310. Under the control of the job queuing logic function 309, the job store 310 communicates a job request 311 to the job recorder function 305.
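The orchestration described above can be illustrated with a minimal Python sketch. All names here (StreamingRequest, JobRequest, Orchestrator and its methods) are illustrative assumptions, not identifiers from the patent; the sketch simply shows one plausible way streaming requests could be stored, deduplicated per session, and emitted as ordered job requests for the recorder.

```python
import queue
from dataclasses import dataclass
from itertools import count

_job_ids = count(1)  # monotonically increasing job identifiers

@dataclass
class StreamingRequest:
    client_id: str
    session_id: str

@dataclass
class JobRequest:
    job_id: int
    session_id: str

class Orchestrator:
    def __init__(self):
        self.active_requests = []       # cf. active requests store 308
        self.job_store = queue.Queue()  # cf. job store 310

    def receive(self, request: StreamingRequest):
        self.active_requests.append(request)
        # One job per distinct session: many clients viewing the same
        # performance session can share a single rendering job.
        queued_sessions = {j.session_id for j in list(self.job_store.queue)}
        if request.session_id not in queued_sessions:
            self.job_store.put(JobRequest(next(_job_ids), request.session_id))

    def next_job(self) -> JobRequest:
        # cf. job request 311 communicated to the job recorder function 305
        return self.job_store.get()

orch = Orchestrator()
orch.receive(StreamingRequest("client-a", "session-1"))
orch.receive(StreamingRequest("client-b", "session-1"))  # same session, no new job
orch.receive(StreamingRequest("client-c", "session-2"))
```

After the three requests above, the job store holds two jobs, one per distinct performance session, in arrival order.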
The job recorder function 305 comprises an event recording function 317 which is configured to receive and "record" event data 324 in "real-time" relating to the performance session that has occurred since the last video segment was generated.
The event data 324 includes group data 312 identifying users who are currently viewing the performance session; audio performance data 313 generated by the user or users undertaking the performance (as described above, for example talking or singing); interaction data 314 for example specifying "emoticons" or other non-text elements generated by or exchanged between users currently viewing the performance session, and message data 315 associated with instant messages generated by the users currently viewing the performance session.
The job recorder function 305 further includes a snapshot capture function 316 which is configured to capture, if required, a current "state" of the performance session (that is, the state of the performance session up to the point the last audio/video segment ended). For example, the snapshot data typically comprises data which represents what was being displayed on the interface at the end of the previous audio/video segment.
The job recorder function 305 typically "records" data for a predetermined period of time (for example 10 seconds) and then combines the event data 324 with snapshot data generated by the snapshot capture function 316 and outputs corresponding job recorder data 318. The job recorder data 318 includes job data 319 and audio data 320. The job data 319 is provided in a suitable file format, for example as a .json file, and includes the group data 312, interaction data 314, message data 315 and the snapshot data. The audio data 320 is provided in a suitable format, for example as a .wav file or a .opus file.
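A hedged sketch of the job recorder's output structure follows. The function name, field names, and the specific JSON layout are assumptions for illustration only; the sketch simply shows event data gathered over a ~10 second window being combined with a snapshot of session state and serialised as job data, with the audio kept separately in its own format (.wav or .opus in the text above).

```python
import json

def build_job_recorder_data(group, interactions, messages, snapshot, audio_bytes):
    """Combine recorded event data and snapshot data into job recorder data."""
    job_data = {                      # cf. job data 319, e.g. a .json file
        "group": group,               # users currently viewing (group data 312)
        "interactions": interactions, # emoticons etc. (interaction data 314)
        "messages": messages,         # instant messages (message data 315)
        "snapshot": snapshot,         # session state at end of last segment
        "duration_seconds": 10,       # predetermined recording period
    }
    # cf. job recorder data 318: job data plus separate audio data 320
    return {"job": json.dumps(job_data), "audio": audio_bytes}

rec = build_job_recorder_data(
    group=["alice", "bob"],
    interactions=[{"from": "alice", "emoticon": "clap"}],
    messages=[{"from": "bob", "text": "great set!"}],
    snapshot={"last_segment": 7},
    audio_bytes=b"\x00\x01",
)
```

Keeping the audio out of the JSON mirrors the split in the text: structured event/snapshot data travels as a serialisable job file, while the audio stays in a dedicated audio format.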
As will be understood, this combination of the event data 324 and the snapshot data provides enough information for an audio/video data segment to be generated that seamlessly transitions from a preceding audio/video segment.
The job recorder data 318 is then passed to the renderer function 306. The renderer function 306 performs a rendering process. To perform this process, the renderer function 306 receives graphics data 325 relating to the performance session including avatar graphics data 321 relating to the avatars associated with the users currently viewing the performance session and environment graphics data 322 relating to the three-dimensional environment that has been selected as the backdrop for the performance session.
The graphics data 325 and the avatar graphics data 321 are typically stored in a suitable graphics data store (not shown). The avatar graphics stored in this data store (provided, for example, by a suitable database) are typically updated if and when user avatars change, or further users join the communication platform, and the environment graphics data may change or be updated as and when new three-dimensional environment data becomes available.
The renderer function 306 is then configured to generate audio/video data using the job recorder data 318 and the graphics data 325.
In particular, the renderer function 306 generates an animation of a scene of the avatars within the three-dimensional environment. The animation of the avatars is typically "dynamic" so that they appear to move, for example swaying, moving up and down, "breathing" (slightly expanding and slightly shrinking etc.), and change appearance (for example changing colour, glowing etc.).
The three-dimensional environment may similarly be animated, with components of the environment varying over time. For example, if the three-dimensional environment includes a three-dimensional representation of a stage area graphic corresponding to a stand-up comedy club, a graphical element of an area illuminated by a spotlight may move slightly.
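The "dynamic" idle animation described above (swaying, "breathing") can be sketched as small periodic offsets applied to an avatar over time. The function name, amplitudes, and periods below are assumed values for illustration, not parameters from the patent.

```python
import math

def idle_animation(t_seconds, sway_amp=0.05, breathe_amp=0.02):
    """Return per-frame offsets making an idle avatar appear to move."""
    # Side-to-side sway with a 3 second period.
    sway = sway_amp * math.sin(2 * math.pi * t_seconds / 3.0)
    # "Breathing": the avatar slightly expands and shrinks on a 4 second period.
    scale = 1.0 + breathe_amp * math.sin(2 * math.pi * t_seconds / 4.0)
    return {"x_offset": sway, "scale": scale}

frame = idle_animation(0.0)
# At t = 0 both oscillations are at their midpoint: no sway, unit scale.
```

The same pattern could drive environment elements such as the spotlight mentioned above, using a different amplitude and period per element.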
Further, as described above, the renderer function 306 may be configured to implement various virtual camera operations, where the perspective of the scene changes in correspondence with camera operations such as panning, zooming or with changing camera angles to mimic the scene being captured by multiple cameras.
In some examples, the renderer function 306 is configured to cycle through these virtual camera operations sequentially in accordance with a predetermined pattern. In some examples, the predetermined pattern can be defined by a user, for example a performing user.
The performing user can provide virtual camera operation selection data at the same time as providing the environment selection data.
In certain examples, additionally or alternatively, the camera sequence may be determined algorithmically based on the events recorded within the scene (for example, in response to changes in the event data, such as users joining or leaving the performance session, or changes in the audio data indicating the performing user has begun performing or stopped performing), or the placement of objects (such as a three-dimensional prop graphic) within the scene.
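The virtual camera behaviour described in the preceding paragraphs can be sketched as follows. The operation names, the default pattern, and the event-driven override are illustrative assumptions; the sketch shows a renderer cycling through camera operations in a predetermined (optionally user-supplied) pattern, with an algorithmic cut triggered by recorded events.

```python
from itertools import cycle

# Assumed default sequence of virtual camera operations.
DEFAULT_PATTERN = ["wide", "zoom_stage", "pan_audience", "close_up"]

class VirtualCameraDirector:
    def __init__(self, pattern=None):
        # A performing user may supply their own pattern (e.g. alongside
        # the environment selection data); otherwise use the default.
        self._cycle = cycle(pattern or DEFAULT_PATTERN)

    def next_shot(self, events=None):
        # Algorithmic override: react to recorded events rather than the
        # fixed pattern, e.g. cut to the stage when the performer starts.
        if events and "performer_started" in events:
            return "zoom_stage"
        return next(self._cycle)

director = VirtualCameraDirector()
shots = [director.next_shot() for _ in range(5)]
# The predetermined pattern wraps around after the fourth shot.
```

A per-segment renderer could call `next_shot` once per segment (or more often) to decide the perspective used when rendering that portion of the scene.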
Typically, the audio/video data is generated as audio/video segment data 323 which provides segments which are of corresponding length to the period of time over which the event data was recorded (e.g., 10 seconds).
The audio/video segment data 323 is communicated back to the client device 302 for playback via the interface 109 to the user of the client device 302.
Typically, to achieve continuous streaming, the client-side communication platform software 107 will continually communicate (i.e., continually poll) the server-side communication platform software 106 until the user quits the performance session, or the performance session ends.
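The continuous-streaming behaviour above can be sketched as a simple client-side polling loop. `fetch_segment`, `play`, and `session_active` are stand-ins for the real transport and playback machinery (for example an HLS playlist request); none of these names come from the patent.

```python
def stream_session(fetch_segment, play, session_active):
    """Poll for successive audio/video segments until the session ends."""
    index = 0
    while session_active():          # user has not quit the session
        segment = fetch_segment(index)  # poll server for the next segment
        if segment is None:          # performance session has ended
            break
        play(segment)
        index += 1
    return index                     # number of segments played

# Simulated server holding three rendered segments.
segments = {0: b"seg0", 1: b"seg1", 2: b"seg2"}
played = []
count = stream_session(
    fetch_segment=lambda i: segments.get(i),
    play=played.append,
    session_active=lambda: True,
)
```

In a real deployment the per-segment request interval would match the segment duration (e.g. 10 seconds), so playback keeps pace with rendering.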
Typically, the renderer function 306 will generate the audio/video data stream substantially at the same time (contemporaneously) as audio performance data is received from the device associated with the performing user or performing users.
Systems arranged in accordance with certain embodiments of the invention may be provided with a "showreel" function. Such a function is configured to generate a "showreel" of the audio/video data associated with selected performance sessions. This audio/video data can be used, for example, on a "homepage" associated with the communication platform to advertise or otherwise promote certain performance sessions that are currently ongoing or have recently occurred. For example, short sections of video associated with different performance sessions could be cycled through on the home page. These could be "live", that is, relating to currently ongoing performance sessions, or could relate to performance sessions that have occurred previously.
Figure 3 depicts such a showreel function 326.
The showreel function 326 can be configured to operate in the same manner as the client device 302 described above. That is, a showreel stream request 327 is communicated from the showreel function 326 to the playlist service function 303 and is processed in the same way as the streaming request data 307 from the client device 302, such that corresponding audio/video segment data 323 is generated and communicated back to the showreel function 326. This audio/video data can then be stored by the showreel function 326 for later playback or played back immediately (for example on a home screen as described above).
Alternatively, or additionally, rather than storing the video segment data 323, the showreel function 326 can be configured to cache the job recorder data 318 and graphics data 325 associated with a particular performance session, and, subsequently, communicate this data directly to the renderer function 306 for audio/video data generation when showreel audio/video data is to be displayed.
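The caching alternative just described can be sketched as follows. The class and its methods are hypothetical names: rather than storing rendered segments, the showreel caches the (job recorder data, graphics data) pair per session and invokes the renderer only when showreel video is actually needed.

```python
class ShowreelCache:
    def __init__(self, render):
        self._render = render   # stand-in for the renderer function 306
        self._cache = {}        # session_id -> (job recorder data, graphics data)
        self.render_calls = 0

    def record(self, session_id, job_data, graphics):
        # Cache the inputs (cf. job recorder data 318 and graphics data 325)
        # instead of the rendered audio/video segment data 323.
        self._cache[session_id] = (job_data, graphics)

    def showreel_clip(self, session_id):
        # Render on demand, only when showreel video is to be displayed.
        self.render_calls += 1
        job_data, graphics = self._cache[session_id]
        return self._render(job_data, graphics)

reel = ShowreelCache(render=lambda j, g: f"clip({j},{g})")
reel.record("session-1", "job318", "gfx325")
clip = reel.showreel_clip("session-1")
```

The trade-off is storage versus compute: caching inputs is cheaper to store than rendered video, at the cost of re-running the renderer when a clip is requested.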
In certain embodiments the communication platform interface of communication platform systems can include further features.
For example, with reference to Figure 2, the renderer function 306 may be configured to render the depiction of the user avatar graphic 211 and further avatar graphics 206 in the audience area graphic 207 in accordance with attributes associated with the users corresponding to the avatars.
For example, a position of a user's avatar relative to the three-dimensional stage area graphic can depend on a predetermined score. For example, a user's avatar could be depicted within a first row (closest to the stage), a second row, a third row etc. in dependence on their predetermined score. Such predetermined scores may depend on the degree to which a user "engages" with the communication platform. For example, for each user, an engagement score may be maintained by the server-side communication platform software 106, such that the engagement score is increased the more time a user spends viewing performance sessions and/or provides instant messaging comments and/or generates interaction data such as emoticons. In this way, a user is rewarded for increased levels of engagement with the communication platform by having their avatar appear closer to the three-dimensional stage area graphic 203.
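A minimal sketch of this engagement-based placement follows, assuming illustrative score thresholds and weights (none of which are specified in the patent): a user's engagement score grows with viewing time, messages, and emoticons, and a higher score maps to an audience row closer to the stage.

```python
# Assumed thresholds: (minimum engagement score, audience row number).
ROW_THRESHOLDS = [(100, 1), (50, 2), (20, 3)]

def audience_row(engagement_score: int) -> int:
    """Map an engagement score to a row; row 1 is nearest the stage."""
    for min_score, row in ROW_THRESHOLDS:
        if engagement_score >= min_score:
            return row
    return 4  # back row for new or rarely engaged users

def bump_score(score: int, minutes_viewed=0, messages=0, emoticons=0) -> int:
    # Engagement grows with time spent viewing performance sessions,
    # instant messaging comments, and interaction data such as emoticons.
    # The weights here are illustrative assumptions.
    return score + minutes_viewed + 2 * messages + emoticons

# A highly engaged user earns a front-row avatar position.
assert audience_row(120) == 1
```

The renderer could then use the returned row when positioning each further avatar graphic within the audience area graphic.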
As can be seen from Figure 2, the user avatar graphic 211 is in the "front row" of the audience area graphic 207 nearest the three-dimensional stage area graphic 203, which in certain embodiments reflects a high engagement score.
As described above, in certain embodiments, users can generate non-text interaction data, for example emoticons, which the renderer function 306 is configured to render in the video segment data 323 which can then be displayed on the communication platform interface 201.
Figure 4 provides a simplified schematic diagram depicting how this non-text interaction data can be displayed on the communication platform interface 201 in accordance with certain embodiments of the invention.
Figure 4 provides a schematic diagram depicting graphical elements of the communication platform interface 201 described with reference to Figure 2.
However, Figure 4 further depicts how, in the event that a user associated with a first avatar graphic 401 generates interaction data corresponding to a first emoticon, a corresponding first emoticon graphic 402 is rendered on the communication platform interface 201.
In particular, the first emoticon graphic 402 may be rendered in such a way that the first emoticon graphic 402 moves along a first transition path 403 which originates at a first position at or adjacent to that of the first avatar graphic 401 on the communication platform interface 201 and terminates at a second position, away from the first position. When the first emoticon graphic 402 reaches the second position, it may be rendered in such a way that it disappears.
In certain embodiments, the interface 109 provides functionality that enables a first user to generate interaction data that is directed to another specific user. For example, the first user could "send" an emoticon to a second user. An example of this is depicted in Figure 4.
Figure 4 shows a second avatar graphic 404 associated with a second user and a third avatar graphic 405 associated with a third user.
In the event that the second user "sends" an emoticon to the third user, a second emoticon graphic 406 is generated which moves along a second transition path 407. The second transition path 407 originates at a first position at or adjacent to that of the second avatar graphic 404 and terminates at a second position at or adjacent to the third avatar graphic 405.
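The transition-path rendering described above can be sketched as simple linear interpolation between a point adjacent to the sender's avatar and one adjacent to the recipient's, after which the emoticon graphic is removed. The coordinates and frame count below are illustrative assumptions.

```python
def transition_path(start, end, frames):
    """Yield (x, y) positions moving from start to end over `frames` steps."""
    (x0, y0), (x1, y1) = start, end
    for f in range(frames + 1):
        t = f / frames  # interpolation parameter, 0.0 at sender, 1.0 at recipient
        yield (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)

# Emoticon sent from near the second avatar graphic (404) towards a
# point adjacent to the third avatar graphic (405).
path = list(transition_path(start=(10.0, 40.0), end=(30.0, 40.0), frames=4))
# path[0] lies at the sender's position and path[-1] at the recipient's,
# where the emoticon graphic would be removed from the scene.
```

A curved or eased path (for example quadratic easing) would work identically; only the interpolation formula changes.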
In this way, the "sender" of the emoticon (in this case the second user) and the "recipient" of the emoticon (in this case the third user) are identified on the communication platform interface 201 by virtue of the second emoticon graphic 406 following the second transition path 407.
The components of systems of the type described above for implementing a communication platform can be provided by any suitable means well known to those skilled in the art.
The client devices can be provided by any suitable personal computing device including: personal computers, tablets, smartphones, smart watches, suitable games consoles, smart televisions and any other suitable computing device that includes the requisite components for performing the function of a client device. Such components include a display, audio equipment (speakers and, if necessary for a performing user, a microphone), user input/output means (for example a touchscreen and/or keyboard, mouse, touchpad, trackball etc.), a suitable data processor and memory, and a suitable data network connecting means.
The server can be provided by any suitable computing device capable of connecting to a suitable data network and performing the requisite data processing tasks to facilitate operation of the server-side communication platform software. The server can be provided by a single computing device or can be provided by a plurality of suitably connected computing devices (in accordance with distributed "cloud computing" techniques).
The data network that connects components of the system is typically provided by conventional public telecommunication data networks as are well known in the art. For example, the client devices and the server are typically connected to each other by the combination of public computer networks known as the "Internet". The client devices may be connected to the internet via any suitable cellular telephone network or landline telephone network (connected for example via a "Wi-Fi" access point) as is well known in the art.
As will be understood, the components of the server-side communication platform software described with reference to Figure 3 are illustrative of an example implementation only, and the skilled person will understand that the data processing functionality provided by the server-side communication platform software can be implemented by any suitable combination and arrangement of software components. As described above, these data processing tasks can be performed on a single computing device, or across a plurality of connected computing devices.
In certain embodiments, functions described above as being implemented by the server, can be implemented by the client devices. For example, in certain embodiments, the client devices receive the job recorder data 318 and job data 319 (for example from the server) and perform the rendering function themselves.
The client-side communication platform software running on the client devices can be provided by any suitable software. In some examples it can be provided by dedicated software downloaded onto the client devices (for example as an "app" downloaded from an "app store"), or it can be provided by a suitable web browser implementing instructions for providing the communication platform interface from a corresponding web server function running on the server.
In the embodiments described above, as well as providing a means by which users can view a performance by a performing user, the communication platform provides an interface via which users can communicate by exchanging instant messages. In certain embodiments, such a feature may be omitted. In certain embodiments, either additionally or alternatively, suitable means may be provided that enable users to communicate via alternative message types to instant messaging, for example using audio messages or picture/photo messages.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations.
In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, means at least two recitations, or two or more recitations).
It will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting, with the true scope being indicated by the following claims.

Claims (20)

1. A communication platform system for enabling at least a first user to perform in a performance session, said system comprising: a first computing device and a second computing device, wherein said first computing device has running thereon communication platform software configured to control the first computing device to receive audio performance data from a performing user of the first device corresponding to a performance by the performing user, and to communicate the audio performance data to the second computing device, and said second computing device has running thereon communication platform software configured to process the audio performance data in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance, wherein the rendering process is configured to receive first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user, said rendering process configured to generate the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
2. A communication platform system according to claim 1, wherein the first computing device is a first client device, and the second computing device is a server device.
3. A communication platform system according to claim 2, further comprising a plurality of further client devices, each further client device associated with a further user, wherein the communication platform software running on the server device is configured to control the server device to communicate the audio/video data stream to the plurality of further client devices, said plurality of further client devices having running thereon communication platform software for producing audio and video output corresponding to the audio/video data stream.
4. A communication platform system according to any previous claim, wherein the rendering process is configured to generate the audio/video data stream as a plurality of audio/video data segments, each segment associated with a predetermined length of audio performance data.
5. A communication platform system according to claim 4, wherein the rendering process is configured to sequentially generate the audio/video data segments.
6. A communication platform system according to any previous claim, wherein the rendering process is configured to generate video data of the audio/video data stream in accordance with one or more virtual camera operations, wherein each virtual camera operation dynamically changes a perspective of the three-dimensional environment.
7. A communication platform system according to claim 3, wherein the rendering process is configured to receive third graphical data associated with a plurality of further avatars, each further avatar associated with each further user, said rendering process configured to generate video data of the audio/video data stream in accordance with the third graphical data such that the video data comprises a plurality of further graphical representations, each further graphical representation of a further avatar associated with one of the further users.
8. A communication platform system according to claim 7, wherein the rendering process is configured to position the first graphical representation of the avatar associated with the performing user in a first stage area location within the representation of the three-dimensional environment, and to position the further graphical representations associated with the further users in a second audience area location.
9. A communication platform system according to claim 8, wherein the position of each further avatar within the second audience area location is dependent on attribute data associated with the further user with which the further avatar is associated.
10. A communication platform system according to claim 9, wherein the attribute data is an engagement score indicative of a degree to which each further user has engaged with the communication platform.
11. A communication platform system according to any of claims 7 to 10, wherein the communication platform software running on each further client device provides an interface enabling a further user to generate messaging data and to communicate the messaging data to the server device, said rendering process configured to generate video data of the audio/video data stream in accordance with the messaging data such that the video data comprises a first further graphical representation of the messaging data relative to the representation of the three-dimensional environment.
12. A communication platform system according to claim 11, wherein the interface enables a first further user to generate interaction data relating to a second further user and to communicate the interaction data to the server device, said rendering process configured to generate the video data in accordance with the interaction data such that the video data of the audio/video data stream comprises a second further graphical representation of the interaction data displayed relative to a graphical representation of the avatar associated with the first further user and the graphical representation of the avatar associated with the second further user.
13. A communication platform system according to claim 12, wherein the interaction data relates to an emoticon selected by the first further user, and the second further graphical representation of the interaction data is a graphical representation of the emoticon.
14. A communication platform system according to claim 13, wherein the second further graphical representation of the interaction data is displayed relative to the graphical representation of the avatar associated with the first further user and the graphical representation of the avatar associated with the second further user by transitioning along a transition path which starts at a first point and terminates at a second point, wherein the first point is adjacent to a graphical representation of the avatar associated with the first further user and the second point is adjacent to the graphical representation of the avatar associated with the second further user.
15. A communication platform system according to any previous claim, wherein the second computing device is configured to generate the audio/video data stream contemporaneously with receiving the audio performance data.
16. A communication platform system according to any previous claim, further comprising a storage means for storing the audio performance data, first graphical data and second graphical data for subsequent rendering of a subsequent audio/video data stream.
17. A method of implementing a communication platform system, said method comprising: controlling a first computing device to receive audio performance data from a performing user of the first device corresponding to a performance by the performing user; communicating the audio performance data to a second computing device; processing, at the second computing device, the audio performance data in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance, wherein the rendering process comprises receiving first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user, said rendering process comprising generating the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
18. A computing device for use in a communication platform system according to claim 1, said computing device having running thereon communication platform software configured to process audio performance data received from a first computing device and relating to a performance by a performing user in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance, wherein the rendering process is configured to receive first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user, said rendering process configured to generate the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
19. A method of implementing a rendering process for use in a communication platform system, said method comprising: receiving audio performance data from a computing device relating to a performance by a performing user; processing the audio performance data in accordance with a rendering process to generate an audio/video data stream corresponding to a representation of the performance, wherein the rendering process comprises receiving first graphical data associated with a three-dimensional environment and second graphical data associated with an avatar of the performing user, said rendering process comprising generating the audio/video data stream in accordance with the first graphical data and second graphical data such that video data of the audio/video data stream comprises an animation of a scene comprising the first graphical representation of the avatar associated with the performing user in a representation of the three-dimensional environment, and audio data of the audio/video data stream comprises the audio performance data.
20. A computer program comprising instructions which, when implemented on a suitable computing device, controls the computing device to perform a method according to claim 19.
GB2103459.0A 2021-03-12 2021-03-12 Communication platform Pending GB2606131A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2103459.0A GB2606131A (en) 2021-03-12 2021-03-12 Communication platform
PCT/GB2022/050624 WO2022189795A1 (en) 2021-03-12 2022-03-10 Communication platform

Publications (2)

Publication Number Publication Date
GB202103459D0 GB202103459D0 (en) 2021-04-28
GB2606131A true GB2606131A (en) 2022-11-02

Family

ID=75623070

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2103459.0A Pending GB2606131A (en) 2021-03-12 2021-03-12 Communication platform

Country Status (2)

Country Link
GB (1) GB2606131A (en)
WO (1) WO2022189795A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002058010A2 (en) * 2001-01-22 2002-07-25 Digital Animations Group Plc. Character animation system
WO2008079505A2 (en) * 2006-12-21 2008-07-03 Motorola, Inc. Method and apparatus for hybrid audio-visual communication
US20100286987A1 (en) * 2009-05-07 2010-11-11 Samsung Electronics Co., Ltd. Apparatus and method for generating avatar based video message
KR20140031956A (en) * 2010-07-23 2014-03-13 주식회사 플럭서스 Method for creating music video contents at a sensor attached device
WO2022056492A2 (en) * 2020-09-14 2022-03-17 NWR Corporation Systems and methods for teleconferencing virtual environments

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9009603B2 (en) * 2007-10-24 2015-04-14 Social Communications Company Web browser interface for spatial communication environments
US8397168B2 (en) * 2008-04-05 2013-03-12 Social Communications Company Interfacing with a spatial virtual communication environment
US8644467B2 (en) * 2011-09-07 2014-02-04 Cisco Technology, Inc. Video conferencing system, method, and computer program storage device
US20170353508A1 (en) * 2016-06-03 2017-12-07 Avaya Inc. Queue organized interactive participation
CA2953311A1 (en) * 2016-12-29 2018-06-29 Dressbot Inc. System and method for multi-user digital interactive experience

Also Published As

Publication number Publication date
GB202103459D0 (en) 2021-04-28
WO2022189795A1 (en) 2022-09-15

Similar Documents

Publication Publication Date Title
US11023092B2 (en) Shared virtual area communication environment based apparatus and methods
US11494993B2 (en) System and method to integrate content in real time into a dynamic real-time 3-dimensional scene
US6948131B1 (en) Communication system and method including rich media tools
US9292163B2 (en) Personalized 3D avatars in a virtual social venue
US10419510B2 (en) Selective capture with rapid sharing of user or mixed reality actions and states using interactive virtual streaming
US8667402B2 (en) Visualizing communications within a social setting
US20140232819A1 (en) Systems and methods for generating and sharing panoramic moments
US20110210962A1 (en) Media recording within a virtual world
AU2001241645A1 (en) Communication system and method including rich media tools
CN113728591B (en) Previewing video content referenced by hyperlinks entered in comments
CN113711618A (en) Authoring comments including typed hyperlinks referencing video content
JP2022133254A (en) Integrated input and output (i/o) for three-dimensional (3d) environment
WO2019182802A1 (en) Remote view manipulation in communication session
US20110225517A1 (en) Pointer tools for a virtual social venue
WO2017026170A1 (en) Client device, server device, display processing method, and data distribution method
GB2606131A (en) Communication platform
US20240087249A1 (en) Providing multiple perspectives for viewing lossless transmissions of vr scenes
KR20230078204A (en) Method for providing a service of metaverse based on based on hallyu contents
CN114979054A (en) Video generation method and device, electronic equipment and readable storage medium