WO2022000040A1 - Image processing system


Info

Publication number
WO2022000040A1
Authority
WO
WIPO (PCT)
Prior art keywords
array
image
nodes
digital image
node
Prior art date
Application number
PCT/AU2021/050704
Other languages
French (fr)
Inventor
Guven Paternoster
Daniel Visser
Original Assignee
Scimian Pty Ltd
Priority date
Filing date
Publication date
Priority claimed from AU2020902225A0
Application filed by Scimian Pty Ltd
Publication of WO2022000040A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1438 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using more than one graphics controller
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43076 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4524 Management of client data or end-user data involving the geographical location of the client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/025 Services making use of location information using location based information parameters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4092 Image resolution transcoding, e.g. by using client-server architectures
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00 Aspects of the constitution of display devices
    • G09G2300/02 Composition of display devices
    • G09G2300/026 Video wall, i.e. juxtaposition of a plurality of screens to create a display screen of bigger dimensions
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2356/00 Detection of the display position w.r.t. other display screens
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 Aspects of data communication
    • G09G2370/02 Networking aspects
    • G09G2370/022 Centralised management of display operation, e.g. in a server instead of locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications
    • H04L65/401 Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
    • H04L65/4015 Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference where at least one of the additional parallel sessions is real time or time sensitive, e.g. white board sharing, collaboration or spawning of a subconference
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/33 Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks

Definitions

  • The present invention relates to an image processing system, and in particular to an image processing system for processing each frame of an image or sequence of images and subdividing it for display across an array of devices.
  • Crowd engagement at sports matches and music events is an important part of the event experience for fans, players, event organisers and sponsors.
  • Fans and spectators have engaged with the event individually by bringing home-made signs.
  • Fans can engage in a more coordinated manner by singing songs or joining in Mexican waves around the stadium.
  • Jumbo digital display screens are now installed at many stadiums and can increase crowd engagement by showing highlights or replays of the match. Such screens are also used to display video and pictures of spectators around the stadium.
  • LED wristbands have entered the fan engagement space and have been used at music concerts and large scale stadium events.
  • The wristbands can be programmed to emit light of different colours and can be provided to fans at the event.
  • The combined visual effect of thousands of fans wearing the bright LED wristbands creates a basic light show at the event.
  • Such applications allow event organisers to create a more coordinated fan experience but do not couple the fan experience with an interaction with the event organisers.
  • In a first aspect, the invention provides a method for reproducing an image across an array of nodes, comprising the steps of: identifying a plurality of nodes; associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image; determining the position of each node in the array; selecting a digital image for representation across the array; and allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.
  • The nodes may be client devices. Preferably, the nodes are mobile devices. The nodes may be smartphones.
  • The step of allocating a component part of the digital image to each of the nodes in the array comprises: digitally overlaying the digital image onto the array; dividing the digital image into component parts, each component part being overlaid on a node of the array; and allocating the component part of the digital image to the node on which the component part is overlaid.
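  • As a concrete illustration of this overlay-and-allocate step, the following is a minimal sketch in Python using the Pillow imaging library. The patent does not supply code; the function name, file name and grid dimensions are illustrative assumptions.

```python
# Hypothetical sketch of the overlay-and-allocate step: the image is
# scaled (overlaid) onto the array's dimensions so that each node lines
# up with one pixel, and the pixel overlaid on a node's (row, col)
# position becomes that node's component part.
from PIL import Image

def allocate_components(image_path: str, rows: int, cols: int) -> dict:
    """Map each (row, col) node position to the RGB colour overlaid on it."""
    image = Image.open(image_path).convert("RGB")
    resized = image.resize((cols, rows))  # overlay: one pixel per node
    return {
        (r, c): resized.getpixel((c, r))  # component part: a single colour
        for r in range(rows)
        for c in range(cols)
    }

# e.g. a block of 26 rows x 108 seats acting as a 26x108-pixel display
components = allocate_components("logo.png", rows=26, cols=108)
```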
  • The aspect ratio of the image is adjusted to overlay the array.
  • The component part of the digital image comprises at least one pixel of the digital image.
  • The step of determining the position of the client device is performed by receiving an absolute position of the node within the array and comparing the absolute position of the node with the absolute positions of other nodes within the array to determine the relative position of each node within the array.
  • The absolute position is a GPS coordinate.
  • Alternatively, the step of determining the position of the client device is performed by receiving a reference location associated with the node, for example an allocated seat in a venue.
  • Embodiments perform the further step of providing a plurality of digital images in a predefined sequence, each digital image of the plurality of digital images being separately mapped to the nodes of the array.
  • The plurality of images form an animation.
  • The digital image is mapped against all nodes of the array.
  • Alternatively, the digital image is mapped against a portion of the nodes of the array.
  • The component parts of the digital images are stored in sequence in a sequence component file for each node.
  • The sequence of images comprises a predefined sequence of digital images, each image having a defined time duration within the sequence, the sequence component file comprising the defined time duration associated with each image within the sequence.
  • In a further aspect, the invention provides an image processing platform for performing the steps of the first aspect, and comprising the further steps of: communicating with a plurality of client devices across a communications network; and providing the sequence component file to each client device.
  • The sequence component file is configured such that, when it is executed on the client device, the client device displays the component part of the image.
  • Embodiments include a time synchronisation module that determines a time for executing the sequence component file on each client device such that the image is synchronously displayed on each client device in the array. All client devices in the array are time synchronised.
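  • The patent does not specify how the time synchronisation module establishes a common clock; the following is a minimal sketch assuming an NTP-style request/response exchange, with all names illustrative.

```python
# Hedged sketch: estimate the offset between the platform's clock and
# the client's clock, assuming symmetric network delay. The client can
# then convert a scheduled display time into its own local time.
import time

def estimate_offset(request_server_time) -> float:
    """Return the estimated (server clock - client clock), in seconds."""
    t_send = time.time()                 # client clock at request
    server_time = request_server_time()  # server clock reading
    t_recv = time.time()                 # client clock at response
    return server_time - (t_send + t_recv) / 2.0

# A device would then display its component part when its local clock
# reads (scheduled_server_time - offset), so the array switches together.
```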
  • Embodiments include the further step of periodically receiving position data from the client devices, updating the relative position of each client device within the cluster, and updating the component part if necessary.
  • In another aspect, the invention provides a platform for recreating an image across multiple devices by allocating a component part of the image to each device for display on that device.
  • The invention also provides a platform for mapping a digital image across multiple client devices forming an array, where each client device is mapped to a component part of the digital image, the component part mapped to the client device being determined based on the position of the client device within the array, the devices together being configured to produce a representation of the digital image.
  • The invention further provides an image processing system for dividing an image into a plurality of component parts, each component part for display on a client device, the devices together reproducing the digital image, comprising: providing a digital image; identifying a plurality of client devices generally forming an array spread over an area, the plurality of devices together for reproducing the digital image; determining the relative position of each of the client devices with respect to other client devices in the array; and dividing the digital image across the plurality of devices and mapping a component part of the digital image to each of the client devices in dependence on the relative position of the client device, each device for displaying a component part in its relative location to recreate the digital image across the plurality of client devices.
  • The invention also provides a method for dividing an image into a plurality of component parts, the component parts for display on a plurality of nodes, together the nodes reproducing a representation of the image, comprising the steps of: providing a digital image; identifying a plurality of nodes generally forming an array spread across an area, the plurality of nodes together for displaying a combined representation of the digital image; determining the relative position of each of the nodes in the array; and mapping a component part of the digital image to each of the nodes in the array depending on its location within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image by the array.
  • The invention further provides an image processing system for reproducing an image across an array of nodes, comprising: an array processor for identifying a plurality of nodes and associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image, the array processor determining the position of each node in the array; and a video processor for selecting a digital image for representation across the array and allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.
  • Embodiments of the invention provide a system that allows crowds of participants having electronic devices with display screens to create a combined large digital display.
  • The display can be any shape, size or length, in full colour.
  • Each device forms a component part of the display and each individual can display a part of the overall image on their device.
  • One type of suitable device is a smartphone.
  • The smartphone display screens act as pixels of the combined digital display.
  • A server application identifies the devices forming part of the large digital display and processes video images for rendering on the combined display. The component parts of the image for each device, along with the timing for the display on the device, are transmitted to the device in a digital file.
  • The device runs a client application which can determine the position of the device, register with the server to download video data and synchronise the time to ensure that the device displays the video in unison with other devices in the array.
  • Embodiments use GPS coordinates to link individual mobile devices together to generate a light show derived from a video file. Ultimately, the individual screens combine to act as one large video screen which can be used to display video content, which could include colourful logos, pictures, complex patterns or words.
  • Figure 1 shows a representation of an image processing platform.
  • Figure 2 is a first flow diagram showing steps for creating an array.
  • Figure 3 is a second flow diagram showing steps for creating an array.
  • Figure 4 is a flow diagram showing steps performed during video processing.
  • Figure 5 is a flow diagram showing steps performed at a server and client device.
  • Figure 6 is a flow diagram showing time synchronisation.
  • Figure 7 is a flow diagram showing steps for managing GPS position updates.

Detailed Description:
  • Embodiments of the invention provide a platform and system to enable an image or a sequence of images to be reproduced and displayed across the display screens of a plurality of devices which together form an array.
  • The screens of devices in the array together create a combined digital display screen, with the display screen of each device forming a part, or a pixel, of the combined digital display screen.
  • The sequence forms an animation or film clip, with each image forming a frame of the sequence.
  • Each device within the array is allocated part of an image to be displayed across the screens of the devices within the array.
  • The image itself can be resampled to be sub-sampled or super-sampled (in the sub-sampled case, multiple pixels of the original image are blended together for an individual display screen within the array).
  • The timing for displaying the images is time synchronised across all devices. When synchronised and displayed together, the parts of the image on each display screen recreate the entire image across the array.
  • The devices are smartphones having coloured display screens. It is envisaged that other devices may also be used within the array, for example mobile computing devices such as tablet computers or portable computers, or other mobile devices.
  • Example applications for embodiments of the invention include use in sports stadiums where supporters with smartphones sitting within specific blocks, or throughout the entire stadium, can participate in creating and displaying images as part of a light show. Each participating device will receive a part of an image to be reproduced across the block or stadium. When the display screens of participating devices in the block are displayed together the complete image is reproduced across the block or the entire stadium. By synchronizing the timing at which the image is displayed across all participating devices, images can be displayed at predefined times.
  • Applications may include multiple images being displayed in a predefined sequence.
  • The predefined sequence of images forms an animation.
  • By time synchronising the display on devices participating in the array, multiple images and video sequences can be displayed across the array of devices in a time-synchronised way.
  • The resulting effect across a stadium of participating devices is a light show, animation or video presentation within the stadium in which spectators can participate and which can be viewed by spectators in other parts of the stadium.
  • The platform identifies participating devices. These devices are included within the array.
  • The platform selects an image for display across the array.
  • The platform allocates part of the image to each device in the array based on the relative position of the device within the array, for example with the display screens of the devices acting as pixels of the image.
  • The platform may determine the relative position of a device within the array by a number of different methods, including using the GPS location of a device or the seat allocation within a stadium or venue, which are discussed in more detail below. In embodiments, the platform determines time scheduling for each of the devices to display the component parts of the image, so that the image is displayed in unison across the array.
  • The image data and time scheduling data are determined for each participating device, and the relevant image data and time scheduling data are transmitted to each participating device.
  • The platform identifies the array of devices and processes images for reproduction across the array.
  • The platform determines the location of each device within the array.
  • The relative position of devices in the array can be determined based on reference positions, for example seat numbers within a stadium associated with a participating device. In some embodiments, the relative position of devices within the array can be determined based on the absolute location of the participating device, for example using a GPS location received from the device or by other positioning methods.
  • The platform creates an array of participating devices.
  • The platform overlays a video image onto the array.
  • Alternatively, the platform overlays the array onto the video image.
  • The video image is processed and component parts of the image are allocated to each participating device based on the relative location of the device within the array.
  • The location of the devices may be received by the platform from the devices across a communication network.
  • Dedicated software applications (apps) may be installed onto devices and configured to manage communications between the device and the platform. These are discussed in more detail below.
  • Devices communicate with the platform and may provide updates on their location periodically. As a device changes location within the array, the platform can respond by recalculating the relative positions of the devices within the array and reallocating different parts of the image to devices, so that an image can continue to be displayed correctly, in a non-distorted way, across the array.
  • The device location information and time synchronisation enable the platform to reproduce content, including single static images or sequences of images, to produce the effect of a moving image or consecutive images across the array of devices. It is envisaged that the content may include bespoke video files configured for stadiums, events or other gatherings to convey messages, images, complex patterns or words. Content may include advertisements or other event-specific content.
  • The data transmitted to the device for display is provided in the form of a custom dataset of colour information and timing information specific to the relative position of the device within the array, for execution on the device.
  • The dataset is provided in a digital video file.
  • The device may interact with the platform via a specific software application which may use custom data packets for communication with the platform. Further details of the communication between the device and the platform are provided below.
  • The technology can be applied at events or across various industries in which large numbers of devices are located together at a common location.
  • The system may be implemented by event organisers at sports stadiums, music festivals, mass gatherings or other crowds.
  • The technology is executed on the devices of participants attending the event, and at predefined times the participants can activate the display screens of their devices simultaneously to reproduce the image or other content across the crowd.
  • The platform can also provide individual interaction with participants. For example, specific content may be provided to individual participating devices.
  • The platform provides an opportunity for fans and spectators at an event to engage with the event and to be part of a coordinated light show, and also provides an opportunity for event organisers and sponsors to engage with fans through the digital platform.
  • Figure 1 is a schematic diagram of the physical architecture of the image processing platform 1000 and shows its interaction with a client device 2000.
  • The components of platform 1000 may be physically co-located, or may be distributed, located remotely and connected across a communication network. In the example of Figure 1, only a single client device 2000 is shown for simplicity.
  • The platform 1000 is configured to communicate with multiple client devices.
  • Platform 1000 includes memory 1100 (storing user data, image data, program code, event data and other data to enable the processor to implement one or more functional components of the platform), processor 1200, transceiver 1300, user input means 1400 and display 1500.
  • Memory 1100 comprises multiple databases to store data to enable the processor to implement one or more functional components of the system.
  • Venue database 1110 stores data relating to venues, including stadiums, conference centres, theatres or other venues. For each venue, venue database 1110 includes layout and seating arrangement data. The data files include reference information for each seat, for example stand, block, row and seat number. Detailed information, including the physical arrangement and size of blocks, distances between seats, and the locations and sizes of non-seated areas (for example aisles and breaks between tiers) and other physical stadium information, is also included within the venue data. Venue data enables the relative positioning of seats or locations within the stadium to be calculated. A GPS representation of the seating plan and venue may also be stored in the venue database, in which GPS coordinates are provided for each seat and area of the venue. This data allows seats or areas within the venue to be identified and mapped based on GPS coordinates. Venue data files include sufficient information to enable a venue to be digitally modelled. The venue database may be updated to include venue data for new venues or expanded to include different configurations of a specific venue.
  • Venues may use different configurations for different event types, for example sports stadiums may be configured differently for different events.
  • Different venue seating configurations may be used for different events, for example certain seats, blocks, stands or tiers of the stadium may be closed or unallocated during a particular event. For example, during a cricket match, certain seating areas may be closed to accommodate sight screens. In some sports matches, some areas of seating may be closed to help segregation between opposing fans. In other sports matches, blocks may be closed due to lower attendance at those matches. More significant venue configuration changes could be implemented when a venue is reconfigured to host a different type of event, for example when a sports stadium is reconfigured to host a concert.
  • Venue configuration data is also included within the venue database.
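  • To make the venue model concrete, a minimal sketch of the kind of records the venue database could hold follows, written in Python. The schema, field names and types are assumptions for illustration; the patent does not define a data format.

```python
# Hypothetical venue data model: seat references (stand/block/row/seat),
# optional per-seat GPS coordinates, and event-specific configurations
# such as closed blocks, as described above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Seat:
    stand: str
    block: str
    row: str
    number: int
    gps: Optional[tuple] = None  # (latitude, longitude), if modelled

@dataclass
class VenueConfiguration:
    name: str                                       # e.g. "cricket", "concert"
    closed_blocks: list = field(default_factory=list)

@dataclass
class Venue:
    venue_id: str
    seats: list = field(default_factory=list)           # list of Seat
    configurations: list = field(default_factory=list)  # list of VenueConfiguration
```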
  • Database 1100 includes video database 1120.
  • Video database 1120 stores video content files.
  • Video content files include single static digital images or a sequence of digital images, or frames, representing a changing image.
  • The images may represent patterns, words, logos, images or other visual representations.
  • Data relating to the size and colour of the image is stored within the video database and associated to the image.
  • For sequence files, frame rates, sequence patterns and other sequence information is stored with the data file within the video database.
  • Video files may be stored in a variety of file formats well known to those skilled in the art.
  • Video files may include individual identifiers within database 1100. Video files may be associated with subscribers or content providers, for example sports clubs, music acts, companies, venues or sponsors.
  • Audio data may also be associated with the video files.
  • The audio data may be stored with the video data in the video database 1120.
  • Alternatively, audio data may be stored separately from the video data, the video data and audio data being linked together using identifiers or other data mapping techniques.
  • Database 1100 includes user database 1130.
  • User database 1130 stores personal details and account details for subscribers to the platform.
  • The user database includes user IDs and passwords associated with each user account.
  • Contact details for the user, for example an email address or mobile phone number, are stored with the user account. The contact details may be used by the platform to distribute video data to the user when the user is participating in an event.
  • The platform 1000 may allow users to register specific devices to the account.
  • User database 1130 also stores specific device identifiers or configurations associated with the user, for example iPhone or Samsung Galaxy. Different devices have different display configurations, for example screen size or resolution. Such device details can be relevant when the user is participating in an event, and provide the platform with flexibility when determining an array of devices and allocating portions of an image to individual devices within the array.
  • The user database stores information about the events to which the user is subscribed and any user-specific information associated with the event, including the user's location details at the event, for example the user's seat number. User database 1130 may be updated. In embodiments, real-time location information for the user is provided to platform 1000, for example the user's current GPS coordinates are provided to the platform from device 2000. When a user's location changes during the event, platform 1000 may reposition the user within the array and allocate the user a new part of the image for display based on its current relative position within the array. The user database can be updated when the user registers with a new event or during an active location monitoring period, for example to update the user's location during an event.
  • Event database 1140 stores event information. On registering an event with the platform, an event file is created to store details associated with the event.
  • The event database stores venue information associated with the event; the venue information includes details of the venue, including any event-specific stadium configurations.
  • The event database stores timings associated with the event. Further event information, including sponsorship details, sponsor images, competition details and prize information, is stored in event database 1140.
  • The event database includes schedules for video associated with the event.
  • The event database stores references to the video data stored in the video database to be displayed during the event, the timing for the video to be displayed during the event, the location for displaying the video within the venue, and any other event-related information.
  • Associations or links are created between data in the various databases. Associations may be provided by way of reference IDs. For example users are linked to events and/or venues to which they are subscribed, events are linked to venues, videos are linked to events and/or users subscribed to the event etc.
  • Event information within event database may be updated or created.
  • Processor 1200 is configured to implement one or more functional components of the platform.
  • Processor 1200 includes array processor 1210, configured to construct an array of devices to display an image; video processor (visualizer) 1220, configured to map a video image onto the array; and device file allocation processor 1230, configured to allocate component parts of the image to the devices in the array.
  • Processor 1200 comprises transceiver controller 1240, configured to control the transceiver of the platform to receive data from client devices or to transmit data to client devices.
  • The components of processor 1200 are controlled by platform user input device 1400.
  • Input device 1400 is configured to allow platform 1000 to receive user input.
  • User input device may be any suitable user input device, for example a keyboard, touchscreen, or other electronic user input device.
  • User input device may form part of a user interface with display 1500.
  • Array processor 1210 is configured to create a logical map of the array of devices available to reconstruct a specific video, typically in response to user input.
  • The logical map may be a digital representation.
  • Array processor 1210 retrieves venue data from the venue database and event data from the event database.
  • Array processor 1210 retrieves the seating plan for the venue and creates a logical map of the array using the seating plans for the venue.
  • Each seat within the venue is included within the array as being available to present part of an image. For example, in a block of a stadium having rows A to Z and seats numbered 1 to 108, each seat within the block is mapped into an array, the array including 2808 seats (i.e. 26 rows x 108 seats), each seat having a specific location within the array.
  • Each seat can be allocated to an individual pixel of an image, or a pixel may be allocated to multiple seats in a group, depending on the resolution of the image to be recreated across the array and the number of seats in the array.
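  • A minimal sketch of building such a logical array from a seating plan follows, in Python; the identifiers and the node record format are illustrative assumptions, not taken from the patent.

```python
# Hypothetical construction of the logical array for one stadium block:
# rows A-Z with 108 seats per row yield 26 x 108 = 2808 nodes, each with
# a unique node ID and a (row, seat) grid position within the array.
import string

def build_block_array(block: str, rows: str = string.ascii_uppercase,
                      seats_per_row: int = 108) -> list:
    nodes = []
    for r, row_label in enumerate(rows):
        for seat in range(1, seats_per_row + 1):
            nodes.append({
                "node_id": f"{block}-{row_label}{seat}",  # unique node ID
                "position": (r, seat - 1),                # location in the array
            })
    return nodes

array = build_block_array("Block1")
assert len(array) == 2808  # 26 rows x 108 seats
```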
  • Array processor retrieves event data from event database 1140 to determine whether the event includes any variation to seating plans, for example seating restrictions at the venue or other event specific limitations.
  • Alternatively, the logical map of the array is constructed using GPS or other location coordinates associated with user devices known to be at the event.
  • The array processor retrieves location data from user database 1130 for users registered to the event. The location of each registered user at the event is collected. The array processor collects the locations of each device to form an array in which the user devices are arranged according to their relative locations to each other. This arrangement creates an array of devices at an event.
  • The array created by the array processor defines the display available for reconstructing the video image at the event.
  • The array may be constructed based on the seating plan for the venue, where each seat or group of seats is available as a pixel for the image, or based on the location of users, and therefore devices, at the event.
  • The array is essentially the canvas onto which the image will be reconstructed.
  • Video processor (visualizer) 1220 is configured to overlay a selected video image onto an array. Video processor retrieves a selected video file from video database 1120 and receives the array from array processor 1210, generally in response to user input.
  • Video processor 1220 is configured to map the video image onto the array.
  • The video processor includes algorithms to control the position of the video image on the array. The algorithms are configured to move or translate the video around or across the array. Algorithms may also be configured to resize the video, fitting it to the array by stretching or compressing it, which changes the aspect ratio.
  • Video processor 1220 includes further debugging tools to allow further manipulation of the video image on the array, for example removing a percentage of the devices in the array, changing the brightness of individual devices in the array, or changing the seating configuration.
  • Device file allocation processor 1230 is configured to split the image across the array by allocating part of the image to each device or each seat in the array, based on the relative position of the device or seat within the array.
  • The video processor calculates the image file to be allocated to each device in the array.
  • Device allocation processor 1230 uses video processing algorithms or functions configured to smooth colours across the image where necessary.
  • The image may be adjusted.
  • Device file allocation processor divides the video image across the array of devices and allocates a specific component part of the video file to each device according to its position in the array. Typically, the component part is a colour, being a single pixel of the image.
  • Device file allocation processor 1230 computes the sequence data for each individual device. Where the video is a sequence of images, the device allocation processor calculates the sequence of colours and the timing for each device within the array.
  • Sequence files are stored in database 1100.
  • Video files for each device in the array are stored in event data database 1140.
  • The video files are also allocated against the subscriber allocated to the device position in the array.
  • Platform 1000 includes a transceiver 1300 for communicating with subscriber devices 2000 across communication network 3000.
  • Transceiver 1300 is controlled by transceiver controller 1240.
  • Transceiver 1300 is configured to receive data from client device 2000 including registration data for events, GPS data or other data defining location. Further data may also be received from device 2000 by transceiver 1300.
  • Transceiver 1300 is also configured to transmit data to client devices 2000. Data may include video data specific to each device, general event data or other data relating to subscribers.
  • Platform 1000 is configured to be user controlled by user input device 1400.
  • Input device 1400 is configured to receive user input.
  • The user input device may be any suitable user input device, for example a keyboard, touchscreen, or other electronic user input device.
  • The platform includes display 1500 for presenting data from processor 1200 to a user. The platform is configured to enable users to interact with the processors and data via user input and display 1500.
  • Using video processor 1220, users may select and manipulate video images across arrays representing devices, using the algorithms, via input device 1400.
  • In the example of Figure 1, a single client device 2000 is shown. In applications, many client devices communicate with platform 1000. Each client device 2000 includes transceiver 2500 for communication with platform 1000. Client device 2000 can be a mobile communications device, for example a smartphone, or a portable computer device, for example a tablet device.
  • The client device can install and run a software application 2100 (app) associated with platform 1000.
  • The software application may be a smartphone app. Interaction of client device 2000 with platform 1000 is controlled through software application 2100. Alternatively, the device may connect directly with the platform, for example via the internet.
  • Client device 2000 includes user input device 2600 to enable the smartphone to receive user input from a user.
  • Input device 2600 can be in the form of a touchscreen but can also be in the form of a keyboard, mouse, or microphone for receiving voice activated commands from a user.
  • Software application 2100 is executed by processor 2800 and controlled by user input via user input device 2600.
  • Client device 2000 includes memory 2700 for storing software application 2100.
  • Memory 2700 also stores video data and other data received from platform 1000. GPS data for the device can be stored in memory 2700.
  • Software application wirelessly receives data files, including video sequence files, and requests, including location updates or time requests, transmitted from platform 1000.
  • Application 2100 interacts with GPS module 2200, display 2300, device clock 2400, transceiver 2500, user input device 2600, memory 2700 and processor 2800.
  • Display 2300 of the client device can be the screen of the smartphone.
  • Display 2300 is configured to display video files or other video data received from platform 1000.
  • The video data may be colour data, the data defining a colour to be displayed by the client device.
  • The display screen emits the colour defined by the colour data.
  • Other data may be displayed, for example an image, and the screen may be split into different regions.
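  • A minimal sketch of how the client application could drive display 2300 with such colour data follows; the frame format and the show_colour callback are assumptions for illustration, as the patent does not specify a client API.

```python
# Hypothetical playback loop on the client: fill the screen with each
# received colour for its specified duration. show_colour stands in for
# a platform-specific call that paints the whole display one colour.
import time

def play(frames, show_colour) -> None:
    """frames: list of (hex_colour, seconds) pairs received from platform 1000."""
    for colour, seconds in frames:
        show_colour(colour)   # e.g. fill the screen with "#FF8800"
        time.sleep(seconds)   # hold this frame until the next is due

# play([("#FF0000", 10.0), ("#FFFFFF", 10.0)], show_colour=print)
```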
  • GPS module 2200 calculates the GPS location of the client device. The GPS location can be stored in memory 2700.
  • Communication network 3000 facilitates communications between platform 1000 and client device 2000. It is envisaged that communications network 3000 may be a single mobile communications network to which platform transceiver 1300 and client device transceiver 2500 connect directly or a combination of multiple networks.
  • The client device transceiver 2500 may be a short-range radio transceiver, for example a Bluetooth transceiver, which connects into a mobile communications network.
  • An event-related database is created within platform 1000.
  • The event-related database may be associated with an event organiser, for example a company, a sponsor, a sports team, a venue or an event.
  • The event-related database includes the database components discussed above with respect to Figure 1, including venue data 1110, video data 1120, user/subscriber data 1130 and event data 1140.
  • Management of the event-related database can be controlled via a user interface.
  • Access controls may be applied to the event-related database, for example by providing passwords or other security mechanisms.
  • The user interface may connect to the platform 1000 across a communications network or may be directly connected to the platform. The user interface provides a user with access to populate the event-related database with data.
  • Event data contains the name and time of the event along with prize information. It also links to the venue data and the video data. User data identifying subscribers to the event is also linked. For example, a sponsor could create three sets of event data where everything is the same except the start time, if the same show was to be activated three times during a sports match, such as before, at half time and afterwards.
  • Data may be accessed and retrieved from other areas of the platform or other connected databases, for example if venue data is stored in a generic region of the database, or may be entered via the user interface.
  • The platform constructs the array for an event.
  • The array is a digital logical model of the positions of the devices at the event.
  • The array defines the combined display screen that is created by the combination of the devices and on which the image can be displayed at the event.
  • The array processor creates a seating plan for the event.
  • The seating plan is typically obtained from stadium databases or event databases.
  • The model assumes that every seat is occupied by a subscriber participating in the event with a smartphone.
  • The array creation process may be controlled via the user interface.
  • Array processor 1210 receives input information identifying an event for which an array is to be created.
  • The array processor creates an array data file associated with the event.
  • The array processor retrieves event information, including identification of the venue.
  • Venue data, including seating plans and any specific venue configuration data associated with the event, is also retrieved at 210 and 215. This data may be input directly by a user via the user interface or may be retrieved from venue database 1110 or event database 1140.
  • The venue data includes identification of the venue seating plan and other locations allocated to ticket holders at the event.
  • The venue data may define areas of the venue with allocated seating, areas with unallocated seating, standing areas and non-seating areas, including the pitch, exits, aisles etc.
  • The venue data may also include dimension information and information defining distances between areas of the venue or seats.
  • Array processor 1210 uses the venue data to build a logical map of the array of nodes.
  • The nodes are locations allocated to users attending the event at the venue.
  • For a fully seated venue, each node represents a seat in the venue, determined from the seating plan.
  • The array processor may allocate a predefined number of nodes into a designated standing area, to simulate the number of people in the area during the event.
  • The array is a logical representation of the positions of seats or subscribers for an event, with each node being the location of an attendee at the event.
  • The array captures the relative positions of the nodes within the array, that is, the relative position of each node with respect to the other nodes.
  • Each node is individually allocated a unique node ID.
  • Each node is identified by its unique seat location, for example stand, block, row, seat.
  • Alternatively, each node may be identified by GPS location.
  • Creation of the array presents a logical map of the positions at which subscribers are expected to be located during the event.
  • The server can identify the location of the subscriber within the array. The subscriber can be matched to the relevant node ID corresponding to the seat.
  • Each node within the array is allocated a part of the digital image.
  • The part of the digital image may be a pixel or may represent a group of pixels of the image.
  • Where the number of pixels in the image exceeds the number of devices in the array, multiple pixels in the original image are averaged to determine the final value for a grid cell which maps to a single node in the array. The data for each node is therefore the average across multiple pixels in the original image.
  • When each node displays its contributory part of the image synchronously, the digital image is recreated.
  • The array processor can update the logical array to better capture the location of subscribers at the event and so provide a more accurate map of subscribers at the event. For example, certain nodes within the array may be vacant or the position of nodes within the array may be changed. In these situations, the array is updated and the image can be remapped to the updated array.
  • The array processor may retrieve ticket sales information to identify which seats have been sold and will be occupied at the event. This information is used to update the array to provide a more specific representation of the locations of subscribers at the event. Seats which have not been sold may be excluded from the array, since it is expected that the seat will be vacant and so a subscriber with a device will not be positioned in the unsold seat.
  • In embodiments, the array processor builds an array using GPS locations of subscriber devices. This process can be run during an event in which participants are located together within a venue or co-located at another location, for example a public gathering. The process may be used in combination with a venue seating plan or venue dimensional data to confirm which seats or areas are occupied by particular subscribers. In such embodiments, the GPS locations of the devices are compared with the GPS coordinates of the seats or areas of a stadium to determine the seat that each device is occupying. Alternatively, the GPS process may be used independently to map an array of subscribers at an event.
  • The array processor receives input information identifying an event for which a GPS array is to be created.
  • The array processor creates a GPS array data file associated with the event.
  • The array processor determines whether any venue information is associated with the event at 305, for example whether a seating plan array has already been built for the event. If so, the array is retrieved from the event database or venue database.
  • The array processor retrieves confirmation of which subscribers are attending the event.
  • Event subscription data may be stored in event data database 1140 and/or within user database 1130.
  • The array processor retrieves the GPS coordinates of those subscribers attending the event from user database 1130.
  • The array processor constructs a GPS array of nodes where each node represents the location of a subscriber based on its GPS coordinates. The GPS array provides a logical representation of the relative locations of subscribers at the event.
  • When used in combination with a seating plan, the GPS array can be overlaid onto the seating plan at 320 to confirm which subscribers are sitting in which seats.
  • Alternatively, the GPS data is used independently of the seating plan to create a crowd cluster map which maps the location of subscribers in a crowd. In this case, the GPS data effectively becomes the seating plan.
  • Each device is allocated a node ID at 325.
  • The software application on the client device periodically pushes the client device GPS coordinates to platform 1000.
  • The client device GPS coordinates are updated in the user data database 1130.
  • Array processor 1210 periodically retrieves the GPS coordinates of the subscribers attending the event from user data database 1130. These coordinates provide a more real-time confirmation of the location of users at the event, to account for movement of subscribers during the event, for example seat changes.
  • The array processor then updates the array based on the retrieved GPS coordinates.
  • The GPS array is updated periodically to identify movement of subscribers during the event.
  • The software application may be configured so that updated GPS data is pushed to the platform by the client device when the client device changes location, for example when the software application determines that the GPS coordinates have changed, or have changed by more than a predefined threshold. In other embodiments, the software application can be configured to provide GPS coordinates to platform 1000 at predefined time intervals. GPS coordinate updates provide a real-time representation of the relative locations of subscribers at the event.
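  • As an illustration of the threshold-based push just described, here is a minimal client-side sketch in Python; the threshold value, the distance approximation and the function names are all assumptions, not taken from the patent.

```python
# Hypothetical client logic: report a new GPS fix to platform 1000 only
# when the device has moved more than a predefined threshold since the
# last reported position.
import math

THRESHOLD_METRES = 2.0  # assumed threshold, roughly one seat width

def metres_between(a, b) -> float:
    """Planar small-distance approximation; adequate within a venue."""
    dlat = (a[0] - b[0]) * 111_320
    dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def maybe_push_update(last_sent, current, push):
    """Push `current` (lat, lon) if it differs enough from `last_sent`."""
    if last_sent is None or metres_between(last_sent, current) > THRESHOLD_METRES:
        push(current)       # transmit the new coordinates to the platform
        return current      # this fix becomes the new reference
    return last_sent
```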
  • Figure 7 shows the process for managing GPS updates at the platform, using GPS data in combination with seating plans and stadium data.
  • The process of Figure 7 is executed during an event.
  • The GPS-based array creation process is initiated at platform 1000.
  • The array processor retrieves a registered GPS position for each participating subscriber at 715, from user data stored in user data database 1130 within database 1100.
  • The array processor retrieves seat coordinates and coordinates of other registered areas for the event. The seat coordinates and coordinates of other registered areas are compared with the GPS coordinates of the devices.
  • The array processor may apply an accuracy tolerance when comparing the GPS coordinates of the devices to the GPS locations of the seats at 730.
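  • A minimal sketch of this tolerance comparison follows; the tolerance value, the distance approximation and all names are illustrative assumptions.

```python
# Hypothetical seat matching: each device GPS fix is matched to the
# nearest registered seat coordinate, and the match is accepted only if
# it falls within the tolerance.
import math

def distance_m(a, b) -> float:
    """Planar small-distance approximation between two (lat, lon) fixes."""
    dlat = (a[0] - b[0]) * 111_320
    dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def match_device_to_seat(device_gps, seat_coords, tolerance_m=3.0):
    """Return the ID of the nearest seat, or None if outside the tolerance."""
    seat_id, seat_gps = min(seat_coords.items(),
                            key=lambda item: distance_m(device_gps, item[1]))
    return seat_id if distance_m(device_gps, seat_gps) <= tolerance_m else None
```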
  • The updated GPS positions are stored in database 1100 at 735, updating the previous GPS positions from 710. Preferably, this GPS location update process is run periodically during an event.
  • A best-fit bounding box is created based on the GPS positions of the devices.
  • The GPS locations of all subscribing devices are retrieved from database 1100, and a best-fit bounding box is created around the GPS coordinates of the devices to define the boundary of the array.
  • The bounding box defines the array, and the array is then used to display the video.
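  • The following is a minimal sketch of this bounding-box step, assuming the array is then treated as a regular grid inside the box; the grid dimensions and function names are assumptions.

```python
# Hypothetical best-fit bounding box over the device GPS fixes, with
# each device then placed on a rows x cols grid inside that boundary.
def bounding_box(fixes):
    """fixes: list of (lat, lon); returns ((min_lat, min_lon), (max_lat, max_lon))."""
    lats = [lat for lat, _ in fixes]
    lons = [lon for _, lon in fixes]
    return (min(lats), min(lons)), (max(lats), max(lons))

def place_on_grid(fix, box, rows, cols):
    """Map one (lat, lon) fix to a (row, col) cell within the bounding box."""
    (lat0, lon0), (lat1, lon1) = box
    r = round((fix[0] - lat0) / ((lat1 - lat0) or 1e-9) * (rows - 1))
    c = round((fix[1] - lon0) / ((lon1 - lon0) or 1e-9) * (cols - 1))
    return r, c
```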
  • Each node is individually allocated a unique node ID.
  • The array for the event may be stored in event database 1140.
  • Video processor 1220 processes a digital video image to be reproduced across the array.
  • The process for processing the video image is now described with reference to Figure 4.
  • The video processor retrieves a video data file for reproduction across an array of devices.
  • The video file may represent patterns, words, logos or images.
  • The video file may be a single image or may include a sequence of images.
  • The video file may include time information defining a period of time for which the image, or each image in a sequence, should be displayed. The timing data is important for sequences of images which are designed to create a changing visual image across the array.
  • The array of devices for reproducing the image at the event is retrieved from database 1100. The video processor then processes the image for reproduction across the array.
  • video processor 1220 decodes each image or frame of the video into a full image in memory.
  • the image is compared with the array and the image in memory is resampled to match the aspect ratio of the display array at 415.
  • the aspect ratio corrected memory image is resampled into a grid representing the display array.
  • the number of pixels in the image may exceed the number of devices in the array. In this situation, multiple pixels in the original image are averaged to determine the final value for a grid cell which maps to a single node in the array. Therefore, the data for each node is the average across multiple pixels in the original image. For a given original image, an array having a higher number of nodes can reproduce the image at a higher resolution than an array having fewer nodes, since in an array with fewer nodes a greater number of pixels from the original image must be averaged for each node. A minimal sketch of this averaging follows this item.
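The averaging described above can be sketched as follows, assuming the source image is held as a numpy array; the function name and the simple trim-to-whole-blocks handling are illustrative assumptions.

```python
import numpy as np

def resample_to_array(image, node_rows, node_cols):
    """image: (H, W, 3) uint8 array. Returns (node_rows, node_cols, 3) block averages."""
    h, w, _ = image.shape
    bh, bw = h // node_rows, w // node_cols
    # Trim to a whole number of blocks, then average each block of pixels
    # so that each node's value is the mean of the pixels mapped onto it.
    trimmed = image[: bh * node_rows, : bw * node_cols].astype(np.float64)
    blocks = trimmed.reshape(node_rows, bh, node_cols, bw, 3)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)
```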
  • arrays having different sizes or different number of nodes will result in different video data being allocated to the nodes to represent the same original image. For example in an array having a greater number of nodes, a node will represent the data of fewer pixels compared with an array having fewer nodes, in which the video data for each node will be the average across a greater number of pixels.
  • multiple images are displayed across the array in sequence. For example, a first image may be displayed for 10 seconds, followed by a second image for a further 10 seconds. In other examples, moving images are displayed by the array, which requires each device to display different video data in a faster, specific time sequence.
  • video processor runs separate video processing on each image in the sequence to create a sequence of video data for each node in the array.
  • Time data associated with each image in the sequence is stored with the video data for each node. For each video file a display time is associated with the video data.
  • Node allocation processor creates a separate video file for each node.
  • the file includes video data for display by each node and the time sequence associated with the data, for example display time for each video image or pixel of a video image.
  • An example of a typical video file for a node includes a sequence of colour values and the display time associated with each (a purely hypothetical illustration follows below).
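The patent text does not reproduce the example file at this point. The following is a hypothetical illustration, consistent with the surrounding description, of what a per-node sequence file might contain; every field name and the ID scheme are assumptions.

```python
# Hypothetical per-node sequence file (illustrative only; the patent does not
# specify a format here). Each step pairs a colour with a display duration.
node_sequence_file = {
    "node_id": "BLOCK-G/ROW-Y/SEAT-17",   # assumed node ID scheme
    "clock_offset_ms": -120,              # device clock vs server master clock
    "show_start": "2021-06-26T19:30:00Z", # assumed start-time representation
    "steps": [
        {"colour": "#FF0000", "duration_ms": 10000},  # e.g. red for 10 s
        {"colour": "#FFFFFF", "duration_ms": 10000},  # then white for 10 s
    ],
}
```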
  • a subscriber downloads a dedicated software application (app) 2100 associated with platform 1000 onto a mobile device.
  • the software application may be a smartphone app.
  • the application may be accessed via a cloud based application platform, for example the Apple App Store, or other suitable application platform.
  • Application 2100 is loaded onto subscriber device 2000 and stored within memory 2700.
  • application 2100 communicates with platform 1000 across a communication network.
  • the subscriber management system may form part of platform 1000.
  • Subscriber management system manages the subscriber account.
  • the subscriber management system acquires details from the subscriber, including contact details and device details, at 405.
  • the user can initiate the software application and interact with the software application via a user interface on the mobile device.
  • After activating the software application, the software application provides access to a list of events for which the platform is providing a light show. These events may include sporting events played with live fans, sponsorship events, music concerts or festivals, or other types of mass gatherings, for example protests, art installations and community gatherings.
  • the user may register for any event they are attending at 510.
  • subscriber data is provided to the platform by the software application. Typically, the subscriber data is stored in user data area 1130 of database 1100.
  • On selecting an event, the user is presented with information about the event within software application 2100.
  • the user can provide details of their allocated seat at 515 to the server application, for example Block G Row Y Seat 17.
  • the GPS location of the device can be determined and provided at 520 to the server application.
  • the user can select to use GPS location to identify their location during the event.
  • This seat information is stored within the user database 1130 at the platform along with other user information. This user information can be linked to event data and also to venue data within database 1100.
  • processor 1200 retrieves positional data related to the subscriber location at the event (for example the subscriber’s seat).
  • Processor 1200 references the subscriber location against the array data for the event to identify the position of the subscriber within the array for the event.
  • the node ID associated with the position is retrieved at the server and this node ID is then used to retrieve the video sequence data file associated with the subscriber location for the event.
  • This video sequence data file is allocated to the subscriber.
  • This data for the event is downloaded to the client device at 540.
  • the data relates to the node allocated to the subscriber only. So subscribers are provided with different data depending on their position within the array.
  • the video sequence data file within video database 1120 is linked to the subscriber within the user database 1130 and event database 1140.
  • the link to the event database 1140 enables the platform to communicate with the subscriber in the event that any changes to the sequence data file are required. For example, if the video show at the event is changed after allocation, updated sequence data files can be transmitted to the subscriber.
  • the platform can track the number of nodes in the array allocated to active subscribers.
  • the array processor may update the array to account for unsold tickets, and the video processor may recalibrate the video images and update the sequence data files.
  • each device in the array at the event should display the images synchronously so that the images are correctly reproduced across the array at the event.
  • Platform 1000 executes a time synchronization process with the device at 545 to ensure that the sequence files are displayed at the correct time at the event by all devices so all devices display the video data in sync.
  • the time synchronization process is performed for each device in the array in order that the images are displayed time synchronously by all devices in the array.
  • the synchronization process may be run at the time when the user subscribes to the event, at a predefined period before the event or during the event.
  • the software application 2100 on the client device retrieves the local time on the device defined by device clock 2400 at 605.
  • the time is sent to platform 1000 in a timing packet at 610.
  • the platform retrieves the server master time at 615.
  • Server master time may be retrieved from a dedicated clock for platform 1000. Alternatively, the server master time may be retrieved from an external clock.
  • the difference between the device clock and server master clock is calculated at 620.
  • the packet return time offset is measured to calculate the transmission time between the platform and the subscriber device (a sketch of this offset calculation is given below, after this group of items).
  • the system determines the time difference between the platform clock and the subscriber device clock at 630.
  • the time difference is used to calculate the local time for displaying the video data and initiating the sequence on the subscriber device. Any time difference is stored with the user data and included within the sequence data file for the subscriber device.
  • the time difference between the local clock and the server master clock may be calculated at platform 1000 or at client device 2000.
  • the sequence file including the offset time is transmitted to device 2000.
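The exchange described above can be sketched as a standard round-trip offset estimate, under the usual assumption that network delay is symmetric. The function name and packet mechanics are illustrative; the step numbers from Figure 6 are noted in comments.

```python
import time

def estimate_clock_offset(send_timing_packet):
    """Estimate (server_time - device_time) in seconds.

    send_timing_packet(t_device) is assumed to send the device's local time
    to the platform and return the server master time it reports back.
    """
    t0 = time.time()                      # device local time at send (605/610)
    server_time = send_timing_packet(t0)  # platform master time (615)
    t3 = time.time()                      # device local time at receipt
    round_trip = t3 - t0                  # packet return time offset (625)
    # Assume the one-way transmission time is half the round trip, so the
    # server's reading corresponds to device time t0 + round_trip / 2.
    return server_time - (t0 + round_trip / 2.0)
```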
  • After the sequence data and timing data are provided to the client application, the client application has the data required to run the video sequence and display the data on the screen of the device.
  • the software application 2100 executes the file by displaying the video data on the device screen at 555 at the time defined in the video sequence data file.
  • the time synchronization process is performed by all devices in the array. Since all devices have stored the time difference between the local clock and the server master clock with the sequence data, display of the data by the devices is time synchronized to recreate the image across all devices in the array.
  • the client application may run the video sequence automatically at the designated time.
  • the users can display their device screens to create the digital image across the devices in the array.
  • event organisers can communicate with subscribers via the platform 1000 and software application 2100.
  • the video sequence files can be updated at the platform 1000 and transmitted to the subscriber devices over communication network 3000 (shown as step 450 in Figure 5). Updates to the video sequence files include changes in start time, changes to timing data within the sequence, changes to images within the sequence or inclusion of new images within the sequence. This allows event organisers to modify the light show.
  • changes to the video sequence are created at platform 1000 via the user interface.
  • the updated video sequence is created by video processor 1220 using the process described above.
  • An updated or new sequence file is created for each node in the array.
  • Platform 1000 identifies all registered subscribers to the event from user database 1130.
  • the node is identified and the updated video sequence file is allocated to the user.
  • Platform retrieves contact information for the subscriber device from user database. Platform transmits the updated video sequence file to subscribed users.
  • the connection between platform 1000 and software application 2100 also provides event organisers the opportunity to gather qualitative data from subscribers to events. Any fan can participate in the lightshow by downloading the software application, subscribing to the event and entering their seat number. Event organisers can also run competitions or lotteries during the event or related to the event. Subscribers can participate by providing additional information. Event organisers can tailor specific questions to their fans to gather subscriber data relevant to them.
  • software application will generate a reminder to the user that the event is due to commence.
  • the reminder may be generated from within the software application. Alternatively, the reminder may be transmitted from the platform.
  • Subscribers may be prompted to open the software application at the start of the event and allow the software application to run the video sequence file.
  • the data file may be triggered to play automatically or by manual input.
  • Before and during an event, event organisers can communicate with subscribers through the software application. Event organisers may send reminders to subscribers or send other promotional material.
  • Event organisers can also interact with subscribers during the event to promote engagement and to encourage participants to display their devices at the relevant time to join the light show. It is envisaged that a countdown might be provided before the start of the show. This may be part of the video sequence file and displayed on the screen of the client device to alert the subscriber that the light show is about to commence. Further notifications and reminders may be provided at the event across other channels, for example over stadium video screens or event audio systems.
  • the embodiments described above generally describe large scale gatherings of several thousand participants, in which the system creates logical arrays of subscriber devices for displaying video data based on the location of subscriber devices.
  • the overall effect of the subscribers displaying their devices simultaneously within a stadium or venue is the reproduction of a video image or sequence with all devices having a participatory role in the overall image.
  • Further embodiments of the invention allow different user interactions in which sequence files can be provided to participating devices to create games or other interactive events.
  • This sequence can be branded as: Spin the bottle, Next drinking game, Draw the short straw, Truth or dare, or a generic random selector for 'next turn' on any game.
  • Local light show: The host creates an event. Participants join the local event. The host selects from a range of light shows such as Strobe, Colour cycle, Disco flash, or mixed. The host selects a start time and a duration in minutes, or selects unlimited duration. The server creates custom show data and all phones download the local show. All of the phones start to flash according to the show data. The event host can then start or stop the show, or change the parameters, and new data is sent out. A minimal sketch of such show data follows below.
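A minimal sketch of what the "custom show data" for a local light show might look like is given below. The show types follow the list above; the data format, step interval and function names are assumptions.

```python
import random

# Hypothetical show generators: map a step index to a colour. All assumed.
SHOW_TYPES = {
    "strobe":       lambda step: "#FFFFFF" if step % 2 == 0 else "#000000",
    "colour_cycle": lambda step: ["#FF0000", "#00FF00", "#0000FF"][step % 3],
    "disco_flash":  lambda step: random.choice(["#FF00FF", "#FFFF00", "#00FFFF"]),
}

def make_show_data(show_type, start_time_s, duration_s, step_ms=250):
    """Build a list of (time_s, colour) steps every participating phone plays.

    An 'unlimited' show could instead regenerate steps in a rolling window.
    """
    colour_of = SHOW_TYPES[show_type]
    steps = int(duration_s * 1000 // step_ms)
    return [(start_time_s + i * step_ms / 1000.0, colour_of(i)) for i in range(steps)]
```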
  • Embodiments of the invention provide a platform and a software application which engages crowds attending events and enables participants to use their personal mobile devices to broadcast synchronised lightshows.
  • the platform and software application can be used to display colourful sponsor logos, pictures, patterns or words. It allows crowds to participate in something coordinated and entertaining during an event and increases fan engagement.
  • Embodiments also provide an exciting new platform and a more engaging way for sponsors to connect with fans and spectators using digital technology, and provide sporting codes and their sponsors a new and engaging channel to promote their brand and/or messages at venues during matches or other events. Embodiments also have applications in the greater entertainment industry, incorporating musical concerts, festivals, corporate events and even mass gatherings.
  • the platform includes the functionality to update the show at any time by changing images to be displayed and the timing for presenting images. Connectivity between the software application on the device and the platform allows embodiments of the system to respond to movement of participants during the event without distorting the image by updating video sequences for specific participating devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An image processing system for reproducing an image across an array of nodes, comprising: an array processor for identifying a plurality of nodes and associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image, the array processor determining the position of each node in the array; a video processor for selecting a digital image for representation across the array and allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.

Description

IMAGE PROCESSING SYSTEM
Field of Invention
[1] The present invention relates to an image processing system, and in particular to an image processing system for processing each frame of an image or sequence of images and subdividing it for display across an array of devices.
Background
[2] Crowd engagement at sports matches and music events is an important part of the event experience for fans, players, event organisers and sponsors. Traditionally, fans and spectators have engaged with the event individually by bringing home made signs. Fans can engage in a more coordinated manner by singing songs or joining in Mexican waves around the stadium. Jumbo digital display screens are now installed at many stadiums and can increase crowd engagement by showing highlights or replays of the match. Such screens are also used to display video and pictures of spectators around the stadium.
[3] Fans and spectators bringing mobile devices to the event sometimes use the lights on their devices to illuminate the stadiums. Although this provides fans with some level of engagement with other spectators and the event generally, it is typically on an individual scale and does not increase engagement between fans and event organisers.
[4] More recently, LED wristbands have entered the fan engagement space and have been used at music concerts and large scale stadium events. The wristbands can be programmed to emit different coloured light and can be provided to fans at the event. The combined visual effect of thousands of fans wearing the bright LED wristbands creates a basic lightshow at the event. Such applications allow event organisers to create a more coordinated fan experience but do not couple the fan experience with an interaction with the event organisers.
Summary of the Invention
[5] In a first aspect the invention provides a method for reproducing an image across an array of nodes, comprising the steps of: identifying a plurality of nodes; associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image; determining the position of each node in the array; selecting a digital image for representation across the array; allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.
[6] The nodes may be client devices. Preferably the nodes are mobile devices. The nodes may be smartphones.
[7] Preferably the step of allocating a component part of the digital image to each of the nodes in the array comprises: digitally overlaying the digital image onto the array; dividing the digital image into component parts, each component part being overlaid on a node of the array; and, allocating the component part of the digital image to the node on which the component part is overlaid.
[8] In embodiments the aspect ratio of the image is adjusted to overlay the array.
[9] In embodiments the component part of the digital image comprises at least one pixel of the digital image.
[10] In embodiments the step of determining the position of the client device is performed by receiving an absolute position of the node within the array and comparing the absolute position of the node with the absolute positions of other nodes within the array to determine the relative position of each node within the array.
[11] In embodiments the absolute position is a GPS coordinate.
[12] In embodiments the step of determining the position of the client device is performed by receiving a reference location associated with the node, for example the external reference being an allocated seat in a venue.
[13] Embodiments perform the further step of providing a plurality of digital images in a predefined sequence, each digital image of the plurality of digital images being separately mapped to the nodes of the array.
[14] In embodiments, the plurality of images form an animation.
[15] In embodiments the digital image is mapped against all nodes of the array.
[16] In embodiments the digital image is mapped against a portion of the nodes of the array.
[17] In embodiments the component parts of the digital images are stored in sequence in a sequence component file for each node.
[18] Preferably the sequence of images comprises a predefined sequence of digital images, each video image having a defined time duration within the sequence, the sequence component file comprising the defined time duration associated with each image within the sequence.
[19] In a second aspect the invention provides an image processing platform for performing the steps of the first aspect and comprising the further steps of: communicating with a plurality of client devices across a communications network and providing the sequence component file to each client device.
[20] Preferably the sequence component file is configured such that when it is executed on the client device, the client device displays the component part of the image.
[21] Embodiments include a time synchronization module, the time synchronisation module determining a time for executing the sequence component file on each client device such that the image is synchronously displayed on each client device in the array. All client devices in the array are time synchronized.
[22] Embodiments include the further step of receiving position data from the client device periodically, updating the relative position of each client device within the cluster and updating the component part if necessary.
[23] In a third aspect the invention provides a platform for recreating an image across multiple devices by allocating a component part of the image to each device for display on the device.
[24] In a fourth aspect the invention provides a platform for mapping a digital image across multiple client devices forming an array where each client device is mapped to a component part of the digital image, the component part mapped to the client device is determined based on the position of the client device within the array, together the devices configured to produce a representation of the digital image.
[25] In a fifth aspect the invention provides an image processing system for dividing an image into a plurality of component parts, each component part for display on a client device, together the devices reproducing the digital image, comprising: providing a digital image; identifying a plurality of client devices generally forming an array spread over an area, the plurality of devices together for reproducing the digital image; determining the relative position of each of the client devices with respect to other client devices in the array; dividing the digital image across the plurality of devices and mapping a component part of the digital image to each of the client devices in dependence on the relative position of the client device, each device for displaying a component part in its relative location to recreate the digital image across the plurality of client devices.
[26] In a sixth aspect the invention provides a method for dividing an image into a plurality of component parts, the component parts for display on a plurality of nodes, together the nodes reproducing a representation of the image, comprising the steps of: providing a digital image; identifying a plurality of nodes generally forming an array spread across an area, the plurality of nodes together for displaying a combined representation of the digital image; determining the relative position of each of the nodes in the array; mapping a component part of the digital image to each of the nodes in the array depending on its location within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image by the array.
[27] In a seventh aspect the invention provides an image processing system for reproducing an image across an array of nodes, comprising: an array processor for identifying a plurality of nodes and associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image, the array processor determining the position of each node in the array; a video processor for selecting a digital image for representation across the array and allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.
[28] Embodiments of the invention provide a system that allows crowds of participants having electronic devices with display screens to create a combined large digital display. The display can be any shape, size or length, in full colour. Each device forms a component part of the display and each individual can display a part of the overall image on their device. One type of suitable device is a smartphone. The smartphone display screens act as pixels of the combined digital display. A server application identifies the devices forming part of the large digital display and processes video images for rendering on the combined display. The component parts of the image for each device along with the timing for the display on the device are transmitted to the device in a digital file.
[29] The device runs a client application which can determine the position of the device, registers with the server to download video data and synchronise the time to ensure that the device displays the video in unison with other devices in the array.
[30] Embodiments use GPS coordinates to link individual mobile devices to generate a lightshow that is derived from a video file. Ultimately, the individual screens combine to act as one large video screen which can be used to display video content, which could include colourful logos, pictures, complex patterns or words.
Brief Description of the Figures
[31] In order that the invention be more clearly understood and put into practical effect, reference will now be made to preferred embodiments of an assembly in accordance with the present invention. The ensuing description is given by way of non-limitative example only and is with reference to the accompanying drawings, wherein:
[32] Figure 1 shows a representation of an image processing platform.
[33] Figure 2 is a first flow diagram showing steps for creating an array.
[34] Figure 3 is a second flow diagram showing steps for creating an array.
[35] Figure 4 is a flow diagram showing steps performed during video processing.
[36] Figure 5 is a flow diagram showing steps performed at a server and client device.
[37] Figure 6 is a flow diagram showing time synchronization.
[38] Figure 7 is a flow diagram showing steps for managing GPS position updates.
Detailed Description:
[39] Embodiments of the invention provide a platform and system to enable an image or a sequence of images to be reproduced and displayed across the display screens of a plurality of devices which together form an array. The screens of devices in the array together create a combined digital display screen with the display screen of each device forming a part, or a pixel, of the combined digital display screen. For sequences of images, the sequence forms an animation or film clip with each image forming a frame of the sequence. Each device within the array is allocated part of an image to be displayed across the screens of the devices within the array. The image itself can be resampled, either sub-sampled or super-sampled (in which multiple pixels of the original image are blended together for an individual display screen within the array). The timing for displaying the images is time synchronized across all devices. When synchronized and displayed together the parts of the image on each display screen recreate the entire image across the array.
[40] In one embodiment the devices are smartphones having coloured display screens. It is envisaged that other devices may also be used within the array, for example mobile computing devices such as tablet computers or portable computers, or other mobile devices.
[41] Example applications for embodiments of the invention include use in sports stadiums where supporters with smartphones sitting within specific blocks, or throughout the entire stadium, can participate in creating and displaying images as part of a light show. Each participating device will receive a part of an image to be reproduced across the block or stadium. When the display screens of participating devices in the block are displayed together the complete image is reproduced across the block or the entire stadium. By synchronizing the timing at which the image is displayed across all participating devices, images can be displayed at predefined times.
[42] Embodiment applications may include multiple images being displayed in a predefined sequence. The predefined sequence of images forms an animation. By time synchronizing the display on devices participating in the array, multiple images and video sequences can be displayed across the array of devices in a time synchronized way. The resulting effect across a stadium of participating devices is a light show, animation or video presentation within the stadium in which spectators can participate and which can be viewed by spectators in other parts of the stadium.
[43] In an embodiment the platform identifies participating devices. These devices are included within the array. The platform selects an image for display across the array. The platform allocates part of the image to each device in the array based on the relative position of the device within the array, for example with the display screens of the devices acting as pixels in the image. The platform may determine the relative position of a device within the array by a number of different methods, including using the GPS location of a device or seat allocation within a stadium or venue, which are discussed in more detail below. In embodiments, the platform determines time scheduling for each of the devices to display the component parts of the image, so the image is displayed in unison across the array. The image data and time scheduling data is determined for each participating device and the relevant image data and time scheduling data is transmitted to each participating device.
[44] The platform identifies the array of devices and processes images for reproduction across the array. The platform determines the location of each device within the array. The relative position of devices in the array can be determined based on reference positions, for example seat numbers within a stadium associated with a participating device. In some embodiments, the relative position of devices within the array can be determined based on absolute location of the participating device, for example using a GPS location received from the device or by other positioning methods.
[45] The platform creates an array of participating devices. The platform overlays a video image onto the array. In other embodiments, the platform overlays the array onto the video image. The video image is processed and component parts of the image are allocated to each participating device based on the relative location of the device within the array.
[46] The location of the devices may be received by the platform from the device across a communication network. In embodiments dedicated software applications (software apps) may be installed onto devices which are configured to manage communications between the device and the platform. These are discussed in more detail below.
[47] Devices communicate with the platform and may provide updates on their location periodically. As a device changes location within the array, the platform can respond by recalculating the relative positions of the devices within the array and reallocating different parts of the image to devices so that an image can continue to be displayed correctly in a non-distorted way across the array.
[48] The device location information and time synchronization enable the platform to reproduce content including single static images or sequences of images to produce the effect of a moving image or consecutive images across the array of devices. It is envisaged that the content may include bespoke video files configured for stadiums, events or other gatherings to convey messages, images, complex patterns or words. Content may include advertisements or other event specific content.
[49] The data transmitted to the device for display is provided in the form of a custom dataset of colour information and timing information specific to the relative position of the device within the array for execution on the device. In some embodiments the dataset is provided in a digital video file. The device may interact with the platform via a specific software application which may use custom data packets for communication with the platform. Further details of the communication between the device and platform are provided below.
[50] Applications of the technology can be used at events or across various industries in which large numbers of devices are located together at a common location. The system may be implemented by event organizers at sports stadiums, music festivals, mass gatherings or other crowds. The technology is executed on devices of participants attending the event and at predefined times the participants can display the display screens of their devices simultaneously to reproduce the image or other content across the crowd.
[51] The platform can also provide individual interaction with participants. For example specific content may be provided to individual participating devices. The platform provides an opportunity for fans and spectators at an event to engage with the event, to be part of a coordinated light show and also provides an opportunity for event organizers and sponsors to engage with fans through the digital platform.
[52] Figure 1 is a schematic diagram of the physical architecture of the image processing platform 1000 and shows its interaction with a client device 2000. The components of platform 1000 may be physically co-located or may be distributed and located remotely and connected across a communication network. In the example of Figure 1 only a single client device 2000 is shown for simplicity. The platform 1000 is configured to communicate with multiple client devices.
[53] Platform 1000 includes memory 1100 (for storing user data, image data, program code, event data and other data used by the processor to implement one or more functional components of the platform), processor 1200, transceiver 1300, user input means 1400 and display 1500.
[54] Turning first to memory 1100, it comprises multiple databases to store data to enable the processor to implement one or more functional components of the system.
[55] Venue database 1110 stores data relating to venues including stadiums, conference centers, theatres or other venues. For each venue, Venue database 1110 includes data including layout and seating arrangement for each venue. The data files include reference information for each seat, for example stand, block, row, seat number. Detailed information including the physical arrangement and size of blocks, distance between seats and locations and sizes of non-seated areas, for example aisles, breaks between tiers and other physical stadium information is also included within the venue data. Venue data enables the relative positioning of seats or locations within the stadium to be calculated. GPS representation of the seating plan and venue may also be stored in the venue database in which GPS coordinates are provided for each seat and area of the venue. This data allows seats or areas within the venue to be identified and mapped based on GPS coordinates. Venue data files include sufficient information to enable a venue to be digitally modelled. Venue database may be updated to include venue data for new venues or expanded to include different configurations of a specific venue.
[56] Venues may use different configurations for different event types, for example sports stadiums may be configured differently for different events. Different venue seating configurations may be used for different events, for example certain seats, blocks, stands or tiers of the stadium may be closed or unallocated during a particular event. For example, during a cricket match, certain seating areas may be closed to accommodate sight screens. In some sports matches, some areas of seating may be closed to help segregation between opposing fans. In other sports matches, blocks may be closed due to lower attendance at those matches. More significant venue configuration changes could be implemented when a venue is reconfigured to host a different type of event, for example when a sports stadium is reconfigured to host a concert. Venue configuration data is also included within the venue database.
[57] Database 1100 includes video database 1120. Video database 1120 stores video content files. Video content files include single static digital images or a sequence of digital images, or frames, representing a changing image. The images may represent patterns, words, logos, images or other visual representations. Data relating to the size and colour of the image is stored within the video database and associated with the image. For sequence files, frame rates, sequence patterns and other sequence information is stored with the data file within the video database. Video files may be stored in a variety of file formats well known to those skilled in the art.
[58] Video files may include individual identifiers within video database 1120. Video files may be associated with subscribers or content providers, for example sports clubs, music acts, companies, venues or sponsors.
[59] Audio data may also be associated with the video files. The audio data may be stored with the video data in the video database 1120. Alternatively, audio data may be stored separately from the video data, the video data and audio data being linked together using identifiers or other data mapping techniques.
[60] Database 1100 includes user database 1130. User database 1130 stores personal details and account details for subscribers to the platform. For each user, User database includes user ID and passwords associated with the user account. Contact details for the user including, for example email address or mobile phone number are stored with the user account. The contact details may be used for the platform to distribute video data to the user when the user is participating in an event.
[61] In embodiments, the platform 1000 may allow users to register specific devices to the account. In such embodiments, user database 1130 also stores specific device identifiers or configurations associated with the user, for example iPhone or Samsung Galaxy. Different devices have different display configurations, for example screen size. Such device details can be relevant when the user is participating in an event. The device details provide the platform with flexibility when determining an array of devices and allocating portions of an image to individual devices within the array.
[62] When a user subscribes to an event, user database stores information about the events to which the user is subscribed and any user specific information associated with the event. Information stored includes user location details at the event, for example the user's seat number at the event. User database 1130 may be updated. In embodiments, real time location information for the user is provided to platform 1000, for example the user's current GPS coordinates are provided to the platform from device 2000. During the event, when a user's location changes, platform 1000 may reposition the user within the array and allocate the user a new part of the image for display based on the user's current relative position within the array.
[63] User database can be updated when the user registers with a new event or during an active location monitoring period, for example updating user location during an event.
[64] Event database 1140 stores event information. On registering an event with the platform an event file is created to store details associated with the event. Event database stores venue information associated with the event, venue information includes details of the venue including any event specific stadium configurations. Event database stores timings associated with the event. Further event information including sponsorship details, sponsor images, competition details, prize information is stored in event database 1140. The event database includes schedules for video associated with the event. Event database stores references to video data stored in video database to be displayed during the event, the timing for the video to be displayed during the event, location for displaying the video within the venue and any other event related information.
[65] Associations or links are created between data in the various databases. Associations may be provided by way of reference IDs. For example users are linked to events and/or venues to which they are subscribed, events are linked to venues, videos are linked to events and/or users subscribed to the event etc.
[66] Event information within event database may be updated or created.
[67] Turning now to processor 1200, processor 1200 is configured to implement one or more functional components of the platform. Processor 1200 includes array processor 1210 configured to construct an array of devices to display an image, video processor (visualizer) 1220 to resample and overlay the source video data onto the array, and device file allocation processor 1230 for computing the video sequence data for each device in the array. Processor 1200 comprises transceiver controller 1240 configured to control the transceiver of the platform to receive data from client devices or to transmit data to client devices.
[68] The components of processor 1200 are controlled by platform user input device 1400. Input device 1400 is configured to allow platform 1000 to receive user input. User input device may be any suitable user input device, for example a keyboard, touchscreen, or other electronic user input device. User input device may form part of a user interface with display 1500.
[69] Array processor 1210 is configured to create a logical map of the array of devices available to reconstruct a specific video, typically in response to user input. The logical map may be a digital representation. For a specific event, array processor 1210 retrieves venue data from venue database and event data from event database. In one embodiment, array processor 1210 retrieves the seating plan for the venue and creates a logical map of the array using seating plans for the venue. Each seat within the venue is included within the array as being available to present part of an image. For example, in a block of a stadium having Rows A to Z and seats numbered 1 to 108, each seat within the block is mapped into an array. The array includes 2808 seats (i.e. 26 rows each having 108 seats), each seat having a specific location within the array. Each seat can be allocated to an individual pixel of an image. Alternatively, a pixel may be allocated to a group of multiple seats, depending on the resolution of the image to be recreated across the array and the number of seats in the array. A minimal sketch of such a seating-plan array is given below.
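The block example above can be sketched as follows; the node ID scheme is an assumption, and the 26 x 108 figures come from the paragraph above.

```python
import string

def build_block_array(block="G", rows=string.ascii_uppercase, seats_per_row=108):
    """Return {node_id: (row_index, seat_index)} for one stadium block.

    Rows A-Z and seats 1-108 give 2808 nodes, each with a unique node ID
    and a specific (row, seat) position within the logical array.
    """
    array = {}
    for r, row in enumerate(rows):
        for seat in range(1, seats_per_row + 1):
            node_id = f"BLOCK-{block}/ROW-{row}/SEAT-{seat}"  # assumed ID scheme
            array[node_id] = (r, seat - 1)
    return array

block = build_block_array()
assert len(block) == 26 * 108  # 2808 nodes, as in the example
```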
[70] Array processor retrieves event data from event database 1140 to determine whether the event includes any variation to seating plans, for example seating restrictions at the venue or other event specific limitations.
[71] In another embodiment, the logical map of the array is constructed using GPS or other location coordinates associated with user devices known to be at the event. Array processor retrieves location data from user database 1130 for users registered to the event. The location of each registered user at the event is collected. The array processor collects the locations of each device to form an array in which the user devices are arranged in the array in terms of their relative locations to each other. This arrangement creates an array of devices at an event.
[72] The array created by the array processor defines the display available for reconstructing the video image at the event. As discussed above, the array may be constructed based on the seating plan for the venue where each seat or group of seats is available as a pixel for the image or constructed based on the location of users, and therefore devices, at the event. The array is essentially the canvas onto which the image will be reconstructed.
[73] Video processor (visualizer) 1220 is configured to overlay a selected video image onto an array. Video processor retrieves a selected video file from video database 1120 and receives the array from array processor 1210, generally in response to user input.
[74] Video processor 1220 is configured to map the video image onto the array. Video processor includes algorithms to control the position of the video image on the array. The algorithms are configured to move or translate the video around or across the array. Algorithms may be configured to resize the video to fit the array by stretching or compressing it, changing the aspect ratio. Video processor 1220 also includes debugging tools to allow further manipulation of the video image on the array, for example removing a percentage of the devices in the array, changing the brightness of individual devices in the array, or changing the seating configuration.
[75] After video processor has mapped the video image onto the array, device file allocation processor 1230 is configured to split the image across the array by allocating part of the image to each device or each seat in the array based on the relative position of the device or seat within the array. Video processor calculates the image file to be allocated to each device in the array.
[76] In an embodiment of the platform, device allocation processor 1230 uses video processing algorithms or functions configured to smooth colours across the image where necessary. The image may be adjusted. Device file allocation processor divides the video image across the array of devices and allocates a specific component part of the video file to each device according to its position in the array. Typically, the component part is a colour, being a single pixel of the image. Device file allocation processor 1230 computes the sequence data for each individual device. Where the video is a sequence of images, device allocation processor calculates the sequence of colours and timing for each device within the array. A minimal sketch of this per-device computation is given below.
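A minimal sketch of the per-device sequence computation described above follows, assuming each frame has already been resampled to a colour grid matching the array (for example by an averaging step like the one sketched earlier). The function and field names are assumptions.

```python
def sequence_for_node(frames, durations_ms, row, col):
    """Collect the colour/time sequence for one node position.

    frames: list of (rows, cols, 3) numpy colour grids, one per frame.
    durations_ms: display duration for each frame.
    Returns a list of colour/duration steps for the node at (row, col).
    """
    steps = []
    for grid, duration in zip(frames, durations_ms):
        r, g, b = (int(v) for v in grid[row, col])
        # Encode the node's single-pixel colour as a hex string with its timing.
        steps.append({"colour": f"#{r:02X}{g:02X}{b:02X}", "duration_ms": duration})
    return steps
```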
[77] The sequence files are stored in the database 1100. In an embodiment, video files for each device in the array are stored in the event data database 1140. Each video file is also allocated to the subscriber occupying the corresponding device position in the array.
[78] Platform 1000 includes a transceiver 1300 for communicating with subscriber devices 2000 across communication network 3000. Transceiver 1300 is controlled by transceiver controller 1240. Transceiver 1300 is configured to receive data from client device 2000 including registration data for events, GPS data or other data defining location. Further data may also be received from device 2000 by transceiver 1300. Transceiver 1300 is also configured to transmit data to client devices 2000. Data may include video data specific to each device, general event data or other data relating to subscribers.
[79] Platform 1000 is configured to be user controlled by user input device 1400. As discussed earlier, input device 1400 is configured to receive user input. User input device may be any suitable user input device, for example a keyboard, touchscreen, or other electronic user input device. Platform includes display 1500 for representing data presented by processor 1200 to a user. The platform is configured to enable users to interact with processors and data via user input and display 1500.
[80] In the example of video processor 1220, users may select and manipulate video images across arrays representing devices using algorithms via input device 1400.
[81] Turning now to client device 2000. In the example of Figure 1 a single device 2000 is shown. In applications, many client devices communicate with platform 1000. Each client device 2000 includes transceiver 2500 for communication with platform 1000. Client device 2000 can be a mobile communications device, for example a smartphone, or a portable computer device, for example a tablet device.
[82] Client device can install and run a software application 2100 (app) associated with platform 1000. The software application may be a smartphone app. Interaction of client device 2000 with platform 1000 is controlled through software application 2100. Alternatively, the device may connect directly with the platform, for example via the internet.
[83] Client device 2000 includes user input device 2600 to enable the smartphone to receive user input from a user. Input device 2600 can be in the form of a touchscreen but can also be in the form of a keyboard, mouse, or microphone for receiving voice activated commands from a user.
[84] Software application 2100 is executed by processor 2800 and controlled by user input via user input device 2600.
[85] Client device 2000 includes memory 2700 for storing software application 2100.
Memory 2700 also stores video data and other data received from platform 1000. GPS data for the device can be stored in memory 2700.
[86] Software application wirelessly receives data files, including video sequence files, and requests, including location updates or time requests, transmitted from platform 1000. Application 2100 interacts with GPS module 2200, display 2300, device clock 2400 and transceiver 2500, user input device 2600, memory 2700 and processor 2800.
[87] Display 2300 of the client device can be the screen of the smartphone. Display 2300 is configured to display video files or other video data received from platform 1000. The video data may be colour data, the data defining a colour to be displayed by the client device. In this example, the display screen emits the colour defined by the colour data. In further embodiments other data may be displayed, for example an image, or the screen may be split into different regions.
[88] GPS module 2200 calculates the GPS location of the client device. The GPS location can be stored in memory 2700.
[89] Communication network 3000 facilitates communications between platform 1000 and client device 2000. It is envisaged that communications network 3000 may be a single mobile communications network to which platform transceiver 1300 and client device transceiver 2500 connect directly or a combination of multiple networks. For example, client device transceiver 2500 may be a short range radio transceiver, for example a Bluetooth transceiver, which connects into a mobile communications network.
[90] The process for creating a video show across an array of devices is now described with reference to the accompanying Figures 1 to 6.
[91] An event related database is created within platform 1000. The event related database may be associated with an event organizer, for example a company, a sponsor, a sports team, a venue or event. The event related database includes the database components discussed above with respect to Figure 1, including venue data 1110, video data 1120, user/subscriber data 1130 and event data 1140.
[92] Management of the event related database can be controlled via a user interface.
Access controls may be applied to the event related database for example by providing password or other security mechanisms. The user interface may connect to the platform 1000 across a communications network or may be directly connected to the platform. User interface provides a user with access to populate the event related database with data.
[93] Event data contains the name and time of the event along with prize information. It also links to the venue data and the video data. User data identifying subscribers to the event is also linked. For example, a sponsor could create three sets of event data where everything is the same except the start time, if the same show was to be activated three times during a sports match, such as before, at half time and afterwards.
[94] Data may be accessed and retrieved from other areas of the platform or other connected databases, for example if venue data is stored in a generic region of the database, or may be entered via the user interface.
[95] The process for constructing an array is now described with reference to Figure 2.
[96] In the example now described with reference to Figure 2, the platform constructs the array for an event. The array is a digital logical model of the positions of the devices at the event. The array defines the combined display screen that is created by the combination of the devices and on which the image can be displayed at the event. In a seated venue, the array processor creates a seating plan for the event. The seating plan is typically obtained from stadium databases or event databases. In a first example the model assumes that every seat is occupied by a subscriber participating in the event with a smartphone.
[97] The array creation process may be controlled via the user interface. At 200 array processor 1210 receives input information identifying an event for which an array is to be created. Array processor creates an array data file associated with the event. At 205 and 210 array processor retrieves event information including identification of the venue. Venue data, including seating plans and any specific venue configuration data associated with the event is also retrieved at 210 and 215. This data may be input directly by a user via the user interface or may be retrieved from venue database 1110 or event database 1140.
[98] The relevant venue data is retrieved from venue database 1110 at 215. As discussed above, venue data includes identification of the venue seating plan and other locations allocated to ticket holders at the event. The venue data may define areas of the venue with allocated seating, areas with unallocated seating, standing areas and non-seating areas, including pitch, exits, aisles etc. As discussed above, venue data may also include dimension information and information defining distances between areas of the venue or seats.
[99] Array processor 1210 uses the venue data to build a logical map of the array of nodes.
In the example, the nodes are locations allocated to users attending the event at the venue. For a fully seated venue, each node represents a seat in the venue, determined from the seating plan. For events including standing areas for predefined numbers of people, the array processor may allocate a predefined number of nodes into the designated standing area, to simulate the number of people in the area during the event. The array is a logical representation of the positions of seats or subscribers for an event, with each node being the location of an attendee at the event. The array records the relative position of each node with respect to the other nodes. As mentioned above, in a block containing rows A to Z and Seats 1 to 108, the relative position of each of the 2808 seats is recognized by the array.
[100] Each node is individually allocated a unique node ID. In an example, in a seated stadium each node is identified by its unique seat location, for example stand, block, row, seat. Depending on venue data, each node may be identified by GPS location.
[101] Creation of the array provides a logical map of the positions at which subscribers are expected to be located during the event. When a subscriber is allocated a specific seat and the subscriber notifies the server of the seat, the server can identify the location of the subscriber within the array. The subscriber can be matched to the relevant node ID corresponding to the seat.
[102] When a digital image is overlaid onto the array, each node within the array is allocated a part of the digital image. The part of the digital image may be a pixel or may represent a group of pixels of the image. When the number of pixels in the images exceeds the number of devices in the array, multiple pixels in the original image are averaged to determine the final value for a grid cell which maps to a single node in the array. Therefore, the data for each node is the average across multiple pixels in the original image. When each node displays its contributory part of the image synchronously, the digital image is recreated.
[103] In other embodiments the array processor can update the logical array to provide a more accurate map of subscriber locations at the event. For example, certain nodes within the array may be vacant, or the position of nodes within the array may change. In these situations, the array is updated and the image can be remapped to the updated array. In a first example, array processor may retrieve ticket sales information to identify which seats have been sold and will be occupied at the event. This information is used to update the array to provide a more specific representation of subscriber locations at the event. Seats which have not been sold may be excluded from the array, since an unsold seat is expected to be vacant and so no subscriber device will be positioned in it.
[104] In a second example now described with reference to Figure 3, array processor builds an array using GPS locations of subscriber devices. This process can be run during an event in which participants are located together within a venue or collocated at another location, for example a public gathering. The process may be used in combination with a venue seating plan or venue dimensional data to confirm which seats or areas are occupied by particular subscribers. In such embodiments, the GPS locations of the devices are compared with GPS coordinates of the seats or areas of a stadium to determine the seats that each device is occupying. Alternatively, the GPS process may be used independently to map an array of subscribers at an event.
[105] At 300 array processor receives input information identifying an event for which a GPS array is to be created. Array processor creates a GPS array data file associated with the event. The array processor determines whether any venue information is associated with the event at 305, for example whether a seating plan array has already been built for the event. If so, the array is retrieved from the event database or venue database.
[106] At 310 array processor retrieves confirmation of which subscribers are attending the event. Event subscription data may be stored in event data database 1140 and/or within user database 1130. Array processor retrieves GPS coordinates of those subscribers attending the event from user database 1130. After receiving GPS coordinates from subscribers attending the event, array processor constructs a GPS array of nodes where each node represents the location of a subscriber based on its GPS coordinates. The GPS array provides a logical representation of the relative locations of subscribers at the event.
[107] As mentioned above, when the GPS array is used in combination with a seating plan, the GPS array can be overlaid onto the seating plan at 320 to confirm which subscriber is sitting in which seat. Alternatively, the GPS data is used independently of the seating plan and creates a crowd cluster map which maps the location of subscribers in a crowd. In this case, the GPS data effectively becomes the seating plan. Each device is allocated a node ID at 325.
[108] Typically, during an event the software application on the client device periodically pushes the client device GPS coordinates to platform 1000. The client device GPS coordinates are updated in the user data database 1130. Array processor 1210 periodically retrieves GPS coordinates of the subscribers attending the event from user data database 1130. These coordinates provide near real-time confirmation of the location of users at the event, accounting for movement of subscribers during the event, for example seat changes. Array processor then updates the array based on the retrieved GPS coordinates. The GPS array is updated periodically to identify movement of subscribers during the event.
[109] The software application may be configured so that updated GPS data is pushed to the platform by the client device when the client device changes location, for example when the software application determines that the GPS coordinates have changed, or have changed by more than a predefined threshold. In other embodiments, the software application can be configured to provide GPS coordinates to the platform 1000 at predefined time intervals. GPS coordinate updates provide a real-time representation of the relative locations of subscribers at the event.
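One way the push-on-movement rule could be realised on the client is sketched below; the five-metre threshold and the haversine distance are choices made for this example only.

```python
# Illustrative client-side rule: push a GPS update only after real movement.
import math

def moved_beyond(lat1, lon1, lat2, lon2, threshold_m=5.0):
    """Approximate ground distance (haversine) compared against a threshold."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a)) > threshold_m
```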
[110] A further embodiment is now described with reference to Figure 7, which shows the process for managing GPS updates at the platform using GPS data in combination with seating plans and stadium data. Preferably the process of Figure 7 is executed during an event.
[111] At 705 the GPS-based array creation process is initiated at platform 1000. Array processor retrieves a registered GPS position for each participating subscriber at 715 from user data stored in user data database 1130 within database 1100. At 720 and 725, array processor retrieves seat coordinates and coordinates of other registered areas for the event. The seat coordinates and coordinates of other registered areas are compared with the GPS coordinates. Array processor may apply an accuracy tolerance when comparing the GPS coordinates of the devices to the GPS locations of the seats at 730. The updated GPS positions are stored in database 1100 at 735 to update previous GPS positions at 710. Preferably this GPS location update process is run periodically during an event.
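The comparison at 730 might be implemented as a nearest-seat match within the tolerance, as in the following sketch (the function names, three-metre tolerance and haversine distance are assumptions of this example):

```python
# Illustrative: assign a device to the nearest registered seat within tolerance.
import math

def distance_m(p, q):
    """Haversine ground distance in metres between two (lat, lon) pairs."""
    R = 6371000.0
    dp = math.radians(q[0] - p[0])
    dl = math.radians(q[1] - p[1])
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(p[0])) * math.cos(math.radians(q[0]))
         * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def match_device_to_seat(device_pos, seats, tolerance_m=3.0):
    """seats: dict of node_id -> (lat, lon). Returns the matched node_id or None."""
    best_id, best_d = None, tolerance_m
    for node_id, seat_pos in seats.items():
        d = distance_m(device_pos, seat_pos)
        if d < best_d:
            best_id, best_d = node_id, d
    return best_id
```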
[112] In some embodiments a best-fit bounding box is created based on GPS positions of devices. In such embodiments, GPS locations of all subscribing devices are retrieved from database 1100 and a best-fit bounding box is created around the GPS coordinates of the devices to define the boundary of the array. The bounding box defines the array and the array is then used to display the video.
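A minimal sketch of the bounding box step, assuming an axis-aligned lat/lon box is an acceptable 'best fit' (a simplification made for this example; a rotated or convex-hull fit is equally possible):

```python
# Illustrative: bound all device positions and normalise each into the box.
def bounding_box_array(positions):
    """positions: dict of device_id -> (lat, lon); returns device_id -> (x, y) in [0, 1]."""
    lats = [p[0] for p in positions.values()]
    lons = [p[1] for p in positions.values()]
    lat_min, lat_max = min(lats), max(lats)
    lon_min, lon_max = min(lons), max(lons)
    return {
        dev: ((lon - lon_min) / ((lon_max - lon_min) or 1.0),
              (lat - lat_min) / ((lat_max - lat_min) or 1.0))
        for dev, (lat, lon) in positions.items()
    }
```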
[113] Each node is individually allocated a unique node ID.
[114] The array for the event may be stored in event database 1140.
[115] The steps for processing video to be displayed on the array are now described with reference to Figure 4. Video processor 1220 processes a digital video image to be reproduced across the array.
[116] At 405 video processor retrieves a video data file for reproduction across an array of devices. As discussed above, the video file may represent patterns, words, logos or images. The video file may be a single image or may include a sequence of images. The video file may include time information defining a period of time for which the image, or each image in a sequence, should be displayed. The timing data is important for sequences of images which are designed to create a changing visual image across the array.

[117] The array of devices for reproducing the image at the event is retrieved from database 1100. Video processor now processes the image for reproduction across the array.
[118] At 410 video processor 1220 decodes each image or frame of the video into a full image in memory. The image is compared with the array and the image in memory is resampled to match the aspect ratio of the display array at 415. At 420 the aspect-ratio-corrected memory image is resampled into a grid representing the display array. For high resolution images, the number of pixels in the image typically exceeds the number of devices in the array. In this situation, multiple pixels in the original image are averaged to determine the final value for a grid cell which maps to a single node in the array. Therefore, the data for each node is the average across multiple pixels in the original image. For a given original image, an array having a higher number of nodes can reproduce the image at a higher resolution than an array having fewer nodes, since in an array with fewer nodes a greater number of pixels from the original image must be averaged for each node.
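Steps 410 to 420 could be sketched as follows; the use of Pillow and NumPy, and the fixed number of pixels averaged per cell, are choices of this example rather than requirements of the description:

```python
# Illustrative: decode a frame, correct aspect ratio, block-average into the grid.
import numpy as np
from PIL import Image

def frame_to_grid(path: str, rows: int, cols: int) -> np.ndarray:
    frame = Image.open(path).convert("RGB")           # 410: decode into a full image
    cell = 8                                          # pixels averaged per cell edge
    frame = frame.resize((cols * cell, rows * cell))  # 415: match array aspect ratio
    a = np.asarray(frame, dtype=np.float64)
    # 420: block-average so each grid cell (node) gets one RGB value
    grid = a.reshape(rows, cell, cols, cell, 3).mean(axis=(1, 3))
    return grid.astype(np.uint8)                      # grid[r, c] -> colour for node (r, c)
```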
[119] At 425, the process of steps 405 to 420 is repeated for subsequent images or frames. At 430 the colour data associated with each node in the image is created and is stored within database 1100. Timing instructions for each node in the array are stored with the video data in database 1100. The original image file is now defined in terms of video data for each separate node of the array.
[120] As mentioned above, arrays having different sizes or different numbers of nodes will result in different video data being allocated to the nodes to represent the same original image. For example, in an array having a greater number of nodes, a node will represent the data of fewer pixels compared with an array having fewer nodes, in which the video data for each node will be the average across a greater number of pixels.
[121] At an event, multiple images are displayed across the array in sequence. For example, a first image may be displayed for 10 seconds, followed by a second image for a further 10 seconds. In other examples, moving images are displayed by the array, requiring each device to display different video data in a faster, specific time sequence. For sequence data, video processor runs separate video processing on each image in the sequence to create a sequence of video data for each node in the array.

[122] Time data associated with each image in the sequence is stored with the video data for each node. For each video file a display time is associated with the video data.
[123] Node allocation processor creates a separate video file for each node. The file includes video data for display by each node and the time sequence associated with the data, for example display time for each video image or pixel of a video image.
[124] An example of a typical video file for a node includes:
Start time;
RGB colour;
Time duration;
RGB colour;
Time duration;
RGB colour;
Time duration.
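One possible serialisation of such a per-node file is sketched below; JSON, the field names and the example values are assumptions of this illustration, as the description does not mandate a format.

```python
# Illustrative per-node sequence file: a start time followed by
# (RGB colour, time duration) pairs, serialised here as JSON.
import json

node_file = {
    "node_id": "STAND-N/BLOCK-G/ROW-Y/SEAT-17",   # hypothetical node ID
    "start_time": "2021-06-30T19:30:00Z",
    "sequence": [
        {"rgb": [255, 0, 0], "duration_ms": 10000},
        {"rgb": [0, 255, 0], "duration_ms": 10000},
        {"rgb": [0, 0, 255], "duration_ms": 10000},
    ],
}
print(json.dumps(node_file, indent=2))
```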
[125] The process of subscribing to the platform, registering with an event and interacting with the platform is now described with reference to Figure 5.
[126] At 500 a subscriber downloads a dedicated software application (app) 2100 associated with platform 1000 onto a mobile device. The software application may be a smartphone app. Typically, the application may be accessed via a cloud-based application platform, for example the Apple App Store or another suitable application platform. Application 2100 is loaded onto subscriber device 2000 and stored within memory 2700. As discussed above, application 2100 communicates with platform 1000 across a communication network. The subscriber management system may form part of platform 1000.
[127] Subscriber management system manages the subscriber account. The subscriber account acquires details from the subscriber, including contact details and device details, at 505.
[128] The user can initiate the software application and interact with it via a user interface on the mobile device. After activation, the software application provides access to a list of events for which the platform is providing a light show. These events may include sporting events played with live fans, sponsorship events, music concerts or festivals, or other types of mass gatherings, for example protests, art installations and community gatherings. The user may register with any event which he is attending at 510.

[129] On registering with an event, subscriber data is provided to the platform by the software application. Typically, the subscriber data is stored in user data area 1130 of database 1100.
[130] On selecting an event, the user is presented with information about the event within software application 2100. For seated or ticketed events, the user can provide details of his allocated seat at 515 to the server application, for example Block G Row Y Seat 17. Alternatively, or additionally, the GPS location of the device can be determined and provided at 520 to the server application. In situations where the user registers in advance of the event, the user can select to use GPS location to identify his location during the event. This seat information is stored within the user database 1130 at the platform along with other user information. This user information can be linked to event data and also to venue data within database 1100.
[131] The process of video data being provided to subscribers is now described. When video data has already been created for the event and stored within video database 1120 at platform
1000, processor 1200 retrieves positional data related to the subscriber location at the event (for example the subscriber’s seat). Processor 1200 references the subscriber location against the array data for the event to identify the position of the subscriber within the array for the event. The node ID associated with the position is retrieved at the server and this node ID is then used to retrieve the video sequence data file associated with the subscriber location for the event.
This video sequence data file is allocated to the subscriber. This data for the event is downloaded to the client device at 540. Preferably the data relates only to the node allocated to the subscriber, so subscribers are provided with different data depending on their position within the array.
[132] The video sequence data file within video database 1120 is linked to the subscriber within the user database 1130 and event database 1140. The link to the event database 1140 enables the platform to communicate with the subscriber in the event that any changes to the sequence data file are required. For example, if the video show at the event is changed after allocation, updated sequence data files can be transmitted to the subscriber.
[133] As tickets are sold for the event and ticket holders subscribe to the platform, the platform can track the number of nodes in the array allocated to active subscribers.
[134] In an embodiment, if tickets are not sold then some nodes of the array (associated with unsold tickets) are unallocated. This creates missing pixels (or holes) in the image, but the overall size and number of nodes of the array is preserved. In other embodiments, the array processor may update the array to account for unsold tickets and the video processor may recalibrate the video images and update the sequence data files.
[135] As discussed above, each device in the array at the event should display the images synchronously so that the images are correctly reproduced across the array at the event.
Failure of the devices to display the video data in time synchronization will result in the image being distorted and the reproduction effect of the image being lost. Platform 1000 executes a time synchronization process with the device at 545 to ensure that the sequence files are displayed at the correct time at the event by all devices, so that all devices display the video data in sync.
[136] The synchronization process is now described with reference to Figure 6. The time synchronization process is performed for each device in the array in order that the images are displayed time synchronously by all devices in the array. The synchronization process may be run at the time when the user subscribes to the event, at a predefined period before the event or during the event. The software application 2100 on the client device retrieves the local time on the device defined by device clock 2400 at 605. The time is sent to platform 1000 in a timing packet at 610. On receipt of the timing packet, the platform retrieves the server master time at 615. Server master time may be retrieved from a dedicated clock for platform 1000. Alternatively, the server master time may be retrieved from an external clock. The difference between the device clock and server master clock is calculated at 620.
[137] At 625 packet return time offset is measured to calculate transmission time between the platform and subscriber device. The system then determines the time difference between the platform clock and the subscriber device clock at 630.
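The offset and round-trip measurements at 615 to 630 correspond to the classic four-timestamp clock exchange; a minimal sketch, assuming transmission delay is symmetric in both directions, follows:

```python
# Illustrative offset calculation (NTP-style, symmetric-delay assumption).
def clock_offset(t_device_send, t_server_recv, t_server_send, t_device_recv):
    """All times in seconds. Returns (offset, round_trip_delay).

    A positive offset means the device clock is behind the server master clock.
    """
    round_trip = (t_device_recv - t_device_send) - (t_server_send - t_server_recv)
    offset = ((t_server_recv - t_device_send) + (t_server_send - t_device_recv)) / 2
    return offset, round_trip
```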
[138] The time difference is used to calculate the local time for displaying the video data and initiating the sequence on the subscriber device. Any time difference is stored with the user data and included within the sequence data file for the subscriber device.
[139] The time difference between the local clock and the server master clock may be calculated at platform 1000 or at client device 2000. In embodiments in which the time difference is calculated at the platform, the sequence file including the offset time is transmitted to device 2000.

[140] After the sequence data and timing data are provided to the client application, the client application has the data required to run the video sequence and display it on the screen of the device. The software application 2100 executes the file by displaying the video data on the device screen at 555 at the time defined in the video sequence data file.
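Client-side playback might then look like the following sketch, in which display_colour is a hypothetical stand-in for the application's full-screen colour call:

```python
# Illustrative playback: convert the scheduled server start time to local
# device time using the stored offset, then step through the sequence.
import time

def display_colour(rgb):
    """Hypothetical stand-in for the app's full-screen colour call."""
    print("showing", rgb)

def run_sequence(sequence, server_start, offset):
    local_start = server_start - offset            # server time -> device time
    time.sleep(max(0.0, local_start - time.time()))
    for step in sequence:
        display_colour(step["rgb"])
        time.sleep(step["duration_ms"] / 1000.0)
```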
[141] The time synchronization process is performed by all devices in the array. Since all devices have stored the time difference between the local clock and the server master clock with the sequence data, display of the data by the devices is time synchronized to recreate the image across all devices in the array.
[142] The client application may run the video sequence automatically at the designated time. At an event in which many of the attendees have subscribed, the users can display their device screens to create the digital image across the devices in the array.
[143] After subscribing to an event, event organisers can communicate with subscribers via the platform 1000 and software application 2100. Importantly, if there are any updates to the video data before the show, the video sequence files can be updated at the platform 1000 and transmitted to the subscriber devices over communication network 3000 (shown as step 450 in Figure 5). Updates to the video sequence files include changes in start time, changes to timing data within the sequence, changes to images within the sequence or inclusion of new images within the sequence. This allows event organisers to modify the light show.
[144] In an embodiment, changes to the video sequence are created at platform 1000 via the user interface. The updated video sequence is created by video processor 1220 using the process described above. An updated or new sequence file is created for each node in the array. Platform 1000 identifies all registered subscribers to the event from user database 1130.
[145] In an example in which a subscriber has already been allocated to a node of the array based on the location of the subscriber device at the event, for example by allocated seat number or using GPS, the node is identified and the updated video sequence file is allocated to the user. The platform retrieves contact information for the subscriber device from the user database and transmits the updated video sequence file to subscribed users.
[146] The connection between platform 1000 and software application 2100 also provides event organisers the opportunity to gather qualitative data from subscribers to events. Any fan can participate in the lightshow by downloading the software application, subscribing to the event and entering their seat number. Event organisers can also run competitions or lotteries during the event or related to the event. Subscribers can participate by providing additional information. Event organisers can tailor specific questions to their fans to gather subscriber data relevant to them.
[147] At the start of the event, or before the start of the event, software application will generate a reminder to the user that the event is due to commence. The reminder may be generated from within the software application. Alternatively, the reminder may be transmitted from the platform.
[148] Subscribers may be prompted to open the software application at the start of the event and allow the software application to run the video sequence file. The data file may be triggered to play automatically or by manual input.
[149] Before and during an event, event organisers can communicate with subscribers through the software application. Event organisers may send reminders to subscribers or send other promotional material.
[150] Event organisers can also interact with subscribers during the event to promote engagement and to encourage participants to display their devices at the relevant time to join the light show. It is envisaged that a countdown might be provided before the start of the show. This may be part of the video sequence file and displayed on the screen of the client device to alert the subscriber that the light show is about to commence. Further notifications and reminders may be provided at the event across other channels, for example over stadium video screens or event audio systems.
[151] The embodiments described above generally describe large scale gatherings of several thousand participants, in which the system creates logical arrays of subscriber devices for displaying video data based on the location of subscriber devices. The overall effect of the subscribers displaying their devices simultaneously within a stadium or venue is the reproduction of a video image or sequence, with all devices having a participatory role in the overall image. Further embodiments of the invention allow different user interactions in which sequence files can be provided to participating devices to create games or other interactive events.
[152] In general, all the shimmr games operate in a similar way. Participants join a local event, and the host of the event is able to start the game. Each phone flashes different colours in a random sequence or in order of joining (which allows people to sit in a circle and see the flash travel around the circle and then stop on someone). When the flash stops on one phone, all other phones are dark and the 'winner' is left highlighted and flashing. The event host can then start another round. The host can allow new players to join and remove players that have left. Players can also choose to join or leave.
[153] This sequence can be branded as: Spin the bottle, Next drinking game, Draw the short straw, Truth or dare, or a generic random selector for 'next turn' on any game.
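A minimal sketch of the travelling-flash selector described above (server-side logic with illustrative names; the number of laps is arbitrary):

```python
# Illustrative: the flash visits phones in join order and stops on a random winner.
import random

def spin(player_ids, laps=3):
    """Yield the phone to light at each step; the last phone yielded is the winner."""
    winner = random.choice(player_ids)
    order = player_ids * laps + player_ids[: player_ids.index(winner) + 1]
    for pid in order:
        yield pid                  # light this phone, darken all the others

players = ["phone-A", "phone-B", "phone-C", "phone-D"]
*travel, winner = list(spin(players))
print("winner:", winner)
```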
[154] Local light show: The host creates an event. Participants join the local event. The host selects from a range of light shows such as: Strobe, Colour cycle, Disco flash, or mixed. The host selects a start time and time duration in minutes or selects unlimited duration. The server creates custom show data and all phones download the local show. All of the phones start to flash according to the show data. The event host can then start or stop the show, or change the parameters and new data is sent out.
[155] Simon says: The host creates a game. Participants join the game. The sequence starts and plays on all devices included in the sequence. Each player must then press the screen in the correct order to match the sequence. When a mistake is made, the game is over.
[156] Embodiments of the invention provide a platform and a software application which engage crowds attending events and enable participants to use their personal mobile devices to broadcast synchronised lightshows. The platform and software application can be used to display colourful sponsor logos, pictures, patterns or words. They allow crowds to participate in something coordinated and entertaining during an event and increase fan engagement.
[157] Embodiments also provide an exciting new platform and a more engaging way for sponsors to connect with fans and spectators using digital technology, giving sporting codes and their sponsors a new and engaging channel to promote their brand and/or messages at venues during matches or other events. Embodiments also have applications in the wider entertainment industry, incorporating musical concerts, festivals, corporate events and even mass gatherings.
[158] The platform includes the functionality to update the show at any time by changing images to be displayed and the timing for presenting images. Connectivity between the software application on the device and the platform allows embodiments of the system to respond to movement of participants during the event without distorting the image by updating video sequences for specific participating devices.
[159] It is to be understood that, if any prior art publication is referred to herein, such reference does not constitute an admission that the publication forms a part of the common general knowledge in the art, in Australia or any other country.
[160] In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, namely, to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.
[161] It is to be understood that the foregoing description refers merely to preferred embodiments of the invention, and that variations and modifications will be possible thereto without departing from the spirit and scope of the invention, the ambit of which is to be determined from the following claims.

Claims:
1. A method for reproducing an image across an array of nodes, comprising the steps of: identifying a plurality of nodes; associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image; determining the position of each node in the array; selecting a digital image for representation across the array; allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.
2. A method according to claim 1 wherein the nodes are client devices.
3. A method according to claim 1 or 2 wherein the nodes are mobile communication devices.
4. A method according to claim 1, 2 or 3 wherein the nodes are smartphones.
5. A method according to claim 1, 2, 3 or 4 wherein each node comprises a display.
6. A method according to claim 1, 2, 3, 4 or 5 wherein the step of allocating a component part of the digital image to each of the nodes in the array comprises: digitally overlaying the digital image onto the array; dividing the digital image into component parts, each component part being overlaid on a node of the array; and, allocating the component part of the digital image to the node on which the component part is overlaid.
7. A method according to claim 6 wherein the image is adjusted when it is overlaid on the array.
8. A method according to any of claims 6 or 7 wherein the aspect ratio of the image is adjusted to overlay the array.
9. A method according to any preceding claim wherein the component part of the digital image comprises at least one pixel of the digital image.
10. A method according to any preceding claim wherein the step of determining the position of the client device is performed by receiving an absolute position of the node within the array and comparing the absolute position of the node with the absolute positions of other nodes within the array to determine the relative position of each node within the array.
11. A method according to any preceding claim wherein the absolute position is a GPS coordinate.
12. A method according to any preceding claim wherein the step of determining the position of the client device is performed by receiving a reference location associated with the node, for example the reference location being an allocated seat in a venue.
13. A method according to any preceding claim further comprising the step of providing a plurality of digital images in a predefined sequence, each digital image being separately mapped to the nodes of the array.
14. A method according to any preceding claim wherein a component part of the digital image is allocated to all nodes of the array.
15. A method according to any of claims 1 to 13 wherein a component part of the digital image is allocated to some of the nodes of the array.
16. A method according to any preceding claim comprising the step of storing the component parts of the digital images in a sequence component file, the sequence component file being associated with a node.
17. A method according to any preceding claim wherein the sequence of images comprises a predefined sequence of digital images, each video image having a defined time duration within the sequence, the sequence component file comprising the defined time duration associated with each image within the sequence.
18. A method according to any preceding claim comprising the step of transmitting the component part of the digital image allocated to a node to the node.
19. A method according to claim 18 further comprising the step of transmitting time data associated with the component part of the digital image, the time data relating to the time for displaying the component part of the digital image.
20. An image processing platform for performing the steps of any of claims 1 to 19 and comprising the further steps of communicating with a plurality of client devices across a communications network and providing the sequence component file to each client device.
21. An image processing platform according to claim 20 wherein the sequence component file is configured such that when it is executed on the client device the client device displays the component part of the image.
22. A method according to claim 17 or 18 comprising a synchronization module, the synchronisation module for receiving client device clock data from client devices in the array, comparing the client device clock data to a server clock and determining an offset time period between client device clock and the server clock, the offset time period used to determine the time the sequence component file is executed on the client device.
23. An application configured to be executed on a client device to manage communications from the client device to the platform.
24. A platform for mapping a digital image across multiple client devices forming an array where each client device is mapped to a component part of the digital image, the component part mapped to the client device is determined based on the position of the client device within the array, together the devices configured to produce a representation of the digital image.
25. An image processing system for dividing an image into a plurality of component parts, each component part for display on a client device, together the devices reproducing the digital image, the system performing the steps of: providing a digital image; identifying a plurality of client devices generally forming an array spread over an area, the plurality of devices together for reproducing the digital image; determining the relative position of each of the client devices with respect to other client devices in the array; dividing the digital image across the plurality of devices and mapping a component part of the digital image to each of the client devices in dependence on the relative position of the client device, each device for displaying a component part in its relative location to recreate the digital image across the plurality of client devices.
26. A method for dividing an image into a plurality of component parts, the component parts for display on a plurality of nodes, together the nodes reproducing a representation of the image, comprising the steps of: providing a digital image; identifying a plurality of nodes generally forming an array spread across an area, the plurality of nodes together for displaying a combined representation of the digital image; determining the relative position of each of the nodes in the array; mapping a component part of the digital image to each of the nodes in the array depending on its location within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image by the array.
27. An image processing system for reproducing an image across an array of nodes, comprising: an array processor for identifying a plurality of nodes and associating the plurality of nodes to form an array, the array for displaying a combined representation of a digital image, the array processor determining the position of each node in the array; a video processor for selecting a digital image for representation across the array and allocating a component part of the digital image to nodes in the array in dependence on the position of the node within the array, such that simultaneous display of the component parts by the nodes produces a representation of the digital image across the array.
28. An image processing system according to claim 27 wherein the nodes are client devices.
29. An image processing system according to claim 27 or 28 wherein the nodes are mobile communication devices.
30. An image processing system according to claim 27, 28 or 29 wherein the nodes are smartphones.
31. An image processing system according to claim 27, 28, 29 or 30 wherein each node comprises a display.
32. An image processing system according to claim 27, 28, 29, 30 or 31 wherein the video processor performs the steps of: digitally overlaying the digital image onto the array; dividing the digital image into component parts, each component part being overlaid on a node of the array; and, allocating the component part of the digital image to the node on which the component part is overlaid.
33. An image processing system according to claim 32 wherein the video processor adjusts the image when it is overlaid on the array.
34. An image processing system according to any of claims 32 or 33 wherein the aspect ratio of the image is adjusted to overlay the array.
35. An image processing system according to any of claims 27 to 34 wherein the component part of the digital image comprises at least one pixel of the digital image.
36. An image processing system according to any of claims 27 to 35 wherein the array processor determines the position of the client device is performed by receiving an absolute position of the node within the array and comparing the absolute position of the node with the absolute positions of other nodes within the array to determine the relative position of each node within the array.
37. An image processing system according to claim 36 wherein the node comprises a GPS device and the absolute position is a GPS coordinate.
38. An image processing system according to any of claims 27 to 37 wherein the step of determining the position of the client device is performed by receiving a reference location associated with the node, for example the reference location being an allocated seat in a venue.
39. An image processing system according to any of claims 27 to 38 wherein the video processor provides a plurality of digital images in a predefined sequence, each digital image being separately mapped to the nodes of the array.
40. An image processing system according to any of claims 27 to 39 wherein a component part of the digital image is allocated to all nodes of the array.
41. An image processing system according to any of claims 27 to 39 wherein a component part of the digital image is allocated to some of the nodes of the array.
42. An image processing system according to any of claims 27 to 41 comprising a sequence component file wherein the component parts of the digital images are stored in a sequence component file for each node.
43. An image processing system according to any preceding claim wherein the sequence of images comprises a predefined sequence of digital images, each video image having a defined time duration within the sequence, the sequence component file comprising the defined time duration associated with each image within the sequence.
44. An image processing system according to any of claims 27 to 43 comprising a transceiver, the transceiver transmitting the component part of the digital image allocated to a node to the node.
45. An image processing system according to claim 44 further comprising the step of transmitting time data associated with the component part of the digital image, the time data relating to the time for displaying the component part of the digital image.
46. An image processing platform comprising the image processing system of any of claims 27 to 45, further comprising a transceiver for communicating with a plurality of client devices across a communications network and providing the sequence component file to each client device.
47. An image processing platform according to any of claims 27 to 46 comprising a synchronization module, the synchronization module for receiving client device clock data from at least one client device in the array, comparing the client device clock data to a server clock and determining an offset time period between client device clock and the server clock, the offset time period used to determine the time the sequence component file is executed on the client device.
48. An image processing platform according to claim 47 wherein the offset time period is transmitted to the client device, such that the image is synchronously displayed on each client device in the array.
49. A method according to claim 13 wherein the sequence is an animation or a film.
50. A method according to claim 22 comprising the step of transmitting the offset time period to the client device.
51. An image processing system according to claim 39 wherein the sequence is an animation or a film.
PCT/AU2021/050704 2020-06-30 2021-06-30 Image processing system WO2022000040A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2020902225A AU2020902225A0 (en) 2020-06-30 Image Processing System
AU2020902225 2020-06-30
AU2020273377A AU2020273377A1 (en) 2020-06-30 2020-11-20 Image Processing System
AU2020273377 2020-11-20

Publications (1)

Publication Number Publication Date
WO2022000040A1 true WO2022000040A1 (en) 2022-01-06

Family

ID=79302832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2021/050704 WO2022000040A1 (en) 2020-06-30 2021-06-30 Image processing system

Country Status (2)

Country Link
AU (1) AU2020273377A1 (en)
WO (1) WO2022000040A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050168399A1 (en) * 2003-12-19 2005-08-04 Palmquist Robert D. Display of visual data as a function of position of display device
US20180300097A1 (en) * 2010-11-02 2018-10-18 Kemal Leslie Communication to an Audience at an Event
US9094489B1 (en) * 2012-05-29 2015-07-28 West Corporation Controlling a crowd of multiple mobile station devices
US20140313408A1 (en) * 2013-04-19 2014-10-23 Qualcomm Incorporated Modifying one or more session parameters for a coordinated display session between a plurality of proximate client devices based upon eye movements of a viewing population
US20150061971A1 (en) * 2013-08-30 2015-03-05 Samsung Electronics Co., Ltd. Method and system for presenting content
US20150189490A1 (en) * 2013-12-31 2015-07-02 Verizon Patent And Licensing Inc. Orchestrating user devices to form images at venue events
CN109640152A (en) * 2018-12-07 2019-04-16 李清辉 Control method for playing back, device, storage medium and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KONDO TOHRU; MAEDA KAORI: "Cloud-based Dynamic Tiled Display Adapting to Grouping by Distinction of Mobile Devices", 2019 TWELFTH INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND UBIQUITOUS NETWORK (ICMU), 4 November 2019 (2019-11-04), pages 1 - 6, XP033720594, DOI: 10.23919/ICMU48249.2019.9006637 *
NIELSEN, H. S. ET AL.: "JuxtaPinch: Exploring Multi-Device Interaction in Collocated Photo Sharing", PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON HUMAN-COMPUTER INTERACTION WITH MOBILE DEVICES & SERVICES, September 2014 (2014-09-01), pages 183 - 192, XP055896473 *
TAKASHI OHTA ; JUN TANAKA: "MovieTile", ACM, 2 November 2015 (2015-11-02) - 6 November 2015 (2015-11-06), 2 Penn Plaza, Suite 701 New York NY 10121-0701 USA , pages 1 - 7, XP058075508, ISBN: 978-1-4503-3928-5, DOI: 10.1145/2818427.2818436 *

Also Published As

Publication number Publication date
AU2020273377A1 (en) 2022-01-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21834501

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21834501

Country of ref document: EP

Kind code of ref document: A1