US20100123732A1 - Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays - Google Patents


Info

Publication number
US20100123732A1
Authority
United States (US)
Prior art keywords
display, image, message, digital image, display units
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Granted
Application number
US12/545,026
Other versions
US8410993B2
Inventor
Stephen F. Jenks
Sung-Jin Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of California
Original Assignee
University of California
Application filed by University of California
Priority to US12/545,026 (granted as US8410993B2)
Assigned to The Regents of the University of California (assignors: Stephen F. Jenks; Sung-Jin Kim)
Publication of US20100123732A1
Priority to US13/854,814 (published as US20140098006A1)
Application granted
Publication of US8410993B2
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display, composed of modules, e.g. video walls
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers

Definitions

  • This disclosure generally relates to visualization technologies. More specifically, this disclosure relates to display devices that can be used to form a display system comprising tiled arrays of such devices.
  • Typical “PC” computer systems include several “expansion slots” which can accept certain types of video cards.
  • Motherboards of some “PCs” are built with one or more PCI, PCI Express, AGP, etc., slots that can accept video cards. In this manner, a single computer can be provided with multiple video cards to increase the number of displays that can be controlled by the computer.
  • the present disclosure relates to methods and systems for displaying and manipulating large images or datasets.
  • Embodiments of the present disclosure can be particularly advantageous for displaying large digital images that can be tens or hundreds of millions of pixels, or even billions of pixels, on tiled array display systems in a highly interactive manner.
  • Some embodiments can allow for display of datasets that can be over a gigabyte in size.
  • a system is disclosed that can display a large image on a tiled array display that includes one or more display units and then can manipulate the image by panning, zooming, rotating, color filtering, and the like.
  • each of the display units of the system retains a full copy of an original image file (e.g., in local memory) to be displayed; then, in parallel, each display unit processes the original image file and displays only that portion of the original image corresponding to the position of the display unit in the overall array.
  • the display units can be provided with a configuration parameter, indicating in which part of the array the corresponding unit is positioned. For example, in an array of 20 display units formed of 4 rows and 5 columns, the display unit in the upper left hand corner can be identified as the column 1, row 1 display device.
  • the positions of the display units can also be correlated to the ranges of rows and columns of pixels each display unit can display, for example, at their respective “native” resolutions, together forming a “single display” including all of the pixels from each display device.
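To make the position-to-pixel correlation concrete, here is a minimal Python sketch (the function name and parameters are illustrative assumptions, not from the patent; the 2560×1600 native resolution is borrowed from the example array described later in this disclosure) mapping a unit's 1-based (row, column) position to the global pixel range it covers in the combined "single display":

```python
def unit_pixel_range(row, col, native_w=2560, native_h=1600):
    """Return the global pixel bounds (x0, y0, x1, y1) covered by the
    display unit at 1-based (row, col), assuming every unit runs at the
    same native resolution. Bounds are inclusive-exclusive."""
    x0 = (col - 1) * native_w
    y0 = (row - 1) * native_h
    return x0, y0, x0 + native_w, y0 + native_h

# The column 1, row 1 unit of a 4x5 array covers the upper-left block
# of the combined display; the column 5, row 4 unit covers the
# lower-right block.
print(unit_pixel_range(1, 1))   # (0, 0, 2560, 1600)
print(unit_pixel_range(4, 5))   # (10240, 4800, 12800, 6400)
```

With such a mapping, a unit can derive its pixel range from a single configuration parameter (its row and column in the array).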
  • the display units can also be configured to receive a position command associated with an image file located in each unit's local memory.
  • the position command can be any data indicative of the position of the associated image file on the array.
  • the position command can be an identification of the desired location of a predetermined pixel in the associated image.
  • a predetermined pixel for example, can be the pixel at the center of the image that would result from the associated image file being processed at its native resolution.
  • Other predetermined pixels can also be used.
  • the position command can include other display characteristics, such as the desired rotational angle of the displayed image, magnification, resolution, etc.
  • the display units can be configured to use the received position command to determine what portion (if any) of an associated image corresponds to the pixels of that display unit. For example, the display unit can be configured to determine the requested location of the predetermined pixel of the image resulting from processing of the associated image file, for example, with reference to a virtual wireframe representation of the overall array. The display unit can then resolve the image file to determine what portion of the image (if any) corresponds to the display on that display unit.
  • the display unit can process the image file and display the discrete portion of the resulting image in the correct orientation and thereby, together with the other image portions displayed by the other units in the array, display an observable mosaic of the original image.
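A minimal sketch of the local determination each unit can perform (hypothetical names; this assumes a position command that places the image's predetermined center pixel at a given global coordinate, one of the options described above):

```python
def visible_portion(unit_bounds, image_center, image_w, image_h):
    """Given a unit's global pixel bounds (x0, y0, x1, y1) and a position
    command placing the image's center pixel at image_center, return the
    overlapping rectangle in image-local coordinates, or None if the image
    misses this unit entirely."""
    ux0, uy0, ux1, uy1 = unit_bounds
    cx, cy = image_center
    ix0, iy0 = cx - image_w // 2, cy - image_h // 2   # image's global origin
    # Intersect the image rectangle with the unit's rectangle.
    ox0, oy0 = max(ux0, ix0), max(uy0, iy0)
    ox1, oy1 = min(ux1, ix0 + image_w), min(uy1, iy0 + image_h)
    if ox0 >= ox1 or oy0 >= oy1:
        return None   # no overlap: this unit displays nothing for this image
    # Convert the overlap back into image-local coordinates.
    return ox0 - ix0, oy0 - iy0, ox1 - ix0, oy1 - iy0
```

Each unit evaluates this independently against the same broadcast position command, so the portions rendered by all units tile together into the observable mosaic.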
  • the system can generate the desired view of the image more quickly than a system in which an image is broken down and pre-processed into individual parts, which are then individually distributed to the corresponding display unit.
  • when the display units receive a series of position commands requesting display of an image stored in the local memory of each display unit at different locations on the array, all of the display units can re-process and display their respective image portions much more quickly, thereby providing the user with a more responsive and useful system.
  • Some embodiments of the systems disclosed herein can also allow the display of one or more images that can be larger than the memory of the individual display units in the tiled array display can handle.
  • multiple images can be prohibitively expensive to load due to, for example, disk access times of loading hundreds of megabytes or gigabytes, and a lack of available memory.
  • each display unit in the tiled array display can be configured to only load or process the data needed for its local portion of the overall image displayed on the tiled array display.
  • the system can thus greatly reduce the amount of data to load into memory, for example, and allows rapid manipulation of the image portions on the individual display units. If more data is needed, the required data can also be loaded and displayed.
  • the system can advantageously employ parallel processing techniques, such as multithreading, to improve system performance.
  • each full size or original image can be preprocessed and stored in a hierarchical format that includes one or more sub-images, where each sub-image can be a reduced size or reduced resolution version of the original image.
  • the largest sub-image can be the same size as the original image and/or include image content from the original image.
  • each sub-image can be stored as one or more blocks, or tiles, to allow rapid access to a particular part of the image without having to access entire rows or columns of pixels.
  • each sub-image can be stored as one or more resolution layers to allow for rapid access to, and display of, a particular resolution of the original image.
  • the hierarchical format can be particularly beneficial because only the portion of each original image required for each tile of the tiled array display needs to be resident in the memory of the respective display node. Thus, only the blocks or resolution layers of the appropriate sub-image may need to be loaded, which improves responsiveness of the tiled array display system and performance of the individual display units. In some embodiments, surrounding blocks and blocks from higher and lower levels in the hierarchy can also be pre-fetched for improved performance. This can be advantageous for enabling each tile node to support the display of more than one original image or portions thereof.
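One way a node might pick which resolution layer of the hierarchy to load, sketched under the common assumption (not stated explicitly here) that each level of the pyramid halves the resolution of the level above it:

```python
import math

def pyramid_level(zoom, num_levels):
    """Pick the hierarchy level whose resolution best matches the requested
    zoom factor (1.0 = native). Level 0 is the full-resolution sub-image;
    each successive level is assumed to halve the resolution."""
    if zoom >= 1.0:
        return 0   # at or above native zoom, only level 0 has enough detail
    # Zooming out by 2^k means level k suffices; clamp to the coarsest level.
    return min(num_levels - 1, int(math.floor(math.log2(1.0 / zoom))))
```

For example, displaying an image at one-fifth scale on a 5-level pyramid would select level 2 (quarter resolution), so the node never touches the full-resolution pixel data.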
  • a resource management approach can be used to support increased interactivity by reducing the amount of data loaded into memory, for example, and/or allowing portions of several images to be resident on each individual tile display unit, which supports the display and manipulation of multiple full size images on the array display.
  • FIG. 1A is a schematic representation illustrating an embodiment of a highly interactive tiled array display.
  • FIG. 1B is a schematic diagram illustrating an embodiment of an interactive tiled display system for displaying and manipulating one or more images on an array-type display.
  • FIG. 2 is a flow chart illustrating an embodiment of a method for processing one or more images for display on an array display.
  • FIG. 3 is a flow chart illustrating an embodiment of a method for displaying an image on an array display.
  • FIG. 4 is a flow chart illustrating an embodiment of a method for controlling the display of one or more images on an array display.
  • FIG. 5A schematically illustrates an image overlapping two display nodes in an array display.
  • FIG. 5B schematically illustrates the image of FIG. 5A partitioned over two display nodes.
  • FIG. 6 is a schematic diagram of a tiled array display and a control node, the control node including a user interface having a schematic representation of the tiled array display.
  • the present disclosure generally relates to the interactive display and manipulation of large images or large datasets on an array-type display, such as a tiled display system.
  • a system that implements a highly interactive, parallelized large-image display can be used.
  • the interactive tiled display system comprises a high resolution, large format display system configured to render large-scale images or datasets on a large array of display units, or “nodes.”
  • the interactive tiled display system advantageously allows a viewer to see minute, high-resolution detail in the context of a large overall image.
  • the interactive tiled display system allows a user to interact with the display in real-time.
  • embodiments described below can allow panning, zooming, resizing, cropping, rotating, color shading, transparency controlling, and the like of images and/or other content on the tiled display, thereby enabling users to examine content with increased flexibility and effectiveness.
  • the interactive tiled display system can employ parallel processing techniques (e.g., multithreading) to increase the speed with which the large image or dataset is displayed and/or manipulated on the tiled array display.
  • the use of parallel processing techniques can advantageously result in increased scalability without sacrificing performance.
  • the interactive tiled display system comprises a symmetric multiprocessing (SMP) system, wherein each of the tiled display units share the same memory, system bus, and input/output (I/O) system.
  • the interactive tiled display system comprises a massively parallel processing (MPP) system, wherein each of the display units, or nodes, has its own memory, bus, and I/O system.
  • the interactive tiled display system comprises a clustered system, wherein the display nodes are coupled using local area network (LAN) technology and each display node comprises an SMP machine.
  • the interactive tiled display system comprises a distributed memory parallel processing system, such as a non-uniform memory access (NUMA) or distributed shared memory system.
  • the interactive tiled display system comprises a global shared memory system, such as a uniform memory access (UMA) system.
  • the interactive tiled display system comprises a cache only memory architecture (COMA) system.
  • the interactive tiled display system comprises a parallel random access machine (PRAM) system.
  • Other types of parallel processing systems or modified versions of the above systems can also be used without departing from the spirit and/or scope of the disclosure.
  • the interactive tiled display system can comprise, without limitation, one or more of the following parallel processing architectures: linear array, ring array, binary tree, 2D mesh, torus, shared-memory, and hypercube.
  • the nodes forming the interactive tiled display systems described below can identify and process a discrete portion of a full image to be displayed on the array. This can reduce the amount of processing required by a control node used to control the nodes, and thus can increase the responsiveness of the overall tiled display system.
  • the interactive tiled display system can also allow for movement of images around the tiled display at a rate that is much faster than other techniques.
  • the interactive tiled display system can be configured to dynamically set up binary trees to increase the speed with which information or data is communicated to the display nodes. The number of binary trees can be dynamically modified (e.g., increased or decreased) depending on communication received from the control node.
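The binary-tree distribution idea can be sketched as follows (an illustrative implicit-tree scheme with hypothetical names; the patent does not specify this particular construction): the control node sends a message to one root node, and each node that receives it forwards to two children, so the message reaches all N nodes in O(log N) forwarding rounds rather than N sequential sends.

```python
def binary_tree_children(node_id, num_nodes):
    """Children of node_id in an implicit binary broadcast tree over
    num_nodes display nodes (node 0 receives directly from the control
    node and forwards to nodes 1 and 2, and so on)."""
    kids = [2 * node_id + 1, 2 * node_id + 2]
    return [k for k in kids if k < num_nodes]

def broadcast_rounds(num_nodes):
    """Forwarding rounds for a message to reach every node when each
    reached node sends to both of its children in one round."""
    rounds, reached = 0, 1
    while reached < num_nodes:
        reached = 2 * reached + 1   # a full binary tree doubles-plus-one per level
        rounds += 1
    return rounds
```

For a 25-node (5-by-5) array this gives 4 forwarding rounds instead of 25 point-to-point sends from the control node, which is the kind of speedup that motivates setting such trees up dynamically.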
  • Some of the embodiments described below can accommodate viewing of multiple highly detailed, large images, which can exceed billions of pixels, displayed as part of a high resolution, coordinated workspace on a tiled display.
  • the real-time or near real-time interaction with the multiple image data, which can be received from multiple image data sources, can include panning, zooming, rotating, color filtering, and transparency control of the images.
  • the interactive tiled display system can be beneficial for viewing or visualizing various types of image data, such as medical imagery, cancer cell imagery, satellite imagery, geoscience data, oil monitoring, weather monitoring or prediction, traffic control, astronomy, artwork, and the like.
  • a control unit can function as a front end user interface to allow a user to control the placement and manipulation of content on the tiled array display via user interaction devices (e.g., keyboard, mouse) associated with the control unit.
  • the control unit can include a graphical user interface having a display that “mirrors” the tiled array display. Manipulation of an image on the graphical user interface on the control unit can be used to control manipulation of the image on the tiled array display.
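The "mirroring" relationship implies a simple coordinate mapping between the control node's miniature view and the full wall; a minimal sketch (hypothetical names and sizes, using the 25,600×8,000 example wall described below):

```python
def gui_to_wall(x, y, gui_size, wall_size):
    """Scale a point from the control node's miniature GUI (which mirrors
    the tiled array) to the corresponding pixel on the full wall."""
    gw, gh = gui_size
    ww, wh = wall_size
    return x * ww // gw, y * wh // gh

# Dragging an image to the center of a 1280x800 control GUI places it at
# the center of a 25,600x8,000 wall.
print(gui_to_wall(640, 400, (1280, 800), (25600, 8000)))   # (12800, 4000)
```

The control node can then broadcast the scaled coordinate as part of a position command, leaving the per-node portion computation to the display nodes themselves.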
  • Such systems and methods can be useful when a user desires to display an image that is larger than a traditional display connected to a user computer can handle.
  • FIG. 1A is a schematic representation illustrating an embodiment of a highly interactive tiled array display 10 .
  • the interactive tiled array display 10 can comprise a plurality of display units 100 , wherein each display unit 100 is configured to determine a respective portion of the large image to be output on its associated display and to process and display the respective portion in parallel.
  • the interactive tiled array display 10 includes a 5-by-5 array of display units.
  • the highly interactive tiled array display 10 comprises a 10-by-5 array of display units (each comprising a 2560×1600 pixel display) to form a 25,600×8,000 pixel display.
  • the number of display units 100, and thus the size of the tiled array, is not limited to any particular array size, and can be expanded as large as space and network bandwidth permit.
  • FIG. 1B is a block diagram illustrating an interactive tiled display system 50 for displaying and manipulating one or more images on an array-type display, such as the highly interactive tiled array display 10 , which can be “parallelized.”
  • the interactive tiled display system 50 can comprise a plurality of display nodes, or display units 100 (including display nodes 100 A, 100 B, and 100 N that are representative of any quantity of display nodes) that are in communication with a network 160 and other devices via the network 160 , including a control node 102 .
  • the plurality of display units 100 can be connected in a serial configuration (e.g., a daisy chain implementation) instead of a parallel implementation.
  • an original image data source 164 which can be, for example, a storage device (e.g., a network-attached storage device, a RAID storage system, or a shared local storage over parallel file system, such as a parallel virtual file system (PVFS)) or a computing device, is also in communication with the network 160 .
  • the original image data source 164 comprises a shared memory storage device/file system that is accessible by each of the display nodes 100 via the network 160 .
  • the image data source 164 can comprise a plurality of preloaded content, such as large, high-resolution digital images and/or datasets.
  • the control node 102 can comprise one or more computer devices that gather or make available information about the state of the overall tiled display system 50 , including the display nodes 100 , through the use of messages.
  • messages can be any type of message including data, packets, signals, state values, display parameters, etc., and are referred to interchangeably below as “messages” or “state messages.”
  • the messages comprise “global” messages, which can be broadcast to all the display nodes 100 by the control node 102 via the network 160 .
  • the control node 102 can also be configured to transmit such messages to a subset of one or more of the display nodes (e.g., via multicast or unicast message techniques).
  • the messages can include information about a current state of an image to be, or being, displayed on the tiled array display.
  • the messages can include information about a location, size, resolution, orientation, color scheme, or identification of the image.
  • the messages comprise image data or other data content.
  • the communication of messages to the display nodes 100 can be controlled by a multi-port Gigabit Ethernet switch; however other switches, hubs, or routers can also be used.
  • the control node 102 can comprise a desktop computer, laptop, tablet, notebook, handheld computing device (e.g., a smartphone or PDA), a server, or the like.
  • the control node 102 comprises one or more multi-core and/or multiprocessor computing systems configured for use in implementing parallel processing techniques, such as MIMD or SPMD techniques using shared and/or distributed memory.
  • control node 102 can function as a front end user interface to the interactive tiled display system 50 that allows a user to interact with the overall system by manipulating the image or images displayed on its associated display, which in turn manipulates the image or images on the tiled display. Such functions are described in more detail below.
  • any of the display nodes 100 N and/or the control node 102 can be used to implement the systems and methods described herein.
  • the display node 100 A and the control node 102 can be configured to manage the display of information on tiled display systems.
  • the control node 102 and the display nodes 100 are configured to implement a shared memory MIMD system and/or a message passing MIMD system.
  • the functionality provided for in the components and modules of the display node 100 A and the control node 102 can be combined into fewer components and modules or further separated into additional components and modules.
  • each of the display nodes 100 is configured to render and display a portion of a large, high-resolution image.
  • all the display nodes 100 work in parallel to render the total overall image across the plurality of display nodes 100 , thereby avoiding the performance limitations that would arise from dividing and rendering the entire high resolution image into discrete parts on a single computing device, then transmitting the discrete parts to the corresponding nodes of the tiled array.
  • the display nodes 100 can display digital images or other data content of 1 gigabyte or larger in size.
  • the display node 100 A can include, for example, a computing device, such as a personal computer (PC), that is IBM, Macintosh, or Linux/Unix compatible.
  • the computing device comprises a server, a laptop computer, a monitor with a built-in PC, a cell phone, a personal digital assistant, a kiosk, or an audio player, for example.
  • the display node 100 A includes a central processing unit (“CPU”) 105 , which can include one or more multi-core processors, microprocessors, graphics processors, digital signal processors, and/or the like.
  • the display node 100 A can further include a memory 130 , such as random access memory (“RAM”) for temporary storage of information and a read only memory (“ROM”) for permanent storage of information, and a mass storage device 120 , such as one or more hard drives, diskettes, and/or optical media storage devices. Other arrangements of memory devices can also be used.
  • the display node 100 A can be considered as having a “memory system.”
  • the term “memory system” can comprise only one or any combination of the memory device 130 , mass storage device 120 , processed image data source 162 , and any number of additional memory devices 130 , mass storage devices 120 , processed image data sources 162 , or any other type of memory.
  • the modules of the display node 100 A are connected to the CPU 105 using a standards-based bus system.
  • the standards-based bus system can be Peripheral Component Interconnect (PCI), Microchannel, SCSI, Industrial Standard Architecture (ISA) and Extended ISA (EISA) architectures, for example.
  • Other types of systems can also be used.
  • the box illustrated in FIG. 1B and identified with the reference numeral 100 A can be considered as a schematic representation of a housing of the display node 100 A.
  • a housing can be formed in any manner, such as those designs currently used on commercially available monitors or televisions, including LCD, plasma, LED, or other types of devices.
  • Such a housing can be shaped, and enlarged if necessary, to enclose the devices noted above. Such shaping is fully within the skill of one of ordinary skill in the art of video monitor or television design.
  • the housing can also include a mount, such as the mounting hardware normally included on the rear sides of the LCD, plasma, LED monitors and televisions that are currently widely available on the commercial market. Such mounts can be described as “wall mounts.” Other hardware can also be used. Such hardware can be used in conjunction with an appropriately shaped rack designed to connect to and support a plurality of display nodes 100 A having such mounts on their respective housings with the plurality of display nodes 100 A arranged adjacent each other, in a tiled layout, such as the layout schematically represented in FIG. 1A .
  • the display node 100 A is generally controlled and coordinated by operating system software, such as Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Linux, SunOS, Solaris, a real-time operating system (RTOS), MAC OS X, or other compatible operating systems.
  • the operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.
  • the display node 100 A can include one or more input/output (I/O) devices and interfaces 110 , such as a keyboard, a mouse, a touchpad, a scroll wheel, a trackball, a voice activation unit, a haptic input device, a printer, or I/O ports.
  • the display node 100 A can include one or more display devices 166 , such as a monitor, that allows the visual presentation of data, such as the image data described herein, to a user.
  • the display device 166 can comprise an LCD display, such as a 30-inch LCD Cinema Display available from Apple, Inc. or a 46-inch 460 UXn-UD LCD display available from Samsung Electronics.
  • the display device 166 comprises a plasma, CRT, or Organic LED display. In yet other embodiments, the display device 166 comprises a projector-based display. In some embodiments, the display device 166 provides for the presentation of scientific data, GUIs, application software data, and multimedia presentations, for example.
  • the display node 100 A may also include one or more multimedia devices 140 , such as speakers, video cards, graphics accelerators, cameras, webcams, and microphones, for example.
  • the I/O devices and interfaces 110 can provide a communication interface to various external devices.
  • the display node 100 A can be coupled to a network 160 that comprises one or more of a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a personal area network (PAN), a virtual private network (VPN), or the Internet, for example, via a wired, wireless, or combination of wired and wireless, communication link 115 .
  • the network 160 communicates with various computing devices and/or other electronic devices via wired and/or wireless communication links.
  • the network 160 comprises an IEEE 802.11g WLAN.
  • the display node 100 A can include, or may be coupled to via a network connection, a processed image data source 162 , such as a database, that includes information about one or more images to display on the tiled array display.
  • the information supplied by the processed image data source 162 can include a full size or original image that was or will be preprocessed and stored in a hierarchical format that includes sub-images, with each sub-image being a reduced size and/or reduced resolution version of the original image.
  • a reduced resolution sub-image can be generated from an original full resolution image by deleting rows and columns of the pixels of the original image at predetermined spacings, thereby generating a lower resolution version of the full image.
  • a reduced size sub image can be generated by cropping the original image. Any other technique of creating sub-images can also be used.
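The row/column-deletion scheme described above amounts to nearest-neighbour decimation; a minimal sketch on an image represented as a 2-D list of pixel values (illustrative only, since production code would typically operate on the stored tile data instead):

```python
def downsample(image, step):
    """Generate a reduced-resolution sub-image by keeping every `step`-th
    row and every `step`-th column of the original, i.e. deleting rows and
    columns at the predetermined spacing described above."""
    return [row[::step] for row in image[::step]]

# A 4x4 image decimated by 2 keeps pixels (0,0), (0,2), (2,0), (2,2),
# yielding a 2x2 half-resolution version.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(downsample(img, 2))   # [[0, 2], [8, 10]]
```

Repeating this with step 2 at each level produces the halving pyramid of sub-images stored in the hierarchical format.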
  • the largest sub-image can be the same size as the original image and/or include image content from the original image.
  • each sub-image can be stored as one or more blocks, or tiles, to allow rapid access to a particular part of the original image without having to access entire rows and/or columns of pixels. In some embodiments, this can allow the display node 100 A to fetch exactly the level of detail (sub-image) it requires and/or to quickly fetch the needed blocks that make up the portion of the image to be output to the display 166 .
  • the display node 100 A can be connected to other computing devices through a bus or the network 160 .
  • the original image data source 164 can include one or more original or full size digital images.
  • the digital images can comprise JPEG, GIF, PNG, TIFF, BMP, and/or other digital image formats.
  • the original image data source 164 can include one or more original or full size digital images that can be tens or hundreds of millions of pixels (e.g., 200-600 Megapixels), or even billions of pixels.
  • the display node 100 A can preload and preprocess the original images stored in the original image data source 164 and store the result in a hierarchical format in the processed image data source 162 .
  • the display node 100 A can determine the correct portion(s) of the original image(s) to be displayed on its associated display 166 and output the corresponding preprocessed image data for display.
  • the processed image data source 162 can be used to reduce the amount of data that needs to be loaded in memory and support faster manipulation or modulation of images on the tiled array display.
  • the hierarchical format used for storing the processed image data comprises a tiled pyramidal TIFF format.
  • other formats can also be used.
  • the tiled pyramidal TIFF format can allow the display node 100 A to store multiple resolutions of a preloaded image in the mass storage device 120 , memory 130 or in the processed image data source 162 .
  • the tiled pyramidal TIFF format can allow the display node 100 A to partition the original image into smaller tiles of data so as to reduce the amount of image data to be fetched and processed, thereby enhancing system performance.
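Given such a tiled partitioning, a display node only needs the tiles that intersect its visible region. A minimal sketch (hypothetical names; the 256-pixel tile size is an illustrative assumption, as the patent does not fix one):

```python
def tiles_for_region(x0, y0, x1, y1, tile_size=256):
    """Tile (column, row) indices intersecting the pixel region
    [x0, x1) x [y0, y1), so a node fetches only those blocks rather
    than entire rows or columns of pixels."""
    cols = range(x0 // tile_size, (x1 - 1) // tile_size + 1)
    rows = range(y0 // tile_size, (y1 - 1) // tile_size + 1)
    return [(c, r) for r in rows for c in cols]

# A 512x256 region touches exactly two 256-pixel tiles.
print(tiles_for_region(0, 0, 512, 256))   # [(0, 0), (1, 0)]
```

Combined with the pyramid-level selection, this is what lets each node bound its fetching and processing to its own small window into a multi-gigapixel image.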
  • the original images stored in the original image data source 164 can be compressed or uncompressed images.
  • the processed image data source 162 can also be configured to receive a compressed image from the original image data source 164 . Once received, the display node 100 A can decompress an original image and then preprocess the original image into a set of one or more images that are compressed or uncompressed and store them in the processed image data source 162 , in the mass storage device 120 , or in memory 130 .
  • each tile of the tiled pyramidal TIFF images can be compressed using lossless or lossy compression algorithms, such as Deflate, LZW, or JPEG. Spatial identifiers can be used to identify various portions or resolutions of the sub-images to facilitate efficient extraction of different regions or resolutions of the original image(s).
  • one or more of the image data sources may be implemented using a relational database, such as Sybase, Oracle, CodeBase and Microsoft® SQL Server, as well as other types of databases such as, for example, a flat file database, an entity-relationship database, an object-oriented database, and/or a record-based database.
  • the display node 100 A can also include application modules that can be executed by the CPU 105 .
  • the application modules include the image processing module 150 and the image display module 155 , which are discussed in further detail below.
  • These modules can include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • each display node 100 A can be configured to execute instructions in the image processing module 150 , among others, in order to support user interactivity by reducing the amount of data loaded into the memory 130 when an image needs to be displayed on the interactive tiled display system 50 .
  • the image processing module 150 can allow portions of several images to be resident on the display 166 , thus supporting display and manipulation of multiple large images across the multiple display nodes 100 .
  • an original large image can be tens of billions of pixels.
  • the image processing module 150 can preprocess and store multiple full size or original images by determining the correct portion of the original images to be displayed on a specific display node 100 .
  • each original image can be stored in a hierarchical format (e.g., tiled pyramidal TIFF format) that includes sub-images that can be a reduced size and/or reduced resolution version of the original image.
  • the largest sub-image can be the same size as the original image and/or include image content from the original image to support resizing of the image, such as zooming in and/or out.
  • the image processing module 150 can store each sub-image of the original image as one or more blocks, or tiles, to allow rapid access to a particular part of the full size image without having to access entire rows or columns of pixels.
  • the image processing module 150 can be configured to store each sub-image of the original image as one or more layers of resolution to allow rapid access to a particular resolution layer without having to decompress or compress large resolution images for display, thereby improving efficiency. Storing multiple resolutions of the preloaded image can advantageously allow the use of parallel processing techniques (e.g., threading) to improve performance speed.
  • the image processing module 150 can be further configured to send requests to control node 102 for information about other display nodes (e.g., 100 B, 100 C, etc.) and/or vice versa.
  • messages can be exchanged between the control node 102 and/or other display nodes (e.g., 100 B, 100 C, etc.) that include information about the overall state of the aggregate tiled display, or about a particular display node.
  • the messages comprise Scalable Parallel and Distributed Systems (SPDS) messages.
  • SPDS Messaging is a C++ library that encapsulates sockets and data into abstractions called Endpoints and Messages.
  • the control path uses multicast UDP Endpoints, which means Messages are received by one or more subscriber nodes. Broadcast UDP Endpoints can also be used to send Messages to all nodes on a subnet.
  • the SPDS Messaging library can use Unix/Winsock socket operations to implement the Endpoints and the send and receive operations used to transfer Messages.
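The Endpoint/Message idea described above can be sketched in a few lines. The real SPDS library is C++ and uses multicast UDP for the control path; this minimal Python version over unicast UDP loopback is illustrative only, and the class and method names beyond "Endpoint"/"Message" are assumptions, not the library's actual API.

```python
import socket

class Endpoint:
    """Minimal sketch of an SPDS-style Endpoint: a thin wrapper that hides
    socket details behind send/receive of whole Messages (here, raw bytes).
    Illustrative only; not the actual SPDS C++ interface."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind((host, port))      # port 0 = let the OS pick a free port
        self.addr = self.sock.getsockname()

    def send(self, message: bytes, to):
        self.sock.sendto(message, to)

    def receive(self, bufsize=65507) -> bytes:
        data, _sender = self.sock.recvfrom(bufsize)
        return data

# A control node could publish a state message to a display node:
display = Endpoint()
control = Endpoint()
control.send(b"STATE img=42 x=512 y=0 zoom=1.0", display.addr)
print(display.receive())
```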
  • the display node 100 A can also execute instructions in the image display module 155 to display one or more images or portions thereof and manipulate the images.
  • an original image that is full size can be preprocessed by the image processing module 150 and then stored in the processed image data source 162 or in other local memory (e.g., memory 130 ).
  • the image display module 155 can enable a highly interactive display space that spans multiple display nodes 100 .
  • the tiled display system 50 can implement further parallel processing techniques (e.g., MIMD or SPMD techniques and/or multithreading) to improve system performance.
  • the image display module 155 can load the appropriate sub-image of an original image in memory 130 and on the display 166 .
  • surrounding blocks and blocks from higher and lower levels can also be pre-fetched for higher performance by the image display module 155 . This can allow each display node 100 A to support the display of more than one such image or portions thereof.
  • the image display module 155 can determine its individual display boundary based on the resolution layer determined by the image processing module 150 (e.g., the resolution layer having the smallest image size larger than the native resolution of the display 166 ). Based on the determination of the individual display boundary, the image display module 155 can determine which tiles overlap with its display boundary and can fetch the overlapping and/or surrounding blocks, or tiles. Additionally, a resource management approach can support interactivity by reducing the amount of data loaded and allowing portions of several images to be resident on each tile, thus supporting display and manipulation of multiple large images.
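The overlapping-tile determination described above reduces to simple integer arithmetic. A hedged sketch (the patent does not prescribe this exact interface): given a display node's pixel boundary within the image, compute which fixed-size tiles intersect it.

```python
def tiles_for_region(x0, y0, x1, y1, tile=256):
    """Return the (col, row) indices of every tile overlapping the half-open
    pixel region [x0, x1) x [y0, y1).  A display node that knows its boundary
    within the overall image can fetch exactly these tiles instead of
    decoding the whole image."""
    cols = range(x0 // tile, (x1 - 1) // tile + 1)
    rows = range(y0 // tile, (y1 - 1) // tile + 1)
    return [(c, r) for r in rows for c in cols]

# A 1920x1080 display whose top-left corner sits at pixel (2048, 512)
# of the source image needs these 256x256 tiles:
needed = tiles_for_region(2048, 512, 2048 + 1920, 512 + 1080)
print(len(needed))  # 8 columns x 5 rows = 40 tiles
```

Two adjacent display nodes computing their own tile lists this way will naturally share the tiles that straddle their common border, which is the non-exclusive grouping described below.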
  • the image display module 155 can be configured to allow the use of multiple highly detailed images, which can exceed billions of pixels, to be displayed as part of a high resolution, coordinated workspace on a tiled display that includes multiple display nodes 100 . Further, the image display module 155 can be configured to allow in real-time or in near real-time interaction with multiple images by allowing moving, zooming, rotating, color filtering, and transparency controlling of images on the display node 100 . In some embodiments, the image display module 155 can be configured to perform color filtering, shading or transparency control using a vertex and/or pixel shader, such as a Cg shader.
  • the user may use a front end interface, such as the control node 102 , and select to rotate an image on the tiled display.
  • the user can manipulate the display via user input devices 151 (for example, a keyboard or 3D mouse).
  • the image display module 155 can respond to the user's selection, by using a reduced size or reduced resolution version (e.g., thumbnail) of the original image, which may be stored in the processed image data source 162 , to quickly adjust its display 166 .
  • the image display module 155 can replace the image being displayed with the thumbnail during the rotation process.
  • an increased resolution image can be displayed upon completion of the rotation or other manipulation of the image. Additionally, user manipulations such as those noted above, can be used to generate messages sent to the control nodes, described in greater detail below.
  • the image display module 155 can also be configured to exchange messages with the control node 102 or the other display nodes (e.g., 100 B, 100 C, etc.) about the state of the tiled display, such as which portion of the original image needs to be displayed by respective nodes.
  • the image display module 155 can provide a highly interactive experience that has numerous applications, including the manipulation of data about medical conditions, cancer cells, satellite images, geosciences, oil monitoring, weather monitoring or prediction, astronomy, and the like.
  • although FIG. 1B has been described with respect to display nodes 100 , a control node 102 , and an image data source 164 , certain of the features of the system shown in FIG. 1B can be implemented using other types of computing devices communicating over the network 160 .
  • the control node 102 can communicate over the network 160 with a media source device (instead of the image data source 164 ) and one or more destination computing devices (instead of the display nodes 100 ).
  • the control node 102 can broker a connection between the media source device and a destination computing device.
  • the control node 102 locates media data stored on the media source device and obtains the media data or a portion thereof (such as a thumbnail) from the media source device.
  • the control node 102 can also be configured to send the media data or the portion thereof to the destination computing device, along with network communication or connectivity data.
  • the network communication data can enable the destination computing device to communicate with the media source device to obtain media data.
  • the network communication data could include, for example, a network address (such as an IP address) of the media source device, a proxy for the media source device, an anycast IP address for a plurality of media source devices, or the like.
  • providing the network communication data from the control node 102 to the destination computing device enables the destination computing device to obtain media, including media updates, from the media source device.
  • the control node 102 can be less of a bottleneck for communications between the media source device and the destination computing device.
  • the destination computing device can report or otherwise provide the media updates it receives or a portion thereof to the control node 102 .
  • the destination computing device can provide a thumbnail, a reduced frame rate video, metadata associated with the media updates, combinations of the same, and the like.
  • the control node 102 can therefore keep track of the media data provided to the destination control device.
  • control node 102 can provide network communication information to the media source device instead of or in addition to providing communication information to the destination computing device. This network communication information can allow the media source device to communicate with the destination computing device. For example, the control node 102 can provide a network address of the destination computing device to the media source device. The media source device can then push media to the destination computing device.
  • control node 102 can identify media stored on the media computing device without requesting the media.
  • the control node 102 can provide network communication data to the destination computing device, which allows the destination computing device to obtain the media from the media server. Thus, little or no media might pass through the control node 102 from the media source device to the destination computing device, further reducing bottleneck effects of the control node 102 .
  • the interactive tiled display system 50 can also be configured to support other data formats, in addition to large digital images.
  • the interactive tiled display system 50 can be configured to support and display Normalized Difference Vegetation Index (NDVI) data, MRI scan data, SOAR 3D terrain data, digital video data (including standard and HDTV format), and streaming content (e.g., from a webcam).
  • the interactive tiled display system 50 can also be configured to display a real-time replica of content displayed on the display screen of one or more computing devices in communication with the network 160 , as described in U.S. patent application Ser. No. 12/487,590 entitled “Systems, Methods, and Devices for Dynamic Management of Data Streams Updating Displays,” the entire content of which is hereby expressly incorporated herein by reference in its entirety.
  • the interactive tiled display system 50 comprises middleware that can be configured to render and display an OpenGL application by executing instructions to launch and manage instances of the OpenGL application on each of the display nodes 100 in parallel through a thread-based network communication layer.
  • FIG. 2 is a flowchart illustrating an embodiment of a method of preprocessing images that can provide a high level of interaction and manipulation of the images on tiled display systems.
  • the method illustrated in FIG. 2 can be stored as process instructions (for example, on any type of computer-readable storage medium) accessible and executable by the image processing module 150 and/or other components of the display node 100 A, the control node 102 , or any other computer or system connected to the tiled display system 50 directly or over any type of network.
  • certain of the blocks described below can be removed, others may be added, and the sequence of the blocks can be altered.
  • each of the display nodes 100 can be configured to perform the method, as well as other methods discussed below, using parallel processing techniques.
  • one or more full size or “original” images are received by each of the display nodes 100 .
  • the one or more full size images can be sent from the original image data source 164 to the display nodes 100 by way of the control node 102 .
  • the full size images can also be sent over the network 160 from a computing device or received via a physical I/O port of the display node 100 A, such as a USB port.
  • the full size images can include various types of data for visualization, such as still photographs, videos, or other images of anything, including but not limited to medical, satellite, geosciences, oil, weather monitoring or prediction, and astronomy imagery, including those discussed above with reference to FIG. 1B .
  • the image types can vary greatly and be based on a variety of possible types of data.
  • a set of sub-images that allow for rapid access to portions of the one or more full size images can be created.
  • the image processing module 150 preprocesses each original full size image and creates a set of sub-images that are a reduced size and/or reduced resolution (e.g., thumbnail) version of the original image.
  • a sub-image can be formed from a cropped portion of the original image, at the same resolution of the original image.
  • a 10240×7680 pixel image can be stored as 256×256 pixel tiles.
  • the overall image comprises 40×30 tiles, or sub-images, each comprising 256×256 pixels.
  • Other tile sizes can also be used.
  • the tiles can be roughly square.
  • the tiles can comprise rectangular strips. Storing the image data as sub-images of smaller tile sizes can reduce the total amount of data to be processed for display on each of the display nodes 100 .
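The tile counts quoted above follow from ceiling division of the image dimensions by the tile size. A small sketch (the helper name is illustrative, not from the patent):

```python
import math

def tile_grid(width, height, tile=256):
    """Number of tile columns and rows needed to cover an image.  When the
    dimensions are not an exact multiple of the tile size, the edge tiles
    are simply partial (or padded), hence the ceiling division."""
    return math.ceil(width / tile), math.ceil(height / tile)

print(tile_grid(10240, 7680))  # (40, 30) -> 1200 tiles in total
print(tile_grid(10000, 7500))  # also (40, 30): the edge tiles are partial
```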
  • Each of the display nodes 100 can be configured to only decompress or otherwise “process” the tiles necessary to cover its display area rather than decompressing or processing the entire image.
  • each of the display nodes 100 can be configured to process groups of the tiles that correspond to the respective portions of the image to be displayed on each display unit.
  • each of the groups can comprise different subsets of all the tiles, although not necessarily exclusive groups.
  • groups associated with two adjacent display units may have one or more tiles in common.
  • Storing the image data in a tiled format can advantageously allow faster processing when the zoom level of the image is increased to show more precise detail.
  • the display node instead of processing the lower zoom level portion of the image displayed on a particular display node, the display node can be configured to process only the tiles corresponding to the higher zoom level portion of the image.
  • each of the display nodes containing a portion of the image on the overlapping tile can process the same overlapping tile.
  • the sub-images can comprise multiple resolution layers of the original full size image.
  • a first sub-image can be created that is the same resolution as the original image
  • a second sub-image can be created that is half the resolution of the first sub-image
  • a third sub-image can be created that is half the resolution of the second sub-image, and so on.
  • a 10240×7680 pixel image can be stored in five resolution layers as follows: 10240×7680, 5120×3840, 2560×1920, 1280×960, and 640×480.
  • the resolution layers can also be processed into tiles or tiled sub-images. Storing the image in multiple layers of resolution can advantageously improve efficiency. For example, if the resolution of the display 166 is much smaller than the resolution of the image, the display 166 can only show the detail at the lower resolution and actual pixel details of the image are lost. In some embodiments, it can be advantageous to decompress a lower resolution layer of the image than the original high resolution image.
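The halving scheme described above (each layer half the resolution of the previous) can be sketched directly. The stopping threshold below is an illustrative choice consistent with a five-layer decomposition of a 10240×7680 image, not a value mandated by the patent:

```python
def pyramid_levels(width, height, min_side=640):
    """Halve the image dimensions repeatedly to produce the resolution
    layers of a pyramidal image (cf. tiled pyramidal TIFF).  Stops once
    both sides are at or below min_side (an assumed cutoff)."""
    levels = [(width, height)]
    while width > min_side or height > min_side:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

print(pyramid_levels(10240, 7680))
# [(10240, 7680), (5120, 3840), (2560, 1920), (1280, 960), (640, 480)]
```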
  • the image processing module 150 can create sub-images of neighboring portions of the large image.
  • the image processing module 150 can create sub-images of portions of the large image to be displayed on neighboring display nodes. This can advantageously result in improved processing efficiency when the image is manipulated (e.g., rotated, panned, or resized) by a user and one of the sub-images overlaps a border between two adjacent display nodes.
  • the process can move to block 230
  • the set of sub-images can be stored in the processed image data structure 162 or in memory.
  • each sub-image can be stored in a hierarchical format (e.g., tiled pyramidal TIFF), such that one or more blocks, or tiles, allow rapid access to a particular part of the image without having to access entire rows or columns.
  • this can advantageously allow a display node 100 A of the tiled system that knows the portion of an image it needs to display to fetch exactly the level of detail, for example a corresponding sub-image, and/or to quickly fetch the needed tiles that make up the image portion to be displayed.
  • the sub-images can include a spatial and/or resolution identifier to allow for quick retrieval from the processed image data structure 162 .
  • the sub-images can be identified and retrieved based on a position command, coordinate, or value received in a message from the control node 102 (e.g., a “requested position”). For example, if the size of each tile is 256×256, a position coordinate of (0,0) or (124,0) can return the first tile and a position coordinate of (256,0) or (300,0) can return the next tile along the x-axis.
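The position-to-tile lookup in the example above is a floor division by the tile size. A minimal sketch of that mapping (the function name is illustrative):

```python
def tile_for_position(x, y, tile=256):
    """Map a requested pixel position to the tile containing it, plus that
    tile's top-left corner.  Matches the example in the text: (0,0) and
    (124,0) both fall in the first tile; (256,0) and (300,0) fall in the
    next tile along the x-axis."""
    col, row = x // tile, y // tile
    return (col, row), (col * tile, row * tile)

print(tile_for_position(124, 0))  # ((0, 0), (0, 0))   -> first tile
print(tile_for_position(300, 0))  # ((1, 0), (256, 0)) -> next tile on x
```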
  • the identification and retrieval of tiles can be an operation supported by the TIFF software library (libTIFF).
  • the image data (which can be divided into sub-images of various resolution layers and tiles) is stored as a single file format, with the sub-images being capable of being retrieved by the TIFF software library.
  • each of the sub-images can be stored individually, for example, in a directory file system.
  • each resolution layer can be a different level of directory in a file system, with each directory including all of the tiles for that resolution layer.
  • FIG. 3 schematically illustrates an embodiment of another method of displaying and manipulating images on a tiled array display system.
  • certain of the blocks described below may be removed, others may be added, and the sequence of the blocks may be altered.
  • each of the display nodes 100 can be configured to perform the method in parallel, thereby resulting in faster display and movement of the overall image.
  • a portion of a full size image to display on a particular display node can be calculated or determined. Additionally, multiple portions of one or more full size images to display can also be calculated or determined.
  • this reduces the amount of data to be loaded on each particular display node 100 , as well as a controlling computer, such as the control node 102 , and thus increases the responsiveness of the overall tiled array display. Because of the increased responsiveness, manipulation of images on the tiled array display can be improved.
  • the one or more sub-images that correspond to the portion of the full size image to display are loaded into memory 130 , for example. Because disk access times for loading a full size image can be impractical, each display node 100 A can load tiles or blocks of the appropriate sub-images needed for its local portion of the overall tiled display. In some embodiments, the correct portions of multiple full size images can also be loaded into the memory 130 of the corresponding display node 100 A, 100 B, 100 C, etc.
  • the loading of tiles can advantageously be performed using parallel processing techniques, such as multithreading. Multiple threads can be used to simultaneously load the requested tiles from the processed image data source 162 or from the mass storage device 120 or other memory/file system.
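The multithreaded tile loading described above can be sketched with a thread pool. The loader below is a stand-in that fabricates bytes; a real implementation would read from the processed image data source (e.g., via libtiff). The worker count and function names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def load_tile(tile_id):
    """Stand-in for a real tile fetch (e.g., a tiled-TIFF read from the
    processed image data source); here it just fabricates a payload."""
    col, row = tile_id
    return tile_id, b"\x00" * 16  # placeholder tile bytes

def load_tiles_parallel(tile_ids, workers=8):
    """Fetch the requested tiles concurrently.  Because tile reads are
    I/O-bound, a thread pool lets several loads overlap, as the text
    suggests.  (Sketch only; not the patent's actual implementation.)"""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(load_tile, tile_ids))

tiles = load_tiles_parallel([(c, r) for c in range(8, 16) for r in range(2, 7)])
print(len(tiles))  # 40 tiles loaded
```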
  • the display node 100 A may render the sub-images resident in the memory 130 using a multimedia device 140 , such as a video card.
  • the rendered sub-images can then be placed on display 166 , such as an LCD monitor.
  • FIG. 4 schematically illustrates an embodiment of a method of controlling the display of one or more large images (e.g., hundreds of megapixels or larger) on an interactive tiled array display.
  • certain of the blocks described below may be removed, others may be added, and the sequence of the blocks may be altered.
  • the method is described with reference to a single display node 100 A, the method can be performed by any of the display nodes 100 .
  • each of the display nodes 100 can be configured to perform the method in parallel, thereby resulting in faster display and movement of the overall image.
  • the display node 100 A receives a message (e.g., a command, data, packet, etc.) from the control node 102 , or another source, via the network 160 .
  • the message can be sent over the network 160 by any number of communication mechanisms, for example, TCP/IP over Ethernet or specialized high-speed interconnects, such as Myrinet.
  • the message contains information regarding an image to be displayed, or an image that is currently displayed, on the tiled array display.
  • the message can comprise initial display parameters designated for an image that has not yet been displayed or an image that is currently being displayed.
  • Such display parameters can comprise data indicating a desired or requested position for the corresponding image.
  • the content in the message indicative of the “position” is referred to as a “requested position.”
  • Such a “requested position” could be generated by the control node 102 when a user places, drags to, or clicks on a representation of a subject image ( 510 in FIG. 6 ) on a wire frame representation of the array 10 on a user interface 600 ( FIG. 6 ).
  • the user interface 600 can calculate, estimate, etc., the location of a reference pixel in the image 510 on the array 10 .
  • the reference pixel of the image 510 can be any pixel of the image.
  • the reference pixel can be the center pixel of the image, or a pixel immediately above, below, left or right of the center of the image, if the image does not mathematically have a center pixel.
  • Other reference pixels or reference parts of the image can also be used.
  • the portion of the message indicative of the requested position can be in any form.
  • the “position” can be expressed as a row and column of a virtual pixel grid representing all the pixels of the entire array 10 .
  • the requested position can be indicative of a pixel on a particular display unit, e.g., pixel 1, 1 on display unit 100 G.
  • Other formats can also be used.
  • the message can also include other aspects of the requested position, such as angular orientation.
  • the message can include data indicative that the image should be rotated by an angle.
  • the angle can be relative to the reference pixel, such as the center pixel noted above, any other pixel of the image, or any other point of reference.
  • the message can include further aspects of the requested position, such as zoom or magnification.
  • the message can include an indication that the image should be presented in a 1 to 1 manner, e.g., every pixel of the image represented on a respective single pixel of the corresponding display units.
  • the message can also include other data indicative of other display parameters.
  • the message can contain image data, for example, streaming video data, still image data, or a live display feed from a computing device or other data source in communication with the display nodes 100 via the network 160 as described in U.S. patent application Ser. No. 12/487,590 entitled “Systems, Methods, and Devices for Dynamic Management of Data Streams Updating Displays,” the entire content of which is hereby expressly incorporated herein by reference in its entirety.
  • the message can be received via broadcast, multicast, unicast, a reliable network overlay topology to parallelize transfer, or any other network messaging technique.
  • the multicast messages can be transmitted by unreliable multicast with error checking and handling.
  • the display nodes 100 can subscribe to a distribution group. For example, the display nodes 100 involved in displaying a portion of an image that does not cover the entire tiled display array can subscribe to, or join, a multicast group address. After the display nodes 100 have joined a multicast group, they can receive the multicast messages sent to the multicast group address.
  • the message can comprise a configuration message to initialize or reconfigure a coordinate system of the tiled array display.
  • the configuration message can include a node identifier and a node coordinate.
  • the coordinate system can be determined using letters for the columns and numbers for the rows in spreadsheet fashion. For example, the top left display node can be A1, the display node just to the right of A1 can be B1, the display node just below A1 can be A2, and so on.
  • the coordinate system can be based on a numerical method.
  • the top left display node can be (0,0) with a monitor size of (1.0, 1.0).
  • the display node beneath the top left display node can be (0,1) with a monitor size of (1.0, 1.0).
  • different monitor sizes can be applied when using the numerical coordinate system.
  • Such a coordinate system, or other coordinate systems can be used to allow each of the display nodes 100 to determine which portion of an image to display on its associated display 166 , as also described above.
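The two coordinate schemes above (spreadsheet-style letter/number identifiers and zero-based numeric coordinates) can be bridged with a small conversion. A hedged sketch, restricted to single-letter columns for brevity; the function name is an assumption:

```python
import string

def node_coordinate(node_id):
    """Convert a spreadsheet-style node identifier such as 'A1' or 'B3'
    into zero-based (column, row) numeric coordinates, matching the
    examples in the text: A1 is top-left, B1 is to its right, A2 below."""
    col = string.ascii_uppercase.index(node_id[0])
    row = int(node_id[1:]) - 1
    return col, row

print(node_coordinate("A1"))  # (0, 0)  top-left display node
print(node_coordinate("B1"))  # (1, 0)  just to the right of A1
print(node_coordinate("A2"))  # (0, 1)  just below A1
```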
  • the display nodes 100 can be “pre-configured.”
  • the control node 102 can remotely access each of the display nodes 100 (for example, using virtual network computing (VNC) software) and rename, or otherwise configure, the display nodes 100 , according to a predefined display configuration scheme (for example, the coordinate systems described above).
  • VNC virtual network computing
  • the message received from the control node 102 can also comprise a state message.
  • State messages can be received at periodic intervals or any time a change in a state of an image occurs (for example, due to user interaction or manipulation of the image on the user interface 600 , FIG. 6 ).
  • the state message can be received via broadcast to all of the nodes, multicast to a subset of the nodes, or unicast to a single node.
  • the state messages can include state values of the image to be displayed or updated state values of the image currently being displayed.
  • the state values can include values regarding image position, image size, image resolution, image cropping, angle of rotation, pixel colorimetry or shading, transparency and/or other display parameters.
  • the state values can include X and Y values to indicate a position of the image.
  • the X and Y values can indicate a delta change in position, or offset, from the previous position.
  • Received X and Y values can also indicate an updated location of the central pixel or another reference pixel of an image.
  • the X value indicates the left boundary of the image and the Y value indicates the top boundary of the image.
  • the X and Y values can be represented in resolution units or coordinate values, for example.
  • X and Y values can be used to indicate a size of the image on the overall tiled display.
  • the message can include two sets of X and Y values, indicating the locations of two reference pixels in the image.
  • the two sets of X and Y values can indicate the locations of two opposite corners of the image.
  • reference positions of any two reference pixels in the image could also be used.
  • the X and Y values can be updated to accommodate panning of the image around the tiled array display.
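A state message's X and Y fields may carry either an absolute position or a delta from the previous position, as described above. The sketch below illustrates applying both; the `mode` flag is an invented convention for illustration, not part of the patent's message format:

```python
def apply_position_update(state, msg):
    """Apply the X/Y fields of a state message to an image's stored
    position.  The message may carry an absolute position or a delta
    (offset) from the previous position; 'mode' is an assumed field."""
    x, y = state["x"], state["y"]
    if msg.get("mode") == "delta":
        state["x"], state["y"] = x + msg["x"], y + msg["y"]
    else:
        state["x"], state["y"] = msg["x"], msg["y"]
    return state

state = {"x": 1000, "y": 500}
apply_position_update(state, {"mode": "delta", "x": -40, "y": 12})
print(state)  # {'x': 960, 'y': 512} after panning by the offset
```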
  • the state values can include a Z value to indicate a zoom level or “magnification” of the image.
  • the Z value can be updated to accommodate resizing of the image by zooming in and out.
  • the Z value can be used to determine the appropriate resolution layer to access as a starting point for processing of the image, as described in more detail above. For example, the resolution level just larger (in both height and width) than, or equivalent to, the specified zoom level can be selected as the starting point.
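Selecting "the resolution level just larger (in both height and width) than, or equivalent to, the specified zoom level" can be sketched as a filter-then-minimum over the pyramid layers. The fallback to full resolution when no layer is large enough is an assumption:

```python
def select_layer(layers, target_w, target_h):
    """Pick the resolution layer to use as the starting point for a given
    zoomed size: the smallest layer whose width and height are both >= the
    target, per the text.  Falls back to the largest (full-resolution)
    layer if none qualifies (an assumed policy)."""
    candidates = [l for l in layers if l[0] >= target_w and l[1] >= target_h]
    return min(candidates) if candidates else max(layers)

layers = [(10240, 7680), (5120, 3840), (2560, 1920), (1280, 960), (640, 480)]
print(select_layer(layers, 1920, 1080))  # (2560, 1920): just larger than the target
```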
  • the state values can also include an angle value.
  • the angle value can be used to indicate a requested angle of rotation of the corresponding image.
  • the angle value can be updated to accommodate rotation of the image on the tiled array display.
  • the state values can include colorimetry values of the image pixels.
  • the colorimetry values can include red, green, and blue (RGB) color values.
  • other color schemes, such as CMYK, can also be used.
  • the colorimetry values can also include information configured to adjust other parameters or viewing conditions of the image or to transfer from one color scheme to another, such as chromaticities or transfer functions.
  • the colorimetry values can be updated to accommodate color filtration. Color filtration can advantageously be used to expose new visual information or to match desired aesthetics (for example, for medical image data).
  • the state values can also include an alpha value to indicate a level of transparency of the pixels. The alpha value can be used to accommodate adjustment of the transparency of the image (from fully opaque to fully transparent) and to accommodate visual overlays of multiple images.
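The alpha value's effect on a pixel is the standard linear blend between the overlaid image and whatever lies beneath it. A per-pixel sketch (the patent does not specify this formula, but it is the conventional interpretation of an opacity value):

```python
def alpha_over(top, bottom, alpha):
    """Blend two RGB pixels with a single alpha value in [0, 1]:
    alpha = 1.0 is fully opaque (the top image wins) and alpha = 0.0 is
    fully transparent (the bottom image shows through), sketching the
    per-image transparency control described in the text."""
    return tuple(round(alpha * t + (1 - alpha) * b)
                 for t, b in zip(top, bottom))

print(alpha_over((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128): a 50% overlay
print(alpha_over((255, 0, 0), (0, 0, 255), 1.0))  # (255, 0, 0):   fully opaque
```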
  • multiple state messages can be transmitted, with each broadcast including an ID of the image to which it pertains in addition to the state values of the image.
  • Other image values can also be included within the messages sent by the control node 102 .
  • the message can include an image portion and a display parameter portion.
  • the image portion can comprise image data, such as a computer file in the form of a compressed or uncompressed image, or an identification of an image that is already stored on local memory (e.g., 130 FIG. 1B ) of the display units.
  • the display parameter portion of the message can comprise commands or instructions regarding the display and rendering of the large digital image on the tiled array display.
  • the display parameter portion can include instructions indicating where the image is to be positioned on the tiled array display if it is not intended to take up the entire screen and/or at what resolution the image is to be displayed.
  • the display parameter portion can include resolution information to indicate the resolution with which to display the large image.
  • the display nodes 100 can determine the appropriate resolution based on the native resolution of their associated displays 166 and the various levels of resolution of the received large image.
  • the display parameter portion can include colorimetry or transparency information.
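As a concrete illustration of the state messages described above, a control node might broadcast something like the following. This is a minimal sketch only: the `StateMessage` class, its field names, and the JSON encoding are illustrative assumptions, not the actual encoding used by the disclosed system.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class StateMessage:
    """Hypothetical state message broadcast by the control node.

    The patent describes an image ID, X/Y position values, a Z (zoom)
    value, colorimetry (e.g., RGB) values, and an alpha transparency
    value; the names below are assumptions."""
    image_id: str   # which image the state values pertain to
    x: float        # horizontal position of a reference pixel on the array
    y: float        # vertical position of the reference pixel
    z: float        # zoom level / resolution selector
    rgb: list       # colorimetry values, e.g., color filtration multipliers
    alpha: float    # 1.0 = fully opaque, 0.0 = fully transparent

    def encode(self) -> bytes:
        """Serialize for broadcast or multicast over the network."""
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def decode(payload: bytes) -> "StateMessage":
        return StateMessage(**json.loads(payload.decode("utf-8")))

msg = StateMessage("earth.tif", x=2500.0, y=1200.0, z=2.0,
                   rgb=[1.0, 1.0, 1.0], alpha=1.0)
assert StateMessage.decode(msg.encode()) == msg
```

Because every display node receives the same compact message and derives its own rendering from it, the message stays small regardless of image size.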
  • the CPU 105 of the display node 100 A determines whether the state message impacts a display of an image or other data on its associated display 166 . If no action is required by the display node 100 A, then the display node 100 A continues to receive the state messages. If, in decision block 410 , it is determined that action is required, then the process continues to block 415 .
  • each of the display nodes can include a position module configured to store data indicative of any position in a tiled array display.
  • such a module can be anywhere in the memory system.
  • the module can be configured to accept and retain an input indicative of any position on an array of any size.
  • the module can be embodied in software and accessible through a user interface (not illustrated).
  • the display node 100 A can be physically positioned in the array.
  • the display node 100 A can be positioned adjacent other nodes in an array, such as the array 10 .
  • a user can also input at least a first data indicative of the position of the display node 100 A in the array, the position module receiving and retaining the at least a first data or other data indicative of the position.
  • the user can remotely connect to the display node 100 A through the use of VNC software, as described above, to indicate the position of the display node 100 A in the array.
  • Each of the display nodes can also include one or more image processing modules, such as the image processing module 150, which can be combined into a single module with the position module described above or other modules described herein, can be separate from all other modules, or can be combined into any of various possible combinations of the modules described herein.
  • the module used for the determination performed in decision block 410 can be referred to as an “action required determination module.”
  • the image processing module 150 can be configured to process the message received in operation block 405 in order to determine if the information in the message is indicative of a request for the display unit to process and generate an image on its corresponding local display 106 .
  • the display nodes 100 can generate a virtual map of the entire array 10 , and calculate the boundaries of the image resulting from processing the image in the manner indicated in the message. In some embodiments, the resulting boundaries can be compared to the position of the node 100 A retained in the position module, described above.
  • the result of decision block 410 is “NO”, and the routine can return to operation block 405 and repeat. If, on the other hand, the result indicates that any part of the resulting image would lie on the display unit at the position in the array retained in the position module, then the result of decision block 410 is “YES” and the process can continue to operation block 415.
  • the display units can also be configured to perform other analyses on the information in the message. For example, if the message includes information indicating that only a portion of an image currently displayed on the display unit has changed, but the portion of the image displayed on the display unit performing the analysis has not changed, then the result determined in decision block 410 can also be “NO.”
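The overlap determination of decision block 410 can be reduced to a rectangle intersection test against the virtual map of the array. The sketch below assumes rectangles expressed as (left, top, width, height) in global array pixels; the patent describes only that the image boundaries are compared to the node's retained position, so the exact arithmetic is an assumption.

```python
def action_required(node_rect, image_rect):
    """Decision block 410 (sketch): does the image, positioned per the
    state message, overlap the region of the virtual array assigned to
    this display node? Rectangles are (left, top, width, height) in
    global array pixels."""
    nl, nt, nw, nh = node_rect
    il, it, iw, ih = image_rect
    # No overlap if one rectangle lies entirely to one side of the other.
    return not (il >= nl + nw or il + iw <= nl or
                it >= nt + nh or it + ih <= nt)

# Node at column 2, row 1 of an array of 1920x1080 panels:
node = (1920, 0, 1920, 1080)
assert action_required(node, (1000, 500, 2000, 1000))  # spans this node
assert not action_required(node, (0, 0, 1000, 1000))   # entirely off-node
```

If the test returns False, the node simply keeps listening for the next state message, as in the return to operation block 405.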
  • each of the display nodes 100 can determine the respective portion of the large image that it is responsible for displaying based on the state values of the message.
  • each of the display nodes include a module for further processing the message received to determine what discrete part of the image is to be displayed on the display node performing operation block 415 . For example, in a continuation of the operation described above with reference to decision block 410 , or repeating all of those processes, the display node can determine which pixels of the image fall within its position in the above described virtual array.
  • the display node can decompress the entire image or only a discrete part of the image, depending on the compression format.
  • the display node can process the image file at the maximum resolution of the image, or at another resolution, for example, another resolution included in a pyramidal image data format.
  • the display node can first calculate a mosaic representation of the image, using the boundaries of the sub tiles of the image noted above.
  • the display unit can perform the determination of operation block 415 by determining which tiles or sub tiles of the image would be positioned on the display unit, if the image was oriented according to the message.
  • the display unit can be configured to determine that a sub tile would be positioned on the display unit if any portion of the sub tile would lie on the display unit if the image were positioned according to the message.
  • a determination, such as the above-described determination performed in operation block 415 can be performed by an image processing module, such as the image processing module 150 , an additional module combined with the image processing module 150 , a separate image processing module, or any other device or module configured to perform the functions described herein.
  • the module used to perform the operation performed in operation block 415 can be referred to as a “portion identification module.”
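A portion identification module of the kind described for operation block 415 might enumerate the sub tiles whose bounds intersect the node's region, so that only those tiles are ever decompressed. This is a sketch under assumed conventions (tile grid indexed by column and row, square tiles, global array pixel coordinates); the names are illustrative.

```python
def tiles_on_node(node_rect, image_origin, image_size, tile_size):
    """Operation block 415 (sketch): return (col, row) indices of the
    image tiles whose bounds intersect this node's region of the
    virtual array, given the image's top-left position per the message."""
    nl, nt, nw, nh = node_rect
    ox, oy = image_origin
    iw, ih = image_size
    cols = -(-iw // tile_size)   # ceiling division: tiles per row
    rows = -(-ih // tile_size)
    hits = []
    for r in range(rows):
        for c in range(cols):
            tl = ox + c * tile_size          # tile bounds on the array
            tt = oy + r * tile_size
            tw = min(tile_size, iw - c * tile_size)
            th = min(tile_size, ih - r * tile_size)
            if not (tl >= nl + nw or tl + tw <= nl or
                    tt >= nt + nh or tt + th <= nt):
                hits.append((c, r))
    return hits

# 1024x512 image at (1500, 100) with 256-pixel tiles; node owns x 1920..3840:
hits = tiles_on_node((1920, 0, 1920, 1080), (1500, 100), (1024, 512), 256)
assert len(hits) == 6          # leftmost tile column falls off this node
assert (0, 0) not in hits
```

Only the tiles in `hits` need to be fetched and decoded by this node; the rest of the image is handled by other nodes or not at all.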
  • the routine can continue to optional operation block 420 .
  • the display node 100 A can update image data for display, if beneficial, for example, based on the state values of the message or based on the specific techniques used to perform the above steps. For example, if the Z value has changed (e.g., indicating a zoom in has been requested), new tiles or sub-images can be loaded to illustrate the increased detail of the image portion being displayed on the display node 100 A. In some arrangements where new image data is received, the display node 100 A can create one or more sub-images of the large image based on the image data corresponding to the portion of the large image that it is responsible for displaying.
  • the display unit can process the tiles or sub tiles identified in operation block 415 .
  • the display unit can selectively process the image data according to the identified tiles or sub tiles of a selected resolution or layer of the original image data, for example, where the original image is in the form of a pyramidal resolution format image, with each of the different resolutions broken down into tiles or sub tiles.
  • This technique can provide a further advantage in that, if an image spans at least two (2) display units such that two different portions are identified by these respective display units in operation block 415, neither of the display units needs to process the entire image. Rather, the analysis of operation block 415 can be used to help the display units to efficiently determine which part of an image to process for display, without having to process, resolve, decompress, map, etc., the entire original image at full resolution.
  • One or more sub-images can be generated by the image processing module 150 as described above with respect to block 220 of FIG. 2 .
  • the created sub-images can be stored in memory or in a preprocessed image data structure, as described above with respect to block 230 of FIG. 2 .
  • the step of updating data in operation block 420 can include loading individual tiles, or sub-images via multithreading. For example, multiple tile threads can be configured to run in parallel to increase performance speed, as described above.
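The multithreaded tile loading described for operation block 420 might look like the following, using a pool of worker threads so that disk reads and decompression for multiple tiles proceed in parallel. The `load_tile` body is a stand-in; real tile threads would read and decompress image data.

```python
from concurrent.futures import ThreadPoolExecutor

def load_tile(tile_index):
    """Stand-in for reading one tile from disk; a real display node
    would read and decompress the tile's bytes here."""
    col, row = tile_index
    return {"tile": tile_index, "pixels": b"\x00" * 16}

def load_tiles_parallel(tile_indices, max_threads=4):
    """Operation block 420 (sketch): load the tiles identified for this
    node using multiple tile threads running in parallel to increase
    performance, as described above. Names are assumptions."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        # map() preserves input order, so results align with tile_indices.
        return list(pool.map(load_tile, tile_indices))

tiles = load_tiles_parallel([(0, 0), (1, 0), (0, 1), (1, 1)])
assert len(tiles) == 4
```

Because each node loads only its own tiles, the thread pool per node stays small even when the overall image is very large.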
  • the display node 100 A outputs its respective portion of the large image on its associated display 166 , thereby presenting a high-resolution rendering of the original large image on the tiled array display.
  • certain tiles may not be loaded from a tile thread in time to display on the display 166 , resulting in black tiles being displayed.
  • a lower or the lowest resolution layer image can be used, processed, displayed, etc., to temporarily mask the black tiles until the higher resolution tile is loaded from the tile thread.
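The black-tile masking described above amounts to a render-time fallback: if a tile thread has not yet delivered the full-resolution tile, the node temporarily draws the corresponding region of a lower-resolution layer instead. A sketch, with illustrative names, might be:

```python
def tile_for_display(cache, index, lowres_layer):
    """Render-time tile selection (sketch): prefer the full-resolution
    tile if its tile thread has finished loading it; otherwise
    temporarily substitute the low-resolution layer's version rather
    than displaying a black tile."""
    tile = cache.get(index)
    if tile is not None:
        return tile, "full"
    return lowres_layer[index], "fallback"

cache = {(0, 0): "hi-res tile 0,0"}           # tile (1, 0) not yet loaded
lowres = {(0, 0): "low-res 0,0", (1, 0): "low-res 1,0"}
assert tile_for_display(cache, (0, 0), lowres) == ("hi-res tile 0,0", "full")
assert tile_for_display(cache, (1, 0), lowres) == ("low-res 1,0", "fallback")
```

On a subsequent frame, once the tile thread populates the cache, the same lookup returns the full-resolution tile and the fallback disappears.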
  • the display nodes 100 can be configured to adjust their respective display output to compensate for the presence of bezels between the display nodes 100 .
  • the display units can be configured to operate in two or more modes, such as an aspect ratio preservation mode or an image data preservation mode.
  • the display units can use a map of all the pixels in the array 10 which includes virtual pixels that fill the gaps between adjacent display nodes. These virtual pixels are not assigned to any of the display nodes. As such, an image displayed on the array 10 which overlaps multiple display units will have portions of the image missing, i.e., any portion of a displayed image falling within the gaps formed by the bezels would not be displayed. However, the portions of the image that are displayed will have the aspect ratio of the original image.
  • the term “unified display of the digital image on the array of display units” includes an image displayed as in the aspect ratio preservation mode, despite the missing data.
  • This mode can be desirable when viewing images for aesthetics, such as images of art, architecture, etc., as well as in other applications, because the alignment of features which span bezels will be preserved.
  • the image data preservation mode described below, can also be desirable for viewing images for aesthetic reasons.
  • the display units can use a map of all the pixels in the array 10 which does not include or utilize unassigned pixels in the gaps between adjacent display nodes. Instead, every pixel of the original image to be displayed on the array 10 is assigned to a display node. As such, the aspect ratio of an image displayed on the array 10 which overlaps multiple display units will be affected/distorted in the portion spanning the bezels between the displays ( 166 FIG. 1B ) of the display nodes. However, all of the pixel data of the original image will be displayed.
  • the term “unified display of the digital image on the array of display units” includes an image displayed as in the image data preservation mode, despite the distortion on the resulting aspect ratio.
  • This mode can be desirable for scientific analyses of images because all of the image data is displayed, regardless of whether an image spans a bezel.
  • the aspect ratio preservation mode can also be helpful in scientific analyses, for example, where it is desired to compare proportional sizes of features of one or more images.
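The difference between the two modes can be seen in how a global image coordinate maps onto a panel. The one-dimensional sketch below is an assumption about the arithmetic; the patent describes the modes and the virtual pixels, not this exact mapping.

```python
def global_to_panel_x(x, panel_width, bezel_px, preserve_aspect):
    """Map a global x coordinate to (panel_index, local_x) under the two
    bezel-handling modes (1-D sketch). In aspect ratio preservation
    mode, bezel_px unassigned 'virtual pixels' sit between panels and
    coordinates landing there are simply not displayed (None). In image
    data preservation mode, every pixel maps onto some panel, so the
    image distorts where it spans a bezel but no data is lost."""
    if preserve_aspect:
        pitch = panel_width + bezel_px       # panel plus bezel gap
        panel, local = divmod(x, pitch)
        if local >= panel_width:             # fell inside a bezel gap
            return None
        return (panel, local)
    # Image data preservation: no virtual pixels in the map.
    return divmod(x, panel_width)

# 1920-wide panels with a 100-virtual-pixel bezel gap:
assert global_to_panel_x(1000, 1920, 100, True) == (0, 1000)
assert global_to_panel_x(1950, 1920, 100, True) is None     # in the bezel
assert global_to_panel_x(2100, 1920, 100, False) == (1, 180)
```

The `bezel_px` value would be chosen so the virtual gap matches the physical bezel width at the array's pixel pitch.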
  • the display nodes 100 can be configured to activate one or a plurality of predetermined functions to change the position and/or size of the image to conform to one or more physical or logical boundaries of the tiled array display, such as the “snapping” function described in U.S. Provisional Application 61/218,378, entitled “Systems, Methods, and Devices For Manipulation of Images on Tiled Displays,” the entire content of which is hereby expressly incorporated herein by reference.
  • FIGS. 5A and 5B schematically depict how an image can be displayed on one or more display units 100 A, 100 B, 100 C, 100 D of a tiled array 500 .
  • the tiled array 500 can be in the same form as the array 10 , or another form.
  • an image 510 can be displayed in such a manner that it overlaps two display units 100 A and 100 C of the array 500 .
  • the associated image data from the image source can be selectively sent to those display units 100 A and 100 C (e.g., via a multicast message). This can help reduce bandwidth requirements for transmission of image data.
  • the image 510 when overlapping two display units 100 A, 100 C, can be broken down into parts corresponding to those portions displayed on different display units. As shown in FIG. 5B , the image 510 can be segregated into a first portion 511 (displayed on display unit 100 A) and a second portion 512 (displayed on display unit 100 C). As such, in some embodiments, the tiled array 500 can be configured to send the image data corresponding to the portion 511 to the display unit 100 A and the other image data corresponding to the portion 512 to the display unit 100 C, along with data indicating the position at which these portions 511 , 512 should be displayed.
  • the image data corresponding to the image 510 does not need to be broadcast to the other display nodes (e.g., 100 B, 100 D). Rather, in some embodiments, the control node 102 can be configured to only send image data to those display nodes in the array 500 that will be involved in displaying at least a portion of the image 510 (e.g., via multicast). This can greatly reduce the magnitude of data flowing into and out of each of the display nodes 100 .
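Selecting the multicast recipients reduces to the same overlap test the display nodes perform: the control node sends image data only to nodes whose region intersects the image. A sketch, with the node layout of FIGS. 5A and 5B assumed as a 2x2 array of 1920x1080 panels:

```python
def recipients_for_image(image_rect, node_rects):
    """Determine which display nodes will display part of the image, so
    the control node can multicast image data only to those nodes
    rather than broadcasting to the whole array. Rectangles are
    (left, top, width, height) in global array pixels (a sketch)."""
    il, it, iw, ih = image_rect

    def overlaps(rect):
        nl, nt, nw, nh = rect
        return not (il >= nl + nw or il + iw <= nl or
                    it >= nt + nh or it + ih <= nt)

    return [name for name, rect in node_rects.items() if overlaps(rect)]

# A 2x2 array of 1920x1080 panels; an image spanning the left column,
# analogous to image 510 overlapping display units 100A and 100C:
nodes = {"100A": (0, 0, 1920, 1080),    "100B": (1920, 0, 1920, 1080),
         "100C": (0, 1080, 1920, 1080), "100D": (1920, 1080, 1920, 1080)}
assert recipients_for_image((500, 600, 1000, 1000), nodes) == ["100A", "100C"]
```

Nodes 100B and 100D never receive the image data, which is the bandwidth saving described above.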
  • the control node 102 can be configured to generate a schematic representation of the display nodes 100 for use as a graphical user interface.
  • the control node 102 can be configured to display the images being displayed on the tiled array display on its own associated display, such as a display monitor 600 .
  • the display monitor 600 of the control node 102 can comprise a wire frame 605 of the tiled array display 500 .
  • the display monitor 600 of the control node 102 can include other visual cues for representing the borders of the individual displays 166 of each display node of the tiled array display.
  • As shown in the lower portion of FIG. 6 , the image 510 can be displayed both on the tiled array display 500 and the display monitor 600 of the control node 102 .
  • the representation of the image 510 on the control node display 600 can be in the form of a reduced resolution version of the image 510 , a full resolution version, a thumbnail version, or other versions.
  • the display monitor 600 of the control node can have any resolution. However, in some embodiments, the resolution of the display 600 of the control node 102 will be significantly less than the total resolution of the tiled array display 500 including all the individual display units 100 A, 100 B, etc.
  • the wire frame 605 schematically representing the tiled array display 500 can be presented on the display 600 of the control node 102 to provide an approximation of the proportions of the tiled array display 500 .
  • the display 600 of the control node 102 might include 1/10th of the resolution of the tiled array display 500 in each dimension, e.g., 1,920 pixels wide by 1,080 pixels high, or other resolutions.
  • the wire frame display can be the same or similar to the schematic wire frame representation illustrated in FIGS. 5A and 5B .
  • the control node 102 can be further configured to allow a user to manipulate the placement of an image, such as the image 510 , on the tiled array display. For example, a user can resize, reposition, rotate, and/or adjust color filtration or transparency of the one or more images.
  • user interaction devices such as a keyboard, mouse, 3D mouse, speech recognition unit, gesture recognition device, Wii remote, electronic pointing device, or haptic input device can be used to manipulate the one or more images on the tiled array display 500 .
  • An arrow pointer or other visual indicator can be displayed on the tiled array display to allow for manipulation of the images via the user interaction devices.
  • the manipulation of the image 510 on the control node display 600 corresponds with the manipulation of the image on the tiled array display 500 .
  • a user can provide inputs, for example, by employing a mouse based control to drag and drop images on the wire frame representation to change the position or orientation of images on the tiled array display 500 .
  • These inputs can be used by the control node 102 to generate the messages (e.g., state messages comprising image state values) described above.
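Because the wire frame is a scaled-down representation of the array, translating a drag-and-drop input into state values is a coordinate scaling. The sketch below assumes a simple linear scale between the wire frame and the array; the function and parameter names are illustrative.

```python
def drag_to_state_values(wire_frame_size, array_size, drop_px):
    """Convert a drop position on the control node's wire frame
    representation into image position state values in global array
    pixels, as the control node does when generating state messages
    from user input (a sketch)."""
    wf_w, wf_h = wire_frame_size
    ar_w, ar_h = array_size
    px, py = drop_px
    # Scale each axis from wire-frame coordinates to array coordinates.
    return (px * ar_w / wf_w, py * ar_h / wf_h)

# A 1920x1080 wire frame standing in for a 19,200x10,800 tiled array:
assert drag_to_state_values((1920, 1080), (19200, 10800),
                            (960, 540)) == (9600.0, 5400.0)
```

The resulting coordinates would populate the X and Y state values of the message broadcast to the display nodes.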
  • the term “module” refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, Objective-C, C or C++.
  • a software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts.
  • Software instructions may be embedded in firmware, such as an EPROM.
  • hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
  • the modules described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
  • any coupling or connection discussed herein could be a local area network, wireless local area network, wide area network, metropolitan area network, storage area network, system area network, server area network, small area network, campus area network, controller area network, cluster area network, personal area network, desk area network or any other type of network.
  • a computer system may include a bus or other communication mechanism for communicating information, and a processor coupled with bus for processing information.
  • Computer system may also include a main memory, such as a random access memory (RAM), flash memory, or other dynamic storage device, coupled to bus for storing information and instructions to be executed by processor.
  • Main memory also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor.
  • Computer system may further include a read only memory (ROM) or other static storage device coupled to a bus for storing static information and instructions for processor.
  • a storage device such as a magnetic disk, flash memory or optical disk, may be provided and coupled to bus for storing information and instructions.
  • inventions herein are related to the use of computer system for the techniques and functions described herein in a network system.
  • such techniques and functions are provided by a computer system in response to processor executing one or more sequences of one or more instructions contained in main memory.
  • Such instructions may be read into main memory from another computer-readable storage medium, such as storage device.
  • Execution of the sequences of instructions contained in main memory may cause a processor to perform the process steps described herein.
  • hard-wired circuitry may be used in place of or in combination with software instructions to implement embodiments.
  • embodiments are not limited to any specific combination of hardware circuitry and software.
  • Non-volatile media includes, for example, optical or magnetic disks, such as a storage device.
  • Volatile media includes dynamic memory, such as main memory.
  • Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge.
  • Computer systems can send messages and receive data, including program code, through the networks or other couplings.
  • the received code may be executed by a processor as it is received, and/or stored in storage device, or other non-volatile storage for later execution.
  • broadcast as used herein, in addition to having its ordinary meaning, may refer to any transmission of information over a network.
  • multicast as used herein, in addition to having its ordinary meaning, can include broadcasting information to a subset of devices in communication with a network.

Abstract

Display units which can be arranged to form a single arrayed display system, thereby allowing the display of much larger images than can be shown on a single display. Each display unit can include an image display, a communication mechanism, such as a network interface card or wireless interface card, and an image display module, such as a video card and an image processor. Each display module can be configured to selectively process and display portions of large digital images.

Description

  • The present application is based on and claims priority to U.S. Provisional Patent Application No. 61/090,581 filed on Aug. 20, 2008, the entire contents of which are expressly incorporated by reference herein.
  • BACKGROUND
  • 1. Field
  • This disclosure generally relates to visualization technologies. More specifically, this disclosure relates to display devices that can be combined to form a display system comprising a tiled array of display devices.
  • 2. Description of the Related Art
  • Traditionally, personal computers and workstations are connected to one or a small number of adjacent display devices, often LCD type monitors. Such systems can provide the user with the ability to view a larger number of pixels than that typically displayable on a single monitor.
  • Commercially available computer systems can often support one or two monitors for each video controller (sometimes constructed in the form of a “video card”) connected to the system. For example, typical “PC” computer systems include several “expansion slots” which can accept certain types of video cards. Motherboards of some “PCs” are built with one or more PCI, PCI Express, AGP, etc., slots that can accept video cards. In this manner, a single computer can be provided with multiple video cards to increase the number of displays that can be controlled by the computer.
  • SUMMARY
  • The present disclosure relates to methods and systems for displaying and manipulating large images or datasets. Embodiments of the present disclosure can be particularly advantageous for displaying large digital images that can be tens or hundreds of millions of pixels, or even billions of pixels, on tiled array display systems in a highly interactive manner. Some embodiments can allow for display of datasets that can be over a gigabyte in size. In some embodiments, a system is disclosed that can display a large image on a tiled array display that includes one or more display units and then can manipulate the image by panning, zooming, rotating, color filtering, and the like.
  • Some embodiments of the systems disclosed herein can be configured in such a way that each of the display units of the system retain a full copy of an original image file (e.g., in local memory) to be displayed, then in parallel, each display unit processes the original image file and displays only that portion of the original image corresponding to the position of the display unit in the overall array. For example, in some embodiments, the display units can be provided with a configuration parameter, indicating in which part of the array the corresponding unit is positioned. For example, in an array of 20 display units formed of 4 rows and 5 columns, the display unit in the upper left hand corner can be identified as the column 1, row 1 display device. The positions of the display units can also be correlated to the ranges of rows and columns of pixels each display unit can display, for example, at their respective “native” resolutions, together forming a “single display” including all of the pixels from each display device.
  • The display units can also be configured to receive a position command associated with an image file located in each units' local memory. The position command can be any data indicative of the position of the associated image file on the array. For example, the position command can be an identification of the desired location of a predetermined pixel in the associated image. Such a predetermined pixel, for example, can be the pixel at the center of the image that would result from the associated image file being processed at its native resolution. Other predetermined pixels can also be used. Additionally, the position command can include other display characteristics, such as the desired rotational angle of the displayed image, magnification, resolution, etc.
  • In some embodiments, the display units can be configured to use the received position command to determine what portion (if any) of an associated image corresponds to the pixels of that display unit. For example, the display unit can be configured to determine the requested location of the predetermined pixel of the image resulting from processing of the associated image file, for example, with reference to a virtual wireframe representation of the overall array. Then the display unit can resolve the image file to then determine what portion of the image (if any) corresponds to the display on that display unit.
  • After obtaining the result of this initial analysis, the display unit can process the image file and display the discrete portion of the resulting image in the correct orientation and thereby, together with the other image portions displayed by the other units in the array, display an observable mosaic of the original image.
  • As noted above, by configuring the display units to receive a basic position command and then, in parallel, process the position command, process the image file, and display the corresponding portion of the image, the system can generate the desired view of the image more quickly than a system in which an image is broken down and pre-processed into individual parts, which are then individually distributed to the corresponding display unit. Instead, in some embodiments disclosed herein, as the display units receive a series of position commands, which request the display of an image stored on the local memory of each display unit at different locations on the array, all of the display units can re-process and display their respective image portions much more quickly, thereby providing the user with a more responsive and useful system.
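The per-node computation described above can be sketched as follows: each display unit derives its own global pixel region from its (column, row) configuration parameter, resolves the image's top-left corner from the position command's center pixel, and intersects the two. The function names and the center-pixel convention follow the example above, but the exact arithmetic is an assumption.

```python
def node_pixel_range(col, row, panel_w, panel_h):
    """Global pixel region owned by the display unit at (col, row) of
    the array, at native panel resolution (1-indexed, as in the
    'column 1, row 1' example above)."""
    return ((col - 1) * panel_w, (row - 1) * panel_h, panel_w, panel_h)

def portion_for_node(center_cmd, image_size, node_region):
    """Given a position command placing the image's center pixel at
    center_cmd (global array coordinates), return the sub-rectangle of
    the image this node should render, in image coordinates, or None
    if no part of the image falls on this node (a sketch)."""
    cx, cy = center_cmd
    iw, ih = image_size
    ix, iy = cx - iw // 2, cy - ih // 2      # image top-left on the array
    nl, nt, nw, nh = node_region
    left, top = max(ix, nl), max(iy, nt)
    right, bottom = min(ix + iw, nl + nw), min(iy + ih, nt + nh)
    if left >= right or top >= bottom:
        return None
    # Rectangle within the image to crop and display locally.
    return (left - ix, top - iy, right - left, bottom - top)

# Display unit at column 2, row 1 of an array of 1920x1080 panels:
region = node_pixel_range(2, 1, 1920, 1080)   # (1920, 0, 1920, 1080)
assert portion_for_node((2000, 500), (1000, 600), region) == (420, 0, 580, 600)
```

Every node runs this same computation in parallel on the same position command, so no central pre-partitioning of the image is needed.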
  • Some embodiments of the systems disclosed herein can also allow the display of one or more images that can be larger than the memory of the individual display units in the tiled array display can handle. Conventionally, multiple images can be prohibitively expensive to load due to, for example, disk access times of loading hundreds of megabytes or gigabytes, and a lack of available memory. Advantageously, in some embodiments, each display unit in the tiled array display can be configured to only load or process the data needed for its local portion of the overall image displayed on the tiled array display. The system can thus greatly reduce the amount of data to load into memory, for example, and allows rapid manipulation of the image portions on the individual display units. If more data is needed, the required data can also be loaded and displayed. The system can advantageously employ parallel processing techniques, such as multithreading, to improve system performance.
  • In some embodiments, each full size or original image can be preprocessed and stored in a hierarchical format that includes one or more sub-images, where each sub-image can be a reduced size or reduced resolution version of the original image. The largest sub-image can be the same size as the original image and/or include image content from the original image. In some embodiments, each sub-image can be stored as one or more blocks, or tiles, to allow rapid access to a particular part of the image without having to access entire rows or columns of pixels. This can advantageously allow a display unit of the tiled array display to determine the discrete portion of an image it needs (e.g., has been requested) to display and to fetch exactly the level of detail, for example the sub-image, and/or to quickly fetch the needed blocks, or tiles, that make up the portion of the image displayed on the display unit. In other embodiments, each sub-image can be stored as one or more resolution layers to allow for rapid access to, and display of, a particular resolution of the original image.
  • This can be particularly beneficial because only the portion of each original image required for each tile of the tiled array display needs to be resident in the memory of the respective display node. Thus, only the blocks or resolution layers of the appropriate sub-image may need to be loaded, which improves responsiveness of the tiled array display system and performance of the individual display units. In some embodiments, surrounding blocks and blocks from higher and lower levels in the hierarchy can also be pre-fetched for improved performance. This can be advantageous for enabling each tile node to support the display of more than one original image or portions thereof. Additionally, a resource management approach can be used to support increased interactivity by reducing the amount of data loaded into memory, for example, and/or allowing portions of several images to be resident on each individual tile display unit, which supports the display and manipulation of multiple full size images on the array display.
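The hierarchical sub-image format described above can be illustrated with a simple pyramid: the largest level matches the original image, and each subsequent level is a reduced-resolution version. The halving factor and minimum side length below are assumptions for illustration; pyramidal formats vary.

```python
def pyramid_levels(width, height, min_side=256):
    """Sizes of the sub-images in a hierarchical (pyramidal) format
    (sketch): the largest level matches the original, and each level
    halves the previous one until a side would fall below min_side."""
    levels = [(width, height)]
    while width >= 2 * min_side and height >= 2 * min_side:
        width, height = width // 2, height // 2
        levels.append((width, height))
    return levels

def level_for_zoom(levels, displayed_width):
    """Pick the smallest sub-image still at least as wide as the region
    being displayed, so a node never decodes more detail than it can
    actually show on its panel."""
    for i in range(len(levels) - 1, -1, -1):
        if levels[i][0] >= displayed_width:
            return i
    return 0

levels = pyramid_levels(8192, 4096)
assert levels == [(8192, 4096), (4096, 2048), (2048, 1024),
                  (1024, 512), (512, 256)]
assert level_for_zoom(levels, 1000) == 3   # the 1024-wide sub-image suffices
```

Combined with the per-level tiling described above, a display node fetches only a few blocks of one pyramid level rather than whole rows or columns of the full-resolution image.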
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above-mentioned and other features of the inventions disclosed herein are described below with reference to the drawings of preferred embodiments. The illustrated embodiments are intended to illustrate, but not to limit the inventions. The drawings contain the following Figures:
  • FIG. 1A is a schematic representation illustrating an embodiment of a highly interactive tiled array display.
  • FIG. 1B is a schematic diagram illustrating an embodiment of an interactive tiled display system for displaying and manipulating one or more images on an array-type display.
  • FIG. 2 is a flow chart illustrating an embodiment of a method for processing one or more images for display on an array display.
  • FIG. 3 is a flow chart illustrating an embodiment of a method for displaying an image on an array display.
  • FIG. 4 is a flow chart illustrating an embodiment of a method for controlling the display of one or more images on an array display.
  • FIG. 5A schematically illustrates an image overlapping two display nodes in an array display.
  • FIG. 5B schematically illustrates the image of FIG. 5A partitioned over two display nodes.
  • FIG. 6 is a schematic diagram of a tiled array display and a control node, the control node including a user interface having a schematic representation of the tiled array display.
  • DETAILED DESCRIPTION
  • The present disclosure generally relates to the interactive display and manipulation of large images or large datasets on an array-type display, such as a tiled display system. In some embodiments, a system that implements a highly interactive large image or parallel display system can be used. In some of the embodiments described below, the interactive tiled display system comprises a high resolution, large format display system configured to render large-scale images or datasets on a large array of display units, or “nodes.” The interactive tiled display system advantageously allows a viewer to see minute, high-resolution detail in the context of a large overall image.
  • In some embodiments, the interactive tiled display system allows a user to interact with the display in real-time. For example, embodiments described below can allow panning, zooming, resizing, cropping, rotating, color shading, transparency controlling, and the like of images and/or other content on the tiled display, thereby enabling users to examine content with increased flexibility and effectiveness.
  • The interactive tiled display system can employ parallel processing techniques (e.g., multithreading) to increase the speed with which the large image or dataset is displayed and/or manipulated on the tiled array display. The use of parallel processing techniques can advantageously result in increased scalability without sacrificing performance.
  • In some embodiments, the interactive tiled display system comprises a symmetric multiprocessing (SMP) system, wherein each of the tiled display units share the same memory, system bus, and input/output (I/O) system. In other embodiments, the interactive tiled display system comprises a massively parallel processing (MPP) system, wherein each of the display units, or nodes, has its own memory, bus, and I/O system.
  • In yet other embodiments, the interactive tiled display system comprises a clustered system, wherein each of the display nodes is coupled using local area network (LAN) technology and each of the display nodes comprises an SMP machine. In still other embodiments, the interactive tiled display system comprises a distributed memory parallel processing system, such as a non-uniform memory access (NUMA) or distributed shared memory system. In some embodiments, the interactive tiled display system comprises a global shared memory system, such as a uniform memory access (UMA) system. In other embodiments, the interactive tiled display system comprises a cache only memory architecture (COMA) system. In still other embodiments, the interactive tiled display system comprises a parallel random access machine (PRAM) system. Other types of parallel processing systems or modified versions of the above systems can also be used without departing from the spirit and/or scope of the disclosure.
  • In some embodiments, the interactive tiled display system can comprise, without limitation, one or more of the following parallel processing architectures: linear array, ring array, binary tree, 2D mesh, torus, shared-memory, and hypercube.
  • In some embodiments, the nodes forming the interactive tiled display systems described below can identify and process a discrete portion of a full image to be displayed on the array. This can reduce the amount of processing required by a control node used to control the nodes, and thus can increase the responsiveness of the overall tiled display system. The interactive tiled display system can also allow for movement of images around the tiled display at a rate that is much faster than other techniques. For example, the interactive tiled display system can be configured to dynamically set up binary trees to increase the speed with which information or data is communicated to the display nodes. The number of binary trees can be dynamically modified (e.g., increased or decreased) depending on communication received from the control node.
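  • The binary-tree distribution mentioned above can be sketched as follows. This is an illustrative model only, not the patented implementation: display nodes are arranged in an implicit binary tree so that a message originating at the control node reaches all N nodes in roughly log₂(N) forwarding rounds rather than N sequential sends.

```python
# Illustrative sketch (not the patented implementation): display nodes
# arranged in an implicit binary tree so a message fans out in O(log N)
# forwarding rounds instead of N sequential transmissions.

def children(index, count):
    """Return the child indices of a node in an implicit binary tree."""
    left, right = 2 * index + 1, 2 * index + 2
    return [c for c in (left, right) if c < count]

def broadcast_rounds(count):
    """Simulate tree-based fan-out and count the forwarding rounds."""
    reached = {0}        # node 0 receives the message from the control node
    frontier = [0]
    rounds = 0
    while len(reached) < count:
        next_frontier = []
        for node in frontier:    # each reached node forwards to its children
            for child in children(node, count):
                if child not in reached:
                    reached.add(child)
                    next_frontier.append(child)
        frontier = next_frontier
        rounds += 1
    return rounds
```

With 25 display nodes (a 5-by-5 array), the fan-out completes in four rounds rather than 25 sequential transmissions, which is the kind of speedup that motivates setting such trees up dynamically.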
  • Some of the embodiments described below can allow multiple highly detailed, large images, which can exceed billions of pixels, to be displayed as part of a high resolution, coordinated workspace on a tiled display. The real-time or near real-time interaction with the multiple image data, which can be received from multiple image data sources, can include panning, zooming, rotating, color filtering, and transparency control of the images. The interactive tiled display system can be beneficial for viewing or visualizing various types of image data, such as medical imagery, cancer cell imagery, satellite imagery, geosciences data, oil monitoring, weather monitoring or prediction, traffic control, astronomy, artwork, and the like.
  • In some embodiments, a control unit can function as a front end user interface to allow a user to control the placement and manipulation of content on the tiled array display via user interaction devices (e.g., keyboard, mouse) associated with the control unit. For example, the control unit can include a graphical user interface having a display that “mirrors” the tiled array display. Manipulation of an image on the graphical user interface on the control unit can be used to control manipulation of the image on the tiled array display. Such systems and methods can be useful when a user desires to display an image that is larger than a traditional display connected to a user computer can handle.
  • Embodiments are described below with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of some specific embodiments of the invention. Furthermore, embodiments of the inventions may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the embodiments herein described.
  • FIG. 1A is a schematic representation illustrating an embodiment of a highly interactive tiled array display 10. The interactive tiled array display 10 can comprise a plurality of display units 100, wherein each display unit 100 is configured to determine a respective portion of the large image to be output on its associated display and to process and display the respective portion in parallel. For example, the interactive tiled array display 10 includes a 5-by-5 array of display units. In some embodiments, the highly interactive tiled array display 10 comprises a 10-by-5 array of display units (each comprising a 2560×1600 pixel display) to form a 25,600×8,000 pixel display. However, the number of display units 100, and thus the size of the tiled array, is not limited to any particular array size, and can be expanded as large as space and network bandwidth permit.
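  • The aggregate resolution figures above follow from simple arithmetic, sketched below. The helper function is hypothetical and ignores any bezel compensation between adjacent panels:

```python
# Hypothetical helper: the aggregate resolution of a tiled array is the
# per-panel resolution multiplied by the array dimensions (bezel
# compensation between panels is ignored for simplicity).

def array_resolution(cols, rows, panel_w, panel_h):
    """Total pixel dimensions of a cols-by-rows array of identical panels."""
    return cols * panel_w, rows * panel_h
```

A 10-by-5 array of 2560×1600 panels yields a 25,600×8,000 pixel display, roughly 205 megapixels in aggregate.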
  • FIG. 1B is a block diagram illustrating an interactive tiled display system 50 for displaying and manipulating one or more images on an array-type display, such as the highly interactive tiled array display 10, which can be “parallelized.” The interactive tiled display system 50 can comprise a plurality of display nodes, or display units 100 (including display nodes 100A, 100B, and 100N that are representative of any quantity of display nodes) that are in communication with a network 160 and other devices via the network 160, including a control node 102. In alternative embodiments, the plurality of display units 100 can be connected in a serial configuration (e.g., a daisy chain implementation) instead of a parallel implementation.
  • In the illustrated embodiment, an original image data source 164, which can be, for example, a storage device (e.g., a network-attached storage device, a RAID storage system, or a shared local storage over parallel file system, such as a parallel virtual file system (PVFS)) or a computing device, is also in communication with the network 160. In some embodiments, the original image data source 164 comprises a shared memory storage device/file system that is accessible by each of the display nodes 100 via the network 160. The image data source 164 can comprise a plurality of preloaded content, such as large, high-resolution digital images and/or datasets.
  • Generally, the control node 102 can comprise one or more computer devices that gather or make available information about the state of the overall tiled display system 50, including the display nodes 100, through the use of messages. As used herein, “messages” can be any type of message including data, packets, signals, state values, display parameters, etc., and are referred to interchangeably below as “messages” or “state messages.” In some embodiments, the messages comprise “global” messages, which can be broadcast to all the display nodes 100 by the control node 102 via the network 160. The control node 102 can also be configured to transmit such messages to a subset of one or more of the display nodes (e.g., via multicast or unicast message techniques).
  • The messages can include information about a current state of an image to be, or being, displayed on the tiled array display. For example the messages can include information about a location, size, resolution, orientation, color scheme, or identification of the image. In some embodiments, the messages comprise image data or other data content.
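  • As one hypothetical illustration of such a state message, the display parameters could be packed into a fixed-size binary record for broadcast to the display nodes. The field names and wire format below are invented for illustration and are not taken from the disclosure:

```python
import struct

# Hypothetical wire format for a "state message": an image identifier plus
# position, size, rotation, and transparency, packed into a fixed-size
# network-byte-order record suitable for broadcast to display nodes.

STATE_FMT = "!I i i I I f f"  # id, x, y, width, height, rotation, alpha

def pack_state(image_id, x, y, w, h, rotation=0.0, alpha=1.0):
    """Serialize one image's display state into a 28-byte record."""
    return struct.pack(STATE_FMT, image_id, x, y, w, h, rotation, alpha)

def unpack_state(data):
    """Deserialize a state record back into a dictionary of parameters."""
    image_id, x, y, w, h, rotation, alpha = struct.unpack(STATE_FMT, data)
    return {"id": image_id, "x": x, "y": y, "w": w, "h": h,
            "rotation": rotation, "alpha": alpha}
```

Each node receiving such a record could compare the image rectangle against its own portion of the tiled display and update accordingly.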
  • In some embodiments, the communication of messages to the display nodes 100 can be controlled by a multi-port Gigabit Ethernet switch; however, other switches, hubs, or routers can also be used. The control node 102 can comprise a desktop computer, laptop, tablet, notebook, handheld computing device (e.g., a smartphone or PDA), a server, or the like. In some embodiments, the control node 102 comprises one or more multi-core and/or multiprocessor computing systems configured for use in implementing parallel processing techniques, such as MIMD or SPMD techniques using shared and/or distributed memory. In addition, the control node 102 can function as a front end user interface to the interactive tiled display system 50 that allows a user to interact with the overall system by manipulating the image or images displayed on its associated display, which in turn manipulates the image or images on the tiled display. Such functions are described in more detail below.
  • Any of the display nodes 100N and/or the control node 102 can be used to implement the systems and methods described herein. For example, in some embodiments, the display node 100A and the control node 102 can be configured to manage the display of information on tiled display systems. In some embodiments, the control node 102 and the display nodes 100 are configured to implement a shared memory MIMD system and/or a message passing MIMD system. The functionality provided for in the components and modules of the display node 100A and the control node 102 can be combined into fewer components and modules or further separated into additional components and modules.
  • With continued reference to FIG. 1B, although only exemplary components of the display node 100A are described in detail, it is to be understood that the descriptions of the display node 100A set forth herein also apply to the other nodes 100B, 100N.
  • In some embodiments, each of the display nodes 100 is configured to render and display a portion of a large, high-resolution image. In some embodiments, all the display nodes 100 work in parallel to render the total overall image across the plurality of display nodes 100, thereby avoiding the performance limitations that would arise from dividing and rendering the entire high resolution image into discrete parts on a single computing device, then transmitting the discrete parts to the corresponding nodes of the tiled array. In some embodiments, the display nodes 100 can display digital images or other data content of 1 gigabyte or larger in size.
  • In some embodiments, the display node 100A can include, for example, a computing device, such as a personal computer (PC), that is IBM, Macintosh, or Linux/Unix compatible. In some embodiments, the computing device comprises a server, a laptop computer, a monitor with a built-in PC, a cell phone, a personal digital assistant, a kiosk, or an audio player, for example.
  • In some embodiments, the display node 100A includes a central processing unit (“CPU”) 105, which can include one or more multi-core processors, microprocessors, graphics processors, digital signal processors, and/or the like. The display node 100A can further include a memory 130, such as random access memory (“RAM”) for temporary storage of information and a read only memory (“ROM”) for permanent storage of information, and a mass storage device 120, such as one or more hard drives, diskettes, and/or optical media storage devices. Other arrangements of memory devices can also be used. Thus, the display node 100A can be considered as having a “memory system.” As used herein, the term “memory system” can comprise only one or any combination of the memory device 130, mass storage device 120, processed image data source 162, and any number of additional memory devices 130, mass storage devices 120, processed image data sources 162, or any other type of memory.
  • Typically, the modules of the display node 100A are connected to the CPU 105 using a standards-based bus system. In different embodiments, the standards-based bus system can be Peripheral Component Interconnect (PCI), Microchannel, SCSI, Industry Standard Architecture (ISA), and Extended ISA (EISA) architectures, for example. Other types of systems can also be used. Using any of the above or other systems provides for an operable connection between the devices 105, 110, 120, 130, 140, 150, 155, 162, 166, any memory system as described above, or other devices.
  • The box illustrated in FIG. 1B and identified with the reference numeral 100A, can be considered as a schematic representation of a housing of the display node 100A. Such a housing can be formed in any manner, such as those designs currently used on commercially available monitors or televisions, including LCD, plasma, LED, or other types of devices. Such a housing can be shaped, and enlarged if necessary, to enclose the devices noted above. Such shaping is fully within the skill of one of ordinary skill in the art of video monitor or television design.
  • The housing can also include a mount, such as the mounting hardware normally included on the rear sides of the LCD, plasma, LED monitors and televisions that are currently widely available on the commercial market. Such mounts can be described as “wall mounts.” Other hardware can also be used. Such hardware can be used in conjunction with an appropriately shaped rack designed to connect to and support a plurality of display nodes 100A having such mounts on their respective housings, with the plurality of display nodes 100A arranged adjacent to each other in a tiled layout, such as the layout schematically represented in FIG. 1A.
  • The display node 100A is generally controlled and coordinated by operating system software, such as Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Linux, SunOS, Solaris, a real-time operating system (RTOS), MAC OS X, or other compatible operating systems. In other embodiments, the display node 100A may be controlled by a proprietary operating system. The operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.
  • The display node 100A can include one or more input/output (I/O) devices and interfaces 110, such as a keyboard, a mouse, a touchpad, a scroll wheel, a trackball, a voice activation unit, a haptic input device, a printer, or I/O ports. In addition, the display node 100A can include one or more display devices 166, such as a monitor, that allows the visual presentation of data, such as the image data described herein, to a user. In some embodiments, the display device 166 can comprise an LCD display, such as a 30-inch LCD Cinema Display available from Apple, Inc. or a 46-inch 460 UXn-UD LCD display available from Samsung Electronics. In other embodiments, the display device 166 comprises a plasma, CRT, or Organic LED display. In yet other embodiments, the display device 166 comprises a projector-based display. In some embodiments, the display device 166 provides for the presentation of scientific data, GUIs, application software data, and multimedia presentations, for example. The display node 100A may also include one or more multimedia devices 140, such as speakers, video cards, graphics accelerators, cameras, webcams, and microphones, for example.
  • In some embodiments, the I/O devices and interfaces 110 can provide a communication interface to various external devices. The display node 100A can be coupled to a network 160 that comprises one or more of a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a personal area network (PAN), a virtual private network (VPN), or the Internet, for example, via a wired, wireless, or combination of wired and wireless, communication link 115. The network 160 communicates with various computing devices and/or other electronic devices via wired and/or wireless communication links. In some embodiments, the network 160 comprises an IEEE 802.11g WLAN.
  • In some embodiments, the display node 100A can include, or may be coupled to via a network connection, a processed image data source 162, such as a database, that includes information about one or more images to display on the tiled array display. The information supplied by the processed image data source 162 can include a full size or original image that was or will be preprocessed and stored in a hierarchical format that includes sub-images, with each sub-image being a reduced size and/or reduced resolution version of the original image. For example, a reduced resolution sub-image can be generated from an original full resolution image by deleting rows and columns of the pixels of the original image at predetermined spacings, thereby generating a lower resolution version of the full image. A reduced size sub-image can be generated by cropping the original image. Any other technique of creating sub-images can also be used.
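  • The row-and-column-deletion technique described above can be sketched in a few lines. This is a minimal illustration; a production system would typically low-pass filter before decimating to avoid aliasing:

```python
# Minimal sketch of reduced-resolution sub-image generation by deleting
# rows and columns at a fixed spacing: keeping every `step`-th row and
# column of the pixel grid yields a lower-resolution version of the image.

def downsample(pixels, step):
    """pixels is a 2D list of pixel values; keep every `step`-th row/column."""
    return [row[::step] for row in pixels[::step]]
```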
  • In some embodiments, the largest sub-image can be the same size as the original image and/or include image content from the original image. For example, each sub-image can be stored as one or more blocks, or tiles, to allow rapid access to a particular part of the original image without having to access entire rows and/or columns of pixels. In some embodiments, this can allow the display node 100A to fetch exactly the level of detail (sub-image) it requires and/or to quickly fetch the needed blocks that make up the portion of the image to be output to the display 166. In addition to the devices that are illustrated in FIG. 1B, the display node 100A can be connected to other computing devices through a bus or the network 160.
  • In some embodiments, the original image data source 164 can include one or more original or full size digital images. The digital images can comprise JPEG, GIF, PNG, TIFF, BMP, and/or other digital image formats. In other embodiments, the original image data source 164 can include one or more original or full size digital images that can be tens or hundreds of millions of pixels (e.g., 200-600 Megapixels), or even billions of pixels. In some embodiments, the display node 100A can preload and preprocess the original images stored in the original image data source 164 and store the result in a hierarchical format in the processed image data source 162. In other embodiments, the display node 100A can determine the correct portion(s) of the original image(s) to be displayed on its associated display 166 and output the corresponding preprocessed image data for display. Thus, the processed image data source 162 can be used to reduce the amount of data that needs to be loaded in memory and support faster manipulation or modulation of images on the tiled array display.
  • In some embodiments, the hierarchical format used for storing the processed image data comprises a tiled pyramidal TIFF format. However, other formats can also be used.
  • The tiled pyramidal TIFF format can allow the display node 100A to store multiple resolutions of a preloaded image in the mass storage device 120, memory 130 or in the processed image data source 162. In addition, the tiled pyramidal TIFF format can allow the display node 100A to partition the original image into smaller tiles of data so as to reduce the amount of image data to be fetched and processed, thereby enhancing system performance.
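  • The geometry of such a tiled pyramid can be sketched as follows; the 256-pixel tile size and the helper names are illustrative assumptions rather than details of the disclosure:

```python
import math

# Sketch of a tiled image pyramid's geometry: each level halves the
# resolution of the level below it, and each level is cut into fixed-size
# tiles so that any region can be fetched without reading whole rows.

def pyramid_levels(width, height):
    """Number of halving levels until the larger dimension reaches 1."""
    return max(width, height).bit_length()  # e.g. 25600 wide -> 15 levels

def tiles_at_level(width, height, level, tile=256):
    """How many tiles cover the image at the given pyramid level."""
    w = max(1, width >> level)
    h = max(1, height >> level)
    return math.ceil(w / tile) * math.ceil(h / tile)
```

At full resolution a 25,600×8,000 pixel image needs thousands of 256-pixel tiles, while the coarsest level fits in a single tile, which is why a node can cheaply fetch exactly the level of detail it needs.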
  • In some embodiments, the original images stored in the original image data source 164 can be compressed or uncompressed images. In some embodiments, the processed image data source 162 can also be configured to receive a compressed image from the original image data source 164. Once received, the display node 100A can decompress an original image and then preprocess the original image into a set of one or more images that are compressed or uncompressed and store them in the processed image data source 162, in the mass storage device 120, or in memory 130. In some embodiments, each tile of the tiled pyramidal TIFF images can be compressed using lossless or lossy compression algorithms, such as Deflate, LZW, or JPEG. Spatial identifiers can be used to identify various portions or resolutions of the sub-images to facilitate efficient extraction of different regions or resolutions of the original image(s).
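  • As a minimal illustration of per-tile lossless compression, the sketch below uses zlib's Deflate implementation to stand in for the Deflate option mentioned above (LZW and JPEG tiles would ordinarily be handled by an image library):

```python
import zlib

# Illustrative per-tile lossless compression: each tile's raw bytes are
# Deflate-compressed independently, so a node can decompress only the
# tiles it actually needs to display.

def compress_tile(tile_bytes):
    """Compress one tile's raw pixel bytes with Deflate."""
    return zlib.compress(tile_bytes)

def decompress_tile(data):
    """Recover the original tile bytes."""
    return zlib.decompress(data)
```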
  • In some embodiments, one or more of the image data sources may be implemented using a relational database, such as Sybase, Oracle, CodeBase and Microsoft® SQL Server, as well as other types of databases such as, for example, a flat file database, an entity-relationship database, an object-oriented database, and/or a record-based database.
  • With continued reference to FIG. 1B, in some embodiments the display node 100A can also include application modules that can be executed by the CPU 105. In some embodiments, the application modules include the image processing module 150 and the image display module 155, which are discussed in further detail below. These modules can include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • In some of the embodiments described herein, each display node 100A can be configured to execute instructions in the image processing module 150, among others, in order to support user interactivity by reducing the amount of data loaded into the memory 130 when an image needs to be displayed on the interactive tiled display system 50. In addition, the image processing module 150 can allow portions of several images to be resident on the display 166, thus supporting display and manipulation of multiple large images across the multiple display nodes 100. For example, in some embodiments, an original large image can be tens of billions of pixels. The image processing module 150 can preprocess and store multiple full size or original images by determining the correct portion of the original images to be displayed on a specific display node 100.
  • In some embodiments, each original image can be stored in a hierarchical format (e.g., tiled pyramidal TIFF format) that includes sub-images that can be a reduced size and/or reduced resolution version of the original image. In some embodiments, the largest sub-image can be the same size as the original image and/or include image content from the original image to support resizing of the image, such as zooming in and/or out. The image processing module 150 can store each sub-image of the original image as one or more blocks, or tiles, to allow rapid access to a particular part of the full size image without having to access entire rows or columns of pixels. This advantageously allows a display node 100A that knows which portion of the original image is needed to output on its display 166 to fetch the level of detail needed, such as a sub-image, and/or to quickly fetch the needed blocks that make up the image portion. Such blocks or tiles can be provided with identifiers that indicate the position of the block or tile in the original image, and/or other characteristics. The image processing module 150 can be configured to store each sub-image of the original image as one or more layers of resolution to allow rapid access to a particular resolution layer without having to decompress or compress large resolution images for display, thereby improving efficiency. Storing multiple resolutions of the preloaded image can advantageously allow the use of parallel processing techniques (e.g., threading) to improve performance speed.
  • The image processing module 150 can be further configured to send requests to control node 102 for information about other display nodes (e.g., 100B, 100C, etc.) and/or vice versa. In some embodiments, messages can be exchanged between the control node 102 and/or other display nodes (e.g., 100B, 100C, etc.) that include information about the overall state of the aggregate tiled display, or about a particular display node. In some embodiments, the messages comprise Scalable Parallel and Distributed Systems (SPDS) messages. SPDS Messaging is a C++ library that encapsulates sockets and data in abstractions called Endpoints and Messages. In some embodiments, the control path uses multicast UDP Endpoints, which means Messages are received by one or more subscriber nodes. Broadcast UDP Endpoints can also be used to send Messages to all nodes on a subnet. The SPDS Messaging library can use Unix/Winsock socket operations to implement the Endpoints and the send and receive operations used to transfer Messages.
  • The display node 100A can also execute instructions in the image display module 155 to display one or more images or portions thereof and manipulate the images. As noted above, an original image that is full size can be preprocessed by the image processing module 150 and then stored in the processed image data source 162 or in other local memory (e.g., memory 130). Because the amount of data loaded into memory 130 can be reduced when an original image is stored in a hierarchical format (such as tiled pyramidal TIFF), the image display module 155 can enable a highly interactive display space that spans multiple display nodes 100. By storing the data in a hierarchical format, the tiled display system 50 can implement further parallel processing techniques (e.g., MIMD or SPMD techniques and/or multithreading) to improve system performance.
  • For example, the image display module 155 can load the appropriate sub-image of an original image in memory 130 and on the display 166. In some embodiments, surrounding blocks and blocks from higher and lower levels can also be pre-fetched for higher performance by the image display module 155. This can allow each display node 100A to support the display of more than one such image or portions thereof. For example, the image display module 155 can determine its individual display boundary based on the resolution layer determined by the image processing module 150 (e.g., the resolution layer having the smallest image size larger than the native resolution of the display 166). Based on the determination of the individual display boundary, the image display module 155 can determine which tiles overlap with its display boundary and can fetch the overlapping and/or surrounding blocks, or tiles. Additionally, a resource management approach can support interactivity by reducing the amount of data loaded and allowing portions of several images to be resident on each tile, thus supporting display and manipulation of multiple large images.
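  • The tile-overlap determination described above reduces to interval arithmetic on tile indices. The sketch below is illustrative only; the viewport parameters and tile size are assumptions, not details of the disclosure:

```python
# Sketch of how a display node might decide which tiles of the chosen
# resolution layer intersect its own display boundary: the viewport's
# corner coordinates are divided by the tile size to get the first and
# last overlapping tile index along each axis.

def overlapping_tiles(view_x, view_y, view_w, view_h, tile=256):
    """Return (col, row) indices of tiles intersecting the node's viewport."""
    first_col, first_row = view_x // tile, view_y // tile
    last_col = (view_x + view_w - 1) // tile
    last_row = (view_y + view_h - 1) // tile
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]
```

A node would fetch exactly these tiles (plus, as noted above, surrounding tiles and neighboring resolution levels for pre-fetching) rather than the entire image.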
  • Advantageously, the image display module 155 can be configured to allow the use of multiple highly detailed images, which can exceed billions of pixels, to be displayed as part of a high resolution, coordinated workspace on a tiled display that includes multiple display nodes 100. Further, the image display module 155 can be configured to allow in real-time or in near real-time interaction with multiple images by allowing moving, zooming, rotating, color filtering, and transparency controlling of images on the display node 100. In some embodiments, the image display module 155 can be configured to perform color filtering, shading or transparency control using a vertex and/or pixel shader, such as a Cg shader.
  • For example, in some embodiments, the user may use a front end interface, such as the control node 102, and select to rotate an image on the tiled display. In some embodiments, the user can manipulate the display via user input devices 151 (for example, a keyboard or 3D mouse). The image display module 155 can respond to the user's selection by using a reduced size or reduced resolution version (e.g., thumbnail) of the original image, which may be stored in the processed image data source 162, to quickly adjust its display 166. For example, when the image on the system is initially selected for rotation, the image display module 155 can replace the image being displayed with the thumbnail during the rotation process. In some embodiments, as the thumbnail of the original image is rotated and thus redrawn at different angular orientations, less processing power is required to complete the redraw process, thereby providing a quicker response time. In some embodiments, an increased resolution image can be displayed upon completion of the rotation or other manipulation of the image. Additionally, user manipulations, such as those noted above, can be used to generate messages sent to the control nodes, described in greater detail below.
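  • The thumbnail-substitution strategy can be modeled as a small state machine; the class and method names below are invented for illustration and do not appear in the disclosure:

```python
# Toy state machine for thumbnail substitution: while a manipulation such
# as a rotation is in progress, the node draws a cheap thumbnail; when the
# manipulation ends, the full-resolution image is swapped back in.

class ImageView:
    def __init__(self):
        self.interacting = False

    def begin_manipulation(self):
        """Called when the user starts rotating, panning, etc."""
        self.interacting = True

    def end_manipulation(self):
        """Called when the manipulation completes."""
        self.interacting = False

    def source_to_draw(self):
        """Pick the image source appropriate to the current state."""
        return "thumbnail" if self.interacting else "full_resolution"
```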
  • In addition, the image display module 155 can also be configured to exchange messages with the control node 102 or the other display nodes (e.g., 100B, 100C, etc.) about the state of the tiled display, such as which portion of the original image needs to be displayed by respective nodes. Thus, the image display module 155 can provide a highly interactive experience that has numerous applications, including the manipulation of data about medical conditions, cancer cells, satellite images, geosciences, oil monitoring, weather monitoring or prediction, astronomy, and the like.
  • Although FIG. 1B has been described with respect to display nodes 100, a control node 102, and an image data source 164, certain of the features of the system shown in FIG. 1B can be implemented using other types of computing devices communicating over the network 160. For example, the control node 102 can communicate over the network 160 with a media source device (instead of the image data source 164) and one or more destination computing devices (instead of the display nodes 100). The control node 102 can broker a connection between the media source device and a destination computing device. In some embodiments, the control node 102 locates media data stored on the media source device and obtains the media data or a portion thereof (such as a thumbnail) from the media source device. The control node 102 can also be configured to send the media data or the portion thereof to the destination computing device, along with network communication or connectivity data. The network communication data can enable the destination computing device to communicate with the media source device to obtain media data. The network communication data could include, for example, a network address (such as an IP address) of the media source device, a proxy for the media source device, an anycast IP address for a plurality of media source devices, or the like.
  • In some embodiments, providing the network communication data from the control node 102 to the destination computing device enables the destination computing device to obtain media, including media updates, from the media source device. As a result, the control node 102 can be less of a bottleneck for communications between the media source device and the destination computing device.
  • In some embodiments, the destination computing device can report or otherwise provide the media updates it receives or a portion thereof to the control node 102. For example, the destination computing device can provide a thumbnail, a reduced frame rate video, metadata associated with the media updates, combinations of the same, and the like. The control node 102 can therefore keep track of the media data provided to the destination control device.
  • In some embodiments, the control node 102 can provide network communication information to the media source device instead of or in addition to providing communication information to the destination computing device. This network communication information can allow the media source device to communicate with the destination computing device. For example, the control node 102 can provide a network address of the destination computing device to the media source device. The media source device can then push media to the destination computing device.
  • In some embodiments, the control node 102 can identify media stored on the media source device without requesting the media. The control node 102 can provide network communication data to the destination computing device, which allows the destination computing device to obtain the media from the media source device. Thus, little or no media might pass through the control node 102 from the media source device to the destination computing device, further reducing bottleneck effects of the control node 102.
  • In some embodiments, the interactive tiled display system 50 can also be configured to support other data formats, in addition to large digital images. For example, the interactive tiled display system 50 can be configured to support and display Normalized Difference Vegetation Index (NDVI) data, MRI scan data, SOAR 3D terrain data, digital video data (including standard and HDTV format), and streaming content (e.g., from a webcam).
  • The interactive tiled display system 50 can also be configured to display a real-time replica of content displayed on the display screen of one or more computing devices in communication with the network 160, as described in U.S. patent application Ser. No. 12/487,590 entitled “Systems, Methods, and Devices for Dynamic Management of Data Streams Updating Displays,” the entire content of which is hereby expressly incorporated herein by reference in its entirety. In some embodiments, the interactive tiled display system 50 comprises middleware that can be configured to render and display an OpenGL application by executing instructions to launch and manage instances of the OpenGL application on each of the display nodes 100 in parallel through a thread-based network communication layer.
  • FIG. 2 is a flowchart illustrating an embodiment of a method of preprocessing images that can provide a high level of interaction and manipulation of the images on tiled display systems. The method illustrated in FIG. 2, as well as other methods disclosed below, can be stored as process instructions (for example, on any type of computer-readable storage medium) accessible and executable by the image processing module 150 and/or other components of the display node 100A, the control node 102, or any other computer or system connected to the tiled display system 50 directly or over any type of network. Depending on the embodiment, certain of the blocks described below can be removed, others may be added, and the sequence of the blocks can be altered. In some embodiments, each of the display nodes 100 can be configured to perform the method, as well as other methods discussed below, using parallel processing techniques to improve system performance.
  • Beginning at block 210, one or more full size or “original” images are received by each of the display nodes 100. In some embodiments, the one or more full size images can be sent from the original image data source 164 to the display nodes 100 by way of the control node 102. The full size images can also be sent over the network 160 from a computing device or received via a physical I/O port of the display node 100A, such as a USB port.
  • The full size images can include various types of data for visualization, such as still photographs, videos, or other images of any subject, including but not limited to medical, satellite, geosciences, oil, weather monitoring or prediction, and astronomy imagery, as well as those discussed above with reference to FIG. 1B. As those of skill in the art will recognize, the image types can vary greatly and be based on a variety of possible types of data. After the full size images are received in operation block 210, the process can move to block 220.
  • In block 220, a set of sub-images that allow for rapid access to portions of the one or more full size images can be created. In some embodiments, the image processing module 150 preprocesses each original full size image and creates a set of sub-images that are a reduced size and/or reduced resolution (e.g., thumbnail) version of the original image. For example, a sub-image can be formed from a cropped portion of the original image, at the same resolution as the original image. For example, a 10240×7680 pixel image can be stored as 256×256 pixel tiles. In such an embodiment, the overall image comprises 40×30 tiles, or sub-images, each comprising 256×256 pixels. Other tile sizes can also be used. In some embodiments, the tiles can be roughly square. In other embodiments, the tiles can comprise rectangular strips. Storing the image data as sub-images of smaller tile sizes can reduce the total amount of data to be processed for display on each of the display nodes 100.
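  • The tile-grid arithmetic in the 10240×7680 example above can be sketched as follows (a minimal illustration; the function name is hypothetical and not part of this disclosure):

```python
import math

def tile_grid(width, height, tile_size=256):
    """Number of tile columns and rows needed to cover an image."""
    return math.ceil(width / tile_size), math.ceil(height / tile_size)

# The 10240x7680 example divides evenly into a 40x30 grid of 256x256 tiles;
# dimensions that do not divide evenly get a partially filled edge tile.
print(tile_grid(10240, 7680))
```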
  • Each of the display nodes 100 can be configured to only decompress or otherwise “process” the tiles necessary to cover its display area rather than decompressing or processing the entire image. For example, each of the display nodes 100 can be configured to process groups of the tiles that correspond to the respective portions of the image to be displayed on each display unit. In operation, each of the groups can comprise different subsets of all the tiles, although not necessarily exclusive groups. For example, groups associated with two adjacent display units may have one or more tiles in common.
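  • One way to compute the group of tiles covering a node's display area, including the shared tile that can arise between two adjacent display units, is sketched below (an illustrative assumption, not the patented implementation; names are hypothetical):

```python
def tiles_for_region(x0, y0, x1, y1, tile_size=256):
    """Indices (col, row) of every tile overlapping the half-open
    pixel region [x0, x1) x [y0, y1) of the original image."""
    return [(col, row)
            for row in range(y0 // tile_size, (y1 - 1) // tile_size + 1)
            for col in range(x0 // tile_size, (x1 - 1) // tile_size + 1)]

# Two horizontally adjacent display regions share the tile that
# straddles their common boundary (tile column 1 here).
left = tiles_for_region(0, 0, 300, 256)
right = tiles_for_region(300, 0, 600, 256)
```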
  • Storing the image data in a tiled format can advantageously allow faster processing when the zoom level of the image is increased to show more precise detail. For example, instead of processing the lower zoom level portion of the image displayed on a particular display node, the display node can be configured to process only the tiles corresponding to the higher zoom level portion of the image. In some embodiments, if a particular tile overlaps a boundary between adjacent display nodes, each of the display nodes containing a portion of the image on the overlapping tile can process the same overlapping tile.
  • In some embodiments, the sub-images can comprise multiple resolution layers of the original full size image. A first sub-image can be created that is the same resolution as the original image, a second sub-image can be created that is half the resolution of the first sub-image, a third sub-image can be created that is half the resolution of the second sub-image, and so on. For example, a 10240×7680 pixel image can be stored in five resolution layers as follows:
  • Layer 0: 10240×7680
  • Layer 1: 5120×3840
  • Layer 2: 2560×1920
  • Layer 3: 1280×960
  • Layer 4: 640×480
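  • The halving scheme that produces Layers 0 through 4 above can be expressed as a short sketch (illustrative only; the stopping height of 480 is taken from the example, and the function name is hypothetical):

```python
def pyramid_layers(width, height, min_height=480):
    """Repeatedly halve the resolution, as in Layers 0-4 above,
    stopping once the height reaches min_height."""
    layers = [(width, height)]
    while layers[-1][1] > min_height:
        w, h = layers[-1]
        layers.append((w // 2, h // 2))
    return layers
```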
  • In some embodiments, the resolution layers can also be processed into tiles or tiled sub-images. Storing the image in multiple layers of resolution can advantageously improve efficiency. For example, if the resolution of the display 166 is much smaller than the resolution of the image, the display 166 can only show the detail at the lower resolution and actual pixel details of the image are lost. In some embodiments, it can be advantageous to decompress a lower resolution layer of the image than the original high resolution image.
  • In some embodiments, the image processing module 150 can create sub-images of neighboring portions of the large image. For example, the image processing module 150 can create sub-images of portions of the large image to be displayed on neighboring display nodes. This can advantageously result in improved processing efficiency when the image is manipulated (e.g., rotated, panned, or resized) by a user and one of the sub-images overlaps a border between two adjacent display nodes. After the operation block 220, the process can move to block 230.
  • In block 230, the set of sub-images can be stored in the processed image data structure 162 or in memory. In some embodiments, each sub-image can be stored in a hierarchical format (e.g., tiled pyramidal TIFF), such that one or more blocks, or tiles, allow rapid access to a particular part of the image without having to access entire rows or columns. In some embodiments, this can advantageously allow a display node 100A of the tiled system that knows the portion of an image it needs to display to fetch exactly the needed level of detail, for example a corresponding sub-image, and/or to quickly fetch the needed tiles that make up the image portion to be displayed. The sub-images can include a spatial and/or resolution identifier to allow for quick retrieval from the processed image data structure 162. In some embodiments, the sub-images (e.g., tiles) can be identified and retrieved based on a position command, coordinate, or value received in a message from the control node 102 (e.g., a “requested position”). For example, if the size of each tile is 256×256, a position coordinate of (0,0) or (124,0) can return the first tile and a position coordinate of (256,0) or (300,0) can return the next tile along the x-axis. The identification and retrieval of tiles can be an operation supported by the TIFF software library (libTIFF).
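  • The mapping from a requested position coordinate to the covering tile, as in the (124,0) and (300,0) examples above, reduces to integer division by the tile size (a minimal sketch; the function name is hypothetical):

```python
def tile_for_position(x, y, tile_size=256):
    """Map a requested pixel position to the index of the tile that
    contains it: positions 0-255 map to tile 0, 256-511 to tile 1, etc."""
    return (x // tile_size, y // tile_size)
```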
  • In some embodiments, the image data (which can be divided into sub-images of various resolution layers and tiles) is stored as a single file format, with the sub-images being capable of being retrieved by the TIFF software library. In other embodiments, each of the sub-images can be stored individually, for example, in a directory file system. For example, each resolution layer can be a different level of directory in a file system, with each directory including all of the tiles for that resolution layer.
  • FIG. 3 schematically illustrates an embodiment of another method of displaying and manipulating images on a tiled array display system. Depending on the embodiment, certain of the blocks described below may be removed, others may be added, and the sequence of the blocks may be altered. In some embodiments, each of the display nodes 100 can be configured to perform the method in parallel, thereby resulting in faster display and movement of the overall image.
  • Beginning at block 310, a portion of a full size image to display on a particular display node (e.g., display node 100A) can be calculated or determined. Additionally, multiple portions of one or more full size images to display can also be calculated or determined. Advantageously, this reduces the amount of data to be loaded on each particular display node 100, as well as a controlling computer, such as the control node 102, and thus increases the responsiveness of the overall tiled array display. Because of the increased responsiveness, manipulation of images on the tiled array display can be improved.
  • Moving to block 320, the one or more sub-images that correspond to the portion of the full size image to display are loaded into memory 130, for example. Because disk access times for loading a full size image can be impractical, each display node 100A can load tiles or blocks of the appropriate sub-images needed for its local portion of the overall tiled display. In some embodiments, the correct portions of multiple full size images can also be loaded into the memory 130 of the corresponding display node 100A, 100B, 100C, etc. The loading of tiles can advantageously be performed using parallel processing techniques, such as multithreading. Multiple threads can be used to simultaneously load the requested tiles from the processed image data source 162 or from the mass storage device 120 or other memory/file system.
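  • The multithreaded tile loading described above can be sketched with a thread pool (an illustrative assumption about one possible arrangement; the actual disclosure does not specify this API, and the tile read here is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def load_tile(tile_id):
    # Placeholder for a real tile read, e.g. decompressing one
    # 256x256 tile from a tiled pyramidal TIFF via libTIFF.
    return tile_id, b"\x00" * 16

def load_tiles_parallel(tile_ids, workers=4):
    """Fetch the requested tiles concurrently so that disk and
    decompression latency overlap across tiles."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(load_tile, tile_ids))
```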
  • Moving to block 330, the one or more sub-images are displayed. In some embodiments, the display node 100A may render the sub-images resident in the memory 130 using a multimedia device 140, such as a video card. The rendered sub-images can then be placed on display 166, such as an LCD monitor.
  • FIG. 4 schematically illustrates an embodiment of a method of controlling the display of one or more large images (e.g., hundreds of megapixels or larger) on an interactive tiled array display. Depending on the embodiment, certain of the blocks described below may be removed, others may be added, and the sequence of the blocks may be altered. Although the method is described with reference to a single display node 100A, the method can be performed by any of the display nodes 100. In some embodiments, each of the display nodes 100 can be configured to perform the method in parallel, thereby resulting in faster display and movement of the overall image.
  • At Block 405, the display node 100A receives a message (e.g., a command, data, packet, etc.) from the control node 102, or another source, via the network 160. The message can be sent over the network 160 by any number of communication mechanisms, for example, TCP/IP over Ethernet or specialized high-speed interconnects, such as Myrinet. In some embodiments, the message contains information regarding an image to be displayed, or an image that is currently displayed, on the tiled array display.
  • For example, the message can comprise initial display parameters designated for an image that has not yet been displayed or an image that is currently being displayed. Such display parameters can comprise data indicating a desired or requested position for the corresponding image. Hereinafter, the content in the message indicative of the “position” is referred to as a “requested position.”
  • Such a “requested position” could be generated by the control node 102 when a user places, drags to, or clicks on a representation of a subject image (510 in FIG. 6) on a wire frame representation of the array 10 on a user interface 600 (FIG. 6). In such an example, the user interface 600 can calculate, estimate, etc., the location of a reference pixel in the image 510 on the array 10.
  • The reference pixel of the image 510 can be any pixel of the image. In some embodiments, the reference pixel can be the center pixel of the image, or a pixel immediately above, below, left or right of the center of the image, if the image does not mathematically have a center pixel. Other reference pixels or reference parts of the image can also be used.
  • The portion of the message indicative of the requested position can be in any form. For example, the “position” can be expressed as a row and column of a virtual pixel grid representing all the pixels of the entire array 10. In some embodiments, the requested position can be indicative of a pixel on a particular display unit, e.g., pixel 1, 1 on display unit 100G. Other formats can also be used.
  • The message can also include other aspects of the requested position, such as angular orientation. For example, the message can include data indicative that the image should be rotated by an angle. The angle can be relative to the reference pixel, such as the center pixel noted above, any other pixel of the image, or any other point of reference.
  • The message can include further aspects of the requested position, such as zoom or magnification. For example, the message can include an indication that the image should be presented in a 1 to 1 manner, e.g., every pixel of the image represented on a respective single pixel of the corresponding display units. The message can also include other data indicative of other display parameters.
  • In some embodiments, the message can contain image data, for example, streaming video data, still image data, or a live display feed from a computing device or other data source in communication with the display nodes 100 via the network 160 as described in U.S. patent application Ser. No. 12/487,590 entitled “Systems, Methods, and Devices for Dynamic Management of Data Streams Updating Displays,” the entire content of which is hereby expressly incorporated herein by reference in its entirety.
  • With continued reference to Block 405, the message can be received via broadcast, multicast, unicast, a reliable network overlay topology to parallelize transfer, or any other network messaging technique. In some embodiments, the multicast messages can be transmitted by unreliable multicast with error checking and handling. In some embodiments, the display nodes 100 can subscribe to a distribution group. For example, the display nodes 100 involved in displaying a portion of an image that does not cover the entire tiled display array can subscribe to, or join, a multicast group address. After the display nodes 100 have joined a multicast group, they can receive the multicast messages sent to the multicast group address.
  • In some embodiments, the message can comprise a configuration message to initialize or reconfigure a coordinate system of the tiled array display. The configuration message can include a node identifier and a node coordinate. In some embodiments, the coordinate system can be determined using letters for the columns and numbers for the rows in spreadsheet fashion. For example, the top left display node can be A1, the display node just to the right of A1 can be B1, the display node just below A1 can be A2, and so on.
  • In some embodiments, the coordinate system can be based on a numerical method. For example, the top left display node can be (0,0) with a monitor size of (1.0, 1.0). The display node beneath the top left display node can be (0,1) with a monitor size of (1.0, 1.0). In some embodiments, different monitor sizes can be applied when using the numerical coordinate system. Such a coordinate system, or other coordinate systems can be used to allow each of the display nodes 100 to determine which portion of an image to display on its associated display 166, as also described above.
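  • Under the numerical coordinate scheme above, each node's place in the virtual pixel grid follows from its (column, row) coordinate, assuming uniform monitor sizes (an illustrative sketch; names and pixel dimensions are hypothetical):

```python
def node_origin(col, row, node_w, node_h):
    """Top-left pixel of a node's region in the virtual array,
    assuming every monitor has the same pixel dimensions."""
    return (col * node_w, row * node_h)

# With 2560x1600 panels, node (0,1) -- the display beneath the top
# left node -- starts one panel height down in the virtual grid.
```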
  • In other embodiments, the display nodes 100 can be “pre-configured.” For example, the control node 102 can remotely access each of the display nodes 100 (for example, using virtual network computing (VNC) software) and rename, or otherwise configure, the display nodes 100, according to a predefined display configuration scheme (for example, the coordinate systems described above).
  • In some embodiments, the message received from the control node 102 can also comprise a state message. State messages can be received at periodic intervals or any time a change in a state of an image occurs (for example, due to user interaction or manipulation of the image on the user interface 600, FIG. 6). The state message can be received via broadcast to all of the nodes, multicast to a subset of the nodes, or unicast to a single node.
  • The state messages can include state values of the image to be displayed or updated state values of the image currently being displayed. The state values can include values regarding image position, image size, image resolution, image cropping, angle of rotation, pixel colorimetry or shading, transparency and/or other display parameters. For example, as described above with regard to messages generally, the state values can include X and Y values to indicate a position of the image. In some embodiments, the X and Y values can indicate a delta change in position, or offset, from the previous position. Received X and Y values can also indicate an updated location of the central pixel or another reference pixel of an image. In some embodiments, the X value indicates the left boundary of the image and the Y value indicates the top boundary of the image. The X and Y values can be represented in resolution units or coordinate values, for example.
  • Additionally, X and Y values can be used to indicate a size of the image on the overall tiled display. For example, the message can include two sets of X and Y values, indicating the locations of two reference pixels in the image. In an example, the two sets of X and Y values can indicate the locations of two opposite corners of the image. However, reference positions of any two reference pixels in the image could also be used. The X and Y values can be updated to accommodate panning of the image around the tiled array display.
  • The state values can include a Z value to indicate a zoom level or “magnification” of the image. The Z value can be updated to accommodate resizing of the image by zooming in and out. In some embodiments, the Z value can be used to determine the appropriate resolution layer to access as a starting point for processing of the image, as described in more detail above. For example, the resolution level just larger (in both height and width) than, or equivalent to, the specified zoom level can be selected as the starting point. As noted above, the state values can also include an angle value. The angle value can be used to indicate a requested angle of rotation of the corresponding image. The angle value can be updated to accommodate rotation of the image on the tiled array display.
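  • Selecting the resolution level just larger than, or equivalent to, the specified zoom level, as described above, can be sketched as follows (a minimal illustration; the function name and layer list are assumptions built from the earlier pyramid example):

```python
def select_layer(layers, zoom_w, zoom_h):
    """Smallest pyramid layer at least as large as the requested zoom
    size in both dimensions; falls back to the full-resolution layer."""
    for w, h in reversed(layers):          # layers ordered largest first
        if w >= zoom_w and h >= zoom_h:
            return (w, h)
    return layers[0]
```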
  • In some embodiments, the state values can include colorimetry values of the image pixels. In some embodiments, the colorimetry values can include red, green, and blue (RGB) color values. In other embodiments, other color schemes, such as CMYK, can be used. The colorimetry values can also include information configured to adjust other parameters or viewing conditions of the image or to transfer from one color scheme to another, such as chromaticities or transfer functions. The colorimetry values can be updated to accommodate color filtration. Color filtration can advantageously be used to expose new visual information or to match desired aesthetics (for example, for medical image data). The state values can also include an alpha value to indicate a level of transparency of the pixels. The alpha value can be used to accommodate adjustment of the transparency of the image (from fully opaque to fully transparent) and to accommodate visual overlays of multiple images.
  • When multiple images are being displayed on the tiled array display and state messages are being sent via broadcast, multiple state messages can be transmitted, with each including an ID of the image to which it pertains in addition to the state values of that image. Other image values can also be included within the messages sent by the control node 102.
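  • Gathering the state values described above into one record might look like the following (purely illustrative; the field names and types are hypothetical and not specified by this disclosure):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class StateMessage:
    image_id: int        # which image the state values pertain to
    x: float             # horizontal position (or delta) of the reference pixel
    y: float             # vertical position (or delta)
    z: float             # zoom or magnification level
    angle: float = 0.0   # requested rotation about the reference pixel
    rgb: Tuple[float, float, float] = (1.0, 1.0, 1.0)  # colorimetry scaling
    alpha: float = 1.0   # 1.0 = fully opaque, 0.0 = fully transparent
```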
  • In some embodiments, as noted above, the message can include an image portion and a display parameter portion. The image portion can comprise image data, such as a computer file in the form of a compressed or uncompressed image, or an identification of an image that is already stored on local memory (e.g., 130 FIG. 1B) of the display units. The display parameter portion of the message can comprise commands or instructions regarding the display and rendering of the large digital image on the tiled array display. For example, the display parameter portion can include instructions indicating where the image is to be positioned on the tiled array display if it is not intended to take up the entire screen and/or at what resolution the image is to be displayed.
  • In some embodiments, the display parameter portion can include resolution information to indicate the resolution with which to display the large image. In other embodiments, the display nodes 100 can determine the appropriate resolution based on the native resolution of their associated displays 166 and the various levels of resolution of the received large image. In some embodiments, the display parameter portion can include colorimetery or transparency information.
  • At decision block 410, the CPU 105 of the display node 100A determines whether the state message impacts a display of an image or other data on its associated display 166. If no action is required by the display node 100A, then the display node 100A continues to receive the state messages. If, in decision block 410, it is determined that action is required, then the process continues to block 415.
  • For example, each of the display nodes can include a position module configured to store data indicative of any position in a tiled array display. For example, such a module can be anywhere in the memory system. Further, because there is no clear limit on the possible size of the array 10, the module can be configured to accept and retain an input indicative of any position on an array of any size. In some embodiments, the module can be embodied in software and accessible through a user interface, not illustrated.
  • In preparing a display node 100A for operation as a node in an arrayed display, such as the array 10, the display node 100A can be physically positioned in the array. For example, the display node 100A can be positioned adjacent other nodes in an array, such as the array 10. A user can also input at least a first data indicative of the position of the display node 100A in the array, the position module receiving and retaining the at least a first data or other data indicative of the position. For example, the user can remotely connect to the display node 100A through the use of VNC software, as described above, to indicate the position of the display node 100A in the array.
  • Each of the display nodes can also include one or more image processing modules, such as the image processing module 150, which can be combined into a single module with the position module described above or other modules described herein, can be separate from all other modules, or can be combined into any of various possible combinations of the modules described herein. In some embodiments, the module used for the determination performed in decision block 410 can be referred to as an “action required determination module.”
  • In some embodiments, the image processing module 150 can be configured to process the message received in operation block 405 in order to determine if the information in the message is indicative of a request for the display unit to process and generate an image on its corresponding local display 166. In some embodiments, the display nodes 100 can generate a virtual map of the entire array 10, and calculate the boundaries of the image resulting from processing the image in the manner indicated in the message. In some embodiments, the resulting boundaries can be compared to the position of the node 100A retained in the position module, described above.
  • If the position of the resulting boundaries indicates that no part of the resulting image would lie on the display unit at the position in the array retained in the position module, then the result of decision block 410 is “NO”, and the routine can return to operation block 405 and repeat. If, on the other hand, the result indicates that any part of the resulting image would lie on the display unit at the position in the array retained in the position module, then the result of decision block 410 is “YES” and the process can continue to operation block 415.
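  • The boundary comparison driving the "YES"/"NO" outcome of decision block 410 amounts to a rectangle-intersection test between the image's computed bounding box and the node's region of the virtual array (a minimal sketch under that assumption; regions are half-open (x0, y0, x1, y1) tuples and the name is hypothetical):

```python
def action_required(img, node):
    """True when the image's bounding box intersects this node's
    display region -- the 'YES' branch of decision block 410."""
    ix0, iy0, ix1, iy1 = img
    nx0, ny0, nx1, ny1 = node
    return ix0 < nx1 and nx0 < ix1 and iy0 < ny1 and ny0 < iy1
```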
  • The display units can also be configured to perform other analyses on the information in the message. For example, if the message includes information indicating that only a portion of an image currently displayed on the display unit has changed, but the portion of the image displayed on the display unit performing the analysis has not changed, then the result determined in decision block 410 can also be “NO.”
  • At block 415, each of the display nodes 100 can determine the respective portion of the large image that it is responsible for displaying based on the state values of the message. In some embodiments, each of the display nodes includes a module for further processing the message received to determine what discrete part of the image is to be displayed on the display node performing operation block 415. For example, in a continuation of the operation described above with reference to decision block 410, or repeating all of those processes, the display node can determine which pixels of the image fall within its position in the above described virtual array.
  • In some embodiments, where the image file is stored on the local display node memory in a compressed format, the display node can decompress the entire image or only a discrete part of the image, depending on the compression format. Optionally, the display node can process the image file at the maximum resolution of the image, or at another resolution, for example, another resolution included in a pyramidal image data format.
  • In some embodiments, the display node can first calculate a mosaic representation of the image, using the boundaries of the sub tiles of the image noted above. In this example, the display unit can perform the determination of operation block 415 by determining which tiles or sub tiles of the image would be positioned on the display unit, if the image was oriented according to the message. In some embodiments, the display unit can be configured to determine that a sub tile would be positioned on the display unit if any portion of the sub tile would lie on the display unit if the image were positioned according to the message.
  • A determination, such as the above-described determination performed in operation block 415, can be performed by an image processing module, such as the image processing module 150, an additional module combined with the image processing module 150, a separate image processing module, or any other device or module configured to perform the functions described herein. The module used to perform the operation performed in operation block 415, can be referred to as a “portion identification module.”
  • After determining the discrete portion of the image to display, which could include the entire image, a specific list of pixels, a range of rows and columns of pixels, a list of sub tiles of the image, or another description of the discrete portion, the routine can continue to optional operation block 420.
  • At optional block 420, the display node 100A can update image data for display, if beneficial, for example, based on the state values of the message or based on the specific techniques used to perform the above steps. For example, if the Z value has changed (e.g., indicating a zoom in has been requested), new tiles or sub-images can be loaded to illustrate the increased detail of the image portion being displayed on the display node 100A. In some arrangements where new image data is received, the display node 100A can create one or more sub-images of the large image based on the image data corresponding to the portion of the large image that it is responsible for displaying.
  • In some embodiments, for example, where the display units perform the operation block 415 based on the position of tiles or sub tiles of the image, the display unit can process the tiles or sub tiles identified in operation block 415. As noted above, depending on information in the message, such as an indication of magnification, zoom, or other requested parameters regarding the display of the image, the display unit can selectively process the image data according to the identified tiles or sub tiles of a selected resolution or layer of the original image data, for example, where the original image is in the form of a pyramidal resolution format image, with each of the different resolutions broken down into tiles or sub tiles. This technique can provide a further advantage in that, if an image spans at least two (2) display units such that two different portions are identified by these respective display units in operation block 415, neither of the display units needs to process the entire image. Rather, the analysis of operation block 415 can be used to help the display units efficiently determine which part of an image to process for display, without having to process, resolve, decompress, map, etc., the entire original image at full resolution.
  • One or more sub-images can be generated by the image processing module 150 as described above with respect to block 220 of FIG. 2. The created sub-images can be stored in memory or in a preprocessed image data structure, as described above with respect to block 230 of FIG. 2. The step of updating data in operation block 420 can include loading individual tiles or sub-images via multithreading. For example, multiple tile threads can be configured to run in parallel to increase performance speed, as described above.
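The parallel tile threads described above can be sketched with a small thread pool, so that slow I/O or decompression of one tile does not serialize the others. This is a hedged illustration only: `load_tile` is a hypothetical stand-in for whatever per-tile read-and-decode step a node actually performs.

```python
from concurrent.futures import ThreadPoolExecutor

def load_tile(index):
    # Placeholder decode step; a real display node would read and
    # decompress the tile file for the current resolution layer
    # (assumption, not the patent's implementation).
    row, col = index
    return bytes(16)  # pretend-decoded pixel data

def load_tiles_parallel(indices, workers=4):
    """Decode the needed tiles on a small thread pool and return a
    mapping from tile index to its pixel data."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(indices, pool.map(load_tile, indices)))
```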
  • At block 425, the display node 100A outputs its respective portion of the large image on its associated display 166, thereby presenting a high-resolution rendering of the original large image on the tiled array display. When an image is being moved around the display quickly, certain tiles may not be loaded from a tile thread in time to display on the display 166, resulting in black tiles being displayed. Accordingly, in some embodiments, a lower or the lowest resolution layer image can be used, processed, displayed, etc., to temporarily mask the black tiles until the higher resolution tile is loaded from the tile thread.
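The low-resolution masking described above reduces to a simple fallback at draw time: prefer the full-resolution tile if its loader thread has finished, otherwise show scaled-up low-resolution data so the region is never black. A minimal sketch, with `lowres_fallback` as a hypothetical helper that crops and upscales the lowest-resolution layer:

```python
def tile_for_display(index, hi_cache, lowres_fallback):
    """Return the full-resolution tile when its loader thread has
    finished; otherwise fall back to low-resolution data so no black
    tile is shown while loading completes."""
    hi = hi_cache.get(index)
    if hi is not None:
        return hi                 # full quality, already loaded
    return lowres_fallback(index)  # temporary low-res stand-in
```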
  • In some embodiments, such as when an image spans multiple display nodes, the display nodes 100 can be configured to adjust their respective display output to compensate for the presence of bezels between the display nodes 100. For example, the display units can be configured to operate in two or more modes, such as an aspect ratio preservation mode or an image data preservation mode.
  • In an “aspect ratio preservation mode”, in the decision block 410 and the operation block 415, the display units can use a map of all the pixels in the array 10 that includes virtual pixels filling the gaps between adjacent display nodes. These virtual pixels are not assigned to any of the display nodes. As such, an image displayed on the array 10 that overlaps multiple display units will have portions of the image missing; that is, any portion of a displayed image falling within the gaps formed by the bezels will not be displayed. However, the portions of the image that are displayed will preserve the aspect ratio of the original image. As used herein, the term “unified display of the digital image on the array of display units” includes an image displayed as in the aspect ratio preservation mode, despite the missing data.
  • This mode can be desirable when viewing images for aesthetics, such as images of art, architecture, etc., because the alignment of features that span bezels is preserved. However, the image data preservation mode, described below, can also be desirable for viewing images for aesthetic reasons.
  • In an “image data preservation mode”, in the decision block 410 and the operation block 415, the display units can use a map of all the pixels in the array 10 that does not include or utilize unassigned pixels in the gaps between adjacent display nodes. Instead, every pixel of the original image to be displayed on the array 10 is assigned to a display node. As such, the aspect ratio of an image displayed on the array 10 that overlaps multiple display units will be affected/distorted in the portion spanning the bezels between the displays (166 FIG. 1B) of the display nodes. However, every pixel of the original image is displayed; no image data is lost to the bezels. As used herein, the term “unified display of the digital image on the array of display units” includes an image displayed as in the image data preservation mode, despite the distortion of the resulting aspect ratio.
  • This mode can be desirable for scientific analyses of images because all of the image data is displayed, regardless of whether an image spans a bezel. However, the aspect ratio preservation mode can also be helpful in scientific analyses, for example, where it is desired to compare the proportional sizes of features of one or more images.
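The difference between the two modes comes down to how node-local pixel coordinates map into the array-wide pixel map: with or without virtual bezel columns. A minimal sketch of that mapping for the horizontal axis, with illustrative panel and bezel widths (the names are assumptions, not the patent's):

```python
def global_x(node_col, local_x, panel_w, bezel_px, preserve_aspect=True):
    """Map a node-local x coordinate to the array-wide pixel map.
    In aspect-ratio-preservation mode each bezel contributes `bezel_px`
    virtual columns assigned to no node, so image content falling on a
    bezel is simply not drawn. In data-preservation mode the gaps are
    ignored, so every image pixel lands on some panel, at the cost of
    a slight distortion across bezels."""
    gap = bezel_px if preserve_aspect else 0
    return node_col * (panel_w + gap) + local_x
```

With 1920-pixel panels and a 40-pixel bezel, the second panel starts at global column 1960 in aspect-ratio-preservation mode but at 1920 in data-preservation mode; the 40 skipped columns are exactly the image data hidden behind the bezel.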
  • In some embodiments, the display nodes 100 can be configured to activate one or a plurality of predetermined functions to change the position and/or size of the image to conform to one or more physical or logical boundaries of the tiled array display, such as the “snapping” function described in U.S. Provisional Application 61/218,378, entitled “Systems, Methods, and Devices For Manipulation of Images on Tiled Displays,” the entire content of which is hereby expressly incorporated herein by reference.
  • FIGS. 5A and 5B schematically depict how an image can be displayed on one or more display units 100A, 100B, 100C, 100D of a tiled array 500. The tiled array 500 can be in the same form as the array 10, or another form.
  • As depicted in FIG. 5A, an image 510 can be displayed in such a manner that it overlaps two display units 100A and 100C of the array 500. As further detailed in FIG. 5B, if the array 500 is only displaying an image on a limited number of the total number of display units, such as only the units 100A and 100C, then the associated image data from the image source can be selectively sent to those display units 100A and 100C (e.g., via a multicast message). This can help reduce bandwidth requirements for transmission of image data.
  • For example, with continued reference to FIGS. 5A and 5B, the image 510, when overlapping two display units 100A, 100C, can be broken down into parts corresponding to those portions displayed on different display units. As shown in FIG. 5B, the image 510 can be segregated into a first portion 511 (displayed on display unit 100A) and a second portion 512 (displayed on display unit 100C). As such, in some embodiments, the tiled array 500 can be configured to send the image data corresponding to the portion 511 to the display unit 100A and the other image data corresponding to the portion 512 to the display unit 100C, along with data indicating the position at which these portions 511, 512 should be displayed.
  • As such, the image data corresponding to the image 510 does not need to be broadcast to the other display nodes (e.g., 100B, 100D). Rather, in some embodiments, the control node 102 can be configured to only send image data to those display nodes in the array 500 that will be involved in displaying at least a portion of the image 510 (e.g., via multicast). This can greatly reduce the magnitude of data flowing into and out of each of the display nodes 100.
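Selecting the recipients of an image-data message is again a rectangle-overlap test, run by the control node over the regions of all display nodes. A minimal sketch, assuming each node's region is known as an `(x, y, w, h)` tuple in array coordinates (names illustrative):

```python
def nodes_overlapping(img_rect, node_rects):
    """Return only the display nodes whose screen region intersects the
    image rectangle; the control node would address its image-data
    messages (e.g., a multicast group) to just these nodes."""
    ix, iy, iw, ih = img_rect
    hits = []
    for name, (nx, ny, nw, nh) in node_rects.items():
        # Standard axis-aligned rectangle overlap test.
        if ix < nx + nw and nx < ix + iw and iy < ny + nh and ny < iy + ih:
            hits.append(name)
    return hits
```

For a 2x2 array, an image confined to the left column would be sent only to the two left-hand nodes, leaving the others, and the network links feeding them, untouched.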
  • With reference to FIG. 6, in some embodiments, the control node 102 can be configured to generate a schematic representation of the display nodes 100 for use as a graphical user interface. In some embodiments, the control node 102 can be configured to display the images being displayed on the tiled array display on its own associated display, such as a display monitor 600. The display monitor 600 of the control node 102 can display a wire frame 605 representing the tiled array display 500. In other embodiments, the display monitor 600 of the control node 102 can include other visual cues for representing the borders of the individual displays 166 of each display node of the tiled array display. As shown in the lower portion of FIG. 6, the image 510 can be displayed both on the tiled array display 500 and the display monitor 600 of the control node 102. The representation of the image 510 on the control node display 600 can be in the form of a reduced resolution version of the image 510, a full resolution version, a thumbnail version, or other versions.
  • The display monitor 600 of the control node can have any resolution. However, in some embodiments, the resolution of the display 600 of the control node 102 will be significantly less than the total resolution of the tiled array display 500, including all the individual display units 100A, 100B, etc.
  • Thus, the wire frame 605 schematically representing the tiled array display 500 can be presented on the display 600 of the control node 102 to provide an approximation of the proportions of the tiled array display 500. For example, if the tiled array display 500 provides a total of 19,200 pixels wide by 10,800 pixels high, the display 600 of the control node 102 might provide 1/10th of that resolution in each dimension, e.g., 1,920 pixels wide by 1,080 pixels high, or other resolutions.
  • The wire frame display can be the same or similar to the schematic wire frame representation illustrated in FIGS. 5A and 5B. The control node 102 can be further configured to allow a user to manipulate the placement of an image, such as the image 510, on the tiled array display. For example, a user can resize, reposition, rotate, and/or adjust color filtration or transparency of the one or more images. In some embodiments, user interaction devices, such as a keyboard, mouse, 3D mouse, speech recognition unit, gesture recognition device, Wii remote, electronic pointing device, or haptic input device can be used to manipulate the one or more images on the tiled array display 500. An arrow pointer or other visual indicator can be displayed on the tiled array display to allow for manipulation of the images via the user interaction devices.
  • In some embodiments, the manipulation of the image 510 on the control node display 600 corresponds with the manipulation of the image on the tiled array display 500. For example, a user can provide inputs, for example, by employing a mouse based control to drag and drop images on the wire frame representation to change the position or orientation of images on the tiled array display 500. These inputs can be used by the control node 102 to generate the messages (e.g., state messages comprising image state values) described above.
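Generating a state message from a drag on the wire-frame view is a coordinate scaling from the control node's display to the array's pixel space. A minimal sketch using the 1/10-scale example above; the message field names are illustrative, not taken from the patent.

```python
def drag_to_state_message(ui_x, ui_y, ui_w, ui_h,
                          array_w, array_h, zoom=1.0):
    """Convert a drag position on the control node's wire-frame display
    into an image state message expressed in array pixel coordinates."""
    sx = array_w / ui_w   # horizontal scale: UI pixels -> array pixels
    sy = array_h / ui_h   # vertical scale
    return {"x": round(ui_x * sx), "y": round(ui_y * sy), "zoom": zoom}
```

The resulting dictionary plays the role of the state values the control node broadcasts or multicasts; each display node then runs its portion-identification step against these array coordinates.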
  • The word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, Objective-C, C or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
  • In addition to the types of connections and couplings discussed above, any coupling or connection discussed herein could be a local area network, wireless local area network, wide area network, metropolitan area network, storage area network, system area network, server area network, small area network, campus area network, controller area network, cluster area network, personal area network, desk area network or any other type of network.
  • Any of the computers, laptops, servers (including the proxy server), control nodes, workstations, or other devices herein may be any type of computer system. A computer system may include a bus or other communication mechanism for communicating information, and a processor coupled with the bus for processing information. The computer system may also include a main memory, such as a random access memory (RAM), flash memory, or other dynamic storage device, coupled to the bus for storing information and instructions to be executed by the processor. The main memory also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor. The computer system may further include a read only memory (ROM) or other static storage device coupled to the bus for storing static information and instructions for the processor. A storage device, such as a magnetic disk, flash memory, or optical disk, may be provided and coupled to the bus for storing information and instructions.
  • The embodiments herein are related to the use of a computer system for the techniques and functions described herein in a network system. In some embodiments, such techniques and functions are provided by a computer system in response to the processor executing one or more sequences of one or more instructions contained in main memory. Such instructions may be read into main memory from another computer-readable storage medium, such as a storage device. Execution of the sequences of instructions contained in main memory may cause a processor to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement embodiments. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • The term “computer-readable storage medium” as used herein, in addition to having its ordinary meaning, refers to any medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as a storage device. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus.
  • Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge.
  • Computer systems can send messages and receive data, including program code, through the networks or other couplings. The received code may be executed by a processor as it is received, and/or stored in storage device, or other non-volatile storage for later execution. The term “broadcast” as used herein, in addition to having its ordinary meaning, may refer to any transmission of information over a network. The term “multicast” as used herein, in addition to having its ordinary meaning, can include broadcasting information to a subset of devices in communication with a network.
  • Although the foregoing inventions have been described in terms of some embodiments, other embodiments will be apparent to those of ordinary skill in the art from the disclosure herein. Moreover, the described embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms without departing from the spirit thereof. Accordingly, other combinations, omissions, substitutions and modifications will be apparent to the skilled artisan in view of the disclosure herein. Thus, the present inventions are not intended to be limited by the preferred embodiments.

Claims (37)

1. A tiled display system comprising:
an array of display units in communication with a network, each of the display units comprising an image display device and a processor, the image display devices being arranged in a tiled layout;
a control unit in communication with each of the display units via the network, the control unit configured to transmit to the display units, a message indicative of a state of a digital image;
wherein the processor of each of the display units is configured to determine a respective portion of the digital image to output on its respective image display device based at least in part on the message and to output the respective portions on the respective image display devices so as to collectively present a unified display of the digital image on the array of display units.
2. The system of claim 1, wherein all of the display units are configured to store the digital image in the form of discrete tiles, a combination of the tiles forming the entirety of the digital image.
3. The system of claim 2, wherein all of the display units are configured to define the respective portion of the digital image as a subset of the discrete tiles, and to process and display image data only from the subset of discrete tiles in response to the message.
4. The system of claim 1, wherein at least a plurality of the display units are configured to store the digital image in the form of discrete portions, a combination of the portions forming the entirety of the digital image.
5. The system of claim 4, wherein at least a plurality of the display units are configured to define the respective portion of the digital image as a subset of the discrete portions, and to process and display image data only from the subset of discrete portions in response to the message.
6. The system of claim 1, wherein at least a plurality of the display units are configured to store a plurality of copies of the digital image, the plurality of copies comprising data representing the digital image at different resolutions.
7. The system of claim 6, wherein at least a plurality of the display units are configured to selectively process and display at least a portion of one of the plurality of copies based on the message.
8. The system of claim 1, wherein the digital image comprises an image having at least 200 Megapixels.
9. The system of claim 1, wherein the message comprises one or more display parameters of the digital image.
10. The system of claim 9, wherein the one or more display parameters include a location of a reference pixel of the digital image on the array of display units.
11. The system of claim 9, wherein the one or more display parameters include a zoom level of the digital image.
12. The system of claim 9, wherein the one or more display parameters include at least one of an indication of a size of the digital image, a pixel color value, and a pixel transparency value.
13. The system of claim 1, wherein the message comprises at least a portion of the digital image.
14. The system of claim 1, wherein the control unit is configured to broadcast the message to all of the display units.
15. The system of claim 1, wherein the control unit is configured to transmit the message to a plurality of the display units via a multicast message.
16. A method of presenting a large digital image on an array of display units, the array including a control unit communicating with each of the display units over a network, each of the display units comprising a display monitor and a processor, the method comprising:
transmitting a message from the control unit to the display units, the message comprising at least one display parameter of a large digital image;
storing at least a portion of the large digital image on a local memory of each of the display units;
using a plurality of the display units in parallel to identify the respective portions of the large digital image to display on each of the respective display units, based at least in part on the at least one display parameter;
using the respective ones of the plurality of the display units to process the identified respective portions of the large digital image; and
using the respective ones of the plurality of the display units to output the respective portion of the large image on the respective display monitor of the display units in parallel, based at least in part on the at least one display parameter.
17. The method of claim 16, further comprising receiving the large digital image from an image data source in communication with the network.
18. The method of claim 16, wherein transmitting comprises transmitting a location of a reference pixel of the large image on the array.
19. The method of claim 16, wherein transmitting the message from the control unit to the display units comprises broadcasting the message to each of the display units.
20. The method of claim 16, wherein transmitting the message from the control unit to the display units comprises transmitting a multicast message to a plurality of the display units.
21. The method of claim 16, wherein transmitting the message from the control unit to the display units comprises transmitting a multicast message to at least one of the display units.
22. The method of claim 16, further comprising storing sub-images of at least a portion of the large image on each of the display units.
23. The method of claim 22, wherein storing comprises storing multiple resolution layers of at least a portion of the large image.
24. The method of claim 22, wherein storing comprises storing a plurality of image blocks of at least a portion of the large image.
25. A method of presenting a large digital image on an array of tiled display nodes, each of the tiled display nodes comprising a display monitor and a processor, the array also including a control unit communicating with each of the tiled display nodes over a network, the method comprising:
receiving a message from a control unit, the message comprising at least one display parameter of a large digital image;
determining a respective portion of the large digital image to display on each of the display nodes in response to the message received, based on the at least one display parameter;
using each of the display nodes in parallel to process groups of one or more stored sub-images corresponding to the respective portions of the large digital image; and
using each of the display nodes in parallel to output a display of the respective portions of the large digital image on the respective display monitors of each of the display nodes.
26. The method of claim 25, further comprising:
receiving a second message from the control unit, the second message comprising a second display parameter; and
updating the display based on the second display parameter, the second display parameter having a different value than the at least one display parameter.
27. A method of displaying an image on a subset of an array of a plurality of connected displays to which a control unit is also connected, as implemented by one or more computing devices, the method comprising:
determining which of one or more connected displays, chosen from the plurality of displays connected to the control unit, should be sent an updated image message;
establishing communication between the one or more connected displays and the control unit;
receiving the updated image message at the one or more connected displays from the control unit; and
updating the original image being displayed on the one or more connected displays based on the updated image message in parallel to display an updated image on said connected displays.
28. The method of claim 27, wherein establishing communication between the one or more connected displays and the control unit comprises establishing a multicast group address.
29. The method of claim 27, wherein the updated image message comprises at least one state value of the updated image.
30. The method of claim 29, wherein the at least one state value is generated based upon a detection of a user interaction event.
31. A method of presenting a plurality of digital images on a parallelized array of display units, each of the display units comprising a display monitor and a processor, and a control unit communicating with each of the display units over a network, the method comprising:
transmitting a first message from the control unit to a first subset of the display units, the first message comprising a first set of display parameters of a first digital image;
transmitting a second message from the control unit to a second subset of the display units, the second message comprising a second set of display parameters of a second digital image;
identifying a respective portion of the first digital image on each of the display units of the first subset to output for display based at least in part on the first set of display parameters;
identifying a respective portion of the second digital image on each of the display units of the second subset to output for display based at least in part on the second set of display parameters;
outputting the respective portion of the first digital image on the display monitor of each of the display units of the first subset in parallel based at least in part on the first set of display parameters, and
outputting the respective portion of the second digital image on the display monitor of each of the display units of the second subset in parallel based at least in part on the second set of display parameters.
32. The method of claim 31, wherein the first message and the second message comprise broadcast messages.
33. The method of claim 31, wherein the first message and the second message comprise multicast messages.
34. A display unit configured to operate as a node of an arrayed display system which can be formed of a plurality of the display units physically mounted adjacent to each other in a tiled layout, the display unit comprising:
a housing;
an image display device connected to the housing;
at least one processor operably connected to the image display device;
a memory system comprising at least one memory device configured to store at least one digital image and operably connected to the processor;
at least one network communication device operably connected to the processor and configured to receive at least one message transmitted over a network, the message including data indicative of a movement of an image that can be displayed on an arrayed display system comprising a tiled display arrangement larger than the image display device; and
a portion identification module configured to access at least one digital image that can be stored in the memory system, the portion identification module also being configured to identify a portion of the at least one digital image, the portion being less than an entirety of the at least one digital image, in response to information in a message received by the network communication device and based on a portion of an arrayed display system corresponding to the display unit.
35. The display unit of claim 34, wherein the memory system is configured to store a plurality of independently readable portions of a digital image, a combination of all of the independently readable portions forming an entirety of the digital image.
36. The display unit of claim 34, wherein the portion identification module is configured to identify the portion of the digital image as comprising a subset of a plurality of independently readable portions of the digital image, a combination of all of the independently readable portions forming an entirety of the digital image.
37. The display unit of claim 34, wherein the portion identification module is configured to determine positions of boundaries of at least one digital image stored in the memory system, positioned in accordance with data indicative of a requested position in an arrayed display system received by the at least one network communication device.
US12/545,026 2008-08-20 2009-08-20 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays Active 2032-01-19 US8410993B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/545,026 US8410993B2 (en) 2008-08-20 2009-08-20 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays
US13/854,814 US20140098006A1 (en) 2008-08-20 2013-04-01 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US9058108P 2008-08-20 2008-08-20
US12/545,026 US8410993B2 (en) 2008-08-20 2009-08-20 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/854,814 Continuation US20140098006A1 (en) 2008-08-20 2013-04-01 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays

Publications (2)

Publication Number Publication Date
US20100123732A1 true US20100123732A1 (en) 2010-05-20
US8410993B2 US8410993B2 (en) 2013-04-02

Family

ID=41695889

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/487,590 Active 2033-06-05 US8797233B2 (en) 2008-08-20 2009-06-18 Systems, methods, and devices for dynamic management of data streams updating displays
US12/545,026 Active 2032-01-19 US8410993B2 (en) 2008-08-20 2009-08-20 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays
US13/854,814 Abandoned US20140098006A1 (en) 2008-08-20 2013-04-01 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/487,590 Active 2033-06-05 US8797233B2 (en) 2008-08-20 2009-06-18 Systems, methods, and devices for dynamic management of data streams updating displays

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/854,814 Abandoned US20140098006A1 (en) 2008-08-20 2013-04-01 Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays

Country Status (1)

Country Link
US (3) US8797233B2 (en)

US20170186345A1 (en) * 2015-12-29 2017-06-29 Christie Digital Systems Usa, Inc. System for mounting a plurality of display units
US9761157B2 (en) 2013-03-16 2017-09-12 Adti Media Llc Customized sectional sign assembly kit and method of using kit for construction and installation of same
US9800828B1 (en) * 2013-03-15 2017-10-24 Cox Communications, Inc. Method for pre-rendering video thumbnails at other than macroblock boundaries
US9852666B2 (en) 2013-03-16 2017-12-26 Adti Media Llc Full height sectional sign assembly and installation kit and method of using same
US20170372346A1 (en) * 2016-06-22 2017-12-28 Fujifilm North America Corporation Automatic generation of image-based print product offering
US20180004475A1 (en) * 2015-03-19 2018-01-04 Fujitsu Limited Display method and display control apparatus
US10061553B2 (en) 2013-12-31 2018-08-28 Ultravision Technologies, Llc Power and data communication arrangement between panels
US20190246098A1 (en) * 2018-02-07 2019-08-08 Lockheed Martin Corporation Distributed Multi-Screen Array for High Density Display
US10429968B2 (en) * 2014-11-06 2019-10-01 Visteon Global Technologies, Inc. Reconfigurable messaging assembly
US20200004491A1 (en) * 2018-07-02 2020-01-02 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling thereof
US20200034614A1 (en) * 2018-07-30 2020-01-30 Ncr Corporation Item identification with low resolution image processing
US10930709B2 (en) 2017-10-03 2021-02-23 Lockheed Martin Corporation Stacked transparent pixel structures for image sensors
US20210314647A1 (en) * 2017-02-03 2021-10-07 Tv One Limited Method of video transmission and display
US11146781B2 (en) 2018-02-07 2021-10-12 Lockheed Martin Corporation In-layer signal processing
WO2021251585A1 (en) * 2020-06-10 2021-12-16 Samsung Electronics Co., Ltd. Electronic device for recognizing each of plurality of display modules and method for recognizing multi-display
US20210397398A1 (en) * 2019-03-13 2021-12-23 Xi'an Novastar Tech Co., Ltd. Method, Device and System for Configuring Display Screen
US11269577B2 (en) * 2016-04-22 2022-03-08 Displaylink (Uk) Limited Distributed video pipe
US11321042B2 (en) 2018-03-28 2022-05-03 Eizo Corporation Display system and program
US11347466B2 (en) * 2017-08-14 2022-05-31 Imax Theatres International Limited Wireless content delivery for a tiled LED display
US11616941B2 (en) 2018-02-07 2023-03-28 Lockheed Martin Corporation Direct camera-to-display system

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8970448B2 (en) * 2009-06-18 2015-03-03 Hiperwall, Inc. Systems, methods, and devices for manipulation of images on tiled displays
JP5589366B2 (en) * 2009-11-27 2014-09-17 ソニー株式会社 Information processing apparatus, information processing method, and program thereof
US9276986B2 (en) * 2010-04-27 2016-03-01 Nokia Technologies Oy Systems, methods, and apparatuses for facilitating remote data processing
JP5430491B2 (en) * 2010-05-17 2014-02-26 キヤノン株式会社 Information processing apparatus, display apparatus, display system, information processing apparatus control method, and display apparatus control method
US20110320944A1 (en) * 2010-06-29 2011-12-29 Nokia Corporation Systems, methods, and apparatuses for generating an integrated user interface
JP5229289B2 (en) 2010-09-24 2013-07-03 日本電気株式会社 Display device, coordinate notification method and program
JP5854232B2 (en) * 2011-02-10 2016-02-09 日本電気株式会社 Inter-video correspondence display system and inter-video correspondence display method
US9064447B2 (en) * 2011-12-13 2015-06-23 Vmware, Inc. Methods and devices for filtering and displaying data
JP5890688B2 (en) * 2012-01-06 2016-03-22 キヤノン株式会社 Information processing apparatus, control method, and program
GB2505944A (en) 2012-09-17 2014-03-19 Canon Kk A video projector, a cluster of video projectors and a method for wirelessly transmitting image data within the cluster of video projectors
EP3005649B1 (en) 2013-06-06 2019-03-13 Google LLC Systems, methods, and media for presenting media content
JP2015102742A (en) * 2013-11-26 2015-06-04 ソニー株式会社 Image processing apparatus and image processing method
US20150220300A1 (en) * 2014-02-03 2015-08-06 Tv One Limited Systems and methods for configuring a video wall
KR101386285B1 (en) * 2014-02-07 2014-04-17 에이치씨테크(주) Image processing board and survelliance system using image processing board
US9082018B1 (en) 2014-09-30 2015-07-14 Google Inc. Method and system for retroactively changing a display characteristic of event indicators on an event timeline
US10140827B2 (en) 2014-07-07 2018-11-27 Google Llc Method and system for processing motion event notifications
USD782495S1 (en) 2014-10-07 2017-03-28 Google Inc. Display screen or portion thereof with graphical user interface
WO2016196424A1 (en) * 2015-05-29 2016-12-08 Legends Attractions, Llc Thematic interactive attraction
US9361011B1 (en) 2015-06-14 2016-06-07 Google Inc. Methods and systems for presenting multiple live video feeds in a user interface
CN105141876B (en) * 2015-09-24 2019-02-22 京东方科技集团股份有限公司 Video signal conversion method, video-signal converting apparatus and display system
US10061552B2 (en) 2015-11-25 2018-08-28 International Business Machines Corporation Identifying the positioning in a multiple display grid
CN107168660A (en) * 2016-03-08 2017-09-15 成都锐成芯微科技股份有限公司 Image procossing caching system and method
CN105867861B (en) * 2016-03-28 2024-03-26 京东方科技集团股份有限公司 Tiled display system and control method thereof
US10972574B2 (en) 2016-04-27 2021-04-06 Seven Bridges Genomics Inc. Methods and systems for stream-processing of biomedical data
US10506237B1 (en) 2016-05-27 2019-12-10 Google Llc Methods and devices for dynamic adaptation of encoding bitrate for video streaming
US10957171B2 (en) 2016-07-11 2021-03-23 Google Llc Methods and systems for providing event alerts
US10380429B2 (en) 2016-07-11 2019-08-13 Google Llc Methods and systems for person detection in a video feed
KR20180068470A (en) * 2016-12-14 2018-06-22 삼성전자주식회사 Display apparatus consisting a multi display system and control method thereof
US10410086B2 (en) 2017-05-30 2019-09-10 Google Llc Systems and methods of person recognition in video streams
US11783010B2 (en) 2017-05-30 2023-10-10 Google Llc Systems and methods of person recognition in video streams
US10664688B2 (en) 2017-09-20 2020-05-26 Google Llc Systems and methods of detecting and responding to a visitor to a smart home environment
US11494209B2 (en) 2019-09-04 2022-11-08 Hiperwall, Inc. Multi-active browser application
DE112020005127T5 (en) * 2019-10-25 2022-12-01 H2Vr Holdco, Inc. UNLIMITED PIXEL CANVAS FOR LED VIDEO WALLS
US11893795B2 (en) 2019-12-09 2024-02-06 Google Llc Interacting with visitors of a connected home environment
US11594173B1 (en) 2020-12-21 2023-02-28 Cirrus Systems, Inc. Modular display system with wireless mesh networking
CN114640887A (en) * 2022-03-22 2022-06-17 深圳创维-Rgb电子有限公司 Display method, device, equipment and computer readable storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5657046A (en) * 1989-11-14 1997-08-12 Imtech International, Inc. Video moving message display
US6208319B1 (en) * 1996-03-26 2001-03-27 Fourie, Inc. Display device
US6232932B1 (en) * 1998-07-16 2001-05-15 Craig A. Thorner Apparatus and method for providing modular reconfigurable multi-function displays for computer simulations
US20010006375A1 (en) * 1999-11-30 2001-07-05 International Business Machines Corporation Host device, image display device, image display system, image display method, panel attribute reading-out method and image display control method
US20030020671A1 (en) * 1999-10-29 2003-01-30 Ovid Santoro System and method for simultaneous display of multiple information sources
US20030098820A1 (en) * 1999-06-14 2003-05-29 Mitsubishi Denki Kabushiki Kaisha Image signal generating apparatus, image signal transmission apparatus, image signal generating method, image signal transmission method, image display unit, control method for an image display unit, and image display system
US6611241B1 (en) * 1997-12-02 2003-08-26 Sarnoff Corporation Modular display system
US7053862B2 (en) * 2003-12-31 2006-05-30 Zerphy Byron L System and method for rapidly refreshing a dynamic message sign display panel
US20060256035A1 (en) * 2005-04-28 2006-11-16 Sony Corporation Display device and method, recording medium, program, and display device securing mechanism, and display system
US7193583B2 (en) * 2003-12-31 2007-03-20 Zerphy Byron L Automatic detection of dynamic message sign display panel configuration
US20090201224A1 (en) * 2002-10-29 2009-08-13 National Readerboard Supply Company Readerboard system
US20100045594A1 (en) * 2008-08-20 2010-02-25 The Regents Of The University Of California Systems, methods, and devices for dynamic management of data streams updating displays
US7683856B2 (en) * 2006-03-31 2010-03-23 Sony Corporation E-ink touchscreen visualizer for home AV system
US7686454B2 (en) * 2003-09-08 2010-03-30 Nec Corporation Image combining system, image combining method, and program
US7778842B2 (en) * 1996-01-16 2010-08-17 The Nasdaq Omx Group, Inc. Media wall for displaying financial information
US7880687B2 (en) * 2005-04-28 2011-02-01 Sony Corporation Display device, display method, program, recording medium, and composite image display apparatus

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07146671A (en) * 1993-06-16 1995-06-06 Mitsubishi Electric Corp Large-sized video display device
JPH11161590A (en) * 1997-11-27 1999-06-18 Hitachi Ltd Information viewing device
US6473088B1 (en) * 1998-06-16 2002-10-29 Canon Kabushiki Kaisha System for displaying multiple images and display method therefor
US20070174429A1 (en) * 2006-01-24 2007-07-26 Citrix Systems, Inc. Methods and servers for establishing a connection between a client system and a virtual machine hosting a requested computing environment
US8713186B2 (en) * 2007-03-13 2014-04-29 Oracle International Corporation Server-side connection resource pooling
US20090079694A1 (en) * 2007-09-20 2009-03-26 Rgb Spectrum Integrated control system with keyboard video mouse (kvm)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5657046A (en) * 1989-11-14 1997-08-12 Imtech International, Inc. Video moving message display
US7778842B2 (en) * 1996-01-16 2010-08-17 The Nasdaq Omx Group, Inc. Media wall for displaying financial information
US6208319B1 (en) * 1996-03-26 2001-03-27 Fourie, Inc. Display device
US6611241B1 (en) * 1997-12-02 2003-08-26 Sarnoff Corporation Modular display system
US6232932B1 (en) * 1998-07-16 2001-05-15 Craig A. Thorner Apparatus and method for providing modular reconfigurable multi-function displays for computer simulations
US20030098820A1 (en) * 1999-06-14 2003-05-29 Mitsubishi Denki Kabushiki Kaisha Image signal generating apparatus, image signal transmission apparatus, image signal generating method, image signal transmission method, image display unit, control method for an image display unit, and image display system
US20030020671A1 (en) * 1999-10-29 2003-01-30 Ovid Santoro System and method for simultaneous display of multiple information sources
US20010006375A1 (en) * 1999-11-30 2001-07-05 International Business Machines Corporation Host device, image display device, image display system, image display method, panel attribute reading-out method and image display control method
US20090201224A1 (en) * 2002-10-29 2009-08-13 National Readerboard Supply Company Readerboard system
US7686454B2 (en) * 2003-09-08 2010-03-30 Nec Corporation Image combining system, image combining method, and program
US7053862B2 (en) * 2003-12-31 2006-05-30 Zerphy Byron L System and method for rapidly refreshing a dynamic message sign display panel
US7193583B2 (en) * 2003-12-31 2007-03-20 Zerphy Byron L Automatic detection of dynamic message sign display panel configuration
US20060256035A1 (en) * 2005-04-28 2006-11-16 Sony Corporation Display device and method, recording medium, program, and display device securing mechanism, and display system
US7880687B2 (en) * 2005-04-28 2011-02-01 Sony Corporation Display device, display method, program, recording medium, and composite image display apparatus
US8102333B2 (en) * 2005-04-28 2012-01-24 Sony Corporation Display device securing mechanism and display system that rotates display devices around a rotational axis
US7683856B2 (en) * 2006-03-31 2010-03-23 Sony Corporation E-ink touchscreen visualizer for home AV system
US20100045594A1 (en) * 2008-08-20 2010-02-25 The Regents Of The University Of California Systems, methods, and devices for dynamic management of data streams updating displays

Cited By (108)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120266102A1 (en) * 2006-01-31 2012-10-18 Accenture Global Services Limited System For Storage And Navigation Of Application States And Interactions
US9141937B2 (en) * 2006-01-31 2015-09-22 Accenture Global Services Limited System for storage and navigation of application states and interactions
US9575640B2 (en) 2006-01-31 2017-02-21 Accenture Global Services Limited System for storage and navigation of application states and interactions
US8410993B2 (en) * 2008-08-20 2013-04-02 The Regents Of The University Of California Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays
US8497883B2 (en) * 2009-02-05 2013-07-30 Yappa Corporation Information display device
US20110169860A1 (en) * 2009-02-05 2011-07-14 Masahiro Ito Information Display Device
US8884925B2 (en) * 2009-04-05 2014-11-11 Radion Engineering Co. Ltd. Display system and method utilizing optical sensors
US20120026134A1 (en) * 2009-04-05 2012-02-02 Sharon Ehrilich Unified input and display system and method
US20110115990A1 (en) * 2009-11-13 2011-05-19 Joe Bhaktiar Display system
US9247028B1 (en) 2009-11-24 2016-01-26 Google Inc. Latency-guided web content retrieval, serving, and rendering
US8706802B1 (en) * 2009-11-24 2014-04-22 Google Inc. Latency-guided web content retrieval, serving, and rendering
US8975808B2 (en) 2010-01-26 2015-03-10 Lightizer Korea Inc. Light diffusion of visible edge lines in a multi-dimensional modular display
US20110310070A1 (en) * 2010-06-17 2011-12-22 Henry Zeng Image splitting in a multi-monitor system
US20120032929A1 (en) * 2010-08-06 2012-02-09 Cho Byoung Gu Modular display
WO2012037419A3 (en) * 2010-09-16 2014-03-20 Omnyx, LLC Digital pathology image manipulation
US20120139947A1 (en) * 2010-12-02 2012-06-07 Sony Corporation Information processor, information processing method and program
US8935431B2 (en) 2010-12-17 2015-01-13 International Business Machines Corporation Highly scalable and distributed data sharing and storage
WO2013081624A1 (en) 2011-12-02 2013-06-06 Hewlett-Packard Development Company, L.P. Video clone for a display matrix
US9691356B2 (en) * 2011-12-02 2017-06-27 Hewlett-Packard Development Company, L.P. Displaying portions of a video image at a display matrix
CN103959811A (en) * 2011-12-02 2014-07-30 惠普发展公司,有限责任合伙企业 Video clone for a display matrix
US20140293132A1 (en) * 2011-12-02 2014-10-02 Kent E. Biggs Video clone for a display matrix
EP2786590A4 (en) * 2011-12-02 2015-07-15 Hewlett Packard Development Co Video clone for a display matrix
US20130201176A1 (en) * 2012-02-08 2013-08-08 Samsung Electronics Co., Ltd. Display apparatus
US10475414B2 (en) 2012-02-08 2019-11-12 Samsung Electronics Co., Ltd. Display apparatus for selectively displaying images on transparent display region based on aspect ratios of images and the transparent display region
US9997130B2 (en) * 2012-02-08 2018-06-12 Samsung Electronics Co., Ltd. Display apparatus for displaying additional information on transparent display region
US20130321342A1 (en) * 2012-06-01 2013-12-05 Chun-Yuan Cheng Optical touch screen expansion method
US20140189487A1 (en) * 2012-12-27 2014-07-03 Qualcomm Innovation Center, Inc. Predictive web page rendering using a scroll vector
US9367641B2 (en) * 2012-12-27 2016-06-14 Qualcomm Innovation Center, Inc. Predictive web page rendering using a scroll vector
US9848207B2 (en) 2013-03-13 2017-12-19 Ologn Technologies Ag Efficient screen image transfer
US9235905B2 (en) 2013-03-13 2016-01-12 Ologn Technologies Ag Efficient screen image transfer
US20140269930A1 (en) * 2013-03-14 2014-09-18 Comcast Cable Communications, Llc Efficient compositing of multiple video transmissions into a single session
US9800828B1 (en) * 2013-03-15 2017-10-24 Cox Communications, Inc. Method for pre-rendering video thumbnails at other than macroblock boundaries
US10187667B1 (en) 2013-03-15 2019-01-22 Cox Communications, Inc. Simultaneously optimizing transport bandwidth and client device performance
US10558333B1 (en) 2013-03-15 2020-02-11 Cox Communications, Inc System and method for providing network-based video manipulation resources to a client device
WO2014160537A1 (en) * 2013-03-15 2014-10-02 Harborside Press, LLC Interactive synchronized multi-screen display
US9761157B2 (en) 2013-03-16 2017-09-12 Adti Media Llc Customized sectional sign assembly kit and method of using kit for construction and installation of same
US9666105B2 (en) 2013-03-16 2017-05-30 ADTI Media, LLC Sign construction with modular wire harness arrangements and methods of using same for backside to frontside power and data distribution schemes
US8824125B1 (en) 2013-03-16 2014-09-02 ADTI Media, LLC Modular installation and conversion kit for electronic sign structure and method of using same
US8929083B2 (en) 2013-03-16 2015-01-06 ADTI Media, LLC Compound structural frame and method of using same for efficient retrofitting
US10210778B2 (en) 2013-03-16 2019-02-19 Adti Media Llc Sign construction with sectional sign assemblies and installation kit and method of using same
US10192468B2 (en) 2013-03-16 2019-01-29 ADTI Media, LLC Sign construction with modular installation and conversion kit for electronic sign structure and method of using same
US9852666B2 (en) 2013-03-16 2017-12-26 Adti Media Llc Full height sectional sign assembly and installation kit and method of using same
US9047791B2 (en) 2013-03-16 2015-06-02 Adti Media, Llc. Sign construction with sectional sign assemblies and installation kit and method of using same
US9536457B2 (en) 2013-03-16 2017-01-03 Adti Media Llc Installation kit and method of using same for sign construction with sectional sign assemblies
US9787755B2 (en) * 2013-06-25 2017-10-10 Tencent Technology (Shenzhen) Company Limited Method and device for browsing network data, and storage medium
US20150295991A1 (en) * 2013-06-25 2015-10-15 Tencent Technology (Shenzhen) Company Limited Method and device for browsing network data, and storage medium
US20150040075A1 (en) * 2013-08-05 2015-02-05 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
CN104469536A (en) * 2013-09-17 2015-03-25 株式会社理光 Distribution management apparatus and distribution system
US10380925B2 (en) 2013-12-31 2019-08-13 Ultravision Technologies, Llc Modular display panel
US9984603B1 (en) 2013-12-31 2018-05-29 Ultravision Technologies, Llc Modular display panel
US9535650B2 (en) 2013-12-31 2017-01-03 Ultravision Technologies, Llc System for modular multi-panel display wherein each display is sealed to be waterproof and includes array of display elements arranged to form display panel surface
US9164722B2 (en) 2013-12-31 2015-10-20 Ultravision Technologies, Llc Modular display panels with different pitches
US10373535B2 (en) 2013-12-31 2019-08-06 Ultravision Technologies, Llc Modular display panel
US9528283B2 (en) 2013-12-31 2016-12-27 Ultravision Technologies, Llc Method of performing an installation of a display unit
US9513863B2 (en) 2013-12-31 2016-12-06 Ultravision Technologies, Llc Modular display panel
US9134773B2 (en) 2013-12-31 2015-09-15 Ultravision Technologies, Llc Modular display panel
US10410552B2 (en) 2013-12-31 2019-09-10 Ultravision Technologies, Llc Modular display panel
US9832897B2 (en) 2013-12-31 2017-11-28 Ultravision Technologies, Llc Method of assembling a modular multi-panel display system
US9416551B2 (en) 2013-12-31 2016-08-16 Ultravision Technologies, Llc Preassembled display systems and methods of installation thereof
US9207904B2 (en) 2013-12-31 2015-12-08 Ultravision Technologies, Llc Multi-panel display with hot swappable display panels and methods of servicing thereof
US10871932B2 (en) 2013-12-31 2020-12-22 Ultravision Technologies, Llc Modular display panels
US9195281B2 (en) 2013-12-31 2015-11-24 Ultravision Technologies, Llc System and method for a modular multi-panel display
US9916782B2 (en) 2013-12-31 2018-03-13 Ultravision Technologies, Llc Modular display panel
US9940856B2 (en) 2013-12-31 2018-04-10 Ultravision Technologies, Llc Preassembled display systems and methods of installation thereof
US9978294B1 (en) 2013-12-31 2018-05-22 Ultravision Technologies, Llc Modular display panel
US10540917B2 (en) 2013-12-31 2020-01-21 Ultravision Technologies, Llc Modular display panel
US9990869B1 (en) 2013-12-31 2018-06-05 Ultravision Technologies, Llc Modular display panel
US9582237B2 (en) 2013-12-31 2017-02-28 Ultravision Technologies, Llc Modular display panels with different pitches
US10061553B2 (en) 2013-12-31 2018-08-28 Ultravision Technologies, Llc Power and data communication arrangement between panels
US9642272B1 (en) 2013-12-31 2017-05-02 Ultravision Technologies, Llc Method for modular multi-panel display wherein each display is sealed to be waterproof and includes array of display elements arranged to form display panel surface
US10248372B2 (en) 2013-12-31 2019-04-02 Ultravision Technologies, Llc Modular display panels
US9349306B2 (en) 2013-12-31 2016-05-24 Ultravision Technologies, Llc Modular display panel
US20160019831A1 (en) * 2014-07-16 2016-01-21 Ultravision Technologies, Llc Display System having Module Display Panel with Circuitry for Bidirectional Communication
US10706770B2 (en) * 2014-07-16 2020-07-07 Ultravision Technologies, Llc Display system having module display panel with circuitry for bidirectional communication
US9311847B2 (en) 2014-07-16 2016-04-12 Ultravision Technologies, Llc Display system having monitoring circuit and methods thereof
US10108389B2 (en) 2014-08-04 2018-10-23 At&T Intellectual Property I, L.P. Method and apparatus for presentation of media content
US9817627B2 (en) * 2014-08-04 2017-11-14 At&T Intellectual Property I, L.P. Method and apparatus for presentation of media content
US10592195B2 (en) * 2014-08-04 2020-03-17 At&T Intellectual Property I, L.P. Method and apparatus for presentation of media content
US20160034240A1 (en) * 2014-08-04 2016-02-04 At&T Intellectual Property I, Lp Method and apparatus for presentation of media content
US10429968B2 (en) * 2014-11-06 2019-10-01 Visteon Global Technologies, Inc. Reconfigurable messaging assembly
US20160162243A1 (en) * 2014-12-04 2016-06-09 Henge Docks Llc Method for Logically Positioning Multiple Display Screens
US10157032B2 (en) * 2014-12-04 2018-12-18 Henge Docks Llc Method for logically positioning multiple display screens
US10203930B2 (en) * 2015-03-19 2019-02-12 Fujitsu Limited Display method and display control apparatus
US20180004475A1 (en) * 2015-03-19 2018-01-04 Fujitsu Limited Display method and display control apparatus
US20170186345A1 (en) * 2015-12-29 2017-06-29 Christie Digital Systems Usa, Inc. System for mounting a plurality of display units
US10529260B2 (en) * 2015-12-29 2020-01-07 Christie Digital Systems Usa, Inc. System for mounting a plurality of display units
US11269577B2 (en) * 2016-04-22 2022-03-08 Displaylink (Uk) Limited Distributed video pipe
US11526321B2 (en) 2016-04-22 2022-12-13 Displaylink (Uk) Limited Distributed video pipe
US10803505B2 (en) * 2016-06-22 2020-10-13 Fujifilm North America Corporation Computer-implemented methods, computer-readable medium, and computer-implemented system for automatic generation of image-based print product offering
US20170372346A1 (en) * 2016-06-22 2017-12-28 Fujifilm North America Corporation Automatic generation of image-based print product offering
US11354717B2 (en) 2016-06-22 2022-06-07 Fujifilm North America Corporation Methods, system, and computer-readable medium for automatic generation of image-based print product offering
US11792463B2 (en) * 2017-02-03 2023-10-17 Tv One Limited Method of video transmission and display
US20210314647A1 (en) * 2017-02-03 2021-10-07 Tv One Limited Method of video transmission and display
US11347466B2 (en) * 2017-08-14 2022-05-31 Imax Theatres International Limited Wireless content delivery for a tiled LED display
US11659751B2 (en) 2017-10-03 2023-05-23 Lockheed Martin Corporation Stacked transparent pixel structures for electronic displays
US10930709B2 (en) 2017-10-03 2021-02-23 Lockheed Martin Corporation Stacked transparent pixel structures for image sensors
US11146781B2 (en) 2018-02-07 2021-10-12 Lockheed Martin Corporation In-layer signal processing
US10951883B2 (en) * 2018-02-07 2021-03-16 Lockheed Martin Corporation Distributed multi-screen array for high density display
US20190246098A1 (en) * 2018-02-07 2019-08-08 Lockheed Martin Corporation Distributed Multi-Screen Array for High Density Display
US11616941B2 (en) 2018-02-07 2023-03-28 Lockheed Martin Corporation Direct camera-to-display system
US11321042B2 (en) 2018-03-28 2022-05-03 Eizo Corporation Display system and program
US11150856B2 (en) * 2018-07-02 2021-10-19 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling thereof
US20200004491A1 (en) * 2018-07-02 2020-01-02 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling thereof
US20200034614A1 (en) * 2018-07-30 2020-01-30 Ncr Corporation Item identification with low resolution image processing
US11138430B2 (en) * 2018-07-30 2021-10-05 Ncr Corporation Item identification with low resolution image processing
US20210397398A1 (en) * 2019-03-13 2021-12-23 Xi'an Novastar Tech Co., Ltd. Method, Device and System for Configuring Display Screen
US11494152B2 (en) * 2019-03-13 2022-11-08 Xi'an Novastar Tech Co., Ltd. Method, device and system for configuring display screen
WO2021251585A1 (en) * 2020-06-10 2021-12-16 Samsung Electronics Co., Ltd. Electronic device for recognizing each of plurality of display modules and method for recognizing multi-display

Also Published As

Publication number Publication date
US20100045594A1 (en) 2010-02-25
US20140098006A1 (en) 2014-04-10
US8797233B2 (en) 2014-08-05
US8410993B2 (en) 2013-04-02

Similar Documents

Publication Publication Date Title
US8410993B2 (en) Systems, methods, and devices for highly interactive large image display and manipulation on tiled displays
US10037184B2 (en) Systems, methods, and devices for manipulation of images on tiled displays
US10437850B1 (en) Server implemented geographic information system with graphical interface
US6912695B2 (en) Data storage and retrieval system and method
US9250700B2 (en) System and method for virtual displays
US7119811B2 (en) Image display system
US11243786B2 (en) Streaming application visuals using page-like splitting of individual windows
US10713997B2 (en) Controlling image display via mapping of pixel values to pixels
US20180173486A1 (en) Systems, methods, and devices for animation on tiled displays
US20160148359A1 (en) Fast Computation of a Laplacian Pyramid in a Parallel Computing Environment
Yamaoka et al. Visualization of high-resolution image collections on large tiled display walls
Bria et al. An open-source VAA3D plugin for real-time 3D visualization of terabyte-sized volumetric images
US7840908B2 (en) High resolution display of large electronically stored or communicated images with real time roaming
CN114969409A (en) Image display method and device and readable medium
Schmauder et al. Distributed visual analytics on large-scale high-resolution displays
Matsui et al. Virtual desktop display acceleration technology: RVEC
US20230360167A1 (en) Rendering pipeline for tiled images
Fraser et al. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery
Jiang et al. Interactive browsing of large images on multi-projector display wall system
EP4270319A2 (en) Accelerated image gradient based on one-dimensional data
Lou et al. Magic View: An optimized ultra-large scientific image viewer for SAGE tiled-display environment
Makhinya Performance challenges in distributed rendering systems
ES2805804T3 (en) Multimodal viewer
Pietzsch et al. Bigdataviewer: Interactive visualization and image processing for terabyte data sets
AU2002325000B2 (en) Image display system

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA,CALIFO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JENKS, STEPHEN F.;KIM, SUNG-JIN;REEL/FRAME:023873/0685

Effective date: 20100115

Owner name: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JENKS, STEPHEN F.;KIM, SUNG-JIN;REEL/FRAME:023873/0685

Effective date: 20100115

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8