WO2010099178A2 - System and method for displaying multiple images/videos on a single display - Google Patents

System and method for displaying multiple images/videos on a single display Download PDF

Info

Publication number
WO2010099178A2
WO2010099178A2 (PCT/US2010/025202)
Authority
WO
WIPO (PCT)
Prior art keywords
video
display
electrical communication
video source
source
Prior art date
Application number
PCT/US2010/025202
Other languages
French (fr)
Other versions
WO2010099178A3 (en)
Inventor
William Dunn
Gerald Fraschilla
Rick De Laet
Original Assignee
Manufacturing Resources International, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Manufacturing Resources International, Inc. filed Critical Manufacturing Resources International, Inc.
Priority to EP10746750A priority Critical patent/EP2401736A4/en
Priority to CN2010800182565A priority patent/CN102422339A/en
Priority to JP2011552124A priority patent/JP2012518815A/en
Priority to AU2010218074A priority patent/AU2010218074A1/en
Priority to CA2753422A priority patent/CA2753422A1/en
Priority to RU2011139150/08A priority patent/RU2011139150A/en
Publication of WO2010099178A2 publication Critical patent/WO2010099178A2/en
Publication of WO2010099178A3 publication Critical patent/WO2010099178A3/en
Priority to IL214817A priority patent/IL214817A0/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09FDISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F19/00Advertising or display means not otherwise provided for
    • G09F19/12Advertising or display means not otherwise provided for using special optical effects
    • G09F19/14Advertising or display means not otherwise provided for using special optical effects displaying different signs depending upon the view-point of the observer
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09FDISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • G09F9/30Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41415Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4622Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2352/00Parallel handling of streams of display data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/20Details of the management of multiple sources of image data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00Specific applications
    • G09G2380/06Remotely controlled electronic signs other than labels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2217/00Details of magnetostrictive, piezoelectric, or electrostrictive transducers covered by H04R15/00 or H04R17/00 but not provided for in any of their subgroups
    • H04R2217/03Parametric transducers where sound is generated or captured by the acoustic demodulation of amplitude modulated ultrasonic waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/15Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops

Definitions

  • a third type of audio data may be directed towards the center of the display where Source 2 may be visible. Matching different audio transmissions with the distinct video source may create an attention- capturing effect that will attract observers who may be passing by the display.
  • FIGURE 3 shows an exemplary embodiment where a display 40 is used to show three different images 47, 48, and 49 to three separate positions. This embodiment also contains three sound focusing devices 41, 42, and 43. A first image 47 is viewable at the first position along with a first message 44, a second image 48 is viewable at a second position along with a second message 45, and a third image 49 is viewable at a third position along with a third message 46.
  • the sound focusing devices in the preferred embodiments may include a parametric audio amplifier system. These systems employ an acoustic transducer for projecting an ultrasonic carrier signal modulated with a processed audio signal through the air for subsequent regeneration of the audio signal along a selected path of projection.
  • a conventional parametric audio amplifier system may include a modulator configured to modulate an ultrasonic carrier signal with a processed audio signal, a driver amplifier configured to amplify the modulated carrier signal, and at least one acoustic transducer configured to project a sonic beam corresponding to the modulated ultrasonic carrier signal through the air along a selected projection path.
  • the projected sonic beam is demodulated as it passes through the air to regenerate the audio signal along the selected projection path.
  • These systems are beneficial for focusing sound because the sound is transmitted at an ultrasonic frequency (i.e. above 20 kHz) and is therefore inaudible unless the listener is located near the desired position. Also, due to the high frequency of the carrier ultrasound wave, the direction of the wave and the desired position can be tightly controlled.
  • Exemplary models may include the Audio Spotlight® line of products from Holosonic. The sound focusing techniques may also include products from American Technology Corporation (ATC) of San Diego, CA; www.atcsd.com. Exemplary models from ATC may include the SoundSaber® and the HyperSonic Sound® systems.
  • The sound focusing techniques in the preferred embodiments may also include focused sound solutions from Dakota Audio of Bismarck, ND. Exemplary models may include the MA-4, FA-603, and FA-602.
  • The sound focusing techniques in the preferred embodiments may also include focused sound solutions from Brown Innovations, Inc. of Chicago, IL; www.browninnovations.com. Exemplary models may include the Maestro, FlushMount Maestro, MiniMaestro, and the SonicBeam™; these use an array of traditional, high-quality loudspeakers and may also utilize a sound dome.
  • the sound focusing elements may utilize the sound focusing techniques taught in U.S. Patent No. 7,204,342 to Lee entitled "Sound Focus Speaker of Gas-Filled Sound Lens Attachment Type.”
  • The parametric audio amplifier system may be preferred in some embodiments. Additionally, if two positions are relatively close to one another, or many positions are desired, the parametric audio amplifier system may be used for its ability to tightly control the sound projection.
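The modulation step described above can be sketched numerically. The following is an illustration, not the patent's implementation, of double-sideband amplitude modulation of an ultrasonic carrier; the 40 kHz carrier, 192 kHz sample rate, and 0.5 modulation depth are arbitrary example values chosen for the sketch:

```python
import math

def modulate_carrier(audio, carrier_hz=40_000.0, sample_rate=192_000.0, depth=0.5):
    """Amplitude-modulate an ultrasonic carrier with an audio signal,
    the basic operation a parametric loudspeaker driver performs.
    `audio` is a sequence of samples in [-1.0, 1.0]; the returned list
    is the modulated carrier, sample by sample.
    """
    return [
        (1.0 + depth * sample) * math.sin(2.0 * math.pi * carrier_hz * n / sample_rate)
        for n, sample in enumerate(audio)
    ]

# Silence leaves the bare carrier; positive audio samples raise the
# envelope and negative samples lower it, so the audio rides on the
# inaudible ultrasonic wave until the air demodulates it.
samples = modulate_carrier([0.0, 0.0, 1.0, -1.0])
```

As the air nonlinearly demodulates the beam, the envelope (and hence the audio) is regenerated only along the narrow projection path, which is what makes per-position audio messages possible.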

Abstract

A system and method for displaying multiple images on a single display. A plurality of unique video sources may be provided where each video source provides video frames having a plurality of vertical lines. A portion of the vertical lines are captured for each video source and transmitted to the display. The partial video frames may then be re-assembled into a single video frame which can be shown on the display. The display contains a masking where only certain vertical lines are viewable at certain angles. The masking may allow two or three images to be simultaneously visible to the observer, depending on the angle of viewing the display. Wireless or wired transmission may be used. Some embodiments also use a sound focusing device which may be synced with the video source so that separate sound messages can also be observed depending on the angle of viewing the display.

Description

SYSTEM AND METHOD FOR DISPLAYING MULTIPLE IMAGES/VIDEOS ON A SINGLE DISPLAY
Inventors: William Dunn, Jerry Fraschilla, and Rick De Laet
Technical Field
[0001] This invention generally relates to electronic displays and systems which accept three independent video/image sources and display them on a single display.
Background of the Art
[0002] Traditionally, advanced electronic display systems have only been used for indoor entertainment applications. However, modern electronic displays are also being used for advertising and informational purposes. When used for advertising, catching the attention of the consumer can sometimes be a difficult task. Further, advertising space is limited, and there is a strong desire to include as much advertising information as possible within a given physical space.
Summary of the Exemplary Embodiments
[0003] Exemplary embodiments include a system and method for accepting three independent video/image sources and displaying them on a single display, where the viewing angle relative to the display determines which image or video may be seen by an observer. The three sources may be combined onto a single network cable and sent to one or more displays. The displays contain a special masking which permits only certain pixel rows or columns to be seen from different angles relative to the display. As an observer passes by the display, they will see three separate images. If one of the video sources were to fail, a default image or video may be displayed to prevent any portion of the display from going blank.
[0004] The foregoing and other features and advantages of exemplary embodiments will be apparent from the following more detailed description of the particular embodiments, as illustrated in the accompanying drawings.
Brief Description of the Drawings
[0005] A better understanding of an exemplary embodiment will be obtained from a reading of the following detailed description and the accompanying drawings wherein identical reference characters refer to identical parts and in which:
[0006] FIGURE 1 is a block diagram of components/processes for an exemplary system accepting 1080 video sources.
[0007] FIGURE 2 is a block diagram of components/processes for an exemplary system accepting video sources of varying formats.
[0008] FIGURE 3 shows an exemplary embodiment where a display is used to show three different images to three separate positions where three sound focusing devices also transmit separate audio streams to three positions.
Detailed Description
[0009] FIGURE 1 provides a block diagram of the components/steps which may be used in an exemplary system. Three independent video sources 10, 11, and 12 supply the video/image data to the system. Each source may contain relatively static images (e.g. still photos or logos) or may contain dynamic video (e.g. movie previews or commercials). Each video source 10, 11, and 12 is independent and may not require synchronization. In this embodiment, each video source is in the 1080 format, which is the desired format for the end display 22. However, as discussed below, embodiments can be designed which can accept any resolution video source.
[0010] A line capture process 13 may be used where every third vertical line (if using vertical masking on the display) of each video source is captured. Thus, in this embodiment lines 1, 4, 7, 10, ... for each video source are captured and the remaining lines are discarded. Any type of microprocessor or central processing unit (CPU) could be programmed to perform the line capture process. The captured images may be stored in a buffer 14 and then sent through an image compression device 15. Any form of encoding or compression may be used. However, in an exemplary embodiment JPEG 2000 compression (encoding) chips may be used. Some embodiments may transmit the video data without compression, depending on the variables of the transmission method (wired, wireless, bandwidth, data transfer rates, etc.). This information is then sent to a transmitter network processor 16 which may be used to multiplex the three compressed signals onto a single network cable 17. In an exemplary embodiment, the transmitter network processor 16 may combine the three compressed signals onto a CAT5 or CAT6 network cable. The compressed signals may also be transmitted wirelessly.
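As a rough sketch (not taken from the patent), the line capture step can be modeled in Python by treating a frame as a list of lines and keeping every third one; the function name and the toy 9-line frame are illustrative only:

```python
def capture_every_third(frame, offset=0):
    """Model of the line capture process 13: with vertical masking,
    only lines offset, offset+3, offset+6, ... of a source frame are
    kept; the remaining lines are discarded before buffering and
    compression.
    """
    return frame[offset::3]

# A toy 9-line frame standing in for a 1080-line source.
source_frame = [f"line{i}" for i in range(1, 10)]
captured = capture_every_third(source_frame)
print(captured)  # ['line1', 'line4', 'line7']
```

The `offset` parameter lets the same routine capture a different phase of lines per source, which is what allows three sources to later share one frame buffer without colliding.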
[0011] A receiver network processor 18 may be used to receive the compressed images from the transmitter network processor 16. The receiver network processor 18 may demux the combination into the original three separate signals and then send each signal to a decompression (decoding) device 19. Again, an exemplary embodiment would use JPEG 2000 decompression (decoding) chips, but these are not required and compression/decompression may not be necessary. Each decompressed signal is then written to its proper location within a frame buffer 20. The following may be used for the proper locations within an exemplary 1080 system embodiment:
[0012] Source 1 - Lines 1, 4, 7, 10, ..., 1075, 1078.
[0013] Source 2 - Lines 2, 5, 8, 11, ..., 1076, 1079.
[0014] Source 3 - Lines 3, 6, 9, 12, ..., 1077, 1080.
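The interleaving into frame buffer 20 can be sketched as follows; this is an illustrative model (list-of-lines frames, hypothetical function name), not the patent's hardware implementation:

```python
def interleave_sources(parts):
    """Write each source's captured lines to its proper location in a
    shared frame buffer: source 1 fills lines 1, 4, 7, ...; source 2
    fills lines 2, 5, 8, ...; source 3 fills lines 3, 6, 9, ...
    (1-based in the patent's numbering, 0-based here).
    """
    total = sum(len(p) for p in parts)
    frame_buffer = [None] * total
    for s, lines in enumerate(parts):
        frame_buffer[s::len(parts)] = lines
    return frame_buffer

# Three captured partial frames of two lines each.
combined = interleave_sources([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]])
print(combined)  # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
```

Because each source owns a disjoint set of line positions, the display's masking can then reveal exactly one source's lines at each viewing angle.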
[0015] This information may then be sent to the display 22 where Source 1 (10) may be visible from a first side angle to the display 22, Source 2 (11) may be visible at the center of the display, and Source 3 (12) may be visible from a second angle to the display. In some embodiments, a single buffer may be used. In other embodiments, a double buffer system may be used prior to sending the information to the display 22. Using two buffers provides an advantage at least because the frames may be written at a first frame rate and then read at a second frame rate. For example, the frames may be written at 30 fps and then read at 60 fps. Thus, the buffers can act as a frame rate converter going from 30 Hz to 60 Hz, if necessary.
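The double-buffer rate conversion amounts to repeating each written frame on the faster read side. A minimal sketch, assuming the read rate is an integer multiple of the write rate (as in the 30 fps to 60 fps example):

```python
def frame_rate_convert(frames, write_fps=30, read_fps=60):
    """Model the double-buffer frame rate conversion: frames written
    at `write_fps` are read out at `read_fps`, so each written frame
    appears read_fps // write_fps times in the output stream.
    """
    repeat = read_fps // write_fps
    output = []
    for frame in frames:
        output.extend([frame] * repeat)
    return output

print(frame_rate_convert(["F0", "F1"]))  # ['F0', 'F0', 'F1', 'F1']
```

With real double buffering, the reader simply scans out whichever buffer was most recently completed, which produces the same repetition without copying frames.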
[0016] As mentioned above, each of the video sources is independent and does not require synchronization prior to being transmitted. However, the receiver network processor 18 prepares the decompression devices 19, and these devices may be synchronized. Thus, once the decompression devices 19 begin operating at their given frame rate (e.g. 30 fps) they must receive data during each frame period. If a frame is not received in time, then the previous frame for that particular video source may be repeated. If a new frame still does not arrive at the decompression device 19, then a standard logo may be displayed for that video source while the other video sources continue to operate in standard fashion.
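This per-source fallback policy can be expressed as a small decision function; the function name, the miss counter, and the single-repeat-then-logo threshold are assumptions made for the sketch:

```python
def select_frame(incoming, previous, logo, missed):
    """Per-source fallback: use the incoming frame when it arrives in
    time; repeat the previous frame on the first miss; fall back to a
    standard logo on further misses so no portion of the display goes
    blank. Returns the frame to show and the updated miss count.
    """
    if incoming is not None:
        return incoming, 0
    if missed == 0 and previous is not None:
        return previous, missed + 1
    return logo, missed + 1

frame, missed = select_frame(None, "prev", "LOGO", 0)
print(frame)   # prev  (first miss: repeat the previous frame)
frame, missed = select_frame(None, "prev", "LOGO", missed)
print(frame)   # LOGO  (still no new frame: show the standard logo)
```

Each of the three sources would run this logic independently, so one failed feed degrades to a logo while the other two continue normally.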
[0017] FIGURE 2 shows a second embodiment where the video sources 30, 31, and 32 are not all in the native resolution of the end display 22. Here, the end display 22 is still 1080 but video source 3 (32) is 1366 x 768. In this embodiment, scaling chips 33 may be used to modify the incoming resolution of the video source (if needed) and convert the signals into the desired resolution of 1920 x 360. Even if each of the video sources were in the desired end display resolution, scaling chips may still be used to convert each 1920 x 1080 incoming signal into the desired 1920 x 360.
[0018] In this embodiment, the scaling chips 33 are used instead of the line capture method which was shown in the Figure 1 embodiment. Using scaling chips 33 may be advantageous as it may reduce or substantially eliminate any image artifacts (as compared to the line capture method where the unused lines are discarded). The remaining components for this embodiment are similar to those discussed above for Figure 1 and will not be discussed in detail.
[0019] Some embodiments may use scaling chips only to convert the incoming image resolution and may still use the line capture method.
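The line capture method itself (capturing one out of every three vertical lines per source and reassembling them at the display, as in claim 12) can be sketched as follows. This is an illustrative Python model on toy dimensions; the function names and the fixed stride of 3 are assumptions drawn from the three-source embodiment.

```python
# Sketch of the line-capture method: each source keeps one out of every
# three vertical lines (pixel columns), and the frame buffer interleaves
# the three partial frames back into a single composite frame.

def capture_lines(frame, phase, stride=3):
    """Keep only the columns whose index % stride == phase."""
    return [[row[x] for x in range(len(row)) if x % stride == phase]
            for row in frame]

def assemble(partials):
    """Interleave three partial frames column-by-column into one frame."""
    height = len(partials[0])
    width = sum(len(p[0]) for p in partials)
    return [[partials[x % 3][y][x // 3] for x in range(width)]
            for y in range(height)]

# Three 1-row, 6-column toy sources (in practice 1920 x 1080 each).
sources = [[["A"] * 6], [["B"] * 6], [["C"] * 6]]
partials = [capture_lines(f, i) for i, f in enumerate(sources)]
frame = assemble(partials)
# frame == [["A", "B", "C", "A", "B", "C"]]
```

The display's vertical line masking then makes each interleaved set of columns visible only from its corresponding angle.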
[0020] The desired resolution for exemplary video sources may be 1080p, as this is typically the desired resolution for modern displays. However, this is not required, and systems can easily be adapted for 1080i, 720p, 720i, or any other resolution, higher or lower than the ones listed here. All that is required is that the image sources are properly scaled when accepted by the system.

[0021] The embodiments described herein may work with commercially available LCD displays from LG Electronics, Englewood Cliffs, NJ (http://us.lge.com); an example would be model number M4714V. These types of displays are also available from Manufacturing Resources International of Alpharetta, GA as the Triple View™ displays (http://www.outdoor-displays.com/). These or similar displays contain the proper vertical line masking so that only certain pixel sets are viewable at discrete angles relative to the display. As described above with reference to vertical masking, the image shown to an observer varies as the observer travels horizontally in front of the display (such as when an observer walks on a horizontal surface past a mounted display). Other embodiments could use horizontal masking so that the image shown to the observer would change as the observer travels vertically in front of the display (such as when an observer goes down a flight of stairs or an escalator).

[0022] As discussed above, audio data may also be included with each of the video sources. With any of the embodiments discussed herein, the sound focusing techniques from co-pending U.S. Application No. 12/248,255, filed on October 9, 2008, may also be used. Thus, a first type of audio data may be directed toward the proper angle for viewing Source 1. A second type of audio data may be directed toward the proper angle for viewing Source 3. A third type of audio data may be directed toward the center of the display, where Source 2 may be visible.
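The source-to-angle audio routing of [0022] amounts to a simple lookup keyed on the observer's viewing angle. The sketch below is illustrative only: the angle values, key names, and nearest-angle selection rule are assumptions, not part of the disclosure.

```python
# Sketch of routing each video source's audio toward the angle from
# which that source is visible (angles are illustrative placeholders).

ROUTING = {
    "source_1": {"view_angle_deg": -30, "audio": "audio_1"},  # first side
    "source_2": {"view_angle_deg": 0,   "audio": "audio_2"},  # center
    "source_3": {"view_angle_deg": 30,  "audio": "audio_3"},  # second side
}

def audio_for_angle(angle_deg):
    """Pick the audio stream whose viewing angle is closest to the observer."""
    best = min(ROUTING.values(),
               key=lambda r: abs(r["view_angle_deg"] - angle_deg))
    return best["audio"]

# An observer at -25 degrees hears Source 1's audio; near center, Source 2's.
```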
Matching a distinct audio transmission with each video source may create an attention-capturing effect that attracts observers passing by the display.

[0023] FIGURE 3 shows an exemplary embodiment where a display 40 is used to show three different images 47, 48, and 49 at three separate positions. This embodiment also contains three sound focusing devices 41, 42, and 43. Thus, as the consumer 9 passes the display 40, there may be a first image 47 viewable at a first position along with a first message 44. As the consumer 9 continues to travel, there may be a second image 48 viewable at a second position along with a second message 45. Finally, the consumer 9 may see a third image 49 at a third position along with a third message 46. By simply changing the masking on the display and using only two video sources, this embodiment could also show two images and transmit two messages rather than three separate images and three separate messages.
[0024] The sound focusing devices in the preferred embodiments may include a parametric audio amplifier system. These systems employ an acoustic transducer to project an ultrasonic carrier signal, modulated with a processed audio signal, through the air for subsequent regeneration of the audio signal along a selected projection path. A conventional parametric audio amplifier system may include a modulator configured to modulate an ultrasonic carrier signal with a processed audio signal, a driver amplifier configured to amplify the modulated carrier signal, and at least one acoustic transducer configured to project a sonic beam corresponding to the modulated ultrasonic carrier signal through the air along a selected projection path. Because of the nonlinear propagation characteristics of air, the projected sonic beam is demodulated as it passes through the air, regenerating the audio signal along the selected projection path. These systems are beneficial for focusing sound because the sound is transmitted at an ultrasonic frequency (i.e., above 20 kHz), so it is inaudible unless the listener is located near the desired position. Also, due to the high frequency of the ultrasonic carrier wave, the direction of the wave and the desired position can be tightly controlled.
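The modulation step of [0024] can be sketched as conventional amplitude modulation of an ultrasonic carrier. This is a toy numeric model, not a transducer driver; the 40 kHz carrier, sample rate, and modulation depth are illustrative assumptions (real parametric systems also pre-process the audio to compensate for the air's demodulation characteristics).

```python
# Toy sketch of the parametric principle: the audio signal amplitude-
# modulates an ultrasonic carrier, and the air's nonlinearity demodulates
# it along the projected beam.

import math

AUDIO_HZ = 1_000        # audible test tone
CARRIER_HZ = 40_000     # ultrasonic carrier, well above 20 kHz
SAMPLE_RATE = 192_000   # high enough to represent the carrier

def modulated_sample(n, depth=0.8):
    t = n / SAMPLE_RATE
    audio = math.sin(2 * math.pi * AUDIO_HZ * t)
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    # Standard AM: carrier scaled by (1 + depth * audio).
    return (1 + depth * audio) * carrier

signal = [modulated_sample(n) for n in range(SAMPLE_RATE // 100)]
# The envelope never exceeds 1 + depth, so the transducer drive is bounded.
```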
[0025] Exemplary parametric audio amplifier systems are commercially available from Holosonic Research Labs, Inc., of Watertown, Massachusetts (www.holosonics.com). Exemplary models may include the Audio Spotlight® line of products from Holosonic Research Labs. Further exemplary systems are available from American Technology Corporation (ATC), of San Diego, CA (www.atcsd.com). Exemplary models from ATC may include the SoundSaber® and HyperSonic Sound® systems.
[0026] Alternatively, the sound focusing techniques in the preferred embodiments may include focused sound solutions from Dakota Audio, of Bismarck, ND. Exemplary models may include the MA-4, FA-603, FA-602, and FA-501. These models use an array of traditional, high-quality loudspeakers where the signals to the speakers may be delayed so that the sound waves propagate and develop at a specific position.
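The delayed-array idea behind these loudspeaker systems is classic delay-and-sum focusing: each speaker's signal is delayed so that all wavefronts arrive at the focus point simultaneously. The sketch below is a hedged illustration; the geometry, function name, and speaker spacing are assumptions, not vendor specifications.

```python
# Sketch of delay-and-sum focusing for a line array of loudspeakers.

import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def focus_delays(speakers, focus):
    """Per-speaker delay (seconds) so all arrivals coincide at `focus`."""
    dists = [math.dist(s, focus) for s in speakers]
    far = max(dists)
    # The farthest speaker fires first (zero delay); nearer ones wait.
    return [(far - d) / SPEED_OF_SOUND for d in dists]

# Four speakers 0.3 m apart, focusing 2 m in front of the second one.
speakers = [(x, 0.0) for x in (0.0, 0.3, 0.6, 0.9)]
delays = focus_delays(speakers, (0.3, 2.0))
# The speaker nearest the focus (index 1) waits the longest.
```

The same computation generalizes to any focus position, which is how such arrays "develop" sound at a chosen spot in front of the display.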
[0027] Also, the sound focusing techniques in the preferred embodiments may include focused sound solutions from Brown Innovations, Inc. of Chicago, IL (www.browninnovations.com). Exemplary models may include the Maestro, FlushMount Maestro, MiniMaestro, and SonicBeam™. These models also use an array of traditional, high-quality loudspeakers and may also utilize a sound dome.
[0028] Still further, the sound focusing elements may utilize the sound focusing techniques taught in U.S. Patent No. 7,204,342 to Lee, entitled "Sound Focus Speaker of Gas-Filled Sound Lens Attachment Type."

[0029] Note that for projecting sound across large distances, the parametric audio amplifier system may be preferred. Additionally, if two positions are relatively close to one another, or if many positions are desired, the parametric audio amplifier system may be used for its ability to tightly control the sound projection.
[0030] Having shown and described a preferred embodiment of the invention, those skilled in the art will realize that many variations and modifications may be made to effect the described invention and still be within the scope of the claimed invention. Additionally, many of the elements indicated above may be altered or replaced by different elements which will provide the same result and fall within the spirit of the claimed invention. It is the intention, therefore, to limit the invention only as indicated by the scope of the claims.

Claims

1. A system for displaying a plurality of video sources on a single display, the system comprising:
a first, second, and third video system, each video system having:
a video source supplying video data;
a line capture processor in electrical communication with the video source; and
a buffer in electrical communication with the line capture processor;
a transmitter in electrical communication with the first, second, and third video systems;
a receiver in electrical communication with the transmitter;
a frame buffer in electrical communication with the receiver; and
a display in electrical communication with the frame buffer.
2. The system of claim 1 wherein: the line capture processor is adapted to capture one out of every three vertical lines of video data.
3. The system of claim 1 further comprising: a compression device in electrical communication with the buffer of each video system; and a decompression device in electrical communication between the receiver and frame buffer.
4. The system of any one of claims 1-3 further comprising: a scaling chip in electrical communication with the video source of each video system.
5. The system of claim 1 further comprising: a sound focusing device at the display.
6. The system of claim 5 wherein: the sound focusing device is a parametric audio amplifier system.
7. The system of any one of claims 1-3 further comprising: a first, second, and third sound focusing device at the display where the first sound focusing device is synced with the first video source, the second sound focusing device is synced with the second video source, and the third sound focusing device is synced with the third video source.
8. A system for displaying a plurality of video sources on a single display, the system comprising:
a first and second video system, each video system having:
a video source supplying video data which contains a plurality of vertical lines;
a scaling chip in electrical communication with the video source which captures every other vertical line of video data; and
a buffer in electrical communication with the scaling chip;
a transmitter in electrical communication with the first and second video systems;
a receiver in electrical communication with the transmitter;
a frame buffer in electrical communication with the receiver; and
a display in electrical communication with the frame buffer which displays the captured lines of video data from the first and second video systems simultaneously.
9. The system of claim 8 further comprising: a compression device in electrical communication with the buffer of each video system; and a decompression device in electrical communication between the receiver and frame buffer.
10. The system of claim 8 further comprising: a sound focusing device at the display.
11. The system of claim 10 wherein: the sound focusing device is a parametric audio amplifier system.
12. A method for displaying multiple images on a single display, the method comprising the steps of:
providing three video sources which produce video frames having a plurality of vertical lines;
capturing one out of every three vertical lines of the video frames from each video source to produce partial video frames for each video source;
transmitting the partial video frames to a display having a vertical line masking;
assembling the partial video frames for each video source into a single video frame; and
displaying the single video frame on the display such that the partial video frame of the first video source is viewable from a first angle to the display, the partial video frame of the second video source is viewable from a second angle to the display, and the partial video frame of the third video source is viewable from a third angle to the display.
13. The method of claim 12 further comprising the step of: compressing the partial video frames prior to transmitting them.
14. The method of claim 12 further comprising the step of: scaling the video frames prior to capturing one out of every three vertical lines.
15. The method of claim 12 further comprising the step of: storing the single video frame in a frame buffer prior to displaying the single video frame.
16. The method of any one of claims 12-15 further comprising the steps of: transmitting a first, second, and third audio packet to the display in addition to the partial video frames, the first packet being synced with the first video source, the second packet being synced with the second video source, and the third packet being synced with the third video source; broadcasting the first audio packet towards a first angle relative to the display; broadcasting the second audio packet towards a second angle relative to the display; and broadcasting the third audio packet towards a third angle relative to the display.
PCT/US2010/025202 2009-02-24 2010-02-24 System and method for displaying multiple images/videos on a single display WO2010099178A2 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
EP10746750A EP2401736A4 (en) 2009-02-24 2010-02-24 System and method for displaying multiple images/videos on a single display
CN2010800182565A CN102422339A (en) 2009-02-24 2010-02-24 System and method for displaying multiple images/videos on a single display
JP2011552124A JP2012518815A (en) 2009-02-24 2010-02-24 System and method for displaying multiple images / videos on a single display
AU2010218074A AU2010218074A1 (en) 2009-02-24 2010-02-24 System and method for displaying multiple images/videos on a single display
CA2753422A CA2753422A1 (en) 2009-02-24 2010-02-24 System and method for displaying multiple images/videos on a single display
RU2011139150/08A RU2011139150A (en) 2009-02-24 2010-02-24 SYSTEM AND METHOD FOR DISPLAYING NUMEROUS IMAGES / VIDEO IMAGES ON ONE DISPLAY DEVICE
IL214817A IL214817A0 (en) 2009-02-24 2011-08-24 System and method for displaying multiple images/videos on a single display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15510009P 2009-02-24 2009-02-24
US61/155,100 2009-02-24

Publications (2)

Publication Number Publication Date
WO2010099178A2 true WO2010099178A2 (en) 2010-09-02
WO2010099178A3 WO2010099178A3 (en) 2010-11-18

Family

ID=42666188

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/025202 WO2010099178A2 (en) 2009-02-24 2010-02-24 System and method for displaying multiple images/videos on a single display

Country Status (9)

Country Link
EP (1) EP2401736A4 (en)
JP (1) JP2012518815A (en)
KR (1) KR20110118177A (en)
CN (1) CN102422339A (en)
AU (1) AU2010218074A1 (en)
CA (1) CA2753422A1 (en)
IL (1) IL214817A0 (en)
RU (1) RU2011139150A (en)
WO (1) WO2010099178A2 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7204342B2 (en) 2002-04-25 2007-04-17 Postech Foundation Sound focus speaker of gas-filled sound lens attachment type

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2840012B2 (en) * 1993-09-01 1998-12-24 シャープ株式会社 3D display device
TW582015B (en) * 2000-06-30 2004-04-01 Nichia Corp Display unit communication system, communication method, display unit, communication circuit and terminal adapter
US7589737B2 (en) * 2001-10-31 2009-09-15 Hewlett-Packard Development Company, L.P. System and method for communicating graphics image data over a communication network
US20030117428A1 (en) * 2001-12-20 2003-06-26 Koninklijke Philips Electronics N.V. Visual summary of audio-visual program features
JP3988601B2 (en) * 2002-09-27 2007-10-10 株式会社セガ Game image generation method using image display device for display corresponding to viewing angle, game program for controlling execution of game, and game device for executing the same
EP1632104A2 (en) * 2003-06-09 2006-03-08 American Technology Corporation System and method for delivering audio-visual content along a customer waiting line
JP2006154759A (en) * 2004-10-29 2006-06-15 Fujitsu Ten Ltd Image interpolation device and display device
WO2006049217A1 (en) * 2004-11-02 2006-05-11 Fujitsu Ten Limited Display control device and display device
JP2006184859A (en) * 2004-11-30 2006-07-13 Fujitsu Ten Ltd Display controller and display device
KR101195929B1 (en) * 2005-05-20 2012-10-30 삼성전자주식회사 Multi-channel imaging system
JP5508662B2 (en) * 2007-01-12 2014-06-04 株式会社半導体エネルギー研究所 Display device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2401736A4

Cited By (12)

Publication number Priority date Publication date Assignee Title
WO2015038253A1 (en) * 2013-09-11 2015-03-19 Apple Inc. A display port link between a processor and a display device
US9684942B2 (en) 2013-09-11 2017-06-20 Apple Inc. Link aggregator for an electronic display
US10699363B2 (en) 2013-09-11 2020-06-30 Apple Inc. Link aggregator for an electronic display
US10319408B2 (en) 2015-03-30 2019-06-11 Manufacturing Resources International, Inc. Monolithic display with separately controllable sections
US10922736B2 (en) 2015-05-15 2021-02-16 Manufacturing Resources International, Inc. Smart electronic display for restaurants
US10269156B2 (en) 2015-06-05 2019-04-23 Manufacturing Resources International, Inc. System and method for blending order confirmation over menu board background
US10467610B2 (en) 2015-06-05 2019-11-05 Manufacturing Resources International, Inc. System and method for a redundant multi-panel electronic display
US10319271B2 (en) 2016-03-22 2019-06-11 Manufacturing Resources International, Inc. Cyclic redundancy check for electronic displays
US10313037B2 (en) 2016-05-31 2019-06-04 Manufacturing Resources International, Inc. Electronic display remote image verification system and method
US10756836B2 (en) 2016-05-31 2020-08-25 Manufacturing Resources International, Inc. Electronic display remote image verification system and method
US10510304B2 (en) 2016-08-10 2019-12-17 Manufacturing Resources International, Inc. Dynamic dimming LED backlight for LCD array
US11895362B2 (en) 2021-10-29 2024-02-06 Manufacturing Resources International, Inc. Proof of play for images displayed at electronic displays

Also Published As

Publication number Publication date
KR20110118177A (en) 2011-10-28
CA2753422A1 (en) 2010-09-02
AU2010218074A1 (en) 2011-09-22
RU2011139150A (en) 2013-04-10
EP2401736A4 (en) 2012-11-28
EP2401736A2 (en) 2012-01-04
CN102422339A (en) 2012-04-18
IL214817A0 (en) 2011-11-30
JP2012518815A (en) 2012-08-16
WO2010099178A3 (en) 2010-11-18

Similar Documents

Publication Publication Date Title
US8400570B2 (en) System and method for displaying multiple images/videos on a single display
WO2010099178A2 (en) System and method for displaying multiple images/videos on a single display
TWI595786B (en) Timestamp-based audio and video processing method and system thereof
US10531127B2 (en) Processes systems and methods for improving virtual and augmented reality applications
EP2202970A1 (en) A method and a system of video communication and a device for video communication
US20110229106A1 (en) System for playback of ultra high resolution video using multiple displays
TW200607362A (en) 3D television broadcasting system and method for broadcasting three-dimensional image
US9420324B2 (en) Content isolation and processing for inline video playback
US11490137B2 (en) Method and system for transmitting alternative image content of a physical display to different viewers
WO2004077809A3 (en) Method and system for reduced bandwidth
CN101795418A (en) Method and system for realizing wireless communication
WO2013048618A1 (en) Systems and methods for synchronizing the presentation of a combined video program
CN103856809A (en) Method, system and terminal equipment for multipoint at the same screen
CN101047872B (en) Stereo audio vedio device for TV
US20110157302A1 (en) Three-dimensional video display system with multi-stream sending/receiving operation
EP2312859A2 (en) Method and system for communicating 3D video via a wireless communication link
JP5265468B2 (en) Video receiving device and display device
JP2017016041A (en) Still image transmission-reception synchronous reproducing apparatus
JP2013017106A (en) Video audio demodulation circuit, television receiver, television reception tuner, and video reproduction device
CN111385590A (en) Live broadcast data processing method and device and terminal
US7554605B2 (en) Method for progressive and interlace TV signal simultaneous output
JP2013062839A (en) Video transmission system, video input device, and video output device
CN102340680A (en) Image playing system, correlated apparatus thereof and methods thereof
CN103139510A (en) Television (TV) picture split-screen expansion system
KR20110078845A (en) Tablet pc based 3d tv reception apparatus

Legal Events

Date Code Title Description
WWE WIPO information: entry into national phase; Ref document number: 201080018256.5; Country of ref document: CN
121 EP: the EPO has been informed by WIPO that EP was designated in this application; Ref document number: 10746750; Country of ref document: EP; Kind code of ref document: A2
WWE WIPO information: entry into national phase; Ref document number: 2753422; Country of ref document: CA
NENP Non-entry into the national phase; Ref country code: DE
WWE WIPO information: entry into national phase; Ref document number: 2011552124; Country of ref document: JP; Ref document number: 214817; Country of ref document: IL
WWE WIPO information: entry into national phase; Ref document number: 2010218074; Country of ref document: AU
WWE WIPO information: entry into national phase; Ref document number: 2010746750; Country of ref document: EP
ENP Entry into the national phase; Ref document number: 2010218074; Country of ref document: AU; Date of ref document: 20100224; Kind code of ref document: A; Ref document number: 20117022219; Country of ref document: KR; Kind code of ref document: A
WWE WIPO information: entry into national phase; Ref document number: 2011139150; Country of ref document: RU