GB2507482A - Progressive Transmission of Still Images to a Video Display System - Google Patents

Progressive Transmission of Still Images to a Video Display System

Info

Publication number
GB2507482A
GB2507482A (application GB201219219A; related publications GB201219219D0, GB2507482B)
Authority
GB
United Kingdom
Prior art keywords
still image
video
base layer
received
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB201219219A
Other versions
GB201219219D0 (en)
GB2507482B (en)
Inventor
Falk Tannhauser
Hirohiko Inohiza
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB201219219A priority Critical patent/GB2507482B/en
Publication of GB201219219D0 publication Critical patent/GB201219219D0/en
Publication of GB2507482A publication Critical patent/GB2507482A/en
Application granted granted Critical
Publication of GB2507482B publication Critical patent/GB2507482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/36 Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00095 Systems or arrangements for the transmission of the picture signal
    • H04N1/00098 Systems or arrangements for the transmission of the picture signal via a television channel, e.g. for a series of still pictures with or without sound
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41 Bandwidth or redundancy reduction
    • H04N1/411 Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
    • H04N1/413 Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information
    • H04N1/417 Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information using predictive or differential encoding
    • H04N1/4172 Progressive encoding, i.e. by decomposition into high and low resolution components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/164 Feedback from the receiver or from the transmission channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Abstract

A still image (120) to be displayed on a screen of a video display device (110) is progressively transmitted via a video network to the display operating at a predetermined frame rate. The still images to be transmitted are encoded into a base layer and at least one enhancement layer, the base layer allowing display of still images in low resolution. The base layer and the at least one enhancement layer are transmitted in at least two different video frames according to the predetermined frame rate, the encoding preferably being based on the available bandwidth of the network. A video image may also be transmitted within each of the video frames. The still image may be superimposed on the background video display, and may comprise a picture in picture image. Alternatively, the still image may be displayed alone. The base layer and the at least one enhancement layer may comprise chroma sub-sampling information.

Description

METHOD AND DEVICE FOR PROGRESSIVE TRANSMISSION OF STILL
IMAGES TO A VIDEO DISPLAY SYSTEM
FIELD OF THE INVENTION
The present invention relates generally to video display and projection systems and more specifically to a method and a device for progressive transmission of still images to a video display system comprising a video display apparatus or a group of aggregated video display apparatuses.
BACKGROUND OF THE INVENTION
Videos are generally displayed or projected as sequences of images on a video screen by a video display system composed of a single video apparatus, e.g. a single video projector, or multiple video apparatus, e.g. multiple video projectors, generating adjacent, partially overlapping sub-images. The use of several video projectors improves brightness and resolution.
Displayed images can be of a standard definition or of a high definition (HD), offering high image quality.
While each frame of HD video is typically 1,920 pixels wide and 1,080 pixels high (1920 x 1080 pixels, also known as 1080p, the letter "p" standing for progressive scan) or, less often, 3,840 pixels wide and 2,160 pixels high (3840 x 2160 pixels, also known as 4k2k), the resolution of video screens is higher to improve visual comfort of users. To that end, several video display apparatus may be combined to improve the overall resolution without increasing costs excessively. For example, 4 video projectors of the SXGA+ resolution (providing 1,400 x 1,050 pixels, SXGA+ standing for Super Extended Graphics Array Plus), arranged in a 2 x 2 array, reach a display resolution of 2,520 x 1,890 pixels (with 20% blending). Similarly, 9 video projectors of the SXGA+ resolution arranged in a 3 x 3 array reach a display resolution of 3,640 x 2,730 pixels and 16 video projectors of the SXGA+ resolution arranged in a 4 x 4 array reach a display resolution of 4,760 x 3,570 pixels.
The video projectors are connected to a video source through a wired or wireless video network offering a limited bandwidth allowing transmission of HD video streams. The latter can be, for example, an uncompressed 1,920 x 1,080 video stream with 12 bits per pixel (corresponding to chroma sub-sampling of the 4:2:0 type) at 60 frames per second. In such a case, the required network bandwidth is 1.5 Gbits per second (excluding a possible overhead for transmission protocol (frame headers) and error detection and correction bits).
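The 1.5 Gbit/s figure above follows directly from the stream parameters. A minimal sketch reproducing the arithmetic (the function name is illustrative, not part of the patent):

```python
def required_bandwidth_bps(width, height, bits_per_pixel, fps):
    """Raw video bandwidth in bits per second, excluding protocol overhead."""
    return width * height * bits_per_pixel * fps

# 4:2:0 chroma sub-sampling averages 12 bits per pixel.
bw = required_bandwidth_bps(1920, 1080, 12, 60)
print(bw)  # 1492992000, i.e. about 1.5 Gbit/s
```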
When the display system resolution is higher than that of the received video frames, the latter are up-scaled in the video projectors (each video projector up-scaling the part of the frame it has the task of projecting) so as to benefit from the larger display surface while respecting the initial aspect ratio. For the sake of illustration, when considering an image of the 1,920 x 1,080 pixel resolution having a 16:9 aspect ratio that is to be projected by a video system comprising four video projectors of the SXGA+ resolution, arranged in a 2 x 2 array to reach a display resolution of 2,520 x 1,890 pixels, the up-scaling factor is 21/16 which results in a display surface of 2,520 x 1,418 pixels.
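The 21/16 factor in the example above is the largest scale that still fits the 16:9 image into the 2,520 x 1,890 display. A hedged sketch, with an illustrative helper name, reproducing those numbers:

```python
from fractions import Fraction

def upscale(src_w, src_h, dst_w, dst_h):
    """Largest aspect-ratio-preserving scale fitting the source in the display."""
    factor = min(Fraction(dst_w, src_w), Fraction(dst_h, src_h))
    return factor, round(src_w * factor), round(src_h * factor)

factor, w, h = upscale(1920, 1080, 2520, 1890)
print(factor, w, h)  # 21/16 2520 1418
```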
Such video systems can be used to display still images either alone or in combination with video images (in different windows or as picture-in-picture, generically referred to below as picture-in-picture). However, bearing in mind that still images produced by most available digital cameras offer resolutions that are greater than that of videos, typically ranging from 10 to 36 Mpixels, displayed still images must be down-scaled before being transmitted, which means that displaying still images does not benefit from the resolution offered by the video systems.
Alternatively, still images can be transmitted according to a lower frame rate than that of video frames, which results in perceptible display lag. For example, the size of an uncompressed still image of 5,760 x 3,840 pixel resolution with chroma sub-sampling of the 4:4:4 type (24 bits/pixel) is about 531 Mbits. Accordingly, its transmission time is about 0.354 second at 1.5 Gbit/s, which represents more than 21 video frames at 60 frames per second. Therefore, there is a perceptible delay when displaying such a still image on user request (e.g. through a remote control) which may disrupt the user experience. Progressive top-to-bottom image build-up could also be visually perturbing to a user.
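The delay estimate in the preceding paragraph (531 Mbits, 0.354 second, more than 21 frame periods) can be verified with a short sketch; the function name is illustrative:

```python
def frames_to_transmit(width, height, bits_per_pixel, link_bps, fps):
    """Bit size, raw transfer time, and frame periods needed for one image."""
    size_bits = width * height * bits_per_pixel  # uncompressed image size
    seconds = size_bits / link_bps               # raw transfer time on the link
    return size_bits, seconds, seconds * fps     # periods at the video rate

bits, secs, frames = frames_to_transmit(5760, 3840, 24, 1.5e9, 60)
print(bits, round(secs, 3), round(frames, 1))  # 530841600 0.354 21.2
```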
Accordingly, there is a need to provide the capability of displaying still images with a resolution close to that of the display system (which is higher than that for which the communication network has been designed) without introducing perceptible delay when displaying the images. In other words, there is a need to minimize the user perceptible delay when displaying a high-resolution still image, while fully utilizing the resolution offered by the display system and letting the user enjoy the highest possible level of details present in the original still image.
Furthermore, it is desirable to maintain backward compatibility at the network level with legacy video projectors in a heterogeneous cluster of video projectors comprising some standard video projectors.
SUMMARY OF THE INVENTION
Faced with these constraints, the inventors provide a method and a device for progressive transmission of still images to a video display system comprising a video display apparatus or a group of aggregated video display apparatuses.
It is a broad object of the invention to remedy the shortcomings of the prior art as described above.
According to a first aspect of the invention there is provided a method for progressively transmitting, via a video network, a still image to be displayed by a video display device operating at a predetermined frame rate, the method comprising: -encoding the still image into a base layer and at least one enhancement layer, the base layer allowing display of the still image in low resolution; -transmitting the base layer and the at least one enhancement layer in at least two different video frames according to the predetermined frame rate.
The claimed method allows transmission of a still image in such a way that the user-perceptible display build-up of the still image is very fast and without visually perturbing effects.
In an embodiment the method further comprises a step of determining an available bandwidth of the video network for transmitting the still image, the encoding of the still image into the base layer and the at least one enhancement layer being based on the determined available bandwidth, so as to optimize still image transmission.
In an embodiment the still image is encoded so that the bit size of each of the base layer and the at least one enhancement layer is smaller than or equal to the amount of data that can be transferred in a time period as determined by the predetermined frame rate.
In an embodiment the method further comprises a step of transmitting a video image within each of the at least two video frames.
In an embodiment the resolution of the still image is equal to the resolution of a video image to be transmitted within one of the at least two video frames.
In an embodiment the resolution of the still image is greater than the resolution of a video image to be transmitted within one of the at least two video frames. Accordingly, a still image can be displayed with the highest resolution available with a given display, even if the primary video stream displayed at the same time has a lower resolution.
In an embodiment the available bandwidth is determined as a function of a bandwidth of the video network and as a function of the bit size of a video image to be transmitted within one of the at least two video frames so as to optimize still image transmission.
In an embodiment the available bandwidth is further determined as a function of an amount of pixels of a video image to be transmitted within one of the at least two video frames, that are hidden by pixels of the still image when displaying the video image and the still image, to further optimize still image transmission.
In an embodiment the step of encoding the still image comprises a step of determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
In an embodiment the step of encoding the still image comprises a step of splitting the still image into pixel blocks, each pixel block comprising as many pixels as the number of layers, each pixel of a pixel block being assigned to a particular layer.
In an embodiment the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the method further comprising a step of transmitting further chroma information in at least one further video frame.
A second aspect of the invention provides a method for progressively displaying a still image received from a video network, the still image being at least partially displayed by a video display device operating at a predetermined frame rate, the still image being encoded into a base layer and at least one enhancement layer, the method comprising: -receiving a first video frame comprising the base layer and decoding the received base layer; -displaying a low resolution version of the still image based on the received base layer; -receiving at least one second video frame comprising the at least one enhancement layer and decoding the received at least one enhancement layer; and -refining the displayed still image according to the received at least one enhancement layer.
The claimed method allows the display of a still image in such a way that the user-perceptible display build-up of the still image is very fast and without visually perturbing effects.
In an embodiment the method further comprises a step of determining an available bandwidth of the video network for transmitting the still image, the decoding of the base layer and of the at least one enhancement layer being based on the determined available bandwidth, so as to optimize still image transmission.
In an embodiment the method further comprises the steps of decoding a first and at least one second video image received in the first and the at least one second video frame, respectively, and of displaying the decoded first and at least one second video image simultaneously with the displayed still image.
In an embodiment the resolution of the still image is equal to the resolution of a received video image. Alternately, the resolution of the still image is greater than the resolution of a received video image.
In an embodiment the available bandwidth is determined as a function of a bandwidth of the video network and as a function of the bit size of a received video image so as to optimize still image transmission.
In an embodiment the available bandwidth is further determined as a function of an amount of pixels of a received video image that are hidden by pixels of the still image when displaying the video image and the still image so as to further optimize still image transmission.
In an embodiment the step of decoding the received base layer comprises a step of determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
In an embodiment the step of refining the displayed still image according to the received at least one enhancement layer comprises a step of interpolating pixel values from the received base layer and the received at least one enhancement layer so as to improve display rendering of the still image.
In an embodiment the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the method further comprising a step of receiving further chroma information in at least one further video frame and a step of refining the displayed still image according to the received further chroma information.
According to a third aspect of the invention there is provided a computer program comprising instructions for carrying out each step of the method described above when the program is loaded and executed by a programmable apparatus.
The claimed computer program allows the display of a still image in such a way that the user-perceptible display build-up of the still image is very fast and without visually perturbing effects.
A fourth aspect of the invention provides an apparatus for progressively transmitting, via a video network, a still image to be displayed by a video display device operating at a predetermined frame rate, the apparatus comprising: -means for encoding the still image into a base layer and at least one enhancement layer, the base layer allowing display of the still image in low resolution; -means for transmitting the base layer and the at least one enhancement layer in at least two different video frames according to the predetermined frame rate.
The claimed apparatus allows the display of a still image in such a way that the user-perceptible display build-up of the still image is very fast and without visually perturbing effects.
In an embodiment the apparatus further comprises means for determining an available bandwidth of the video network for transmitting the still image, the encoding of the still image into the base layer and the at least one enhancement layer being based on the determined available bandwidth, so as to optimize still image transmission.
In an embodiment the apparatus further comprises means for transmitting a video image within each of the at least two video frames.
In an embodiment the means for encoding the still image comprise means for determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
In an embodiment the means for encoding the still image comprise means for splitting the still image into pixel blocks, each pixel block comprising as many pixels as the number of layers, each pixel of a pixel block being assigned to a particular layer.
In an embodiment the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the apparatus further comprising means for transmitting further chroma information in at least one further video frame.
According to a fifth aspect of the invention there is provided an apparatus for progressively displaying a still image received from a video network, the still image being at least partially displayed by a video display device operating at a predetermined frame rate, the still image being encoded into a base layer and at least one enhancement layer, the apparatus comprising: -means for receiving a first video frame comprising the base layer and decoding the received base layer; -means for displaying a low resolution version of the still image based on the received base layer; -means for receiving at least one second video frame comprising the at least one enhancement layer and decoding the received at least one enhancement layer; and -means for refining the displayed still image according to the received at least one enhancement layer.
The claimed apparatus allows the display of a still image in such a way that the user-perceptible display build-up of the still image is very fast and without visually perturbing effects.
In an embodiment the apparatus further comprises means for determining an available bandwidth of the video network for transmitting the still image, the decoding of the base layer and of the at least one enhancement layer being based on the determined available bandwidth, so as to optimize still image transmission.
In an embodiment the apparatus further comprises means for decoding a first and at least one second video image received in the first and the at least one second video frame, respectively, and means for displaying the decoded first and at least one second video image simultaneously with the displayed still image.
In an embodiment the means for decoding the received base layer comprise means for determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
In an embodiment the means for refining the displayed still image according to the received at least one enhancement layer comprise means for interpolating pixel values from the received base layer and the received at least one enhancement layer.
In an embodiment the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the apparatus further comprising means for receiving further chroma information in at least one further video frame and means for refining the displayed still image according to the received further chroma information.
BRIEF DESCRIPTION OF THE DRAWINGS
Further advantages of the present invention will become apparent to those skilled in the art upon examination of the drawings and detailed description. It is intended that any additional advantages be incorporated herein.
Figure 1 schematically represents an example of a video projection system comprising multiple video projectors for displaying video images and still images either simultaneously or sequentially.
Figure 2 is a flowchart of steps of an algorithm according to a particular embodiment for splitting and progressively transmitting a still image.
Figure 3, comprising Figures 3A and 3B, illustrates the progressive transmission of pixels belonging to a pixel block of a still image.
Figure 4 is a flowchart of steps of an algorithm according to a particular embodiment for displaying a low resolution still image and progressively refining the displayed image upon reception of data.
Figure 5, comprising Figures 5A, 5B, and 5C, illustrates an example of progressive increase of chroma sub-sampling for transmitting a still image via a video network.
Figure 6 is a functional block diagram of a video projector adapted to carry out steps according to embodiments of the invention.
Figure 7 is a block diagram illustrating components of a processing device in which one or more embodiments may be implemented for splitting and progressively transmitting a still image.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
For the sake of clarity below, a video image refers to an image displayed on a video screen by a video system according to a video frame rate (the content of the video image typically changes each time period of the video frame rate) while a still image is an image whose content is static, that is to say, does not change according to a video frame rate. Still for the sake of clarity below, a video frame refers to data transmitted from a video source to a video display system according to a video frame rate. A video frame may comprise data belonging to a video image and/or to a still image.
According to a particular embodiment, a still image to be displayed in superimposition on video images of a high definition (HD) video stream, typically as a picture-in-picture, or alone in a full display area is broken down into layers that are transmitted as parts of video frames or video frames according to a predetermined frame rate or timeslot size.
To that end, a first step consists in determining a down-sampling resolution that is determined as a function of the amount of data that can be transmitted during each time period as defined by the predetermined frame rate or timeslot size. Such an amount of data depends, in particular, on the transmission bandwidth and on the amount of video image data, if any, to be transmitted. The down-sampling resolution can be characterized by two values, preferably two integer values, one representing a width ratio and the other a height ratio.
Knowing the amount of data that can be transmitted, the down-sampling resolution may be defined so that the number of pixels of the still image to be displayed times the coding size of each pixel divided by the product of the two values characterizing the down-sampling resolution is at most equal to the amount of data.
The two values can be equal to each other or not.
For the sake of illustration, these values may be both equal to 2 so that the amount of data of the down-sampled image is one fourth that of the image to be displayed. Similarly, these values may be equal to 2 and 4, respectively, so that the amount of data of the down-sampled image is one eighth that of the image to be displayed, or may be both equal to 4 so that the amount of data of the down-sampled image is one sixteenth that of the image to be displayed.
This can also be combined with chroma sub-sampling (for example, the 4:2:0 coding scheme further reduces the amount of data by a factor of 2).
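The selection rule stated above (pixel count times coding size, divided by the product of the two ratio values, must not exceed the per-frame data budget) can be sketched as follows. The function names and the candidate ratio list are illustrative assumptions, not taken from the patent:

```python
def layer_bits(width, height, bits_per_pixel, r_w, r_h):
    """Bit size of one layer after down-sampling by width/height ratios."""
    return (width * height * bits_per_pixel) // (r_w * r_h)

def choose_ratios(width, height, bits_per_pixel, budget_bits,
                  candidates=((1, 1), (2, 1), (2, 2), (4, 2), (4, 4))):
    """Return the mildest candidate ratio pair whose layer fits the budget."""
    for r_w, r_h in candidates:
        if layer_bits(width, height, bits_per_pixel, r_w, r_h) <= budget_bits:
            return r_w, r_h
    return None  # no candidate fits; a coarser ratio would be needed

# 5,760 x 3,840 image at 24 bits/pixel, ~34 Mbit budget per 60 Hz frame slot.
print(choose_ratios(5760, 3840, 24, 34_000_000))  # (4, 4)
```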
After having determined the down-sampling resolution, the still image to be displayed is split into pixel blocks as a function of the down-sampling resolution (the numbers of pixel blocks per line and per column result from the image width and height and from the width and height ratio, respectively).
Next, the still image to be displayed is transmitted layer by layer, one layer being transmitted per time period as defined by the predetermined frame rate or timeslot size. According to a particular embodiment, each layer comprises one pixel of each pixel block according to a given pixel order within the pixel block (a first layer comprises a first pixel of each pixel block, a second layer comprises a second pixel of each pixel block and so on).
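The layer construction described above (layer k carries the k-th pixel of every pixel block) can be sketched with a toy image; the function name and pixel ordering within a block are illustrative assumptions:

```python
def split_into_layers(image, r_w, r_h):
    """image: 2-D list of pixels; returns r_w * r_h layers as {(x, y): pixel}."""
    h, w = len(image), len(image[0])
    layers = [dict() for _ in range(r_w * r_h)]
    for y in range(h):
        for x in range(w):
            # The pixel's position inside its block selects its layer.
            k = (y % r_h) * r_w + (x % r_w)
            layers[k][(x, y)] = image[y][x]
    return layers

# 4 x 2 toy image split with 2 x 2 blocks -> 4 layers of 2 pixels each.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7]]
layers = split_into_layers(img, 2, 2)
print([sorted(l.items()) for l in layers])
```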
When receiving a first layer of a still image during the timeslot attributed to a video frame, the receiving video projectors display a low-resolution version of the still image to be displayed along with the part of the video image not hidden by said still image if a video image is to be displayed. The received low-resolution image is up-scaled (e.g. using nearest-neighbour interpolation) to be displayed (after receiving the first layer, the number of pixels of the received low-resolution image equals the number of pixel blocks) to fit the physical dimensions of the screen area allotted to displaying the still image.
On reception of the subsequent layers in further video frames, the resolution of the displayed still image is progressively increased (while maintaining its physical dimensions) until it reaches the resolution of the still image to be displayed so as to refine the displayed still image. For example, if the values characterizing the down-sampling resolution are both equal to 4, the still image to be displayed is displayed in its final resolution after 16 time periods as defined by the predetermined frame rate or timeslot size or 32 time periods if combined with a chroma sub-sampling.
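The receiver-side refinement above can be sketched as follows, assuming (as an illustration, not per the patent text) that the base layer carries the top-left pixel of each block, so missing pixels can be filled nearest-neighbour style from that pixel until their own layer arrives:

```python
def reconstruct(received_layers, w, h, r_w, r_h):
    """Build a full-resolution frame from the layers received so far."""
    frame = [[None] * w for _ in range(h)]
    for layer in received_layers:          # place every pixel received so far
        for (x, y), pixel in layer.items():
            frame[y][x] = pixel
    # Fill pixels not yet received from the top-left pixel of their block
    # (assumed to be in the base layer, which is always received first).
    for y in range(h):
        for x in range(w):
            if frame[y][x] is None:
                frame[y][x] = frame[(y // r_h) * r_h][(x // r_w) * r_w]
    return frame

base = {(0, 0): 9, (2, 0): 5}              # base layer of a 4 x 2 toy image
print(reconstruct([base], 4, 2, 2, 2))     # [[9, 9, 5, 5], [9, 9, 5, 5]]
```

Each further layer overwrites part of the filled pixels with their true values, so the displayed image sharpens frame by frame while keeping its physical dimensions.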
It is to be noted that when several video projectors are used and when a still image is to be displayed within a video image as picture-in-picture, each of the video projectors has knowledge of the display areas of the video image and of the still image and they can distinguish in a received video frame the data that correspond to the video image from the data that correspond to the still image.
It is also to be noted that if the resolution of the original still image is greater than that of the still image to be displayed, the original still image can be down-sampled so as to avoid transmitting unnecessary pixels.
According to a particular embodiment according to which a still image is to be displayed within a video image, the down-sampling resolution is set as that of the video image. Accordingly, the still image can be displayed by any standard projector (although with low resolution). In such a case, progressive chroma sub-sampling cannot be applied.
Figure 1 represents an example of a video projection system, or display system, comprising multiple video projectors for displaying video images and still images either simultaneously as picture-in-picture or sequentially.
According to the given example, the video projection system comprises four video projectors 100-1 to 100-4, preferably of the same model. A convex quadrilateral projection area corresponds to each of the four projectors. As illustrated with thin continuous lines, projection areas 105-1 to 105-4 of a flat projection screen 110 correspond to projectors 100-1 to 100-4, respectively. They are arranged in two horizontal rows and two vertical columns and their border zones overlap (blending zones).
Due to the fact that the projectors' optical axes are not necessarily perfectly orthogonal to the plane of the projection screen 110 and due to mechanical tolerances in the mountings of the four projectors 100-1 to 100-4, projection areas 105-1 to 105-4 generally suffer from geometric distortions. Consequently, they are not perfectly rectangular and their borders are generally not parallel to the borders of the projection screen 110. To handle such distortions, the projectors preferably perform geometric distortion (keystone) corrections in addition to blending. The computation of such corrections is usually done during an initial calibration step that is based on procedures that are well known to the person skilled in the art.
Of course, the number of projectors of the video projection system is not limited to four. Likewise, the number of projectors per row and column is not limited to two and the number of projectors per row can be different from the number of projectors per column.
A main projection area 115 (illustrated with thick dashed line in Figure 1) is set on projection screen 110 during the calibration step. It is set so as to fit within the projection area 105-1 to 105-4. Its aspect ratio corresponds to that of the video images to be projected. A secondary projection area 120 (also illustrated with thick dashed line in Figure 1) is also set on projection screen 110 during the calibration step. Like projection area 115, it is set so as to fit within the projection area 105-1 to 105-4. Its aspect ratio corresponds to that of the still images to be displayed. The size and position of secondary projection area 120 may be fixed or variable (e.g. chosen by a user using a remote control).
A control apparatus 125 is connected to the projectors 100-1 to 100-4 by way of a control network 130. Control apparatus 125 transmits the parameters of the geometric corrections to be applied to the projectors 100-1 to 100-4 as well as parameters (typically coordinates) characterizing the parts of the video images and of the still images that each projector has to project, including the blending zones, onto projection area 115 and 120.
According to a particular embodiment, control apparatus 125 is physically embedded within one of the projectors 100-1 to 100-4 acting as a master projector while the others are slave projectors. Alternatively, the function of the control apparatus can be split among several of these projectors. In such a case, one projector acts as a master projector performing the parts of the processing that need to be centralized, while the other projectors, embedding the remaining parts of the control apparatus, perform the rest of the processing in a distributed manner. Control data can be exchanged via control network 130.
Control apparatus 125 also controls a video source 135 and a source 140 of still images. The sources can be implemented in two distinct apparatuses or in a single one. The video source 135 is preferably a source of HD videos. For the sake of illustration, the sources 135 and 140 can be a digital video camera and a still camera, respectively, a hard-disk or solid-state drive, a digital video recorder, a personal computer, a set-top box, a video game console, or the like. Each source can comprise one or several such apparatuses.
Sources 135 and 140 are connected to each of projectors 100-1 to 100-4 through a high-speed, low latency video network 145 that can be any wired or wireless LAN offering a data rate supporting transmission of HD videos. For the sake of illustration, such a video network can be based on one or several of the standards known under the names IEEE 802.11, IEEE 802.3, W-HDMI, IEEE 1394, and USB. The format of the video frames used to transmit video images and still images may be compressed (e.g. MPEG or H.264) or not (RGB or YCbCr), possibly with chroma sub-sampling, with HD resolution 1080p (1,920 x 1,080) or higher, color depth of 24 or 36 bits per pixel and frame rates of 30 or 60 frames per second.
Data transmission through video network 145 can be of the point-to-multipoint type, according to which each of the projectors 100-1 to 100-4 receives a common whole video stream, or of the point-to-point type, according to which each projector receives only a part of the video stream (representing the portion of the image the projector is in charge of projecting).
According to a particular embodiment, a same physical network takes the role of both the video network 145 and the control network 130.
A user may interact with the control apparatus 125 using, for example, a remote control (not represented) to select video images and/or still images from the sources 135 and/or 140, to switch between single-display mode and picture-in-picture mode, and, in the latter case, to choose the size and the position of the display area of the projection screen 110 where a still image is to be projected.
Figure 2 is a flowchart of steps of an algorithm according to a particular embodiment for splitting and progressively transmitting a still image. The represented steps may be executed by a device of the still image source 140, possibly in cooperation with the control apparatus 125, in coordination and synchronization with video source 135. An example of such a device is described hereafter by reference to Figure 7.
The algorithm is triggered whenever the display of a still image is requested or when the size and/or position of the secondary projection area 120 for displaying a still image change (e.g. when the user issues a corresponding command through a remote control).
As described above, if the resolution of the original still image is greater than that of the still image to be displayed, the original still image can be down-sampled in a preliminary step (not represented) so as to avoid transmitting unnecessary pixels.
A first step comprises a test for determining whether or not the display of video images from video source 135 is requested simultaneously with displaying the still image (step 200). If the display of video images from video source 135 is requested simultaneously with displaying the still image, typically as picture-in-picture, the algorithm proceeds with step 202 wherein video image resolution and video frame rate are determined. Video image resolution typically comprises pixel width, pixel height, and color depth (in bits per pixel, taking into account chroma sub-sampling if implemented).
As described above, video image resolution and frame rate allow the determination of the bandwidth required on video network 145 to transmit video frames from the video source 135 to the projectors 100-1 to 100-4. If a point-to-point transmission protocol is used, it may be worthwhile to consider the required bandwidth projector by projector.
On the contrary, if the display of video images from video source 135 is not requested simultaneously with displaying the still image, that is to say if a still image is to be displayed alone, the algorithm proceeds with step 204 wherein a video frame rate (for still image layer transmission) is determined. Such video frame rate can be determined, for example, as a function of the operating mode of the display devices or as a function of the timeslot length of a TDMA (Time Division Multiple Access) transmission sequence on the video network 145.
Steps 202 and 204 are followed by step 206.
The position and size of the projection area 120 where the still image is to be displayed are obtained at step 206. As described above, these parameters may be determined during a calibration step or later by a user. The projection resolution of the still image to be displayed is also determined at step 206, preferably as a function of the projection area where the still image is to be displayed and as a function of the projectors' resolutions. It is to be noted here that the display resolution of the still image can be considerably higher than the display resolution of the video images.
In a following step, the bandwidth of the video network 145 that is available for transmitting data of the still image is determined (step 208). This is done by taking into account the video network capacity (the bandwidth available for user data, excluding protocol overhead like packet headers, acknowledgements and retransmissions and possible redundancy for error detection or correction) and the bandwidth that is needed for transmitting video images, if any, excluding the parts of the video images that are hidden by the still image, that is to say the part of a video image corresponding to projection area 120, which hence doesn't need to be transmitted.
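By way of illustration only (the patent text describes this computation in prose and gives no formulas), the bandwidth determination of step 208 can be sketched as follows. The function name and parameters are hypothetical; the network capacity is assumed to already exclude protocol overhead as described above.

```python
def available_still_bandwidth(network_capacity_bps, video_width, video_height,
                              bits_per_pixel, frame_rate, hidden_pixels=0):
    """Sketch of step 208: bandwidth left over for still image data.

    The video stream only needs to carry the pixels that are not hidden
    by the still image (projection area 120), hence hidden_pixels is
    subtracted from the video pixel count.
    """
    video_pixels = video_width * video_height - hidden_pixels
    video_bps = video_pixels * bits_per_pixel * frame_rate
    return network_capacity_bps - video_bps

# Example: a 3 Gbit/s network carrying 1080p video, 24 bits/pixel, 30 fps
leftover = available_still_bandwidth(3_000_000_000, 1920, 1080, 24, 30)
```

If the still image hides the whole video image, the entire capacity becomes available for still image data, as expected.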
Next, a pixel block size for progressively transmitting the still image is determined (step 210). Such a pixel block size is directly linked to the down-sampling resolution described previously.
The pixel blocks, preferably all of equal size, cover the whole still image, preferably without overlap. They are advantageously arranged on a regular basis forming, for example, a rectangular or square grid. For a given bandwidth, the bigger the display resolution of the still image (as determined in step 206) in comparison to the video image resolution, the bigger the pixel blocks should be since during each video frame transmission, one pixel of each block is transmitted.
To that end, the network bandwidth available for still image transmission (as determined in step 208) is divided by the video frame rate in order to obtain a maximum amount of data belonging to the still image that can be transmitted during a time period as defined by the video frame rate (referred to as the time period herein below). From such an amount of data, it is possible, knowing the bit size of a pixel, to determine the number of pixels of the still image to be displayed that can be transmitted during a time period. Next, dividing the number of pixels of the still image to be displayed by the number of pixels that can be transmitted during a time period yields the minimum pixel block size. If this result is not an integer, the closest greater integer is chosen. It is to be noted that a greater block size can be chosen (hence spreading the transmission over more video frames), for example to obtain square or nearly square blocks that result in a better rendering of the still image in low resolution (during the first time periods of transmission).
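The block size computation described above can be sketched as follows (hypothetical function name; a sketch only, not the claimed implementation):

```python
import math

def minimum_pixel_block_size(image_width, image_height, bits_per_pixel,
                             still_bandwidth_bps, frame_rate):
    """Sketch of step 210: smallest block size such that one pixel of
    every block can be transmitted during each video frame period."""
    bits_per_period = still_bandwidth_bps / frame_rate
    pixels_per_period = int(bits_per_period // bits_per_pixel)
    # If the division is not an integer, the closest greater integer is chosen.
    return math.ceil(image_width * image_height / pixels_per_period)
```

For example, a 3,840 x 2,160 still image at 24 bits per pixel with roughly 1.5 Gbit/s available at 30 frames per second yields a minimum block size of 4; as noted above, a greater size (e.g. 16, giving square 4 x 4 blocks) may then be preferred for better low-resolution rendering.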
After having determined the pixel block size for progressively transmitting the still image, the order for transmitting the pixels of each block is determined at step 212. Such pixel order is advantageously determined so as to optimize the visual rendering of the blocks. An example of pixel order for transmitting pixels of each pixel block is illustrated in Figure 3A. Such pixel order can be obtained, for example, from configuration data of the algorithm.
Next, the value of an integer variable referred to as pixel_counter is set to 1 (step 214).
If the still image is to be displayed simultaneously with a video image, the process awaits the transmission of a video frame at step 216.
Next, the pixels whose order number in the pixel blocks corresponds to the value of the variable pixel_counter, for all the pixel blocks of the still image to display, forming a base layer (if the value of the variable pixel_counter is equal to 1) or an enhancement layer (if the value of the variable pixel_counter is greater than 1), are transmitted from the still image source 140 to projectors of the display system via video network 145 (step 218). Each pixel is transmitted to all projectors or not depending on the transmission mode (point-to-multipoint or point-to-point).
In a particular embodiment according to which the still image is to be displayed simultaneously with a video image and according to which the resolution of the still image is the same as that of the video image, the pixels to be transmitted are advantageously embedded into the video frame at the place of the corresponding pixels of the video image that are hidden by that of the still image (projection area 120).
Alternatively, one or several particular timeslots are reserved for still image pixel transmission during the transmission of each video frame. If needed, splitting the set of pixels to be transmitted among the different projectors and blending areas are managed by the still image source 140 similarly to what is done in video source 135.
Next, the value of the variable pixel_counter is incremented by 1 (step 220) and a test is performed to determine whether or not all the pixels of each pixel block have been transmitted (step 222), that is to say whether or not the value of the variable pixel_counter is greater than the size of the pixel blocks. If not all the pixels of each pixel block have been transmitted, the algorithm proceeds with step 216 to transmit an enhancement layer whose pixel order number is equal to the value of the variable pixel_counter. On the contrary, if all the pixels of each pixel block have been transmitted, the algorithm ends.
Figure 3, comprising Figures 3A and 3B, illustrates the progressive transmission of pixels belonging to a pixel block of a still image. More precisely, Figure 3A shows an example of a pixel transmission order within a pixel block for progressive and interleaved transmission of a still image while Figure 3B shows an example of a video frame sequence as it may be transmitted through video network 145, together with the interleaved rendering of a pixel block by a projector.
For the sake of illustration, the pixel block comprises 16 pixels arranged in a square shape (4 x 4). Therefore, according to the example given, 16 time periods are necessary to transmit a still image in its full resolution.
As illustrated in Figure 3A, the first pixel is the one positioned on the first line and first column, the second one is that positioned on the third line and third column, and so on. As one can observe, the order is chosen so that:
- after reception of a first pixel of a pixel block, a projector creates a block of 4 x 4 pixels of the same color linked to that of the received pixel;
- after reception of a second pixel of the pixel block, the created block is refined as a block comprising 2 sub-blocks of 4 x 2 pixels, all the pixels of a sub-block sharing the same color linked to that of a received pixel (the pixel color of the first sub-block pixels is linked to that of the first received pixel and the pixel color of the second sub-block pixels is linked to that of the second received pixel);
- after reception of a third and a fourth pixel of the pixel block, the created block is further refined as a block comprising 4 sub-blocks of 2 x 2 pixels (again, all the pixels of a sub-block share the same color linked to that of a particular received pixel);
- after reception of a fifth to an eighth pixel of the pixel block, the created block is further refined as a block comprising 8 sub-blocks of 2 x 1 pixels (once again, all the pixels of a sub-block share the same color linked to that of a particular received pixel); and
- after reception of a ninth to a sixteenth pixel of the pixel block, the created block is further refined as a block comprising 16 unitary sub-blocks (the color of each pixel of the refined block is linked to that of a received pixel).

Naturally, any other order may be used for the block whose size and shape are illustrated in Figure 3A and a similar order may be applied to a pixel block having a different size and/or shape. However, it is most advantageous to choose an order in which a good approximation of the whole pixel block is obtained by the receiving side as soon as possible.
This is the case if consecutively transmitted pixels are situated far from each other as well as from corresponding pixels in adjacent blocks. This is the case, for example, for pixels n° 1 and 2 as presented in Figure 3A. Pixel n° 2 has the maximal distance to the pixels n° 1 of the current block as well as of the adjacent blocks (e.g. to the right of the current block, below the current block, and to the right below the current block).
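The refinement sequence described above can be sketched as follows. The transmission order below is hypothetical (the exact order of Figure 3A is not reproduced here); it merely satisfies the successive sub-block shapes described, and the fill rule replicates the color of the governing received pixel across its sub-block.

```python
# A hypothetical 4 x 4 transmission order consistent with the refinement
# steps described above (the actual order of Figure 3A may differ):
ORDER = [(0, 0), (2, 2), (0, 2), (2, 0),
         (1, 1), (3, 3), (1, 3), (3, 1),
         (0, 1), (2, 3), (0, 3), (2, 1),
         (1, 0), (3, 2), (1, 2), (3, 0)]

def render_block(received):
    """Fill a 4 x 4 block from the pixels received so far.

    received maps (row, column) to a color value; its length must be
    1, 2, 4, 8 or 16. Every missing pixel takes the color of the
    received pixel sharing its sub-block at the current refinement level.
    """
    # sub-block shape (rows, columns) per number of received pixels
    shapes = {1: (4, 4), 2: (2, 4), 4: (2, 2), 8: (1, 2), 16: (1, 1)}
    rows, cols = shapes[len(received)]
    block = [[None] * 4 for _ in range(4)]
    for (pr, pc), color in received.items():
        for r in range(4):
            for c in range(4):
                if r // rows == pr // rows and c // cols == pc // cols:
                    block[r][c] = color
    return block
```

For instance, after the first two pixels are received, the top half of the block takes the color of the first pixel and the bottom half that of the second, matching the 4 x 2 sub-block stage described above.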
Figure 3B illustrates an example of a video frame sequence for transmitting data allowing the display of a still image simultaneously with video images, that is based on the block size and shape as well as on the pixel order of the example given in Figure 3A.
As illustrated, the data transmitted during each time period, that is to say within each video frame, are split into a still image part (originating from still image source 140), generically referenced 300, that represents a base layer or an enhancement layer of the still image, and a video part (originating from video source 135), generically referenced 305. Naturally, the video part is absent if no video image is to be displayed simultaneously with a still image. These data parts may be transmitted sequentially in time as suggested in Figure 3B or split and interleaved.
According to a particular embodiment, a destination projector successively receiving still picture data spread over several video frames stores the received still picture data in memory. After having received a first part of the still image, that is to say the base layer, the still image is displayed with a low resolution. Then, each time an enhancement layer is received, the displayed image is refined. This is preferably repeated until the displayed image reaches its full resolution.
According to a particular embodiment, after reception of a first pixel of each pixel block (pixel having number 1 as pixel order), all other pixels within the same block are assigned the color of the first pixel. As illustrated in Figure 3B, when a first pixel of a block is received in still image part 300-1 of a first video frame also comprising video data 305-1, the corresponding pixel block 310-1 is created. The received pixel value corresponds to the pixel positioned on the first line and first column (underlined reference 1 of block 310-1). Since the pixel block does not contain any value before receiving the first pixel, all the pixels of this block are based on the received value (bold reference 1 of block 310-1). The same applies for all the blocks of the still image to display.
After having been created, the obtained blocks may be displayed, forming a low resolution image, without waiting for the other pixel values.
Next, when a second pixel of the block is received in still image part 300-2 of a second video frame, the corresponding pixel block 310-1 is refined and gives pixel block 310-2. As illustrated, the received pixel value corresponds to the pixel positioned on the third line and third column (underlined reference 2 of block 310-2). Based on the received value, the color of all the neighbor pixels is refined (bold reference 2 of block 310-2). The refined still image is displayed.
As illustrated, the process is repeated for the fourteen following video frames, that is to say until each pixel color of each block has been received.
According to another embodiment, the color of a pixel that has not been received is interpolated from that of its neighbors (belonging to the same pixel block or to adjacent pixel blocks) using, for example, bi-linear or bi-cubic interpolation. In such a case, the originally received pixel data should be stored in a particular memory zone, different from the video memory, so that the original, non-interpolated color values of the received pixels remain available during processing of subsequent video frames.
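A simplified sketch of this interpolation variant is given below, assuming 8-bit grayscale values and that the received pixels lie on a regular sub-grid (spacing `step`, as after transmission of the base layer); at the borders, where a surrounding received pixel is missing, the nearest grid value is replicated. The function name and the bilinear weighting shown are illustrative, not the claimed implementation.

```python
def bilinear_fill(received, size, step):
    """Interpolate missing pixels of a size x size image from received
    pixels lying on a regular grid with the given spacing."""
    out = [[0.0] * size for _ in range(size)]
    last = ((size - 1) // step) * step          # last received row/column
    for r in range(size):
        for c in range(size):
            # corners of the grid cell containing (r, c), clamped at borders
            r0 = min((r // step) * step, last)
            c0 = min((c // step) * step, last)
            r1, c1 = min(r0 + step, last), min(c0 + step, last)
            fr = (r - r0) / step if r1 > r0 else 0.0
            fc = (c - c0) / step if c1 > c0 else 0.0
            out[r][c] = ((1 - fr) * (1 - fc) * received[(r0, c0)]
                         + (1 - fr) * fc * received[(r0, c1)]
                         + fr * (1 - fc) * received[(r1, c0)]
                         + fr * fc * received[(r1, c1)])
    return out
```

As noted above, the received values must be kept in a separate memory zone so that subsequent enhancement layers can be interpolated from original, non-interpolated colors.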
Figure 4 is a flowchart of steps of an algorithm according to a particular embodiment for displaying a low resolution still image and progressively refining the displayed image upon reception of data.
The represented steps may be executed by a device of each projector of a display system. An example of such a device is described below by reference to Figure 6.
Like the algorithm described by reference to Figure 2, the algorithm illustrated in Figure 4 is triggered whenever the display of a still image is requested or when the size and/or position of the secondary projection area 120 for displaying a still image changes (e.g. when a user issues a corresponding command through a remote control).
According to the given example, steps 400 to 414 are similar to steps 200 to 214 described by reference to Figure 2 and aim at obtaining display and communication parameters, in particular a frame rate, still image screen size and resolution, pixel block size and arrangement, and pixel transmission order within pixel blocks.
It is to be noted that although these parameters can be at least partially computed in projectors, they can also be received from the still image source 140.
After having determined or received at least part of the display and communication parameters, pixels of a still image to display are received. More precisely, each time a video frame is received, still image data are recovered and pixels of a layer of the still image to display are obtained (step 418).
Next, a test is performed to determine whether the received pixels belong to the base layer or to an enhancement layer, that is to say whether the value of the variable pixel_counter is equal to 1 or greater than 1 (step 420).
If the received pixels belong to the base layer, pixel blocks are created (step 422), one pixel block being created for each received pixel. As described previously, the color of all pixels of each block is preferably set to that of the corresponding received pixel.
On the contrary, if the received pixels belong to an enhancement layer, the color of pixels of the pixel blocks is refined according to the received data (step 424).
As described above, such a refinement may consist in replacing the color value of pixels by that of a received pixel or may be based on interpolation functions.
Next, the image formed with the created or refined pixel blocks is displayed (step 426), the value of the variable pixel_counter is incremented by 1 (step 428), and a test is performed to determine whether or not all the pixels of each pixel block have been received (step 430), that is to say whether or not the value of the variable pixel_counter is greater than the size of the pixel block. If not all the pixels of each pixel block have been received, the algorithm proceeds with step 418 to receive an enhancement layer. On the contrary, if all the pixels of each pixel block have been received, the algorithm ends.
It is to be noted that if the still image is to be displayed simultaneously with a video frame, the latter is decoded and displayed in a standard way.
Figure 5, comprising Figures 5A, 5B, and 5C, illustrates an example of progressive increase of chroma sub-sampling for transmitting a still image via a video network. Progressive increase of chroma sub-sampling may be used in conjunction with the progressive transmission of pixel values as described herein above.
According to this particular embodiment, a still image is progressively transmitted using a first chroma sub-sampling, using an algorithm like the one described by reference to Figure 2. Next, in subsequent video frames, missing chroma data are progressively transmitted.
For the sake of illustration, it is assumed that a still image has been transmitted on the basis of 4 x 4 pixel blocks (as illustrated in Figure 3) using chroma sub-sampling of the 4:2:0 type. It is to be recalled here that chroma sub-sampling of the 4:2:0 type means that chroma information (Cb and Cr) is present both with half the horizontal and half the vertical resolution of luma information (Y), as represented in Figure 5A.
In a subsequent video frame, missing chroma data are transmitted so as to attain chroma sub-sampling of the 4:2:2 type according to which chroma information (Cb and Cr) is present with half the horizontal resolution of luma information (Y) and the same vertical resolution, as represented in Figure 5B. This is done according to a predetermined order, for example that given in Figure 5B with bold characters. It is to be noted that as the amount of data per pixel is less than for the first progressive transmission of the still image as represented in Figure 5A (average of 4 bits/pixel versus 12 bits/pixel), it is possible to fit the transmission of the chroma data for several pixels into one video frame.
Next, in a further video frame, missing chroma data are transmitted so as to attain chroma sub-sampling of the 4:4:4 type according to which chroma information (Cb and Cr) is present with the same horizontal and vertical resolution as luma information (Y), as represented in Figure 5C. Again, since the quantity of data per pixel is less than for the first progressive transmission of the still image as represented in Figure 5A (average of 8 bits/pixel versus 12 bits/pixel), it is possible to fit the transmission of the chroma data for several pixels into one video frame.
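The per-pixel data amounts quoted above follow directly from the sub-sampling ratios. A short sketch, assuming 8-bit samples (the function name is illustrative):

```python
def average_bits_per_pixel(sample_bits, cb_fraction, cr_fraction):
    """Average bits per pixel: full-resolution luma plus the fraction
    of pixels carrying each chroma sample."""
    return sample_bits * (1 + cb_fraction + cr_fraction)

bpp_420 = average_bits_per_pixel(8, 1 / 4, 1 / 4)  # 12 bits/pixel (Figure 5A)
bpp_422 = average_bits_per_pixel(8, 1 / 2, 1 / 2)  # 16 bits/pixel
bpp_444 = average_bits_per_pixel(8, 1, 1)          # 24 bits/pixel

delta_to_422 = bpp_422 - bpp_420  # 4 bits/pixel to reach 4:2:2 (Figure 5B)
delta_to_444 = bpp_444 - bpp_422  # 8 bits/pixel more to reach 4:4:4 (Figure 5C)
```

These deltas are the averages of 4 and 8 bits/pixel mentioned above, which is why the chroma data for several pixels fit into one video frame.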
Figure 6 is a functional block diagram of a video projector adapted to carry out steps according to embodiments of the invention.
As illustrated, the projector comprises a processor or microcontroller 600 that can be used to manage most of the configuration tasks and to execute several algorithms like those described in relation with the previous figures, in particular Figure 4. Those algorithms generate configuration values that can be set in corresponding functional blocks thanks to a processor interconnection bus 602. All blocks that need to be configured by processor 600 are linked to bus 602.
A random access memory (RAM) 604 is also linked to bus 602 to cooperate with processor 600, in particular for storing program instructions and data handled by processor 600.
Optionally, a camera 606 is connected to bus 602 for acquiring images that may be used for calibrating and initializing the display system.
A video source interface module 608, also connected to bus 602, is configured to receive synchronization information and video data from a video source and a still image source, for example via an HDMI and a USB adapter, respectively, as illustrated. Video source interface module 608 outputs video data to a video buffer controller 610 and outputs synchronization signals to a synchronization controller 612.
Synchronization controller 612 is responsible for generating video synchronization signals used to manage video rendering. Video buffer controller 610 manages the access to a video buffer 614 that may be implemented in a RAM. It writes data from video source interface module 608 and from a wireless local area network (WLAN) controller 616 into video buffer 614. It also reads video data out from video buffer 614 and provides them either to WLAN controller 616 or to a local display controller 618 depending on video data destination.
Figure 7 is a block diagram illustrating components of a processing device 700 in which one or more embodiments may be implemented for splitting and progressively transmitting a still image. The processing device 700 comprises a communication bus 713 connected to:
- a central processing unit 711, such as a microprocessor, denoted CPU;
- a read only memory 707, denoted ROM, for storing computer programs for implementing the invention;
- a random access memory 712, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method of encoding and progressively transmitting still images according to embodiments of the invention; and
- a communication interface 702 connected to a communication network 703 through which encoded still images can be progressively transmitted.
Optionally, the processing device 700 may also include the following components:
- a data storage means 704, such as a hard disk, for storing computer programs for implementing methods of one or more embodiments of the invention and data used or produced during the implementation of one or more embodiments of the invention;
- a disk drive 705 for a disk 706, the disk drive being adapted to read data from the disk 706 or to write data onto said disk; and
- a screen 709 for displaying data and/or serving as a graphical interface with a user, by means of a keyboard 710, a mouse, or any other pointing means.
The processing device 700 can be connected to various peripherals, such as for example a digital camera 720 or a microphone 708, each being connected to an input/output card (not shown) so as to supply multimedia data to the processing device 700.
The communication bus provides communication and interoperability between the various elements included in the processing device 700 or connected to it.
The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the processing device 700 directly or by means of another element of the processing device 700.
The executable code may be stored either in read only memory 707, on the hard disk 704, or on a removable digital medium such as for example a disk 706 as described previously. According to a variant, the executable code of the programs can be received by means of the communication network 703, via the interface 702, in order to be stored in one of the storage means of the processing device 700 before being executed, such as the hard disk 704.
The central processing unit 711 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, which instructions are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 704 or in the read only memory 707, are transferred into the random access memory 712, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In this embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
Naturally, in order to satisfy local and specific requirements, a person skilled in the art may apply to the solution described above many modifications and alterations all of which, however, are included within the scope of protection of the invention as defined by the following claims.

Claims (38)

CLAIMS
1. A method for progressively transmitting, via a video network, a still image to be displayed by a video display device operating at a predetermined frame rate, the method comprising:
- encoding the still image into a base layer and at least one enhancement layer, the base layer allowing display of the still image in low resolution;
- transmitting the base layer and the at least one enhancement layer in at least two different video frames according to the predetermined frame rate.
  2. 2. The method of Claim 1 further comprising a step of determining an available bandwidth of the video network for transmitting the still image, the encoding of the still image into the base layer and the at least one enhancement layer being based on the determined available bandwidth.
  3. 3. The method of Claim 2 wherein the still image is encoded so that the bit size of each of the base layer and the at least one enhancement layer is smaller or equal to the amount of data that can be transferred in a time period as determined by the predetermined frame rate.
  4. 4. The method of any of Claims 1 to 3 further comprising a step of transmitting a video image within each of the at least two video frames.
  5. 5. The method of Claim 4 wherein the resolution of the still image is equal to the resolution of a video image to be transmitted within one of the at least two video fra in es.
  6. 6. The method of Claim 4 wherein the resolution of the still image is greater than the resolution of a video image to be transmitted within one of the at least two video frames.
  7. 7. The method of any one of Claims 4 to 6, depending on Claim 2, wherein the available bandwidth is determined as a function of a bandwidth of the video network and as a function of the bit size of a video image to be transmitted within one of the at least two video frames.
  8. 8. The method of Claim 7 wherein the available bandwidth is further determined as a function of an amount of pixels of a video image to be transmitted within one of the at least two video frames, that are hidden by pixels of the still image when displaying the video image and the still image.
  9. 9. The method of any one of Claims 1 to 8 wherein the step of encoding the still image comprises a step of determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of enhancement layers minus one.
10. The method of Claim 9 wherein the step of encoding the still image comprises a step of splitting the still image into pixel blocks, each pixel block comprising as many pixels as the number of layers, each pixel of a pixel block being assigned to a particular layer.
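The pixel-block assignment of Claim 10 can be sketched as follows, assuming (purely for illustration) 2x2 blocks for four layers, with pixel k of every block assigned to layer k; the function name and block layout are not taken from the patent.

```python
def split_into_layers(image, block_w, block_h):
    """Split a 2D image (list of rows) into block_w*block_h layers.
    Layer k collects pixel index k (row-major within the block) of
    every block, so layer 0 is the base layer."""
    n_layers = block_w * block_h
    layers = [[] for _ in range(n_layers)]
    h, w = len(image), len(image[0])
    for by in range(0, h, block_h):
        for bx in range(0, w, block_w):
            for k in range(n_layers):
                dy, dx = divmod(k, block_w)   # position of pixel k in the block
                layers[k].append(image[by + dy][bx + dx])
    return layers

img = [[0, 1, 2, 3],
       [4, 5, 6, 7]]
layers = split_into_layers(img, 2, 2)
# layers[0] (the base layer) holds the top-left pixel of each block: [0, 2]
```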
11. The method of any one of Claims 1 to 10, wherein the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the method further comprising a step of transmitting further chroma information in at least one further video frame.
12. A method for progressively displaying a still image received from a video network, the still image being at least partially displayed by a video display device operating at a predetermined frame rate, the still image being encoded into a base layer and at least one enhancement layer, the method comprising: - receiving a first video frame comprising the base layer and decoding the received base layer; - displaying a low resolution version of the still image based on the received base layer; - receiving at least one second video frame comprising the at least one enhancement layer and decoding the received at least one enhancement layer; and - refining the displayed still image according to the received at least one enhancement layer.
13. The method of Claim 12 further comprising a step of determining an available bandwidth of the video network for transmitting the still image, the decoding of the base layer and of the at least one enhancement layer being based on the determined available bandwidth.
14. The method of either Claim 12 or Claim 13 further comprising the steps of decoding a first and at least one second video image received in the first and the at least one second video frame, respectively, and of displaying the decoded first and at least one second video image simultaneously with the displayed still image.
15. The method of Claim 14 wherein the resolution of the still image is equal to the resolution of a received video image.
16. The method of Claim 14 wherein the resolution of the still image is greater than the resolution of a received video image.
17. The method of any one of Claims 14 to 16 when dependent on Claim 13, wherein the available bandwidth is determined as a function of a bandwidth of the video network and as a function of the bit size of a received video image.
18. The method of Claim 17 wherein the available bandwidth is further determined as a function of an amount of pixels of a received video image that are hidden by pixels of the still image when displaying the video image and the still image.
19. The method of any one of Claims 12 to 18, wherein the step of decoding the received base layer comprises a step of determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
20. The method of any one of Claims 12 to 19, wherein the step of refining the displayed still image according to the received at least one enhancement layer comprises a step of interpolating pixel values from the received base layer and the received at least one enhancement layer.
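The interpolation step of Claim 20 might look like the following sketch for one row of two-pixel blocks: even positions come from the base layer, odd positions show the enhancement-layer value once it has arrived, and a linear interpolation of neighbouring base-layer pixels until then. The function and its one-dimensional layout are assumptions for illustration, not the patent's own method.

```python
def reconstruct_row(base, enhancement=None):
    """Rebuild a full-resolution row from a base layer and an optional
    enhancement layer. Missing odd pixels are interpolated from the two
    neighbouring base-layer pixels (the last one is simply repeated)."""
    row = []
    for i, b in enumerate(base):
        row.append(b)                              # base-layer pixel
        if enhancement is not None:
            row.append(enhancement[i])             # exact value, once received
        else:
            right = base[i + 1] if i + 1 < len(base) else b
            row.append((b + right) // 2)           # interpolated placeholder
    return row

reconstruct_row([10, 20, 30])                      # [10, 15, 20, 25, 30, 30]
reconstruct_row([10, 20, 30], [12, 22, 32])        # [10, 12, 20, 22, 30, 32]
```

Once an enhancement layer arrives, the interpolated placeholders are simply overwritten, which is what makes the display refine progressively.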
21. The method of any one of Claims 12 to 20, wherein the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the method further comprising a step of receiving further chroma information in at least one further video frame and a step of refining the displayed still image according to the received further chroma information.
22. A computer program comprising instructions for carrying out each step of the method according to any one of Claims 1 to 21 when the program is loaded and executed by a programmable apparatus.
23. An apparatus for progressively transmitting, via a video network, a still image to be displayed by a video display device operating at a predetermined frame rate, the apparatus comprising: - means for encoding the still image into a base layer and at least one enhancement layer, the base layer allowing display of the still image in low resolution; - means for transmitting the base layer and the at least one enhancement layer in at least two different video frames according to the predetermined frame rate.
24. The apparatus of Claim 23 further comprising means for determining an available bandwidth of the video network for transmitting the still image, the encoding of the still image into the base layer and the at least one enhancement layer being based on the determined available bandwidth.
25. The apparatus of any one of Claims 23 and 24 further comprising means for transmitting a video image within each of the at least two video frames.
26. The apparatus of any one of Claims 23 to 25 wherein the means for encoding the still image comprise means for determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
27. The apparatus of Claim 26 wherein the means for encoding the still image comprise means for splitting the still image into pixel blocks, each pixel block comprising as many pixels as the number of layers, each pixel of a pixel block being assigned to a particular layer.
28. The apparatus of any one of Claims 23 to 27, wherein the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the apparatus further comprising means for transmitting further chroma information in at least one further video frame.
29. An apparatus for progressively displaying a still image received from a video network, the still image being at least partially displayed by a video display device operating at a predetermined frame rate, the still image being encoded into a base layer and at least one enhancement layer, the apparatus comprising: - means for receiving a first video frame comprising the base layer and decoding the received base layer; - means for displaying a low resolution version of the still image based on the received base layer; - means for receiving at least one second video frame comprising the at least one enhancement layer and decoding the received at least one enhancement layer; and - means for refining the displayed still image according to the received at least one enhancement layer.
30. The apparatus of Claim 29 further comprising means for determining an available bandwidth of the video network for transmitting the still image, the decoding of the base layer and of the at least one enhancement layer being based on the determined available bandwidth.
31. The apparatus of either Claim 29 or Claim 30 further comprising means for decoding a first and at least one second video image received in the first and the at least one second video frame, respectively, and means for displaying the decoded first and at least one second video image simultaneously with the displayed still image.
32. The apparatus of any one of Claims 29 to 31, wherein the means for decoding the received base layer comprise means for determining a number of layers, the still image being encoded in the base layer and as many enhancement layers as the determined number of layers minus one.
33. The apparatus of any one of Claims 29 to 32, wherein the means for refining the displayed still image according to the received at least one enhancement layer comprise means for interpolating pixel values from the received base layer and the received at least one enhancement layer.
34. The apparatus of any one of Claims 29 to 33, wherein the base layer and the at least one enhancement layer comprise chroma sub-sampling information, the apparatus further comprising means for receiving further chroma information in at least one further video frame and means for refining the displayed still image according to the received further chroma information.
35. A method for progressively transmitting, via a video network, a still image to be displayed by a video display device operating at a predetermined frame rate, substantially as herein described with reference to, and as shown in, Figure 2 of the accompanying drawings.
36. A method for progressively displaying a still image received from a video network, the still image being at least partially displayed by a video display device operating at a predetermined frame rate, the still image being encoded into a base layer and at least one enhancement layer, substantially as herein described with reference to, and as shown in, Figure 4 of the accompanying drawings.
37. An apparatus for progressively transmitting, via a video network, a still image to be displayed by a video display device operating at a predetermined frame rate, substantially as herein described with reference to, and as shown in, Figure 7 of the accompanying drawings.
38. An apparatus for progressively displaying a still image received from a video network, the still image being at least partially displayed by a video display device operating at a predetermined frame rate, the still image being encoded into a base layer and at least one enhancement layer, substantially as herein described with reference to, and as shown in, Figure 6 of the accompanying drawings.
GB201219219A 2012-10-25 2012-10-25 Method and device for progressive transmission of still images to a video display system Active GB2507482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201219219A GB2507482B (en) 2012-10-25 2012-10-25 Method and device for progressive transmission of still images to a video display system

Publications (3)

Publication Number Publication Date
GB201219219D0 (en) 2012-12-12
GB2507482A 2014-05-07
GB2507482B (en) 2014-11-12

Family

ID=47358651

Country Status (1)

Country Link
GB (1) GB2507482B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5880856A (en) * 1994-12-05 1999-03-09 Microsoft Corporation Progressive image transmission using discrete wavelet transforms
US5724070A (en) * 1995-11-20 1998-03-03 Microsoft Corporation Common digital representation of still images for data transfer with both slow and fast data transfer rates

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018161957A1 (en) * 2017-03-10 2018-09-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for layer drawing control, and mobile terminal
US11100901B2 (en) 2017-03-10 2021-08-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for controlling rendering of layers, terminal, and storage medium

Similar Documents

Publication Publication Date Title
KR102155681B1 (en) Systems and methods for virtual reality video conversion and streaming
CN112204993B (en) Adaptive panoramic video streaming using overlapping partitioned segments
US9741316B2 (en) Method and system for displaying pixels on display devices
US20180098090A1 (en) Method and Apparatus for Rearranging VR Video Format and Constrained Encoding Parameters
CN108780584B (en) Conversion and preprocessing of spherical video for streaming and rendering
US9177359B2 (en) Information processor, cloud platform, information processing method, and computer program product thereof
US20140125554A1 (en) Apparatus and algorithm to implement smart mirroring for a multiple display system
CN109640167B (en) Video processing method and device, electronic equipment and storage medium
US20210314647A1 (en) Method of video transmission and display
EP4300985A2 (en) Adaptive panoramic video streaming using composite pictures
GB2501161A (en) Image processing for projection with multiple projectors
JP2014531807A (en) Image processing system and method
CN101778226B (en) High-definition image sawtooth-prevention method, device and digital television receiving terminal
KR101152952B1 (en) Real-time three dimension formating module for ultra high-definition image and system using thereof
CN102026007A (en) Method and system for processing video
WO2021199205A1 (en) Image data transfer apparatus, image display system, and image data transfer method
GB2507482A (en) Progressive Transmission of Still Images to a Video Display System
GB2526148A (en) Seamless display of a video sequence with increased frame rate
WO2021193361A1 (en) Image data transfer device, image display system, and image transfer method
WO2015132957A1 (en) Video device and video processing method
JP4672561B2 (en) Image processing apparatus, receiving apparatus, broadcast system, image processing method, image processing program, and recording medium
TW202218421A (en) Content display process
Doyen et al. Display independent real-time multi-view rendering engine
GB2510814A (en) Luma-indexed chroma sub-sampling