GB2527577A - Frame rate augmentation - Google Patents

Frame rate augmentation

Info

Publication number
GB2527577A
GB2527577A GB1411401.1A GB201411401A GB2527577A GB 2527577 A GB2527577 A GB 2527577A GB 201411401 A GB201411401 A GB 201411401A GB 2527577 A GB2527577 A GB 2527577A
Authority
GB
United Kingdom
Prior art keywords
image
display
video
frame
time information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1411401.1A
Other versions
GB2527577B (en)
GB201411401D0 (en)
Inventor
Arnaud Closset
Brice Le Houerou
Falk Tannhauser
Tristan Halna Du Fretay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1411401.1A priority Critical patent/GB2527577B/en
Publication of GB201411401D0 publication Critical patent/GB201411401D0/en
Publication of GB2527577A publication Critical patent/GB2527577A/en
Application granted granted Critical
Publication of GB2527577B publication Critical patent/GB2527577B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4622Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/3147Multi-projection systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Abstract

Processing image data for image sequence display comprises receiving image data representing at least one current image portion of the sequence, e.g. according to a subdivision grid for image sequence display by a multi-projection system. First time information representative of an expected completion time of the receipt of the current image (t4) is determined based on at least one of an image period of the image sequence and the delay for receiving the image data from a source (t0'-t0), and second time information representative of a scheduled time of display of the current image (SOF) is also determined. The first and second time information are compared. The comparison result triggers the display of either the current image or an image based on at least one precedent image (e.g. the precedent image itself). In this way, if the image is expected to be received in time, it is displayed; otherwise a previous image is displayed. The first time information may be received with the image data. The scheduled time may be determined according to a display frame rate. The image based on at least one precedent image may be the result of a motion estimation process between successive frames of the video sequence.

Description

FRAME RATE AUGMENTATION.
FIELD OF THE INVENTION
The present invention relates to video data transmission from a source device to a display device. The present invention also relates to the control of the display of the video data.
BACKGROUND OF THE INVENTION
In multi-projection systems, scalable composite display is based on distributed video processing technology and distributed video networking technology. Each video projector is capable of creating adjusted video frames for its local display unit.
The video processing units of the video projectors are fed with raw video frame portions that are supplied by a distributed network unit which controls the communications between the video projectors and the video source.
For each video projector, a raw video frame is made of several image areas (the "frame portions"), some of them, the "blended zones", being displayed by several video projectors at the same time.
In a multi-source environment (e.g. applications like source selection, Picture in Picture "PIP", etc.), each source is split into several portions, each portion being delivered to a designated video projector using a network transmission layer. From this perspective, the video projection system should have a display rate capability high enough to cover any heterogeneous input frame rates and resolutions.
In order to provide synchronization in a composite display system, it must be made sure that all source frame portions displayed by the different video projectors during a display frame period belong to the same initial video source frame. The frame rate of the video sources is generally lower than the composite display rate. Thus, the frame rate of the image sequences from the video sources must be increased to the frame rate of the video projectors.
Document WO 2010/010497 discloses a frame rate up-converter implementing a triple buffering scheme to perform either an N:M standard frame duplication method (N and M representing the ratio between the source and display frame rates), or a more elaborate motion-compensated frame rate conversion to reduce the motion judder introduced by the standard N:M up-conversion method.
A drawback of such techniques, in particular when implemented in a multi projection system, at the source location and before entering the network, is that the network bandwidth allocation for each video source is maximized with respect to the composite display rate. This may overflow the network bandwidth capability when the number of video sources within the system is increased.
Conversely, implementation of such techniques at the video projector level raises a problem of feasibility and coherency between the up-conversion processing of each individual portion among the different projectors.
This may be due to the fact that the frame rates of the input video sources are not synchronized. This may also be due to the fact that the sizes of the frame portions delivered to the video projectors are heterogeneous. This may also be due to the fact that the network transmission delay can reasonably be considered fluctuating (medium access, error management, etc.).
From a coherency perspective, at the beginning of a new composite display period, it may not be possible to make sure that all video projectors will perform the same processing on source frame portions extracted from a same frame of a video source, thereby performing a coherent display of the image portions. The decision is made based on the load conditions of the network reception buffers at the time of each start-of-frame display event.
From a feasibility perspective, the complexity of motion-compensated frame rate conversion methods becomes critical when input source frames are split into portions. Distributed motion compensation becomes difficult to implement.
Thus, there is still a need for improvements to up-conversion methods for increasing the frame rate of video sequences for display. Such improvements are needed in multi-projection systems as well as in point-to-point systems wherein one display device displays a video sequence from a source device.
The invention lies within this context.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method of processing image data for displaying an image sequence, the method comprising: -receiving image data representing at least one portion of a current image of the image sequence, -determining a first time information representative of an expected time of completion of the receipt of said current image based on at least one of an image period of the image sequence and the delay for receiving the image data from a source device; -determining a second time information representative of a scheduled time of display of said current image, -comparing said first and second time information, and -triggering display of said current image or a display of an image based on at least one precedent image, depending on a result of said comparison.
A method according to the first aspect makes it possible to increase the frame rate in a video system in a non-complex fashion.
The completion of the receipt of the current image may be understood as the receipt of all the portions of the current image.
According to embodiments said first time information is received with said at least one portion of said image data.
For example, the images of said image sequence are subdivided into a plurality of image portions, said first time information being associated with one of said image portions and representing an expected time of completion of receipt of said plurality of image portions.
According to embodiments, said scheduled time is determined according to a display frame rate.
For example, said first time information is associated with each image portion of said plurality of image portions.
The image portions may correspond to transmission portions from a transmission scheme from a source device to a display device.
For example, said image portions correspond to a subdivision grid for display of the image sequence by a multi-projection system.
The image based on at least one precedent image may be the precedent image.
For example, said image based on at least one precedent image is a result of a motion estimation process between successive frames of said video sequence.
According to a second aspect of the invention there are provided computer programs and computer program products comprising instructions for implementing methods according to the first aspect of the invention, when loaded and executed on computer means of a programmable apparatus.
According to a third aspect of the invention, there is provided a device configured for implementing methods according to the first aspect.
According to a fourth aspect of the invention, there is provided a multi-projection system comprising a plurality of devices according to the third aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the invention will become apparent from the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which: -Figures 1A-1B schematically illustrate a context of implementation of embodiments; -Figure 2 schematically illustrates a use case of a display organization shared between video projectors; -Figures 3A-3B illustrate exemplary frame rate up-conversion methods; -Figures 4-6 schematically illustrate an architecture of a video projector according to embodiments; -Figures 7-9 are flowcharts of steps performed for computing and extracting timestamps in order to duplicate video frames according to embodiments.
DETAILED DESCRIPTION OF THE INVENTION
In what follows, embodiments of the invention are described.
Figure 1A illustrates a context of implementation of embodiments of the invention.
A source device 1000 delivers frames of a video sequence to a display device 1001. For example, the frames of the video sequence are transmitted through a network 1002. Each frame 1003 of the video sequence is split into portions and each portion is transmitted to the display device. For example, data representing frame 1003 is split into data packets 1004, 1005, 1006, 1007. Each data packet comprises a data portion 1004a, 1005a, 1006a and 1007a along with a timestamp portion 1004b, 1005b, 1006b and 1007b.
Data packet 1004 is transmitted by the source device at a time t0 and is received by the display device at a time t0'; the time difference t0'-t0 depends on the transmission processing by the devices and the network. Similarly, data packets 1005, 1006 and 1007 are transmitted at times t1, t2, t3 and are received at times t1', t2', t3'.
The source device completes the sending of data packet 1007 at time t4. The time between t0 and t4 defines a source frame rate. The source frame rate is the number of frames the source device can send per second. The source frame rate would be here 1/(t4-t0).
The display device may have a display frame rate different from the source frame rate. The display frame rate corresponds to the number of frames the display device is capable of displaying per second. The display frame rate is set higher than the source frame rate.
The display device displays frames received from the source device according to its frame rate. Start of frame signals indicate the times SOF0, SOF1, SOF2, SOF3 at which the display device should display a frame.
In order for the display device to decide whether it has to display a frame received from the source device or a frame previously received, the source device associates with the data portions of the frames a timestamp portion representative of an expected time of completion of the transmission of the frame. The source device may take into account its own frame rate (time t4-t0) and also the latency introduced by the network. Thus, upon receipt of the data packets, the display device can determine whether, upon each start of frame, it will be able to display the frame currently being received. In case it will not be able to do so, it will duplicate a previous frame.
In the context of Figure 1A, at time SOF0, no data packet of frame 1003 has been received; the display device thus displays the frame previously received. At time t0', it receives data packet 1004. It parses the timestamp portion 1004b and determines from it that the receipt of frame 1003, namely the receipt of data packets 1004, 1005, 1006 and 1007, will be completed at time t4'.
The time of receipt completion may be updated in each data packet. Thus, the display device can update its prediction at each receipt of a data packet.
The display device can thus compare the time of receipt completion with each start of frame time. At time SOF1, the transmission of the frame is not completed, thus the display device duplicates the previous frame again. At time SOF2, the transmission of the frame is not completed yet, thus the display device duplicates the previous frame again. However, at time SOF3, the transmission of the frame is completed, thus the display device can display frame 1003.
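The decision just described can be summarised as a simple comparison performed at each start-of-frame event. The sketch below is a minimal illustration, not part of the patent text; the function name, the frame labels and the numeric values are hypothetical.

```python
def frames_to_display(sof_times, expected_completion, previous_frame, current_frame):
    """Choose, for each start-of-frame (SOF) event, which frame to display.

    sof_times           -- display SOF instants (e.g. SOF0..SOF3 of Figure 1A)
    expected_completion -- time t4' announced by the timestamp portions
    previous_frame      -- frame already available at the display device
    current_frame       -- frame whose data packets are currently being received
    """
    chosen = []
    for sof in sof_times:
        if sof >= expected_completion:
            chosen.append(current_frame)   # frame fully received in time: display it
        else:
            chosen.append(previous_frame)  # not complete yet: duplicate the previous frame
    return chosen

# Example matching Figure 1A: completion is expected between SOF2 and SOF3.
print(frames_to_display([0.0, 1.0, 2.0, 3.0], expected_completion=2.4,
                        previous_frame="frame 1002", current_frame="frame 1003"))
# ['frame 1002', 'frame 1002', 'frame 1002', 'frame 1003']
```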
In case the display device determines that it will not receive the complete frame upon the next start of frame, it may display a previous frame as such or display a frame calculated based on previous frames, for example, based on motion estimation calculations.
The timestamp data described hereinabove can be used in multi-projection systems. One issue in multi-projection systems is that all the video projectors have to synchronously display their respective frame portions in order to display the frame portions of a same frame. The use of timestamp data, as compared to the buffering techniques of the prior art, makes it possible to ensure that all the video projectors will make the same decision as to whether to display the frame currently received or duplicate the previous frame.
In the case of a multi-projection system, the timestamps may be associated with the subdivision of each frame according to the subdivision grid defined by the spatial configuration of the video-projectors.
Figure 1B illustrates an exemplary wireless video composite system comprising four video projectors A, B, C and D. Each video projector contributes locally to a quarter of the composite display resolution. The aggregated composite display resolution is thus four times the display resolution of each individual video projector.
Video projector A is connected to a video source application 102b, delivered by a video server 104b. It contributes to display area 103a of composite display 103.
Video projector B is connected to a video source application 102d, delivered by a video server 104d. It contributes to display area 103b of composite display 103.
Video projector C is connected to a video source application 102a, delivered by a video server 104a. It contributes to display area 103c of composite display 103.
Video projector D is connected to a video source application 102c, delivered by a video server 104c. It contributes to display area 103d of global display 103.
Communications between video projectors are operated through a network 110.
Computation of functional elements of each video projector is performed by a controller 105 and delivered using network 110.
Each source frame portion received by a video projector is associated with timestamp information indicative of the availability of the full source frame portion payload in a receipt buffer allocated to this source frame portion data. Each video-projector may have one or several buffers. Several buffers respectively allocated to frame portions displayed by the video projector may be provided for applications such as picture-in-picture as described in what follows.
Upon each common start of frame display within each video projector, and according to the context of the different timestamp information, a decision engine per source frame portion buffer may be designated to deliver either new or duplicated data when the video data corresponding to this source portion is requested during display processing.
Display frames may be constructed dynamically with new or duplicated video data portions using multi-instances of decision engines and independent management of the data portions.
The same timestamp information is delivered to all video projectors sharing portions of a same source. Thus, the decision is the same for all decision engines in the different video projectors involved in displaying a part of the source frame data.
Figure 2 illustrates a use case of display organization within composite display 103 of Figure 1.
The display resolution of each video projector corresponds to an N x M array of Macro Blocs, one Macro Bloc representing, for example, an arrangement of 16x16 pixel elements.
The resolution of video input applications may be variable, up to the composite display resolution of 4 x N x M Macro Blocs.
In the example of Figure 2, all Macro Blocs of video application 102b initially connected to video projector A are rendered by video projector B. Also, video application 102c initially connected to video projector D is rendered partially by video projector A and partially by video projector D. Video application 102d initially connected to video projector A is rendered partially by each video projector. Video application 102a initially connected to video projector C is rendered partially by video projector C and partially by video projector D. The partial rendering, by a video projector, of a portion initially connected to another video projector may correspond to picture-in-picture applications, wherein, while a video is currently displayed, a display area within the images of said video is used for displaying another video.
In order to increase the frame rate of the video sequence displayed, several techniques may be used. Frames may be duplicated as such or more complex processing may be performed.
Figures 3A-3B illustrate such exemplary techniques.
The example of Figure 3A is based on the so-called N:M conversion, N and M respectively representing the source and display frame rates. This alternative consists in periodically repeating a frame coming from the source.
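For illustration only, and assuming a constant rate ratio (the patent does not prescribe this exact mapping), the N:M repetition amounts to mapping each display frame index back to a source frame index:

```python
def nm_duplication(source_rate, display_rate, n_display_frames):
    """Map each display frame index to the source frame repeated for it (plain N:M repetition)."""
    return [int(i * source_rate / display_rate) for i in range(n_display_frames)]

# 24 Hz film shown on a 120 Hz display: every source frame is repeated 5 times.
print(nm_duplication(24, 120, 12))
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
```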
Figure 3B illustrates a more complex approach which can be implemented in order to avoid motion judder when up-converting film-based content from 24 Hz to 120 Hz. Time is shown along the horizontal axis and the position of a moving object is shown along the vertical axis. Straight line 20 indicates the true path of the moving object. At time t=0 the object is at the position labelled A, in frame A. At time t=1, the object is at the position labelled B, in frame B. The original frames A, B of the film are extracted and displayed. Motion estimation and motion compensation techniques, using for instance interpolation, are then used for calculating intermediate frames I1-I4. Such techniques require the full frame content to compute frame interpolation.
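As a worked illustration of Figure 3B (hypothetical positions, assuming purely linear motion between frames A and B), the object position in the intermediate frames I1-I4 follows from a simple interpolation:

```python
def interpolate_positions(pos_a, pos_b, n_intermediate):
    """Linearly interpolate the object position for the intermediate frames."""
    step = (pos_b - pos_a) / (n_intermediate + 1)
    return [pos_a + step * k for k in range(1, n_intermediate + 1)]

# Object at position 0 in frame A (t=0) and position 100 in frame B (t=1),
# with four interpolated frames I1-I4 for a 24 Hz to 120 Hz up-conversion.
print(interpolate_positions(0.0, 100.0, 4))   # [20.0, 40.0, 60.0, 80.0]
```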
Figure 4 depicts an exemplary architecture for a video projector 400 according to embodiments.
A video receiver module 406 is connected to a video data input interface 401. The detection of a "Vsync source IN" signal 402 indicates the receipt of a new frame. The video receiver module is configured to split input video frame data into video frame portions. The video frame portions are then delivered to remote video projectors through a network output interface 408, or to a local video rendering module 405 through an internal link 415.
The video rendering module 405 receives video frame portions through the local link 415 or from other video projectors of the network 110 through the network input interface 409. The video frame portions are combined in order to deliver, through an interface 403, a video stream to a display engine 410. Start of video frames display events are synchronized according to a display signal 404 "Vsync display OUT".
The video receiver module 406 is described in more detail with reference to Figure 5.
A timestamp computing unit 501 is connected to the video input interface. This module computes a new timestamp upon each occurrence of the Vsync source IN signal 402. Timestamp and video frame data are delivered to a FIFO ("First In First Out") buffer 504. A cut processing module 502 can access the buffer 504 in order to extract rectangular video frame portions from the input video frame data, according to a pre-defined configuration stored in a configuration module 503.
For each new video frame period, the computed timestamp reflects, from the current time, the estimated delay for making the last data (pixel) of any frame portion available to any remote video projector connected to network 110. This takes into account that a video frame portion extracted from a video frame payload can be displayed in any location of a rendered video projector frame display. According to embodiments, in order to compute the delay period, the time necessary to have all the pixels of the frame available and ready for transmission is added to the transmission time.
The timestamp can be roughly estimated as two times the frame period. A first time period accounts for the reading/receiving of all the data of one frame (for example from the video source via a dedicated video connection). A second time period accounts for the maximum transmission time. For the second time period, it can reasonably be assumed that the transmission time of one frame is necessarily lower than one frame period (otherwise, buffer overflow may occur). During this time period, all portions would reach their destinations.
The timestamp can be more precisely computed as the input source frame period plus a maximum transfer delay (lower than a frame period), i.e. as the sum of the maximum network transmission delay which is a pre-defined system parameter, and the input frame period.
In a scenario in which the acquisition time of video frames is low compared to the frame periods (for example when reading from local storage), the transfer delay becomes dominant for the determination of the expected time of completion of the receipt of the current frame. In this scenario, the timestamp can be computed based on the maximum network transmission delay only, that is to say that the timestamp can be equal to current time plus the maximum network transmission delay.
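The three estimates discussed above can be summarised as follows (a sketch under the stated assumptions; the function names and numeric values are illustrative, and max_network_delay stands for the pre-defined maximum network transmission delay):

```python
def rough_timestamp(now, frame_period):
    """Rough estimate: one period to acquire the frame, at most one period to transmit it."""
    return now + 2 * frame_period

def precise_timestamp(now, frame_period, max_network_delay):
    """Input frame period plus the maximum network transfer delay (delay < frame period)."""
    return now + frame_period + max_network_delay

def storage_timestamp(now, max_network_delay):
    """When acquisition is negligible (e.g. reading from local storage), only the transfer delay matters."""
    return now + max_network_delay

now, period, delay = 100.0, 1 / 60, 0.008      # hypothetical values, in seconds
print(rough_timestamp(now, period))             # about 100.033
print(precise_timestamp(now, period, delay))    # about 100.025
print(storage_timestamp(now, delay))            # about 100.008
```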
The location of a portion extracted from an input video frame is not pre-determined; for instance, if a portion to be extracted is spatially located at the bottom right of an input video frame, almost one frame period elapses before the video portion is available for transmission to the network.
Each video frame portion is associated with the timestamp previously computed when the beginning of the frame period was detected, and delivered to the network interface 408 or to the local link 415 according to configuration data in module 503.
As illustrated, three portions 505.1, 505.2 and 505.3 having different sizes are delivered through network interface 408, while two portions 505.4 and 505.5 are delivered locally to the internal link 415.
A network synchronization module 530 is configured to control a network synchronization reference used for synchronizing the display rate of all the video projectors of the video composite system 103. One video projector may be designated as a master device for this reference clock. In this case, network synchronization is driven by this master video projector, the others listening to this reference. Nevertheless, any slave video projector may be able to request a display rate acceleration, in accordance with the assumption that display rates are higher than any source input rate. In case an input video source rate is detected as higher than the display frame rate, a request for display frame rate acceleration is generated and sent to the master video projector. The master video projector then modifies the network reference clock in accordance with the acceleration request.
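The master/slave interaction can be pictured with the following sketch (hypothetical class and method names; the patent only requires that a slave detecting a source rate above the current display rate sends an acceleration request and that the master adjusts the network reference clock accordingly):

```python
class MasterProjector:
    """Holds the network reference clock shared by all video projectors."""
    def __init__(self, display_rate_hz):
        self.display_rate_hz = display_rate_hz

    def request_acceleration(self, required_rate_hz):
        # Raise the common display rate so that it stays above every input source rate.
        if required_rate_hz > self.display_rate_hz:
            self.display_rate_hz = required_rate_hz

class SlaveProjector:
    def __init__(self, master):
        self.master = master

    def on_source_rate_detected(self, source_rate_hz):
        # A source faster than the current display rate triggers an acceleration request.
        if source_rate_hz > self.master.display_rate_hz:
            self.master.request_acceleration(source_rate_hz)

master = MasterProjector(display_rate_hz=60)
SlaveProjector(master).on_source_rate_detected(72)
print(master.display_rate_hz)   # 72
```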
The video rendering module 405 of Figure 4 is described in more detail with reference to Figure 6.
Each video frame portion issued from the network input interface 409 or the internal link 415 is delivered to a different video frame renderer 610.1 to 610.p. A network synchronization module 640 receives network synchronization signals from the network 110, in order to define the display rate for each video projector and transmit display signals Vsync display OUT 404.
One video frame portion renderer 610.x comprises: a data buffer 606, a timestamp buffer 625, a token buffer 645, a timestamp extraction module 602, a duplication decision engine 660, a multiplexer 614, a Ready signal 616, and a Selector signal 603.
Within each video frame portion renderer, video frame portion data is pushed in buffer 606 and timestamp information is pushed in timestamp buffer 625.
Addressing of buffer 606 is controlled using a frame boundary scheme.
Timestamp extraction module 602 recovers one by one the timestamps written in buffer 625. Each time a new timestamp is pulled from the buffer 625, a timer is initialized with timestamp information. Once the timer expires, a pulse is generated on Ready signal 616, and a token is written in token buffer 645.
Duplication decision module 660 is in charge of recovering data from buffer 606 and of driving the corresponding data to the merge processing module 650.
Upon receipt of Vsync display OUT signals 404, the module checks if a token is available in token buffer 645. In such a case, it is determined that all source portion data is available in Data buffer 606 at the beginning of the next frame display period. Thus, new data from Data buffer 606 will be delivered to the Merge processing module 650. When there is no token available at the time a new event occurs on signal 404, the same data already delivered during previous display frame periods is delivered to module 650. Selection of new or duplicated data is achieved using Selector signal 603, connected to Data buffer 606 and multiplexer 614. The behavior of the video frame portion renderers 610.1 to 610.p is mutually independent.
The merge processing module 650 is connected to all outputs 615.1 to 615.p of the different video frame portion renderers 610.1 to 610.p. Upon events 404 and according to the video frame portion configuration 620 within the frame display content, video data consumption from the video frame portion renderers is performed along the frame display period. Video frame data is outputted to display engine 410 using interface Display OUT 405 and Vsync Display OUT signal 404.
Figure 7 is a flowchart of steps for computing timestamps according to embodiments. The method may be performed by a control unit of a video receiver.
After an initialization step 701, the process waits for the receipt (event 702) of a synchronization signal Vsync source IN 402 from a source device.
Upon receipt of the signal, the time of receipt is stored during a step 703. The source frame period is then computed during step 704 based on the previous Vsync source IN signal received.
Next, during step 705, a new timestamp is computed. For example, according to embodiments, the source frame period and the maximum network transfer delay are added to the current time of receipt of the synchronization signal. Other embodiments can also be considered for computing the new timestamp, as explained with reference to the description of Figure 5.
Once computed, the timestamp is appended to all video frame portions created from the initial source frame during the source frame period (step 706).
Next, a test 707 is performed in order to check whether the display rate shall be accelerated. The source frame period is compared to the display period. In case the source frame period is higher than the display period, the display rate does not need to be accelerated and the process goes back to awaiting the next Vsync source IN signal event 702. Otherwise, a request for accelerating the display rate is sent to the master video projector during step 708. The process then goes back to awaiting the next Vsync source IN signal event 702.
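A compact sketch of steps 703 to 707 is given below (illustrative names and values; the timestamp variant used here is the source frame period plus the maximum network transfer delay, one of the options described with reference to Figure 5):

```python
def on_vsync_source_in(now, prev_vsync_time, display_period, max_network_delay, portions):
    """Rough sketch of steps 703-708: compute the timestamp for the new source frame,
    append it to every frame portion, and decide whether acceleration must be requested."""
    source_period = now - prev_vsync_time                      # step 704
    timestamp = now + source_period + max_network_delay        # step 705 (one possible variant)
    stamped = [(portion, timestamp) for portion in portions]   # step 706: append to all portions
    accelerate = source_period < display_period                # step 707: source faster than display
    return stamped, accelerate

stamped, accelerate = on_vsync_source_in(
    now=2.000, prev_vsync_time=1.980, display_period=1 / 60,
    max_network_delay=0.005, portions=["P1", "P2", "P3"])
print(stamped)      # each portion paired with a timestamp of about 2.025
print(accelerate)   # False: the 20 ms source period exceeds the ~16.7 ms display period
```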
Figure 8 is a flowchart of steps performed for extracting a timestamp during video rendering.
Each time a new timestamp is extracted from an incoming video portion data, it is written in the timestamp buffer 625 as already described hereinabove with reference to Figure 6.
After an initialization step 800, a test is performed during step 810 until it is detected that the timestamp buffer is not empty. In such a case, the timestamp in the buffer is extracted (step 811) and used for starting a timer (step 812). Once the timer expires (test 813), a "Ready" signal 616 is pulsed before going back to test 810. A new token is thereby written in the token buffer 645, as already described with reference to Figure 6.
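The loop of Figure 8 can be sketched as follows with standard Python queues (illustrative code; the real design reacts to buffer writes, whereas this sketch simply drains the timestamps already present):

```python
import queue
import time

def timestamp_watcher(timestamp_buffer, token_buffer, now=time.monotonic):
    """Pull timestamps one by one, wait for each to expire, then write a token
    into the token buffer (the equivalent of pulsing the Ready signal 616)."""
    while not timestamp_buffer.empty():      # test 810 (simplified to a drain loop)
        ts = timestamp_buffer.get()          # step 811: extract the next timestamp
        remaining = ts - now()
        if remaining > 0:                    # steps 812-813: timer until expiry
            time.sleep(remaining)
        token_buffer.put(True)               # Ready pulse -> token written (cf. buffer 645)

timestamps, tokens = queue.Queue(), queue.Queue()
start = time.monotonic()
for offset in (0.01, 0.02):                  # two timestamps expiring shortly after 'start'
    timestamps.put(start + offset)
timestamp_watcher(timestamps, tokens)
print(tokens.qsize())                        # 2: one token per expired timestamp
```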
Figure 9 is a flowchart of steps performed for deciding whether or not to duplicate a frame. The method may be performed by the duplication decision engine 660 of Figure 6.
After an initialization step 900, during a step 910, a context indicating that no data have ever been delivered from buffer 606 is flagged by setting a variable "Dummy data enable" to true.
Next, a waiting state is entered until a new Vsync Display signal 404 is received (event 911).
Once the Vsync Display signal is detected, a test 912 is performed in order to check whether the Token buffer 645 is empty or not.
In case the token buffer 645 is empty and the variable "Dummy data enable" is true (which indicates that no data for the concerned video source portion have yet been displayed), the decision is made to deliver dummy data to the merge processing for the concerned video source portion during the next display frame period (step 914). Next, the process goes back to awaiting the next event 911.
In case the token buffer 645 is empty and the variable "Dummy data enable" is false (which indicates that data for the concerned video source portion have already been displayed during a previous display frame period), the decision is made to deliver the same previous data to the merge processing for the concerned video source portion during the next display frame period. In this case, duplicated data are delivered to multiplexer 614 using data path 612 (step 915). Next, the process goes back to awaiting the next event 911.
In case the token buffer 645 is not empty, which indicates that all video source portion data associated with the next display frame period is available in Data buffer 606, the decision is made to deliver new data for the concerned video source portion during the next display frame period (step 918).
Variable "Dummy data enable" is set to false for any subsequent display period (step 917). Next, the process goes back to awaiting next event 911.
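The decision of Figure 9 can be summarised by the following sketch (illustrative names; the string "DUMMY" stands for the dummy data delivered while nothing has yet been displayed for the portion):

```python
import queue

def on_vsync_display(token_buffer, data_buffer, last_delivered, dummy_data_enable):
    """One iteration of Figure 9: choose what to deliver to the merge processing
    for the next display frame period. Returns (delivered, last_delivered, dummy_data_enable)."""
    if not token_buffer.empty():                       # test 912: a complete portion is available
        token_buffer.get()
        data = data_buffer.get()                       # step 918: deliver new data
        return data, data, False                       # step 917: dummy data no longer needed
    if dummy_data_enable:                              # nothing ever displayed for this portion
        return "DUMMY", last_delivered, True           # step 914: deliver dummy data
    return last_delivered, last_delivered, False       # step 915: duplicate previous data

tokens, data = queue.Queue(), queue.Queue()
data.put("portion of frame 1003")
delivered, last, dummy = on_vsync_display(tokens, data, None, True)
print(delivered)                                       # DUMMY (no token available yet)
tokens.put(True)
delivered, last, dummy = on_vsync_display(tokens, data, last, dummy)
print(delivered)                                       # portion of frame 1003
```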
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention being not restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.

Claims (23)

  1. A method of processing image data for displaying an image sequence, the method comprising: -receiving image data representing at least one portion of a current image of the image sequence, -determining a first time information representative of an expected time of completion of the receipt of said current image based on at least one of an image period of the image sequence and the delay for receiving the image data from a source device; -determining a second time information representative of a scheduled time of display of said current image, -comparing said first and second time information, and -triggering display of said current image or a display of an image based on at least one precedent image, depending on a result of said comparison.
  2. A method according to claim 1, wherein said first time information is received with said at least one portion of said image data.
  3. A method according to any one of the preceding claims, wherein the images of said image sequence are subdivided into a plurality of image portions, said first time information being associated with one of said image portions and representing an expected time of completion of receipt of said plurality of image portions.
  4. A method according to any one of the preceding claims, wherein said scheduled time is determined according to a display frame rate.
  5. A method according to any one of the preceding claims, wherein said first time information is associated with each image portion of said plurality of image portions.
  6. A method according to any one of the preceding claims, wherein said image portions correspond to transmission portions from a transmission scheme from a source device to a display device.
  7. A method according to any one of the preceding claims, wherein said image portions correspond to a subdivision grid for display of the image sequence by a multi-projection system.
  8. A method according to any one of the preceding claims, wherein said image based on at least one precedent image is the precedent image.
  9. A method according to any of claims 1 to 7, wherein said image based on at least one precedent image is a result of a motion estimation process between successive frames of said video sequence.
  10. A device for processing image data for displaying an image sequence, the device comprising: -a communication unit configured to receive image data representing at least one portion of a current image of the image sequence, and -a control unit configured to determine a first time information representative of an expected time of completion of the receipt of said current image based on at least one of an image period of the image sequence and the delay for receiving the image data from a source device, to determine a second time information representative of a scheduled time of display of said current image, to compare said first and second time information, and to trigger display of said current image or a display of an image based on at least one precedent image, depending on a result of said comparison.
  11. A device according to claim 10, wherein said first time information is received with said at least one portion of said image data.
  12. A device according to any one of claims 10 and 11, wherein the images of said image sequence are subdivided into a plurality of image portions, said first time information being associated with one of said image portions and representing an expected time of completion of receipt of said plurality of image portions.
  13. A device according to any one of claims 10 to 12, wherein said scheduled time is determined according to a display frame rate.
  14. A device according to any one of claims 10 to 13, wherein said first time information is associated with each image portion of said plurality of image portions.
  15. A device according to any one of claims 10 to 14, wherein said image portions correspond to transmission portions from a transmission scheme from a source device to a display device.
  16. A device according to any one of claims 10 to 15, wherein said image portions correspond to a subdivision grid for display of the image sequence by a multi-projection system.
  17. A device according to any one of claims 10 to 16, wherein said image based on at least one precedent image is the precedent image.
  18. A device according to any of claims 10 to 16, wherein said image based on at least one precedent image is a result of a motion estimation process between successive frames of said video sequence.
  19. A multi-projection system comprising a plurality of devices according to any one of claims 10 to 18.
  20. A computer program product comprising instructions for implementing a method according to any one of claims 1 to 9 when the program is loaded and executed by a programmable apparatus.
  21. A non-transitory information storage means readable by a computer or a microprocessor storing instructions of a computer program, for implementing a method according to any one of claims 1 to 9, when the program is loaded and executed by the computer or microprocessor.
  22. A device substantially as hereinbefore described with reference to, and as shown in, Figures 4 to 6 of the accompanying drawings.
  23. A method substantially as hereinbefore described with reference to, and as shown in, Figures 7 to 9 of the accompanying drawings.
GB1411401.1A 2014-06-26 2014-06-26 Frame rate augmentation Active GB2527577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1411401.1A GB2527577B (en) 2014-06-26 2014-06-26 Frame rate augmentation

Publications (3)

Publication Number Publication Date
GB201411401D0 GB201411401D0 (en) 2014-08-13
GB2527577A true GB2527577A (en) 2015-12-30
GB2527577B GB2527577B (en) 2016-09-14

Family

ID=51410185

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1411401.1A Active GB2527577B (en) 2014-06-26 2014-06-26 Frame rate augmentation

Country Status (1)

Country Link
GB (1) GB2527577B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100667824B1 (en) * 2005-10-19 2007-01-11 삼성전자주식회사 Method and apparatus for providing data service included in digital broadcasting
US20120154685A1 (en) * 2010-12-20 2012-06-21 Sachie Yokoyama Video Display Apparatus, Video Display Method and Computer Readable Storage Medium
US20140028656A1 (en) * 2011-04-11 2014-01-30 Sony Corporation Display control apparatus, display control method, and program

Also Published As

Publication number Publication date
GB2527577B (en) 2016-09-14
GB201411401D0 (en) 2014-08-13
