WO1994023416A1 - Multi-source video synchronization - Google Patents
- Publication number
- WO1994023416A1 (PCT/NL1994/000068)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- memory
- field
- line
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/14—Display of multiple viewports
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
- G09G2340/125—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
- G09G2360/123—Frame memory handling using interleaving
Definitions
- the invention relates to multi-source video synchronization.
- each video signal contains line and field synchronization pulses, which are converted to horizontal and vertical deflection signals of a monitor on which the video signal is displayed.
- the major problem is that the line and field synchronization pulses contained in the different video signals do not occur at the same time.
- if one of the video signals is used as reference signal, i.e. the horizontal and vertical deflection signals for a display are derived from this signal, then the following artifacts may appear:
  o Images represented by the other video signals (called the subsignals) will be shifted spatially on the display with respect to the reference signal.
  o Odd and even video fields in the subsignals may be interchanged, which gives rise to visual artifacts like jagged edges and line flicker.
  o If the line and field frequencies of the video subsignals differ from those of the reference video signal, the images represented by these subsignals will crawl across the screen.
  o Finally, so-called cut-line artifacts may become visible, i.e. different parts of the same displayed image originate from different field/frame periods, which can be rather annoying in moving images where some parts of moving objects seem to lag behind.
- video synchronizers are built with frame stores that are capable of delaying video signals by a few samples up to a number of video frame periods.
- One of these video signals is selected as a reference signal and is not delayed. All samples of the other signals are written into frame stores (one store per signal) as soon as the start of a new frame is detected in these signals.
- as soon as the start of a new frame is detected in the reference signal, the read-out of the frame memory is initiated. This way, the vertical synchronization signals contained in the reference and other video signals appear at the same time at the outputs of the synchronization module.
- Fig. 1 illustrates synchronization of a video signal with a reference video signal using a FIFO.
- Fig. 1 shows two independent video signals with their vertical (field) synchronization pulses FP, and the location of read and write pointers in a First-In-First-Out (FIFO) frame store.
- write pointer SW: at the end of the field synchronisation pulses FP of the subsignal (SS), writing of the subsignal samples a,b,c,d,e,f,g into the FIFO starts.
- read pointer SR: read-out of the FIFO starts at the end of the synchronisation pulses FP of the reference signal RS.
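The Fig. 1 mechanism can be sketched in a few lines of Python. This is an illustrative behavioural model, not the patented circuit; the names `synchronize`, `sw_start` and `sr_start` are assumptions introduced here:

```python
from collections import deque

def synchronize(samples, sw_start, sr_start):
    """Model of the Fig. 1 FIFO frame store.

    sw_start: tick at which the subsignal's field pulse FP ends
              (write pointer SW starts writing here).
    sr_start: tick at which the reference signal's field pulse FP ends
              (read pointer SR starts reading here).
    Returns (tick, sample) pairs on the reference time base.
    """
    fifo = deque()
    out = []
    write_idx = 0
    for t in range(sw_start + len(samples) + abs(sr_start - sw_start) + 1):
        if t >= sw_start and write_idx < len(samples):
            fifo.append(samples[write_idx])   # write locked to the subsignal
            write_idx += 1
        if t >= sr_start and fifo:
            out.append((t, fifo.popleft()))   # read locked to the reference
    return out

# The samples a..g emerge delayed by sr_start - sw_start ticks,
# aligned with the reference field start:
aligned = synchronize(list("abcdefg"), sw_start=2, sr_start=5)
```

Every sample is delayed by the constant pointer distance SR − SW, which is exactly the alignment shown in Fig. 1.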
- FIG. 2 illustrates locking fields of a video input signal to opposite fields of reference, by selectively delaying one field of input signal by one line, whereby delay is implemented by delaying the read-out of the FIFO.
- the locking is shown for the case that the read address of the FIFO is manipulated: the displayed image is shifted down by one line. It is also possible to achieve this by manipulating the write address: a line delay in the write will cause upward shifting of the displayed image by one line.
- the left-hand part of Fig. 2 shows the reference video signal RS, the right-hand part of Fig. 2 shows the video subsignal SS. In each part, the frame line numbers are shown at the left side.
- the lines 1,3,5,7,9 are in odd fields, while the lines 2,4,6,8,10 are in even fields.
- the line numbers within the fields (1O, 1E, etc.) are shown at the right side.
- Arrow Al illustrates that the even field of the subsignal SS locks to the odd field of the reference signal RS.
- Arrow A2 illustrates that the odd field of the subsignal SS locks to the even field of the reference signal RS.
- the arrows A3 illustrate the delay of the complete even field of the subsignal SS by one line to correct the interlace disorder.
- a drawback of field inversion is that an additional field-dependent line delay is necessary, which shifts the image up or down by one line whenever a pointer crossing occurs in the next field period. This may become annoying when the numbers of pixels read and written during a field period differ considerably. E.g. a 20% difference for PAL-NTSC synchronization gives rise to a line shift every 5 field periods, i.e. 10 times per second for display at the PAL standard, which is a visually disturbing artifact.
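The quoted figures follow from one line of arithmetic; the 50 Hz PAL field rate is the standard value, the rest is taken from the text:

```python
# A 20 % difference between pixels read and written per field makes the
# read and write pointers cross once every 1/0.2 = 5 field periods; at
# the 50 Hz PAL field rate that is one visible line shift every 0.1 s.
pal_field_rate_hz = 50
rate_difference = 0.20

fields_per_crossing = 1 / rate_difference
shifts_per_second = pal_field_rate_hz / fields_per_crossing
print(fields_per_crossing, shifts_per_second)   # -> 5.0 10.0
```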
- to prevent cut-lines, i.e. the artifact in which different parts of the image originate from different field periods because the read and write address pointers of the memory cross in the visible part of the displayed image, a field skip should be made.
- a first aspect of the invention provides a synchronizing system as defined in claim 1.
- Advantageous embodiments are defined in the dependent claims.
- a system for synchronizing input video signals from a plurality of video sources comprises a plurality of buffering units, each coupled to receive a respective one of the input video signals.
- the buffering units have mutually independent read and write operations. Each buffer write operation is locked to the corresponding video input signal. Each buffer read operation is locked to a system clock.
- the buffering units are substantially smaller than required to store a video signal field.
- the system further comprises a storage arrangement for storing a composite signal composed from the input video signals, and a communication network for communicating data from the buffering units to the storage arrangement, pixel and line addresses of the buffering units and of the storage arrangement being coupled.
- Fig. 1 illustrates synchronization of a video signal with a reference video signal using a FIFO
- Fig. 2 illustrates locking fields of a video input signal to opposite fields of reference, by selectively delaying one field of input signal by one line, whereby delay is implemented by delaying the read-out of the FIFO;
- Fig. 3 shows the overall architecture of the multi-window / multi-source real-time video display system of the invention
- Fig. 4 shows the architecture of an input-buffer module and its local event-list memory/address calculation units
- Fig. 5 shows the improved architecture of a display-segment module and its local event-list memory/address calculation units.
- Fig. 6 shows a display memory architecture for multi-source video synchronization and window composition;
- Fig. 8 shows a reduced frame memory with overlapping ODD/EVEN field sections;
- Fig. 9 shows an example of a controller
- Fig. 10 shows a circuit to obtain X and Y address information from the data stored in a buffer Bi;
- Fig. 11 shows a possible embodiment of a buffer read out control arrangement.
- the properties of a multi-window/multi-source system for real-time video signals are largely determined by its memory components, since most functions in such a system can only be realized with memory.
- memory components are used to implement the following functions:
  o synchronization. Different video images are synchronized by aligning their signal components for horizontal and vertical synchronization. For this purpose a memory of the size of a complete frame/field must be used to delay each additional video signal by an appropriate number of pixels, see US-A-4,797,743, US-A-4,766,506, and US-A-4,249,198. Synchronization is necessary to allow simultaneous processing of different video signals by video algorithms such as fades, wipes and windowing.
- the size of the field/frame memories can be reduced to match the size of the subsampled signals. Note that in this case, the number and size of windows on the screen are no longer variable.
  o positioning. After a video signal has been processed, the resulting image must be put at a certain location on the display. For this purpose a memory is required to shift the image in the horizontal and vertical directions. In the worst case, the size of this memory will be a complete video field. Note that the positioning function can be combined with the synchronization function using the same memory, provided that no video processing on combinations of images is required.
  o time compression.
- section 2 discusses the main advantages and drawbacks of the use of a single display (field) memory in a multi-window / multi-source real-time video display system.
- An architectural concept is reviewed in which the display (field) memory is split into several parts such that it becomes possible to implement most of the memory functions listed above as well as the fast-switch function.
- Section 3 describes the architectural concept of section 2 for multi-window real-time video display. It discusses an efficient geometrical segmentation of the display screen and the mapping of these screen segments into RAM modules that allow for minimal memory overhead and maximum performance.
- Section 4 gives an architecture for multi-window / multi-source real-time video display systems that uses the RAM segmentation derived in section 3.
- a fast Random Access display memory can be used to combine several video signals into a single displayed image.
- the display memory can be used as multi-source video synchronizer, provided that no combined processing is required of the different video signals, and that the memory provides sufficient write-access for the different input signals.
- the necessary time shifting to be done for video synchronization can be obtained by writing to different x,y addresses in the display memory.
- in contrast to the memory-based functions discussed above, prevention of motion artifacts cannot be realized by a display memory with a capacity of only one video field (note that separate field-FIFOs with a fast-switch suffer from the same problem). Therefore, a display memory should be sufficiently large to hold a complete video frame.
- prior-art access and clock-rate problems are solved by splitting the display memory into several separate RAMs of smaller size. If there are N signals to be displayed in N windows, then we use M (M ≥ N) RAMs of size F/M, where F is the size of a complete video frame. This approach solves the access problem if each video signal is written to a different RAM segment of size F/M. Note that if faster RAMs can be used, e.g. ones that allow access by f video sources at the same time, then only M/f RAMs of size f·F/M are required to solve the access problem.
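The segment sizing just described can be sketched numerically. The formulas (M ≥ N segments of F/M pixels, M/f devices of f·F/M pixels for f-fold access) are from the text; the PAL frame size and helper name are illustrative assumptions:

```python
def segment_plan(F, N, M, f=1):
    """Return (number of RAM devices, pixels per device) for a frame of F
    pixels, N sources, M segments and RAMs sustaining f concurrent accesses."""
    assert M >= N, "need at least one segment per source"
    n_rams = M // f              # number of physical RAM devices
    ram_size = f * F // M        # pixels per device
    assert n_rams * ram_size == F  # the plan still covers the full frame
    return n_rams, ram_size

# Example: a 720x576 frame, 6 sources, 6 segments, single-access DRAM:
print(segment_plan(720 * 576, N=6, M=6))        # -> (6, 69120)
# With double-access DRAM (f = 2), half as many devices, each twice as big:
print(segment_plan(720 * 576, N=6, M=6, f=2))   # -> (3, 138240)
```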
- M−1 additional buffer elements buffer the data streams of the M−1 remaining video signals to solve the access conflict (assuming that the number of video sources that can access a segment concurrently equals one). If, during a certain time interval, no video source requires any access to a memory segment, then the data from one of the buffers can be transferred to this segment.
- the size of each buffer in this approach heavily depends on how the screen is subdivided into different geometrical areas, where each screen segment is mapped onto one of the RAM segments. This is the subject of the next section.
- each one of these screen parts is associated with a different RAM segment (M in total) with capacity F/M, where F is the size of a video frame (or video field if no motion artifacts need to be corrected). Addresses within each segment correspond to the x,y coordinates within the associated screen part, which has the advantage that no additional storage capacity for the addresses of pixels needs to be reserved. This property becomes even more important in HD (high definition) display systems, which will appear on the market during the current and next decade and which have four times as many pixels as SD (standard definition) displays.
- the drawback of this approach is that additional memory is required to buffer those video data streams that require access to the same RAM segments at the same time.
- the size of the buffers depends on the maximum length of the time intervals during which concurrent access takes place as well as the time intervals that a segment is not accessed at all. Namely, these latter "free" time intervals must be sufficiently large to flush the contents of the buffer before the next write cycle occurs to this buffer.
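The feasibility condition just stated can be sketched numerically; the helper name and all rates below are illustrative assumptions, not figures from the text:

```python
def buffer_ok(write_rate, flush_rate, busy_time, free_time):
    """True if a segment's free interval suffices to flush the backlog
    accumulated in a buffer during the concurrent-access interval."""
    backlog = write_rate * busy_time          # samples buffered while the
                                              # RAM segment serves another source
    return flush_rate * free_time >= backlog

# e.g. one full line of 720 pixels buffered at pixel rate while the segment
# is busy, then flushed at twice the pixel rate during a half-line gap:
print(buffer_ok(write_rate=1.0, flush_rate=2.0,
                busy_time=720, free_time=360))   # -> True
```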
- the main advantage of this approach is that it allows segments to be accessed at line basis. This enables the application of cheap DRAMs with a write-page-mode, since page-mode addressing allows fast addressing of pixels within a single row (video line) of such a RAM.
- the chosen level of interleaving of video signals will heavily depend on the chosen configuration of screen segments. As has been set out in section 3.2 of the priority application, non-horizontal segmentation strategies only lead to suboptimal solutions: more buffer memory is required than in the case of horizontal segmentation. Therefore, in the sequel only a display memory segmentation based on sub-line interleaving (horizontal segmentation) will be considered.
- the basic architecture of a horizontally segmented display memory, with buffers solving all access conflicts, comprises look-up tables that store the coordinates of window outlines as well as the locations where different windows intersect; these tables are called "event lists".
- when at a certain time instant during a video field - referenced by global line and pixel counters - an event occurs, the event counter(s) increment, and new control signals for the input buffers and switch matrix, as well as new addresses for the RAM segments, are read from the event lists.
- alternatively, the event lists can be substituted by a so-called Z-buffer, see US-A-5,068,650.
- a Z-buffer is a memory that stores a number of "window-access permission" flags for each pixel on the screen. Access-permission flags indicate which input signal must be written at a certain pixel location in one of the display segments, hence determine the source signal of a pixel (buffer identification and switch-control).
- graphics data of only one window can be written to a certain pixel while access is refused to other windows. This way arbitrary window borders and overlapping window patterns can be implemented.
- Z-buffers with "run-length" encoding are used.
- run-length encoding means that for each sequence of pixels on a line, the horizontal start position of the sequence and the number of pixels in it are stored. Consequently, a reduced Z-buffer can be used.
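A run-length encoded Z-buffer line can be sketched as follows; instead of one window-ownership flag per pixel, each line stores (start, length, window-id) runs. The helper names are illustrative, not from the patent:

```python
def encode_line(owners):
    """Compress a per-pixel window-id line into (start, length, id) runs."""
    runs = []
    start = 0
    for x in range(1, len(owners) + 1):
        if x == len(owners) or owners[x] != owners[start]:
            runs.append((start, x - start, owners[start]))
            start = x
    return runs

def owner_at(runs, x):
    """Look up which window owns pixel x (the access-permission flag)."""
    for start, length, wid in runs:
        if start <= x < start + length:
            return wid
    raise IndexError(x)

line = [0] * 4 + [2] * 3 + [0] * 3      # window 2 overlays the background 0
runs = encode_line(line)
print(runs)                              # -> [(0, 4, 0), (4, 3, 2), (7, 3, 0)]
```

Three runs replace ten per-pixel flags here, which is why the reduced Z-buffer needs far less storage than a full per-pixel bit-map.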
- a Z-buffer is equivalent to an event list that stores the horizontal events of each line.
- a true event list based on rectangular windows, can be considered as a two- dimensional Z-buffer with two-dimensional run-length encoding.
- true event lists for rectangular windows
- Z-buffer implementation offers the realization of arbitrary window shapes, since Z-buffers define window borders by means of a (run-length encoded) bit-map.
- with event lists, window borders must be interpolated by the event-generation logic, which requires extensive real-time computations for exotic window shapes.
- a separate input buffer is used for each video signal.
- the number of intersections between windows and the part of the video signal that is displayed in the window determine the number of events per field and thus the length of the event lists. Note that if such a list were maintained for every pixel on the screen, a complete video field memory would be required to store all events. Events are sorted in the same order in which video images are scanned, so that a simple event counter can be used to step from one control mode to the next.
- the overlay hierarchy is added as control information to the different events in the event list.
- the event lists contain information to control the buffers and the RAM segments in the architecture.
- an event-entry in the list must contain a set of N enable signals for the input buffers, and a set of M enable signals for the display segments. Moreover, it must contain display segment addresses as well as a row-address for each display segment.
- event lists are local to display segments and buffers. Then, only the events that are relevant to a specific display segment and/or a buffer will be in its local event-list.
- the local event list of a display segment will contain:
  o one line/pixel coordinate,
  o one row address (or none if the row address is computed in real time),
  o the addresses of the columns inside the current row where writing starts and stops (the stop offset is not strictly necessary),
  o two enable signals (read/write/inhibit for the display segment, and read/inhibit for the buffer from which the display segment will get its data).
- the local event list of a buffer only contains write or inhibit events:
  o one line/pixel coordinate stored per event,
  o an enable signal (write/inhibit) for the buffer. No (row) addresses are needed, since all buffers operate as FIFOs.
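The two event-entry types listed above can be sketched as plain data structures; the field names are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentEvent:
    """Entry in the local event list of a display segment."""
    coord: int                  # line or pixel coordinate of the event
    is_y: bool                  # True: line (Y) event, False: pixel (X) event
    row_addr: Optional[int]     # None if the row address is computed in real time
    col_start: int              # column inside the row where writing starts
    col_stop: Optional[int]     # stop offset (not strictly necessary)
    seg_enable: str             # 'read' | 'write' | 'inhibit' for the segment
    buf_enable: str             # 'read' | 'inhibit' for the source buffer

@dataclass
class BufferEvent:
    """Entry in the local event list of an input buffer (a FIFO, so no addresses)."""
    coord: int                  # line or pixel coordinate of the event
    is_y: bool
    write_enable: bool          # write vs. inhibit

ev = SegmentEvent(coord=120, is_y=True, row_addr=37,
                  col_start=0, col_stop=359,
                  seg_enable='write', buf_enable='read')
print(ev.seg_enable, ev.buf_enable)
```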
- Figs. 3-5 illustrate an embodiment of the invention in which the above considerations have been taken into account.
- Fig. 3 shows the overall architecture of the multi-window / multi-source real-time video display system of the invention.
- Fig. 4 shows the architecture of an input-buffer module and its local event-list memory/address calculation units, which implements the improvements to the display-architecture as described above.
- Fig. 5 shows the improved architecture of a display-segment module and its local event-list memory/address calculation units.
- Fig. 3 shows the overall architecture of the multi-window / multi-source real-time video display system.
- the architecture comprises a plurality of RAMs 601-608 respectively corresponding to adjacent display segments.
- Each RAM has its own local event list and logic.
- Each RAM is connected (thin lines) to a bus comprising buffer-read- enable signals, each line of the bus being connected to a respective I/O buffer 624-636.
- Each I/O buffer 624-636 has its own local event list and logic.
- Each I/O buffer 624-636 is connected to a respective video source or destination VSD. Data transfer between the I/O buffers 624-636 and the RAMs 601-608 takes place through a buffer I/O signal bus (fat lines).
- Fig. 4 shows the architecture of an input-buffer module and its local event-list memory/address calculation units, which implements the improvements to the display architecture as described above.
- a local event list 401 receives from an event-status evaluation and next-event computation (ESEC) unit 403 an event address (ev-addr) and furnishes to the ESEC unit 403 an X/Y event indicator (X/Y-ev-indic) and an event coordinate (ev-coord).
- a global line count Y and a global pixel count X are applied to the ESEC unit 403.
- the ESEC unit 403 also furnishes an event status (ev-stat) to a buffer access control and address computation (BAAC) unit 405, which receives an event type (ev) from the local event list 401.
- the BAAC unit 405 furnishes a buffer write enable (buff-w-en) signal to a buffer 407. At a read enable input, the buffer receives a buffer read enable (buff-r-en) signal.
- the buffer 407 receives a data input (D-in) and furnishes a data output (D-out).
- FIG. 5 shows the improved architecture of a display-segment module and its local event-list memory/address calculation units.
- a local event list 501 receives from an event-status evaluation and next-event computation (ESEC) unit 503 an event address (ev-addr) and furnishes to the ESEC unit 503 an X/Y event indicator (X/Y-ev-indic) and an event coordinate (ev-coord).
- the ESEC unit 503 also furnishes an event status (ev-stat) to a display-segment access control and address computation (DAAC) unit 505, which receives an event type and memory address (ev & mem-addr) from the local event list 501.
- the DAAC unit 505 furnishes a row address (RAM-r-addr) to a RAM segment 507.
- the local event list 501 furnishes a RAM write enable (RAM-w-en) and a RAM read enable (RAM-r-en) to the RAM segment 507, and a buffer address (buff-addr) to an address decoder addr-dec 509 with tri-state outputs (3-S-out) en-1, en-2, en-3, .., en-N connected to read enable inputs of the N buffers.
- the address decoder 509 is connected to a data switch (D-sw) 511 which has N data inputs D-in-1, D-in-2, D-in-3, .., D-in-N connected to the data outputs of the N buffers.
- the data switch 511 has a data output connected to a data I/O port of the RAM segment 507 which is also connected to a tri-state data output (3-S D-out).
- starting from Fig. 3, it is quite easy to extend the architecture to a multi-window real-time video display system with bi-directional access ports, bi-directional switches and bi-directional buffers. This way, the user can decide how many of the I/O ports of the display system must be used for input and how many for output.
- An example is the use of the display memory architecture of Fig. 3 for the purpose of 100 Hz upconversion with median filtering according to G. de Haan, Motion Estimation and Compensation, An integrated approach to consumer display field rate conversion, 1992, pp. 51-53.
- the address calculation units associated with the event lists as indicated in Figs. 4, 5 can be split into two functional parts, which are discussed in the sequel:
  o Event-Status Evaluation and Next-Event Computation (ESEC),
  o Display-segment Memory Access operation Control and Address Calculation (DAAC) for display segments, and Buffer Memory Access operation Control and Address Calculation (BAAC) for input buffers.
- the inputs to this block are the global line/pixel counters, the X or Y coordinate of the current event and a one-bit signal indicating if the current coordinate is of type X or Y.
- the occurrence of a new event is detected if the Y-coordinate of the event equals the value of the line-counter and the X-coordinate equals the pixel-count.
- the event list is sorted on Y and X-values and the ESEC stores an address for the event list that points to the current active event.
- the event-list address-pointer is then incremented to the next event in the list as soon as the X/Y coordinates of the next event match the current line/pixel count.
- the increment rate of a line counter is much lower than the increment rate of a pixel counter. Therefore, it is sufficient to compare the Y-value of the next event in the list only once every line, while the X-coordinate of the next event must be compared for every pixel. For this reason, the events in the event lists contain a single coordinate, which can be a Y or an X coordinate, as well as a flag indicating the type of the event (X or Y).
- when all X-events in a group have become valid (the end of the line is reached), the next Y-event is encountered. At this point, the ESEC must decide whether the next Y-event is valid or not. If it is valid, the address pointer is incremented. However, if the next Y-event is not valid for the current line count, the previous Y-event remains valid and the ESEC resets the address pointer to the first X-event following the previous Y-event in the event list.
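The pointer logic just described can be sketched behaviourally; this is an illustrative software model of the comparison rules (Y checked once per line, X per pixel, reset to the first X-event after the previous Y-event), not the hardware:

```python
def esec_pointer(events, ptr, line, pixel):
    """Advance or reset the event-list pointer for the current counters.

    events: list of ('X', coord) / ('Y', coord) tuples sorted in scan order.
    Returns the new pointer value.
    """
    if ptr >= len(events):
        return ptr
    kind, coord = events[ptr]
    if kind == 'X':
        # X-events are compared against the pixel counter every pixel
        return ptr + 1 if coord == pixel else ptr
    # Y-event: its validity is decided once, at the start of a line
    if pixel != 0:
        return ptr
    if coord == line:
        return ptr + 1                       # next Y-event becomes valid
    # not valid: the previous Y-event still holds, so replay its X-events
    prev_y = max(i for i in range(ptr) if events[i][0] == 'Y')
    return prev_y + 1

events = [('Y', 0), ('X', 5), ('X', 10), ('Y', 2)]
```

For example, at the start of line 1 the Y-event for line 2 is not yet valid, so the pointer falls back to the X-events following the Y-event for line 0; at the start of line 2 it advances past the Y-event.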
- This latter unit uses the event-status to compute a new memory address for the display segment and/or input buffer. This is described below.
- the buffer memory-access control and address calculation unit increments the write pointer address of the input buffer (only if the buffer does not do this itself) and activates the "WRITE-ENABLE" input port of the buffer.
- the BAAC also takes care of horizontal subsampling (if required) according to the Bresenham algorithm, see EP-A-0,384,419.
- the display-segment memory-access control and address-calculation unit (DAAC) of a specific display segment controls the actual transfer of video data from an input buffer to the display segment DRAM. For this purpose it computes the current row address of the memory segment using the row address specified by the current event and the number of iteration cycles.
- the DAAC does the row-address computation according to the Bresenham algorithm of EP-A-0,384,419, so that vertical subsampling is achieved if specified by the user. Furthermore, the DAAC increments the column address of the display segment in case the same event status is evaluated as in the previous event-status evaluation.
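Bresenham-style subsampling can be sketched with an error accumulator that needs only additions and comparisons, which matches the claim below that no multipliers or dividers are required. This shows the generic error-accumulator idea, not the exact formulation of EP-A-0,384,419; the helper name is an assumption:

```python
def bresenham_select(n_src, n_dst):
    """Select n_dst of n_src source lines with evenly distributed skips,
    using only an add-and-compare error accumulator."""
    assert 0 < n_dst <= n_src
    selected = []
    err = 0
    for line in range(n_src):
        err += n_dst
        if err >= n_src:        # accumulator overflow: keep this source line
            err -= n_src
            selected.append(line)
    return selected

# e.g. scale 10 source lines down to 4 display rows:
print(bresenham_select(10, 4))   # -> [2, 4, 7, 9]
```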
- another important function carried out in real time by the DAAC is the so-called flexible row-line partitioning of memory segments. Namely, it is not necessary that rows in the DRAM segments uniquely correspond to parts of a line on the display.
- the DAAC can control this as follows. If the DAAC detects the end of the currently accessed row of the current display segment RAM, it disables the buffer's read output, issues a RAS for the next row of the RAM and resumes writing to the RAM. Note that the algorithms for event generation must also modify the address generation for RAM display segments in case flexible row/line partitioning is required.
- the unique identification of the source-buffer - as specified by the current event - is used to compute the switch-settings. Then, the read or write enable strobe of the display segment RAM is activated and a read or write operation is executed by the display segment.
- the functionality of the ESEC and the B/DAAC can be implemented by simple logic circuits like counters, comparators and adders/subtractors. No multipliers or dividers are needed.
- Section 5 of the priority application contains a detailed computation of the number and the size of the input buffers for horizontal segmentation which is unessential for explaining the operation of the present invention.
- a memory architecture is described that allows the concurrent display of real-time video signals, including HDTV signals, in multiple windows of arbitrary sizes and at arbitrary positions on the screen.
- the architecture allows generation of 100 Hz output signals, progressive scan signals and HDTV signals by using several output channels to increase the total output bandwidth of the memory architecture.
- the display memory architecture can make a trade-off between the required input bandwidth and output bandwidth within the total access bandwidth offered by the architecture. This way one can make an HDTV multi-window I/O system with medium-speed DRAMs.
- the display memory in this architecture is smaller than or equal to one video frame (a so-called reduced video frame) and is built from a small number of page-mode DRAMs. If the maximum access rate of the DRAMs used is f times the video data rate (in pixels/sec), then for N windows, N/f DRAMs are required with a capacity of f·F/N pixels, where F indicates the number of pixels of a reduced video frame.
- the architecture uses N input buffers (one buffer per input signal with write-access rate equal to pixel rate) with a capacity of approximately 3/2 video line per buffer (see section 5 of the priority application).
- for an example configuration with N = 6 Standard Definition video signals (720 pixels per line, 8 bits/pixel for luminance and 8 bits/pixel for color):
- for each display segment, a look-up table with control events (row/column addresses, read-inhibit and write-inhibit strobes for the display segment and a read-inhibit strobe for the input buffer, X- or Y-coordinate) is used, which has a maximum capacity of 4N² + 5N − 5 events.
- for each input buffer, a look-up table with control events (write-inhibit strobe, X- or Y-coordinate) is used, which has a maximum capacity of 6N − 2 events.
- the address calculation for the look-up table is implemented by an event counter, a comparator and some glue logic.
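The quoted table capacities can be evaluated directly; the formulas are from the text, and the printed values are simply those formulas applied to the N = 6 example:

```python
def segment_events(n):
    """Maximum events in a display segment's look-up table for n windows."""
    return 4 * n * n + 5 * n - 5

def buffer_events(n):
    """Maximum events in an input buffer's look-up table for n windows."""
    return 6 * n - 2

# For the N = 6 standard-definition example used in the text:
print(segment_events(6), buffer_events(6))   # -> 169 34
```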
- for synchronization at the subpixel level there are two possibilities: (1) video signals are sampled with a line-locked clock, or (2) video signals are sampled with a constant clock. In case (1), the sample clocks of the different video signals are unrelated, but the distance between the horizontal sync pulse (start of a line) and the sample moments is the same (ignoring jitter) for all video signals. Synchronization of video signals at the subpixel level is then concerned with conversion of the sample data rate (pixel rate) of incoming video signals to another sample rate: the rate at which pixels are displayed on the screen. This can be done with a small FIFO memory or a subpixel interpolator.
- in a first method, the buffers take care of all variations in the read/write frequencies of the buffers, while the read-write address distances in the display memory remain the same until a field or frame skip takes place, i.e. the display memory read-write address pointer distance is changed during the field blanking.
- in a second method, the buffers only care for variations in the line period and are transparent otherwise, which implies that the display memory read-write address pointer distance is changed continuously. This requires somewhat more control effort, but the buffers may be smaller than with the first method.
- Subpixel synchronization can be done with the input buffers of the architecture of Fig. 3 if video data is sampled with a line-locked clock. These buffers can be written at clock rates different from the system clock, while read-out of the buffers occurs at the system clock. Because of this sample rate conversion, the capacity of input buffers must be increased. This increase depends on the maximum sample rate conversion factor that may be required in practical situations.
- let f_source denote the sample rate of an input video source that must be converted to the sample rate f_sys of the display system.
- r_max = max(f_source/f_sys, f_sys/f_source)
- in the worst case, r_max times more samples are read from the buffer than are written to it, or r_max times more samples are written to the buffer than are read from it.
- This cannot go on forever, since the buffer would eventually either underflow or overflow. Therefore, a minimum time period must be identified after which writing or reading (not both) can be stopped, such that the buffer can flush data to prevent overflow or can fill up to prevent underflow.
- For the first buffering method, it is noted that when writing is stopped, samples are lost, while stopping reading causes blank pixels to be inserted in the video stream. In both cases, visual artifacts are introduced. Therefore, the time period after which the buffer is flushed or filled must be as large as possible to reduce the number of visual artifacts to a minimum. On the other hand, if a large time period elapses before flushing or filling takes place, then many samples must be stored in the buffer, which increases the worst-case buffer capacity considerably.
- Each video signal contains synchronization reference points at the start of each line (H-sync) and field (V-sync); hence the most convenient moment to perform such an operation is at the start of a new line or field in the video input stream. This is described in sections 5.2 (pixel-level synchronization) and 5.3 (line-level synchronization). As a consequence, the time period during which writing and reading should not be interrupted must equal the complete visible part of a video line or video field period. In the next two subsections, the worst-case increase of buffer capacity is computed for buffer filling and flushing at field and line rate.
- the complete vertical blanking time is available for filling and flushing of the input buffers. Filling and flushing means (1) interrupting reading from a buffer when a buffer underflow occurs, (2) interrupting writing into the buffer when a buffer overflow occurs, or (3) increasing the read frequency when a buffer overflow occurs.
- If the vertical blanking period is sufficiently large that filling and flushing can be completed, then no loss of visible pixels occurs within a single field period. Note that for vertical (line-level) synchronization of video signals, a periodic drop of a complete field cannot be avoided if the input buffers are of finite size (see section 5.3).
- ΔC_buf = (r_max - 1) * F.
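For illustration only, the worst-case extra buffer capacity from the formula above can be computed as follows; the function name and the concrete rates in the example are assumptions, not figures from the disclosure.

```python
def extra_capacity(f_source, f_sys, pixels_per_field):
    """Worst-case extra buffer capacity Delta_C_buf = (r_max - 1) * F
    for filling/flushing once per field period (F visible pixels)."""
    r_max = max(f_source / f_sys, f_sys / f_source)
    return (r_max - 1) * pixels_per_field

# Example with assumed figures: a 14.1875 MHz source against a
# 13.5 MHz system clock and F = 720 * 288 visible pixels per field.
delta = extra_capacity(14.1875e6, 13.5e6, 720 * 288)
```

Note that the formula is symmetric: a source running faster or slower than the display clock by the same ratio requires the same extra capacity.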
- Resynchronization of the V-sync (start of field) of the incoming video signal with the display V-sync is done at the end of the vertical blanking period (start of new field) of the incoming video signal. This is described in section 5.3.
- a good alternative, which does not cause any visual artifacts and does not increase the required buffer capacity, is to store samples (pixels) during the complete visible part of a video line period (of the incoming video signal) and to do the flushing and filling of buffers during the line blanking period.
- the display architecture of Fig. 3 already uses the line-blanking period to increase the total access to the display memory.
- a significant increase of the filling or flushing time of the input buffers is thus achieved without increasing the total buffer capacity. If more display segments are used (M > 6), the fill/flush interval L/M becomes shorter.
- Horizontal alignment can also be obtained with the input buffers of the display architecture.
- the actual horizontal synchronization is obtained automatically if, a few video lines before the start of each field, the read and write addresses of the input buffers are set to zero. During a complete video field, no samples are lost due to underflow or overflow, while the number of pixels per line is the same for all video input signals (for line-locked sampling) and the display; hence no horizontal or vertical shift can ever occur during a field period. As a consequence, no additional hardware or software is required - as compared to the hardware/software requirements described in the previous subsection - to implement horizontal pixel-level synchronization.
- the number of pixels per line may vary with each line period, which calls for resynchronization on a line basis. This is also the case when line-locked sampling is applied and input buffers are flushed or filled in the line blanking period of the video input signals to prevent underflow or overflow of input buffers during the visible part of each line period. Resynchronization can be obtained by resetting the read address of the input buffer to the start of the line that is currently being written to the input buffers. In case the capacity of the input buffers is just sufficient to prevent under/overflow during a single line period, a periodic line skip cannot be avoided.
- the main drawbacks of this approach are that the I/O access to the display memory is decreased (one horizontal time slot must be reserved for filling/flushing) and that frequent line skips will lead to a less stable image reproduction.
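For illustration, the line-level resynchronization described above (resetting the read address to the start of the line currently being written) can be sketched behaviourally as follows; the class and method names are assumptions, not part of the disclosed arrangement.

```python
class LineSyncBuffer:
    """Sketch of an input buffer whose read pointer can be resynchronized
    to the start of the line currently being written (all names assumed)."""

    def __init__(self, capacity):
        self.mem = [0] * capacity
        self.capacity = capacity
        self.write = 0          # write address (advances with the source clock)
        self.line_start = 0     # write address at the start of the current line
        self.read = 0           # read address (advances with the system clock)

    def write_pixel(self, value):
        self.mem[self.write] = value
        self.write = (self.write + 1) % self.capacity

    def start_of_line(self):
        # H-sync of the incoming signal: remember where the new line begins.
        self.line_start = self.write

    def resync_read(self):
        # Performed in the line blanking period: align the read pointer with
        # the line currently being written (a line skip/repeat may result).
        self.read = self.line_start

    def read_pixel(self):
        value = self.mem[self.read]
        self.read = (self.read + 1) % self.capacity
        return value
```

A periodic line skip, as noted above, corresponds here to `resync_read` jumping the read pointer past samples that were never read.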
- Another possibility is to compute a source-dependent row offset (for vertical alignment) on a field-by-field basis, which can be performed by the address-calculation and control logic of the display segments.
- A field-by-field basis, a line-by-line basis or a pixel-by-pixel basis (in general: an access basis) are also possible.
- the distance between read and write addresses of input buffers must be within a specified range to prevent underflow or overflow from occurring during a single field period.
- the distance between read and write addresses may turn out not to be within the specified range.
- This problem is solved by applying an additional row offset such that no vertical shift is noticed on the screen. All this can be performed with simple logic circuits such as edge detectors, counters, adders/subtracters and comparators. These circuits will be part of the address-calculation and control logic of each display-segment module.
- the synchronization mechanism sketched above is robust enough to synchronize video signals that have a different number of lines per field than is displayed on the screen. Even if the number of lines per field varies with time, synchronization is possible, since the address for the display RAMs is computed and set for each field or each access. If the difference in lines per field is larger than the vertical blanking time, visual artifacts will be visible on the screen (e.g. blank lines).
- the display-memory architecture of Fig. 3 can be used to synchronize a large number of different video sources (for display on a single screen) without requiring an increase of display-memory capacity. It is capable of synchronizing video signals that are sampled with a line-locked or a constant clock whose rates may deviate considerably from the display clock. The allowed deviation is determined by the bandwidth of the display-memory DRAMs, the display clock rate, the number of input signals, and the bandwidth and capacity of the buffers.
- video signals having a different number of lines per field than is displayed on the screen are easily synchronized with the architecture.
- a different vertical offset of incoming video signals can be computed by the controllers of the architecture (4.5) on a field-by-field basis using very simple logic, or by locking the DRAM controllers to incoming signals when they access a specific DRAM.
- multi-source video synchronization with a single display memory arrangement is proposed.
- a significant reduction of synchronization memory is obtained when all video input signals are synchronized by one central "display" memory before they are displayed on the same monitor.
- the central display memory can perform this function, together with variable scaling and positioning of video images within the display memory.
- a composite multi-window image is obtained by simply reading out the display memory.
- a number of aspects are associated with the new approach: 1. A memory with a very high bandwidth, or a memory with many input ports, is required when not only scaled windows must be shown on the display, but also cut-outs of parts of input images in windows. However, memories with many input ports do not exist.
- Fig. 6 shows another display memory architecture for multi-source video synchronization and window composition.
- This system comprises one central display memory comprising several memory banks DRAM-1, DRAM-2, ..., DRAM-M that can be written and read concurrently using the communication network 110 and the input/output buffers 124-136.
- Buffers 124-132 are input buffers, while buffer 136 is an output buffer.
- the sum of the I/O bandwidths of the individual memory banks (DRAMs) 102-106 can be freely distributed over the input and output channels, hence a very high I/O bandwidth can be achieved (aspect 1).
- M = 4 is the number of DRAMs.
- the accessed DRAMs are indicated.
- the horizontal axis indicates time T, starting at the begin-of-line BOL of a video line having L pixel-clock periods, and ending at the end-of-line EOL of the video line.
- Interval LB indicates the line blanking period.
- Interval FP indicates a free part of the line blanking period.
- Intervals L/M last L/M pixels.
- Intervals ->Bout indicate a data transfer to output buffer 136.
- Intervals Bx-> indicate data transfers from the indicated input buffer 124, 128 or 132.
- Fig. 7 shows an example of possible access intervals to the different DRAMs of the display memory for reads and writes such that no access conflicts remain (aspect 3).
- These intervals can be chosen differently, especially if the input buffers are small SRAM devices with two I/O ports, such that the incoming video data can be read out in a different order than it is written in.
- Implementing the small I/O buffers with small SRAMs having two I/O ports and one or two addresses for input and output data is cost-effective. Just one address for either input or output is sufficient to allow a different read/write order, while the other port is just a serial port.
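The conflict-free interleaving of Fig. 7 can be illustrated with a hypothetical rotation schedule in which each video line is split into M slots of L/M pixel times and, in every slot, each DRAM bank is granted to at most one channel. The rotation below is an assumption for illustration, not the exact schedule of the figure.

```python
def access_schedule(M):
    """schedule[slot][dram] -> channel name, for M DRAM banks and M
    channels (one display output ->Bout plus M-1 input buffers Bi->).
    Each row is one L/M time slot of the line period."""
    channels = ["->Bout"] + ["B%d->" % i for i in range(1, M)]
    # Rotating the assignment by one DRAM per slot guarantees that no
    # two channels access the same DRAM in the same slot, and that every
    # DRAM serves every channel exactly once per line.
    return [[channels[(dram - slot) % M] for dram in range(M)]
            for slot in range(M)]
```

With M = 4 this yields a 4x4 table in which every row and every column is a permutation of the four channels, i.e. no access conflicts remain.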
- one DRAM can be used for the display memory if it is sufficiently fast (e.g. synchronous DRAM or Rambus DRAM as described by Fred Jones et al., "A new Era of Fast Dynamic RAMs", IEEE spectrum, pages 43-49, October 1992).
- the "small" input buffers (approximately L/M to L pixels, where L is the number of pixels per line and M is the number of DRAM memory modules in Fig. 6) take care of the sample rate conversion, allowing different read/write clocks (aspect 2).
- the write and read pixel rates may be different and also the number of pixels being written and read per field period may be different.
- the number of clock cycles per line during which no accesses to the display memory occur can be used to perform additional writes to the display memory (and additional reads from the buffers).
- a large time slot (L/M pixel times, see Fig. 7) can be used within each line period if one of the input channels is removed in e.g. Fig. 6.
- o Buffers can be small FIFOs or multiport (static) RAMs (smaller than 1 video line).
- Different interleaving strategies can be applied for the display memory: pixel by pixel, segments of pixels by segments of pixels or line by line, which still result in small input buffers as has been described by A.A.J. de Lange and G.D. La Hei, "Low-cost Display Memory Architectures for Full-motion Video and Graphics", IS&T/SPIE High-Speed Networking and Multimedia Computing Conference, San Jose, USA, February 6-10, 1994.
- the display memory can be a single DRAM with one I/O port only if it is sufficiently fast, or consist of a number of banks of DRAMs (DRAM segments), with one I/O port each, to increase the access rate of the display memory.
- DRAM segments
- multiported DRAMs can also be used. In this case fewer DRAMs are required to achieve the same I/O bandwidth.
- o The DRAM I/O port can also be a serial port. It must however be row-addressable, as is the case for Video RAM (VRAM). o If the DRAMs of the display memory have a page mode, this can be fully exploited.
- a single FIFO cannot be used for synchronization of multiple video input signals, since (1) each input video signal must be written at a different address in the FIFO, depending on its screen position, and (2) address switching must be done on a pixel or line basis to limit the size of the input buffers.
- the capacity of a buffer is in the order of 1/M-th of a line (M is the number of DRAMs in the display memory) up to a full video line for M-DRAM banks, see copending application (Atty. docket PHN 14.791).
- o According to Fig. 7, all input video signals can obtain access to the display memory during the complete line period; hence access is possible independently of the differences in line and pixel positions between incoming and outgoing video signals.
- Both horizontal and vertical synchronization and positioning can be obtained by synchronizing the address generators of the display memory DRAM-banks to the pixel/line positions of incoming video signals on the basis of the access-intervals, see e.g. Fig. 7, which typically occurs for segments of contiguous pixels, where the segment size varies between L/M and 2L (i.e. 2 video lines) pixels.
- Control generators for input buffers are locked to the sync signals of the incoming video signals. They are not only locked to the pixel and line positions, but also to the pixel-clocks of the incoming video signals: one write-controller per input buffer.
- Cut-line artifacts can be prevented using a frame memory (2 fields): in case a cross of read/write address pointers is about to happen in the next field period, then writing of the new field is redirected to the same field-part of the frame memory, causing a field skip.
- the ODD-field part of the frame memory is written with the EVEN field of the incoming video signal and the EVEN-field part of the frame memory is written with the ODD-field of the incoming video signal.
- field inversion is required, which is implemented with a field-dependent line delay. Note that such a line delay is easily implemented with the display memory by incrementing or decrementing the address generators of the display memory DRAMs by one line.
- Fig. 8 shows a reduced frame memory rFM with overlapping ODD/EVEN field sections.
- the read address is indicated by RA.
- the first line of the even field is indicated by 1-E, while the last even-field line is indicated by l-E.
- the first line of the odd field is indicated by 1-O, while the last odd-field line is indicated by l-O.
- the position of ODD and EVEN field parts in the display memory is no longer fixed and ODD and EVEN field parts overlap each other.
- the number of field-skips per second will be higher in the case of a reduced frame memory than it is in case of a full frame memory.
- the size of the reduced frame memory should be chosen sufficiently large to reduce the number of field skips per second to an acceptable level. This is also highly dependent on the differences in pixel/line/field rates between the different video input signals and the reference signal. A logical consequence is that the display memory should consist of many frame memories to bring down the number of field skips per second. On the other hand, the display memory can be reduced considerably if the differences in pixel, line and field rates between incoming and outgoing signals are small enough to ensure that the number of field skips per second is low.
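As a rough, assumption-laden sketch of the trade-off above: the read/write pointers drift apart at the field-rate mismatch, and a skip is needed each time the accumulated drift exhausts the memory slack. The function name and the "slack in field periods" model are illustrative assumptions.

```python
def field_skips_per_second(f_field_in, f_field_out, slack_fields):
    """Estimate the field-skip rate: fields of pointer drift per second
    divided by the spare capacity of the (reduced) frame memory,
    expressed in field periods, available before the pointers cross."""
    drift = abs(f_field_in - f_field_out)  # fields of drift per second
    return drift / slack_fields
```

For example, a 0.1 Hz field-rate mismatch against one spare field period would force roughly one skip every ten seconds; doubling the memory slack halves the skip rate, which is the sizing argument made above.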
- Fig. 8 also shows an example of what happens if the write address is moved up by one field in the reduced frame memory.
- the actual size of the reduced 3-field memory can be chosen such that it matches well the number of rows in available memory devices (always 2^N, with N an integer).
- a field or frame skip is done by redirecting the WRITING of a new field of an incoming video signal (via an input buffer) to another part of the display memory, where the number of lines between the read and write address pointers is increased by one field or frame. Note that for a circular reduced memory, increasing the distance between read and write will decrease the distance between write and read, see Fig. 8. o A field/frame skip is done when, during the previous field/frame period, a "cross" of the read/write address pointers is predicted. o Prediction of a cross is simply implemented by monitoring the number of lines/pixels between the read/write address pointers and the differential of that number between subsequent video field/frame periods.
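The cross prediction and skip described above can be sketched as follows; the function names, the monitoring in line units, and the linear extrapolation are assumptions for illustration.

```python
def predict_cross(distance_now, distance_prev):
    """Predict whether the read/write address pointers will cross during
    the next field period, by extrapolating the line distance between
    the pointers from the last two field periods."""
    differential = distance_now - distance_prev
    return distance_now + differential <= 0

def next_write_base(write_base, lines_per_field, memory_lines, do_skip):
    """On a predicted cross, move the write pointer up by one field in
    the circular (reduced) frame memory; otherwise leave it unchanged."""
    if do_skip:
        return (write_base + lines_per_field) % memory_lines
    return write_base
```

For instance, a distance that shrank from 30 to 10 lines over one field period is extrapolated to -10 lines, so a skip is scheduled before the pointers can actually cross.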
- Overlay encoding. As has been mentioned in the copending application (Atty. docket PHN 14.791) claiming the same priority, run-length encoding is preferably used to encode the overlay for multiple overlapping windows, each relating to a particular one of a plurality of image signals. Coordinates of a boundary of a particular window and a number of pixels per video line falling within the boundaries of the particular window are stored. This type of encoding is particularly suitable for video data in raster-scan formats, since it allows sequential retrieval of overlay information from a run-length buffer. Run-length encoding typically decreases memory requirements for overlay-code storage, but typically increases the control complexity.
- Run-length encoding. In the case of a one-dimensional run-length encoding, a different set of run-lengths is created for each line of the compound image. For two-dimensional run-length encoding, run-lengths are made for both the horizontal and the vertical directions in the compound image, resulting in a list of horizontal run-lengths that are valid within specific vertical screen intervals. This approach is particularly suitable for rectangular windows, as is explained below.
- a disadvantage associated with this type of encoding resides in a relatively large difference between the peak performance and the average performance of the controller. On the one hand, fast control generation is needed if events rapidly follow one another; on the other hand, the controller is allowed to idle in the absence of events.
- the controller in the invention comprises: a run-length encoded event table, a control-signal generator for the supply of control signals, and a cache memory between an output of the table and an input of the generator to store functionally successive run-length codes retrieved from the table.
- a minimal-sized buffer stores a small number of commands such that the control state (overlay) generator can run at an average speed.
- the buffer enables the low-level high-speed control-signal generator to handle performance peaks when necessary.
- Controller 1000 includes a run-length/event buffer 1002 that includes a table of two-dimensional run-length encoded events, e.g., boundaries of the visible portions of the windows (events) and the number of pixels and/or lines (run-length) between successive events.
- In the raster-scan format of full-motion video signals, pixels are written consecutively to every next address location in the display memory of monitor 134, from the left to the right of the screen, and lines of pixels follow one another from top to bottom.
- the number of the line Yb coinciding with a horizontal boundary of a visible portion of a particular rectangular window and first encountered in the raster scan is listed, together with the number #W0 of consecutive pixels, starting at the leftmost pixel, that do not belong to the particular window. This fixes the horizontal position of the left-hand boundary of the visible portion of the particular window.
- Each line Yj within the visible portion of the particular window can now be coded by dividing the visible part of line Yj into successive and alternate intervals of pixels that are to be written and are not to be written, thus taking account of overlap.
- the division may result in a number #W1 of the first consecutive pixels to be written, a number #NW2 of the next consecutive pixels not to be written, a number #W3 of the following consecutive pixels to be written (if any), a number #NW4 of the succeeding pixels not to be written (if any), etc.
- the last line Yt, coinciding with the horizontal boundary of the particular window or of a coherent part thereof and last encountered in the raster scan, is listed in the table of buffer 1002 as well.
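The run-length table entries for one line, as described above, can be illustrated as follows; the interval conventions and the function name are assumptions for illustration.

```python
def encode_line(window, gaps):
    """window: (start, end) pixel interval of the particular window on
    this line; gaps: sorted, non-overlapping (start, end) intervals of
    the window hidden by overlapping windows. Returns the alternating
    run-lengths [#W0, #W1, #NW2, #W3, ...], where #W0 counts the leading
    pixels that do NOT belong to the window."""
    start, end = window
    runs = [start]              # #W0: pixels left of the window
    x = start
    for g0, g1 in gaps:
        runs.append(g0 - x)     # #W: pixels to be written before the gap
        runs.append(g1 - g0)    # #NW: overlapped pixels, not to be written
        x = g1
    runs.append(end - x)        # #W: remaining pixels to be written
    return runs
```

For a window spanning pixels 10 to 60 with pixels 20 to 30 hidden by an overlapping window, this yields [10, 10, 10, 30]: ten pixels outside the window, ten written, ten skipped, thirty written.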
- Buffer 1002 supplies these event codes to a low-level high-speed control generator 1004 that thereupon generates appropriate control signals, e.g., commands (read, write, inhibit) to govern input buffers 124, 128 or 132 or addresses and commands to control memory modules 102-106 or bus access control commands for control of bus 108 via an output 1006.
- a run-length counter 1008 keeps track of the number of pixels still to go until the next event occurs. When counter 1008 arrives at zero run-length, generator 1004 and counter 1008 must be loaded with a new code and new run-length from buffer 1002.
- a control-state evaluator 1010 keeps track of the current pixel and line in the display memory via an input 1012.
- Input 1012 receives a pixel address "X" and a line address "Y" of the current location in the display memory. As long as the current Y-value has not reached the first horizontal boundary Yb of the visible part of the particular window, no action is triggered and no write or read commands are generated by generator 1004.
- When the current Y-value reaches Yb, the relevant write and not-write numbers #W and #NW as specified above are retrieved from the table in buffer 1002 for supply to generator 1004 and counter 1008. This is repeated for all Y-values until the last horizontal boundary Yt of the visible rectangular window has been reached. For this reason, the current Y-value at input 1012 has to be compared with the Yt value stored in the table of buffer 1002. When the current Y-value has reached Yt, the handling of the visible portion of the particular window constituted by consecutive lines is terminated. A plurality of values Yb and Yt can be stored for the same particular window, indicating that the particular window extends vertically beyond an overlapping other window.
- Evaluator 1010 then activates the corresponding new overlay/control state for transmission to generator 1004.
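How counter 1008 and generator 1004 consume the run-length codes can be sketched as follows: the counter counts pixels down, and at zero the next code and run-length are taken from the table. The command labels are assumptions for illustration.

```python
def generate_commands(runs):
    """runs: the alternating run-lengths [#W0, #W1, #NW2, ...] for one
    line. Yields one command per pixel clock: 'idle' for the #W0 pixels
    before the window, then alternately 'write' / 'skip' for the
    successive #W / #NW intervals (a new code is loaded at each zero)."""
    labels = ["idle"]
    while len(labels) < len(runs):
        labels += ["write", "skip"]
    for label, length in zip(labels, runs):
        for _ in range(length):  # counter 1008 counting down to zero
            yield label
```

For runs [2, 3, 1, 2] this emits two idle cycles, three writes, one skip and two writes, i.e. one command per pixel of the line segment covered by the table entry.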
- the control state can change very fast if several windows are overlapping one another whose left or right boundaries are closely spaced to one another. For this reason, a small cache 1014 is coupled between buffer 1002 and generator 1004. The cache size can be minimized by choosing a minimal width for a window.
- the minimal window size can be chosen such that there is a large distance (in the number of pixels) between the extreme edges of a window gap, i.e., the part of a window being invisible due to the overlap by another window.
- If a local low-speed control-state evaluator 1010 is used for each I/O buffer 124, 128, 132 and 136 or for each memory module 102-106, then the transfer of commands should occur during the invisible part of the window, i.e., while it is overlapped by another window. As a result, the duration of the transfer time interval is maximized.
- the interval is at least equal to the number of clock cycles required to write a window having a minimal width.
- Two commands are transferred to the cache: one giving the run-length of the visible part of the window (shortest run-length) that starts when the current run-length is terminated, and one giving the run-length of the subsequent invisible part of the same window (longest run-length).
- the use of cache 1014 thus renders controller 1000 suitable to meet the peak performance requirements.
- the same controller can also be used to control respective ones of the buffers 124-136 if the generator 1004 is modified to furnish buffer write-enable signals via output 1006.
- Fig. 10 shows a circuit to obtain X and Y address information from the data stored in a buffer (Bi) 1020.
- Incoming video data is applied to the buffer 1020, whose write clock input W receives a pixel clock signal of the incoming video data.
- Read- out of the buffer 1020 is clocked by the system clock SCLK applied to a read clock input R of the buffer 1020.
- a horizontal sync detector 1022 is connected to an output of the buffer 1020 to detect horizontal synchronization information in the buffer output signal.
- the video data in the buffer 1020 includes reserved horizontal and vertical synchronization words.
- Detected horizontal synchronization information resets a pixel counter (PCNT) 1024 which is clocked by the system clock SCLK and which furnishes the pixel count X.
- a vertical sync detector 1026 is connected to the output of the buffer 1020 to detect vertical synchronization information in the buffer output signal.
- Detected vertical synchronization information resets a line counter (LCNT) 1028 which is clocked by the detected horizontal synchronization information and which furnishes the line count Y.
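The behaviour of counters 1024 (PCNT) and 1028 (LCNT) of Fig. 10 can be sketched as follows; the class structure and method names are assumptions for illustration.

```python
class AddressCounters:
    """Behavioural sketch of Fig. 10: the pixel counter is clocked by
    SCLK and reset by detected H-sync; the line counter is clocked by
    the detected H-sync and reset by detected V-sync."""

    def __init__(self):
        self.x = 0  # pixel count X (PCNT 1024)
        self.y = 0  # line count Y (LCNT 1028)

    def tick(self, h_sync, v_sync):
        """Called once per system-clock cycle with the detector outputs."""
        if v_sync:
            self.y = 0          # V-sync resets the line counter
        if h_sync:
            self.x = 0          # H-sync resets the pixel counter
            if not v_sync:
                self.y += 1     # ...and clocks the line counter
        else:
            self.x += 1         # SCLK advances the pixel counter
```

The pair (x, y) then tracks the pixel and line address of the data currently read from buffer 1020, as used by evaluator 1010.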
- Fig. 11 shows a possible embodiment of a buffer read-out control arrangement.
- the incoming video signal is applied to a synchronizing information separation circuit 1018 having a data output which is connected to the input of the buffer 1020.
- a pixel clock output of the synchronizing information separation circuit 1018 is applied to the write clock input W of the buffer 1020 and to an increase input of an up/down counter (CNT) 1030.
- the system clock is applied to a read control (R CTRL) circuit 1032 having an output which is connected to the read clock input R of the buffer 1020 and to a decrease input of the counter 1030.
- the counter 1030 thus counts the number of pixels contained in the buffer 1020.
- an output (>0) of the counter 1030 which indicates that the buffer is not empty, is connected to an enable input of the read control circuit 1032, so that the system clock SCLK is only conveyed to the read clock input R of the buffer 1020 if the buffer 1020 contains pixels whilst reading from the buffer 1020 is disabled if the buffer is empty.
- Overflow of the buffer 1020 can be avoided if the read segments shown in Fig. 7 are made slightly larger than L/M. It will be obvious from Figs. 10 and 11 that the circuits shown can be nicely combined into one circuit.
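The gated read-out of Fig. 11 can be sketched behaviourally as follows: an up/down counter tracks the buffer occupancy, and the read clock is only conveyed while the counter is above zero. The class structure is an assumption for illustration.

```python
from collections import deque

class GatedFifo:
    """Behavioural sketch of buffer 1020 with up/down counter 1030 and
    read control 1032: reading is disabled while the buffer is empty."""

    def __init__(self):
        self.fifo = deque()
        self.count = 0          # up/down counter CNT 1030

    def write(self, pixel):
        # Clocked by the incoming pixel clock: store and count up.
        self.fifo.append(pixel)
        self.count += 1

    def read(self):
        # Attempted on every system clock SCLK; the read control R CTRL
        # only passes the clock while the counter output (>0) is active.
        if self.count > 0:
            self.count -= 1
            return self.fifo.popleft()
        return None             # buffer empty: reading disabled
```

A read attempt on an empty buffer simply returns nothing, mirroring the inhibited read clock of the figure.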
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/335,805 US5517253A (en) | 1993-03-29 | 1994-03-29 | Multi-source video synchronization |
JP6521944A JPH07507883A (en) | 1993-03-29 | 1994-03-29 | Multi-source video synchronization system |
EP94913205A EP0642690B1 (en) | 1993-03-29 | 1994-03-29 | Multi-source video synchronization |
DE69411477T DE69411477T2 (en) | 1993-03-29 | 1994-03-29 | VIDEOSYNCHRONIZING MULTIPLE SOURCES. |
PCT/IB1995/000148 WO1995026605A2 (en) | 1994-03-29 | 1995-03-09 | Image display system and multiwindow image display method |
EP95909089A EP0700561B1 (en) | 1994-03-29 | 1995-03-09 | Image display system and multiwindow image display method |
JP7525068A JPH08511358A (en) | 1994-03-29 | 1995-03-09 | Image display system and multi-window image display method |
DE69521574T DE69521574T2 (en) | 1994-03-29 | 1995-03-09 | IMAGE DISPLAY SYSTEM AND MULTI-WINDOW IMAGE DISPLAY METHOD |
US08/407,421 US5777687A (en) | 1994-03-29 | 1995-03-17 | Image display system and multi-window image display method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP93200895.6 | 1993-03-29 | ||
EP93200895 | 1993-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1994023416A1 true WO1994023416A1 (en) | 1994-10-13 |
Family
ID=8213725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/NL1994/000068 WO1994023416A1 (en) | 1993-03-29 | 1994-03-29 | Multi-source video synchronization |
Country Status (5)
Country | Link |
---|---|
US (2) | US5517253A (en) |
EP (1) | EP0642690B1 (en) |
JP (2) | JPH0792952A (en) |
DE (2) | DE69422324T2 (en) |
WO (1) | WO1994023416A1 (en) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5519449A (en) * | 1991-09-17 | 1996-05-21 | Hitachi, Ltd. | Image composing and displaying method and apparatus for displaying a composite image of video signals and computer graphics |
US8821276B2 (en) | 1992-05-22 | 2014-09-02 | Bassilic Technologies Llc | Image integration, mapping and linking system and methodology |
US5553864A (en) * | 1992-05-22 | 1996-09-10 | Sitrick; David H. | User image integration into audiovisual presentation system and methodology |
US6469741B2 (en) | 1993-07-26 | 2002-10-22 | Pixel Instruments Corp. | Apparatus and method for processing television signals |
EP0700561B1 (en) * | 1994-03-29 | 2001-07-04 | Koninklijke Philips Electronics N.V. | Image display system and multiwindow image display method |
US5883676A (en) * | 1994-11-28 | 1999-03-16 | Sanyo Electric Company, Ltd. | Image signal outputting apparatus |
US5710595A (en) * | 1994-12-29 | 1998-01-20 | Lucent Technologies Inc. | Method and apparatus for controlling quantization and buffering for digital signal compression |
WO1997039437A1 (en) * | 1996-04-12 | 1997-10-23 | Intergraph Corporation | High-speed video frame buffer using single port memory chips where pixel intensity values for display regions are stored at consecutive addresses of memory blocks |
AU6882998A (en) * | 1997-03-31 | 1998-10-22 | Broadband Associates | Method and system for providing a presentation on a network |
US7490169B1 (en) | 1997-03-31 | 2009-02-10 | West Corporation | Providing a presentation on a network having a plurality of synchronized media types |
US7412533B1 (en) * | 1997-03-31 | 2008-08-12 | West Corporation | Providing a presentation on a network having a plurality of synchronized media types |
US7143177B1 (en) | 1997-03-31 | 2006-11-28 | West Corporation | Providing a presentation on a network having a plurality of synchronized media types |
US6278645B1 (en) | 1997-04-11 | 2001-08-21 | 3Dlabs Inc., Ltd. | High speed video frame buffer |
US6020900A (en) * | 1997-04-14 | 2000-02-01 | International Business Machines Corporation | Video capture method |
US6177922B1 (en) | 1997-04-15 | 2001-01-23 | Genesis Microship, Inc. | Multi-scan video timing generator for format conversion |
US6069606A (en) * | 1997-05-15 | 2000-05-30 | Sony Corporation | Display of multiple images based on a temporal relationship among them with various operations available to a user as a function of the image size |
US6286062B1 (en) | 1997-07-01 | 2001-09-04 | Micron Technology, Inc. | Pipelined packet-oriented memory system having a unidirectional command and address bus and a bidirectional data bus |
US6032219A (en) * | 1997-08-01 | 2000-02-29 | Garmin Corporation | System and method for buffering data |
KR100299119B1 (en) * | 1997-09-30 | 2001-09-03 | 윤종용 | PC possessing apparatus for controlling flash ROM |
KR100287728B1 (en) * | 1998-01-17 | 2001-04-16 | 구자홍 | System and method for synchronizing video frames |
US6697632B1 (en) | 1998-05-07 | 2004-02-24 | Sharp Laboratories Of America, Inc. | Multi-media coordinated delivery system and method |
DE19843709A1 (en) * | 1998-09-23 | 1999-12-30 | Siemens Ag | Image signal processing for personal computer or television monitor |
US6792615B1 (en) * | 1999-05-19 | 2004-09-14 | New Horizons Telecasting, Inc. | Encapsulated, streaming media automation and distribution system |
US6447450B1 (en) * | 1999-11-02 | 2002-09-10 | Ge Medical Systems Global Technology Company, Llc | ECG gated ultrasonic image compounding |
DE19962730C2 (en) * | 1999-12-23 | 2002-03-21 | Harman Becker Automotive Sys | Video signal processing system or video signal processing method |
EP1402334B1 (en) * | 2001-06-08 | 2006-05-31 | xSides Corporation | Method and system for maintaining secure data input and output |
US7007025B1 (en) * | 2001-06-08 | 2006-02-28 | Xsides Corporation | Method and system for maintaining secure data input and output |
KR20040020082A (en) * | 2001-08-06 | 2004-03-06 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Method and device for displaying program information in a banner |
JP2003060974A (en) * | 2001-08-08 | 2003-02-28 | Hitachi Kokusai Electric Inc | Television camera |
JP3970716B2 (en) * | 2002-08-05 | 2007-09-05 | 松下電器産業株式会社 | Semiconductor memory device and inspection method thereof |
US20040075741A1 (en) * | 2002-10-17 | 2004-04-22 | Berkey Thomas F. | Multiple camera image multiplexer |
US20040174998A1 (en) * | 2003-03-05 | 2004-09-09 | Xsides Corporation | System and method for data encryption |
US20050021947A1 (en) * | 2003-06-05 | 2005-01-27 | International Business Machines Corporation | Method, system and program product for limiting insertion of content between computer programs |
US20050010701A1 (en) * | 2003-06-30 | 2005-01-13 | Intel Corporation | Frequency translation techniques |
US7983160B2 (en) * | 2004-09-08 | 2011-07-19 | Sony Corporation | Method and apparatus for transmitting a coded video signal |
KR101019482B1 (en) * | 2004-09-17 | 2011-03-07 | 엘지전자 주식회사 | Apparatus for changing a channel in Digital TV and Method for the same |
US7908080B2 (en) | 2004-12-31 | 2011-03-15 | Google Inc. | Transportation routing |
US8077974B2 (en) | 2006-07-28 | 2011-12-13 | Hewlett-Packard Development Company, L.P. | Compact stylus-based input technique for Indic scripts |
US8102470B2 (en) * | 2008-02-22 | 2012-01-24 | Cisco Technology, Inc. | Video synchronization system |
US9124847B2 (en) * | 2008-04-10 | 2015-09-01 | Imagine Communications Corp. | Video multiviewer system for generating video data based upon multiple video inputs with added graphic content and related methods |
US8363067B1 (en) * | 2009-02-05 | 2013-01-29 | Matrox Graphics, Inc. | Processing multiple regions of an image in a graphics display system |
US20110119454A1 (en) * | 2009-11-17 | 2011-05-19 | Hsiang-Tsung Kung | Display system for simultaneous displaying of windows generated by multiple window systems belonging to the same computer platform |
US8390743B2 (en) * | 2011-03-31 | 2013-03-05 | Intersil Americas Inc. | System and methods for the synchronization and display of video input signals |
JP2014052902A (en) * | 2012-09-07 | 2014-03-20 | Sharp Corp | Memory controller, portable terminal, memory control program and computer readable recording medium |
CN103780920B (en) * | 2012-10-17 | 2018-04-27 | 华为技术有限公司 | Handle the method and device of video code flow |
US9485294B2 (en) * | 2012-10-17 | 2016-11-01 | Huawei Technologies Co., Ltd. | Method and apparatus for processing video stream |
US9285858B2 (en) * | 2013-01-29 | 2016-03-15 | Blackberry Limited | Methods for monitoring and adjusting performance of a mobile computing device |
US20220377402A1 (en) * | 2021-05-19 | 2022-11-24 | Cypress Semiconductor Corporation | Systems, methods, and devices for buffer handshake in video streaming |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4907086A (en) * | 1987-09-04 | 1990-03-06 | Texas Instruments Incorporated | Method and apparatus for overlaying a displayable image with a second image |
US5068650A (en) * | 1988-10-04 | 1991-11-26 | Bell Communications Research, Inc. | Memory system for high definition television display |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4101926A (en) * | 1976-03-19 | 1978-07-18 | Rca Corporation | Television synchronizing apparatus |
GB1576621A (en) * | 1976-03-19 | 1980-10-08 | Rca Corp | Television synchronizing apparatus |
US4121283A (en) * | 1977-01-17 | 1978-10-17 | Cromemco Inc. | Interface device for encoding a digital image for a CRT display |
JPS6043707B2 (en) * | 1978-03-08 | 1985-09-30 | 株式会社東京放送 | phase conversion device |
US4218710A (en) * | 1978-05-15 | 1980-08-19 | Nippon Electric Company, Ltd. | Digital video effect system comprising only one memory of a conventional capacity |
DE3041898A1 (en) * | 1980-11-06 | 1982-06-09 | Robert Bosch Gmbh, 7000 Stuttgart | SYNCHRONIZING SYSTEM FOR TELEVISION SIGNALS |
US4434502A (en) * | 1981-04-03 | 1984-02-28 | Nippon Electric Co., Ltd. | Memory system handling a plurality of bits as a unit to be processed |
US4682215A (en) * | 1984-05-28 | 1987-07-21 | Ricoh Company, Ltd. | Coding system for image processing apparatus |
JPS61166283A (en) * | 1985-01-18 | 1986-07-26 | Tokyo Electric Co Ltd | Television synchronizing signal waveform processing unit |
EP0192139A3 (en) * | 1985-02-19 | 1990-04-25 | Tektronix, Inc. | Frame buffer memory controller |
JPS62206976A (en) * | 1986-03-06 | 1987-09-11 | Pioneer Electronic Corp | Control device for video memory |
CA1272312A (en) * | 1987-03-30 | 1990-07-31 | Arthur Gary Ryman | Method and system for processing a two-dimensional image in a microprocessor |
US4947257A (en) * | 1988-10-04 | 1990-08-07 | Bell Communications Research, Inc. | Raster assembly processor |
DE69033539T2 (en) * | 1989-02-02 | 2001-01-18 | Dainippon Printing Co Ltd | Image processing device |
US5283561A (en) * | 1989-02-24 | 1994-02-01 | International Business Machines Corporation | Color television window for a video display unit |
JPH05324821A (en) * | 1990-04-24 | 1993-12-10 | Sony Corp | High-resolution video and graphic display device |
US5168270A (en) * | 1990-05-16 | 1992-12-01 | Nippon Telegraph And Telephone Corporation | Liquid crystal display device capable of selecting display definition modes, and driving method therefor |
US5351129A (en) * | 1992-03-24 | 1994-09-27 | Rgb Technology D/B/A Rgb Spectrum | Video multiplexor-encoder and decoder-converter |
EP0601647B1 (en) * | 1992-12-11 | 1997-04-09 | Koninklijke Philips Electronics N.V. | System for combining multiple-format multiple-source video signals |
1994
- 1994-03-23 DE DE69422324T patent/DE69422324T2/en not_active Expired - Fee Related
- 1994-03-29 US US08/335,805 patent/US5517253A/en not_active Expired - Fee Related
- 1994-03-29 DE DE69411477T patent/DE69411477T2/en not_active Expired - Fee Related
- 1994-03-29 JP JP6058788A patent/JPH0792952A/en active Pending
- 1994-03-29 JP JP6521944A patent/JPH07507883A/en not_active Ceased
- 1994-03-29 WO PCT/NL1994/000068 patent/WO1994023416A1/en active IP Right Grant
- 1994-03-29 EP EP94913205A patent/EP0642690B1/en not_active Expired - Lifetime

1997
- 1997-04-28 US US08/847,836 patent/US5731811A/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
EP0642690A1 (en) | 1995-03-15 |
DE69422324D1 (en) | 2000-02-03 |
US5517253A (en) | 1996-05-14 |
EP0642690B1 (en) | 1998-07-08 |
JPH0792952A (en) | 1995-04-07 |
US5731811A (en) | 1998-03-24 |
DE69411477D1 (en) | 1998-08-13 |
DE69411477T2 (en) | 1999-02-11 |
DE69422324T2 (en) | 2000-07-27 |
JPH07507883A (en) | 1995-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0642690B1 (en) | Multi-source video synchronization | |
US5784047A (en) | Method and apparatus for a display scaler | |
US5469223A (en) | Shared line buffer architecture for a video processing circuit | |
EP0791265B1 (en) | System and method for generating video in a computer system | |
US6157374A (en) | Graphics display system and method for providing internally timed time-varying properties of display attributes | |
KR19980071592A (en) | Image upscale method and device | |
US6844879B2 (en) | Drawing apparatus | |
KR19980081437A (en) | Multiscan Video Timing Generator for Format Conversion | |
JPH06214550A (en) | Equipment and method for provision of frame buffer memory for output display of computer | |
US7589745B2 (en) | Image signal processing circuit and image display apparatus | |
US5729303A (en) | Memory control system and picture decoder using the same | |
JP2880168B2 (en) | Video signal processing circuit capable of enlarged display | |
CA2661678A1 (en) | Video multiviewer system using direct memory access (dma) registers and block ram | |
US20010056526A1 (en) | Memory interface device and memory address generation device | |
KR20020072454A (en) | Image processing apparatus and method for performing picture in picture with frame rate conversion | |
US5764240A (en) | Method and apparatus for correction of video tearing associated with a video and graphics shared frame buffer, as displayed on a graphics monitor | |
US5610630A (en) | Graphic display control system | |
US4941127A (en) | Method for operating semiconductor memory system in the storage and readout of video signal data | |
JP2001092429A (en) | Frame rate converter | |
KR100245275B1 (en) | Graphics sub-system for computer system | |
JP2001111968A (en) | Frame rate converter | |
EP0618560B1 (en) | Window-based memory architecture for image compilation | |
JPH0143333B2 | | |
JP3295036B2 (en) | Multi-screen display device | |
JP2622622B2 (en) | Scan line number conversion control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): JP US |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE |
| WWE | Wipo information: entry into national phase | Ref document number: 1994913205. Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 08335805. Country of ref document: US |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWP | Wipo information: published in national office | Ref document number: 1994913205. Country of ref document: EP |
| WWG | Wipo information: grant in national office | Ref document number: 1994913205. Country of ref document: EP |