CN1664915A - Compositing multiple full-motion video streams for display on a video monitor - Google Patents

Compositing multiple full-motion video streams for display on a video monitor

Info

Publication number
CN1664915A
Authority
CN
China
Prior art keywords
frame
frame buffer
picture
video signal
dynamic video
Prior art date
Legal status
Granted
Application number
CN2005100513179A
Other languages
Chinese (zh)
Other versions
CN100578606C (en)
Inventor
埃里克·沃格斯伯格 (Eric Wogsberg)
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Publication of CN1664915A
Application granted
Publication of CN100578606C
Expired - Fee Related
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Television Systems (AREA)

Abstract

Frame tearing in an arbitrarily large number of incoming motion video signals incorporated into a single composite display is prevented using as few as three frame buffers. Independently and concurrently for each incoming motion video signal, one of the frame buffers is reserved for writing captured pixel data, another is identified as storing the most recently completely captured frame, and one is identified as currently being read in forming a frame of the outgoing composite display. Frames of the outgoing composite display are collected from the multiple frame buffers according to each motion video signal's designation of its read frame buffer.

Description

Compositing multiple full-motion video streams for display on a video monitor
Technical field
The present invention relates to the field of video display systems, and in particular to presenting multiple asynchronous video signals in a single display without frame tearing.
Background
Motion video of many types is available from a variety of sources. Examples of such sources include broadcast television (e.g., NTSC, PAL), video cameras, and computer displays. Each motion video source has its own set of characteristics that distinguish it from other sources, including frame rate, image size, and whether frames are interlaced. Frame rates, for example, range from less than 24 frames per second (fps) to more than 100 fps.
A failure to synchronize or otherwise coordinate the timing characteristics of motion video received from a video source with those of the video display commonly causes what is known as frame tearing. Frame tearing results when the contents of a frame buffer change while the frame is being displayed. To the viewer, the displayed image appears split between two different images, which are usually temporally related but distinct. For example, a torn frame of a figure walking across the image may show the legs slightly ahead of the torso. This is understandably an undesirable artifact. Internally, the problem is that portions of two different incoming frames are displayed within a single output frame.
Some solutions to the frame tearing problem have been proposed. U.S. Patent 5,914,711 to Mangerson et al. and U.S. Patent 6,307,565 to Quirk et al. describe solutions to frame tearing when motion video from a video source and the display of that motion video are asynchronous. However, the systems described in both relate to full-screen display of the motion video; in other words, the displayed motion video does not share the display with other display elements.
It may be desirable to incorporate motion video received asynchronously from a video source into a larger display that includes other display elements. For example, asynchronous motion video may be displayed within a computer desktop display that includes graphical user interface (GUI) tools for controlling the display of the asynchronous motion video and/or other components of the computer system. Similarly, asynchronous motion video may be displayed simultaneously with other motion video from other asynchronous video sources. Such displays are useful, for example, in editing motion video, in monitoring many security cameras, and in coordinating video coverage of a live event using multiple cameras.
In addition to avoiding frame tearing, it is desirable to minimize the delay between receipt and display of each frame of motion video. Accordingly, any solution to frame tearing should also minimize the latency between receipt of a frame of motion video and display of that frame.
Summary of the invention
In accordance with the present invention, multiple incoming motion video signals are routed, independently and concurrently, to one of several frame buffers. For each incoming motion video signal, one frame buffer is designated to receive new pixel data representing the incoming frame, another frame buffer is recorded as holding the most recently completed received frame, and another frame buffer holds an earlier completed frame that is currently being incorporated into the composite display.
The routing is concurrent in that the multiple motion video streams are received in real time and combined into a single composite display. The routing is independent in that each incoming motion video signal has its own indications of which frame buffers are being written, most recently completed, and read. For example, a single frame buffer can be currently written for one motion video signal, currently read for another, and marked as completed but not yet read for yet another. Independent and concurrent routing allows as few as three frame buffers to properly manage the frames of many motion video signals while avoiding frame tearing for all of them in the display.
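Purely as an illustration of the bookkeeping just described (the structure and names below are not part of the patent and are assumed for the sketch), the three per-signal frame buffer roles can be modeled as follows:

```c
/* Illustrative sketch only: the three frame-buffer roles kept independently for
 * each incoming motion video signal. A buffer index that is "write" for one
 * signal may simultaneously be "read" or "next read" for another signal. */
enum { NUM_FRAME_BUFFERS = 3 };   /* as few as three buffers suffice */

typedef struct {
    int write_buf;      /* buffer currently receiving captured pixel data        */
    int next_read_buf;  /* most recently completed frame, to be read next        */
    int read_buf;       /* frame currently being read into the composite display */
} SourceBuffers;
```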
In forming the composite display, pixel data is collected from the multiple frame buffers according to each motion video signal's indication of its read frame buffer. In particular, for each pixel, pixel data is retrieved from all of the frame buffers. In addition, a key frame identifies which motion video signal, if any, is visible at that particular pixel. The read frame buffer of the visible motion video signal is selected, and the pixel data retrieved from that frame buffer is incorporated into the composite video image.
Frame tearing in the multiple motion video signals is avoided by preventing an incoming frame from being written to the frame buffer that is currently being read for the same motion video signal. In particular, when receipt of a new frame of a motion video signal begins, the frame buffer into which the incoming pixel data is to be written can be any frame buffer other than the one currently being read in forming the composite video display and other than the one holding the most recently completed frame of that motion video signal. Upon completion of the capture of a frame of the incoming motion video signal, the frame buffer to which that frame was written is recorded as the most recently completed frame buffer, sometimes referred to as the next read frame buffer. For the next incoming frame of the motion video signal, the process of selecting a frame buffer into which to store the incoming frame is repeated.
As writing of each motion video signal's incoming frame completes, the frame buffers recorded as holding the most recently completed frames change asynchronously with respect to one another and with respect to the completion of each frame scan of the outgoing composite video display. Incoming motion video signals of various frame rates can therefore be accommodated.
Before the frame buffers are scanned to form a new frame of the composite video display, all read frame buffer indications are updated from the indications of the most recently completed frame buffers. Because write frame buffers are selected in the manner described above, no incoming pixel data is being written to any of the most recently completed frame buffers from which the read frame buffers are updated. Accordingly, if the update changes a read frame buffer indication, the frame buffer being read after the update is not a write frame buffer.
This mechanism can handle an arbitrarily large number of incoming video streams and can provide background imagery over which the motion video streams are displayed. The background imagery can include still images ("wallpaper") and/or arbitrary, possibly moving, computer-generated imagery. The incoming motion video streams can have widely differing characteristics.
This mechanism also automatically repeats incoming frames (if the incoming frame rate is lower than the outgoing frame rate) or drops incoming frames (if the incoming frame rate is higher than the outgoing frame rate) as needed. In particular, if more than one frame of an incoming motion video signal completes during a single output scan of the frame buffers, the frame buffer recorded as holding the most recently completed frame changes several times before it is used to update the read frame buffer indication for that motion video signal. As a result, all but the last of the frames completed since the previous output scan are dropped. Similarly, if, due to a relatively slow frame rate of a motion video signal, consecutive output scans of the frame buffers complete before another frame of the motion video signal is received, the indication of the frame buffer holding the most recently completed frame is unchanged when the new output scan begins, and the previously displayed frame of the motion video signal is repeated in the composite display.
This mechanism represents a substantial improvement over existing systems, since frame tearing is avoided in an arbitrarily large number of incoming motion video streams.
Brief Description of the Drawings
Fig. 1 is a diagram of a display that includes multiple motion video windows, in all of which frame tearing is avoided in accordance with the present invention.
Fig. 2 is a block diagram of a compositing system in accordance with the present invention.
Fig. 3 is a more detailed block diagram of the update logic of Fig. 2.
Fig. 4 is a logic flow diagram of the processing of an incoming H-sync in accordance with the present invention.
Fig. 5 is a logic flow diagram of the processing of an incoming V-sync in accordance with the present invention.
Fig. 6 is a logic flow diagram of the selection of a new write frame pointer in Fig. 5.
Fig. 7 is a logic flow diagram of an alternative embodiment of the selection of a new write frame pointer.
Fig. 8 is a logic flow diagram of the processing of an outgoing V-sync in accordance with the present invention.
Fig. 9 is a block diagram of a compositing system in accordance with an alternative embodiment of the present invention.
Fig. 10 is a more detailed block diagram of the update logic of Fig. 9.
Fig. 11 is a block diagram of blending logic that can be used in conjunction with the compositing systems of Figs. 2 and 9.
Figs. 12 and 13 show, respectively, another display and key frame data in accordance with the present invention, illustrating the flexibility available in defining regions of visibility.
Detailed Description
In accordance with the present invention, multiple video sources are routed to respective frame buffers among frame buffers 204A-C (Fig. 2) of compositing system 100, and output frames are assembled from selected portions of those frame buffers. Frame tearing in a large number of video sources can thus be avoided using only a relatively small number of frame buffers. In particular, for the various portions of display 102 (Fig. 1), key frame 202 identifies which regions of frame buffers 204A-D correspond to which of a number of image sources. Such image sources can be any number of incoming asynchronous motion video signals 210A-D (Fig. 2), as well as background 106 (Fig. 1). Read frame pointers 214 determine which of frame buffers 204A-D is selected for each pixel location when display 102 is presented on a monitor, and write frame pointers 218 determine to which of frame buffers 204A-C each frame of each motion video signal is written. By coordinating which frame buffer each incoming frame is written to and from which frame buffer each pixel of the display is read, frame tearing is avoided for all motion video in the display.
Fig. 1 shows display 102, which includes multiple motion video windows 104A-C and background 106. Each of motion video windows 104A-C represents a portion of display 102 that is dedicated to an incoming motion video signal. "Window" is therefore used here in its general sense, i.e., a portion of a display with which displayed content is associated. Computer users experience windows frequently in the context of window managers such as sawfish, WindowMaker, and IceWM for the Linux® operating system, Mac OS® of Apple Computer of California, or any of the Windows® operating systems of Microsoft of Washington. A window manager typically associates a number of graphical user interface (GUI) elements with each window. Here, such elements are treated as part of background 106, since the content of primary interest is the motion video signals represented in motion video windows 104A-C. In particular, representing motion video requires updating a large amount of display information at a fairly high rate, whereas the GUI elements of the various window managers and the other information presented to the user by the computer typically change to a small degree and/or infrequently.
Fig. 2 shows key frame 202 and frame buffers 204A-D, which collectively represent the viewable content shown in display 102 (Fig. 1). Each of frame buffers 204A-D is a frame buffer, i.e., an array of pixel data specifying a color for each respective position within display 102, which is refreshed at the frame rate of display 102. Thus, to present display 102 on a display device, pixel data is read from frame buffers 204A-D and converted to analog or digital signals that are combined with appropriate timing and auxiliary signals (such as V-sync and H-sync) to drive the display device. This process is well known and is described here only to facilitate understanding and appreciation of the general role of frame buffers in presenting video data on a display device. Because frame buffers 204A-D collectively represent all pixels of display 102 and therefore define display 102, any change in display 102 is effected by writing new pixel data to one or more of frame buffers 204A-D.
For display, frame buffers 204A-D are addressed in common. In particular, frame buffers 204A-D share the addressing logic used to read data from them. Similarly, frame buffers 204A-C share the addressing logic used to write data to them. In this illustrative embodiment, frame buffer 204D is used to represent viewable content other than the motion video signals; accordingly, frame buffer 204D is not addressed in common for writing. Instead, processor 240 (e.g., a CPU or GPU) writes data representing the non-video viewable content into frame buffer 204D. Such viewable content can include still images and graphical content such as photographs, text, buttons, cursors, and any of the various GUI elements of the window managers of the various operating systems. Here, background 106 represents all such viewable content other than motion video. In an alternative embodiment, frame buffer 204D is omitted and background 106 is written into one or more of frame buffers 204A-C. Proper handling of obscured portions of background 106 is done in a conventional manner by conventional window managers, and such obscured portions are simply not represented in frame buffers 204A-C.
Key frame 202 is addressed in common with frame buffers 204A-D for reading and specifies, for each pixel location, which of multiple sources is visible. In this illustrative embodiment, a source is either background 106 or any of a number of incoming asynchronous motion video signals 210A-D, sometimes referred to here as incoming video signals 210A-D. Frame buffers 204A-D are sized according to the display resolution of display 102, since they collectively determine the entire substantive content of display 102. In this illustrative embodiment, key frame 202 is an array of the same size as frame buffers 204A-D and therefore specifies a source for each individual pixel. In an alternative embodiment, key frame 202 specifies a source for each of a number of groups of pixels. In either case, key frame 202 indicates the source of each pixel of display 102.
Key frame update logic 252 controls the content of key frame 202. For example, various user interface events can cause motion video windows 104A-C to be positioned as shown in Fig. 1. Such events include opening a window in which motion video is to be displayed, moving a window, and resizing a window. All such events are handled by a window manager such as those described above. The window manager notifies key frame update logic 252 of such events so that key frame update logic 252 has sufficient information to determine which video signal is visible at which positions within display 102. Whenever this information changes, key frame update logic 252 changes the content of key frame 202 to accurately represent the current state of display 102. Key frame update logic 252 also notifies update logic 212 of such changes so that pixels of incoming video signals 210A-D are written to the proper locations within frame buffers 204A-C. Relative to the incoming and outgoing frame rates, changes to key frame 202 and to the associated address information in update logic 212 are rare. Accordingly, key frame 202 and the address information in update logic 212 typically remain unchanged during the processing of many incoming and outgoing frames.
Key frame 202 provides pixel-by-pixel control of where each video signal appears within display 102 (Fig. 1), giving complete freedom in the position and size of the video windows within display 102. In the illustrative example of Fig. 1, each of motion video windows 104A-C and background 106 corresponds to a unique source identifier. For example, key frame 202 stores the source identifier associated with incoming video signal 210B at the positions at which incoming video signal 210B is to be visible as motion video window 104B. For each pixel of display 102, key frame 202 (Fig. 2) stores one of these source identifiers to indicate which of incoming video signals 210A-D or background 106 is visible at that particular location.
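As a simple illustration of how a window region might be marked in the key frame (the helper below is hypothetical and not part of the patent; real embodiments drive this through key frame update logic 252):

```c
#include <stdint.h>

/* Hypothetical helper: mark a rectangular window of the key frame with a source
 * identifier so that the corresponding incoming video signal is visible there. */
void keyframe_set_window(uint8_t *key_frame, int display_width,
                         int x0, int y0, int width, int height,
                         uint8_t source_id)
{
    for (int y = y0; y < y0 + height; y++)
        for (int x = x0; x < x0 + width; x++)
            key_frame[y * display_width + x] = source_id;  /* this source is visible here */
}
```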
[Overview of Output Frame Scanning]
Frame buffers 204A-D are all scanned to send a frame to display 102 as follows. Video timing generator 242 provides the timing signals for display 102, including pixel clock 250 and the H-sync and V-sync signals. These signals are used by display logic 200 to scan frame buffers 204A-D and produce the color information for the display. This color information is then sent to the display along with H-sync, V-sync, and any other necessary timing signals.
Video timing generator 242 can run asynchronously, or it can be synchronized to one of incoming video signals 210A-D or to another video signal having timing compatible with display 102 (typically by the method known as GENLOCK).
The scan of a frame begins with the vertical synchronization signal, sometimes referred to as V-sync, and processing of the first row of pixels begins. For each pixel in the row, display logic 200 retrieves the pixel's source identifier from key frame 202. The read addressing logic shared between key frame 202 and frame buffers 204A-D causes a color for the pixel to be retrieved from each of frame buffers 204A-D at the same time. Display logic 200 then uses the source identifier to select one of the retrieved colors to send as the data representing the pixel to be displayed in display 102 (Fig. 1).
Read frame pointers 214 determine which of frame buffers 204A-D is selected for each source identifier. In this embodiment, the selected frame buffer is specified by a control signal usable by multiplexer 220, which selects among the colors retrieved from frame buffers 204A-D. For example, read frame pointers 214 may specify that the source whose identifier is "5" (e.g., incoming video signal 210A) is to be retrieved from frame buffer 204B (Fig. 2). In this illustrative embodiment, read frame pointers 214 are represented in a look-up table, in which the read frame pointer for source identifier "5" specifies the two-bit control signal "01", which selects the color from frame buffer 204B at multiplexer 220. Of course, other types of control signals can also be used.
To select the appropriate color from the appropriate one of frame buffers 204A-D, display logic 200 applies the source identifier retrieved from key frame 202 to read frame pointers 214, causing the corresponding frame buffer selection signal to be applied to multiplexer 220. The pixel value selected by multiplexer 220 drives, for example, digital-to-analog converter 246 for display on an analog display device and/or digital transmitter 248 for display on a digital display device. The pixel data can be converted from a numerical value to an RGB (or other color format) value by color look-up table 244; alternatively, a display-ready color format can be stored in frame buffers 204A-D, in which case color look-up table 244 can be omitted.
Display logic 200 repeats this frame buffer selection process for each pixel of the row of key frame 202 and frame buffers 204A-D. When the row is finished, display logic 200 receives a horizontal synchronization signal, sometimes referred to as H-sync, from video timing generator 242. After H-sync, display logic 200 repeats the process for the next row of pixels. When all rows of pixels have been processed, another V-sync is received from video timing generator 242 and the process begins again at the top of key frame 202 and frame buffers 204A-D.
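A minimal software model of the scan-out just described may help fix the idea; it assumes one byte-sized source identifier per pixel and a read pointer table indexed by source identifier, and all names are illustrative (the embodiment performs the selection in hardware with multiplexer 220):

```c
#include <stdint.h>

/* Illustrative model of one output frame scan: for each pixel, the key frame
 * names the visible source, and that source's read frame pointer selects which
 * frame buffer supplies the color. */
void scan_output_frame(const uint8_t *key_frame,        /* source id per pixel       */
                       uint32_t *const frame_buf[],     /* frame buffers 204A-D      */
                       const int read_ptr[],            /* read buffer per source id */
                       uint32_t *out, int width, int height)
{
    for (int y = 0; y < height; y++) {                  /* one scan line at a time   */
        for (int x = 0; x < width; x++) {
            int idx = y * width + x;
            int src = key_frame[idx];                   /* which source is visible   */
            int buf = read_ptr[src];                    /* that source's read buffer */
            out[idx] = frame_buf[buf][idx];             /* take its color value      */
        }
        /* H-sync occurs here */
    }
    /* at the next V-sync, read pointers are refreshed from the next read pointers */
}
```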
By using key frame 202 and read frame pointers 214 in this manner, display logic 200 can read from multiple frame buffers 204A-D to form a single frame of display 102 (Fig. 1). This allows frame writing and reading to be distributed among multiple frame buffers for the multiple incoming asynchronous motion video signals within the larger display signal. For example, an incomplete frame of incoming video signal 210A can be written to frame buffer 204A while a previously completed frame is read from frame buffer 204B. At the same time, an incomplete frame of incoming video signal 210B can be written to frame buffer 204B while a previously completed frame is read from frame buffer 204A. In this simple case, display 102 (Fig. 1) is defined in part by frame buffer 204A and in part by frame buffer 204B.
In this illustrative embodiment, frame buffer 204D is reserved for the background. Accordingly, in this example, frame buffer 204D also defines a portion of display 102 (Fig. 1), namely, the visible portion of background 106.
Figs. 12-13 illustrate the flexibility provided by key frame 202 (Fig. 2) in defining the visible portions of a display. In particular, display 102B (Fig. 12) includes three (3) motion video images 1204A-C, each of which includes GUI elements represented by regions 1206A-C, respectively. Such GUI elements can include GUI tools for user control of play, pause, stop, fast-forward, rewind, and the like, and are typically represented by computer-generated graphical elements.
Fig. 13 shows representation 202B of display 102B as represented in key frame 202 (Fig. 2). Representation 202B includes background 1206, which comprises regions 1206A-C (Fig. 12) and region 1206D, the remainder of display 102B other than motion video images 1204A-C and regions 1206A-C. It should be noted that the shape of background 1206 (Fig. 13) is not limited to straight vertical and horizontal edges, nor to contiguous regions. In the example of Fig. 13, background 1206 includes a circular edge surrounding motion video image 1204B and includes a non-contiguous region between motion video images 1204A and 1204C. In effect, Figs. 12-13 illustrate the ability to arrange pictures in a picture-in-picture fashion.
[Overview of Incoming Frame Writing]
The multiple incoming video signals are written to frame buffers 204A-C in the following manner to prevent frame tearing in display 102. Each of incoming video signals 210A-D is associated, by write frame pointers 218, with a particular one of frame buffers 204A-C, with the restriction that the signal is never written to the frame buffer that read frame pointers 214 and display logic 200 are currently using to compose display 102. In particular, the write frame pointer for each new frame of any of incoming video signals 210A-D is selected to differ from both the read frame pointer and the next read frame pointer for that incoming signal, where the read frame pointer is represented for the signal in read frame pointers 214 and the next read frame pointer is represented in next read frame pointers 216.
To compensate for frame rate differences between display 102 and incoming video signals 210A-D without frame tearing, frames of incoming video signals 210A-D are either dropped or repeated so that only whole, completed frames are incorporated into display 102. The process by which only whole, completed frames are displayed is described in detail below, and the overall process is first described briefly to facilitate understanding of the avoidance of frame tearing in accordance with the present invention. It is helpful to consider only a single incoming video signal, namely incoming video signal 210A; incoming asynchronous motion video signals 210B-D are processed concurrently in an analogous manner.
Read frame pointers 214 indicate which of frame buffers 204A-C holds a whole, completed frame of incoming video signal 210A that is currently being incorporated into display 102. Next read frame pointers 216 indicate which of frame buffers 204A-C holds the most recently completed frame of incoming asynchronous motion video signal 210A, which will be incorporated into display 102 next. Write frame pointers 218 indicate into which of frame buffers 204A-C the current, incomplete frame of incoming video signal 210A is being written. As writing of each frame of incoming video signal 210A completes, next read frame pointers 216 are modified to designate the newly completed frame as the most recently completed one, and a new frame buffer is selected for the next frame of incoming video signal 210A and recorded in write frame pointers 218. Read frame pointers 214 are normally not changed until display logic 200 has completed a frame of display 102 and has not yet begun composing the next. At that time, display logic 200 updates read frame pointers 214 from next read frame pointers 216.
In selecting a new write frame pointer for incoming video signal 210A, care is taken to avoid selecting either the read frame pointer or the next read frame pointer of incoming video signal 210A. By avoiding selection of the read frame pointer as the write frame pointer for incoming asynchronous motion video signal 210A, writing to the frame pointed to by read frame pointers 214 is prevented. Read frame pointers 214 are thereby assured of pointing to completed frames of incoming video signals 210A-D, frames that remain unchanged while display logic 200 composes a complete frame of display 102. By avoiding selection of the next read frame pointer as the new write frame pointer for incoming asynchronous motion video signal 210A, read frame pointers 214 are assured of pointing to completed frames of incoming asynchronous motion video signals 210A-D when read frame pointers 214 are updated from next read frame pointers 216. In particular, when read frame pointers 214 are updated from next read frame pointers 216, write frame pointers 218 do not permit writing to any frame pointed to by the updated read frame pointers 214.
In general, it is preferable to display each frame of an incoming video signal in display 102 once and only once, with the display time determined by the timing of the incoming video signal itself. However, this requires that the frame rate of the incoming video signal exactly match the frame rate of display 102. Frequently, the frame rate of the incoming video signal differs from the frame rate of display 102, requiring that frames of the incoming video signal be dropped or repeated. If the frame rate of the incoming video signal is greater than that of display 102, the incoming video signal includes too many frames for all of them to be shown by display 102, and some frames of the incoming video signal are dropped and never appear in display 102. If the frame rate of the incoming video signal is less than that of display 102, the incoming video signal includes too few frames for each to be shown only once in display 102, and some frames of the incoming video signal are repeated in display 102.
Dropping of frames of incoming video signal 210A occurs when the frame rate of incoming video signal 210A is greater than the frame rate of display 102. In this case, the write frame pointer of write frame pointers 218 corresponding to incoming video signal 210A changes more frequently than the corresponding updates of read frame pointers 214. The following example is illustrative. Suppose that read frame pointers 214 indicate that the frame buffer currently being scanned for incoming video signal 210A is frame buffer 204A, which holds a frame of incoming video signal 210A. Suppose further that next read frame pointers 216 indicate frame buffer 204B, which holds the most recently completed frame of incoming video signal 210A, to be scanned next. Accordingly, write frame pointers 218 cause the currently received frame of incoming video signal 210A to be written to a frame buffer other than frame buffers 204A-B, in this example frame buffer 204C. This situation is summarized in Table A below.
Table A
Prior state of incoming asynchronous motion video signal 210A
  Read frame buffer:       Frame buffer 204A
  Next read frame buffer:  Frame buffer 204B
  Write frame buffer:      Frame buffer 204C
In this example, because the incoming frame rate is greater than the display frame rate, the output scan of some frame does not complete before writing of one or more incoming frames completes. In this case, when the incoming frame finishes being written into frame buffer 204C, read frame pointers 214 continue to indicate that the frame buffer being scanned for incoming video signal 210A is frame buffer 204A. The newly completed frame is indicated in next read frame pointers 216 by pointing to frame buffer 204C, and the previously completed frame of incoming video signal 210A in frame buffer 204B, previously pointed to by next read frame pointers 216, is dropped. This situation is summarized in Table B below.
Table B
Subsequent state of incoming asynchronous motion video signal 210A at the faster frame rate
  Read frame buffer:       Frame buffer 204A
  Next read frame buffer:  Frame buffer 204C
  Write frame buffer:      Frame buffer 204B
Because the incoming frame was written completely before the scan of frame buffer 204A ended, the corresponding next read frame pointer of next read frame pointers 216 changed before its previous value could be copied into read frame pointers 214. The frame of incoming video signal 210A represented in frame buffer 204B in the situation of Table A is therefore never displayed in display 102 and is thus dropped.
Multiple frames can be dropped in this way, with incoming frames alternately written to frame buffers 204B and 204C in the manner described above, until display logic 200 finishes scanning frame buffer 204A for display of the current frame of display 102 and copies next read frame pointers 216 into read frame pointers 214.
Repeating of frames of incoming video signal 210A occurs when the frame rate of incoming video signal 210A is less than the frame rate of display 102. In this case, the write frame pointer for incoming video signal 210A changes less frequently than read frame pointers 214 are updated from next read frame pointers 216. The following example is illustrative. Consider again the situation represented in Table A above, in which frame pointers 214, 216, and 218 indicate that frame buffers 204A, 204B, and 204C hold, respectively, the frame of incoming video signal 210A currently being scanned, the most recently completed frame to be read next, and the frame currently being written. In this example, because the incoming frame rate is less than the display frame rate, the scan of some output frame completes before writing of the corresponding incoming frame completes. In this case, updating read frame pointers 214 from next read frame pointers 216 causes frame buffer 204B to become the read frame buffer associated with incoming motion video signal 210A. This situation is summarized in Table C below.
Table C
Subsequent state of incoming asynchronous motion video signal 210A at the slower frame rate
  Read frame buffer:       Frame buffer 204B
  Next read frame buffer:  Frame buffer 204B
  Write frame buffer:      Frame buffer 204C
If the scan of the next frame of display 102 completes before another whole frame of incoming video signal 210A has been received and written, next read frame pointers 216 continue to indicate that the most recently completed frame of incoming video signal 210A is still the one in frame buffer 204B. Accordingly, updating read frame pointers 214 from next read frame pointers 216 causes no change in read frame pointers 214 with respect to incoming video signal 210A. Table C therefore continues to accurately represent the state of incoming video signal 210A for another frame of display 102, and the frame of incoming video signal 210A represented in frame buffer 204B is incorporated into another frame of display 102, thereby repeating that frame of incoming video signal 210A.
Incoming asynchronous motion video signals, in particular incoming video signals 210A-D, are typically streams of digital pixel color values. Each stream includes H-sync and V-sync signals. H-sync separates the last pixel of one scan line of a motion video frame from the pixel values of the next scan line; a scan line is a single row of pixels. V-sync separates the last pixel of one frame of the motion video signal from the first pixel of the next frame; a frame is a single image of the sequence of images of the motion video signal. In this illustrative embodiment, incoming asynchronous motion video signals 210A-D have all been processed such that they are of a size and format ready for display in display 102 without further modification. For example, any resizing, color matching, de-interlacing, and the like have already been performed on incoming video signals 210A-D. It should be noted that incoming video signals 210A-D can differ from display 102, and from one another, in such respects as size, frame rate, and phase (the timing of the V-sync pulse).
The multiple incoming video signals 210A-D are processed as follows. Incoming video signals 210A-D are received by update logic 212. While four (4) incoming asynchronous motion video signals are shown in Fig. 2, it will be appreciated that nothing in the system described here is limited to that number; fewer or more incoming video signals can be handled in the manner described here.
Update logic 212 is described more completely below in conjunction with Fig. 3. Briefly, update logic 212 associates incoming pixels with pixel locations within display 102 (Fig. 1), and therefore with addresses within key frame 202 (Fig. 2) and frame buffers 204A-C. Update logic 212 coordinates receipt of the incoming pixel data with writing to the associated addresses. The output of update logic 212 is a series of pixel records, each including pixel data 232 representing a color, the address 230 of that pixel data, and a write select signal 228. The write select signal for each pixel controls into which of frame buffers 204A-C pixel data 232 is written. Update logic 212 retrieves write select signal 228 from write frame pointers 218 using the source identifier associated with the particular incoming video signal. Write select signal 228 controls, using a demultiplexer, into which of frame buffers 204A-C pixel data 232 is written, in a manner complementary to that described above for read frame pointers 214 and multiplexer 220. In particular, write select signal 228 causes write enable signal 238 to be sent, through demultiplexer 234, to the selected one of frame buffers 204A-C. Address 230 and pixel data 232 are sent to all of frame buffers 204A-C, but write select signal 228 and write enable signal 238 together cause only one of frame buffers 204A-C to be written. Write frame pointers 218 thus allow each of incoming video signals 210A-D to be written to a different one of frame buffers 204A-C, and allow the frame buffer being written to be changed simply by changing the corresponding pointer in write frame pointers 218.
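A rough software analogue of this write path (the record layout and names are assumptions for illustration; the embodiment uses demultiplexer 234 and write enable 238 in hardware):

```c
#include <stdint.h>

/* One entry of pixel write queue 304: a color, its destination address, and the
 * write-select value obtained from write frame pointers 218 for the source. */
typedef struct {
    uint32_t pixel_data;    /* color value (232)                  */
    uint32_t address;       /* destination address (230)          */
    int      write_select;  /* which frame buffer to enable (228) */
} PixelRecord;

/* Analogue of demultiplexer 234: address and data are presented to every frame
 * buffer, but only the selected buffer receives the write enable. */
void commit_pixel(uint32_t *frame_buf[], int num_buffers, PixelRecord rec)
{
    for (int b = 0; b < num_buffers; b++) {
        if (b == rec.write_select)
            frame_buf[b][rec.address] = rec.pixel_data;  /* write enable 238 asserted */
        /* all other buffers receive a write inhibit and ignore the data */
    }
}
```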
Update logic 212 thus distributes incoming pixels among frame buffers 204A-C, and display logic 200 collects pixels from frame buffers 204A-C to form display 102. Careful management of write frame pointers 218 and read frame pointers 214 prevents frame tearing in any of the video signals presented in display 102.
[Incoming Frame Writing in Detail]
Fig. 3 shows update logic 212 in greater detail. Each of incoming video signals 210A-D is received by a respective one of video routers 302A-D. As noted above, there can be fewer or more than the four (4) incoming video signals and corresponding video routers shown in Figs. 2 and 3. Video routers 302A-D are analogous to one another, so the following description of video router 302A applies equally to each of video routers 302B-D.
Video router 302A includes initial X address 306, X counter 308, initial Y address 310, Y counter 312, and base address 318. These values map incoming pixels to corresponding positions within key frame 202 (Fig. 2) and frame buffers 204A-C. Initial X address 306 (Fig. 3) and initial Y address 310 are initialized when key frame 202 is initialized, typically in response to any user interface event that causes any of motion video windows 104A-C (Fig. 1) to be resized or moved. Initial X address 306 (Fig. 3), initial Y address 310, and base address 318 together determine the address within key frame 202 (Fig. 2) and frame buffers 204A-C at which the first pixel of an incoming frame is to be written. Upon receiving a V-sync of incoming video signal 210A, update logic 212 sets X counter 308 equal to initial X address 306 in step 502 (Fig. 5) and sets Y counter 312 equal to initial Y address 310 in step 504 (Fig. 5). The remainder of logic flow diagram 500 is described below.
X counter 308 and Y counter 312 are each incremented as needed to represent the address within key frame 202 and frame buffers 204A-C at which pixel data is to be written. Upon receipt of each pixel of incoming video signal 210A, update logic 212 increments X counter 308, since video signals are typically scanned horizontally, one row at a time. In this illustrative embodiment, X counter 308 and Y counter 312 are used to compute the destination address within frame buffers 204A-C according to the following equation:
Destination Address = Base Address 318 + X Counter 308 + (Y Counter 312 × Width_FB)    (1)
Base address 318 refers to the address of the upper left corner of any of frame buffers 204A-C. In an alternative embodiment, the multiplication is eliminated for efficiency by using a single address register that is initialized at V-sync to the sum of base address 318 and initial X address 306, incremented for each pixel, and increased by a stride value at each H-sync. The stride value is the difference between the width of frame buffers 204A-C and the width of incoming asynchronous motion video signal 210A. In this alternative embodiment, equation (1) is therefore replaced by simple additions.
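The two addressing schemes can be sketched as follows (names are illustrative; the single-register variant below folds the window's initial Y offset into the starting address, which is an assumption made for the sketch):

```c
#include <stdint.h>

/* Scheme 1: compute the destination from equation (1) for every pixel. */
uint32_t dest_address(uint32_t base, uint32_t x_counter,
                      uint32_t y_counter, uint32_t width_fb)
{
    return base + x_counter + y_counter * width_fb;      /* equation (1) */
}

/* Scheme 2 (alternative embodiment): one running address register, incremented
 * per pixel and advanced by a stride at H-sync, so no per-pixel multiply is needed. */
typedef struct { uint32_t addr, stride; } AddrReg;

void addr_on_vsync(AddrReg *r, uint32_t base, uint32_t x0, uint32_t y0,
                   uint32_t width_fb, uint32_t input_width)
{
    r->addr   = base + x0 + y0 * width_fb;  /* start of the window's first row (assumption) */
    r->stride = width_fb - input_width;     /* skip the unused tail of each frame buffer row */
}
void addr_on_pixel(AddrReg *r) { r->addr += 1; }         /* after each pixel written */
void addr_on_hsync(AddrReg *r) { r->addr += r->stride; } /* advance to the next row  */
```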
Video router 302A also includes source identifier 314, which identifies incoming video signal 210A as the content source whose individual frames are treated as single entities by pointers 214, 216, and 218. Source identifier 314 is unique with respect to all other source identifiers used by compositing system 100. In the context of describing video router 302A, the source identified by source identifier 314 is sometimes referred to as the subject source. Key frame verifier 316 of video router 302A verifies that key frame 202 (Fig. 2) indicates that the subject source is visible at the position specified by base address 318, X counter 308, and Y counter 312, which together specify address 226. Key frame verifier 316 makes this determination by comparing source identifier 314 with the source identified within key frame 202 at address 226. If the subject source is visible at address 226, i.e., if the source identifier from key frame 202 matches source identifier 314, key frame verifier 316 adds data representing the current pixel to pixel write queue 304. Otherwise, video router 302A discards the current pixel, and the current pixel is not added to pixel write queue 304.
When key frame verifier 316 retrieves a source identifier from key frame 202, the same source identifier is applied to write frame pointers 218 (Fig. 2), and the pointer associated with the retrieved source identifier is received in write select 320 (Fig. 3) of video router 302A. When source identifier 314 identifies incoming video signal 210A as the source, write select 320 specifies into which one of frame buffers 204A-C the pixel of incoming video signal 210A is to be written.
If the current pixel is visible and is to be added to pixel write queue 304, update logic 212 writes pixel data 322 representing the current pixel, address 226, and write select 320 of video router 302A into pixel write queue 304. Analogous pixel records from video routers 302B-D are similarly placed into pixel write queue 304 to be written, in turn, to frame buffers 204A-C.
Update logic 212 writes pixels from pixel write queue 304 into frame buffers 204A-C as follows. Write enable 238 is always asserted. Update logic 212 retrieves a pixel from pixel write queue 304, sometimes referred to in the context of pixel write queue 304 as a write pixel. The write pixel includes pixel data 232, pixel address 230, and write select 228. As shown in Fig. 2, pixel data 232 and pixel address 230 of the write pixel are applied to frame buffers 204A-C simultaneously. As described above with respect to write select 320, the write select identifies the selected one of frame buffers 204A-C. Write select 228 controls demultiplexer 234 to send write enable 238 to the selected one of frame buffers 204A-C, while demultiplexer 234 sends a write inhibit signal to the other ones of frame buffers 204A-C.
When a full row of pixels has been received, video router 302A receives an H-sync, indicating that the next pixel will be on a new row. Logic flow diagram 400 (Fig. 4) represents the processing of video router 302A in response to the H-sync. In step 402, video router 302A (Fig. 3) resets X counter 308 to initial X address 306. In step 404 (Fig. 4), video router 302A (Fig. 3) increments Y counter 312. X counter 308, Y counter 312, and base address 318 thereby continue to represent the proper address within key frame 202 and frame buffers 204A-C when the new row of pixels is received. In the alternative embodiment described above, the address counter is instead advanced by the stride value, rather than by the processing of logic flow diagram 400.
When a whole frame of pixels has been received, video router 302A receives a V-sync, indicating that the current frame has been fully received and a new frame will begin with the next pixel. Logic flow diagram 500 (Fig. 5) represents the processing of video router 302A in response to the V-sync. In addition to maintaining the proper address mapping as described above with respect to steps 502-504, video router 302A indicates that a complete new frame of incoming video signal 210A has been stored and is ready to be displayed by display logic 200. In particular, video router 302A copies the write frame pointer of write frame pointers 218 corresponding to source identifier 314 to the next read frame pointer for the same source identifier in next read frame pointers 216. Next read frame pointers 216 specify which of frame buffers 204A-D contains the most recently completed frame of each source.
As shown in logic flow diagram 800 (Fig. 8), when display logic 200 receives the V-sync signal indicating that a new output frame is to begin, display logic 200 copies next read frame pointers 216 into read frame pointers 214 in step 802 (Fig. 8), so that the most recently completed frame of each source is included in the newly begun output frame of display 102 (Fig. 1).
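A sketch of step 802 in software form (array indices stand in for the pointer tables; names are illustrative):

```c
/* At each output V-sync, copy every source's next read pointer into its read
 * pointer so the new output frame uses each source's most recently completed frame. */
void on_output_vsync(int read_ptr[], const int next_read_ptr[], int num_sources)
{
    for (int s = 0; s < num_sources; s++)
        read_ptr[s] = next_read_ptr[s];      /* step 802 of logic flow diagram 800 */
}
```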
In one embodiment, processing by video router 302A (Fig. 3) according to logic flow diagram 500 (Fig. 5) proceeds directly from step 506 to step 512. In step 512, video router 302A selects a new one of frame buffers 204A-C into which the next frame of incoming asynchronous motion video signal 210A is to be written. Video router 302A modifies the write frame pointer of write frame pointers 218 corresponding to source identifier 314 to specify the next one of frame buffers 204A-C. Step 512 is described in greater detail below.
Steps 508-510 represent a latency-reducing performance improvement in accordance with an alternative embodiment. In test step 508, video router 302A compares the row of key frame 202 and frame buffers 204A-D currently being scanned by display logic 200, sometimes referred to here as the current display row, with initial Y address 310. If the current display row is before initial Y address 310, display logic 200 has not yet begun to display the source provided by video router 302A, and the just-completed frame of incoming video signal 210A can be included in the current frame of display 102. Accordingly, video router 302A copies the write frame pointer of write frame pointers 218 corresponding to source identifier 314 to the read frame pointer of read frame pointers 214 having the same source identifier. Display logic 200 therefore displays the just-completed frame of video router 302A's source in the current output frame rather than waiting for the next display V-sync. As a result, the latency between receipt of incoming asynchronous motion video signal 210A and its display in display 102 is reduced.
Conversely, if the current display row is at or beyond initial Y address 310, video router 302A skips step 510 and processing proceeds to step 512. Step 512 is shown in greater detail as logic flow diagram 512 (Fig. 6).
Briefly, video router 302A (Fig. 3) selects the new one of frame buffers 204A-C (Fig. 2) into which the next frame of incoming video signal 210A is to be written by choosing any of frame buffers 204A-C that is not indicated as being read by either read frame pointers 214 or next read frame pointers 216. In other words, the next write frame can be any frame other than the frame currently being read and the frame to be read next. This can, of course, be accomplished in any of a number of ways; one such way, which is part of this illustrative embodiment, is shown in logic flow diagram 512 (Fig. 6).
In test step 602, video router 302A (Fig. 3) determines whether either read frame pointers 214 (Fig. 2) or next read frame pointers 216 associate frame buffer 204A with the subject source. If not, processing proceeds to step 604 (Fig. 6), in which video router 302A (Fig. 3) associates frame buffer 204A (Fig. 2) with the subject source in write frame pointers 218.
Conversely, if either read frame pointers 214 or next read frame pointers 216 associate frame buffer 204A with the subject source, processing proceeds to test step 606 (Fig. 6). In test step 606, video router 302A (Fig. 3) determines whether either read frame pointers 214 (Fig. 2) or next read frame pointers 216 associate frame buffer 204B with the subject source. If not, processing proceeds to step 608 (Fig. 6), in which video router 302A associates frame buffer 204B with the subject source in write frame pointers 218.
Conversely, if either read frame pointers 214 or next read frame pointers 216 associate frame buffer 204B with the subject source, processing proceeds to step 610, in which video router 302A associates frame buffer 204C with the subject source in write frame pointers 218.
After any of steps 604, 608, and 610, processing according to logic flow diagram 512, and therefore step 512 (Fig. 5), completes. After step 512, processing in response to the V-sync in incoming video signal 210A according to logic flow diagram 500 is complete.
As a result of processing according to logic flow diagram 500, video router 302A (i) accurately tracks the pixel address mapping from incoming video signal 210A into the pixel address space of key frame 202 and frame buffers 204A-C, and (ii) ensures that the next frame of incoming video signal 210A will be written into one of frame buffers 204A-C that will not be used imminently by display logic 200.
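The per-source handling of an incoming V-sync (logic flow diagrams 500 and 512), including the optional latency-reducing fast path of steps 508-510, might be sketched as follows; buffer indices 0 through 2 stand for frame buffers 204A-C and all names are assumptions:

```c
/* Sketch of incoming V-sync handling for one source. */
void on_input_vsync(int *write_buf, int *next_read_buf, int *read_buf,
                    int current_display_row, int initial_y)
{
    /* Step 506: the frame just completed becomes the next frame to be read. */
    *next_read_buf = *write_buf;

    /* Steps 508-510 (optional): if the output scan has not yet reached the
     * window, show the just-completed frame in the current output frame. */
    if (current_display_row < initial_y)
        *read_buf = *write_buf;

    /* Step 512 / logic flow diagram 512: choose as the new write buffer any of
     * the three buffers that is neither being read nor scheduled to be read next. */
    for (int b = 0; b < 3; b++) {
        if (b != *read_buf && b != *next_read_buf) {
            *write_buf = b;
            break;
        }
    }
}
```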
As described above with reference to logic flow diagram 500 (Fig. 5), the latency between receipt of an incoming frame of motion video and display of that frame is reduced by including steps 508 and 510, for the reasons given above. This latency can be reduced further by performing steps 508-510 at a time before the V-sync of the incoming video signal. This is illustrated in logic flow diagram 400B (Fig. 7), which is an alternative to logic flow diagram 400 (Fig. 4) for processing in response to the H-sync of an incoming motion video signal.
Logic flow diagram 400B (Fig. 7) includes steps 402-404, which are as described above with reference to Fig. 4. Processing proceeds from step 404 to test step 702 (Fig. 7), in which video router 302A (Fig. 3) determines whether Y counter 312 indicates that the current input row of pixels of incoming video signal 210A is a predetermined test row. The predetermined test row represents the limit beyond which the incoming frame of incoming video signal 210A will be completely received in less time than an output scan of the whole incoming frame takes. This relationship can be expressed as follows:
Time read(Y 0Y end)<Time write(Y testY end)??????(2)
In equation (2), Time_read(Y_0 → Y_end) represents the time needed to read a frame of incoming video signal 210A from frame buffers 204A-C. This value depends on the frame rate of display 102 and the number of scan lines occupied by a frame of incoming video signal 210A. Time_write(Y_test → Y_end) represents the time needed to store into frame buffers 204A-C the portion of a frame of incoming video signal 210A consisting of the rows from Y_test through the end of the frame. This value depends on the frame rate of incoming video signal 210A and the row selected as Y_test. Y_test is selected as the earliest row of incoming video signal 210A for which equation (2) is true.
In test step 702 (Fig. 7), video router 302A determines whether the input row of pixels is the row determined to be the test row. If not, processing according to logic flow diagram 400B ends.
Conversely, if the input row of pixels is the predetermined test row, processing proceeds to steps 508-510, which are described above with reference to Fig. 5. The latency reduction of steps 508-510 described above can thus be applied in circumstances in which receipt of the frame is not yet complete but will complete before the output scan of the entire frame completes.
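The selection of the test row of equation (2) can be illustrated with a small calculation; the row-rate parameters and names are assumptions for the sketch:

```c
/* Choose the earliest input row Y_test for which the time to finish writing the
 * rest of the incoming frame is less than the time one full output scan of the
 * frame takes, i.e., for which equation (2) holds. Rates are in rows per second. */
int compute_test_row(int input_rows, double input_row_rate, double output_row_rate)
{
    double t_read = input_rows / output_row_rate;                /* Time_read(Y_0 -> Y_end) */
    for (int y_test = 0; y_test < input_rows; y_test++) {
        double t_write = (input_rows - y_test) / input_row_rate; /* Time_write(Y_test -> Y_end) */
        if (t_read > t_write)                                    /* equation (2) */
            return y_test;
    }
    return input_rows - 1;   /* degenerate case: test at the last row */
}
```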
[Alternative Embodiments of Compositing System 100 and Update Logic 212]
Figs. 9 and 10 show compositing system 900 and update logic 912, which are alternative embodiments of compositing system 100 (Fig. 2) and update logic 212 (Fig. 3), respectively. Figs. 9 and 10 are directly analogous to Figs. 2 and 3, respectively, except as noted below. Elements with analogous reference numerals in the figures are directly analogous to one another.
In Fig. 9, update logic 912 provides a source identifier signal 926. Unlike update logic 212 (Fig. 2), update logic 912 (Fig. 9) does not include the occlusion check performed by comparing source identifier 314 (Fig. 10) with the source indicated as visible in key frame 202 (Fig. 9). Instead, the logic used for the occlusion check lies outside update logic 912.
In particular, update logic 912 sends source identifier 926 to write frame pointers 218 and to match logic 936. Match logic 936 compares source identifier 926 with the source identifier retrieved from key frame 202 using the same address signal that is applied to frame buffers 204A-C, namely address flag 930 together with data 932, which jointly specify an address in the manner described below. Match logic 936 produces write enable signal 928, which enables writing if source identifier 926 matches the source identifier retrieved from key frame 202 and disables writing otherwise.
Demultiplexer 934 applies, according to write enable signal 928, a control signal retrieved from write frame pointers 218 to one of frame buffers 204A-C and disables writing to all other frame buffers of frame buffers 204A-C. The control signal retrieved from write frame pointers 218 corresponds to source identifier 926. Of course, other logic can also be used to apply write enable signal 928 to the one of frame buffers 204A-C identified by the write frame pointer 218 corresponding to source identifier 926 and to disable writing to all other frame buffers of frame buffers 204A-C.
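A minimal C sketch of this externalized occlusion check follows; the function names, the flat key-frame array, and the example values are illustrative assumptions rather than the patent's circuit. Writing proceeds only when the source identifier presented with the address matches the source recorded as visible at that pixel in key frame 202, and the write strobe is steered to the buffer named by the source's write frame pointer.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BUFFERS 3          /* frame buffers 204A-C */
    #define KEY_FRAME_PIXELS 16    /* tiny key frame for the example */

    /* Key frame 202: for each display pixel, the identifier of the source
     * that is visible there (here, source 1 everywhere). */
    static const uint8_t key_frame[KEY_FRAME_PIXELS] = {
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
    };

    /* Role of match logic 936 and demultiplexer 934: assert at most one
     * write enable, and only if the writing source is visible at 'addr'. */
    static void write_enables(uint32_t addr, uint8_t source_id,
                              int write_frame_pointer, bool we[NUM_BUFFERS])
    {
        bool visible = (key_frame[addr] == source_id);   /* occlusion check */
        for (int b = 0; b < NUM_BUFFERS; b++)
            we[b] = visible && (b == write_frame_pointer);
    }

    int main(void)
    {
        bool we[NUM_BUFFERS];
        write_enables(5, 1, 2, we);   /* source 1 is visible; write buffer C */
        printf("we = {%d, %d, %d}\n", we[0], we[1], we[2]);
        return 0;
    }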
To economize on the amount of data moved within compositing system 900, an address does not accompany each individual pixel value to be written. Instead, pixel values are collected together and written as streams at contiguous addresses, in the manner described more completely below. In particular, data lines 932 carry either address data or pixel data, as indicated by address flag 930. If address flag 930 indicates that an address appears on data lines 932, the addressing logic of key frame 202 and frame buffers 204A-C stores that address. Conversely, if address flag 930 indicates that pixel data appears on data lines 932, the pixel data is written to the previously stored address, which is then incremented to determine the next pixel location to be written. Accordingly, because the addresses of subsequent pixel data increment automatically, a stream of pixel data can be written following a single flagged address.
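The C sketch below illustrates this address-flag convention; the struct layout, function names, and buffer sizes are assumptions for illustration, not the patent's interface. A single flagged address is followed by a run of pixel words, and the receiving side auto-increments the stored address for each pixel.

    #include <stdint.h>
    #include <stdio.h>

    #define FB_SIZE (1024 * 1024)

    /* One word on data lines 932: the flag plays the role of address flag 930
     * and says whether the word carries an address or a pixel value. */
    typedef struct {
        uint8_t  is_address;
        uint32_t value;        /* address or pixel data */
    } bus_word_t;

    /* Addressing logic on the frame-buffer side: remember the last flagged
     * address, write each pixel word there, then auto-increment. */
    static void frame_buffer_sink(uint32_t *frame_buffer,
                                  const bus_word_t *stream, size_t n)
    {
        uint32_t addr = 0;
        for (size_t i = 0; i < n; i++) {
            if (stream[i].is_address) {
                addr = stream[i].value;                  /* store new base address */
            } else if (addr < FB_SIZE) {
                frame_buffer[addr++] = stream[i].value;  /* write, then increment */
            }
        }
    }

    int main(void)
    {
        static uint32_t fb[FB_SIZE];
        /* One address word followed by three contiguous pixels of a scan line. */
        bus_word_t stream[] = {
            { 1, 2000 },            /* address 2000 */
            { 0, 0x00FF0000 },      /* pixel -> 2000 */
            { 0, 0x0000FF00 },      /* pixel -> 2001 */
            { 0, 0x000000FF },      /* pixel -> 2002 */
        };
        frame_buffer_sink(fb, stream, sizeof stream / sizeof stream[0]);
        printf("fb[2001] = 0x%08X\n", fb[2001]);
        return 0;
    }

The design choice this models is the one the paragraph describes: address transfers are amortized over as long a run of contiguous pixels as possible.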
Fig. 10 shows update logic 912 in greater detail. Video router 1002A includes a queue 1006 in which received pixel data is buffered together with the end-of-frame V-sync and end-of-line H-sync signals, to help determine the relative pixel location within a frame of input video signal 210A. Addresses within frame buffers 204A-C are obtained using data fields 306-312 and 318 in the manner described above. Pixel traffic manager 1004 controls the use of frame buffers 204A-C by video routers 1002A-D through multiplexer 1008.
Pixel traffic manager 1004 uses information about the respective queues of video routers 1002A-D, such as queue 1006 of video router 1002A, to gather pixel data from each queue in batches, thereby making optimal use of frame buffers 204A-C. In particular, video router 1002A sends Q_HI, Q_LO, V-sync, and H-sync signals to pixel traffic manager 1004, and video routers 1002B-D send similar signals. The Q_HI signal indicates that queue 1006 of video router 1002A is relatively full and suggests to pixel traffic manager 1004 that video router 1002A should be given priority when access to frame buffers 204A-C becomes available. The Q_LO signal indicates that queue 1006 is relatively empty and suggests to pixel traffic manager 1004 that video router 1002A should be given lower priority, so that other video routers can use frame buffers 204A-C. The V-sync and H-sync signals allow pixel traffic manager 1004 to schedule path changes through multiplexer 1008 to coincide with the need to send a new address to frame buffers 204A-C. Whenever any of video routers 1002A-D gains the path through multiplexer 1008, the video router gaining the path sends new address data to frame buffers 204A-C through multiplexer 1008.
By maximizing the number of pixels of a given scan line of an input video signal that are written in a contiguous sequence, pixel traffic manager 1004 avoids transmitting address data as much as possible. Preferably, pixel traffic manager 1004 switches use of multiplexer 1008 from one of video routers 1002A-D to another only in circumstances in which a new address would have to be specified in any event. Unless a particular source occupies the full width of frame buffers 204A-C, any H-sync signal causes a discontinuity in the addresses at which pixel data is written. Accordingly, when the current video router sends an H-sync signal to pixel traffic manager 1004, pixel traffic manager 1004 changes the path. In the context of Fig. 10, the current video router is the one of video routers 1002A-D that currently holds the path through multiplexer 1008.
Similarly, unless a particular source occupies the whole of frame buffers 204A-C, any V-sync signal causes a discontinuity in the addresses at which pixel data is written. Accordingly, when the current video router sends a V-sync signal to pixel traffic manager 1004, pixel traffic manager 1004 changes the path. The H-sync and V-sync of an input video signal are thus normally the opportune times to switch to processing the buffered pixel data of another input video signal.
When changing the path through multiplexer 1008, pixel traffic manager 1004 uses the received Q_HI and Q_LO signals to determine the relative priority among video routers 1002A-D.
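A C sketch of one possible arbitration policy consistent with this description follows; the data structures and the specific priority ordering are illustrative assumptions. Routers asserting Q_HI are served first, routers asserting Q_LO are deferred, and the path is reconsidered only when the current router reaches an H-sync or V-sync, where a new address must be sent anyway.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_ROUTERS 4

    /* Status signals each of video routers 1002A-D reports to the manager. */
    typedef struct {
        bool q_hi;     /* queue nearly full: requests priority */
        bool q_lo;     /* queue nearly empty: yields priority */
        bool h_sync;   /* end of an input line reached in the queue */
        bool v_sync;   /* end of an input frame reached in the queue */
    } router_status_t;

    /* Decide which router should own the path through multiplexer 1008.
     * Returns the index of the selected router. */
    static int arbitrate(const router_status_t r[NUM_ROUTERS], int current)
    {
        /* Keep the current router until it hits an address discontinuity
         * (H-sync or V-sync); switching earlier would force an extra
         * address transfer. */
        if (!r[current].h_sync && !r[current].v_sync)
            return current;

        /* First pass: any router whose queue is nearly full. */
        for (int i = 0; i < NUM_ROUTERS; i++)
            if (r[i].q_hi)
                return i;

        /* Second pass: any router that is not explicitly yielding. */
        for (int i = 0; i < NUM_ROUTERS; i++)
            if (!r[i].q_lo)
                return i;

        return current;   /* every queue is nearly empty: no reason to switch */
    }

    int main(void)
    {
        router_status_t r[NUM_ROUTERS] = {
            { .h_sync = true },   /* router A just finished a line  */
            { .q_hi = true },     /* router B is backing up         */
            { .q_lo = true },     /* router C has little to send    */
            { 0 },
        };
        printf("next path: router %d\n", arbitrate(r, 0));  /* -> router 1 */
        return 0;
    }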
By avoiding sending address information for every pixel written, the embodiment of Figs. 9-10 minimizes the data/address path cycles required of frame buffers 204A-C and therefore provides an efficient write path to frame buffers 204A-C. This efficient write path is particularly important when multiple motion video signals are processed in real time. However, the processing of occluded pixels still consumes write cycles: if a particular pixel to be written is occluded, as indicated in key frame 202, write enable signal 928 disables all writing during the write cycle in which the occluded pixel is processed. By comparison, the embodiment of Figs. 2-3 discards occluded pixels, avoiding wasted memory access cycles of frame buffers 204A-C, and thus likewise provides an efficient write path to frame buffers 204A-C.
[picture-in-picture mixing]
Fig. 11 shows a variation that can be applied to compositing system 100 (Fig. 2) or compositing system 900 (Fig. 9). Mixing ratio array 1102 associates a mixing ratio with each source identifier used in read frame pointers 214 (Fig. 2), next read frame pointers 216, and write frame pointers 218. In particular, for each source identifier, the mixing ratio specifies an opacity. Opacity is represented by a value ranging from 0 to 1, where 0 represents fully transparent (i.e., invisible) and 1 represents fully opaque.
Multiplexer 220 of Figs. 2 and 9 is replaced by multiplexer 1120 (Fig. 11), which receives pixel data only from frame buffers 204A-C. Pixel data from frame buffer 204D is received by mixer 1104. Mixer 1104 also receives pixel data through multiplexer 1120, selected from frame buffers 204A-C according to the frame pointer chosen from read frame pointers 214 in the manner described above. Mixer 1104 mixes the received pixel data according to the opacity received from mixing ratio array 1102. The mixing performed by mixer 1104 is described by the following equation:
Pixel_1104 = α × Pixel_1120 + (1 − α) × Pixel_204D        (3)
In equation (3), α represents the opacity associated with the received pixel data, Pixel_1120 is the pixel value selected by multiplexer 1120 from frame buffers 204A-C, and Pixel_204D is the corresponding pixel value from frame buffer 204D. Mixing ratio array 1102 allows different opacities to be specified for the multiple asynchronous input motion video signals and allows each to be modified easily and individually. Each video window represented in picture 102 can therefore have a different degree of transparency.
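A short C sketch of the per-source mixing of equation (3) follows; the names and the packed 8-bit RGB representation are assumptions for illustration. Each source identifier looks up its opacity α in a table playing the role of mixing ratio array 1102, and the pixel chosen from frame buffers 204A-C is blended over the background pixel from frame buffer 204D.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SOURCES 4

    /* Role of mixing ratio array 1102: one opacity (0 = transparent,
     * 1 = opaque) per source identifier. */
    static const double mixing_ratio[NUM_SOURCES] = { 1.0, 0.75, 0.5, 0.25 };

    /* Equation (3) applied per 8-bit channel:
     * out = alpha * fg + (1 - alpha) * bg */
    static uint8_t blend_channel(uint8_t fg, uint8_t bg, double alpha)
    {
        return (uint8_t)(alpha * fg + (1.0 - alpha) * bg + 0.5);
    }

    /* Blend a packed 0xRRGGBB pixel selected by multiplexer 1120 (fg)
     * over the corresponding background pixel from frame buffer 204D (bg). */
    static uint32_t mix_pixel(uint32_t fg, uint32_t bg, int source_id)
    {
        double a = mixing_ratio[source_id];
        uint8_t r = blend_channel((fg >> 16) & 0xFF, (bg >> 16) & 0xFF, a);
        uint8_t g = blend_channel((fg >> 8) & 0xFF,  (bg >> 8) & 0xFF,  a);
        uint8_t b = blend_channel(fg & 0xFF,         bg & 0xFF,         a);
        return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
    }

    int main(void)
    {
        /* Source 2 (alpha = 0.5): white video over a black background -> grey. */
        printf("0x%06X\n", mix_pixel(0xFFFFFF, 0x000000, 2));
        return 0;
    }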
The above description is illustrative only and is not limiting. The present invention is limited solely by the full scope of the claims and their equivalents.

Claims (21)

1. A frame buffering apparatus comprising:
A. two or more frame buffers;
B. compositing data which, for each of two or more portions of a displayed image, specifies a corresponding one of two or more picture components, at least one of which is a motion video signal;
C. for each of the two or more picture components:
i. a read frame pointer which identifies a read frame buffer among the frame buffers, from which the picture component is to be read for display;
ii. a write frame pointer which identifies a write frame buffer among the frame buffers, to which received data representing the picture component are to be written;
D. update logic which (i) detects a new frame in the motion video signal, (ii) records that a selected one of the frame buffers associated with the motion video signal is to be read next, and (iii) modifies the write frame pointer associated with the motion video signal; and
E. display logic which detects a new frame of the displayed image and, in response thereto, updates the read frame pointer to identify the one of the two or more frame buffers that holds the most recently completed representation of the picture component, as recorded by the update logic.
2. The frame buffering apparatus of claim 1, wherein the read frame buffer identified by the read frame pointer of the motion video signal holds a whole frame of the motion video signal.
3. The frame buffering apparatus of claim 2, wherein incoming data of the motion video signal are written to the write frame buffer identified by the write frame pointer of the motion video signal.
4. The frame buffering apparatus of claim 1, further comprising:
C.iii. for each of the two or more picture components, a next read frame pointer which identifies a next frame buffer among the frame buffers, the next frame buffer holding a frame of the picture component that is ready to be displayed in the displayed image.
5. The frame buffering apparatus of claim 4, wherein the update logic records that the selected frame buffer associated with the motion video signal is to be read next by associating the selected frame buffer with the next read frame pointer of the motion video signal.
6. The frame buffering apparatus of claim 4, wherein the display logic updates the read frame pointer by copying the next read frame pointer to the read frame pointer.
7. The frame buffering apparatus of claim 4, wherein the update logic:
i. determines that a new portion of a selected one of the picture components is complete and ready for display at a time when reading, from the frame buffers, of the data defining the current frame of the displayed image has begun but has not yet reached the representation of the selected picture component in the frame buffers; and
ii. in response to that determination, and before reading of the representation of the selected picture component in the frame buffers, records that the selected frame buffer associated with the selected picture component is to be read next by associating the selected frame buffer with the next read frame pointer of the selected picture component.
8. The frame buffering apparatus of claim 1, wherein the portions of the displayed image are pixels.
9. The frame buffering apparatus of claim 1, wherein at least one of the picture components is a background.
10. The frame buffering apparatus of claim 9, wherein the background comprises computer-generated graphical content.
11. The frame buffering apparatus of claim 1, wherein the compositing data specifies which of overlapping ones of the picture components is visible for at least a portion of the displayed image.
12. The frame buffering apparatus of claim 1, wherein the display logic produces frames of the displayed image at a picture frame rate that differs from an input frame rate of the motion video signal.
13. The frame buffering apparatus of claim 1, wherein the display logic produces frames of the displayed image at a picture phase that differs from an input phase of the motion video signal.
14. A method of displaying an image, the method comprising:
for each of two or more portions of the image:
i. identifying a selected frame buffer among two or more frame buffers, wherein the selected frame buffer holds data representing that portion of the image; and
ii. causing that portion of the image to be displayed from the selected frame buffer.
15. The method of claim 14, wherein at least one of the two or more portions of the image comprises at least a portion of background frame content.
16. The method of claim 15, wherein at least one of the two or more portions of the image represents a motion video signal.
17. The method of claim 16, further comprising:
determining a degree of opacity of the motion video signal;
and wherein the causing of step (ii) comprises:
mixing the motion video signal with the background frame content according to the degree of opacity.
18. The method of claim 14, wherein each of the two or more portions is a pixel.
19. The method of claim 14, wherein the causing step comprises:
applying an address signal to the two or more frame buffers so as to access the two or more frame buffers using a single address signal.
20. A method of displaying a composite picture that includes two or more picture components, the method comprising:
independently and concurrently for each of the two or more picture components, performing the steps of:
i. selecting one of two or more frame buffers as a write frame buffer to which incoming picture data of the picture component are written;
ii. upon completion of a portion of the picture component, recording which of the frame buffers is the completed frame buffer holding the completed portion; and
iii. combining the completed portion of the picture component from the completed frame buffer into the composite picture.
21. A method of compositing a picture of a motion video signal into a composite picture, wherein the composite picture includes the picture of the motion video signal and image content other than that of the motion video signal, the method comprising:
A. designating a write frame buffer among two or more frame buffers, to which an incoming frame of the motion video signal is written;
B. upon completion of writing the incoming frame to the write frame buffer,
i. recording the write frame buffer as the most recently completed frame buffer; and
ii. designating a new write frame buffer among the frame buffers, to which the next incoming frame of the motion video signal is to be written, wherein the new write frame buffer differs from the most recently completed frame buffer; and
C. combining the completed incoming frame of the motion video signal and the image content other than that of the motion video signal into the composite picture, by retrieving the completed incoming frame from the most recently completed frame buffer and retrieving the other image content from a different one of the frame buffers.