This application claims priority to U.S. Provisional Application No. 60/793,288, filed April 18, 2006, U.S. Provisional Application No. 60/793,276, filed April 18, 2006, U.S. Provisional Application No. 60/793,277, filed April 18, 2006, and U.S. Provisional Application No. 60/793,275, filed April 18, 2006, the disclosures of each of which are hereby incorporated by reference herein.
Embodiment
The present invention relates to methods and apparatus for reducing memory access bandwidth and sharing memory and other processing resources among various portions of one or more video pipeline stages of one or more channels, in order to produce one or more high-quality output signals.
FIG. 4 illustrates a television display system in accordance with the principles of the present invention. The television display system shown in FIG. 4 may include television broadcast signals 202, a dual tuner 410, an MPEG codec 230, off-chip storage 240, off-chip memory 300, a dual video processor 400, a memory interface 530, and at least one external component 270. The dual tuner 410 may receive the television broadcast signals 202 and produce a first video signal 412 and a second video signal 414. The video signals 412 and 414 may then be provided to a dual decoder 420. The dual decoder 420 is shown inside the dual video processor 400, but may alternatively be located outside the video processor 400. The dual decoder 420 may perform functions similar to those of the demodulator 220 (FIG. 2) on the first video signal 412 and the second video signal 414. The dual decoder 420 may include at least a multiplexer 424 and two decoders 422. In another arrangement, the multiplexer 424 and one or both of the decoders 422 may be external to the dual decoder 420. The decoders 422 provide decoded video signal outputs 426 and 428. It should be understood that the decoders 422 may be any NTSC/PAL/SECAM decoders, as distinct from MPEG decoders. The inputs to the decoders 422 may be digital CVBS, S-Video, or component video signals, and the outputs of the decoders 422 may be digital standard-definition signals such as Y-Cb-Cr data signals. A more detailed discussion of the operation of the dual decoder 420 is provided in connection with FIGS. 7, 8, 9, and 10.
The multiplexer 424 may be used to select at least one of the two video signals 412 and 414, or of any number of input video signals. The at least one selected video signal 425 is then provided to a decoder 422. The at least one selected video signal 425 appears in the figure as a single video signal to avoid overcrowding the figure; however, it should be understood that video signal 425 may represent any number of video signals provided to the inputs of any number of decoders 422. For example, the multiplexer 424 may receive five input video signals and provide two of those five input video signals to two different decoders 422.
The particular video signal processing arrangement shown in FIG. 4 allows the internal dual decoder 420 on the dual video processor 400 to be used, thereby reducing the cost of using an external decoder, which may otherwise be needed for time-shifting applications. For example, one of the outputs 426 and 428 of the dual decoder 420 may be provided to a 656 encoder 440 in order to properly encode the video signal into a standard format before the video signal is interlaced. The 656 encoder 440 may be used to reduce the data size so that it can be processed at a faster clock frequency. For example, in some embodiments, the 656 encoder 440 may reduce 16-bit data, together with the h-sync and v-sync signals, to 8 bits for processing at twice the frequency. This may be the standard for interfacing between SD video and any NTSC/PAL/SECAM decoder and MPEG encoder. The encoded video signal 413 may then be provided, for example via a port on the video processor, to the external MPEG codec 230 in order to generate a time-shifted video signal. Another port, namely the flexiport 450 on the dual video processor 400, may be used to receive the time-shifted video signal from the MPEG codec 230. Processing the digital video signal outside the video processor in this way may be desirable because it reduces the complexity of the video processor. Also, the time-shifting performed by the MPEG codec 230 may require operations including compression, decompression, and interfacing with non-volatile mass storage devices, all of which may be outside the scope of the video processor.
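The following is a minimal sketch, in C, of the bus-width reduction described above: a 16-bit sample stream is serialized into an 8-bit stream clocked at twice the pixel rate. The byte layout (luma in the high byte, chroma in the low byte) and the transmission order are assumptions for illustration only and are not the 656 encoder 440's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Pack a 16-bit Y/C sample bus into an 8-bit stream at double the rate.
 * Assumed layout: Y in the high byte, Cb/Cr in the low byte of each sample. */
void pack_16_to_8(const uint16_t *in16, uint8_t *out8, size_t num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++) {
        out8[2 * i]     = (uint8_t)(in16[i] & 0xFF);        /* chroma byte first */
        out8[2 * i + 1] = (uint8_t)((in16[i] >> 8) & 0xFF); /* then the luma byte */
    }
}
```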
Displays of various other forms, such as a cursor, an on-screen display, or various other forms of display that may be used by at least one external component 270 or otherwise provided to an external component, apart from the broadcast video signal 202, may also be generated using the dual video processor 400. For example, the dual video processor 400 may include a graphics port 460 or a graphics generator 470 for this purpose.
The decoded video signals, together with various other video signals from the graphics port 460 or the graphics generator 470, may be provided to a selector 480. The selector 480 selects at least one of these video signals and provides the selected signal to an on-board video processing section 490. Video signals 482 and 484 are two illustrative signals that may be provided by the selector 480 to the on-board video processing section 490.
The on-board video processing section 490 may perform any suitable video processing functions, such as de-interlacing, scaling, frame rate conversion, channel blending, and color management. Any processing resource in the dual video processor 400 may send data to and receive data from the off-chip memory 300 (which may be SDRAM, RAMBUS, or any other type of volatile storage) via the memory interface 530. Each of these functions is described in more detail in connection with the description of FIG. 5.
Finally, the dual video processor 400 outputs one or more video output signals 492. The video output signals 492 may be provided to one or more external components 270 for display, storage, further processing, or any other suitable use. For example, one video output signal 492 may be a main output signal supporting high-definition television (HDTV) resolution, while a second video output signal 492 may be an auxiliary output supporting standard-definition television (SDTV) resolution. The main output signal may be used to drive a high-end external component 270, such as a digital TV or a projector, while the auxiliary output is used for a standard-definition (DVD) video recorder, a standard-definition TV (SDTV), a standard-definition preview display, or any other suitable video application. In this way, the auxiliary output signal may allow a user to record an HDTV program on any suitable SDTV medium (e.g., a DVD) while simultaneously viewing the program on an HDTV display.
FIG. 5 shows in further detail the functions of the on-board video processing section 490 of the dual video processor 400. The on-board video processing section 490 may include an input signal configuration block 510, a memory interface 530, a configuration interface 520, a front end pipeline section 540, a frame rate conversion (FRC) and scaling pipeline section 550, a color processing and channel blending pipeline section 560, and a back end pipeline section 570.
The configuration interface 520 may receive control information 522 from an external component, such as a processor, via for example an I2C interface. The configuration interface 520 may be used to configure the input signal configuration block 510, the front end 540, the frame rate conversion 550, the color processor 560, the back end 570, and the memory interface 530. The input signal configuration block 510 may be coupled to external inputs on the dual video processor 400 in order to receive video signals on inputs 502 (such as HDTV signals, SDTV signals, or any other suitable digital video signals) as well as the selected video signals 482 and 484 (FIG. 4). The input signal configuration block 510 may then be configured to provide at least one of the received video signals (e.g., signals 482, 484, and 502) to the front end 540 as video source 512.
Based on this configuration, different ones of these inputs provided to the on-board video processing section 490 may be processed by the on-board video processing pipeline at different times. For example, in one embodiment, the dual video processor 400 may include eight input ports. Exemplary ports may include two 16-bit HDTV signal ports, one 20-bit HDTV signal port, three 8-bit SDTV video signal ports (which may be in CCIR656 format), one 24-bit graphics port, and one 16-bit external on-screen display port.
The front end 540 may be configured to select among at least one video signal stream 512 (i.e., channel) of the available inputs and to process the selected video signal(s) along one or more video processing pipeline stages. The front end 540 may provide the processed video signal(s) from the one or more pipeline stages to the frame rate conversion and scaling pipeline stage 550. In some embodiments, the front end 540 may include three video processing pipelines and may provide three separate outputs to the FRC and scaling pipeline stage 550. In the FRC and scaling pipeline stage 550, there may be one or more processing channels. For example, a first channel may include a main scaler and a frame rate conversion unit, a second channel may include another scaler and frame rate conversion unit, and a third channel may include a lower-cost scaler. The scalers may be independent of one another. For example, one scaler may increase the size of an input image while another reduces the size of an image. Both scalers may work with 4:4:4 pixels (RGB/YUV, 24 bits) or 4:2:2 pixels (YC, 16 bits).
The color processing and channel blending pipeline stage 560 may be configured to provide color management functions. These functions may include color remapping; brightness, contrast, hue, and saturation enhancement; gamma correction; and pixel validation. In addition, the color processing and channel blending pipeline stage 560 may provide video blending functions for overlaying different channels, or may blend or overlay two blended video channels with a third channel.
The back end pipeline stage 570 may be configured to perform data formatting, signed/unsigned number conversion, saturation logic, clock delay, or any other suitable final signal operations that may be needed on one or more channels before output from the dual video processor 400.
Each of the pipeline segments may be configured to send data to and receive data from the off-chip memory 300 using the memory interface 530. The memory interface 530 may include at least a memory controller and a memory interface. The memory controller may be configured to run at the maximum speed supported by the memory. In one embodiment, the data bus may be 32 bits wide and may operate at a frequency of 200 MHz. This bus may provide a throughput approaching 12.8 gigabits per second. Each functional block that uses the memory interface 530 (i.e., each memory client) may address the memory in a burst operation mode. Arbitration among the memory clients may be done in a round robin fashion or by any other suitable arbitration scheme. A more detailed discussion of the various pipeline segments is provided in connection with the descriptions of FIGS. 12, 19, 20, 21, and 22.
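A minimal sketch of round-robin arbitration among memory clients is given below in C. The client list, request flags, and grant mechanism are hypothetical and for illustration only; note also that the 12.8 Gbit/s figure above is consistent with a 32-bit bus transferring data on both edges of the 200 MHz clock (i.e., DDR: 32 x 200 MHz x 2 = 12.8 Gbit/s), which is an assumption on our part.

```c
#include <stdbool.h>

#define NUM_CLIENTS 8            /* e.g. noise reducer, de-interlacer, scalers, ... */

typedef struct {
    bool request[NUM_CLIENTS];   /* which clients want a burst this arbitration round */
    int  last_grant;             /* index of the client granted most recently */
} rr_arbiter_t;

/* Grant the next requesting client after the one granted last time. */
int rr_arbitrate(rr_arbiter_t *arb)
{
    for (int i = 1; i <= NUM_CLIENTS; i++) {
        int candidate = (arb->last_grant + i) % NUM_CLIENTS;
        if (arb->request[candidate]) {
            arb->last_grant = candidate;
            return candidate;    /* this client may issue one burst */
        }
    }
    return -1;                   /* no client is requesting */
}
```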
Each component and pipeline stage in the dual video processor 400 may require a different clocking mechanism or clock frequency. FIG. 6 shows a clock generation system 600 that generates a variety of clock signals for this purpose. The clock generation system 600 includes at least a crystal oscillator 610, a general-purpose analog phase-locked loop circuit 620, digital phase-locked loop circuits 640a-n, and a memory analog phase-locked loop circuit 630. The output 612 of the crystal oscillator 610 may be coupled to the general-purpose phase-locked loop 620, the memory phase-locked loop 630, another component in the dual video processor 400, or any suitable component outside the processor, as required.
The memory analog phase-locked loop circuit 630 may be used to generate a memory clock signal 632 and other clock signals 636 of different frequencies, which may be selected by a selector 650 as the clock signal 652 for operating a memory device (e.g., a 200 MHz DDR memory) or another system component.
The general-purpose analog phase-locked loop 620 may generate a 200 MHz clock that may be used as the base clock for one or more digital phase-locked loop (PLL) circuits 640a-n. The digital PLL circuits 640a-n may be used in open loop mode, in which they behave as frequency synthesizers (i.e., multiplying the base clock frequency by a rational number). Alternatively, the digital PLL circuits 640a-n may be used in closed loop mode, in which they may achieve frequency lock by locking onto a corresponding input clock signal 642a-n (e.g., a video sync input). In closed loop mode, the digital PLLs are capable of achieving accurate frequency lock to extremely slow clock signals. For example, in the video processing field, the vertical video clock signal (e.g., v-sync) may be in the range of 50 to 60 Hz. The outputs 644a-n of the digital PLL circuits 640a-n may be used by various system components that require a variety of open loop or closed loop signals for different operations. Each of the outputs 644a-n should be understood as capable of providing a clock signal of a different frequency or of the same frequency.
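A minimal sketch of the open-loop "frequency synthesizer" behaviour described above follows: the base clock is multiplied by a rational number N/M. The register model and the particular N/M pair are illustrative assumptions, not the actual divider values used by the digital PLL circuits 640a-n.

```c
#include <stdio.h>
#include <stdint.h>

static double synth_freq_hz(double base_hz, uint32_t n, uint32_t m)
{
    return base_hz * (double)n / (double)m;   /* f_out = f_base * N / M */
}

int main(void)
{
    double base = 200e6;                      /* 200 MHz base clock from PLL 620 */
    /* e.g. derive a 74.25 MHz HD pixel-clock-like frequency: 200 MHz * 297/800 */
    printf("f_out = %.3f MHz\n", synth_freq_hz(base, 297, 800) / 1e6);
    return 0;
}
```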
One example of a component that may use the clock signals generated by the digital PLL circuits 640a-n is the dual decoder 420 (FIG. 4), whose operation is described in more detail in connection with FIGS. 7, 8, 9, and 10. The dual decoder 420 may include the decoders 422 (FIG. 4). As described in connection with FIGS. 7, 8, and 9, the decoders 422 may be used in different operating modes.
FIGS. 7, 8, and 9 show three exemplary modes of operation in which the decoders 422 are used to generate the video signals 426 and 428. These three operating modes may provide, for example, composite video, s-video, and component video signals.
The first of the three modes may be used to generate a composite video signal, and is described in connection with FIG. 7. The first decoder mode may include a DC restore unit 720, an analog-to-digital converter 730, and a decoder 422, each of which may be included in the dual decoder 420 (FIG. 4). The video signal 425 (FIG. 4), provided by the dual tuner 410 or, in another arrangement, by the multiplexer 424, may be provided to the DC restore unit 720. The DC restore unit 720 may be used when the video signal 425, which may be an AC-coupled signal, has lost its DC reference and needs that reference to be periodically reset in order to retain video characteristic information such as brightness. The video signal from the DC restore unit 720 is digitized by the analog-to-digital converter 730 and provided to the decoder 422.
In the first mode, the decoder 422 may use the digitized video signal 732 from a single analog-to-digital converter to generate the composite video signal. The analog-to-digital converter 730 and the decoder 422 may operate by receiving digital clock signals 644a-n (FIG. 6), which may be, for example, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 MHz. In addition, the decoder 422 may control the operation of the DC restore unit 720 using an output feedback signal 427. The output feedback signal 427 may be, for example, a 2-bit control signal that instructs the DC restore unit 720 to increase or decrease the DC offset on the video signal provided to the analog-to-digital converter 730.
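Below is a minimal sketch of the kind of decision the 2-bit feedback signal 427 could encode, assuming the decoder compares a digitized blanking level against a target. The command encoding, threshold, and level names are hypothetical, for illustration only.

```c
#include <stdint.h>

enum dc_cmd { DC_HOLD = 0, DC_UP = 1, DC_DOWN = 2 };  /* example 2-bit command values */

enum dc_cmd dc_restore_feedback(int measured_blank_level, int target_level,
                                int deadband)
{
    if (measured_blank_level < target_level - deadband)
        return DC_UP;     /* DC reference too low: raise the offset */
    if (measured_blank_level > target_level + deadband)
        return DC_DOWN;   /* DC reference too high: lower the offset */
    return DC_HOLD;       /* within tolerance: leave it alone */
}
```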
The second of the three modes may be used to generate an s-video signal, and is described in connection with FIG. 8. The second decoder mode may include all of the elements described in the first mode, plus a second analog-to-digital converter 820. The video signal 425 (FIG. 4) may be split into a first portion 812 and a second portion 810. The first portion 812 of the video signal 425 (FIG. 4), which may be provided by the multiplexer 424, may be provided to the DC restore unit 720, and the second portion 810 of the video signal 425 (FIG. 4) may be input to the second analog-to-digital converter 820. The first portion 812 of the video signal 425 from the DC restore unit 720 is digitized by the analog-to-digital converter 730 and provided to the decoder 422. In addition, the second portion 810 of the video signal 425 is also provided to the decoder 422 by the analog-to-digital converter 820. An s-video signal requires a two-wire analog port for connection to various devices (e.g., VCRs, DVD players, etc.).
In this second mode, the decoder 422 may use the digitized video signals 732 and 832 from the two analog-to-digital converters 730 and 820 to generate the s-video signal. The analog-to-digital converters 730 and 820 and the decoder 422 may operate by receiving digital clock signals 644a-n (FIG. 6), which may be, for example, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 MHz. In some embodiments, the first portion 812 of the video signal may be the Y channel of the video signal 425, and the second portion 810 of the video signal 425 may be the chroma channel of the video signal.
The third of the three modes may be used to generate a component video signal, and is described in connection with FIG. 9. The third decoder mode may include all of the elements described in the second mode, plus second and third DC restore units 930 and 920 and a multiplexer 940. The video signal 425 may be split into a first portion 914, a second portion 910, and a third portion 912. The first portion 914 of the video signal 425 (FIG. 4), which may be provided by the multiplexer 424, may be provided to the DC restore unit 720, the second portion 910 of the video signal 425 (FIG. 4) may be provided to the DC restore unit 930, and the third portion 912 of the video signal 425 (FIG. 4) may be provided to the DC restore unit 920. A component video signal requires a three-wire analog port for connection to various devices (e.g., VCRs, DVD players, etc.).
The first portion 914 of the video signal 425 from the DC restore unit 720 is digitized by the analog-to-digital converter 730 and provided to the decoder 422. The second and third portions 910 and 912 of the video signal 425 from the DC restore units 930 and 920 are selectively digitized by the analog-to-digital converter 820 (e.g., selected using the multiplexer 940) and provided to the decoder 422. The multiplexer 940 may receive a control signal 429 from the decoder 422 so that the analog-to-digital converter 820 time-multiplexes the second and third portions 910 and 912 of the video signal 425.
In some embodiments, in the third mode, the decoder 422 may use the digitized video signals 732 and 832 from the two analog-to-digital converters 730 and 820 to generate the component video signal. The analog-to-digital converters 730 and 820 and the decoder 422 may operate by receiving digital clock signals 644a-n (FIG. 6), which may be, for example, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 MHz. In addition, the decoder 422 may control the operation of the DC restore units 720, 930, and 920 using the output feedback signal 427. In some embodiments, the first, second, and third portions 914, 910, and 912 of the video signal 425 may be the Y channel, U channel, and V channel of the video signal 425, respectively.
It should be understood that various commonly available types of DC restore units, analog-to-digital converters, and video decoders may be used to perform the functions described above, and for brevity their specific operation is omitted from the discussion here.
In an embodiment shown in FIG. 10, all three decoder modes may be implemented using two decoders 422 and three analog-to-digital converters 730 or 820. The arrangement shown in FIG. 10 allows the dual decoder 420 (FIG. 4) to provide, substantially simultaneously, at least two video signals 426 and 428 corresponding to any two of the three modes (i.e., one video signal from each decoder).
FIG. 10 shows an exemplary implementation that uses two decoders to generate two composite video signals, a composite and an s-video signal, a composite and a component video signal, or two s-video signals. The exemplary implementation shown in FIG. 10 includes a set of multiplexers 1020, 1022, 1023, 1025, 1021, 1024, 1026, 1027, and 1028; three analog-to-digital converters 730, 820, and 1010; four DC restore units 720, 721, 930, and 920; a demultiplexer 1040; and two decoders 422a and 422b.
When used to generate two composite video signals, the exemplary implementation of FIG. 10 may operate in the following manner. A first video signal 425a may be coupled to the first input of multiplexer 1020, and a second video signal 914 may be coupled to the second input of multiplexer 1024. The first input of multiplexer 1020 may be selected and output to the fourth input of multiplexer 1021 so as to be input to the DC restore unit 720. The second input of multiplexer 1024 may be selected and output to the DC restore unit 721. The operation of the remainder of the implementation is similar to the composite video signal generation described in connection with FIG. 7. For example, the DC restore units 720 and 721, the analog-to-digital converters 730 and 1010, and the decoders 422a and 422b operate in a similar manner to generate the composite video signals, as shown in FIG. 7.
Using the exemplary implementation of FIG. 10 to generate a composite and an s-video signal, or a composite and a component video signal, is performed in a manner similar to the generation of two composite video signals described above. For example, the first and second portions 812 and 810 of the video signal 425 used to generate an s-video signal are provided to multiplexers 1022 and 1026. The outputs of multiplexers 1022 and 1026 are provided to multiplexers 1021 and 1027, which select the video signals to be processed by analog-to-digital converters 730 and 820. Similarly, multiplexer 1024 selects which video signal is processed by analog-to-digital converter 1010. A more detailed description of the multiplexer input selections for the various operating modes is provided in Table 1 below.
The exemplary implementation shown in FIG. 10 also makes it possible to generate two s-video signals 426 and 428. To provide this functionality, a first clock signal 644a operating at a first frequency and first phase (e.g., 20 MHz) is provided to analog-to-digital converter 730 and decoder 422a. A second clock signal 644b, which may operate at a second frequency 180 degrees out of phase with the first clock signal (e.g., 20 MHz, 180 degrees out of phase), may be provided to analog-to-digital converter 1010 and decoder 422b. A third clock signal 644c, at a third frequency that may be substantially twice the frequency of the first clock signal and in phase with the first clock signal (e.g., 40 MHz), may be provided to analog-to-digital converter 820. The clock signal 644b is provided to a multiplexer 1030 so as to selectively couple the clock signal 644b to multiplexers 1026 and 1027. By coupling the clock signal to the select inputs of multiplexers 1026 and 1027, the video signal inputs 810a-c on analog-to-digital converter 820 may be time-division multiplexed. The clock signal 644a is coupled to the demultiplexer 1040 to demultiplex the time-multiplexed video signals. A clearer description of the time-division multiplexing operation is provided in connection with FIG. 11.
FIG. 11 shows an exemplary timing diagram for time-multiplexing the second portions of the two video signals 425. By time-division multiplexing this operation, the need for a fourth analog-to-digital converter can be eliminated, thereby reducing the total cost of the dual video processor 400. The timing diagram shown in FIG. 11 includes three clock signals corresponding to the first, second, and third clock signals 644a, 644b, and 644c, respectively, and the outputs of the three analog-to-digital converters 730, 1010, and 820. As shown in the figure, clock 1 and clock 2 operate at half the frequency of clock 3 and transition on the falling edges of clock 3.
As shown, between time periods T1 and T4 a full cycle of clock 644a (clock 1) completes, and the output of analog-to-digital converter 730 (ADC 1) corresponding to the first portion 812a-c of the first video signal (S0) becomes available to decoder 422a for processing. On the rising edge of clock 3 at the beginning of time period T2, analog-to-digital converter 820 (ADC 3) begins processing the second portion 810a-c of the second video signal (S1) and completes the processing at the end of time period T3.
At the beginning of time period T3, analog-to-digital converter 1010 (ADC 2) begins processing the first portion 812a-c of video signal S1 and completes at the end of time period T6. The output of ADC 2 corresponding to the first portion 812a-c of video signal S1 becomes available to decoder 422b for processing at the end of time period T6. On the rising edge of clock 3 at the beginning of time period T4, analog-to-digital converter 820 (ADC 3) begins processing the second portion 810a-c of video signal S0 and completes the processing at the end of time period T5.
Thus, by the end of time period T6, both portions of the two video signals S0 and S1 have completed processing using only three analog-to-digital converters.
On the rising edge of clock 3 between time periods T5 and T6, the demultiplexer 1040 provides the output of ADC 3 corresponding to the second portion 810a-c of video signal S0 to decoder 422a to produce the processed video signal 426. Meanwhile, the second portion 812 of video signal S1 is selected for processing by analog-to-digital converter 820 (ADC 3) and becomes available at the end of time period T7.
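The following is a minimal sketch, in C, of the time-division multiplexing idea described above: a single converter running at twice the rate (clock 3) alternately carries the second portions of signals S0 and S1, and the samples are de-multiplexed back onto the two decoder paths. The sample type, tick model, and even/odd phase assignment are illustrative assumptions only.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint16_t sample_t;

/* One entry per tick of clock 3: even ticks carry S0's second portion,
 * odd ticks carry S1's second portion (i.e. the two are 180 degrees apart). */
void demux_adc3(const sample_t *adc3_stream, size_t n,
                sample_t *to_decoder_a, sample_t *to_decoder_b)
{
    size_t ia = 0, ib = 0;
    for (size_t t = 0; t < n; t++) {
        if ((t & 1) == 0)
            to_decoder_a[ia++] = adc3_stream[t];  /* S0 chroma -> decoder 422a */
        else
            to_decoder_b[ib++] = adc3_stream[t];  /* S1 chroma -> decoder 422b */
    }
}
```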
The foregoing describes an embodiment in which three analog-to-digital converters 730, 1010, and 820 are used to produce two s-video signals 426 and 428. Table 1 below summarizes the various exemplary select signals that may be provided to the corresponding multiplexers to produce the various combinations of composite (cst), component (cmp), and s-video (svid) signals.
Table 1
The dual decoder 420 may also be configured to handle unstable analog or digital signals, which may be received from a video cassette recorder (VCR). Unstable signals may be produced by a VCR as a result of various operating modes such as fast forward, rewind, or pause. During such conditions, the dual decoder 420 may process these types of signals to provide a good quality output signal.
Unstable video signals may be caused by unstable sync signals generated by the VCR. One suitable technique for handling unstable sync signals is to buffer the unstable video signal. For example, a first-in first-out (FIFO) buffer may be placed near the output of the decoder. First, using the unstable sync signals as a reference, the decoder output data may be written to the FIFO buffer. The sync signals and clock may be regenerated or recreated from logic within the decoder and may then be used to read the data from the FIFO buffer when this operating mode is encountered. In this way, the unstable video signal may be output with stable sync signals. In all other scenarios or operating modes, the FIFO buffer may be bypassed, and the output may be identical to the input of the FIFO.
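Below is a minimal sketch of this FIFO re-timing idea: lines decoded against the unstable sync are written into a small ring of line buffers, and a regenerated (stable) sync reads them back out; in normal modes the FIFO is bypassed. The buffer depth, line length, and bypass flag are illustrative, not the decoder's actual design.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define FIFO_LINES  8
#define LINE_PIXELS 720

typedef struct {
    uint8_t  line[FIFO_LINES][LINE_PIXELS];
    unsigned wr, rd;
    bool     bypass;                 /* true in stable operating modes */
} line_fifo_t;

/* Called on the (possibly unstable) decoder-side sync. */
void fifo_write_line(line_fifo_t *f, const uint8_t *decoded_line)
{
    memcpy(f->line[f->wr % FIFO_LINES], decoded_line, LINE_PIXELS);
    f->wr++;
}

/* Called on the regenerated, stable output sync. */
const uint8_t *fifo_read_line(line_fifo_t *f, const uint8_t *fallback_line)
{
    if (f->bypass || f->wr == 0)
        return fallback_line;                     /* bypass: output equals the FIFO input */
    if (f->rd < f->wr)
        f->rd++;                                  /* advance when buffered data is available */
    return f->line[(f->rd - 1) % FIFO_LINES];     /* otherwise repeat the last stable line */
}
```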
Alternatively, implementing the FIFO buffer in off-chip memory may enable suitable handling of unstable sync signals. For example, when an unstable sync signal is detected, the decoder may be placed in a 2-D mode, thereby using less off-chip memory. Most of the off-chip memory 300 normally used for 3-D operation becomes free and can be used to implement the aforementioned FIFO buffer (i.e., the equivalent of at least one full field of data may be available as free memory space). In addition, a FIFO buffer in off-chip memory may be able to store the pixels of an entire frame, so that even if the write and read rates do not match, frames at the output may be either repeated or dropped. Repeating or dropping a field or frame within a particular frame sequence may still allow the system to display a reasonably good picture.
FIG. 12 shows in further detail exemplary functions of the front end 540 in the video pipeline. In particular, a channel selector 1212 may be configured to select four channels from a plurality of video sources 512. The four channels may be processed along four pipeline stages in the front end 540. In some embodiments, the four channels may include: a main video channel, a PIP channel, an on-screen display (OSD) channel, and a data instrumentation or testing channel.
The front end 540 may implement various video processing stages 1220a, 1220b, 1230, and 1240 on any of the channels. In some embodiments, each channel may share one or more resources from any of the other stages to increase the processing power of each channel. Some examples of the functions that the video processing stages 1220a and 1220b may provide include noise reduction and de-interlacing, which may be used to produce the highest picture quality. The noise reduction and de-interlacing functions may also share the off-chip memory 300; as such, this memory is represented as a shared memory stage 1260, which is described in more detail in connection with the descriptions of FIGS. 13 and 15. To avoid overcrowding the figure, the shared memory stage 1260 is shown in FIG. 12 as part of the processing stages corresponding to channel 1. However, it should be understood that one or more shared memory stages 1260 may be part of any of the channel pipelines in the front end 540.
Noise reduction may remove impulse noise, Gaussian noise (spatial and temporal), and MPEG artifacts such as block noise and mosquito noise. De-interlacing may include generating progressive video from interlaced video by interpolating any missing lines using edge-adaptive interpolation in the presence of motion. Alternatively, the de-interlacing function may adaptively use a combination of temporal and spatial interpolation based on motion. Both the noise reducer and the de-interlacer may operate in the 3-D domain and may require fields of frames to be stored in off-chip memory. Accordingly, the de-interlacer and the noise reducer may act as clients of the memory interface 530, which may be used to access the off-chip memory. In some embodiments, the noise reducer and the de-interlacer may share the off-chip memory in order to maximize memory space and process data in the most efficient manner, as shown by the shared memory stage 1260. This process is described in more detail in connection with the descriptions of FIGS. 13 and 15.
Any of the three video processing stages 1220a, 1220b, and 1230 may run a format conversion to convert a video signal into the desired domain. For example, such a conversion may be used to change an input video signal stream to the YC 4:2:2 format in the 601 or 709 color space.
The front end 540 may also provide an instrumentation pipeline 1240 to run data instrumentation functions. The instrumentation pipeline 1240 may be used, for example, to find the start and end pixel and line positions of active video, and to find the preferred sampling clock phase when a controllable phase sampler (ADC) is present upstream. Performing these operations may help automatically detect input channel parameters such as resolution, letterboxing, and pillarboxing. Detecting such channel parameters in turn helps a microcontroller, or any other suitable processing element, use them to control features such as scaling and aspect ratio conversion. The front end 540 may also run sync video signal instrumentation functions on all four channels in order to detect a loss of sync signal, a loss of clock signal, or an out-of-range sync or clock signal. These functions may also be used to drive power management control by the microcontroller or any other suitable processing element.
At the end of the front end 540, a set of FIFO buffers 1250a-c may sample the video stream to provide sampled video signals 1252, 1254, and 1256 between the front end 540 and the frame rate conversion and scaling pipeline stage 550 (FIG. 5); the sampled video signals 1252, 1254, and 1256 may be used when the selected channels are reset.
A more detailed description of the shared memory stage 1260 is provided in connection with the descriptions of FIGS. 13 and 15. In particular, as shown in FIG. 13, the shared memory stage 1260 may include at least the functions of a noise reducer 330 and a de-interlacer 340. These are temporal functions that require field storage in order to produce high-quality images. By allowing the various memory access blocks (i.e., memory clients) to share the off-chip memory 300, the size of the off-chip memory 300 and the bandwidth needed to interface with the off-chip memory 300 can be reduced.
In 3-D mode, the noise reducer 330 may operate on two fields of the interlaced input. The two fields on which the noise reducer 330 operates may include the current field 1262 and the field two fields before the current field 1262 (i.e., the previous-to-previous field 332). In 3-D mode, the de-interlacer 340 may operate on three interlaced fields. These three fields may include the current field 1262, the previous field 1330, and the previous-to-previous field 332.
As shown in FIGS. 13 and 14, field buffers 1310 and 1312 may be shared by the noise reducer 330 and the de-interlacer 340. The noise reducer 330 may read the previous-to-previous field 332 from field buffer 1310 in the off-chip memory 300 and process it together with the current field 1262 to provide a noise-reduced output 322. The noise-reduced output 322 may be written into field buffer 1312 of the off-chip memory 300. The de-interlacer 340 may read the previous field 1330 from field buffer 1312 and the previous-to-previous field 332 from field buffer 1310 in the off-chip memory 300, process the read fields together with the current field 1262 or with the noise-reduced output 322, and provide de-interlaced video 1320 as output.
For example, as shown in FIG. 14, the current field 1262 (FIELD 1) may be provided to the noise reducer 330 during a first time period (i.e., T1), producing the noise-reduced output 322. After or before the noise reducer 330 completes processing of FIELD 1 (i.e., during time period T2), the noise-reduced output 322 (FIELD 1) may be provided by the noise reducer 330 to the de-interlacer 340, or may bypass the noise reducer 330 and be provided directly to the de-interlacer 340 via 1262 (for example, if noise reduction is not needed). In either case, during the second time period (i.e., time period T2), the noise-reduced output 322 (FIELD 1) may be written by the noise reducer 330 to field buffer 1312 in the off-chip memory 300.
As the next field in the frame (FIELD 2) becomes the current field being processed during time period T2, the output 1330 of field buffer 1312 (FIELD 1) may be read from the off-chip memory 300 by the de-interlacer 340. Field buffer 1312 thereby provides the noise-reduced output (FIELD 1) that was processed before the noise-reduced output 322 (FIELD 2) (i.e., the previous field).
During a third time period (i.e., T3), after or before the noise reducer 330 completes processing of the next current field 1262 (i.e., FIELD 2), the previous field 1330 in field buffer 1312 may be written to field buffer 1310. The next noise-reduced output 322 (FIELD 2) may be written to field buffer 1312, replacing the noise-reduced output (FIELD 1). During time period T3, the contents of field buffer 1312 are the noise-reduced output (FIELD 2) (i.e., the field preceding the current field), and the contents of field buffer 1310 are the noise-reduced output (FIELD 1) (i.e., the field two fields before the current field).
During time period T3, the noise reducer 330 may operate on the current field 1262 (FIELD 3) and the previous-to-previous field 332 (FIELD 1). During the same time period T3, the de-interlacer 340 may operate on the current field 1262 (FIELD 3) or the noise-reduced output (FIELD 3), the previous field 1330 (FIELD 2), and the previous-to-previous field 332 (FIELD 1). The sharing of the off-chip memory 300 between the noise reducer 330 and the de-interlacer 340 thus results in the use of only two field buffer sections, whereas, as shown in FIG. 3, four field buffer sections are generally needed in off-chip memory 300 to provide similar functionality.
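The following is a minimal sketch of the two-field-buffer rotation described above: each field period the noise reducer reads the previous-to-previous field from one buffer, the de-interlacer reads the previous field from the other buffer and the previous-to-previous field from the first, and then the buffers are rotated. The buffer layout, field size, and the trivial processing kernels are illustrative placeholders, not the hardware's actual algorithms or addressing.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define FIELD_BYTES (720 * 240)      /* example SD field size */

typedef struct {
    uint8_t buf_a[FIELD_BYTES];      /* like field buffer 1310: previous-to-previous field */
    uint8_t buf_b[FIELD_BYTES];      /* like field buffer 1312: previous (noise-reduced) field */
} shared_field_store_t;

/* Placeholder kernels standing in for the actual noise reduction and
 * de-interlacing algorithms. */
static void noise_reduce(const uint8_t *cur, const uint8_t *prev2, uint8_t *out)
{
    for (size_t i = 0; i < FIELD_BYTES; i++)
        out[i] = (uint8_t)((cur[i] + prev2[i]) / 2);   /* simple temporal average */
}

static void deinterlace(const uint8_t *cur, const uint8_t *prev,
                        const uint8_t *prev2, uint8_t *out)
{
    for (size_t i = 0; i < FIELD_BYTES; i++)
        out[i] = cur[i];                               /* trivial pass-through */
    (void)prev; (void)prev2;
}

void process_one_field(shared_field_store_t *s,
                       const uint8_t *current_field,
                       uint8_t *noise_reduced_out,
                       uint8_t *deinterlaced_out)
{
    /* noise reducer: current field against previous-to-previous field (buf_a) */
    noise_reduce(current_field, s->buf_a, noise_reduced_out);

    /* de-interlacer: current (or noise-reduced), previous (buf_b),
     * previous-to-previous (buf_a) */
    deinterlace(noise_reduced_out, s->buf_b, s->buf_a, deinterlaced_out);

    /* rotate: previous becomes previous-to-previous, new field becomes previous */
    memcpy(s->buf_a, s->buf_b, FIELD_BYTES);
    memcpy(s->buf_b, noise_reduced_out, FIELD_BYTES);
}
```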
By reducing the number of field buffer sections in memory, an additional video processing pipeline can be provided for the same amount of processing power, memory storage, and bandwidth, making it possible to perform high-quality video processing on at least two channels. In addition, the data transfer bandwidth between the dual video processor 400 and the off-chip memory 300 can be reduced, because only a single write port and two read ports are needed to provide the above functions.
In some other embodiments, the noise reducer 330 and the de-interlacer 340 may operate on multiple field lines in each frame simultaneously. As shown in FIG. 15, each of these lines may be stored in a current field line buffer 1520, a previous field line buffer 1530, and a previous-to-previous field line buffer 1510. The line buffers 1510, 1520, and 1530 may be storage locations within the dual video processor 400 that provide high efficiency and speed in storing and accessing data. To further reduce the amount of storage space, the line buffer 1510, which is used by both the noise reducer 330 and the de-interlacer 340, may be shared between the noise reducer and de-interlacer modules.
As shown in FIG. 15, when the current field 1262 is received by the noise reducer 330 and the de-interlacer 340, the current field 1262 may also be stored in the current field line buffer 1520, in addition to the operation of storing the current field in field buffer 1312 described in connection with FIGS. 13 and 14. This allows the noise reducer 330 and the de-interlacer 340 to simultaneously access multiple current field lines received at different time intervals. Similarly, the contents stored in field buffer sections 1310 and 1312 may be moved to the corresponding line buffers 1510 and 1530, which in turn buffer the previous-to-previous field lines (the noise-reduced output two fields before the current field) and the previous field lines (the noise-reduced output one field before the current field), respectively. This allows the noise reducer 330 and the de-interlacer 340 to simultaneously access multiple previous field lines and previous-to-previous field lines. Because the line buffers are included, the noise reducer 330 and the de-interlacer 340 can operate on multiple lines simultaneously. Consequently, because the noise reducer 330 and the de-interlacer 340 share access to the previous-to-previous field stored in field buffer section 1310, they can also share access to the corresponding line buffer 1510. This in turn reduces the amount of storage needed on, or in very close proximity to, the dual video processor 400.
Although only three line buffers are shown in FIG. 15, it should be understood that any number of field line buffers may be provided. In particular, the number of field line buffers provided depends on the amount of storage space available on the dual video processor 400 and/or on the number of simultaneous lines that the noise reducer 330 and the de-interlacer 340 may require. Moreover, it should be understood that any number of additional noise reduction units and de-interlacing units may be provided to help process multiple lines.
For example, if two noise reducers 330 and two de-interlacers 340 are provided, each capable of simultaneously processing three current field lines, then eight current field line buffers 1520, six previous field line buffers 1530, and six previous-to-previous field line buffers 1510 may be used to process multiple lines, where the output of each line buffer is coupled to the corresponding input of the noise reducer and de-interlacer units. In fact, it is contemplated that, if the required number of noise reducers and de-interlacers and the on-chip space are available, the contents of one or more frames may be stored in field buffers.
FIG. 16 shows in further detail the frame rate conversion and scaling pipeline 550 (FIG. 5) (FRC pipeline). The FRC pipeline 550 may include at least scaling and frame rate conversion functionality. In particular, the FRC pipeline 550 may include at least two scaling modules, which may be placed in two of the scaler slots 1630, 1632, 1634, and 1636: one scaler is used to provide scaling for a first channel and one is used to provide scaling for a second channel. The advantages of this arrangement will become clearer from the description of FIG. 17. Each of the scaling modules in the scaler slots 1630, 1632, 1634, and 1636 may be capable of upscaling or downscaling by any scaling ratio. The scalers may also include circuitry for performing aspect ratio conversion, nonlinear 3-zone horizontal scaling, interlacing, and de-interlacing. In some embodiments, scaling may be performed in a synchronous mode (i.e., the output is synchronous with the input) or through the off-chip memory 300 (i.e., the output may be positioned anywhere with respect to the input).
The FRC pipeline 550 may also include functionality for frame rate conversion (FRC). At least two of the channels may include frame rate conversion circuitry. To perform FRC, video data should be written to a memory buffer and read from the buffer at the desired output rate. For example, a frame rate increase results from reading the output buffer faster than the incoming frames arrive, causing particular frames to be repeated over time. A frame rate decrease results from reading the frames to be output from the buffer at a rate slower than the rate at which particular frames are written (i.e., reading frames more slowly than the input rate). Reading a particular frame during the period in which the video data is available (i.e., during active video) may cause frame tearing or video artifacts.
In particular, to avoid video artifacts such as frame tearing from appearing in the active video, the repeating or dropping of frames should occur over an entire input frame, rather than in the middle of a field within a frame. In other words, video discontinuities should occur only across frame boundaries (i.e., during the vertical or horizontal sync intervals when no picture data is provided), and not in the active video region. A tearless control mechanism 1610 may operate to mitigate such discontinuities by, for example, controlling the times at which the memory interface 530 reads portions of a frame from memory. FRC may be performed in a normal mode or in a tearless mode (i.e., using the tearless control mechanism 1610).
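Below is a minimal sketch of frame rate conversion in which frame repeats and drops are constrained to whole-frame boundaries, in the spirit of the tearless behaviour described above. The one-decision-per-output-vsync model and the field names are illustrative assumptions only.

```c
typedef struct {
    double in_period;     /* seconds per input frame, e.g. 1/60.0  */
    double out_period;    /* seconds per output frame, e.g. 1/72.0 */
    double in_time;       /* time at which the next input frame completes */
    long   newest_frame;  /* index of the newest fully written frame       */
} frc_state_t;

/* Called once per OUTPUT vsync: decide which stored frame to display.
 * Repeats (output faster than input) or drops (output slower than input)
 * happen only at frame boundaries, never mid-field. */
long frc_next_output_frame(frc_state_t *s, double out_time)
{
    while (s->in_time <= out_time) {       /* ingest frames completed so far */
        s->newest_frame++;
        s->in_time += s->in_period;
    }
    return s->newest_frame;                /* display the newest complete frame */
}
```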
In addition to the two scalers that may be placed in two of the scaler slots 1630, 1632, 1634, and 1636 for the first and second channels, a lower-end scaler 1640 may also be provided on a third channel. The lower-end scaler 1640 may be a more basic scaler, for example one that performs only 1:1 or 1:2 upscaling or any other necessary scaling ratio. Alternatively, one of the scalers in the first and second channels may perform the scaling for the third channel. Multiplexers 1620 and 1622 may control which of the at least three channels is directed to which of the available scalers. For example, multiplexer 1620 may select channel 3 for a first type of scaling operation in the scaler in slot 1630 or 1632, and multiplexer 1622 may select channel 1 for a second type of scaling operation in the scaler in slot 1634 or 1636. It should be understood that one channel may also use any number of the available scalers.
The FRC pipeline 550 may also include a smooth film mode to reduce motion judder. For example, there may be a film-mode detection block in the de-interlacer that detects the mode of the incoming video signal. If the video input signal is running at a first frequency (e.g., 60 Hz), it may be converted up to a higher frequency (e.g., 72 Hz) or down to a lower frequency (e.g., 48 Hz). In the case of conversion to a higher frequency, a frame-repeat indication signal may be provided from the film-mode detection block to the FRC block. The frame-repeat indication signal may be high during a first set of frames (e.g., one frame) of the data generated by the de-interlacer and low during a second set of frames (e.g., four frames). During the portion of time when the frame-repeat indication signal is high, the FRC may repeat a frame, thereby generating the correct data sequence at the higher frequency. Similarly, in the case of conversion to a lower frequency, a frame-drop indication signal may be provided from the film-mode detection block to the FRC block. During the time period when the frame-drop indication signal is high, a particular set of frames is dropped from the sequence, thereby generating the correct data sequence at the lower frequency.
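As a worked example of the frame-repeat indication described above, a 60 Hz to 72 Hz conversion corresponds to a 6/5 ratio, so repeating one frame out of every five produces the correct output rate. The sketch below assumes that pattern purely for illustration; the actual block's signalling may differ.

```c
#include <stdbool.h>
#include <stdio.h>

static bool frame_repeat_indication(unsigned input_frame_index)
{
    return (input_frame_index % 5) == 4;   /* high on every fifth input frame */
}

int main(void)
{
    unsigned out = 0;
    for (unsigned in = 0; in < 10; in++) {
        out++;                              /* every input frame is emitted once */
        if (frame_repeat_indication(in))
            out++;                          /* and repeated when the signal is high */
    }
    printf("10 input frames -> %u output frames\n", out);  /* prints 12, i.e. x 6/5 */
    return 0;
}
```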
Depending on the type of scaling required, a scaler may be configured to be placed in any of the scaler slots 1630, 1632, 1634, and 1636, as shown by the scaler positioning module 1660. Scaler slots 1632 and 1636 are both positioned after the memory interface, with scaler slot 1632 corresponding to scaling operations performed on the first channel and scaler slot 1636 corresponding to scaling operations performed on the second channel. As shown in the figure, one scaler positioning module 1660 may include a multiplexer 1624 that selects the output of a particular scaler configuration, while another scaler positioning module 1660 may omit the multiplexer and instead couple the scaler output directly to another video pipeline component. The multiplexer 1624 provides the flexibility of implementing three operating modes (described in more detail in connection with FIG. 17) using only two scaler slots. For example, if multiplexer 1624 is provided, the scaler positioned in slot 1630 may be coupled to the memory to provide downscaling or upscaling, and may also be coupled to multiplexer 1624. If no memory operation is needed, multiplexer 1624 may select the output of scaler slot 1630. Alternatively, if a memory operation is needed, the scaler in scaler slot 1630 may scale the data, and multiplexer 1624 may select the data from another scaler, placed in scaler slot 1632, that upscales or downscales the data. The output of multiplexer 1624 may then be provided to another video pipeline component, such as the blanking time optimizer 1650, which is described in more detail in connection with the description of FIG. 18.
As shown in FIG. 17, the scaler positioning module 1660 may include at least an input FIFO buffer 1760, a connection to the memory interface 530, at least one of three scaler positioning slots 1730, 1734, and 1736, a write FIFO buffer 1740, a read FIFO buffer 1750, and an output FIFO buffer 1770. The scaler positioning slots may correspond to the slots described in FIG. 16. For example, scaler positioning slot 1734 may correspond to slot 1630 or 1634; similarly, scaler positioning slot 1730 may correspond to slot 1630 (as mentioned above, the use of multiplexer 1624 allows slot 1630 to provide the functionality of scaler positioning slots 1730 and 1734). One or two scalers may be positioned in any one or two of the three scaler positioning slots 1730, 1734, and 1736 with respect to the memory interface 530. The scaler positioning module 1660 may be part of any of the channel pipelines in the FRC pipeline 550.
When synchronous mode is needed, a scaler may be positioned in scaler positioning slot 1730. In this mode, there may be no FRC in the system, which eliminates the need for the particular FRC channel pipeline to access memory. In this mode, the output v-sync signal may be locked to the input v-sync signal.
A scaler may instead be positioned in scaler positioning slot 1734. Positioning the scaler in slot 1734 may be desirable when FRC is needed and the input data are to be downscaled. The input data are downscaled before being written to memory (i.e., because a smaller frame size may be needed), which reduces the amount of memory storage that may be required. Because less data is stored to memory, the output data read rate may also be reduced, thereby lowering the total memory bandwidth required (which in turn reduces cost) and providing a more efficient system.
In another scenario, a scaler may be positioned in scaler positioning slot 1736. Positioning the scaler in slot 1736 may be desirable when FRC is needed and the input data are to be upscaled. The rate at which data are provided to the memory may be lower than the rate at which the output data are read (i.e., the frame size at the input is smaller than at the output). In turn, by storing the smaller frame and increasing the frame size later with a scaler at the output, less data is written to memory. If, on the other hand, the scaler were positioned in slot 1734, before the memory, and used to upscale the input data, a larger frame would be stored in memory, requiring more bandwidth. In this case, however, by positioning the scaler after the memory, the smaller frame can first be stored in memory (thereby consuming less bandwidth) and later read back and upscaled.
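The following short sketch works through the bandwidth reasoning above: when upscaling with FRC, placing the scaler after the memory means only the smaller input frames cross the memory bus. The frame sizes, frame rates, and pixel depth are illustrative values only.

```c
#include <stdio.h>

int main(void)
{
    double in_w = 720, in_h = 480, in_fps = 60;      /* example SD input               */
    double out_w = 1920, out_h = 1080, out_fps = 60; /* example HD output after scaler */
    double bytes_per_pixel = 2.0;                    /* 4:2:2, 16 bits per pixel       */

    /* write + read traffic if the small frame is stored and upscaled afterwards */
    double small = 2 * in_w * in_h * in_fps * bytes_per_pixel;
    /* write + read traffic if the frame were upscaled before being stored */
    double large = 2 * out_w * out_h * out_fps * bytes_per_pixel;

    printf("scaler after memory : %.1f MB/s\n", small / 1e6);
    printf("scaler before memory: %.1f MB/s\n", large / 1e6);
    return 0;
}
```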
Because there are two independent scalers in two separate scaler positioning modules 1660 for the first and second channels, when both scaler positioning modules 1660 place demands on memory access, it may be the case that one requires high-bandwidth memory access while the other requires only low-bandwidth memory access. A blanking time optimizer (BTO) multiplexer 1650 may provide one or more storage buffers (large enough to store one or more field lines) in order to reduce memory bandwidth and to allow any number of channels to share the stored field lines, thereby reducing memory storage requirements.
FIG. 18 is an illustrative example of the operation of the BTO multiplexer 1650 (FIG. 16). As shown in FIG. 18, a first channel (the main channel) occupies the majority of the screen 1810, and a second channel (the PIP channel) occupies a smaller portion of the screen 1810. As a result, over the same time interval the PIP channel has less active data and requires fewer accesses to memory than the main channel, and therefore requires less bandwidth.
For example, if a field line in a frame contains 16 pixels, the PIP channel may occupy only 4 pixels of the entire field line in that frame, while the main channel occupies the remaining 12 pixels. Accordingly, the amount of time the PIP channel has to access memory in order to process its 4 pixels is four times that of the main channel, so the bandwidth it requires is smaller, as shown by memory access timeline 1840 (i.e., the PIP channel has a larger blanking time interval). Therefore, to reduce the memory bandwidth required, the PIP channel may access memory at a much slower rate, allowing the main channel to use the remaining bandwidth.
The BTO multiplexer 1650 may be configured to use different clock rates when accessing memory on different channels. For example, when a slower clock rate may be needed on a particular channel, the BTO multiplexer 1650 may receive the requested data from a memory access block (client) 1820 (i.e., the PIP channel) using one clock rate 1844, store the data in a field line storage buffer, and access memory using a second clock rate 1846, which may be slower. By preventing the client from accessing memory directly at a high clock rate and instead having it use a line buffer that accesses memory at the slower clock rate, the bandwidth requirement is reduced.
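A minimal sketch of the arithmetic behind this follows: the PIP client, which owns only 4 of the 16 pixels in a line, can drain its line buffer toward memory at a proportionally slower rate while the main channel uses the remaining bandwidth. The pixel clock value is an assumption for illustration.

```c
#include <stdio.h>

int main(void)
{
    double pixel_clock_hz = 74.25e6;   /* assumed display pixel rate        */
    int line_pixels = 16;              /* example line length from the text */
    int pip_pixels = 4;                /* PIP's share of the line           */
    int main_pixels = line_pixels - pip_pixels;

    /* each client only has to move its own pixels once per line period */
    double line_rate     = pixel_clock_hz / line_pixels;   /* lines per second    */
    double pip_mem_rate  = pip_pixels  * line_rate;        /* pixels/s to memory  */
    double main_mem_rate = main_pixels * line_rate;

    printf("PIP  memory access rate : %.2f Mpixels/s\n", pip_mem_rate / 1e6);
    printf("Main memory access rate : %.2f Mpixels/s\n", main_mem_rate / 1e6);
    return 0;
}
```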
The BTO multiplexer 1650 may enable the field line buffers of different channels to be shared, which may further reduce the amount of storage required in the off-chip memory 300. In this way, the BTO multiplexer 1650 may use the shared field line buffers to blend or overlay the different channels that share a portion of the display.
The output of the BTO multiplexer 1650 may be provided to the color processing and channel blending video pipeline 560 (FIG. 5). FIG. 19 shows the color processing and channel blending (CPCB) video pipeline 560 in more detail. The CPCB video pipeline 560 includes at least a sampler 1910, a visual processing and sampling module 1920, an overlay engine 2000, an auxiliary channel overlay 1962, further main channel and auxiliary channel scaling and processing modules 1970 and 1972, a signature accumulator 1990, and a downscaler 1980.
The functions of the CPCB video pipeline 560 may include at least improving video signal characteristics, such as image enhancement through luma and chroma edge enhancement, and film grain generation and addition through blue noise shaping masks. In addition, the CPCB video pipeline 560 may blend at least two channels. The output of the blended channels may be selectively blended with a third channel to provide a three-channel blended output and a two-channel blended output.
As shown in FIG. 21, the CMU 1930, which may be included in the overlay engine 2000 portion of the CPCB video pipeline 560, may improve at least one video signal characteristic. The video signal characteristics may include adaptive contrast enhancement 2120, global brightness, contrast, hue, and saturation adjustment of the image, localized intelligent color remapping 2130, intelligent saturation control that keeps hue and brightness constant, gamma control 2150 and 2160 performed through look-up tables, and color space conversion (CSC) 2110 to the desired color space.
The architecture of the CMU 1930 enables the CMU to receive the video channel signal 1942 in any format and convert the output 1932 to any other format. A CSC 2110 at the front of the CMU pipeline may receive the video channel signal 1942 and convert any possible 3-color space into the video color processing space (e.g., converting RGB to YCbCr). In addition, a CSC at the end of the CMU pipeline may convert from the color processing space to the output 3-color space. A global processing function 2140 may be used to adjust brightness, contrast, hue, and/or saturation, and may be shared with the output CSC. Because the CSC and the global processing function 2140 both perform matrix multiplication operations, the two matrix multipliers can be combined into one. This sharing may be performed by precomputing the final coefficients after combining the two matrix multiplication operations.
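The following is a minimal sketch of the coefficient-combining idea described above: the color space conversion and the global brightness/contrast/hue/saturation adjustment can each be modeled as a 3x3 matrix plus an offset, so the two can be collapsed into a single matrix applied once per pixel. The matrix values below are placeholders, not the CMU's actual coefficients.

```c
#include <stdio.h>

typedef struct { double m[3][3]; double off[3]; } xform3_t;

/* combined(x) = a(b(x))  =>  M = A*B, offset = A*off_b + off_a */
static xform3_t combine(const xform3_t *a, const xform3_t *b)
{
    xform3_t c = {0};
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                c.m[i][j] += a->m[i][k] * b->m[k][j];
        c.off[i] = a->off[i];
        for (int k = 0; k < 3; k++)
            c.off[i] += a->m[i][k] * b->off[k];
    }
    return c;
}

int main(void)
{
    xform3_t csc  = {{{1,0,0},{0,1,0},{0,0,1}}, {0,0,0}};       /* placeholder CSC matrix    */
    xform3_t glob = {{{1.1,0,0},{0,1.1,0},{0,0,1.1}}, {8,0,0}}; /* placeholder global adjust */
    xform3_t one  = combine(&glob, &csc);   /* single precomputed matrix applied per pixel */
    printf("combined[0][0] = %.2f, offset[0] = %.2f\n", one.m[0][0], one.off[0]);
    return 0;
}
```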
The CPCB video pipeline 560 may also provide dithering to a given number of bits, which may be required by a display device. An interlacer may also be provided for at least one of the channel outputs. The CPCB video pipeline 560 may also generate control outputs (Hsync, Vsync, Field) for at least one of the channel outputs that may be displayed on a device. In addition, the CPCB video pipeline 560 may separate brightness, contrast, hue, and saturation adjustment globally for at least one of the output channels and may provide extra scaling and FRC for at least one of the output channels.
Referring again to Figures 16 and 19, the channel outputs 1656, 1652, and 1654 from the FRC pipeline 550 are provided to the CPCB video pipeline 560. The first channel 1656 can be processed along a first path in which the sampler 1910 can be used to upsample the video signal on the first channel 1656, and the output 1912 of the sampler 1910 can be provided to both the primary channel overlay 1960 and the auxiliary channel overlay 1962 in order to produce a blended image for at least one of the outputs. The second channel 1652 can be processed along a second path that provides the visual processing and sampling module 1920. The output of the visual processing and sampling module 1920 (which may upsample the video signal) can be input to the video overlay 1940 (or overlay engine 2000) in order to blend with or position the third channel 1654 (which may also be run through the sampler 1910). The functions of the overlay engine 2000 are described in more detail in connection with Figure 20.
The output 1942 of the video overlay (which may be the first video channel signal 1623 overlaid with the second video channel signal 1625) can be provided through the CMU 1930 to the primary channel overlay 1960, and can also be provided to a multiplexer 1950. In addition to receiving the video overlay output 1942, the multiplexer 1950 can also receive the outputs of the visual processing and sampling module 1920 and the sampler 1910. The multiplexer 1950 operates to select which of its video inputs to provide to the auxiliary channel overlay 1962. Alternatively, a multiplexer 1951 can select either the output of the multiplexer 1950 or the output 1932 of the CMU 1930 to be provided as video signal output 1934 to the auxiliary channel overlay 1962. The arrangement of the processing units before the primary channel overlay and the auxiliary channel overlay allows the same video signal to be provided to both. After further processing by units 1970 and 1972, the same video signal (VI) can simultaneously 1) be output as the primary output signal for display on the primary output 1974, and 2) be output as the secondary output signal to be further downscaled before being displayed or stored on the secondary output 1976.
To provide independent data selection control for both the primary output 1974 and the secondary output 1976, the primary and auxiliary channels can be formed by independently selecting from the first and second video channel signals 1932 and 1934 of the first and second video channel overlay module 1940. The auxiliary channel overlay module 1962 can select the first video channel signal 1652, the second video channel signal 1654, or the overlaid first and second video channel signals 1942. Because the CMU 1930 is applied to the first video channel signal 1652, the second video channel signal 1654 can be selected by the multiplexer 1951 either before or after the CMU 1930, depending on whether the first and second video channel signals have the same or different color spaces. In addition, the first and second video channel signals 1932 and 1934 can independently be blended with a third video channel signal 1956.
The CPCB video pipeline 560 can also provide scaling and FRC for the secondary output 1976, represented by the downscaler 1980. This feature may be needed in order to provide a secondary output 1976 that is separate from the primary output 1974. Because the higher-frequency clock should be selected as the scaling clock, the CPCB video pipeline 560 can operate off the primary output clock, since the secondary clock frequency may be less than or equal to the frequency of the primary clock. The downscaler 1980 can also have the ability to generate interlaced data, which can undergo FRC and output data formatting to be used as the secondary output.
In some scenarios, when the first channel is an SDTV video signal, the primary output 1974 should be an HDTV signal, and the secondary output 1976 should be an SDTV video signal, the CMU 1930 can convert the first-channel SD video signal to HD video and then perform HD color processing. In this case, the multiplexer 1950 can select the video signal 1942 (the signal that may not have passed through the CMU 1930) as its output, thereby providing the HD signal to the primary channel overlay module 1960 and the SDTV signal to the auxiliary channel overlay 1962. The further auxiliary channel scaling and processing module 1972 can perform color control for the secondary output 1976.
In some other scenarios, when the first channel is an HDTV video signal, the primary output 1974 should be an HDTV signal, and the secondary output 1976 should be an SDTV video signal, the CMU 1930 can perform HD processing, and the multiplexer 1951 can select the CMU output 1932 so that the processed HDTV signal is provided to the auxiliary channel overlay module 1962. The further auxiliary channel scaling and processing module 1972 can perform color control for the secondary output 1976 to convert the color space to SDTV.
In still other scenarios, when the primary output 1974 and the secondary output 1976 should both be SD video signals, the further channel scaling and processing modules 1970 and 1972 can perform similar color control functions to place the signals in condition for output to the corresponding primary output 1974 and secondary output 1976.
It should be appreciated that if a video channel does not use a particular portion of a pipeline in pipeline segments 540, 550, 560, and 570 (Figure 5), then that portion can be configured to be used by another video channel to enhance video quality. For example, if the second video channel 1264 does not use the deinterlacer 340 in the FRC pipeline 550, then the first video channel 1262 can be configured to use the deinterlacer 340 of the second video channel pipeline in order to improve its video quality. As described in connection with Figure 15, an additional noise reducer 330 and an additional deinterlacer 340 can improve the quality of a particular video signal by allowing the shared-memory pipeline segment 1260 to process additional field lines simultaneously (for example, processing 6 lines simultaneously).
Some exemplary output formats that can be provided using the CPCB video pipeline 560 include: National Television System Committee (NTSC) and Phase Alternating Line (PAL) primary and secondary outputs of the same input image; HD and SD (NTSC or PAL) primary and secondary outputs of the same input image; two different outputs in which the first-channel image is provided on the primary output and the second-channel image is provided on the secondary output; the overlaid first and second channel video signals on the primary output and one channel video signal (the first channel or the second channel) on the secondary output; different OSD blend factors (alpha values) on the primary output and the secondary output; independent brightness, contrast, hue, and saturation adjustments on the primary output and the secondary output; different color spaces for the primary output and the secondary output (for example, Rec. 709 for the primary output and Rec. 601 for the secondary output); and/or a sharper or smoother image on the secondary output obtained by using different sets of scaling coefficients on the first-channel scaler and the second-channel scaler.
Figure 20 shows the overlay engine 2000 (Figure 19) in further detail. The overlay engine 2000 includes at least the video overlay module 1940, the CMU 1930, first and second channel parameters 2020 and 2030, a selector 2010, and a primary M-plane overlay module 2060. It should be appreciated that the primary M-plane overlay 2060 is similar to the primary channel overlay 1960 (Figure 19), but may include additional functionality that can be used to blend or overlay further channel video signals 2040 with the third-channel input 1912 (Figure 19).
The overlay engine 2000 can generate a single video channel stream by placing M available independent video/graphics planes onto the final display background (display canvas). In one particular embodiment, the overlay engine 2000 can generate a single channel stream by placing 6 planes onto the final display background. The position of each plane on the display screen can be configurable. The priority of each plane can also be configurable. For example, if the positions of planes on the display background overlap, the priority ranking can be used to resolve which plane is placed on top and which plane is hidden. The overlay can also be used to assign an optional border to each plane.
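A minimal sketch of configurable plane position and priority is given below, assuming each plane is a rectangle painted onto a character canvas in ascending priority order so that the highest-priority plane ends up on top where positions overlap; the structure, dimensions, and fill characters are illustrative only and do not describe the actual hardware.

#include <string.h>
#include <stdio.h>

#define CANVAS_W 16
#define CANVAS_H 8

/* A plane occupies a rectangle on the display canvas; higher priority wins
 * where rectangles overlap. */
typedef struct {
    int x, y, w, h;
    int priority;      /* larger value = closer to the top */
    char fill;         /* stand-in for the plane's pixel data */
} Plane;

static void compose(char canvas[CANVAS_H][CANVAS_W], const Plane *planes, int n)
{
    memset(canvas, '.', CANVAS_H * CANVAS_W);
    /* Paint planes in ascending priority so the highest-priority plane
     * ends up on top wherever positions overlap. */
    for (int pr = 0; pr < n; pr++)
        for (int i = 0; i < n; i++) {
            if (planes[i].priority != pr) continue;
            for (int y = planes[i].y; y < planes[i].y + planes[i].h && y < CANVAS_H; y++)
                for (int x = planes[i].x; x < planes[i].x + planes[i].w && x < CANVAS_W; x++)
                    canvas[y][x] = planes[i].fill;
        }
}

int main(void)
{
    Plane planes[2] = {
        { 0, 0, 16, 8, 0, 'M' },   /* main plane covering the whole canvas */
        { 2, 1,  6, 4, 1, 'P' },   /* PIP plane placed on top of it */
    };
    char canvas[CANVAS_H][CANVAS_W];
    compose(canvas, planes, 2);
    for (int y = 0; y < CANVAS_H; y++)
        printf("%.*s\n", CANVAS_W, canvas[y]);
    return 0;
}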
Examples of the further video channel signals 2040 and their sources can include: a primary plane, which may be the first channel video signal 1652; a PIP plane, which may be the second channel video signal 1654; a character OSD plane, which may be generated using an on-chip character OSD generator; and a bitmapped OSD plane, which may be generated using a bitmapped OSD engine. The OSD images can be stored in memory, where a memory interface can be used to fetch the pre-stored bitmapped objects from various locations in memory and place them on a canvas, which can also be stored in memory. The memory interface can also perform format conversion while fetching the requested objects. The bitmapped OSD engine can read the stored canvas and send it to the overlay in raster-scan order. Additional video channel signals 2040 can include a cursor OSD plane, which may be generated by a cursor OSD engine and may use a small on-chip memory to store the bitmap of a small object such as a cursor, and an external OSD plane received from an external source. The external OSD engine can send out a raster control signal and a read clock. The external OSD source can use these control signals as references and send its data in raster order. This data can be routed to the overlay. If the external OSD plane is enabled, the flexible port can be used to receive the external OSD data.
The overlay 1940, located before the CMU 1930, can overlay the first video channel stream 1653 and the second video channel stream 1655. The overlay 1940 allows the CMU 1930 to operate on a single video stream, eliminating the need to duplicate modules within the CMU 1930 for multiple video channel streams and thereby enabling the CMU 1930 to run more efficiently. In addition to providing the single video channel signal 1942 to the CMU 1930, the overlay 1940 can also provide a portion (i.e., per-pixel) indicator 1944 to the CMU 1930 that identifies whether a video portion belongs to the first video channel stream or the second video channel stream.
Two sets of programmable parameters 2020 and 2030, corresponding to the first video channel stream 1653 and the second video channel stream 1655, can be provided. The selector 2010 can use the portion indicator 1944 to select which programmable parameters to provide to the CMU 1930. For example, if the portion indicator 1944 indicates that the portion being processed by the CMU 1930 belongs to the first video channel stream 1653, the selector 2010 can provide the programmable parameters 2020 corresponding to the first video channel stream 1653 to the CMU 1930.
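To illustrate the per-pixel parameter selection, the following sketch assumes a greatly simplified CMU stage with only brightness and contrast parameters per channel; the structure, field names, and fixed-point scaling are hypothetical and serve only to show how the indicator chooses between the two parameter sets.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-channel CMU parameters (names are illustrative only). */
typedef struct {
    int brightness;
    int contrast;   /* fixed-point gain, 256 == 1.0 */
} CmuParams;

/* Process one pixel with the parameter set chosen by the per-pixel
 * indicator: 0 selects the first-channel set, 1 the second-channel set. */
static uint8_t cmu_process_pixel(uint8_t luma, int indicator,
                                 const CmuParams *chan1, const CmuParams *chan2)
{
    const CmuParams *p = indicator ? chan2 : chan1;   /* the selector */
    int v = ((luma * p->contrast) >> 8) + p->brightness;
    if (v < 0) v = 0;
    if (v > 255) v = 255;
    return (uint8_t)v;
}

int main(void)
{
    CmuParams chan1 = { 16, 280 };   /* brighter, higher contrast */
    CmuParams chan2 = { 0, 256 };    /* pass-through */
    uint8_t line[4]      = { 100, 100, 200, 200 };
    int     indicator[4] = {   0,   1,   0,   1 };  /* which stream each pixel came from */

    for (int x = 0; x < 4; x++)
        printf("%d ", cmu_process_pixel(line[x], indicator[x], &chan1, &chan2));
    printf("\n");
    return 0;
}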
There may be as many layers as there are video planes. Layer 0 may be the bottommost layer, and subsequent layers may have increasing layer indices. Layers may not have size or position properties, but they provide the order in which the planes should be stacked. The overlay engine 2000 can blend the layers moving upward starting from layer 0. Layer 1 can first be blended with layer 0, using the blend factor associated with the video plane placed on layer 1. The output of the layer 0 and layer 1 blend can then be blended with layer 2; the blend factor used can be the one associated with the plane placed on layer 2. The output of the layer 0, layer 1, and layer 2 blend can then be blended with layer 3, and so on, until the last layer has been blended. It should be appreciated that one of ordinary skill in the art may choose to blend the layers in any combination without departing from the teachings of the present invention. For example, layer 1 can be blended with layer 3 and then blended with layer 2.
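A minimal sketch of this bottom-up blending for a single pixel position is shown below, assuming each plane on layer i > 0 carries a normalized blend factor; the plane values and factors are illustrative only.

#include <stdio.h>

#define NUM_LAYERS 4

/* Blend the layers bottom-up for a single pixel position. Each layer i > 0
 * carries the value of the plane assigned to it and that plane's blend
 * factor alpha[i] in [0, 1]; layer 0 is the background. */
static double blend_layers(const double value[NUM_LAYERS],
                           const double alpha[NUM_LAYERS])
{
    double out = value[0];                       /* start from the bottom layer */
    for (int i = 1; i < NUM_LAYERS; i++)
        out = alpha[i] * value[i] + (1.0 - alpha[i]) * out;
    return out;
}

int main(void)
{
    /* Background, main video, PIP video, OSD graphic (illustrative values). */
    double value[NUM_LAYERS] = { 0.0, 0.8, 0.4, 1.0 };
    double alpha[NUM_LAYERS] = { 1.0, 1.0, 0.5, 0.25 };
    printf("blended pixel = %.3f\n", blend_layers(value, alpha));
    return 0;
}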
It should also be understood that although the overlay engine 2000 is described in connection with the primary output channel, the color processing and channel blending pipeline 560 can also be modified to provide M-plane overlay on the secondary output channel using the overlay engine 2000.
Figure 22 shows the back-end pipeline stage 570 of the video pipeline in further detail. The back-end pipeline stage 570 can include at least a primary output formatter 2280, the signature accumulator 1990, a secondary output formatter 2220, and a selector 2230.
The back-end pipeline stage 570 can perform output formatting for both the primary output and the secondary output, and can generate the control outputs (Hsync, Vsync, Field) for the secondary output. The back-end pipeline stage 570 can support both digital and analog interfaces. The primary output formatter 2280 can receive the processed primary video channel signal 1974 and generate a corresponding primary output signal 492a. The secondary output formatter 2220 can receive the processed secondary video channel signal 1976 and generate a corresponding secondary output signal 492b. The signature accumulator 1990 can receive the secondary video channel signal 1976, accumulate and compare the differences between the accumulated signals to determine the video quality of the output video signal, and provide this information to a processor so that system parameters may be changed as needed.
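The signature algorithm is not specified above; the sketch below assumes a simple order-sensitive rolling signature per frame and reports a nonzero difference indication when consecutive frames disagree, purely to illustrate the accumulate-and-compare idea that a supervising processor could act on. The structure and function names are hypothetical.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Accumulate a simple per-frame signature and report how much it differs
 * from the previous frame's signature. */
typedef struct {
    uint64_t prev_signature;
    int      have_prev;
} SignatureAccumulator;

static uint64_t frame_signature(const uint8_t *pixels, size_t n)
{
    uint64_t sig = 0;
    for (size_t i = 0; i < n; i++)
        sig = sig * 31 + pixels[i];          /* order-sensitive rolling signature */
    return sig;
}

/* Returns a nonzero "difference" indication when consecutive frames disagree. */
static uint64_t accumulate_and_compare(SignatureAccumulator *acc,
                                       const uint8_t *pixels, size_t n)
{
    uint64_t sig = frame_signature(pixels, n);
    uint64_t diff = acc->have_prev ? (sig ^ acc->prev_signature) : 0;
    acc->prev_signature = sig;
    acc->have_prev = 1;
    return diff;
}

int main(void)
{
    SignatureAccumulator acc = {0};
    uint8_t frame_a[4] = {10, 20, 30, 40};
    uint8_t frame_b[4] = {10, 20, 31, 40};
    printf("diff after frame A: %llu\n",
           (unsigned long long)accumulate_and_compare(&acc, frame_a, 4));
    printf("diff after frame B: %llu\n",
           (unsigned long long)accumulate_and_compare(&acc, frame_b, 4));
    return 0;
}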
The secondary video channel signal 1976 can also be provided to a CCIR656 encoder (not shown) before being formatted into the output 492b. The CCIR656 encoder can perform any necessary encoding to place the signal in condition for external storage or some other suitable means. Alternatively, by using the selector 2230 to select the bypassed secondary video channel signal 2240, the secondary video channel signal 1976 can be provided as the output signal 492b without being encoded or formatted.
An interlacing module (not shown) can also be provided in the back-end pipeline stage 570. If the input signal is interlaced, it can first be converted to progressive by the deinterlacer 340 (Figure 13). The deinterlacer may be necessary because all subsequent modules in the video pipeline stages may operate in the progressive domain. If an interlaced output is desired, the interlacer in the back-end pipeline stage 570 can be selectively turned on.
The interlacer module can include a memory large enough to store at least two lines of pixels, but can be modified to store an entire frame if desired. The progressive input can be written to the memory using the progressive timing. Interlaced timing, locked to the progressive timing, can be generated at half the pixel rate. Data can be read from the memory using the interlaced timing. Even field lines are discarded in odd fields, and odd field lines are discarded in even fields. This in turn produces an interlaced output suitable for a given device.
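Assuming that each output field simply keeps every other line of the progressive frame (even lines for the even field, odd lines for the odd field), the following sketch illustrates the line-dropping step; the timing and line-buffer details described above are omitted, and the frame dimensions and values are illustrative only.

#include <stdio.h>

#define WIDTH  8
#define HEIGHT 4

/* Produce one interlaced field from a progressive frame by keeping only the
 * even lines (field = 0) or only the odd lines (field = 1) of the frame. */
static void make_field(const int frame[HEIGHT][WIDTH], int field,
                       int out[HEIGHT / 2][WIDTH])
{
    for (int y = field; y < HEIGHT; y += 2)         /* skip the other field's lines */
        for (int x = 0; x < WIDTH; x++)
            out[y / 2][x] = frame[y][x];
}

int main(void)
{
    int frame[HEIGHT][WIDTH];
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            frame[y][x] = y * 10 + x;               /* line number encoded in the value */

    int even_field[HEIGHT / 2][WIDTH], odd_field[HEIGHT / 2][WIDTH];
    make_field(frame, 0, even_field);               /* lines 0, 2, ... */
    make_field(frame, 1, odd_field);                /* lines 1, 3, ... */

    printf("first value of even field: %d, of odd field: %d\n",
           even_field[0][0], odd_field[0][0]);
    return 0;
}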
Thus, it can be seen that apparatus and methods are provided for providing multiple high-quality video channel streams using shared memory. Those skilled in the art will appreciate that the invention can be practiced in ways other than the described embodiments, which are presented for purposes of illustration rather than of limitation, and the invention is limited only by the claims that follow.