CN100593188C - Image frame processing method and device for displaying moving images to a variety of displays - Google Patents


Info

Publication number
CN100593188C
CN100593188C CN200580024443A
Authority
CN
China
Prior art keywords
frame
sequence
picture
picture frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200580024443A
Other languages
Chinese (zh)
Other versions
CN1989545A (en)
Inventor
青木幸代
大场章男
冈正昭
佐佐木伸夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc filed Critical Sony Computer Entertainment Inc
Publication of CN1989545A publication Critical patent/CN1989545A/en
Application granted granted Critical
Publication of CN100593188C publication Critical patent/CN100593188C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A rendering process for rendering an image frame and a postprocess for adapting the image frame to a display are separated. A rendering processing unit 42 generates an image frame sequence by performing rendering at a predetermined frame rate regardless of a condition that the image frame should meet for output to the display. A postprocessing unit 50 subjects the image frame sequence generated by the rendering processing unit to a merge process so as to generate and output an updated image frame sequence that meets the condition. Since the rendering process and the postprocess are separated, the image frame sequence can be generated regardless of the specification of the display, such as the resolution and frame rate of the display.

Description

Image frame processing method and apparatus for displaying moving images on a variety of displays
Technical field
The present invention relates generally to image frame processing techniques, and more particularly to techniques for generating image frame sequences adapted to various display conditions.
Background technology
With improvements in the technology for manufacturing thin displays such as LCDs and plasma displays, and with their falling prices, a variety of display devices are now found around us for reproducing moving images.
The specifications of displayable image data, including frame rate and resolution, differ according to the type of display. According to the related art, image data output from an image generating apparatus is processed in the display device so as to generate an image suited to that display. With this arrangement, as the variety of frame rates and resolutions that the display must accommodate increases, the manpower required to develop circuits and software increases. Related to this, the processing load placed on the display device also increases.
Summary of the invention
The present invention has been made in view of the above problems, and an object thereof is to provide image frame processing techniques adaptable to various types of displays.
One aspect of the present invention is an image frame processing method. The method comprises a rendering process for rendering an image frame sequence, and a postprocess for adapting the image frame sequence generated by the rendering process to a display. In the rendering process, the image frame sequence is generated by performing rendering at a predetermined frame rate, without considering the conditions that the image frames must satisfy for output to the display. In the postprocess, the image frame sequence generated by the rendering process is subjected to predetermined processing so as to generate and output an image frame sequence that satisfies those conditions.
Since the rendering process and the postprocess are separated, the image frame sequence can be generated independently of display specifications such as display resolution and frame rate.
Another aspect of the present invention provides a moving image display method for reading an image frame sequence from a memory and performing predetermined processing on the sequence for display, comprising: when a fast-forward request occurs, selectively reading from the memory the image frames to be displayed as a fast-forward picture; creating updated image frames by performing predetermined integration processing on the image frames read; and displaying the updated image frames.
According to this aspect, since a fast-forward frame is created by integrating a plurality of image frames, a value-added fast-forward picture can be obtained. "Integration processing" here means creating an updated image frame using part or all of the image information contained in a plurality of image frames.
Implementations of the present invention in the form of methods, apparatuses, systems, computer programs and recording media may also be practiced as other modes of the present invention.
Description of drawings
Fig. 1 shows the hardware configuration of an entertainment apparatus.
Fig. 2 is a functional block diagram of an image frame processing device.
Fig. 3 shows the base coordinate system.
Fig. 4 shows how the offset value is changed every four frames in the image frame sequence output from the rendering processing unit.
Fig. 5 illustrates point sampling involving four image frames.
Fig. 6 illustrates the generation of an image frame four times the size of the original four image frames.
Fig. 7 illustrates motion blur using four image frames.
Fig. 8 illustrates four image frames merged to produce an image frame of the same size.
Fig. 9 illustrates generating an enlarged image by performing point sampling on four image frames followed by bilinear interpolation.
Fig. 10 illustrates bilinear sampling.
Fig. 11 is a flowchart of the merging method determined in the merge condition setting unit.
Fig. 12 shows the hardware configuration of an image frame processing device 200 according to a second embodiment of the invention.
Fig. 13 is a functional block diagram of the image frame processing device according to example 4.
Fig. 14 shows the principle of a process for extracting image frames from an image frame sequence and creating fast-forward frames.
Fig. 15 shows the principle of a process for merging a plurality of image frames to create fast-forward frames.
Fig. 16 shows the principle of a process for reducing the number of merged image frames to create a fast-forward picture with a reduced fast-forward speed.
Fig. 17 is a functional block diagram of the image frame processing device according to example 5.
Fig. 18 shows the principle of a process for extracting image frames based on luminance information.
Fig. 19 is a functional block diagram of the image frame processing device according to example 6.
Fig. 20 shows the principle of a process for separating an image frame into a specific image area and a nonspecific image area.
Fig. 21 is a functional block diagram of the image frame processing device according to example 7.
Fig. 22 shows the principle of a path display process.
Fig. 23 is a functional block diagram of an image frame processing device capable of implementing the processes of examples 4 to 7.
Embodiment
(first embodiment)
A description will now be given of an embodiment in which an image frame processing device according to the present invention is applied to an entertainment apparatus that renders three-dimensional computer graphics (CG) images.
Fig. 1 shows the hardware configuration of the entertainment apparatus 100. The entertainment apparatus 100 has a graphics chip 18 and is capable of rendering and displaying three-dimensional images on a display 26 in real time.
The entertainment apparatus 100 has a main CPU 12, a main memory 14, a geometry processor 16, the graphics chip 18, a display controller 28 and an input/output (I/O) port 30. These blocks are connected to each other via a graphics bus 36 so as to allow mutual data transmission and reception. The display controller 28 can be connected to any of various displays 26, each having different specifications and display conditions.
The I/O port 30 is connected to an external storage device 32 such as a CD-ROM drive, DVD-ROM drive or hard disk drive, and to an input device 34 such as a keyboard or mouse for feeding key-entry data and coordinate data to the entertainment apparatus 100. The I/O port 30 controls data input and output for both the external storage device 32 and the input device 34. The I/O port 30 reads the rendering data and programs stored in the external storage device 32 and supplies them to the main CPU 12 and the geometry processor 16. The rendering data may, for example, be object data for the objects to be rendered. The I/O port 30 may also be configured to communicate with other equipment so as to import rendering data and programs.
The main CPU 12 controls the entertainment apparatus 100 globally and executes the rendering program stored in the external storage device 32. When executing the program, the main CPU 12 controls the graphics chip 18 in accordance with user input from the input device 34 so as to control the display of images.
The main CPU 12 controls the entertainment apparatus 100 by controlling data transmission between its constituent devices. For example, the main CPU controls, using the main memory 14 as a buffer, the transfer of geometry data generated by the geometry processor 16 to the graphics chip 18. The main CPU 12 also manages the synchronization of data transfers between the graphics chip 18, the external storage device 32, the input device 34 and the display 26. In this embodiment, the geometry processor 16 and the main CPU 12 are provided separately. Alternatively, these elements may be integrated so that the main CPU 12 performs the functions of the geometry processor 16.
The main memory 14 stores the object configuration data read from the external storage device 32 and the rendering program. Each item of object data includes vertex data for the plurality of polygons constituting the associated object. The main memory 14 has a texture buffer for storing textures used for texture mapping.
Under the control of the main CPU 12, the geometry processor 16 performs geometric processing on the object data stored in the main memory 14, such as coordinate transformations defining position or configuration, and processing related to the light sources illuminating the vertices. The geometry data obtained as a result of the geometric processing includes the vertex coordinates of the object, the texture coordinates at each vertex, and object attributes such as vertex brightness.
The graphics chip 18 comprises a rendering processor 20, a memory interface 22 and an image memory 24 such as an EDRAM. Under the control of the main CPU 12, the rendering processor 20 sequentially reads the geometry data generated by the geometry processor 16 and performs rendering on the geometry data to generate image frames. The RGB values of the pixels in an image frame and the alpha values indicating transparency are stored in the image memory 24. Z values indicating pixel depth are stored in a Z buffer (not shown). The Z buffer may be provided within the image memory 24.
The rendering processor 20 of the graphics chip 18 renders image frames into the image memory 24 via the memory interface 22 in accordance with rendering commands supplied from the main CPU 12. A high-speed bus connection is established between the rendering processor 20 and the memory interface 22, and between the memory interface 22 and the image memory 24, so that the rendering processor 20 can perform rendering in the image memory 24 at high speed. For example, the rendering processor 20 renders image frames of 640×480 pixels.
Image frames rendered by the rendering processor 20 are temporarily stored in the image memory 24. The main CPU 12 retrieves an image frame from the image memory 24 via the memory interface 22 and writes it to another memory such as the main memory 14. As required, the main CPU 12 converts the image frame into one that can be displayed on the display 26. The display controller 28 then receives the image frame via the bus 36 and displays it on the display 26.
Fig. 2 is a functional block diagram of the image frame processing device 10. The functions in Fig. 2 are implemented mainly by the graphics chip 18, the main CPU 12 and the main memory 14. Fig. 2 is an illustration focusing on functions; accordingly, these functions may be implemented variously by hardware only, by software only, or by a combination of hardware and software.
The object data reading unit 40 reads the geometry data of the objects to be rendered. The rendering processing unit 42 sequentially renders image frames containing the objects at a predetermined resolution and a predetermined frame rate. The rendered image frames are stored in the first memory 44, which serves as a buffer. The rendering processing unit 42 renders the image frames at a frame rate equal to or higher than the maximum frame rate of the displays 26 intended for use. The rendering processing unit 42 corresponds, for example, to the rendering processor 20 of Fig. 1.
The transfer control unit 46 reads the image frames stored in the first memory 44 and stores them in the second memory 48. The second memory 48 stores a plurality of image frames such that the chronological order among the frames can be identified. For example, the first memory 44 corresponds to the image memory 24 of Fig. 1, and the second memory 48 corresponds to the main memory 14. Alternatively, the second memory may be provided as any storage device or memory in the image frame processing device 10, such as the external storage device 32. Further, each of the first memory 44 and the second memory 48 may alternatively correspond to a different storage area within the same physical memory.
The interface unit 52 acquires resolution or frame rate information for the display 26 connected to the image frame processing device 10. The interface unit 52 may also acquire content information from the main CPU 12 or from the rendering program, such as information indicating whether the image is a still image or a moving image. The interface unit 52 may acquire display resolution or frame rate information from the user via the input device 34. The acquired information is delivered to the postprocessing unit 50. The postprocessing unit 50 may correspond, for example, to the main CPU 12 of Fig. 1.
The postprocessing unit 50 comprises a merge condition setting unit 54, a frame sequence acquiring unit 56 and a merge execution unit 58. The postprocessing unit 50 performs postprocessing on the image frame sequence rendered by the rendering processing unit 42 and stored in the second memory 48, so as to generate images that can be displayed on the display.
Specifically, the merge condition setting unit 54 sets an appropriate merge condition for the image frame sequence based on the information received from the interface unit 52. This processing will be described later with reference to Fig. 10. The frame sequence acquiring unit 56 retrieves the image frame sequence according to the condition information set by the merge condition setting unit 54, and delivers the retrieved image frames to the merge execution unit 58. The merge execution unit 58 performs a merge process on the image frames received. "Merge process" here means generating a single image frame from a plurality of image frames.
The image frames generated by the merge process in the merge execution unit 58 are output to the image display unit 60 at the frame rate set by the merge condition setting unit 54.
As described above, a feature of the first embodiment is that the "rendering" of the image frame sequence and the "postprocessing" are not performed as a single integrated process but are performed separately. The advantage obtained by separating rendering and postprocessing is that dependence on the type of display connected to the image frame processing device is eliminated.
More specifically, although the interface unit 52 acquires the conditions that the image frames output to the display must satisfy, the rendering processing unit 42 generates the image frame sequence independently of the conditions acquired by the interface unit 52. The postprocessing unit 50 then performs predetermined processing on the image frame sequence generated by the rendering processing unit 42, so as to generate for the display an updated image frame sequence in accordance with the conditions acquired by the interface unit 52. Thus, the rendering method in the rendering processing unit 42 need not be changed according to the conditions that the image frames must satisfy. Therefore, the rendering processing unit 42 only needs to have a generic structure. When the display connected to the image frame processing device 10 is changed to a different type, the change is accommodated by modifying the processing in the postprocessing unit 50. Accordingly, a greater variety of displays can be connected, ensuring a considerably high level of compatibility. This ultimately reduces the manpower required to develop circuits and software to adapt the processing device to various displays.
Furthermore, according to the related art, even when a rendering processor has a high rendering capability, the specifications of the display often cannot accommodate that capability, which requires limiting the rendering capability of the rendering processor. According to the first embodiment, however, the capability of the rendering processor can be exploited fully without such limitation. The postprocessing can be arranged such that the capability of rendering at a high frame rate is allocated to improving resolution and the like. By ensuring that the postprocessing is highly versatile, flexibility in designing the image frame processing device can be improved. In addition, since the task of image frame processing is not delegated to the display, the processing load placed on the display can be reduced.
It is also noteworthy that when the frame rate of a moving image being rendered, such as an animated video, is changed, the displayed animation may exhibit motion different from what the image creator intended. According to the related art, one method of dealing with this problem is to prepare a plurality of image sequences in consideration of the frame rates at which the image frames are ultimately displayed, so that an optimum viewing experience is enjoyed at any frame rate. In contrast, according to the first embodiment, only a single image frame sequence with a high frame rate needs to be prepared, irrespective of the frame rate of the display.
The first memory 44 functions as a frame buffer which stores, frame by frame, the image frame sequence rendered by the rendering processing unit 42. The image frames temporarily stored in the first memory 44 are sequentially transferred to the second memory 48, so the second memory 48 mainly functions as a work area of the postprocessing unit 50. The first memory 44 functioning as a frame buffer is usually implemented by an EDRAM or the like, whose cost is generally quite high. If the second memory were not provided in this embodiment, the first memory would need to be a large-capacity memory, because, as described later, up to four frames would have to be stored over four frame periods for the merge process. By providing the second memory 48, the first memory 44 only needs a capacity sufficient to store at least one image frame rendered by the rendering processing unit 42. It is therefore advantageous to provide the second memory 48 as the work area of the postprocessing unit 50 in addition to the first memory 44. The first memory 44 may be built into the same semiconductor circuit element in which the rendering processing unit 42 is built.
Fig. 3 shows the base coordinate system in which the pixels of each image frame are positioned in the rendering processing unit 42. The horizontal axis is indicated by x and the vertical axis by y. The coordinates of a pixel are indicated by (x, y). Each of the x and y coordinate values is a fixed-point value represented by a 12-bit integer part and a 4-bit fraction part. As illustrated, each pixel is centered at an integer coordinate point in the base coordinate system. A rendered image frame is stored pixel by pixel in the first memory. The coordinate system used to indicate positions in the first memory is called the window coordinate system, and memory address calculation is implemented using this coordinate system. The window coordinate system is a coordinate system for indicating positions in the frame buffer, with the top-left point of the rectangular area in the buffer as the origin.
Assuming a base coordinate value (Px, Py) and an offset value (Offx, Offy), the window coordinate value (Wx, Wy) is given by the following equations:
Wx = Px - Offx
Wy = Py - Offy
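As a minimal sketch (not code from the patent; the function name and sample coordinates are illustrative only), the base-to-window mapping above, applied with the four-frame offset cycle used in example 1, can be expressed as:

```python
# Illustrative sketch of the base-to-window coordinate mapping:
# Wx = Px - Offx, Wy = Py - Offy (names invented for this example).
def to_window_coords(px, py, offx, offy):
    """Map a base coordinate (px, py) to window coordinates given an offset."""
    return px - offx, py - offy

# The offsets cycled every four frames in example 1 (F1..F4).
OFFSETS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]

for i, (offx, offy) in enumerate(OFFSETS):
    # Each rendered frame shifts its rendering start point by a half pixel.
    print(f"frame F{i + 1}: base (3, 7) -> window",
          to_window_coords(3, 7, offx, offy))
```

Note that the coordinates are fixed-point values with a 4-bit fraction part in the patent; plain floats are used here for simplicity.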
Several examples will now be described in which the image frame processing device shown in Fig. 2 generates, in postprocessing, image frame sequences suited to various display conditions. It is assumed that the rendering processing unit 42 renders 640×480 images at a frame rate of 240 frames per second (hereinafter referred to as fps).
(example 1)
Fig. 4 shows how the offset value is changed every four frames in the image frame sequence output from the rendering processing unit 42. For convenience, assume that image frames F1, F2, F3 and F4 are generated by rendering in that order. The rendering processing unit 42 renders the first image frame F1 without any offset, renders the second image frame F2 with an offset of (0.5, 0), renders the third image frame F3 with an offset of (0, 0.5), and renders the fourth image frame F4 with an offset of (0.5, 0.5). The offsets in rendering by the rendering processing unit 42 are implemented by successively displacing the coordinates serving as the rendering start point in the rendering space. Hereinafter, such processing will be referred to as "pixel displacement processing".
Figs. 5 and 6 illustrate a first example of the merge process. In this example 1, the frame sequence is merged so as to create an image frame four times the size of the rendered image frames.
Fig. 5 is a schematic diagram showing how the pixels of the frames rendered with pixel displacement by the rendering processing unit 42 are arranged in the same window coordinate system. Referring to Figs. 5 and 6, circles labeled "1" represent pixels from the first image frame F1, circles labeled "2" represent pixels from the second image frame F2, circles labeled "3" represent pixels from the third image frame F3, and circles labeled "4" represent pixels from the fourth image frame F4. The spacing between the centers of neighboring pixels within each frame is "1" in both the x and y directions. As a result of the pixel displacement processing performed by the rendering processing unit 42, the pixels from image frame F2 are displaced by 0.5 in the x direction relative to the associated pixels from image frame F1, the pixels from image frame F3 are displaced by 0.5 in the y direction relative to the associated pixels from image frame F1, and the pixels from image frame F4 are displaced by 0.5 in both the x and y directions relative to the associated pixels from image frame F1. Therefore, when these four image frames are arranged in the same coordinate system as shown in Fig. 5, the pixels from the respective image frames are spaced 0.5 apart in both the x and y directions.
By performing grid sampling in the window coordinate system in units of 0.5 pixels rather than 1 pixel, an image frame having twice the number of pixels in both the x and y directions can be created. This will be described with reference to Fig. 6, a schematic diagram showing how the pixels are arranged. Although, for brevity, Fig. 6 shows only 4 horizontal pixels and 3 vertical pixels for each of the image frames 102, 104, 106 and 108, each frame actually contains 640 horizontal pixels and 480 vertical pixels. All the pixels are arranged in the relationship depicted for image frame 110 in Fig. 6. By arranging the 640×480-pixel image frames 102, 104, 106 and 108 in a grid configuration as described above, an image frame 110 with 1280×960 pixels is created, four times the size of a 640×480-pixel image frame. Hereinafter, this sampling method is referred to as "point sampling".
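The point-sampling merge can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: frames are represented as plain nested lists of pixel values, and the function name is invented.

```python
# Sketch of point sampling: interleave four half-pixel-shifted frames
# (F1..F4) into one frame with twice the pixels in each direction.
def point_sample_merge(f1, f2, f3, f4):
    """Merge four equally sized frames into one frame of 2x width and height."""
    h, w = len(f1), len(f1[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = f1[y][x]          # F1: no offset
            out[2 * y][2 * x + 1] = f2[y][x]      # F2: offset (0.5, 0)
            out[2 * y + 1][2 * x] = f3[y][x]      # F3: offset (0, 0.5)
            out[2 * y + 1][2 * x + 1] = f4[y][x]  # F4: offset (0.5, 0.5)
    return out
```

With four 640×480 frames as input, the result is the 1280×960 frame 110 described above; nine input frames with 1/3-pixel offsets would analogously yield nine times the pixels.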
According to this example 1, a plurality of image frames are generated in the rendering processing unit 42 by rendering with spatial displacement. The postprocessing unit 50 then performs on these image frames a merge process that maintains the displacement between the image frames, so as to generate an image frame sequence with a higher spatial resolution than the sequence output from the rendering processing unit 42. The phrase "maintaining the displacement between the image frames" means that the final image frame is obtained by using the pixels from the respective offset image frames without any modification. Thus, image frame sequences of different resolutions suited to different displays can be generated by postprocessing.
Example 1 can also be understood as a measure for reducing the frame rate. For example, by generating one image frame from four image frames as shown in Fig. 3, the frame rate is reduced to 1/4. This is advantageous in cases where the maximum frame rate of the display is lower than the frame rate of the rendering processing unit 42; by performing point sampling, a low-frame-rate, high-resolution image can be obtained.
The present embodiment is not limited to generating one image frame from four image frames. Alternatively, by generating one image frame from nine image frames, it is possible to generate an image frame with nine times as many pixels as the original image frames. The same holds when a larger number of image frames is involved. The larger the number of image frames, the lower the frame rate of the resulting image frames.
(example 2)
Figs. 7 and 8 illustrate a second example of the merge process. In this example 2, a motion blur effect is achieved by merging successive frames.
Referring to Fig. 7, the empty circles represent pixels from the image frames F1-F4 that have undergone the pixel displacement processing. In this example, an average RGB value is obtained from the RGB values of four neighboring pixels, and the resulting value is used as a new RGB value. The image frame is generated such that, as shown by the diagonally shaded circles in Fig. 7, the center of each pixel lies midway between the centers of the four empty-circle pixels.
Fig. 8 schematically illustrates this arrangement. That is, a 640×480 image frame is generated having RGB values obtained by multiplying the RGB values of the corresponding pixels in the 640×480 image frames F1-F4 by 0.25 and summing them.
According to example 2, a plurality of image frames are generated in the rendering processing unit 42 by rendering with spatial displacement. The postprocessing unit 50 then performs a merge process on the image frames so as to cancel the displacement between them. An updated image frame sequence is thereby generated with the same spatial resolution as the sequence output from the rendering processing unit 42. The phrase "cancelling the displacement between the image frames" means that a non-displaced image frame is finally obtained by blending the offset, rendered image frames. In this embodiment, one pixel is generated by blending four pixels. Cancelling the displacement between the image frames is substantively equivalent to generating images by time-dividing the interval between two temporally consecutive image frames and using the average of the divided images as the image of the target frame. Therefore, if the content of the image frame sequence rendered by the rendering processing unit is a moving image, cancelling the displacement applies a motion blur effect to the moving image. Like example 1, example 2 can be understood as a measure for reducing the frame rate. More specifically, example 2 makes it possible, while maintaining the original image frame resolution, to output an image frame sequence whose frame rate is 1/4 of that of the rendering processing unit 42.
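The displacement-cancelling merge of example 2 amounts to averaging corresponding pixels across the four frames. A minimal sketch follows (not the patent's code: frames are nested lists of (R, G, B) tuples, and the function name is invented):

```python
# Sketch of the motion-blur merge: average the RGB values of corresponding
# pixels across the shifted frames, producing one same-resolution frame.
def motion_blur_merge(frames):
    """Blend n equally sized RGB frames; each output RGB is the per-channel mean."""
    h, w, n = len(frames[0]), len(frames[0][0]), len(frames)
    return [[tuple(sum(f[y][x][c] for f in frames) / n for c in range(3))
             for x in range(w)]
            for y in range(h)]
```

With four frames, each pixel is weighted by 0.25 as in Fig. 8; the same averaging applied to a still image yields the supersampling-style antialiasing mentioned below.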
Example 2 can also be used when the content is a still image. In this case, an antialiasing effect is applied to the still image obtained by the merge process. The merge process in this case is similar to "supersampling", in which an original pixel is divided into subpixels so as to obtain data for the target pixel of the image, and the average of the subpixel data is adopted as the pixel data.
(example 3)
Fig. 9 has illustrated according to the merging of example 3 and has handled.In example 3, generate horizontal vertical ratio and be different from picture frame by the horizontal vertical ratio that presents the original image frame that processing unit 42 generated.For example, suppose that presenting processing unit 42 generates each and all have first picture frame 112, second picture frame 114, the 3rd picture frame 116 and the 4th picture frame 118 of the resolution of 720 * 480 pixels, and from picture frame 112118, generate have the aspect ratio that is different from original image frame 112-118, resolution is the target image frame 122 of 1920 * 1080 pixels.Referring to Fig. 9, it is respectively from the pixel of first picture frame 112, second picture frame 114, the 3rd picture frame 116 and the 4th picture frame 118 that the pixel that is associated is indicated in numeral " 1 ", " 2 ", " 3 " and " 4 ".
In a first step 130, the point sampling described in example 1 is performed. An image frame 120 of 1440×960 pixels (four times the size of each original image frame 112-118) is thereby generated. Subsequently, in a second step 132, bilinear sampling is performed so as to generate the image frame 122 with a resolution of 1920×1080 pixels. Bilinear sampling is an image interpolation method. In this example, the color of the pixel to be rendered is determined by linear interpolation of the RGB values of the four surrounding pixels.
A description of the bilinear sampling will now be given with reference to Figure 10. Assuming that the image frame 120 (1440×960) is scaled up to the image frame 122 (1920×1080), the coordinates of the pixel centers of the image frame 122 are computed in the coordinate system of the image frame 120. Figure 10 shows a portion 124 of the image frame 120 of Fig. 9. The empty circle 140 represents the center of a pixel in the image frame 122. To determine the color to be used when rendering this pixel at these coordinates, the RGB values are subjected to linear interpolation according to the coordinate displacements from the centers of the pixels 124a, 124b, 124c and 124d in Figure 10. It should be noted that the pixel 124a is a pixel from the first image frame 112 of Fig. 9, the pixel 124b a pixel from the second image frame 114, the pixel 124c a pixel from the third image frame 116, and the pixel 124d a pixel from the fourth image frame 118. Assuming that the horizontal displacement from the centers of the four pixels 124a-124d is α and the vertical displacement is β (see Figure 10), the RGB values of the empty circle 140 computed by linear interpolation are given by the following equations.
R = (1-α)(1-β)Rs1 + α(1-β)Rs2 + (1-α)βRs3 + αβRs4    (1)
G = (1-α)(1-β)Gs1 + α(1-β)Gs2 + (1-α)βGs3 + αβGs4    (2)
B = (1-α)(1-β)Bs1 + α(1-β)Bs2 + (1-α)βBs3 + αβBs4    (3)
where Rs, Gs and Bs represent the RGB values of the four pixels 124a-124d, and the suffixes s1, s2, s3 and s4 denote the components of the pixels 124a, 124b, 124c and 124d, respectively. The image frame 122 is generated by computing equations (1)-(3) for all the pixels included in the image frame 122 so as to determine their colors.
Equations (1) to (3) are based on the same principle of computation as general bilinear sampling. The difference from general bilinear sampling is that the color components are obtained from pixels of different image frames.
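Equations (1)-(3) can be sketched for a single color component as follows. This is an illustrative helper (the name bilerp is not from the embodiment); the four component values come from pixels 124a-124d, which in example 3 originate in four different source frames.

```python
def bilerp(c_s1, c_s2, c_s3, c_s4, alpha, beta):
    """One of equations (1)-(3): linear interpolation of a color
    component according to the displacements alpha (horizontal) and
    beta (vertical) from the centers of the four surrounding pixels."""
    return ((1 - alpha) * (1 - beta) * c_s1
            + alpha * (1 - beta) * c_s2
            + (1 - alpha) * beta * c_s3
            + alpha * beta * c_s4)

# The pixel at the empty circle 140, displaced by (0.25, 0.25) from the
# center of pixel 124a, is weighted mostly toward the frame-1 pixel:
r = bilerp(255, 0, 0, 0, 0.25, 0.25)  # (1-0.25)*(1-0.25)*255 = 143.4375
```

When alpha and beta are both 0 the result is exactly the pixel 124a value, and when both are 1 it is exactly the pixel 124d value, matching the behavior of ordinary bilinear sampling.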
When the resolution of the image frame sequence rendered by the rendering processing unit 42 and the resolution presented on the display are not in a relation in which one is an integral multiple of the other (for example, when the rendered image frames are 720×480 and the resolution of the image frames presented on the display is 1920×1080), the target resolution cannot be obtained merely by performing the pixel displacement process and point sampling. In this case, an intermediate image frame of 1440×960 is generated by the pixel displacement process and point sampling, and bilinear sampling is subsequently performed so as to obtain an image frame of 1920×1080. The frame rate is thereby reduced to 1/(the number of image frames used in the point sampling).
The pixel displacement process need not be performed. More specifically, bilinear sampling may be performed directly on the original image frames rendered by the rendering processing unit 42 to obtain the final image frame. However, by performing the pixel displacement process and point sampling before the bilinear sampling, it is possible to obtain an image that exhibits less degradation when enlarged. Alternatively, each of four image frames in the image frame sequence (for example, of size 720×480) may be enlarged into an image frame suitable for display (for example, 1920×1080), and the four enlarged image frames may be blended on that basis to obtain the final image frame.
The user may be given the opportunity to select one of examples 1 to 3 described above. In one method of practicing this embodiment, the image frame processing device makes this determination automatically. Figure 11 is a flowchart of the automatic determination performed by the merge condition setting unit 54. In this flowchart, which of the post-processes described in examples 1 to 3 is to be performed on the image frame sequence rendered by the rendering processing unit 42 is determined according to the frame rate, the resolution and the content.
The merge condition setting unit 54 compares the frame rate information of the display, obtained via the interface unit 52, with the frame rate of the image frames rendered by the rendering processing unit 42, so as to determine whether the frame rates match (S10). When the frame rates match (Yes in S10), the transmission control unit 46 transfers the image frames from the first memory 44 to the second memory 48. The frame sequence acquiring unit 56 reads the image frames from the second memory 48 at the same intervals as the frame rate of the rendering processing unit 42. The merge execution unit 58 outputs the image frames to the image display unit 60 without performing any post-process such as the merge process (S12). The rendering performance of the rendering processing unit 42 can thus be exploited fully, and images can be presented at the full-spec frame rate. In an alternative method, the image frame sequence may be output without temporarily storing it in the second memory 48. More specifically, using the first memory 44 as a buffer, the frame sequence acquiring unit 56 directly reads the image frame sequence rendered by the rendering processing unit 42 and outputs the image frames to the image display unit 60.
When the frame rates do not match (No in S10), the merge condition setting unit 54 compares the resolution of the image frames rendered by the rendering processing unit 42 (hereinafter referred to as "rendered image frames") with the display resolution information obtained via the interface unit 52, so as to determine whether the resolutions match (S14). If the display resolution is higher than the resolution of the rendered image frames (Yes in S14), the merge condition setting unit 54 determines whether the image content to be presented on the display is a still image or a moving image (S16). This determination can be made by reading information registered in the header of the program or the like. Alternatively, the determination as to still image or moving image can be made based on the value of a motion component, computed by a motion determination unit (not shown) as the difference between adjacent image frames. If the content is a still image such as a screen image of a word processor document or an HTML document (Yes in S16), the merge condition setting unit 54 further determines whether the display resolution is an integral multiple of the resolution of the rendered images (S18). The determination of "integral multiple" is made when the vertical and horizontal pixel counts of one resolution are equal integral multiples of the vertical and horizontal pixel counts of the other, i.e. when, for example, the rendered image frames are 640×480 and the display resolution is 1280×960 or 1920×1440. When the display resolution is an integral multiple of the resolution of the rendered image frames (Yes in S18), example 1 described with reference to Figs. 5 and 6 is performed so as to obtain an image frame sequence of the desired resolution (S22). Accordingly, when the display resolution is twice as high, the merge condition setting unit 54 causes the frame sequence acquiring unit 56 to retrieve four image frames, and when the display resolution is three times as high, it causes the frame sequence acquiring unit 56 to retrieve nine image frames. The frame sequence acquiring unit 56 delivers the acquired image frames to the merge execution unit 58, whereupon the merge execution unit 58 performs point sampling on the delivered image frames and outputs an image frame of the desired resolution to the image display unit 60.
When it is determined in S18 that the display resolution is not an integral multiple of that of the rendered image frames (No in S18), example 3 described with reference to Figs. 9 and 10 is performed so as to obtain an image frame sequence of the desired resolution (S20). More specifically, the merge condition setting unit 54 generates image frames whose resolution is an integral multiple of the resolution of the rendered image frames and closest to the desired resolution. By performing bilinear sampling on the generated image frames, the merge condition setting unit 54 also generates an image in which the aspect ratio is not maintained.
When it is determined in S16 that the content is a moving image such as motion CG or a movie (No in S16), the merge condition setting unit 54 performs example 2 described with reference to Figs. 7 and 8 so as to obtain a moving image with a motion blur effect (S24). More specifically, the merge condition setting unit 54 performs the merge process using the number of image frames determined by (frame rate of the rendered image frames)/(frame rate of the display). For example, when the frame rate of the rendered image frames is 240 fps and the frame rate of the display is 60 fps, 4 (=240/60) frames are used for the merge process. If the result of the division is not an integer, for example when the frame rate of the rendered images is 240 fps and the frame rate of the display is 70 fps, the result of the division is 3.4 (=240/70). In this case, the fractional part is discarded, and the merge process is performed on three image frames so as to output the resulting image frames at 70 fps.
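The frame-count rule just described amounts to an integer division in which the fractional part is discarded. A minimal sketch (the function name is illustrative, not from the embodiment):

```python
def merge_frame_count(render_fps, display_fps):
    """Number of rendered frames merged into one output frame:
    (rendered frame rate) / (display frame rate), fractional part
    discarded as described in step S24."""
    return int(render_fps // display_fps)

# 240 fps rendering onto a 60 Hz display: 4 frames per output frame.
# 240 fps onto a 70 Hz display: 240/70 = 3.4, so 3 frames are merged.
```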
In any case, the frame rate of the image frame sequence output from the post-processing unit 50 is lower than the frame rate of the rendered image frames produced by the rendering processing unit 42. For example, performing point sampling to produce a resolution up to twice as high reduces the frame rate to 1/4, and performing point sampling to produce a resolution up to three times as high reduces the frame rate to 1/9. Therefore, even when the desired resolution is obtained as a result of the process of S20 or S22, flicker or the like may occur on the screen because of the low frame rate. In an alternative method, a user prompt unit (not shown) may be provided, which warns the user on the screen that the frame rate will drop significantly when the desired resolution is obtained, and prompts the user for acceptance. When the user accepts, the point sampling is performed; when the user does not, it is not performed. In yet another method, the merge condition setting unit 54 may refer to the specification of the display (resolution and frame rate) and the post-processes that can be performed in the post-processing unit 50, and then display on the screen a list of possible pairs of resolution and frame rate. The user prompt unit (not shown) may prompt the user to select a desired pair, and the selected pair is transferred to the merge condition setting unit 54. In response, the merge condition setting unit 54 then directs the frame sequence acquiring unit 56 and the merge execution unit 58.
The above process may be performed before the rendered image frames are displayed. Alternatively, image frames subjected to the merge process according to a predetermined algorithm may be displayed, so that a user viewing the displayed result can decide whether to proceed with the above process according to his or her taste.
Returning to S14, when the display resolution is equal to or lower than the resolution of the rendered image frames (No in S14), the merge condition setting unit 54 determines whether the content is a still image (S26). When the content is a still image (Yes in S26), the merge condition setting unit 54 skips some of the image frames for display (S28). More specifically, the merge condition setting unit 54 directs the frame sequence acquiring unit 56 to acquire one image frame out of every given number of rendered image frames. The merge execution unit 58 outputs these image frames to the image display unit 60 without subjecting them to post-processing. For example, when the output frame rate of the rendered image frames is 240 fps and the frame rate of the display is 60 fps, every fourth image frame is output.
When it is determined in S26 that the content is a moving image (No in S26), example 2 is performed as described above so as to obtain a moving image with motion blur (S30).
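The decision flow of steps S10-S30 can be summarized in code. This is a sketch under stated assumptions: resolutions are (width, height) tuples, the "integral multiple" test requires equal horizontal and vertical multiples as in the 640×480 → 1280×960 example, and the returned strings are illustrative labels rather than identifiers from the embodiment.

```python
def choose_postprocess(render_fps, display_fps, render_res, display_res,
                       is_still):
    """Sketch of the flowchart of Figure 11 (steps S10-S30)."""
    if render_fps == display_fps:                    # S10
        return "output as-is"                        # S12
    rw, rh = render_res
    dw, dh = display_res
    if dw > rw or dh > rh:                           # S14: display res higher
        if is_still:                                 # S16
            if (dw % rw == 0 and dh % rh == 0
                    and dw // rw == dh // rh):       # S18: integral multiple
                return "example 1: point sampling"   # S22
            return "example 3: point sampling + bilinear"  # S20
        return "example 2: merge with motion blur"   # S24
    if is_still:                                     # S26
        return "skip frames"                         # S28
    return "example 2: merge with motion blur"       # S30
```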
In this way, by comparing the frame rate or resolution of the image frame sequence rendered by the rendering processing unit 42 with the frame rate or resolution of the display connected to the image frame processing device, the merge condition setting unit 54 can automatically determine the conditions of the post-process.
As described above, the rendering processing unit is allowed to perform rendering at a predetermined frame rate to generate an image frame sequence, independently of the condition that the image frames must satisfy for output to the display. The post-processing unit then performs a predetermined process on the image frame sequence generated by the rendering process, and outputs an updated image frame sequence that meets the above condition.
Since the rendering process and the post-process are performed separately, rendering can be performed at a predetermined frame rate to generate the image frame sequence regardless of the specification of the display, such as its resolution or frame rate.
The rendering processing unit 42 has been described as rendering 640×480-pixel images at 240 fps. Image frames of other pixel counts may be rendered, and the rendering rate of the image frames may be lower or higher. For example, the image frame sequence may be rendered at 300 fps, in which case image frame sequences suitable both for 50 Hz displays and for 60 Hz displays can be generated.
In the embodiment described above, the merge process is described as being performed on pixels from four image frames. Alternatively, the merge process may be performed on a larger number of pixels. For example, six image frames may be rendered with pixel displacements such that the pixels of the respective frames are located at the respective vertices of a hexagon, and an image frame may be formed from pixels whose RGB values are the averages of those six pixels.
In the description of this embodiment above, the image frame processing device is described as being built into an entertainment device for rendering CG images. However, the image frame processing technology according to the present invention can also be applied to DVD players, personal computers, digital video cameras and the like.
(Second embodiment)
In the first embodiment, more image frames are rendered than are needed to display the moving image on the display, and the rendered image frames are then subjected to a predetermined process so as to output image frames for display. Conversely, an embodiment is also envisioned in which, when a moving image is provided in advance, a plurality of image frames are picked up from the moving image and subjected to a predetermined process, and fewer image frames than were read out are then output. With the latter embodiment, a fast-forward picture of the original moving image can be created. In addition, a rewind moving image can also be created. A rewind moving image is a moving image output in the reverse direction along the time axis. Hereinafter, "fast-forward" also includes the act of rewinding.
At first sight these two embodiments are different. However, they share the same principle, in which image frames in excess of those finally provided to the user are subjected to a predetermined process, and updated image frames are then output. In other words, the difference between the two embodiments lies only in the interval length of the output image frames.
Recently, digital moving image recorders such as HDD (hard disk drive) video recording apparatuses have become widespread, so a large amount of moving image data can easily be created, recorded or played back personally. In such a device, the user uses the fast-forward function to search for a part of interest in the recorded moving image data. However, when a moving image is fast-forwarded, the user often misses the part of interest during the search and sometimes finds the search inconvenient.
Accordingly, this second embodiment provides an image frame processing technology that outputs a moving image that is easy to watch even while being fast-forwarded.
Figure 12 shows the hardware configuration of an image frame processing device 200 according to the second embodiment. The main CPU 12, main memory 14, display 26, display controller 28, input and output (I/O) port 30 and external storage device 32 are identical to the blocks of the first embodiment shown in Fig. 1, so identical numerals are assigned to these blocks and further explanation of them is omitted. A camera device 38 such as a digital video camera is connected to the input and output port 30. A moving image obtained by the camera device 38 is stored as digital data in the external storage device 32, such as a DVD (digital versatile disc) drive or a hard disk drive. A graphics processor 80 selects an image frame sequence from the moving image data stored in the external storage device 32 and stores it in the main memory 14. The graphics processor 80 then performs a predetermined process on the image frame sequence to create an updated image frame sequence, and outputs the updated sequence to the display 26.
The image frame processing device 200 can be incorporated into various types of moving image display devices that display, on the display 26, moving images composed of image frame sequences. Such moving image display devices may include various devices for storing or playing movie content, such as DVD players and HDD video recorders. In addition, the moving image display device may also be incorporated into a personal computer, a digital video camera or an entertainment device.
An input device 84 provides inputs to the image frame processing device 200. Depending on the type of the moving image display device, various types of equipment can be used as the input device 84. For example, if the moving image display device is a DVD player or an HDD video recorder, the input device 84 may be the various buttons provided on a remote control or on the moving image display device itself. If the moving image display device is a general-purpose computer, the input device 84 may be a keyboard or a mouse.
In this second embodiment, a description will be given of creating a fast-forward picture when a fast-forward request is received from the user for movie content created in advance and recorded in a mass storage device such as a DVD drive or an HDD drive. Like the first embodiment, the second embodiment can be applied to an entertainment device that performs a rendering process so as to create a new image frame sequence for presentation on a display.
Methods for creating value-added fast-forward moving images in the image frame processing device 200 shown in Figure 12 will now be described with reference to several examples.
(example 4)
Figure 13 is a functional block diagram of the image frame processing device 200 according to example 4. The features in Figure 13 can be implemented mainly using the graphics processor 80, the main CPU 12 and the main memory 14. In this example 4, a method for providing a smooth fast-forward moving picture in response to a fast-forward request from the user is described.
An interface unit 202 acquires a fast-forward request made by the user via the input device 84. For example, if the image frame processing device 200 is incorporated into a DVD player, the fast-forward request corresponds to pressing a fast-forward button provided on the main body or the remote control, or to fast-forward pace information specified with a dial, such as "double speed" or "4× (quadruple) speed". The fast-forward request may also be specified in the header portion of the moving image data rather than provided by the user. The interface unit 202 sends the acquired information to a transmission frame number determining unit 206. The transmission frame number determining unit 206 determines the number of image frames needed to realize the fast-forward moving image at the received fast-forward pace. A frame transmission unit 208 reads, at constant timing, the number of image frames determined by the transmission frame number determining unit 206 from the image frame sequence stored in a storage unit 250, and then transfers these frames to a fast-forward unit 220. The storage unit 250 corresponds, for example, to the main memory 14 of Figure 12. However, the storage unit 250 may be any storage unit or memory provided in the image frame processing device 200, such as the external storage device 32. In addition, the image frames in the storage unit 250 may be uncompressed images, or they may be images compressed using the DCT (discrete cosine transform).
The fast-forward unit 220 includes a frame sequence acquiring unit 222 and a merge execution unit 224. The frame sequence acquiring unit 222 acquires the transmitted image frames and temporarily stores them. The merge execution unit 224 performs a merge process that generates one updated image frame from a plurality of the image frames stored in the frame sequence acquiring unit 222. This merge process may be the blend process described in the first embodiment. The updated image frame is referred to as a "fast-forward frame".
The fast-forward frames generated by the merge execution unit 224 are transferred, in the order of generation, to a picture synthesis unit 240. The picture synthesis unit 240 outputs these fast-forward frames at a predetermined frame rate displayable on the display 26. The user can thus watch the desired fast-forward picture on the display 26.
When the merge process is performed on a plurality of image frames, a pseudo-afterimage is produced in the fast-forward frame. By sequentially outputting such fast-forward frames, a fast-forward picture with a motion blur effect can be obtained, so the user can enjoy a realistic and smooth moving picture.
Incidentally, a process is conceivable in which one image frame is extracted from every predetermined number of frames of the image frame sequence in the storage unit 250, and the extracted frames are output at a predetermined frame rate without any merge process for creating the fast-forward picture. To appreciate the advantage of example 4, the shortcomings of a fast-forward picture created by such a process will now be described with reference to Figure 14.
Figure 14 shows the principle of such a process, which comprises the steps of extracting an appropriate number of image frames from a previously prepared image frame sequence 300 and creating a fast-forward frame sequence 310.
The image frame sequence 300 comprises image frames 301-309 and a large number of other image frames ranging before and after these frames. The image frames 301-309 represent a moving image in which a circular object 400 moves from the upper left to the lower right of the screen. In practice, the circular object 400 cannot be moved smoothly unless many more image frames than the frames 301-309 are used. However, to simplify the explanation of Figure 14, it is assumed that the smooth movement of the circular object 400 is realized using only the image frames 301-309.
The stars 401 shown in the image frames 303 and 307 represent flashes of the circular object 400. In this image frame sequence 300, the circular object 400 appears at the upper left corner of the screen, then moves to the lower right corner and flashes twice.
In this example, one image frame is extracted from every three image frames. More specifically, the image frames 301, 304 and 307 are extracted from the image frame sequence 300, one from every three frames. The extracted image frames then become the fast-forward frames 311, 312 and 313, respectively, without any processing. A fast-forward picture can therefore be created by extracting one image frame from every appropriate number of image frames to create the fast-forward frame sequence 310 and outputting this sequence 310 at a predetermined frame rate. In the example shown in Figure 14, a 3× fast-forward picture is obtained.
With such a process, however, when the difference between the image frames extracted from the original image frame sequence is large, particularly in the case of very fast forwarding, the picture may advance jerkily frame by frame, and therefore becomes a poor picture for the user. In addition, the image frame sequence 300 includes the image frame 303 representing a flash of the circular object 400, but the fast-forward frame sequence 310 does not include this image frame 303. A user watching the fast-forward picture comprising the frame sequence 310 therefore cannot recognize that the circular object 400 has flashed twice.
This shows that, in the process of Figure 14, an image frame carrying important information, such as the flash of the object, is not necessarily included in the fast-forward picture. In other words, even though some event occurred in the original image frame sequence, the user cannot see the event when watching the fast-forward picture. Thus, when the user searches for a scene of interest using specific information from the fast-forward picture as a clue, if the fast-forward picture lacks that specific information, the user cannot find the scene.
Next, the method for creating a fast-forward picture according to example 4 will be described with reference to Figure 15. Instead of extracting one image frame from every three, the merge execution unit 224 performs the merge process on every three image frames of the image frame sequence 300 to create a fast-forward frame. More specifically, the merge execution unit 224 performs the merge process on the image frames 301-303 to create a fast-forward frame 321, on the image frames 304-306 to create a fast-forward frame 322, and on the image frames 307-309 to create a fast-forward frame 323.
This merge process corresponds to creating an image frame in which each pixel is the weighted average of the pixels located at the same position in the respective image frames. More specifically, when one fast-forward frame Ff is created using n image frames Fm (m=1, ..., n, where n is a positive integer),
Ff = ∑ αm·Fm    (4)
where αm represents the weight coefficient of each image frame and satisfies ∑αm = 1. As can be seen from expression (4), the weight ratios of the individual image frames need not be equal. For example, a high weight ratio can be applied to the image frames adjacent to a certain image frame, and the farther an image frame is from that image frame, the lower the applied weight ratio may be. How the values of αm are distributed depends on the desired characteristics of the fast-forward frame Ff.
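Expression (4) can be sketched as follows. This is a minimal illustration under stated assumptions: frames are 2D lists of grayscale values, and the function name weighted_merge is hypothetical. Each output pixel is the weighted sum of the co-located pixels of the n source frames, with the weights summing to 1.

```python
def weighted_merge(frames, weights):
    """Expression (4): each pixel of the fast-forward frame Ff is the
    weighted sum of the co-located pixels of the source frames Fm,
    with weight coefficients alpha_m summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(a * f[y][x] for a, f in zip(weights, frames))
             for x in range(w)]
            for y in range(h)]

# Three 1x2 frames merged with the central frame weighted most heavily,
# as in the example of applying a high weight to the adjacent frame.
f1 = [[90, 0]]
f2 = [[30, 60]]
f3 = [[0, 90]]
blurred = weighted_merge([f1, f2, f3], [0.25, 0.5, 0.25])
```

With equal weights this reduces to the plain averaging of example 2; unequal weights let the fast-forward frame emphasize a chosen instant while still retaining a trace of the others as a pseudo-afterimage.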
According to the merge process described above, the fast-forward frames 321, 322 and 323 are obtained with pseudo-afterimages of the circular object 400 moving between the image frames. In Figure 15, the afterimages of the circular object 400 are represented as empty circles or stars. Thus, when the fast-forward frame sequence 320 having the fast-forward frames 321-323 is reproduced, a motion-blurred and smooth moving picture can be obtained, which can alleviate the user's eye fatigue. In addition, as can be seen from the fast-forward frames 321 and 323, the flash image of the object remains in these frames as a pseudo-afterimage. The information in the original image frames is therefore unlikely to be lost through the creation of the fast-forward frames; in other words, part of the information of the original image frames always remains in the fast-forward frames. Consequently, when the user searches for a scene of interest using specific information from the fast-forward picture as a clue, this residual information makes the scene easy to find.
Figure 16 shows the principle of a process of increasing or decreasing the number of image frames to be merged so as to create a fast-forward picture with varying paces. An image frame sequence 350 comprises image frames 351-362 and a large number of image frames ranging before and after these frames. When a normal fast-forward picture is created, the merge execution unit 224 performs the merge process on four image frames to create one fast-forward frame. More specifically, the merge execution unit 224 performs the merge process on the image frames 351-354 to create a fast-forward frame 371, and on the image frames 355-358 to create a fast-forward frame 372.
When a particular image frame satisfying specific conditions is detected by the frame sequence acquiring unit 222 during the creation of fast-forward frames, the merge execution unit 224 performs the merge process on every two image frames to create a fast-forward frame. In Figure 16, when the image frame 359 satisfies the specific conditions, the merge execution unit 224 performs the merge process on the image frames 359 and 360 to create a fast-forward frame 373, and on the image frames 361 and 362 to create a fast-forward frame 374.
The fast-forward picture comprising the fast-forward frame sequence 370, which includes the fast-forward frames 371-374, initially has a 4× pace, but after the fast-forward frame 373 the pace drops to double speed. Therefore, by appropriately increasing or decreasing the number of image frames to be merged, a fast-forward picture whose speed changes at any point in time can be obtained.
Any well-known technique can be used to detect the particular image frame. For example, using a scene change detection technique, particular image frames can be obtained where scenes change, so that a fast-forward picture with a reduced pace at any particular scene can be obtained. Alternatively, the motion vectors between image frames can be computed, and a particular image frame whose motion vector has an absolute value greater than a predetermined value can then be detected. Thus, by detecting an image frame at which the movement of an object in the screen changes greatly, a fast-forward picture having a reduced pace after this particular frame can be obtained.
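One simple detector of the kind mentioned above compares successive frames. This sketch is an assumption-laden stand-in (mean absolute pixel difference as the change measure, an arbitrary threshold of 50) for the scene change or motion vector techniques the text leaves to well-known methods; frames are 2D lists of grayscale values.

```python
def mean_abs_diff(f1, f2):
    """Average absolute pixel difference between two equally sized frames."""
    h, w = len(f1), len(f1[0])
    return sum(abs(f1[y][x] - f2[y][x])
               for y in range(h) for x in range(w)) / (h * w)

def is_scene_change(prev, cur, threshold=50.0):
    """Treat a frame that differs strongly from its predecessor as a
    particular frame at which the fast-forward pace should be reduced."""
    return mean_abs_diff(prev, cur) > threshold
```

A frame flagged by such a detector would play the role of image frame 359 in Figure 16: from that point on, fewer frames are merged per fast-forward frame, halving the pace.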
Because the pace is reduced automatically after the detected point even while the user is fast-forwarding the picture, the user can easily find interesting or important scenes. Moreover, when the user searches the fast-forward picture for a scene of interest using specific information as a clue, the pace can be reduced automatically at the frames carrying that information, so the user can find the scene more easily. The following practical uses can be imagined. If the content is a drama, scenes in which a specific actor appears can be reproduced at a reduced speed during fast-forwarding. If the content is a broadcast of a soccer match, scoring scenes can be reproduced at a reduced speed during fast-forwarding.
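As an illustrative sketch only (not the claimed implementation — the pixel-averaging merge, the function names, and the `is_special` predicate are assumptions of this example), the variable-pace merging of Figure 16 can be expressed as:

```python
import numpy as np

def merge_group(frames):
    """Merge a group of frames into one fast-forward frame by pixel-wise averaging."""
    return np.mean(np.stack(frames), axis=0).astype(frames[0].dtype)

def make_fast_forward(frames, is_special, normal_group=4, special_group=2):
    """Merge `normal_group` frames at a time, but switch to the smaller
    `special_group` once a frame satisfying the condition is encountered,
    so the apparent pace drops (4x -> 2x as in Figure 16)."""
    out = []
    i = 0
    group = normal_group
    while i < len(frames):
        # If any frame in the upcoming window satisfies the condition,
        # reduce the group size from here on.
        if any(is_special(f) for f in frames[i:i + group]):
            group = special_group
        chunk = frames[i:i + group]
        if len(chunk) < group:
            break  # a trailing partial group is ignored in this sketch
        out.append(merge_group(chunk))
        i += group
    return out
```

With twelve input frames and a condition first met at the ninth frame, the first two output frames each merge four inputs (4x pace) and the remaining output frames each merge two inputs (2x pace).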
(Example 5)
In Example 4, a predetermined number of image frames were extracted from the image frame sequence without considering the features of each frame, and the extracted image frames were merged to create the fast-forward frames. That process is preferable for constructing a fast-forward picture with smooth motion. In some cases, however, it is preferable to preferentially extract image frames having certain features when creating the fast-forward picture. Example 5 therefore provides an image frame processing device that preferentially extracts image frames satisfying a specified condition so as to create a fast-forward picture that is highly efficient to view.
Figure 17 shows a functional block diagram of the image frame processing device according to Example 5. The interface unit 202, transmission frame number determining unit 206, frame transmission unit 208, frame sequence acquiring unit 222, picture synthesis unit 240 and storage unit 250 are identical to the blocks shown in Figure 13, so the same reference numerals are assigned to them and further description is omitted.
The fast-forward unit 220 comprises the frame sequence acquiring unit 222 and a feature frame extraction unit 226. Based on luminance information of the image frames, the feature frame extraction unit 226 extracts, from the image frames transmitted by the frame transmission unit 208, image frames satisfying a predetermined condition as feature frames. For example, the feature frame extraction unit 226 calculates the average pixel value over the pixels of the ten forward and backward frames surrounding a given image frame, and then extracts as feature frames those image frames containing pixels whose values exceed that average by 50%. In addition to these feature frames, the feature frame extraction unit 226 extracts an appropriate number of image frames, creates the fast-forward frames, and sends them to the picture synthesis unit 240. The picture synthesis unit 240 outputs the fast-forward frames at a predetermined frame rate displayable on the display 26.
The extraction of feature frames according to Example 5 will now be described more specifically. Figure 18 shows the principle of the process of extracting image frames from an image frame sequence based on luminance information. As in Figure 14, the image frame sequence 300 comprises image frames 301-309 and a large number of other image frames before and after them. The feature frame extraction unit 226 extracts, as feature frames, image frames whose pixel luminance is greater than that of the neighboring image frames. As mentioned above, the circular object 400 flashes in image frames 303 and 307. Since image frames 303 and 307 therefore contain pixels brighter than those of the neighboring image frames, they are extracted as feature frames. These feature frames become fast-forward frames 331 and 332, respectively, without any further processing.
Extracting feature frames from the image frame sequence 300 alone does not necessarily yield all the frames needed to synthesize a fast-forward picture at the requested pace. Therefore, if no feature frame is extracted within a predetermined number of image frames, the feature frame extraction unit 226 preferably extracts one frame from that predetermined number of frames regardless of luminance information. Conversely, if a plurality of feature frames are extracted within the predetermined number of frames, the feature frame extraction unit preferably extracts only one of them. In this way, the fast-forward frame sequence 330 can be constructed.
Alternatively, when a plurality of feature frames based on luminance information exist within the predetermined number of image frames, all image frames determined to be feature frames may become fast-forward frames regardless of the pace information. In this way, image frames containing bright pixels are extracted continuously for a certain period, so that during that period a fast-forward picture at a speed reduced toward the normal playback speed is obtained. Thus, since a moving image close to normal playback is obtained for feature scenes even within the fast-forward picture, the chance that the user misses important information in a feature scene is reduced. The condition for feature frames is preferably set depending on the type of information the user wants to obtain.
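A minimal sketch of the luminance-based feature frame extraction, under stated assumptions (the function name, the exact "50% above the neighborhood average" threshold expressed as `ratio=1.5`, and the window size are illustrative choices, not the patent's implementation):

```python
import numpy as np

def extract_feature_frames(frames, window=10, ratio=1.5):
    """Return indices of frames containing pixels markedly brighter than
    the average pixel value of the surrounding frames — a stand-in for
    detecting the flashing object of Figure 18."""
    feature = []
    for i, frame in enumerate(frames):
        lo, hi = max(0, i - window), min(len(frames), i + window + 1)
        neighborhood = np.stack(frames[lo:hi])
        # A frame is a feature frame if some pixel exceeds the
        # neighborhood average by the given ratio (50% here).
        if frame.max() > ratio * neighborhood.mean():
            feature.append(i)
    return feature
```

The supplementary rule from the text — extract one frame per group even when no feature frame is found, and only one when several are found — would be applied on top of the indices this function returns.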
In Example 5, because the fast-forward picture is created by extracting feature frames based on luminance information, the number of image frames carrying important information that are missed in the fast-forward picture is reduced.
It should be noted that the information used to extract feature frames is not limited to luminance information. For example, motion information between image frames can also be used to preferentially extract image frames satisfying a specified condition.
Referring to Figure 17, the motion information detecting unit 210 receives the image frames transmitted from the frame transmission unit 208 and calculates motion information between those frames. For example, the motion information detecting unit 210 obtains corresponding points between image frames using a well-known block matching method, and then calculates motion vectors from the differences between the corresponding points. These motion vectors are used as the motion information. If motion information for each region or object in the image frames has been prepared in advance as data, that data can also be used as the motion information.
Based on the motion information, the feature frame extraction unit 226 extracts, from the transmitted image frames, image frames satisfying a predetermined condition as feature frames. The condition is, for example, that the absolute value of the motion vector is greater than a predetermined value. In addition to these feature frames, the feature frame extraction unit 226 extracts an appropriate number of image frames, creates the fast-forward frames, and sends them to the picture synthesis unit 240. The picture synthesis unit 240 outputs the fast-forward frames to the display 26 at a predetermined frame rate.
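The block matching step can be sketched as follows — an exhaustive-search illustration using the sum of absolute differences (SAD); the block size, search range, and function name are assumptions of this example, and the magnitude of the returned vector would be compared against the predetermined threshold mentioned above:

```python
import numpy as np

def block_motion_vector(prev, curr, y, x, bs=8, search=4):
    """Find the displacement (dy, dx) of the block at (y, x) in `prev`
    that best matches `curr`, by exhaustive SAD search."""
    block = prev[y:y + bs, x:x + bs]
    best, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate positions that fall outside the frame.
            if yy < 0 or xx < 0 or yy + bs > curr.shape[0] or xx + bs > curr.shape[1]:
                continue
            sad = np.abs(curr[yy:yy + bs, xx:xx + bs] - block).sum()
            if best is None or sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv
```

For an 8x8 block of a bright object that moves 2 pixels down and 1 pixel right between frames, the search returns the vector (2, 1).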
In another example, the feature frame extraction unit 226 receives, from the interface unit 202, information written in the header of the moving image data, and extracts feature frames based on that information. For example, suppose the content of the moving image is a drama, and an indication of the scene change position is written into the headers of the tens or hundreds of image frames before and after each scene change point. The feature frame extraction unit 226 can then extract the image frames so indicated as feature frames. In this way, even within the fast-forward picture, the pace around those frames becomes equal to the normal playback speed, so the user can more easily grasp the content of the fast-forward picture.
(Example 6)
Example 5 described extracting image frames satisfying a specified condition as feature frames using luminance information or motion information. In other words, in Example 5 the image frames in the image frame sequence were separated into two groups: one group comprising image frames useful to the user (that is, image frames with a large amount of information), and another group comprising image frames of somewhat less benefit to the user (that is, image frames with less information). More image frames were then picked from the first group to create the fast-forward picture.
In Example 6, an image frame processing device is provided that separates each image frame itself into two parts: a part with more information and a part with less information. The former part is enhanced, or the latter part is made less noticeable. The user can thereby acquire information from the fast-forward picture more easily.
Figure 19 shows a functional block diagram of the image frame processing device according to Example 6. The interface unit 202, transmission frame number determining unit 206, frame transmission unit 208, motion information detecting unit 210 and storage unit 250 are identical to the blocks shown in Figure 17, so the same reference numerals are assigned to them and further description is omitted.
The fast-forward unit 220 comprises a separation unit 228, a merge execution unit 230 and a frame re-synthesis unit 232. The separation unit 228 receives the image frames transmitted from the frame transmission unit 208 and separates each image frame into a "specific image region" and a "non-specific image region". This separation is performed based on the motion information received from the motion information detecting unit 210. The specific image region is a region in which the absolute value of the motion vector is greater than a predetermined threshold; the non-specific image region is the remaining region. The merge execution unit 230 performs a merge process on the non-specific image regions across image frames, while picking up the specific image region of any one of the image frames.
The frame re-synthesis unit 232 combines the extracted specific image region with the merged non-specific image region to create an updated image frame. The updated image frame is sent to the picture synthesis unit 240 as a fast-forward frame. The picture synthesis unit 240 outputs the fast-forward frames to the display 26 at a predetermined frame rate.
Figure 20 shows the principle of separating an image frame into a specific image region and a non-specific image region. The image frame sequence 380 comprises image frames 381-384 and a large number of image frames before and after them. The image frame sequence 380 contains the image of a person, which can be detected as follows. The user specifies the color and pattern of the clothes the person is wearing. Then, using a well-known image matching technique with that color and pattern as a clue, the person's image region is detected.
The separation unit 228 separates each of image frames 381-384 into a specific image region for the person's image and a non-specific image region for the background image. The merge execution unit 230 performs the merge process on the non-specific image regions of image frames 381-384, and picks up one specific image region from image frames 381-384. In Figure 20, the merge execution unit 230 picks up the specific image region of image frame 382. The frame re-synthesis unit 232 then puts together the specific image region picked up by the merge execution unit 230 and the merged non-specific image region to create fast-forward frame 385. Owing to the merge process, fast-forward frame 385 has a blurred background image. The fast-forward picture comprising frame 385 therefore shows the person against a motion-blurred background, so the user can recognize the person more easily.
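A hypothetical sketch of the re-synthesis step, simplified by assuming a single boolean mask marks the specific image region (the patent derives the region per frame from motion vectors; the function name and the averaging merge are illustrative assumptions):

```python
import numpy as np

def compose_fast_forward_frame(frames, mask, pick=0):
    """Average the non-specific (background) regions across frames so
    they appear motion-blurred, then paste the specific region of the
    picked frame back unchanged, as in fast-forward frame 385."""
    merged = np.stack(frames).astype(float).mean(axis=0)
    out = merged.copy()
    # Overwrite the specific region with the untouched picked frame.
    out[mask] = np.asarray(frames[pick], dtype=float)[mask]
    return out
```

Averaging across frames blurs whatever moves in the background while the picked region stays sharp, which is the intended "easy to recognize" effect.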
As described above, according to Example 6, the important part of an image frame can be displayed clearly in the fast-forward picture. In other words, according to Example 6, the less important parts of the image frame are made less noticeable by motion blur.
In this way, when the content of the moving image is a drama or a sports broadcast, the person the user likes can be displayed conspicuously in the fast-forward picture.
Alternatively, when the non-specific image region is a still image, the merge unit 230 uses a plurality of non-specific image regions to enhance its picture quality.
(Example 7)
Figure 21 shows a functional block diagram of the image frame processing device according to Example 7. In Example 7, the path (track) of an object in the image frames is displayed on the fast-forward screen.
The interface unit 202, transmission frame number determining unit 206, frame transmission unit 208, motion information detecting unit 210, picture synthesis unit 240 and storage unit 250 are identical to the blocks shown in Figure 17, so the same reference numerals are assigned to them and further description is omitted.
The fast-forward unit 220 comprises a path creating unit 236 and the frame re-synthesis unit 232. The path creating unit 236 creates a path image using the motion information received from the motion information detecting unit 210. The path image is an image showing the flow line of a predetermined object across the image frames transmitted from the frame transmission unit 208. The frame re-synthesis unit 232 overwrites the path image onto the original image frame to create the fast-forward frame.
Figure 22 shows the principle of the path creation process according to Example 7. As in Figure 14, the image frame sequence 300 comprises image frames 301-309 and a large number of other image frames before and after them.
The path creating unit 236 creates path image 411 from the difference between image frame 301 and image frame 302, and path image 412 from the difference between image frame 302 and image frame 303. The frame re-synthesis unit 232 puts path images 411 and 412 into image frame 303 to create fast-forward frame 341. Similarly, the path creating unit 236 creates path image 413 from the difference between image frames 304 and 305, and path image 414 from the difference between image frames 305 and 306. The frame re-synthesis unit 232 puts path images 413 and 414 into image frame 306 to create fast-forward frame 342. The same process is repeated for image frame 307 and the subsequent frames.
The picture synthesis unit 240 outputs the fast-forward frame sequence 340, which includes fast-forward frames 341 and 342, at a predetermined frame rate. A fast-forward picture showing the path along which the circular object 400 moves can thus be obtained.
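The Figure 22 procedure can be sketched as follows — an illustrative assumption-laden example in which a path image is approximated by simple frame differencing and the marker value, threshold, and function names are mine, not the patent's:

```python
import numpy as np

def path_image(prev, curr, thresh=30.0):
    """Boolean image of pixels that changed between two successive
    frames — a crude proxy for the flow line of the moving object."""
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

def overlay_paths(frames, base_index, marker=255.0):
    """Overwrite the path images of the two preceding frame pairs onto
    the base frame, mimicking fast-forward frames 341 and 342."""
    out = frames[base_index].astype(float).copy()
    for k in range(base_index - 2, base_index):
        out[path_image(frames[k], frames[k + 1])] = marker
    return out
```

With an object stepping one pixel per frame, the resulting fast-forward frame marks the object's current position and the positions it passed through.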
Various well-known methods can be used to determine which of the objects present in the image frames is selected for path display. For example, a predetermined object (for example, a soccer ball) can be detected in each image frame using a well-known image recognition technique, and the path of the soccer ball can be displayed on the fast-forward screen.
According to Example 7, information that does not appear in the original image frames can be displayed on the fast-forward screen. In other words, by using the difference information between image frames, the information in the image frames can be enhanced.
(Example 8)
By selecting one of the processes according to Examples 4 to 7 described above, a fast-forward picture suited to the content or to the user's purpose can be produced. For example, a suitable fast-forward process can be selected depending on the content of the moving image recorded in the storage device.
Fast-forward picture creation will now be described on the assumption that the image frame processing device is incorporated in a moving image playback device.
Figure 23 shows a functional block diagram of an image frame processing device capable of performing all of the processes according to Examples 4 to 7. The interface unit 202, transmission frame number determining unit 206, frame transmission unit 208, motion information detecting unit 210, picture synthesis unit 240 and storage unit 250 are identical to the blocks shown in Figure 13, so the same reference numerals are assigned to them and further description is omitted.
The fast-forward unit 220 is configured to perform all of the processes described in Examples 4 to 7. The image frame processing device further comprises a content determining unit 214, which determines the content type of the moving image stored in the storage unit 250. This determination can be made based on the header information of the moving image data; alternatively, it can be made based on user input or on the motion information from the motion information detecting unit 210. The determined content type is sent to the fast-forward unit 220. The fast-forward unit 220 receives the image frames transmitted from the frame transmission unit 208 and performs the process according to Examples 4 to 7 that suits the content type.
The specific processes performed by the fast-forward unit 220 when the content type determined by the content determining unit 214 is a sports broadcast, a drama or film, or a user-created movie will now be described.
A. Sports broadcast
When the content type is a video recording of a soccer match, the pace of the fast-forward picture may be reduced only at scoring scenes. A scoring scene can be detected as follows. The moving image is obtained by a camera fixed so as to point at the goal post, and a goal-mouth region is designated in advance in the image obtained by that camera. When the soccer ball image enters the goal-mouth region (which can be detected by an image matching method), the fast-forward unit 220 determines this to be a scoring scene and extracts a plurality of image frames before and after that point in time. The unit 220 can also use image matching techniques to identify a specific player, with a color, pattern or uniform number as a clue; a fast-forward picture in which the motion of the specific player is emphasized can then be obtained by applying motion blur to the other players. Furthermore, the soccer ball in the image frames can be recognized using image matching techniques, and a fast-forward picture showing the soccer ball's path can be obtained.
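The scoring-scene logic above can be sketched as plain bookkeeping — a hypothetical illustration in which the ball position per frame is assumed to be already detected (the function names, the rectangle convention, and the margin of frames kept around the score are assumptions of this example):

```python
def is_in_goal_mouth(ball_pos, goal_region):
    """True when the detected ball centre lies inside the pre-designated
    goal-mouth rectangle (y0, x0, y1, x1)."""
    y, x = ball_pos
    y0, x0, y1, x1 = goal_region
    return y0 <= y <= y1 and x0 <= x <= x1

def frames_to_slow_down(ball_track, goal_region, margin=30):
    """Return the frame indices around each detected scoring moment
    (score frame +/- margin) whose fast-forward pace should be reduced."""
    keep = set()
    for i, pos in enumerate(ball_track):
        if pos is not None and is_in_goal_mouth(pos, goal_region):
            keep.update(range(max(0, i - margin), i + margin + 1))
    return keep
```

The returned index set would then drive the variable-pace merge of Example 4: frames inside the set are merged in smaller groups, frames outside it in larger groups.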
B. Drama/film
When the content type is a drama program, for example, the unit prompts the user to input the color or pattern associated with his favorite actor in the drama. The fast-forward unit 220 then detects, by a well-known image matching technique, objects having regions corresponding to the input color and pattern. The unit 220 can thereby identify the scenes in which the favorite actor appears, and create a fast-forward picture whose pace is reduced for the identified scenes.
C. User-created movie
When the moving image stored in the storage unit 250 is an image captured by the user with a portable camera, the fast-forward unit 220 detects scene breaks using a well-known scene-change extraction technique. By including the image frames before and after each scene break in the fast-forward picture, the content can easily be grasped from the fast-forward picture. In addition, optical flow is used to detect an object being tracked (chased) with the portable camera. The background image other than the tracked object is then motion-blurred in the fast-forward picture. A fast-forward picture in which the tracked object is easy to watch can thus be created.
As described above, according to the preferred image frame processing device shown in Figure 23, a fast-forward picture appropriately processed for the content type of the moving image or for the user's preferences can be obtained.
According to the second embodiment of the present invention, upon receipt of a fast-forward request, predetermined processing is performed on an image frame sequence stored in advance in a DVD drive, hard disk drive or the like, so as to create fast-forward frames. The fast-forward frames are then output at the frame rate needed to display the fast-forward picture requested. A feature of the second embodiment is that, by performing various processes on the image frame sequence, the fast-forward picture can be given various added values. These added values include retaining as much of the important information of the original image frame sequence as possible in the fast-forward picture (Examples 4 and 5), and omitting unneeded information of the image frame sequence from the fast-forward picture (Example 6).
In the second embodiment, the fast-forward frames are created almost in real time after the fast-forward request signal is received, and are output as the fast-forward picture. Therefore, each time a fast-forward request signal is received, different fast-forward pictures created by different processes can be output depending on conditions or user instructions. In other words, in the second embodiment the fast-forward unit, which corresponds to the postprocessing, has high versatility, and various fast-forward pictures with different advantages can be provided.
The image frame sequence of the moving image stored in the storage unit 250 is created at a predetermined frame rate irrespective of the pace. The fast-forward unit, however, performs processing depending on the fast-forward request signal or the content type so as to create an updated image frame sequence for fast-forwarding. As can be seen from the above, the principle common to the first and second embodiments is that the frames to be shown to the user are created by sampling image frames prepared at a rate higher than the display rate. The second embodiment corresponds to a special case of the first embodiment in which the time axis used for the sampling is extended.
It should be noted that the application of the second embodiment is not limited to creating fast-forward or fast-rewind pictures. For example, a normal-playback image having some of the effects described above can be created, according to the second embodiment, from a moving image obtained by a high-speed camera. In this case, the following condition should be satisfied:
Ns ≥ Np ≥ No
where Ns represents the number of image frames per unit time obtained by the high-speed camera, Np represents the number of image frames per unit time stored in the storage unit, and No represents the number of image frames per unit time finally output to the display.
The present invention has been described based on several embodiments. These embodiments are illustrative in nature, and it will be obvious to those skilled in the art that various modifications to the elements and processes are possible within the scope of the invention. Optional combinations of the elements described in the embodiments, and implementations of the invention in the form of methods, devices, systems, computer programs and recording media, may also be practiced as additional modes of the present invention.

Claims (23)

1. An image frame processing method comprising:
rendering an image frame sequence in a memory based on given geometry data; and
subjecting the image frame sequence generated by the rendering step to postprocessing so as to adapt it to a display,
wherein the rendering step generates the image frame sequence at a predetermined frame rate independently of a condition that the image frames should satisfy, and
the postprocessing step subjects the image frame sequence generated by the rendering step to a pixel-by-pixel merge process between successive image frames, so as to generate and output an updated image frame sequence that satisfies the condition.
2. An image frame processing method comprising:
rendering an image frame sequence in a memory based on given geometry data; and
subjecting the image frame sequence generated by the rendering step to postprocessing so as to adapt it to a display,
wherein the rendering step generates the image frame sequence at a predetermined frame rate irrespective of the specification of the display, and
the postprocessing step subjects the image frame sequence generated by the rendering step to a pixel-by-pixel merge process between successive image frames, so as to generate and output an updated image frame sequence suited to the specification of the display.
3. An image frame processing method comprising:
rendering an image frame sequence in a memory based on given geometry data; and
subjecting the image frame sequence generated by the rendering step to postprocessing so as to adapt it to a display,
wherein the rendering step generates the image frame sequence at a predetermined frame rate irrespective of the program being executed, and
the postprocessing step subjects the image frame sequence generated by the rendering step to a pixel-by-pixel merge process between successive image frames, so as to generate and output an updated image frame sequence suited to the program.
4. An image frame processing device comprising:
an interface unit for acquiring a condition that image frames should satisfy for output to a display;
a rendering processing unit for generating an image frame sequence at a predetermined frame rate independently of the condition; and
a postprocessing unit for performing, in accordance with the condition acquired by the interface unit, a pixel-by-pixel merge process between successive image frames of the image frame sequence generated by the rendering processing unit, so as to generate an updated image frame sequence for the display.
5. The image frame processing device according to claim 4, wherein the condition is an output resolution of the display.
6. The image frame processing device according to claim 4, wherein the condition is a frame rate of the display.
7. The image frame processing device according to claim 4, wherein the condition is an output resolution defined for a content type on the display.
8. The image frame processing device according to claim 4, wherein the rendering processing unit renders a group of image frames in the image frame sequence with mutually different coordinate offsets in a rendering space.
9. The image frame processing device according to claim 8, wherein the postprocessing unit merges the group of image frames generated by offset rendering while maintaining the displacements between them, so as to generate an updated image frame sequence whose spatial resolution is higher than that of the image frame sequence output from the rendering processing unit.
10. The image frame processing device according to claim 9, wherein the group of image frames generated by offset rendering is arranged in a single coordinate system, and the updated image frame sequence whose spatial resolution is higher than that of the image frame sequence output from the rendering processing unit is generated by sampling pixels in the coordinate system.
11. The image frame processing device according to claim 8, wherein the postprocessing unit merges the group of image frames generated by offset rendering so as to cancel the displacements between them, generating an updated image frame sequence whose spatial resolution is the same as that of the image frame sequence output from the rendering processing unit.
12. The image frame processing device according to claim 11, wherein the group of image frames generated by offset rendering is arranged in a single coordinate system, and the updated image frame sequence whose spatial resolution is the same as that of the image frame sequence output from the rendering processing unit is generated by blending pixels in the coordinate system.
13. The image frame processing device according to any one of claims 4 to 12, further comprising:
a first memory for storing the image frames generated by the rendering processing unit; and
a second memory used as a work area by the postprocessing unit,
wherein the first memory has a capacity to store at least one image frame generated by the rendering processing unit, an image frame temporarily stored in the first memory is transferred to the second memory, and the postprocessing unit reads a plurality of image frames from the second memory for the predetermined processing.
14. The image frame processing device according to any one of claims 4 to 12, further comprising:
a first memory for storing the image frames generated by the rendering processing unit; and
a second memory used as a work area by the postprocessing unit,
wherein the first memory is built into at least the same semiconductor circuit element as the rendering processing unit.
15. The image frame processing device according to claim 14, wherein a main memory is used as the second memory.
16. A rendering processor which generates, independently of a condition that image frames should satisfy, an image frame sequence in a memory at a predetermined frame rate based on given geometry data, wherein the image frame sequence is converted in postprocessing into an updated image frame sequence that satisfies the condition.
17. The rendering processor according to claim 16, wherein the condition is an output resolution of a display.
18. The rendering processor according to claim 16, wherein the condition is a frame rate of a display.
19. The rendering processor according to claim 16, wherein the condition is an output resolution defined for a content type on a display.
20. The rendering processor according to any one of claims 16 to 19, wherein a group of image frames in the image frame sequence is rendered with mutually different coordinate offsets in a rendering space.
21. The rendering processor according to any one of claims 16 to 19, wherein a memory storing the image frames generated by rendering is built into the same semiconductor circuit element as the rendering processor.
22. An image frame processing method for displaying an externally provided image frame sequence on a display, comprising:
acquiring a condition that image frames should satisfy for output to the display;
receiving the image frame sequence at a predetermined frame rate independently of the condition; and
subjecting the received image frame sequence to a pixel-by-pixel merge process between successive image frames, so as to generate an updated image frame sequence that satisfies the condition.
23, a kind of picture frame treatment facility comprises:
Interface unit is used to be retrieved as the condition that outputs to display and picture frame should be satisfied;
The picture frame receiving element is used for described conditional independence ground, receives sequence of image frames with predetermined frame frequency; And
Post-processing unit is used for carrying out according to the described condition of being obtained by described interface unit, between the successive image frame of the sequence of image frames of being received the merging processing of individual element, is used for the updated images frame sequence of this display with generation.
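In its simplest reading, the "pixel-by-pixel merge process between successive image frames" of claims 22 and 23 blends adjacent rendered frames to produce an output sequence at the display's frame rate, for example halving a 60 fps rendered sequence to 30 fps by averaging pairs. A minimal sketch under that assumption, treating each frame as a flat list of pixel values:

```python
def merge_pairs(frames):
    """Average successive frame pairs pixel by pixel, halving the
    frame rate of the rendered sequence for the target display."""
    merged = []
    for a, b in zip(frames[0::2], frames[1::2]):
        merged.append([(pa + pb) / 2 for pa, pb in zip(a, b)])
    return merged

rendered = [[0, 100], [100, 200], [50, 50], [150, 250]]  # four tiny "frames"
print(merge_pairs(rendered))  # [[50.0, 150.0], [100.0, 150.0]]
```

Because the merge runs after rendering, the rendering side never needs to know the display's frame rate, which is exactly the separation the claims describe.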
CN200580024443A 2004-05-19 2005-05-18 Image frame processing method and device for displaying moving images to a variety of displays Active CN100593188C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP149705/2004 2004-05-19
JP2004149705 2004-05-19
JP100075/2005 2005-03-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN200810161915.5A Division CN101373590B (en) 2004-05-19 2005-05-18 Image frame processing method and device for displaying moving images to a variety of displays

Publications (2)

Publication Number Publication Date
CN1989545A CN1989545A (en) 2007-06-27
CN100593188C true CN100593188C (en) 2010-03-03

Family

ID=38185464

Family Applications (2)

Application Number Title Priority Date Filing Date
CN200810161915.5A Active CN101373590B (en) 2004-05-19 2005-05-18 Image frame processing method and device for displaying moving images to a variety of displays
CN200580024443A Active CN100593188C (en) 2004-05-19 2005-05-18 Image frame processing method and device for displaying moving images to a variety of displays

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN200810161915.5A Active CN101373590B (en) 2004-05-19 2005-05-18 Image frame processing method and device for displaying moving images to a variety of displays

Country Status (2)

Country Link
JP (1) JP4761403B2 (en)
CN (2) CN101373590B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI684360B (en) * 2014-11-20 2020-02-01 Axis AB (Sweden) Method and apparatus for visualizing information of a digital video stream

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5999454B2 (en) * 2012-03-30 2016-09-28 ソニー株式会社 Image processing apparatus and method, and program
GB2518846A (en) * 2013-10-01 2015-04-08 Ibm Diagnosing graphics display problems
KR102503442B1 (en) * 2015-12-24 2023-02-28 삼성전자주식회사 Electronic device and operating method thereof
CN109040837B (en) * 2018-07-27 2021-09-14 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
US11120242B2 (en) * 2018-09-21 2021-09-14 Lg Display Co., Ltd. Moving fingerprint recognition method and apparatus using display
US11295660B2 (en) * 2019-06-10 2022-04-05 Ati Technologies Ulc Frame replay for variable rate refresh display
CN113709389A (en) * 2020-05-21 2021-11-26 北京达佳互联信息技术有限公司 Video rendering method and device, electronic equipment and storage medium
CN113596564B (en) * 2021-09-29 2021-12-28 卡莱特云科技股份有限公司 Picture playing method and device
CN115037992A (en) * 2022-06-08 2022-09-09 中央广播电视总台 Video processing method, device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003078880A (en) * 2001-08-30 2003-03-14 Sony Corp Image processor, image processing method and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06149902A (en) * 1992-11-09 1994-05-31 Matsushita Electric Ind Co Ltd Animation image recording medium, animation image recorder and animation image reproducing device
US5543927A (en) * 1993-04-29 1996-08-06 Sony Corporation Variable speed playback of digital video stored in a non-tape media
JPH0738842A (en) * 1993-06-29 1995-02-07 Toshiba Corp Dynamic image editing device
US5559950A (en) * 1994-02-02 1996-09-24 Video Lottery Technologies, Inc. Graphics processor enhancement unit
JPH09307858A (en) * 1996-05-16 1997-11-28 Nippon Telegr & Teleph Corp <Ntt> Double speed reproduction method
JP2000101985A (en) * 1998-09-28 2000-04-07 Toshiba Corp Stream data fast forwarding mechanism
JP2001333389A (en) * 2000-05-17 2001-11-30 Mitsubishi Electric Research Laboratories Inc Video reproduction system and method for processing video signal
JP2002094947A (en) * 2000-09-12 2002-03-29 Matsushita Electric Ind Co Ltd High-speed video reproducing device, high-speed video reproducing method and recording medium
US6690427B2 (en) * 2001-01-29 2004-02-10 Ati International Srl Method and system for de-interlacing/re-interlacing video on a display device on a computer system during operation thereof


Also Published As

Publication number Publication date
CN101373590B (en) 2014-01-22
CN101373590A (en) 2009-02-25
CN1989545A (en) 2007-06-27
JP2009110536A (en) 2009-05-21
JP4761403B2 (en) 2011-08-31

Similar Documents

Publication Publication Date Title
CN100593188C (en) Image frame processing method and device for displaying moving images to a variety of displays
AU2005242447B2 (en) Image frame processing method and device for displaying moving images to a variety of displays
EP2870771B1 (en) Augmentation of multimedia consumption
US6121966A (en) Navigable viewing system
CN102077580B (en) Display control device, display control method
CN101617530B (en) Image processor, animation reproduction apparatus, and processing method and program for the processor and apparatus
US9582610B2 (en) Visual post builder
US20160118080A1 (en) Video playback method
US9313444B2 (en) Relational display of images
US8503770B2 (en) Information processing apparatus and method, and program
US20090259955A1 (en) System and method for providing digital multimedia presentations
US20140031120A1 (en) Using a Portable Device to Interface with a Video Game Rendered on a Main Display
CN103797784A (en) Video Peeking
CN102821261A (en) Display apparatus, object display method, and program
CN101783886A (en) Information processing apparatus, information processing method, and program
JP2000106661A (en) Image processing method and system and device
JP2009201041A (en) Content retrieval apparatus, and display method thereof
CN1987765A (en) System and method for content navigation
CN101611629A (en) Image processing equipment, moving-image reproducing apparatus and processing method thereof and program
CN104254875A (en) Information processing device, information processing method, and information processing computer program product
KR100878640B1 (en) Image frame processing method and device for displaying moving images to a variety of displays
KR20220097945A (en) Non-Closed Video Overlays
Apostu et al. Towards the design of an intuitive multi-view video navigation interface based on spatial information
CN107592573A (en) Suitable for the audio/video player system of large area array meter level high-resolution satellite
AU2006252198B2 (en) Animated sub-images for browsing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant