CN100346638C - Method and apparatus for generating new images by using image data that vary along time axis - Google Patents



Publication number
CN100346638C
CN100346638C · CNB2003801019768A · CN200380101976A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CNB2003801019768A
Other languages
Chinese (zh)
Other versions
CN1708982A (en)
Inventor
挂智一
大场章男
铃木章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc
Publication of CN1708982A
Application granted
Publication of CN100346638C
Anticipated expiration
Legal status: Expired - Lifetime


Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A rectangular-parallelepiped space (box space) expresses a moving image in a virtual space. A plurality of frames contained in the moving image appear continuously along the time axis. The box space is cut by a desired surface, and the image appearing on this cut surface is projected, in the direction of the time axis, onto a plane. The images sequentially projected onto the plane are output as a new moving image.

Description

Method and apparatus for generating new images by using image data that vary along a time axis
Technical field
The present invention relates to a method and apparatus for generating images, and more particularly to a technique for processing moving images shot by a video camera and outputting the processed moving images.
Background technology
With the remarkable progress and development of computer technology in recent years, the image-processing capability provided by computers has improved significantly. Even home PCs (personal computers) and game machines available to ordinary users can now perform processing that once only high-end workstations dedicated to image processing could realize.
The improved image-processing performance of PCs and game machines opens up further potential applications for them, namely low-priced tools for movie editing, image processing, authoring and the like, aimed at general users. Specialized skill is therefore no longer a prerequisite for sophisticated image processing, and even amateur users can manipulate moving images using these readily available tools.
Summary of the invention
In view of the foregoing, the present inventors have sought an innovative image-processing method by which novel images and special effects can be obtained. The present invention was made based on this recognition, and one object of the invention is to obtain images of interest. The invention was also developed in consideration of other objects, which will be understood from the following description: improving the efficiency of image processing, reducing the load caused by image processing, proposing new improvements to image-processing technology, and so forth.
A preferred embodiment of the present invention relates to an image generating method. In this method, an original moving image is treated as a two-dimensional image that changes along a time axis, and the moving image is expressed in a virtual manner as a box space formed by the two-dimensional image and the time axis. The box space is cut by a surface containing a plurality of points whose values on the time axis differ from one another. The image appearing on the cut surface is projected onto a plane in the direction of the time axis, and by varying the cut surface with time, the images appearing on the plane are output as a new moving image. Setting the surface in various ways determines the manner of variation, and a new moving image whose content differs from that of the original moving image is output.
Here, the "original moving image" may be an image shot on the spot by a camera, or may be an image stored in advance on a recording medium, for example in an encoded format such as MPEG. "Projected onto a plane" means the following: the image projected onto a plane perpendicular to the time axis is the result of the projection; in other words, when the cut surface is viewed directly from the direction of the time axis, the image projected onto the cut surface equals the image projected onto the plane.
For example, "varying the cut surface with time" can be realized by moving the cut surface along the time axis t while keeping the shape of the surface intact. A smooth and continuous new moving image is obtained by moving the cut surface as time passes. The shape of the surface may itself change over time. If the position of a point on the surface along the time axis is denoted t, and the coordinates of a point in the two-dimensional image are denoted (x, y), then t can be defined by a function of the general form t = f(x, y). The surface may also be a plane. The projected image changes according to the type of surface shape that is set.
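As a rough illustrative sketch (not code from the patent), the cut t = f(x, y) through the box space can be expressed as follows. The array layout (a stack of grayscale frames) and the wrap-around `% T` are assumptions made for the example:

```python
import numpy as np

def slice_box_space(frames, f):
    """Cut the virtual box space (a stack of frames along the time
    axis) with the surface t = f(x, y) and project the cut onto a
    plane: each output pixel (x, y) is taken from frame f(x, y).

    frames: array of shape (T, H, W) -- the box space
    f:      function (x, y) -> frame index (wrapped into [0, T))
    """
    T, H, W = frames.shape
    out = np.empty((H, W), dtype=frames.dtype)
    for y in range(H):
        for x in range(W):
            out[y, x] = frames[f(x, y) % T, y, x]
    return out

# Example: a plane tilted along y, as in the first embodiment
frames = np.arange(4 * 2 * 3).reshape(4, 2, 3)  # 4 frames of 2x3
tilted = slice_box_space(frames, lambda x, y: 3 - y)
```

A plane perpendicular to the time axis (f constant) simply reproduces one original frame; any tilted or curved f mixes pixels from different times into one output frame.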
Another preferred embodiment of the present invention relates to an image generating apparatus. This apparatus comprises: an image memory, which sequentially stores an original moving image along the time axis; an image conversion unit, which treats the original moving image stored in the image memory as a two-dimensional image changing along the time axis, expresses the moving image in a virtual manner as a box space formed by the two-dimensional image and the time axis, cuts the box space by a surface containing a plurality of points whose values on the time axis differ from one another, and projects the image appearing on the cut surface onto a plane in the direction of the time axis; and an image-data output unit, which sets the images appearing on the plane, obtained by varying the cut surface with time in the image conversion unit, as frames of a new moving image. The image memory serves as a buffer that temporarily stores a fixed period of frames of the original moving image until those frames have been converted into new frames.
The image generating apparatus may further comprise an image input unit, which acquires images shot by a camera and sends the acquired images to the image memory as the original moving image. In this way the images being shot are processed in real time, so that unique, mysterious or special-effect images differing from the actual state of the subject can be displayed on the screen.
The image conversion unit may cut the box space by a surface defined as a function of the coordinates of image regions constituting the two-dimensional image. Here, an "image region" may be a region covering a single pixel or a region covering a block of pixels. The surface may be defined by a function that does not depend on the horizontal coordinate of the two-dimensional image, where the "horizontal direction" may be the direction of the scan lines. The image conversion unit may also cut the box space by a surface defined as a function of an attribute value of the image regions constituting the two-dimensional image. The "attribute value", which determines the display content of each pixel, may be any of various attributes, such as a pixel value, a depth value, the order of approximation to a specific pattern, or the degree of change relative to other frames. The attribute value may be an average or a median value over the image region.
The time value of the points contained in the surface may be determined according to any of the following parameters: pixel value, depth value, order of approximation to a specific pattern, and degree of change. Likewise, which image regions are projected onto the plane may be determined according to any of these parameters.
Still another preferred embodiment of the present invention relates to an image generating method. This method comprises: for each in-picture position of the image contained in a target frame of an original moving image, reading data corresponding to the in-picture position from at least one of a plurality of frames contained in the original moving image; synthesizing the read data; and forming a new moving image by sequentially outputting the frames formed in the synthesis. Data are read from past frames in units of pixels or pixel lines and then synthesized, so that an image differing from the original moving image is obtained. The result is a patchwork-like image that mixes, in units of pixels or pixel lines, data differing in time, so that unique and mysterious images that could not exist in the real world can be obtained.
The "target frame" serves as a reference frame for display, and may change as time passes. For example, in an ordinary scan method, the "target frame" corresponds to the current frame to be output at the present timing. Based on this target frame, it is decided from which frames which actual data are read and output. The "in-picture position" may be the position of a pixel line serving as a scan line, or may be the position of a pixel. Data corresponding to the position are read from the frames, and the data are synthesized in units of pixels or pixel lines. "Synthesizing" may include superposing, blending, substituting and bonding.
Still another preferred embodiment of the present invention relates to an image generating apparatus comprising an image memory, an image conversion unit and an image-data output unit. The image memory sequentially records each frame of an original moving image. For each in-picture position of the image contained in a target frame, the image conversion unit reads data corresponding to the in-picture position from at least one frame recorded in the image memory, and synthesizes the data. The image-data output unit sequentially outputs the frames synthesized and reconstructed by the image conversion unit. The image memory temporarily stores a plurality of frames for a predetermined period, until the frames contained in the original moving image have been converted into new frames and are no longer used. A "pixel" is a dot constituting the image shown on a display screen, and may be a dot represented by a set of RGB colors.
It should be noted that any arbitrary combination or rearrangement of the above-described structural elements and expressions, exchanged among a method, an apparatus, a system, a computer program, a recording medium storing a computer program, a data structure and so forth, is also valid and encompassed by the present embodiments.
Moreover, this summary of the invention does not necessarily describe all essential features, so the invention may also be a sub-combination of the features described.
Description of drawings
Fig. 1 illustrates, in a virtual manner according to the first embodiment of the invention, a state in which the frames of an original moving image appear continuously along the time axis.
Figs. 2A and 2B show, for comparison, a screen of the subject being shot (object shot) and a screen showing the content actually displayed.
Fig. 3 is a block diagram showing the functions of the image generating apparatus according to the first embodiment.
Fig. 4 is a flowchart showing the steps by which the original moving image is converted into a new moving image according to the first embodiment.
Fig. 5 illustrates a moving image expressed in a virtual manner as a box space according to the second embodiment.
Figs. 6A and 6B show, for comparison according to the second embodiment, a screen of the subject being shot and a screen showing the content actually displayed.
Fig. 7 is a flowchart showing the steps of generating a new moving image in the second embodiment by reading data from frames according to Z values.
Fig. 8 illustrates a moving image expressed in a virtual manner as a box space according to the third embodiment.
Figs. 9A and 9B show, for comparison according to the third embodiment, a screen of the subject being shot and a screen showing the content actually displayed.
Fig. 10 is a flowchart showing, according to the third embodiment, the steps of generating from the original moving image a moving image in which a desired color component has been extracted.
Fig. 11 illustrates the original moving image expressed as a box space according to the fourth embodiment.
Fig. 12 is a functional block diagram showing the structure of the image generating apparatus.
Fig. 13 shows an example of a monitor screen on which a function is set by means of a graph through the input unit.
Fig. 14 is a flowchart showing the steps of generating a directing and manipulating target.
Fig. 15 is a flowchart showing the steps of applying directing and manipulating effects to the current frame.
Embodiment
The invention will now be described based on preferred embodiments, which are not intended to limit the scope of the invention but to exemplify it. Not all of the features described in the embodiments, or their combinations, are necessarily essential to the invention.
First embodiment
According to the first embodiment of the invention, a plurality of frames contained in an original moving image are stored sequentially in a ring buffer (see Fig. 3), and data are read from a different frame for each scan line, so that the data thus read are displayed on the screen as a new frame. Specifically, data for pixels on scan lines near the top edge of the screen are read from newer frames, while data for pixels on scan lines near the bottom edge are read from older frames in time. Strange and mysterious images, differing from the actual subject, are thus displayed on the screen.
Fig. 1 illustrates, in a virtual manner, a state in which the frames of the original moving image appear continuously along the time axis. The original moving image is grasped as a two-dimensional image that changes along the time axis. A rectangular-parallelepiped space 10 (hereinafter also called the "box space") extends in the direction of the time axis t as time passes. A cross section perpendicular to the time axis t represents a frame. A frame is a set of pixels represented by coordinates in the plane formed by the x axis and the y axis. The box space 10 is cut by a surface of a desired shape. As shown in Fig. 1, according to the first embodiment, the box space 10 is cut by an oblique plane that is parallel to the x axis and whose time value passes from t0 to t1 as it descends along the direction of a line on the x axis. When the image appearing on the cut surface 14 is projected onto a plane in the direction of the time axis, the image projected onto that plane is output as the actual frame, rather than the current frame 12.
As time passes, the cut surface 14 moves along the time axis t. The cut surface 14 may also be defined so as to have a continuous width in the direction of the time axis t; the images contained within this width are synthesized, and the synthesized images serve as the actual frames shown on the screen.
The current frame 12 corresponds to the frame that would be displayed at the current scan timing in the normal display mode. Suppose the current position of the current frame 12 on the time axis is time t0. Then the frames preceding time t0 (for example, the frames located at times t1, t2, t3 and t4) correspond to frames already displayed under normal display timing. In the first embodiment, however, frames prior to time t0 are in fact displayed. The data of the pixels contained in each frame are output in such a manner that the horizontal pixel lines are output sequentially, one line at a time. The data of the pixels contained in a single pixel line are read at the same timing and then displayed.
The topmost pixel line is output at the normal scan timing. The pixel line one pixel below the topmost line is output with a delay of one frame. Thus, the lower the pixel line, the more its output timing is delayed.
The data of each pixel on the screen are read from past frames, and the extent to which those frames recede into the past can be expressed as a function of the pixel coordinates, such as t = t0 - y. The function t = t0 - y is a function of the y coordinate of the pixel line only, and does not depend on the x coordinate.
Suppose the resolution of the current frame 12 is 720 x 480, with the coordinates of the top-left pixel being (0, 0) and those of the bottom-right pixel being (719, 479). In this case the maximum value of the coordinate y is 479, and the scan timing of the lowest pixel line is delayed by 479 frames. In the box space 10, 480 frames lie between time t0 and time t2.
Figs. 2A and 2B respectively show a screen of the subject being shot and a screen of the subject as actually displayed, provided so that the former can be compared with the latter. Fig. 2A shows the image captured by the lens; the image of this subject is equivalent to the current frame 12. Here, the subject is slowly waving his/her hand 16. Fig. 2B shows the image appearing on the cut surface 14 of Fig. 1, which is the image of the subject of Fig. 2A as actually displayed on the screen. In other words, going from past frames toward the current frame, the position of the hand 16 changes in the order left, middle, right, middle, left; therefore, by reading data from different past frames for each scan line, images of the hand 16 at the left position and at the right position appear alternately. Since the data on a given line are read from a frame at the same scan timing, no bending or distortion arises in the horizontal direction. In the vertical direction, however, the hand 16 is displayed in such a manner that its left and right outlines are winding and bent.
In other words, the subject waving the hand 16 hardly moves except for the hand 16. Therefore, even when images read from mutually different frames in time are synthesized, bending or distortion is almost nonexistent for parts whose display position does not differ.
Fig. 3 is a functional block diagram of the image generating apparatus according to the first embodiment. In terms of hardware, the structure of the image generating apparatus 50 can be realized by the CPU of any computer and similar devices. In terms of software, it is realized by programs having storage, image-processing and drawing functions, or similar programs; Fig. 3 illustrates and describes the functional blocks realized by their cooperation. These functional blocks can therefore be realized in a variety of forms: by hardware only, by software only, or by a combination thereof.
The image generating apparatus 50 comprises: an image input unit 52, which acquires images shot by a camera as the original moving image and sends the frames contained in the original moving image to the image memory; a ring buffer 56, serving as the image memory that sequentially stores the original moving image along the time axis; a buffer control unit 54, which controls the reading of frames from the ring buffer 56 and the writing of incoming frames to the ring buffer 56; an image conversion unit 60, which converts the frames stored in the ring buffer 56 into frames to be displayed; a function memory 70, which stores the functions referred to during frame conversion; and a display buffer 74, which stores the frames to be displayed.
The image input unit 52 may include a CCD that captures digital images and a conversion unit that obtains digital images by A-D conversion. The image input unit 52 may be implemented as a device provided externally and attached to the image generating apparatus 50 in a detachable manner. The buffer control unit 54 records the frames of the original moving image input via the image input unit 52 sequentially into the region indicated by the write pointer of the ring buffer 56.
For each pixel contained in the current frame 12, the image conversion unit 60 reads the data corresponding to the pixel from the frames recorded in the ring buffer 56, and synthesizes the data. The image conversion unit 60 comprises: a decision processing unit 62, which decides, for each pixel, from which frame the data should be read; a data acquisition unit 64, which reads the data from the frame decided by the decision processing unit 62; and an image formation unit 66, which forms a frame by synthesizing the data read for each pixel line.
In the decision processing unit 62, the decision as to which frame the data should be read from is defined by, and derived according to, the following equation (1):
P_Fr(x, y, t0) = P(x, y, t0 - y) --- (1)
where, as shown in Fig. 1, x and y are the pixel coordinates in the current frame 12, and t0 is the time value on the time axis t. P_Fr is the pixel value of each pixel in the frame actually output. As is apparent from equation (1), the time value of the frame to be output is a function of the y coordinate only. The decision as to which of the plurality of frames stored in the ring buffer 56 data should be read from is therefore made for each pixel line, and does not depend on the x coordinate.
The function expressed by equation (1) is stored in the function memory 70. Other selectable functions are also stored in the function memory 70. The user can set which function is adopted via the instruction acquisition unit 72.
The data of each pixel read by the data acquisition unit 64 are written sequentially into the display buffer 74 by the image formation unit 66 in a patchwork-like fashion, thereby composing a frame.
The image generating apparatus 50 further comprises: an instruction acquisition unit 72, which receives instructions from the user; an image-data output unit 76, which outputs the frames stored in the display buffer 74; and a monitor 78, which displays the output frames on a screen. The monitor 78 may be a display provided externally to the image generating apparatus 50.
The image-data output unit 76 reads the image data from the display buffer 74, which stores image data for one frame, converts them into an analog signal, and sends them to the monitor 78. The image-data output unit 76 outputs the frames stored in the display buffer 74 in sequence, so as to output a new moving image.
Fig. 4 is a flowchart showing the steps by which the original moving image is converted into a new moving image according to the first embodiment. First, the write pointer t indicating the next write position in the ring buffer 56 is initialized, i.e. t = 0 is set (S10), so that frame storage begins from the top region of the ring buffer 56. A frame contained in the original moving image is recorded in region t of the ring buffer 56 (S12). The ring buffer thus provides a total of T0 regions, each holding one frame.
The pixel-line number n in the display buffer 74 is initialized, i.e. n = 0 is set (S14), so that data are copied in order into the display buffer 74 starting with the pixel line corresponding to the top line of the screen. The read pointer T designating the read position of the data corresponding to line n is then calculated (S16). Here T is obtained by T = t - n: as the line number increases, the read pointer T goes further back to past frames. Initially T = 0 - 0 = 0, so the read pointer T indicates the 0th region.
If the read pointer T is less than 0 (S18Y), no such read region actually exists. The read pointer is therefore moved to the end of the ring buffer 56 (S20); more specifically, the number of regions T0 of the ring buffer 56 is added to the read pointer T. The data acquisition unit 64 reads line n of the frame stored in the region of the ring buffer 56 indicated by the read pointer, and the image formation unit 66 copies the data corresponding to that line number into the region of line n of the display buffer 74.
If line n is not the last line in the display buffer 74 (S24N), the line number is incremented by "1" (S26). The line number n keeps increasing, and steps S16 to S24 are repeated until the line number reaches the last line. When the line number becomes the number corresponding to the last line, image data for one frame have been stored in the display buffer 74 (S24Y), and the write pointer is incremented by "1" (S28). When the write pointer t indicates the end region of the ring buffer 56 (S30Y), the write pointer t returns to the top region of the ring buffer 56 (S32).
The image-data output unit reads the frame from the display buffer 74, outputs it as video data, and causes the monitor 78 to display the frame on the screen (S34). The processing of S12 to S34 is repeated until the display is ordered to stop (S36). In this way, data are read from the same frame in units of pixel lines and written to the display buffer. Since a pixel line is a set of pixels arranged horizontally along the same scan line, a pixel line consists of data that would be read at the same scan timing in the normal setting. Reading and writing are therefore processed efficiently within the scanning process, and an excessive increase of load due to the image conversion of the present embodiment can be prevented.
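The read-pointer arithmetic of steps S16-S20 can be sketched as follows. The capacity value and function name are assumptions for illustration; only the formula T = t - n and the wrap-around are from the flowchart:

```python
CAPACITY = 480  # assumed number of one-frame regions T0 in the ring buffer

def read_pointer(t, n, capacity=CAPACITY):
    """Region to read for pixel line n when the write pointer is at t
    (steps S16-S20): T = t - n, wrapped to the end of the ring buffer
    when it would go negative."""
    T = t - n
    if T < 0:           # S18Y: no such region exists
        T += capacity   # S20: move to the end of the ring buffer
    return T

assert read_pointer(0, 0) == 0    # top line of the first frame
assert read_pointer(5, 2) == 3    # line 2 comes from two frames back
assert read_pointer(0, 1) == 479  # wraps to the buffer's end region
```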
As a modified example of the present embodiment, the decision processing unit 62 may determine the frame to read from according to the x coordinate. For example, the data of pixel columns located on the left-hand side of the screen are read from the left-hand side of the current frame 12, while the data of pixel columns located on the right-hand side of the screen are read from the right-hand pixel columns of the frame at time t2 shown in Fig. 1. The cut surface is then an oblique plane, parallel to the y axis, running from time t0 to time t2.
As another modification, the decision processing unit 62 may determine the frame to read from according to both the x and the y coordinates. For example, the data of pixels near the top-left edge of the screen are read from the top-left edge of the current frame 12, while the data of pixels near the bottom-right edge of the screen are read from the bottom-right pixels of the frame at time t2 shown in Fig. 1.
As yet another modification, the scan lines may be set in the vertical direction rather than the horizontal direction. In this case, more efficient image conversion can be realized by determining the frame to read from according to the x coordinate.
Second embodiment
In the second embodiment, data are read from different frames according to the depth value (Z value) specified for each pixel. In this respect the processing performed by the second embodiment differs from that of the first embodiment, which determines the frame according to the y coordinate of the pixel. For example, among the subjects shot in the original moving image, those nearer to the camera are read from older frames. Thus, the nearer the distance between the camera and the subject, the more its display timing is delayed.
Fig. 5 illustrates a moving image expressed in a virtual manner as a box space according to the second embodiment. Among the subjects shot in the current frame 12, the Z value of a first image 20 is set to "120" and the Z value of a second image 24 is set to "60". A larger Z value indicates a subject nearer to the camera. The delay of the display timing is proportional to the Z value. Each pixel of the frame actually displayed on the screen is defined by the following equation (2):
P_Fr(x, y, t0) = P(x, y, t0 - Z(x, y, t0)) --- (2)
where Z(x, y, t0) is the Z value of the pixel unit at present. As the Z value increases, the frame from which the pixel data are read recedes from t0 toward t1 and t2 on the time axis. From the frame at time t2, data are read from the region designated as a third image 22, corresponding to the first image 20. From the frame at time t1, data are read from the region designated as a fourth image 26, corresponding to the second image 24.
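As a rough sketch of equation (2) (an illustration only, not code from the patent), the per-pixel delay can be driven by a depth map. The array layout and the `scale` factor converting Z values into frame delays are assumptions:

```python
import numpy as np

def convert_frame_z(buffer, zmap, t0, scale=1):
    """Equation (2): P_Fr(x, y, t0) = P(x, y, t0 - Z(x, y, t0)).
    Each pixel's display is delayed in proportion to its depth value,
    so nearer subjects (larger Z) come from older frames.

    buffer: past frames, shape (T, H, W)
    zmap:   per-pixel Z values of the current frame, shape (H, W)
    scale:  assumed factor converting Z values to frame delays
    """
    H, W = zmap.shape
    out = np.empty_like(buffer[0])
    for y in range(H):
        for x in range(W):
            delay = int(zmap[y, x]) * scale
            out[y, x] = buffer[t0 - delay, y, x]
    return out

buf = np.arange(3 * 2 * 2).reshape(3, 2, 2)  # 3 frames of 2x2
zmap = np.array([[0, 1], [2, 0]])            # larger Z = nearer subject
mixed = convert_frame_z(buf, zmap, t0=2)
```

Unlike equation (1), the delay here varies pixel by pixel, so the cut surface has a discrete width along the time axis, as the next paragraph describes.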
In the cut surface of the box space 10, the third image region 22 holds the time value t2, and the fourth image region 26 holds the time value t1; the other regions hold the time value t0. The points contained in the cut surface are thus dispersed over t0, t1 and t2, so that the cut surface has a discrete width in the direction of the time axis.
The pixels composing the first image 20 have larger Z values than the pixels of the second image 24, so their data are read from older frames in time. Conversely, since the pixels composing the second image 24 have smaller Z values than those of the first image 20, the time they go back to older frames is shorter.
Figs. 6A and 6B are provided to compare a screen of the subject being shot (object shot) with a screen of the subject as actually displayed. Fig. 6A shows the subjects being shot: in this case a person 30 who has raised his/her hand and begun to wave it slowly, and an automobile 32 traveling in the background. Fig. 6B shows the image of the subjects of Fig. 6A as actually projected onto the screen. The subjects are displayed on the screen in a state different from the normal setting: the nearer a region is to the camera, the more its display timing is delayed. Here, in particular, the person 30 is the subject nearest to the camera, so the delay of the display timing is greatest. For parts or regions that hardly move, displaying older pixel data results in nearly the same image of the subject. On the other hand, for regions that move frequently or significantly, or move up and down, the displayed parts shift from frame to frame. Therefore, as shown in Fig. 6B, when the data are read from older frames at the same coordinates as those of Fig. 6A, the image of such a region appears transparent or in a diffused (permeated) state. In Fig. 6B, the person 30 is displayed on the screen in a state in which the frequently moving hand has disappeared. The automobile 32 in the background is located at a position relatively far from the camera and its Z value is small, so there is little difference between the display states of the automobile shown in Fig. 6A and Fig. 6B.
Image forming appts 50 according to this second embodiment has and the essentially identical structure of device shown in Figure 3.According to top equation (2), decision processing unit 62 calculates to the time quantum that returns in the past according to each Z value, then for each pixel cell definite will from which frame sense data.After this, from wherein the frame of sense data being also referred to as the source frame.The distance measurement sensor that comprises the Z value that is used to detect each pixel cell according to the image input unit 52 of second embodiment.Distance measurement method can be laser means, infrared illumination method, phase detection method or the like.
Fig. 7 is a flowchart showing, for the second embodiment, the steps of generating a new moving image by reading data from frames according to the Z values. First, the write pointer t indicating the next write position in the circular buffer 56 is initialized, namely t = 0 is set (S100), so that frames are stored starting from the top region of the buffer 56. A frame contained in the original moving image is recorded in region t of the circular buffer 56 (S102).
The position x and y of the target pixel in the display buffer 74 are initialized, namely x = 0 and y = 0 are set (S104), so that pixels are copied to the display buffer 74 in order starting from the top line of the screen. A read pointer T specifying the read position of the data for pixel (x, y) is then computed (S106). The decision processing unit 62 computes the read pointer T according to the Z value of each pixel. The data acquisition unit 64 reads the data of pixel P_{x,y} from the frame stored in the region of the circular buffer 56 indicated by the read pointer. Then the image formation unit 66 copies the read data onto the region of pixel P_{x,y} in the display buffer 74 (S108).
When P_{x,y} is not yet the last pixel of the display buffer 74, that is, when pixel P_{x,y} is not the pixel at the bottom-right edge (S110N), the processing moves to the next pixel (S112). S106 to S112 are repeated until pixel P_{x,y} becomes the last pixel. When pixel P_{x,y} becomes the last pixel, the image for one frame has been written to the display buffer 74 (S110Y) and is rendered by the image formation unit 66 (S114).
" 1 " is added to write pointer t (S116).When write pointer t indicates the stub area of circular buffer 56 (S118Y), write pointer t turns back to the apex zone (S120) of circular buffer 56.The image that image output unit 76 is drawn to display output.Repeat S102 to the processing of S122 till order shows termination (S124).By this way, be that unit writes display buffer 74 then from the frame sense data of separating with the pixel.Here it should be noted, for each pixel is determined sense data from which frame respectively, can be from similar and different frame sense data.
The 3rd embodiment
The third embodiment of the invention differs from the first and second embodiments in that an image is synthesized by reading, from a plurality of frames, the data of pixels having a desired attribute value. The attribute value here is a pixel value; for example, when an image is synthesized by reading only pixel values having a red component, a mysterious and special-effect image is obtained in which only what looks like an afterimage of the desired color remains.
Fig. 8 shows, for the third embodiment, a moving image represented virtually as a box space. Projected onto the frames at times t0, t1, t2 and t3 in the box space 10 is a person 30, who holds a red object and slowly waves it left and right. The images 34, 35, 36 and 37 of the red object are projected at mutually different display positions on the respective frames.
According to the third embodiment, among the plurality of old frames stored in the circular buffer 56, the frames to be used for image synthesis are determined in advance. In the case of Fig. 8, the frames at times t0, t1, t2 and t3 are used for image synthesis. These four frames correspond to a plurality of surfaces arranged at fixed time intervals in the box space 10.
Figs. 9A and 9B compare a screen showing the objects being shot with a screen showing the objects as actually displayed. Fig. 9A shows the objects being shot. Fig. 9B shows the image of the objects of Fig. 9A as actually projected onto the screen. On this screen only the red component is extracted and synthesized, so only the images 34, 35, 36 and 37 of the red object are displayed, and the background is white or black.
Each pixel of the frame actually displayed on the screen is defined by the following equation (3):
P_fr(x, y, t0) = Σ_{i=0}^{3} α · P(x, y, t0 − const·i)    (3)
where α (the alpha value), which indicates the synthesis ratio, is expressed by the following equation (4):
α = P_R(x, y, t0 − const·i)    (4)
where P_R is the red-component value of the pixel.
The data acquisition unit 64 reads data for each pixel according to equation (3) and determines whether to synthesize the image. In this manner, extraction by pixel color is realized. Although the pixel value of the red component is set as the alpha value in equation (4), the setting is not restricted to this. If the alpha value is set to P_G or P_B, only the green component or only the blue component is extracted and synthesized. Thus, if the object contains any specific color component, only the part containing that color component is displayed, as if it were a remaining afterimage.
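Equations (3) and (4) can be sketched directly. The following is a minimal NumPy sketch under assumed conventions (pixel values normalized to [0, 1], four frames spaced by const as in Fig. 8); the accumulation is clipped, which the patent does not specify, to keep the result a displayable pixel value.

```python
import numpy as np

def composite_by_red(frames):
    """Blend the four equally spaced frames of equations (3) and (4):
    each pixel of each frame is weighted by its own red component, so
    strongly red pixels persist as afterimages while the rest fades.

    frames : list of four (H, W, 3) float arrays in [0, 1],
             frames[i] being the frame at time t0 - const*i
    """
    out = np.zeros_like(frames[0])
    for f in frames:
        alpha = f[..., 0:1]     # P_R used as the per-pixel alpha (equation (4))
        out += alpha * f        # accumulation of equation (3)
    return np.clip(out, 0.0, 1.0)
```

Substituting `f[..., 1:2]` or `f[..., 2:3]` for the alpha channel gives the green- or blue-extraction variants mentioned above.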
The image generating apparatus 50 according to the third embodiment has substantially the same structure as the apparatus shown in Fig. 3. Following equations (3) and (4) above, the decision processing unit 62 selects a plurality of frames at fixed time intervals. The data acquisition unit 64 and the image formation unit 66 synthesize the read data at a ratio determined by the pixel value of each pixel.
Fig. 10 is a flowchart showing, for the third embodiment, the steps of generating from the original moving image a new moving image in which the parts of a desired color have been extracted. First, the write pointer t indicating the next write position in the circular buffer 56 is initialized, namely t = 0 is set (S50), and the frame synthesis count i is initialized, namely i = 0 is set (S51), so that frames are stored starting from the top region of the circular buffer 56. A frame contained in the original moving image is recorded in region t of the circular buffer 56 (S52).
The position x and y of the target pixel in the display buffer 74 are initialized, namely x = 0 and y = 0 are set (S54), so that pixels are copied to the display buffer 74 in order starting from the top-left of the screen. The position of the read pointer T in the circular buffer 56 is computed as the read position of the data for pixel (x, y) (S56). This read pointer T is the time value obtained by T = t0 − const·i, and indicates the plurality of past frames reached by going back along the time axis at fixed time intervals. The data acquisition unit 64 reads the data of pixel P_{x,y} from the frame stored in the region of the circular buffer 56 indicated by the read pointer. Then the image formation unit 66 copies the read data onto the region of pixel P_{x,y} in the display buffer 74 (S58).
The alpha value α_{x,y} of pixel P_{x,y} is computed and set (S60). When pixel P_{x,y} is not yet the last pixel in the display buffer 74, that is, when pixel P_{x,y} is not the bottom-right edge pixel (S62N), the processing moves to the next pixel (S64). The processing of S56 to S62 is repeated until pixel P_{x,y} becomes the last pixel in the display buffer 74. When pixel P_{x,y} becomes the last pixel in the display buffer 74, the image for one frame has been written to the display buffer 74 (S62Y) and is rendered by the image formation unit 66 (S66).
If the frame synthesis count i has not yet reached the predetermined number I (S68N), "1" is added to the count i and the processing of S54 to S66 is repeated. In the third embodiment, the predetermined number I is "3", and the synthesis is repeated four times while the frame synthesis count i counts up from "0" to "3". When the frame synthesis count i reaches the predetermined number I (S68Y), "1" is added to the write pointer t (S70). When the write pointer t indicates the end region of the circular buffer 56, the write pointer t is returned to the top region of the circular buffer 56. The image data output unit 76 outputs the rendered image data to the display 78 (S76). The processing of S52 to S76 is repeated until termination of display is ordered (S78). In this manner, only the pixel data in which the red component is observed are read pixel by pixel from the past frames and then written to the display buffer.
As a modified example of the third embodiment, the alpha value of the frame corresponding to the synthesis count i = 0, namely the current frame 12, may be set to a fixed value rather than P_R. In this case all three color components RGB are extracted together, so that not only the red-object images 34-37 but also the person 30 appear simultaneously on the display screen of Fig. 9B.
The 4th embodiment
The fourth embodiment of the invention differs from the third embodiment in that the attribute value in the fourth embodiment is the rank (order) of approximation between a desired image pattern and the actual image. As a result of pattern matching, the more closely an image approximates the desired image pattern, the older the frame from which its data are read. Thus a desired partial image contained in the original moving image can be displayed separately with a delayed timing.
Fig. 11 shows, for the fourth embodiment, the original moving image represented virtually as a box space. The current frame 20 contained in the box space 10 includes a first image 40. Suppose now that matching against an image pattern approximating the first image 40 is computed; then, compared with the pixels of the other regions, the pixels composing the first image 40 have a higher rank of approximation to the image pattern. Accordingly, the data corresponding to them are read from a past frame reached by going further back along the time axis according to the rank of approximation. Here, by going back along the time axis to time t2, the data of a second image 42 are read from the position of that image in the frame having time value t2. The cross section of the box space 10 takes the time value t2 only in the region of the second image 42, and takes the time value t1 in the other regions. The cross section therefore has a discrete width in the direction of the time axis.
The image generating apparatus 50 according to the fourth embodiment has substantially the same structure as the apparatus shown in Fig. 3. The user specifies the image pattern via the instruction acquisition unit 72, and the decision processing unit 62 processes the matching between the image pattern and the frame image. As a result, the rank of approximation to the image pattern is detected pixel by pixel. The decision processing unit 62 determines, for each pixel according to its rank of approximation, the frame from which data are to be read.
The processing flow according to the fourth embodiment is described with reference to Fig. 7. First, prior to step S100, the user specifies the image pattern against which matching is to be computed, and the matching between the current frame 12 and the image pattern is computed so as to detect for each pixel a rank of approximation denoted by "s". That is, the rank of approximation of an image region is set for the pixels of image regions approximating the image pattern. Steps S100 to S104 are the same as in the second embodiment. In step S106, the read pointer T is determined according to the rank of approximation "s". For example, the read pointer T is obtained by T = t − s(x, y). The subsequent steps are also the same as in the second embodiment.
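The rule T = t − s(x, y) can be sketched as follows. The patent does not specify the matching algorithm, so the rank s below is a hypothetical stand-in (inverse absolute difference from the pattern, quantized to integer steps); any real matcher producing a per-pixel rank could be substituted.

```python
import numpy as np

def read_pointer_by_match(t, frame, pattern, max_rank=5):
    """Fourth-embodiment read pointer T = t - s: the closer a pixel is to
    the desired pattern, the higher its rank s and the older its source
    frame. The similarity measure here is a placeholder, not the patent's."""
    diff = np.abs(frame.astype(float) - pattern.astype(float))
    s = np.round((1.0 - diff / max(diff.max(), 1e-9)) * max_rank).astype(int)
    return t - s
```

A pixel identical to the pattern gets s = max_rank and is read max_rank frames into the past; a maximally different pixel gets s = 0 and is read from the current frame.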
The 5th embodiment
In the fifth embodiment, too, data are read from separate frames and synthesized according to a pixel attribute value. This attribute value differs from those of the third and fourth embodiments in that it is a value of the degree of temporal change of an image region. For example, a region of the object that moves fast or significantly has a large temporal image change, so its data are read from an older frame. Thus the display of a region of the original moving image having a large image change can be delayed, so that the larger the image change, the more the display of that image region is delayed.
The image generating apparatus 50 according to the fifth embodiment has substantially the same structure as the apparatus shown in Fig. 3. The decision processing unit 62 detects, for each pixel, the degree of temporal change between the target frame and the frame immediately preceding it. The decision processing unit 62 determines, according to the degree of change, the frame from which data are to be read.
The processing flow according to the fifth embodiment is described with reference to Fig. 7. At S106, the frame t and the frame (t − 1) are compared pixel by pixel, and a degree of change denoted by "c" is detected. The read pointer T is determined according to the degree of change "c". For example, the read pointer T is obtained by T = t − c(x, y). As the degree of change "c" increases, the time value expressing the degree of going back into the past along the time axis increases.
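The rule T = t − c(x, y) can be sketched in the same style. The normalization of the raw per-pixel difference to a small number of integer levels is an assumption for illustration; the patent leaves the scaling of "c" open.

```python
import numpy as np

def read_pointer_by_change(t, frame, prev, levels=4):
    """Fifth-embodiment read pointer T = t - c: the more a pixel changed
    between frame (t-1) and frame t, the older the frame it is read from.
    c is the absolute change normalized to integer levels (assumed scale)."""
    c = np.abs(frame.astype(float) - prev.astype(float))
    c = np.round(c / max(c.max(), 1e-9) * levels).astype(int)
    return t - c
```

Static pixels thus keep T = t (current frame), while the fastest-changing pixels are delayed by `levels` frames.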
The 6th embodiment
The sixth embodiment of the invention differs from the first embodiment in that, by using an interface on the screen, the user can freely determine or define, for each scan line, the frame from which data are to be read. The sixth embodiment is described below by emphasizing its differences from the first embodiment.
Fig. 12 is a functional block diagram showing the structure of the image generating apparatus. The main difference between the image generating apparatus 50 of Fig. 12 and the image generating apparatus 50 according to the first embodiment shown in Fig. 3 is that the apparatus of Fig. 12 includes a setting input unit 80. The setting input unit 80 acquires, via the instruction acquisition unit 72 operated by the user, the input of setting values that define the cross section 14 of Fig. 1. In the sixth embodiment, as the function defining the frame from which the data of each pixel row are read, t = t0 − y is the predetermined default setting. That is, this function defines the degree to which frames go back into the past. The setting input unit 80 sends to the display buffer 74 an image that graphically presents the relation between the time value given by t = t0 − y and the coordinate y of the pixel row. The image data output unit 76 displays this image on the display 78, and the image generated by the setting input unit 80 thus shows the relation between the time t and the coordinate y. While watching the graph displayed on the display 78, the user operates the instruction acquisition unit 72 and modifies the shape of the graph, changing the function t = t0 − y into another function. For example, the instruction acquisition unit 72 may be a touch screen attached to the display screen. In this case, values representing the positions at which the user presses the touch screen are input as the operation content.
The instruction acquisition unit 72 sends the user's operation content to the setting input unit 80 so as to change the graph displayed on the display 78. According to the operation content acquired from the instruction acquisition unit 72, the setting input unit 80 changes the function t = t0 − y, thereby setting a new function, and the newly set function is stored in the function memory 70. The decision processing unit 62 reads the function set by the setting input unit 80 from the function memory 70 and determines, for each pixel row according to this new function, the frame from which data are to be read. As a result, the box space 10 shown in Fig. 1 is cut by the surface defined by the function set by the setting input unit 80, and the image appearing on the cross section, rather than the current frame 12, is output as the actual frame. By realizing the above structure, the user can utilize the image generating apparatus 50 as an authoring tool, and can produce mysterious and unique images by freely changing the graph displayed on the screen.
Fig. 13 shows a monitor screen displaying the graph of the function determined via the setting input unit. First, a straight line 84 indicating the relation between the time t and the coordinate y in the function t = t0 − y is displayed on the setting screen 82. The user can change the straight line 84 into a Bezier curve 86. The Bezier curve 86 is a curve connecting a first end point 88 and a second end point 90, and its shape is determined by the positions of a first control point 96 and a second control point 98. The positions of the first control point 96 and the second control point 98 are determined by the user changing the positions and lengths of a first handle 92 and a second handle 94. If the function set via the setting input unit 80 is specified by the Bezier curve 86, what is obtained is an image in which, for each pixel row, data read from frames near the current frame are mixed with data read from past frames. For example, via the setting input unit 80 the user can specify a periodic curve, such as a sine curve, by means of the Bezier curve 86. Although in this embodiment the function is specified by the Bezier curve 86, as a modified example a structure may be provided in which the function is specified by another curve such as a B-spline.
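The scan-line slicing described above can be sketched as follows. `delay_fn` plays the role of the user-edited curve of Fig. 13; passing `lambda y: y` reproduces the default t = t0 − y, and any Bezier- or sine-shaped function of y could be supplied instead. The names are assumptions for illustration.

```python
import numpy as np

def slice_by_scanline(ring, t0, delay_fn):
    """Cut the box space with a per-scan-line curve: row y of the output
    frame is taken from the frame at time t0 - delay_fn(y), read modulo
    the length of the circular buffer.

    ring : (N, H, W) array of stored frames (the circular buffer)
    """
    n, h, w = ring.shape
    out = np.empty((h, w), dtype=ring.dtype)
    for y in range(h):
        out[y] = ring[(t0 - int(delay_fn(y))) % n, y]  # one row per source frame
    return out
```

A periodic `delay_fn` such as `lambda y: 3 * (1 + math.sin(y / 5))` would give the sine-wave cross section mentioned above.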
The 7th embodiment
The seventh embodiment of the invention differs from the sixth embodiment in that the setting input unit 80 acquires the coordinates of a characteristic point in the current frame 12 as one of the setting values, and the function is defined by the coordinates of this characteristic point. The present embodiment is described below by emphasizing its differences from the sixth embodiment.
The instruction acquisition unit 72 is again a touch screen attached to the display 78. When the user presses the touch screen at a desired position and traces a circle with a fingertip, a plurality of successive coordinate values representing the contact points are sent to the setting input unit 80. From the acquired coordinate values, the setting input unit 80 identifies the region enclosed and covered by the plurality of contact points, and generates a function for determining the enclosed region, which it records in the function memory 70. According to the function read from the function memory 70, the data of the pixels included in the enclosed region are read from a past frame, and the data of the pixels not included in the enclosed region are read from the current frame 12. As a result, the box space 10 shown in Fig. 1 is cut by the surface defined as a function of the coordinates acquired by the setting input unit 80, and the image appearing on the cross section, rather than the current frame 12, is output as the actual frame. By realizing the above structure, the user can utilize the image generating apparatus 50 as an authoring tool, and can produce mysterious and unique images by specifying an arbitrary region on the touch screen.
The 8th embodiment
The eighth embodiment of the invention differs from the other embodiments in that a function is predefined in such a manner that a predetermined changing shape appears on the screen, and data are read from the frames determined according to this function. According to the eighth embodiment, the function is predefined in such a manner that a shape changing in waves, such as a water ripple, appears on the screen.
The decision processing unit 62 determines a characteristic point in the current frame 12. As in the sixth and seventh embodiments, the user specifies the characteristic point via a touch screen attached to the screen of the display 78, the touch screen serving as the instruction acquisition unit 72. The decision processing unit 62 determines the source frames and pixel coordinates so that the waveform of a water ripple appears with the characteristic point as its center. Here, a source frame is a frame from which data are to be read. For example, in order to display the ripple in perspective, suppose circles are displayed along radial directions from the characteristic point; the decision processing unit 62 then determines the source frame for each radial circle using gradually changing time values. The change of the gradual time values is defined so as to vary periodically, whereby the undulation of the water ripple can be displayed. In addition, the decision processing unit 62 shifts the pixel coordinates to be read by a predetermined amount in a predetermined direction, whereby the refraction of light caused by the water ripple can be displayed.
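The periodically varying, radius-dependent time value can be sketched as a per-pixel delay map. The sinusoidal profile, wavelength and depth below are assumptions; the patent only requires that the delay vary periodically with the distance from the characteristic point.

```python
import numpy as np

def ripple_delay(h, w, cx, cy, wavelength=8.0, depth=3):
    """Per-pixel source-frame delay oscillating with distance from the
    characteristic point (cx, cy): contours of equal delay form concentric
    rings, approximating the water-ripple reading of the eighth embodiment."""
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)                       # radius from the point
    phase = (np.sin(2 * np.pi * r / wavelength) + 1.0) / 2.0
    return np.round(phase * depth).astype(int)           # frames to go back
```

Adding a small radial shift to the read coordinates, as the text describes, would then imitate the refraction of light at the ripple.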
The 9th embodiment
The ninth embodiment of the invention differs from the seventh and eighth embodiments, in which a touch screen serves as the instruction acquisition unit 72 for inputting the characteristic point, in that the characteristic point is determined according to information contained in the current frame 12.
The decision processing unit 62 according to the ninth embodiment determines the characteristic point according to the pixel value of each pixel contained in the current frame 12. For example, an LED blinking at high speed is attached to the object so as to become part of the object, and the decision processing unit 62 identifies the blinking part by locating the region of the successively input current frames 12 in which the pixel value alternates intermittently between two values. The decision processing unit 62 determines the blinking part as the coordinates of the characteristic point. As a modified example, the decision processing unit 62 may determine the characteristic point using fixed coordinates. As another modified example, the decision processing unit 62 may determine that a pixel is a characteristic point if any one of the following factors falls within a predetermined range: the pixel value, the Z value, the rank of approximation to a desired pattern, and the change in the pixel value.
The tenth embodiment
The tenth embodiment of the invention differs from the other embodiments in that the image input unit 52 acquires not only the original moving image but also audio data. The audio data acquired by the image input unit 52 are input in synchronization with the original moving image and sent to the circular buffer 56. According to the frequency distribution of the audio data, changes in volume and the like, the decision processing unit 62 determines at least one of the source frame, the read timing, the alpha value and the characteristic point. For example, when the change in volume of the audio data exceeds a threshold, the decision processing unit 62 may determine the source frame and the pixels to be read in such a manner that a shape such as that described in the eighth embodiment appears on the screen. Also, if the change in volume exceeds a threshold in a particular frequency band, the decision processing unit 62 may determine the characteristic point of the eighth and ninth embodiments according to that frequency band.
The 11th embodiment
According to the eleventh embodiment of the invention, predetermined graphics are combined near the positions of pixels according to the attribute values of the pixels contained in the target frame. Here, the attribute value is a numerical value indicating the degree of temporal change of an image region. For example, a region of the object that moves fast or significantly is displayed continuously, so as to express artistically and vividly an object having the form of particles, in such a manner that pixels with a large temporal change are deformed so as to diffuse toward their periphery. In this way, staging effects can be produced in the original moving image, such as a paper-snowfall-like effect displayed around the periphery of the screen, or applied to a main subject such as a moving region or a trajectory.
The image generating apparatus 50 according to the eleventh embodiment has a structure similar to the apparatus shown in Fig. 3. The decision processing unit 62 detects, for each pixel, the degree of temporal change of the pixel value of the image region of the target frame between the current frame 12 and the frame temporally preceding the current frame 12. If the degree of change of a pixel exceeds a predetermined threshold, the decision processing unit 62 treats that pixel as the center at which an effect object is to act. If a plurality of pixels whose degree of change exceeds the threshold are arranged adjacent to one another, the pixel having the largest degree of change among them may be determined as the center, and a plurality of effect objects may be displayed around the center in a diffusing manner. In addition, the decision processing unit 62 may determine the moving direction of the effect objects from the pixel in the current frame 12 and the pixel in the preceding frame that each have the largest degree of change in their respective frames.
Fig. 14 is a flowchart showing the steps of generating the effect objects. First, the current frame 12 is input as the processing target (S150). If the reproduction processing has not just started when the current frame 12 begins to be input (S152N), the change in pixel value between the current frame 12 and its temporally preceding frame is extracted (S154), the position at which the pixel value changes most is detected (S156), and the vector toward the position of largest pixel-value change is determined as the moving direction of the effect objects (S158). If the reproduction processing has just started when the current frame 12 begins to be input, no such preceding frame exists, so the processing of S154 to S158 is omitted (S152Y). The current frame 12 is stored separately so as to be compared with the next frame (S160). With the position detected at S156 as the center, the image of the effect objects to be displayed is generated (S162). The effect objects thus generated are superimposed on the current frame 12 so as to render the picture to be displayed (S164). By repeating the processing of S150 to S164 until display is terminated (S166Y), the effect objects are displayed while moving along the moving direction determined at S158.
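Steps S154 to S158 can be sketched in a minimal form. `motion_anchor` is a hypothetical name; how the change is measured and how ties are broken are not specified by the patent, so simple absolute difference and first-maximum selection are assumed here.

```python
import numpy as np

def motion_anchor(frame, prev, prev_anchor=None):
    """Locate the pixel whose value changed most since the preceding
    frame (the center for the effect objects, S156) and, given the
    previous center, the moving-direction vector (S158)."""
    diff = np.abs(frame.astype(float) - prev.astype(float))
    y, x = np.unravel_index(int(np.argmax(diff)), diff.shape)
    if prev_anchor is None:           # first frame: no direction yet (S152Y)
        return (y, x), (0, 0)
    return (y, x), (y - prev_anchor[0], x - prev_anchor[1])
```

The returned direction vector is what the effect objects would be animated along between successive frames.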
In the above example, a structure was described in which the position at which the effect objects are displayed is determined from the change in pixel value relative to the preceding frame. As another example, the decision processing unit 62 may determine the position at which the staging effects are produced according to a color component, a contour, the brightness, the Z value, a movement trajectory or the like. For example, the position at which the effects are produced may be determined according to the magnitude of a pixel value, such as "the position containing the largest red component in the picture", or the contour line within a single frame whose difference in pixel value from adjacent contours is largest may be determined as the effect part. Here, "the position at which the staging effects are to be produced" is hereinafter simply called the "effect position". Further, for example, a part of a red contour where the difference between neighboring pixel values exceeds a threshold and whose color component exceeds a threshold may be determined as the effect position. In addition, a part whose brightness exceeds a threshold may be defined as the effect part, and a part having a specific range of Z values may be defined as the effect part. If a plurality of past frames within a fixed time period are stored, the trajectory (locus) of a characteristic point extracted according to a specific criterion can be detected, and staging effects can therefore be produced along such a trajectory. As the effects, the decision processing unit 62 may display linear objects or characters to which glittering colors or other features have been applied, or objects such as symbols. In addition, the decision processing unit 62 may produce effects in which a characteristic region extracted from the current frame 12 turns from transparent to translucent, so as to be superimposed on past frames.
The decision processing unit 62 may determine the size and moving speed of the effect objects to be displayed according to attribute values such as the coordinates, the Z value and the pixel value of each pixel, the rank of approximation to a desired image pattern, and the rate of change in the pixel value. The alpha value used for combining the effect objects may be a fixed value or may differ for each pixel. For example, the alpha value may be set according to attribute values such as the coordinates, the Z value and the pixel value of each pixel, the rank of approximation to a desired image pattern, and the degree of change in the pixel value.
Fig. 15 is a flowchart showing the steps of applying the staging effects to the current frame 12. First, according to an instruction from the user, the type of effect to be applied to the current frame 12 is determined (S200). Then the current frame 12 on which the effects are to be produced is input (S202). Then the parts of the current frame 12 where the difference between adjacent pixel values exceeds a threshold are extracted as contours (S204), and the part with the largest pixel-value difference is determined as the part at which the effects are produced (S206). The effects are then applied to the determined part (S208), and the picture for displaying the image on which the effects are produced is rendered (S210). The above processing of S202 to S210 is repeated on successive current frames (S212N) until display is terminated (S212Y), so as to apply the staging effects.
The 12th embodiment
According to the twelfth embodiment, the reproduction frame rate is changed locally according to the change of the attribute values of the pixels contained in the target frame. That is, according to the attribute values of the image regions constituting the two-dimensional image, the cross section 14 advances in time at a different speed for each image region, so that the frame rate of the new moving image output from the image data output unit 76 changes locally. For example, for a part whose pixel values change over time to a degree greater than a threshold, the time interval at which data are read from the frames is lengthened, thereby lowering the reproduction frame rate. Thus a mysterious and unique image is produced in which the faster a region actually moves, the more slowly the region displayed on the display moves, so that part of the object in the moving image moves at a speed different from the normal speed.
The image generating apparatus 50 according to the twelfth embodiment has substantially the same structure as the apparatus shown in Fig. 3. The decision processing unit 62 according to this embodiment can change the frame rate pixel by pixel, or change it in units of objects extracted according to their pixel values. The decision processing unit 62 may extract an object in such a manner that some pixels around a pixel are included as part of the range in question. In this case, a part such as an edge of the object where the pixel value changes gradually can be included as part of the object and processed together with it.
For a region whose frame rate is to be changed, the decision processing unit 62 determines the frame rate after the change. For example, the frame rate may be set according to the degree of temporal change of the pixel values of the region, and a lower value may be set as the frame rate of a region whose speed of change is larger. According to the frame rate thus determined, the decision processing unit 62 determines, for each pixel, the time interval between one source frame and the next. The decision processing unit 62 may also change the frame rate over time. For example, the decision processing unit 62 may first set the frame rate to a low rate and then increase it gradually, so that the pixel catches up with the display timing of the surrounding pixels.
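One way to realize a per-pixel frame rate is to let each pixel carry its own fractional read time into the buffer and advance it by a smaller step where the change is large. The threshold and step values below are illustrative assumptions; the patent only requires that fast-changing regions advance more slowly.

```python
import numpy as np

def advance_read_time(read_time, change, threshold=0.5, slow_step=0.25):
    """Twelfth-embodiment sketch: pixels whose temporal change exceeds the
    threshold advance through the buffer by slow_step frames per output
    frame (a lower local frame rate); the rest advance by 1 frame."""
    step = np.where(change > threshold, slow_step, 1.0)
    return read_time + step
```

Rounding `read_time` down then gives the source-frame index for each pixel; gradually raising `slow_step` toward 1.0 would implement the catching-up behavior described above.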
As a modified example, the structure may be such that the user can set, via the instruction acquisition unit 72 shown in Fig. 12, whether the processing is performed in such a manner that the edges of the object are included in the range of the object. The frame rate of the pixels having a predetermined range of Z values in the current frame 12 may be changed. The frame rate of the positions in the current frame having a predetermined rank of approximation to a desired image pattern may be changed. In other words, the frame rate is controlled pixel by pixel.
The 13th embodiment
According to the thirteenth embodiment of the invention, the data of the pixels having a predetermined attribute value in the target frame are read from a temporally preceding frame rather than from the target frame. For example, data corresponding to black pixel values are read from an old frame, so that a vivid effect can be produced, like an image of the past viewed through a window of a trimmed shape.
The image generating apparatus 50 according to the thirteenth embodiment has substantially the same structure as the apparatus shown in Fig. 3. The decision processing unit 62 according to the present embodiment extracts from the current frame 12 the regions having a predetermined range of pixel values, and at the same time determines the source frame for those regions. The source frame may be a past frame obtained by going back a predetermined period along the time axis, or may be determined according to attribute values such as the coordinates, the Z value and the pixel value of each pixel, the rank of approximation to a desired image pattern, the amplitude of change of the pixel value or the like.
As improved example, frame is old more, and these frames of sequence arrangement (gradate) just are so that can emphasize and vividly represent old thing.Decision processing unit 62 can extract and comprise that also some center on the zone of pixel.For example from people's face, extract together corresponding to mouth with around the zone of some pixels of mouth, thereby extract certainly such as part object edge, that pixel value changes gradually.In the above embodiments, sense data the frame the preceding from the time.Yet, also be stored in the circular buffer 56 if the time is gone up frame in the future, can from these in the future the frame from sense data.
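The substitution of past data for pixels in a predetermined value range (the near-black window effect) might be sketched as follows; the value range, the fixed delay, and the function name are illustrative assumptions, not taken from the specification.

```python
import numpy as np

def window_to_past(frames, t, low, high, delay):
    """For pixels of frame `t` whose value lies in [low, high] (e.g. a
    near-black range), substitute the value found `delay` frames in the
    past; all other pixels keep their current value.  `frames` is a
    list of 2-D grayscale arrays ordered along the time axis.
    """
    curr = frames[t]
    past = frames[max(t - delay, 0)]        # clamp at the oldest stored frame
    mask = (curr >= low) & (curr <= high)   # pixels in the predetermined range
    return np.where(mask, past, curr)
```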
Fourteenth Embodiment
In the fourteenth embodiment of the present invention, a pixel value is added to a pixel in accordance with a change in an attribute value of the pixel included in a target frame, thereby changing its color. For example, a region of the object that moves significantly is displayed in red, so that a dramatic staging effect is applied to the original image.
The image generating apparatus 50 according to the fourteenth embodiment has substantially the same structure as the apparatus shown in Figure 3. The decision processing unit 62 according to this embodiment adds a predetermined value to the pixel value of a pixel, so that pixels with a large degree of change in the current frame 12 are displayed in red. Thereafter, for a pixel to which the predetermined value has been added, the value added to the pixel is gradually reduced over time; as a result, an after-image trailing a red tail can be displayed.
As a modified example of the image generating apparatus 50, the structure may be such that a pixel remains on the screen even after it has been displayed: the data of pixels that change rapidly over time may be composited with a predetermined alpha value. A pixel value may further be added to the composited data, so that the image is displayed in a desired color. Thereafter, the alpha value of the composited data is gradually reduced over time; as a result, an after-image trailing a tail can be displayed. As another modified example, the structure may be such that the data of pixels with a high degree of change are composited, using a predetermined alpha value, onto a screen whose alpha values are set to zero for all pixels, so that the whole screen appears gradually. As still another modified example, the pixel value added to each pixel may be varied, and the color of a pixel may be changed by adding pixel values to the pixel cumulatively.
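One update step of the decaying red trail may, for example, be written as below. The threshold, boost, and decay constants are hypothetical; the specification only requires that the added value decrease gradually over time.

```python
import numpy as np

def red_afterimage(red_trail, change, threshold=30, boost=120, decay=0.8):
    """One update step of the decaying red trail: pixels whose change
    exceeds `threshold` get at least `boost` in the stored trail, and
    the trail decays by `decay` each frame so that a red tail fades
    out.  Adding the returned trail to the red channel of the current
    frame (clipped to 255) produces the displayed image.
    """
    trail = red_trail * decay                       # fade existing trail
    trail = np.where(change > threshold,            # refresh fast-moving pixels
                     np.maximum(trail, boost), trail)
    return trail
```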
Fifteenth Embodiment
In the fifteenth embodiment of the present invention, a predetermined object is composited onto a target frame according to attribute values of pixels included in a future frame. That is, among the frames included in an original moving image stored in advance, a granular object as in the eleventh embodiment is displayed on a region of a predetermined image pattern in a frame that is about to be displayed. A staging effect resembling an announcement can thereby be produced; for example, paper snow is displayed on the screen before the image of the leading character, who serves as the object here, appears.
The image generating apparatus 50 according to the fifteenth embodiment has substantially the same structure as the apparatus shown in Figure 3. The decision processing unit 62 according to the present embodiment detects, among the frames of the original moving image stored in advance in the circular buffer 56, a region whose approximation level to a predetermined image pattern falls within a predetermined range in a frame that will be displayed later in time. The decision processing unit 62 composites the granular object around the detected region. The method of compositing the images is similar to the method described in the eleventh embodiment.
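The detection of a region whose approximation level to a predetermined image pattern falls within a range can be sketched, under the assumption of a brute-force sum-of-squared-differences match (the specification does not prescribe the matching method), as:

```python
import numpy as np

def most_similar_region(frame, pattern):
    """Naive search for the region of `frame` most similar to `pattern`,
    using the sum of squared differences as an (inverse) approximation
    level.  Returns the top-left (row, col) of the best match.
    """
    fh, fw = frame.shape
    ph, pw = pattern.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - ph + 1):
        for c in range(fw - pw + 1):
            d = frame[r:r + ph, c:c + pw].astype(np.int64) - pattern.astype(np.int64)
            ssd = int((d * d).sum())    # smaller SSD = higher approximation level
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The granular object would then be composited around the returned position.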
As a modified example, the compositing of the object may be applied to a real-time moving image, where the image is captured in parallel with its reproduction. That is, the moving image obtained immediately after capture is temporarily stored in a buffer, and each frame is then reproduced at a timing delayed by a fixed time from the capture. A predetermined image pattern is extracted from the current frame obtained immediately after capture while frames are reproduced with the fixed delay, so that a staging effect resembling an announcement can be produced.
The present invention has been described above based on exemplary embodiments only. Those skilled in the art will understand that various other modifications to the combinations of the components and processes described above are possible, and that such modifications are encompassed by the scope of the present invention.
In the second embodiment, the frame from which data are read is determined according to the Z value. In a modified example, however, a plurality of frames spaced at fixed time intervals may be set as source frames, and the plurality of frames may be composited at a ratio according to the Z value. To restate, a source frame is a frame from which data are read. In this case, the alpha value is determined according to the Z value. A region of an object with a relatively large Z value, that is, a region near the camera, may be given a larger alpha value. Regions nearer the camera are then projected more clearly and distinctly, while regions with rapid motion are displayed in a lingering manner, as if they were after-images.
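The Z-value-dependent compositing of this modification can be sketched as follows. The linear alpha ramp between `z_min` and `z_max` is an assumption; the text requires only that larger Z values (nearer the camera) receive larger alpha values.

```python
import numpy as np

def blend_by_depth(curr, past, z, z_min, z_max):
    """Composite the current frame and a past source frame with a
    per-pixel alpha proportional to the Z value, so that near regions
    show the current frame clearly while far regions linger as
    after-images.
    """
    alpha = np.clip((z - z_min) / float(z_max - z_min), 0.0, 1.0)
    return alpha * curr + (1.0 - alpha) * past
```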
In the third embodiment, the alpha value is set according to the pixel value. In another modified example, however, the source frame may be determined according to the pixel value. For example, when pixels having red-component values are extracted, the data of regions with larger red components are read from older frames, so that the display of regions containing more red is further delayed.
In the fourth embodiment, the source frame is determined according to the approximation level to a desired image pattern. In another modified example, however, the alpha value may be set according to the approximation level. In this case, regions closer to the image pattern are displayed more clearly and distinctly, while regions that move quickly and significantly are displayed in a manner that leaves more of an after-image. Furthermore, a plurality of different image patterns may be prepared in advance, and which particular pattern the approximation level refers to may be used to decide the source frame to be read. Alternatively, which particular pattern the approximation level refers to may be used to determine the alpha value. The image recognition may be not only recognition within each frame but also recognition of a gesture spanning a plurality of frames.
In the fifth embodiment, the source frame is determined according to the degree of temporal change in an image region. In another modified example, however, the alpha value may be set according to the degree of change. In this case, regions with greater change may be displayed more clearly and distinctly, or regions with greater change may instead be displayed in a manner that leaves an after-image.
In each of the first to fifteenth embodiments, the correspondence of pixels across a plurality of frames is judged by identical coordinates (x, y). According to another modification, however, the correspondence may be judged with the coordinates of a specific pixel shifted, or whether such a shift should be made may be judged according to an attribute or the width of the pixel.
In the second to fifth embodiments, the source frame and the alpha value are each determined according to a single attribute value. In another modification, however, the source frame or the alpha value may be determined according to a plurality of attribute values among the Z value, the pixel value, the approximation level, and the degree of change. For example, after the source frame has been determined for a specific pixel according to its Z value, pattern matching may be computed between that frame and the current frame 12, and a plurality of frames may then be composited with alpha values according to the approximation level. In this case, if an object is nearer the camera, data are read from an older frame, and in addition, regions with significant motion are displayed in a manner that leaves an after-image.
In each of the first to fifteenth embodiments, a structure is provided such that the source frames included in the original moving image and corresponding to a specific period are stored in the circular buffer 56. In another modification, however, the image input unit 52 may read the frames determined by the decision processing unit 62 from an original moving image compressed in MPEG format, and the buffer control unit 54 may store those frames in the circular buffer 56. In addition, the buffer control unit 54 may refer to the frames before and after such a frame.
Hereinafter, further modifications will be described.
1-1. In the first embodiment, a structure is provided such that data are read from a different frame for each pixel row. In this modified example 1-1, a structure may be such that several past frames are set as source frames, and the data for each pixel row are read from one of these set frames. For example, two frames A and B may be set as source frames, and data are read from A or B according to whether the ordinal number of the pixel row is odd or even. As another example, six frames A, B, C, D, E, and F may be set as source frames, with the data of the 0th to 79th pixel rows read from frame A, the data of the 80th to 159th pixel rows read from frame B, and so on. In other words, the pixel rows are divided into units of 80 lines, and the data of each divided unit are read from a different past frame. When data are output with each unit of 50 pixel rows coming from a different frame on the screen, a striped or similar pattern appears over moving regions.
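Modified example 1-1 can be sketched directly: bands of pixel rows are taken cyclically from the set source frames. The band width is a parameter here; the text uses 80 lines as an example.

```python
import numpy as np

def banded_frame(sources, band=80):
    """Build one output frame by taking horizontal bands of `band`
    pixel rows cyclically from the given past source frames
    (A, B, C, ... in the text): rows 0..band-1 from the first source,
    the next band from the second, and so on.
    """
    out = np.empty_like(sources[0])
    for y0 in range(0, sources[0].shape[0], band):
        src = sources[(y0 // band) % len(sources)]   # cycle through sources
        out[y0:y0 + band] = src[y0:y0 + band]
    return out
```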
1-2. In the second embodiment, a structure is provided such that data are read from different frames according to the Z value. In this modified example 1-2, a structure may be such that only the data of pixels having Z values within a predetermined range are read from a past frame. For example, at least one of an upper and a lower limit of the Z value is set in advance, together with one or more past frames from which data are to be read. For pixels whose Z values fall within the set range (above the lower limit and below the upper limit), data are read from the set past frames; for pixels whose Z values fall outside the set range, data are read from the current frame 12. The past frames serving as source frames may be fixed at one or more frames, or the source frame may be a past frame that varies according to the pixel coordinates.
1-3. In the third embodiment, a structure is provided such that the data of pixels having predetermined pixel values are read from a plurality of past frames and composited with the current frame 12. In this modified example 1-3, a structure may be such that, for pixels of the current frame 12 having predetermined pixel values, data are read from a predetermined past frame, and for the other pixels, data are read from the current frame 12. Furthermore, the past frame serving as the source frame may be set in a fixed manner, or it may vary according to the pixel value.
1-4. In the fourth embodiment, a structure is provided such that data are read from a past frame corresponding to the approximation level to a desired image pattern. In this modified example 1-4, a structure may be such that only the data of pixels whose approximation level to the desired image pattern falls within a predetermined range are read from a past frame, while the other pixels are read from the current frame 12. As the range of the approximation level, at least one of its upper and lower limits may be set in advance. The past frame serving as the source frame may be set in a fixed manner, or the source frame may be a past frame obtained by going back along the time axis according to the approximation level.
1-5. In the fifth embodiment, a structure is provided such that data are read from a past frame according to the change in pixel values. In this modified example 1-5, a structure may be such that only the data of pixels whose pixel-value change falls within a predetermined range are read from a past frame, while the data of the other pixels are read from the current frame 12. The past frame serving as the source frame may be set in a fixed manner, or the source frame may be a past frame obtained by going back along the time axis according to the change in pixel values.
1-6. In the first embodiment, the function t = t0 - y is defined with the time value t and the pixel coordinate y. In this modified example 1-6, the relation between the time t and the pixel coordinate y may be defined using a trigonometric function, such as t = sin y. In the image described with reference to Figure 13, pixel rows whose data are read from past frames further back along the time axis and pixel rows whose data are read from newer frames then alternate periodically. Moreover, as in modified example 1-1, a form may be such that several past frames are set in advance as source frames, and the data of each pixel row are read from one of these set frames according to the time value t.
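The row-to-frame mapping of the first embodiment (t = t0 - y) and the periodic variant of this modified example can be sketched as follows; the scaling of the sine term (`period`, `depth`) is an illustrative assumption.

```python
import math
import numpy as np

def slice_rows(frames, t0, mode="linear", period=8.0, depth=4):
    """Assemble an output frame row by row, choosing for each pixel row
    y the source frame given by the cut: t = t0 - y for the first
    embodiment's sloped cut, or a sinusoidal cut for this modified
    example.  Frame indices are clamped at the oldest stored frame.
    """
    out = np.empty_like(frames[0])
    for y in range(frames[0].shape[0]):
        if mode == "linear":
            t = t0 - y
        else:
            # periodic cut: t = t0 - depth * (sin(2*pi*y/period) + 1) / 2
            t = t0 - int(round(depth * (math.sin(2 * math.pi * y / period) + 1) / 2))
        out[y] = frames[max(t, 0)][y]
    return out
```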
2-1. In the first embodiment, a structure is provided such that data are read from past frames for each pixel row, and the data are arranged in the vertical direction to construct one frame. In this modified example 2-1, a structure may be such that the data read from a past frame for each pixel row are composited with the current frame 12 to form one frame. In this case, the alpha value may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
2-2. In the second embodiment, a structure is provided such that data are read from different frames according to the Z value. In this modified example 2-2, a structure may be such that the data read from different frames according to the Z value are composited with the current frame 12 to produce one frame. Alternatively, only the data of pixels of the current frame 12 whose Z values lie within a predetermined range are read from a past frame, and such data are composited with the current frame 12 to form one frame. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
2-3. In the third embodiment, a structure is provided such that the data of pixels having predetermined pixel values are read from a plurality of past frames and composited with the current frame 12. In this modified example, a structure may be such that, for pixels having predetermined pixel values in the current frame 12, data are read from a predetermined past frame and composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
2-4. In the fourth embodiment, a structure is provided such that data are read from a past frame corresponding to the approximation level to a desired image pattern. In this modified example 2-4, a structure may be such that the data read from a past frame corresponding to the approximation level to the desired image pattern are composited with the current frame 12. Alternatively, only the data of pixels whose approximation level to the desired image pattern falls within a predetermined range are read from a past frame, and such data are composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
2-5. In the fifth embodiment, a structure is provided such that data are read from a past frame, reached by going back along the time axis, according to the change in pixel values. In this modified example 2-5, a structure may be such that the data read from a past frame according to the change in pixel values are composited with the current frame 12. Alternatively, only the data of pixels whose pixel-value change falls within a predetermined range are read from a past frame, and such data are composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
2-6. In modified example 1-6, the relation between the time t and the pixel coordinate y is defined using a trigonometric function such as t = sin y. As a further modification thereof, the data read from frames ranging from the present to the past, according to a function using a trigonometric function such as t = sin y, may be composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
2-7. In the sixth embodiment, a structure is provided such that data are read from frames corresponding to the Bezier curve 86 set by the user on the setting screen 82. In this modified example 2-7, a structure may be such that the data read from frames according to the Bezier curve 86 set by the user on the setting screen 82 are composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
3-1. In this modified example, a structure may be such that at least two of the first to fifteenth embodiments, modified examples 1-1 to 1-6, and modified examples 2-1 to 2-7 are combined, and the two or more pieces of data read for each pixel are composited. The alpha value in this case may be a fixed value, or it may differ for each pixel. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, the approximation level to a desired image pattern, the change in the pixel value, and so forth.
Although the present invention has been shown and described with reference to certain preferred embodiments, those skilled in the art will appreciate that various modifications in form and detail may be made to the invention without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (18)

1. image producing method comprises:
The original active image is used as the two dimensional image that changes along time shaft, and when live image being expressed as the rectangular parallel piped shape space that is formed by two dimensional image and time shaft with virtual mode, with comprise a plurality of on time value the curved surface of mutually different points cut this rectangular parallel piped shape space;
With the image projection that occurs on the cross section to the plane on time-axis direction; And
By moving curved surface along time shaft and, the image that occurs in the plane being exported as new live image by changing the cross section in time.
2. image producing method as claimed in claim 1, wherein the function definition cross section of the coordinate by being included in the point in the two dimensional image.
3. image forming appts comprises:
Video memory is used for along time shaft sequential storage original active image;
Image conversion unit, comprising: the decision processing unit, the original active image that is used for being stored in described video memory is used as the two dimensional image that changes along time shaft, and when live image being expressed as the rectangular parallel piped shape space that is formed by two dimensional image and time shaft with virtual mode, to utilizing the function of curved surface being represented with coordinate figure and time value to constitute to comprise each pixel in a plurality of cross sections the when curved surface of different points cuts this rectangular parallel piped shape space mutually on time value, decision should be read which frame of these data in a plurality of frames from video memory; Data obtain the unit, are used for from the frame sense data that is determined; And image formation unit, be used for by the data of being read are synthesized, generate the image projection that will occur on the cross section image to the plane on time-axis direction; And
Image data-outputting unit is new activity diagram picture frame with image setting that the cross section obtains by changing in time in described image conversion unit, that occur in the plane.
4. image forming appts as claimed in claim 3, wherein said image conversion unit realizes changing in time the cross section by move curved surface along time shaft.
5. image forming appts as claimed in claim 3 wherein define curved surface in curved surface has continuous or a discrete width on time-axis direction mode, and described image conversion unit is synthesized the image that covers in this width.
6. image forming appts as claimed in claim 3, wherein said image conversion unit is cut rectangular parallel piped shape space with the curved surface of the function definition of the coordinate of the image-region that constitutes two dimensional image.
7. image forming appts as claimed in claim 6, wherein usefulness does not rely on the function definition curved surface of the horizontal coordinate of two dimensional image.
8. image forming appts as claimed in claim 3, wherein said image conversion unit is cut rectangular parallel piped shape space with the curved surface of the function definition of the property value of the image-region that constitutes two dimensional image.
9. image forming appts as claimed in claim 3, also comprise input unit is set, be used for operating the value of the setting input that obtains to be used to define curved surface via the user, wherein said image conversion unit is used by the described curved surface that the function definition of the value of setting that input unit obtains is set and is cut rectangular parallel piped shape space.
10. image forming appts as claimed in claim 9, wherein when concerning between the variable of function that on screen, shows the value of setting and function, with indicating the curve that concerns between the coordinate that is included in the point in the two dimensional image and its time value to represent the function that the value of setting that input unit obtains is set by described.
11. image forming appts as claimed in claim 9, wherein said characteristic point coordinates conduct value of setting that is provided with in the input unit acquisition two dimensional image, and described image conversion unit is used the curvilinear cut rectangular parallel piped shape space of the function definition of characteristic point coordinates.
12. image forming appts as claimed in claim 3, wherein said image conversion unit is so that according to property value, the mode of cross section so that the different speed of each image-region is changed in time of the image-region that constitutes two dimensional image, partly changing will be from the new activity diagram picture frame of described image data-outputting unit output.
13. image forming appts as claimed in claim 3, the time value that wherein defines curved surface comprise at least with the present moment be the past at center or in the future in one.
14. image forming appts as claimed in claim 8, wherein property value is a depth value.
15. image forming appts as claimed in claim 8, wherein property value is the value on the approximate rank of the relative desired image pattern of indication.
16. image forming appts as claimed in claim 8, wherein property value is the value of the degree that changes in time of indicator diagram picture zone.
17. image forming appts as claimed in claim 8, wherein property value is a pixel value.
18. image forming appts as claimed in claim 3 also comprises the image input unit, the image that is used to obtain by video camera is taken sends to described video memory as the original active image and with these images.
CNB2003801019768A 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis Expired - Lifetime CN100346638C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP311631/2002 2002-10-25
JP2002311631 2002-10-25
JP326771/2003 2003-09-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN2007101025576A Division CN101123692B (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis

Publications (2)

Publication Number Publication Date
CN1708982A CN1708982A (en) 2005-12-14
CN100346638C true CN100346638C (en) 2007-10-31

Family

ID=35581912

Family Applications (2)

Application Number Title Priority Date Filing Date
CN2007101025576A Expired - Lifetime CN101123692B (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis
CNB2003801019768A Expired - Lifetime CN100346638C (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN2007101025576A Expired - Lifetime CN101123692B (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis

Country Status (1)

Country Link
CN (2) CN101123692B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010103972A (en) * 2008-09-25 2010-05-06 Sanyo Electric Co Ltd Image processing device and electronic appliance
JP6532393B2 (en) 2015-12-02 2019-06-19 株式会社ソニー・インタラクティブエンタテインメント Display control apparatus and display control method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5459830A (en) * 1991-07-22 1995-10-17 Sony Corporation Animation data index creation drawn from image data sampling composites
CN1169808A (en) * 1995-11-14 1998-01-07 索尼公司 Device and method for processing image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064355A (en) * 1994-05-24 2000-05-16 Texas Instruments Incorporated Method and apparatus for playback with a virtual reality system

Also Published As

Publication number Publication date
CN101123692A (en) 2008-02-13
CN1708982A (en) 2005-12-14
CN101123692B (en) 2012-09-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term
CX01 Expiry of patent term

Granted publication date: 20071031