CN101123692B - Method and apparatus for generating new images by using image data that vary along time axis - Google Patents


Info

Publication number
CN101123692B
CN101123692B · CN2007101025576A · CN200710102557A
Authority
CN
China
Prior art keywords
image
frame
pixel
value
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN2007101025576A
Other languages
Chinese (zh)
Other versions
CN101123692A (en)
Inventor
挂智一
大场章男
铃木章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2003326771A (patent JP4114720B2)
Application filed by Sony Computer Entertainment Inc
Publication of CN101123692A
Application granted
Publication of CN101123692B


Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A rectangular-parallelepiped space (box space) expresses a moving image in a virtual space. A plurality of frames contained in the moving image appear continuously along the time axis. The box space is cut by a desired surface, and the image appearing on this cut surface is projected, in the time-axis direction, onto a plane. The images sequentially projected onto the plane are output as a new moving image.

Description

Method and apparatus for generating new images using image data that vary along the time axis
This application is a divisional of the application entitled "Method and apparatus for generating new images using image data that vary along the time axis" (application number: 200380101976.8; filing date: October 22, 2003).
Technical field
The present invention relates to a method and apparatus for generating images, and more particularly to a technique for processing moving images shot by a camera and outputting the processed moving images.
Background technology
With the remarkable progress and development of computer technology in recent years, the image-processing capability offered by computers has improved significantly. Even the home PCs (personal computers) and game machines available to ordinary users can now perform processing that once could only be realized by high-end workstations dedicated to image processing.
The improved image-processing performance and capability of PCs and game machines open up another potential application for them: low-priced tools for movie editing, image processing, authoring and the like, aimed at general users. Specialized skills are therefore no longer a prerequisite for sophisticated image processing, and even amateur users can edit moving images using such tools.
Summary of the invention
In view of the foregoing, the inventors of the present invention have sought an innovative image-processing method by which images with novel and special effects can be obtained. The inventors made the present invention based on the above insight, and one object of the invention is to obtain images of interest. The invention was also developed in consideration of other objects, which will be understood from the following description of the specification. Namely, the objects include improving the efficiency of image processing, reducing the load caused by image processing, proposing new improvements to image-processing technology, and so forth.
According to one embodiment of the present invention, an image generating method comprises: for each in-picture position of an image contained in a target frame of an original moving image, reading data corresponding to the in-picture position from at least one of a plurality of frames contained in the original moving image; subjecting the read data to synthesis based on alpha values; and forming a new moving image by sequentially outputting the frames formed by said alpha-value-based synthesis.
According to another embodiment of the present invention, an image generating apparatus comprises an image memory, an image conversion unit and an image-data output unit, wherein the image memory records the original moving image frame by frame; the image conversion unit reads, for each in-picture position of an image contained in a target frame, data corresponding to the in-picture position from at least one frame recorded in the image memory, and subjects the data to synthesis based on alpha values; and the image-data output unit sequentially outputs the frames reconstructed by the alpha-value-based synthesis.
A preferred embodiment of the present invention relates to an image generating method. In this method, the original moving image is regarded as a two-dimensional image that varies along the time axis, and the moving image is expressed virtually as a box space formed by the two-dimensional image and the time axis. This box space is cut by a surface containing a plurality of points whose time values differ from one another. The image appearing on the cut surface is projected onto a plane in the time-axis direction, and by varying the cut surface over time, the images appearing on the plane are output as a new moving image. By setting the surface in various ways, the content of the change is determined, and a new moving image whose content differs from that of the original moving image is output.
Here, the "original moving image" may be an image shot live by a camera, or an image stored in advance on a recording medium, such as an image encoded in a format like MPEG. "Projecting onto a plane" means that the image projected onto a plane normal to the time axis is the result of the projection. Specifically, when the cut surface is viewed directly from the time-axis direction, the image projected onto the cut surface is equal to the image projected onto the plane.
"Varying the cut surface over time" can be realized, for example, by moving the cut surface along the time axis while keeping the shape of the surface intact. By moving the cut surface with the passage of time, a smooth and continuous new moving image can be obtained. The shape of the surface may also change with the passage of time. If the time value of a point contained in the surface is denoted by t and the coordinates of a point contained in the two-dimensional image by (x, y), then t can be defined by a function of the general form t = f(x, y). The surface may also be a plane. The projected image varies according to the type of surface shape that is set.
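As an illustration only (not part of the specification), sampling the box space with a cut surface t = f(x, y) can be sketched as follows; the frame-stack representation, the resolution, and the particular choice of f are assumptions made for the example.

```python
import numpy as np

def slice_box_space(frames, f):
    """Sample a stack of frames with a cut surface t = f(x, y).

    frames: array of shape (T, H, W) holding the moving image along the
            time axis; frames[-1] is the current frame (offset 0).
    f:      function mapping pixel coordinates (x, y) to a non-negative
            offset into the past, measured in frames.
    """
    T, H, W = frames.shape
    out = np.empty((H, W), dtype=frames.dtype)
    for y in range(H):
        for x in range(W):
            back = min(int(f(x, y)), T - 1)  # clamp to available history
            out[y, x] = frames[T - 1 - back, y, x]
    return out

# Example: an inclined plane parallel to the x axis, where lower rows
# come from older frames (as in the first embodiment below).
frames = np.arange(4)[:, None, None] * np.ones((4, 3, 3), dtype=int)
new_frame = slice_box_space(frames, lambda x, y: y)
print(new_frame[:, 0])  # row y reads frame T-1-y: [3 2 1]
```

A different surface shape is obtained simply by passing a different f, e.g. `lambda x, y: x` for a plane parallel to the y axis.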
A further preferred embodiment of the invention relates to an image generating apparatus. The apparatus comprises: an image memory for sequentially storing the original moving image along the time axis; an image conversion unit which regards the original moving image stored in the image memory as a two-dimensional image varying along the time axis, expresses the moving image virtually as a box space formed by the two-dimensional image and the time axis, cuts this box space with a surface containing a plurality of points whose time values differ from one another, and projects the image appearing on the cut surface onto a plane in the time-axis direction; and an image-data output unit which sets the images appearing on the plane, obtained by varying the cut surface over time in the image conversion unit, as frames of a new moving image. The image memory serves as a buffer that temporarily stores a plurality of frames for a fixed period, until the frames contained in the original moving image have been converted into new frames.
The image generating apparatus may further comprise an image input unit for acquiring images shot by a camera and sending the acquired images to the image memory as the original moving image. The shot images are thus processed in real time, so that unique, mysterious or special-effect images that differ from the actual state of the subject can be displayed on screen.
The image conversion unit may cut the box space with a surface defined by a function of the coordinates of the image regions constituting the two-dimensional image. Here, an "image region" may be a region covering a single pixel or a region covering a block of pixels. The surface may be defined by a function that does not depend on the horizontal-direction coordinate of the two-dimensional image; the "horizontal direction" may be the scan-line direction. The image conversion unit may also cut the box space with a surface defined by a function of an attribute value of the image regions constituting the two-dimensional image. The "attribute value", which determines the display content of each pixel, may be any of various attributes, such as a pixel value, a depth value, the order of approximation to a specific pattern, the degree of change relative to other frames, and so forth. The attribute value may be an average value or a central value of the image region.
The time value of a point contained in the above-described surface may be determined according to any one of the parameters among the pixel value, the depth value, the order of approximation to a specific pattern, and the degree of change. Furthermore, which image region is projected onto the above-described plane may be determined according to any one of the same parameters.
A further preferred embodiment of the invention relates to an image generating method. The method comprises: for each in-picture position of an image contained in a target frame of the original moving image, reading data corresponding to the in-picture position from at least one of a plurality of frames contained in the original moving image; synthesizing the read data; and forming a new moving image by sequentially outputting the frames formed in the synthesis. Data are read from past frames in units of pixels or pixel rows and then synthesized, so that an image different from the original moving image is obtained. This is a kind of patchwork image, in which temporally different data are mixed and synthesized in units of pixels or pixel rows, so that unique and mysterious images that cannot exist in the real world can be obtained.
The "target frame" serves as a reference frame at display time, and may change with the passage of time. For example, in an ordinary scan method, the "target frame" corresponds to the current frame to be output at the present timing. Based on this target frame, it is determined from which frame the actual data are read and output. The "in-picture position" may be the position of a pixel row serving as a scan line, or may be the position of a pixel. The corresponding data can be read from a frame and synthesized in units of pixels or pixel rows. "Synthesizing" may include superposing, blending, substituting and pasting.
A further preferred embodiment of the invention relates to an image generating apparatus comprising an image memory, an image conversion unit and an image-data output unit. The image memory records the original moving image frame by frame. For each in-picture position of an image contained in a target frame, the image conversion unit reads data corresponding to the in-picture position from at least one frame recorded in the image memory, and synthesizes the data. The image-data output unit sequentially outputs the frames synthesized and reconstructed by the image conversion unit. The image memory temporarily stores a plurality of frames for a predetermined period, until the frames contained in the original moving image have been converted into new frames and are no longer used. A "pixel" is a dot forming an image displayed on a display screen, and may be a dot expressed by a set of RGB colors.
It should be noted that any arbitrary combination of the above-described structural elements and expressions, interchanged among a method, an apparatus, a system, a computer program, a recording medium storing a computer program, a data structure and the like, is also valid and encompassed by the present embodiments.
Moreover, this summary of the invention does not necessarily describe all essential features, so the invention may also be a sub-combination of the features described.
Description of drawings
Fig. 1 illustrates virtually, according to a first embodiment of the invention, the state in which the frames of the original moving image appear continuously along the time axis.
Figs. 2A and 2B are provided to compare a screen showing the shot object with a screen showing the content actually displayed.
Fig. 3 is a block diagram showing the functions of the image generating apparatus according to the first embodiment.
Fig. 4 is a flowchart showing, according to the first embodiment, the steps of converting the original moving image into a new moving image.
Fig. 5 illustrates virtually a moving image as a box space according to a second embodiment.
Figs. 6A and 6B are provided, according to the second embodiment, to compare a screen showing the shot object with a screen showing the content actually displayed.
Fig. 7 is a flowchart showing, in the second embodiment, the steps of generating a new moving image by reading data from frames according to Z values.
Fig. 8 illustrates virtually a moving image as a box space according to a third embodiment.
Figs. 9A and 9B compare, according to the third embodiment, a screen showing the shot object with a screen showing the content actually displayed.
Fig. 10 is a flowchart showing, according to the third embodiment, the steps of generating from the original moving image a moving image in which a desired color component has been extracted.
Fig. 11 illustrates the original moving image as a box space according to a fourth embodiment.
Fig. 12 is a functional block diagram showing the structure of the image generating apparatus.
Fig. 13 shows an example of a monitor screen displaying a function table determined via a setting input unit.
Fig. 14 is a flowchart showing the steps of generating a directing-and-manipulating target.
Fig. 15 is a flowchart showing the steps of applying a directing effect to the current frame.
Embodiment
The present invention will now be described based on embodiments, which do not limit the invention but exemplify it. All the features described in the embodiments, and their combinations, are not necessarily essential.
First embodiment
According to the first embodiment of the invention, a plurality of frames contained in the original moving image are sequentially stored in a ring buffer (see Fig. 3), and data are read from a different frame for each scan line, so that the data thus read are displayed on screen as a new frame. Specifically, data for pixels on a scan line near the top edge of the screen are read from a newer frame, while data for pixels on a scan line near the bottom edge of the screen are read from a temporally older previous frame. On the screen, strange and mysterious images that differ from the real object are displayed.
Fig. 1 illustrates virtually the state in which the frames of the original moving image appear continuously along the time axis. The original moving image is regarded as a two-dimensional image that changes along the time axis. A rectangular-parallelepiped space 10 (hereinafter also called the "box space") extends in the direction of the time axis t with the passage of time. A cross section perpendicular to the time axis t represents a frame. A frame is a set of pixels expressed by coordinates on the plane formed by the x axis and the y axis. This box space 10 is cut by a surface of a desired shape. As shown in Fig. 1, according to the first embodiment, the box space 10 is cut by an inclined plane that is parallel to the x axis and runs from time t0 to time t1 as one moves down the frame. When the image appearing on the cut surface 14 is projected onto a plane in the time-axis direction, the image projected onto the plane normal to the time axis is output as the actual frame, rather than the current frame 12.
The cut surface 14 moves along the time axis t with the passage of time. The cut surface 14 may also be defined so as to have a continuous width in the direction of the time axis t. The images contained in this width are synthesized, and the synthesized images serve as the actual frame displayed on screen.
The current frame 12 corresponds to the frame that would be displayed at the current scan timing in a normal display mode. Suppose the current position of the current frame 12 on the time axis is time t0. Then the frames preceding time t0 (for example, the frames located at times t1, t2, t3 and t4) correspond to frames already displayed under normal display timing. In the first embodiment, however, frames prior to time t0 are actually displayed. The data of the pixels contained in each frame are output sequentially for each horizontal pixel row; the data of the pixels contained in a single pixel row are read at the same timing and then displayed.
The topmost pixel row is output at the normal scan timing. The pixel row one pixel below it is output with a delay of one frame. Thus, the lower a pixel row, the more delayed the timing at which it is output.
The data of each pixel on the screen are read from past frames, and the degree to which those frames are set back can be expressed by a function of the pixel coordinates, such as t = t0 - y. The function t = t0 - y depends only on the y coordinate of the pixel row and does not depend on its x coordinate.
Suppose the resolution of the current frame 12 is 720 × 480, with the coordinates of the top-left pixel being (0, 0) and those of the bottom-right pixel being (719, 479). In this case, the maximum value of the y coordinate is 479, and the scan timing of the lowest pixel row is delayed by 479 frames. In the box space 10, 480 frames lie between time t0 and time t2.
Figs. 2A and 2B respectively show a screen of the shot object and a screen of the object actually displayed, provided so that the former can be compared with the latter. Fig. 2A shows the shot image, equivalent to the current frame 12. Here, the subject is slowly waving his/her hand 16. Fig. 2B shows the image appearing on the cut surface 14 of Fig. 1, which is the image of the subject of Fig. 2A actually displayed on screen. In other words, going from past frames toward the current frame, the position of the hand 16 changes in the order left, midpoint, right, midpoint, left; therefore, by reading data from a different past frame for each scan line, images of the hand 16 at the left position and at the right position appear alternately. Since data on the same data line are read from frames at the same temporal scan timing, no bending or distortion is caused in the horizontal direction. In the vertical direction, however, the hand 16 is displayed in a winding and bent manner along its left and right contours.
In other words, the subject waving the hand 16 hardly moves except for the hand 16. Therefore, even if images read from temporally different frames are synthesized, bending or distortion is almost nonexistent for regions whose display positions do not differ.
Fig. 3 is a functional block diagram showing the image generating apparatus according to the first embodiment. In terms of hardware, the structure of the image generating apparatus 50 can be realized by the CPU of any computer and similar devices. In terms of software, it is realized by programs having storage, image-processing and rendering functions or the like; what is illustrated and described in Fig. 3 are the functional blocks realized by combinations thereof. These functional blocks can therefore be realized in a variety of forms by hardware only, by software only, or by a combination thereof.
The image generating apparatus 50 comprises: an image input unit 52, which acquires the images shot by a camera as the original moving image and sends the frames contained in the original moving image to the image memory; a ring buffer 56, serving as the image memory that sequentially stores the original moving image along the time axis; a buffer control unit 54, which controls the reading of frames from the ring buffer 56 and the writing of incoming frames to the ring buffer 56; an image conversion unit 60, which converts the frames stored in the ring buffer 56 into frames for display; a function memory 70, which stores the functions referred to during frame conversion; and a display buffer 74, which stores the frames for display.
The image input unit 52 may comprise a CCD for capturing digital images and a conversion unit that obtains digital images through A-D conversion. The image input unit 52 may be implemented as a device provided externally and attached to the image generating apparatus 50 in a detachable manner. The buffer control unit 54 sequentially records the frames of the original moving image input by the image input unit 52 to the region indicated by the write pointer of the ring buffer 56.
For each pixel contained in the current frame 12, the image conversion unit 60 reads the data corresponding to the pixel from a frame recorded in the ring buffer 56, and synthesizes the data. The image conversion unit 60 comprises: a decision processing unit 62, which determines for each pixel from which frame data should be read; a data acquisition unit 64, which reads data from the frame determined by the decision processing unit 62; and an image formation unit 66, which forms a frame by synthesizing the data read for each pixel row.
In the decision processing unit 62, the decision as to which frame data should be read from is defined by, and derived according to, the following equation (1).
P_fr(x, y, t0) = P(x, y, t0 - y)   (1)
where, as shown in Fig. 1, x and y are the pixel coordinates in the current frame 12, and t0 is the time value on the time axis t. P_fr is the pixel value of each pixel in the frame actually output. As is obvious from equation (1), the time value of the frame to be output is a function of the y coordinate only. Therefore, the decision as to which of the frames stored in the ring buffer 56 data should be read from is made for each pixel row, and does not depend on the x coordinate.
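As a rough illustration of equation (1), under the 720 × 480 example given earlier, the per-row frame selection can be sketched as follows; the wrap-around indexing models the ring buffer, and the concrete numbers are assumptions for the example.

```python
def source_frame_index(y, t0, num_frames):
    """Equation (1): row y of the output frame is taken from the frame
    at time t0 - y. Indices are wrapped to model the ring buffer."""
    return (t0 - y) % num_frames

# With 480-line frames, the bottom row (y = 479) lags 479 frames behind
# the top row (y = 0).
t0, num_frames = 500, 480
print(source_frame_index(0, t0, num_frames))    # top row: buffer slot 20
print(source_frame_index(479, t0, num_frames))  # bottom row: buffer slot 21
```

Because the result is independent of x, the whole of row y is copied from a single source frame, matching the scan-line granularity described above.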
The function expressed by equation (1) is stored in the function memory 70. Other selectable functions are also stored in the function memory 70. The user can set, via an instruction acquisition unit 72, which function to adopt.
The data of each pixel read by the data acquisition unit 64 are written in order to the display buffer 74 by the image formation unit 66, thereby composing a frame.
The image generating apparatus 50 further comprises: the instruction acquisition unit 72, which receives instructions from the user; an image-data output unit 76 for outputting the frames stored in the display buffer 74; and a monitor 78 for displaying the output frames on screen. The monitor 78 may be a display provided externally to the image generating apparatus 50.
The image-data output unit 76 reads the image data from the display buffer 74, which stores the image data of one frame, converts them into an analog signal, and sends them to the monitor 78. The image-data output unit 76 sequentially outputs the frames stored in the display buffer 74, so as to output a new moving image.
Fig. 4 is a flowchart showing, according to the first embodiment, the steps of converting the original moving image into a new moving image. First, the write pointer t in the ring buffer 56, which indicates the next write position, is initialized, i.e. set to t = 0 (S10), so that frame storage starts from the top region of the ring buffer 56. A frame contained in the original moving image is recorded in region t of the ring buffer 56 (S12). The ring buffer thus provides a total of T0 regions, each holding one frame.
The pixel row n in the display buffer 74 is initialized, i.e. set to n = 0 (S14), so that data are copied in order to the display buffer 74 starting from the pixel row corresponding to the top row of the screen. A read pointer T designating the read position of the data corresponding to row n is then calculated (S16). Here, T is obtained by T = t - n. As the row number increases, the read pointer T goes further back to past frames. Initially, T = 0 - 0 = 0, so the read pointer T indicates the 0th region.
If the read pointer T is less than 0, no such read position actually exists (S18Y). In that case, the read pointer is moved to the end of the ring buffer 56 (S20); more specifically, the number of regions T0 of the ring buffer 56 is added to the read pointer T. The data acquisition unit 64 reads row n of the frame stored in the region of the ring buffer 56 indicated by the read pointer, and the image formation unit 66 copies the data corresponding to this row number to the region of row n in the display buffer 74.
When row n is not the last row in the display buffer 74 (S24N), the row number is incremented by 1 (S26). Row n is incremented repeatedly, and the processing from S16 to S24 is repeated, until the row number reaches the last row. When the row number becomes the number corresponding to the last row, the image data of one frame have been stored in the display buffer 74 (S24Y), and the write pointer is incremented by 1 (S28). When the write pointer t indicates the end region of the ring buffer 56 (S30Y), the write pointer t returns to the beginning region of the ring buffer 56 (S32).
The image-data output unit reads the frame from the display buffer 74, outputs the frame as video data, and causes the monitor 78 to display the frame on screen (S34). The processing from S12 to S34 is repeated until an instruction to stop the display is given (S36). In this way, data are read from a single frame in units of pixel rows and written to the display buffer. A pixel row is a set of pixels arranged horizontally, like a scan line, so a pixel row is the data that would be read at the same scan timing in a normal setting. Reading and writing are therefore handled efficiently in the course of scanning, and an excessive increase in the load caused by the image conversion of the present embodiment can be prevented.
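The pointer bookkeeping of steps S14 through S24 can be sketched roughly as follows; this is an illustrative reading of the flowchart, not the specification itself, and the frame representation, buffer size and the replacement of the display output by a returned list of rows are all assumptions.

```python
def compose_output_frame(ring, t, height):
    """One pass of S14-S24: build a display frame row by row.

    ring:   list of T0 frames (each a list of rows); ring[t] is the most
            recently written frame.
    t:      current write pointer.
    height: number of pixel rows per frame.
    """
    T0 = len(ring)
    out = []
    for n in range(height):      # S14/S26: row counter from the top row
        T = t - n                # S16: read pointer for row n (T = t - n)
        if T < 0:                # S18: pointer fell off the buffer start
            T += T0              # S20: wrap to the end of the ring buffer
        out.append(ring[T][n])   # copy row n of the source frame
    return out

# Tiny example: 3-frame ring buffer, 2-row frames labelled by frame id.
ring = [[("f%d" % i, n) for n in range(2)] for i in range(3)]
print(compose_output_frame(ring, t=1, height=2))
# row 0 from frame 1, row 1 from frame 0: [('f1', 0), ('f0', 1)]
```

The wrap-around in S18/S20 is what lets a small fixed-size buffer serve an unbounded stream: once a frame can no longer be referenced by any row, its slot is overwritten by the advancing write pointer.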
As a modified example of the present embodiment, the decision processing unit 62 may determine the frame to be read according to the x coordinate. For example, for a pixel column located on the left-hand side of the screen, its data are read from the left-hand side of the current frame 12, while for a pixel column located on the right-hand side of the screen, its data are read from the right-hand-side pixel column of the frame at time t2 shown in Fig. 1. The cut surface is then the inclined plane parallel to the y axis, running from time t0 to time t2.
As another modification, the decision processing unit 62 may determine the frame to be read according to both the x and y coordinates. For example, for a pixel located at the top-left edge of the screen, its data are read from the top-left edge of the current frame 12, while for a pixel located at the bottom-right edge of the screen, its data are read from the bottom-right pixel of the frame at time t2 shown in Fig. 1.
As a further modification, the scan lines may be set in the vertical direction rather than the horizontal direction. In this case, more efficient image conversion can be realized by determining the frame to be read according to the x coordinate.
Second embodiment
In the second embodiment, data are read from different frames according to the depth value (Z value) specified for each pixel. In this respect, the processing performed by the second embodiment differs from that of the first embodiment, which determines the frame according to the y coordinate of the pixel. For example, among the objects shot in the original moving image, an object closer to the camera is read from an older frame. Thus, the closer the distance between the camera and an object, the more its display timing is delayed.
Fig. 5 illustrates virtually a moving image as a box space according to the second embodiment. Among the objects shot in the current frame 12, the Z value of a first image 20 is set to "120" and the Z value of a second image 24 is set to "60". The larger the Z value, the closer the object is to the camera. The delay in display timing is proportional to the Z value. Each pixel of the frame actually displayed on screen is defined by the following equation (2).
P_fr(x, y, t0) = P(x, y, t0 - Z(x, y, t0))   (2)
where Z(x, y, t0) is the Z value of the current pixel. As the Z value increases, the frame from which the pixel data are read recedes from t0 on the time axis in the direction of t1 and t2. From the frame at time t2, data are read in the region designated as a third image 22, corresponding to the first image 20. From the frame at time t1, data are read in the region designated as a fourth image 26, corresponding to the second image 24.
On the cut surface of the box space 10, the third image region 22 holds the time value t2, and the fourth image region 26 holds the time value t1; the other regions hold the time value t0. The points contained in the cut surface are thus dispersed at t0, t1 and t2, so that the cut surface has a discrete width in the time-axis direction.
The pixels composing the first image 20 have a larger Z value than the pixels of the second image, and their data are read from a temporally older frame. Conversely, since the pixels composing the second image have a smaller Z value than the pixels of the first image 20, the span of time they go back to an older frame is shorter.
Figs. 6A and 6B are provided to compare a screen showing the shot object with a screen showing the object actually displayed. Fig. 6A shows the shot objects: in this case, a person 30 raising his/her hand and beginning to wave it slowly, and an automobile 32 traveling in the background. Fig. 6B shows the image of the objects of Fig. 6A actually projected on screen. The objects are displayed on screen in a state different from the normal setting; that is, the closer a region is to the camera, the more its display timing is delayed. Here, in particular, the person 30 is the object closest to the camera, so the delay of the display timing is greatest. For a part or region that hardly moves, an older image is displayed for its pixels, resulting in an image almost identical to that of the object. On the other hand, for a region that moves frequently or significantly, or moves up and down, its display position shifts between frames. Therefore, as shown in Fig. 6B, when its data are read from an older frame at coordinates corresponding to those of Fig. 6A, the image of this region appears transparent or in a diffused (permeated) state. In Fig. 6B, the person 30 is displayed on screen in a state in which the frequently moving hand has vanished. The automobile 32 in the background is located relatively far from the camera and its Z value is small, so there is little difference between the display states of the automobile shown in Fig. 6A and Fig. 6B.
An image generating apparatus 50 according to this second embodiment has essentially the same structure as the apparatus shown in Fig. 3. Following equation (2) above, the decision processing unit 62 computes from each Z value the amount of time to go back into the past, and then determines, for each pixel, from which frame the data are to be read. Hereafter, the frame from which data are read is also called the source frame. The image input unit 52 according to the second embodiment includes a distance-measuring sensor for detecting the Z value of each pixel. The distance-measuring method may be a laser method, an infrared illumination method, a phase detection method, or the like.
Fig. 7 is a flowchart showing the steps of generating a new moving image in the second embodiment by reading data from frames according to the Z values. First, the write pointer t indicating the next write position in the circular buffer 56 is initialized, that is, t = 0 is set (S100), so that frames are stored starting from the top region of the buffer 56. A frame contained in the original moving image is recorded in region t of the circular buffer 56 (S102).
The position x and y of the target pixel in the display buffer 74 are initialized, that is, x = 0 and y = 0 are set (S104), so that the pixels are copied into the display buffer 74 in order starting from the row at the top of the screen. The read pointer T specifying the read position of the data corresponding to pixel (x, y) is then calculated (S106). The decision processing unit 62 calculates the read pointer T according to the Z value of each pixel. The data acquiring unit 64 reads the data of the pixel P(x, y) from the frame stored in the region of the circular buffer 56 indicated by the read pointer. Then the image forming unit 66 copies the read data onto the region of the pixel P(x, y) in the display buffer 74 (S108).
When P(x, y) is not yet the last pixel of the display buffer 74, that is, when the pixel P(x, y) is not the pixel at the bottom-right edge (S110N), the target moves to the next pixel (S112). The processing of S106 to S112 is repeated until the pixel P(x, y) becomes the last pixel. When P(x, y) becomes the last pixel, an image of one frame has been written into the display buffer 74 (S110Y) and is rendered by the image forming unit 66 (S114).
" 1 " is added to write pointer t (S116).When write pointer t indicates the stub area of circular buffer 56 (S118Y), write pointer t turns back to the apex zone (S120) of circular buffer 56.The image that image output unit 76 is drawn to display output.Repeat S102 to the processing of S122 till order shows termination (S124).By this way, be that unit writes display buffer 74 then from the frame sense data of separating with the pixel.Here it should be noted, for each pixel is confirmed sense data from which frame respectively, can be from similar and different frame sense data.
Third Embodiment
The third embodiment of the invention differs from the first and second embodiments in that images are composited by reading, from a plurality of frames, the data of pixels having a desired attribute value. Here the attribute value is a pixel value; for example, when an image is composited by reading only pixel values having a red component, a mysterious and uniquely affected image is obtained, in which only what appear to be afterimages of the desired color remain.
Fig. 8 shows a moving image virtually expressed as a box space according to the third embodiment. Projected onto the frames at times t0, t1, t2, and t3 in the box space 10 is a person 30 who holds a red object and slowly waves it from side to side. The images 34, 35, 36, and 37 of the red object are projected at mutually different display positions on the respective frames.
According to the third embodiment, among the plurality of old frames stored in the circular buffer 56, the frames to be used for image composition are determined in advance. In the case of Fig. 8, the frames at times t0, t1, t2, and t3 are used for image composition. These four frames form a plurality of surfaces arranged at fixed time intervals in the box space 10.
Figs. 9A and 9B compare a screen showing the object being shot with a screen showing the object as actually displayed. Fig. 9A shows the object being shot. Fig. 9B shows the image of the object of Fig. 9A as actually projected onto the screen. On this screen, only the red component is extracted and composited; therefore only the images 34, 35, 36, and 37 of the red object are displayed, and the background is white or black.
Each pixel of the frame actually displayed on the screen is defined by the following equation (3):

P_Fr(x, y, t0) = Σ_{i=0..3} α · P(x, y, t0 − const · i)   …(3)

where α, the alpha value indicating the composition ratio, is expressed by the following equation (4):

α = P_R(x, y, t0 − const · i)   …(4)

where P_R is the red component value of the pixel.
The data acquiring unit 64 reads out the data for each pixel according to equation (3) and composites the images. In this manner, pixel extraction by color is realized. Although the pixel value of the red component is set as the alpha value in equation (4), the setting is not restricted to this. If the alpha value is set to P_G or P_B, only the green or blue component is extracted and composited. Thus, if an object contains a specific color component, only the specific parts containing that color component are displayed, as if they were lingering afterimages.
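Equations (3) and (4) amount to a self-weighted blend over four equally spaced frames, which can be sketched as follows. This is a minimal illustration under assumed conventions (float RGB in [0, 1], red in channel 0); the function name and clipping are not from the patent.

```python
import numpy as np

def composite_red_afterimage(frames, const=1, count=4):
    """Blend `count` frames taken at fixed interval `const`, weighting each
    pixel by its own red component: out = sum_i alpha * P(t0 - const*i),
    with alpha = P_R (equations (3) and (4)).

    frames: list ordered oldest..newest of (H, W, 3) float arrays in [0, 1];
            frames[-1] is the current frame at t0.
    """
    h, w, _ = frames[-1].shape
    out = np.zeros((h, w, 3))
    for i in range(count):
        f = frames[-1 - const * i]       # frame at time t0 - const*i
        alpha = f[..., 0:1]              # alpha = red component P_R (eq. 4)
        out += alpha * f                 # accumulate alpha * P (eq. 3)
    return np.clip(out, 0.0, 1.0)
```

Pixels with no red component have alpha = 0 and contribute nothing, so only the red object and its afterimages survive, matching the display of Fig. 9B.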
An image generating apparatus 50 according to the third embodiment has essentially the same structure as the apparatus shown in Fig. 3. Following equations (3) and (4) above, the decision processing unit 62 selects a plurality of frames at fixed time intervals. The data acquiring unit 64 and the image forming unit 66 composite the read data at a ratio determined by the pixel value of each pixel.
Fig. 10 is a flowchart showing the steps, according to the third embodiment, of generating from the original moving image a moving image in which the parts of a desired color have been extracted. First, the write pointer t indicating the next write position in the circular buffer 56 is initialized, that is, t = 0 is set (S50), and the frame composition count i is initialized, that is, i = 0 is set (S51), so that frames are stored starting from the top region of the circular buffer 56. A frame contained in the original moving image is recorded in region t of the circular buffer 56 (S52).
The position x and y of the target pixel in the display buffer 74 are initialized, that is, x = 0 and y = 0 are set (S54), so that pixels are copied into the display buffer 74 in order starting from the top-left of the screen. The position of the read pointer T in the circular buffer 56 is calculated as the read position of the data corresponding to pixel (x, y) (S56). The read pointer T is the time value obtained by T = t0 − const · i, and indicates a plurality of past frames reached by going back along the time axis at fixed time intervals. The data acquiring unit 64 reads the data of the pixel P(x, y) from the frame stored in the region of the circular buffer 56 indicated by the read pointer. Then the image forming unit 66 copies the read data onto the region of the pixel P(x, y) in the display buffer 74 (S58).
The alpha value α(x, y) of the pixel P(x, y) is calculated and set (S60). When the pixel P(x, y) is not yet the last pixel in the display buffer 74, that is, when the pixel P(x, y) is not the bottom-right edge pixel (S62N), the target moves to the next pixel (S64). The processing of S56 to S62 is repeated until the pixel P(x, y) becomes the last pixel in the display buffer 74. When P(x, y) becomes the last pixel in the display buffer 74, an image of one frame has been written into the display buffer 74 (S62Y) and is rendered by the image forming unit 66 (S66).
If the frame composition count i has not yet reached the predetermined number I (S68N), "1" is added to the count i and the processing of S54 to S66 is repeated. In the third embodiment the predetermined number I is "3", and composition is repeated four times as the count i runs from "0" to "3". When the frame composition count i reaches the predetermined number I (S68Y), "1" is added to the write pointer t (S70). When the write pointer t indicates the end region of the circular buffer 56, the write pointer t returns to the top region of the circular buffer 56. The image data output unit 76 outputs the rendered image data to the display 78 (S76). The processing of S52 to S76 is repeated until termination of display is ordered (S78). In this manner, only the pixel data in which the red component is observed are read pixel by pixel from the past frames and then written into the display buffer.
As a modified example of the third embodiment, the alpha value of the frame corresponding to the composition count i = 0, namely the current frame 12, may be set to a fixed value rather than to P_R. In this case all three RGB color components of the current frame are extracted together, so that not only the red object images 34-37 but also the person 30 appear simultaneously on the display screen of Fig. 9B.
Fourth Embodiment
The fourth embodiment of the invention differs from the third embodiment in that the attribute value in the fourth embodiment is a rank (order) indicating the degree of approximation between a desired image pattern and the actual image. As a result of pattern matching, the more closely an image approximates the desired image pattern, the older the frame from which its data are read. Thus, a desired partial image contained in the original moving image can be displayed separately with a delayed timing.
Fig. 11 shows an original moving image virtually expressed as a box space according to the fourth embodiment. The current frame 20 contained in the box space 10 includes a first image 40. Suppose now that matching against an image pattern is computed and that the pattern approximates the first image 40; then, compared with the pixels of the other regions, the pixels composing the first image 40 have a higher rank of approximation to the image pattern. Accordingly, their corresponding data are read from a frame further back along the time axis, according to the approximation rank. Here, by going back along the time axis to time t2, the data are read from the position of the second image 42 in the frame having the time value t2. The cross section of the box space 10 holds the time value t2 only in the region of the second image 42, and holds the time value t1 in the other regions. The cross section therefore has a discrete width along the time-axis direction.
An image generating apparatus 50 according to the fourth embodiment has essentially the same structure as the apparatus shown in Fig. 3. The user specifies an image pattern via the instruction acquiring unit 72, and the decision processing unit 62 processes the matching between the image pattern and the frame image. As a result, the rank of approximation to the image pattern is detected pixel by pixel. The decision processing unit 62 determines, for each pixel according to its approximation rank, from which frame the data should be read.
The processing flow according to the fourth embodiment will be described with reference to Fig. 7. First, prior to step S100, the user specifies the image pattern against which matching is to be computed, and matching is computed between the current frame 12 and the image pattern, so that an approximation rank denoted by "s" is detected for each pixel. That is, for the pixels in an image region that approximates the image pattern, the approximation rank of that image region is set. Steps S100 to S104 are the same as in the second embodiment. In step S106, the read pointer T is determined according to the approximation rank "s". For example, the read pointer T is obtained by T = t − s(x, y). The subsequent steps are also the same as in the second embodiment.
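The mapping from approximation rank to read pointer, T = t − s(x, y), can be sketched as below. The patent does not specify the matching algorithm, so a per-pixel absolute difference against the pattern stands in for it here; the normalization and all names are invented for illustration.

```python
import numpy as np

def read_pointer_map(t, frame, pattern, max_back):
    """Per-pixel read pointers for the fourth embodiment: the closer a pixel
    is to the desired pattern, the higher its rank s and the older the frame
    it is read from (T = t - s)."""
    diff = np.abs(frame.astype(float) - pattern.astype(float))
    # rank s: max_back where the match is perfect, 0 where it is worst
    s = np.round((1.0 - diff / max(diff.max(), 1e-9)) * max_back).astype(int)
    return t - s
```

The returned array gives, for each pixel, the time value of the source frame from which its data are read in step S106.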
Fifth Embodiment
In the fifth embodiment, too, data are read from separate frames and composited according to a pixel attribute value. This attribute value differs from those of the third and fourth embodiments in that it is a value representing the degree of temporal change of an image region. For example, among the objects, a region that moves quickly or significantly has a large temporal image change, so its data are read from an older frame. Thus, the display of a region of the original moving image having a large image change can be delayed, so that the larger the image change, the more the display of that image region is delayed.
An image generating apparatus 50 according to the fifth embodiment has essentially the same structure as the apparatus shown in Fig. 3. The decision processing unit 62 detects, for each pixel, the degree of temporal change between a target frame and the frame immediately preceding it. The decision processing unit 62 determines from which frame the data are to be read according to this degree of change.
The processing flow according to the fifth embodiment will be described with reference to Fig. 7. In S106, the t-th frame and the (t − 1)-th frame are compared pixel by pixel, and a degree of change denoted by "c" is detected. The read pointer T is determined according to the degree of change "c". For example, the read pointer T is obtained by T = t − c(x, y). As the degree of change "c" increases, the time value representing how far back along the time axis the data are read increases.
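A minimal sketch of this motion-based read pointer follows; the absolute frame difference and its normalization to a maximum look-back are assumptions for illustration, since the patent leaves the exact measure of the change degree "c" open.

```python
import numpy as np

def read_pointer_by_motion(t, curr, prev, max_back):
    """Fifth-embodiment sketch: pixels that changed more between the previous
    frame and the current one are read from older frames (T = t - c)."""
    c = np.abs(curr.astype(float) - prev.astype(float))      # raw change degree
    c = np.round(c / max(c.max(), 1e-9) * max_back).astype(int)
    return t - c
```

Static pixels keep T = t (the current frame), while fast-moving pixels are pushed back toward older frames, producing the delayed display of moving regions described above.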
Sixth Embodiment
The sixth embodiment of the invention differs from the first embodiment in that the user can freely determine or define, for each scan line, from which frame the data are to be read, by using an interface on the screen. The sixth embodiment is described below by emphasizing its differences from the first embodiment.
Fig. 12 is a functional block diagram showing the structure of an image generating apparatus. The main difference between the image generating apparatus 50 shown in Fig. 12 and the image generating apparatus 50 according to the first embodiment shown in Fig. 3 is that the apparatus of Fig. 12 includes a setting input unit 80. The setting input unit 80 acquires, via the instruction acquiring unit 72 operated by the user, the input of setting values used to define the cross-sectional surface 14 of Fig. 1. In the sixth embodiment, as the function defining from which frame the data of each pixel row are read, t = t0 − y is predefined as the default setting. That is, this function defines how far into the past each frame is traced back. The setting input unit 80 sends to the display buffer 74 an image in which the relation defined by t = t0 − y between the time value t and the coordinate y of a pixel row is represented as a graph. The image data output unit 76 displays this image, generated by the setting input unit 80, on the display 78, thereby showing the relation between the time t and the coordinate y. While watching the graph displayed on the display 78, the user operates the instruction acquiring unit 72 and modifies the shape of the graph, changing the function t = t0 − y into another function. For example, the instruction acquiring unit 72 may be a touch panel attached to the display screen. In this case, values representing the positions at which the user presses the touch panel are input as the operation content.
The instruction acquiring unit 72 sends the user's operation content to the setting input unit 80 to change the graph displayed on the display 78. According to the operation content acquired from the instruction acquiring unit 72, the setting input unit 80 changes the function t = t0 − y, thereby setting a new function, and the newly set function is stored in the function memory 70. The decision processing unit 62 reads the function set by the setting input unit 80 from the function memory 70, and determines, for each pixel row according to this new function, from which frame the data are to be read. As a result, the box space 10 shown in Fig. 1 is cut by the curved surface defined by the function set via the setting input unit 80, and the image appearing on the cross section, rather than the current frame 12, is output as the actual frame. With this structure, the user can use the image generating apparatus 50 as an authoring tool, and can produce mysterious and unique images by freely changing the graph displayed on the screen.
Fig. 13 shows a monitor screen displaying the graph of the function determined via the setting input unit. First, a straight line 84 indicating the relation between the time t and the coordinate y in the function t = t0 − y is displayed on the setting screen 82. The user can change the straight line 84 into a Bezier curve 86. The Bezier curve 86 is a curve connecting a first end point 88 and a second end point 90, and its shape is determined by the positions of a first control point 96 and a second control point 98. The positions of the first control point 96 and the second control point 98 are determined by the user changing the positions and lengths of a first handle 92 and a second handle 94. If the function set via the setting input unit 80 is specified by the Bezier curve 86, the result is an image in which, for each pixel row, data read from frames near the current frame and data read from past frames are mixed together. For example, via the setting input unit 80, the user can specify a periodic curve, such as a sine curve, by means of the Bezier curve 86. Although in this embodiment the function is specified by the Bezier curve 86, as a modified example a structure may be provided in which the function is specified by another curve, such as a B-spline.
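The scan-line scheme above can be sketched as follows: the output frame is assembled row by row, with each row y taken from the frame selected by a user-editable delay function. This is an illustrative sketch; the function and names are invented, and `line_delay` stands in for whatever curve (straight line, Bezier, B-spline, sine) the user has drawn.

```python
import numpy as np

def slice_box_space(ring, t0, line_delay):
    """Build one output frame scan line by scan line.

    ring       : (N, H, W) circular buffer of frames; t0 indexes the newest
    line_delay : callable y -> delay; line_delay = lambda y: y reproduces the
                 default function t = t0 - y
    """
    n, h, w = ring.shape
    out = np.empty((h, w), dtype=ring.dtype)
    for y in range(h):
        src = (t0 - int(line_delay(y))) % n   # frame to read scan line y from
        out[y] = ring[src, y]
    return out
```

Passing a periodic function such as `lambda y: 2 * (1 + math.sin(y / 5))` would approximate the sine-curve setting mentioned above, mixing near-current and past rows in one frame.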
Seventh Embodiment
The seventh embodiment of the invention differs from the sixth embodiment in that the setting input unit 80 acquires, as one of the setting values, the coordinates of characteristic points in the current frame 12, and the function is defined by these characteristic point coordinates. The present embodiment is described below by emphasizing its differences from the sixth embodiment.
The instruction acquiring unit 72 is again a touch panel attached to the display 78. When the user presses the touch panel at desired positions and traces a circle on it, a plurality of successive values representing the coordinates of the contact points are sent to the setting input unit 80. From the coordinate values thus acquired, the setting input unit 80 recognizes the region enclosed by the plurality of contact points, and generates a function specifying the enclosed region, which is recorded in the function memory 70. According to the function read from the function memory 70, the data of the pixels contained in the enclosed region are read from a past frame, and the data of the pixels not contained in the enclosed region are read from the current frame 12. As a result, the box space 10 shown in Fig. 1 is cut by the curved surface defined by the coordinate function acquired via the setting input unit 80, and the image appearing on the cross section, rather than the current frame 12, is output as the actual frame. With this structure, the user can use the image generating apparatus 50 as an authoring tool, and can produce mysterious and unique images by specifying arbitrary regions on the touch panel.
Eighth Embodiment
The eighth embodiment of the present invention differs from the other embodiments in that a function is defined in advance in such a manner that a predetermined changing shape appears on the screen, and data are read from the frames determined according to this function. According to the eighth embodiment, the function is defined in advance so that a wave-like changing shape, such as a water ring (ripple), appears on the screen.
The decision processing unit 62 determines a characteristic point in the current frame 12. As in the sixth and seventh embodiments, the user specifies the characteristic point via a touch panel attached to the screen of the display 78, the touch panel serving as the instruction acquiring unit 72. The decision processing unit 62 determines the source frames and pixel coordinates so that the waveform of a water ring appears centered on the characteristic point. Here, a source frame is a frame from which data are to be read. For example, in order to display a perspective view of the water ring, suppose that circles are displayed along radial directions from the characteristic point; the decision processing unit 62 then uses gradually varying time values to determine the source frame for each radiating circle. The variation of these time values is defined so as to change cyclically, whereby the undulation of the water ring can be displayed. In addition, the decision processing unit 62 shifts the pixel coordinates to be read by a predetermined amount in a predetermined direction, whereby the refraction of light caused by the water ring can be displayed.
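One way to realize a cyclically varying time value along the radial direction is a sinusoidal delay map, sketched below. The sine formula, names, and parameters are illustrative assumptions; the patent only requires that the time value change cyclically with the radius.

```python
import numpy as np

def ripple_delay_map(h, w, cy, cx, wavelength, max_back):
    """Per-pixel frame delay that varies cyclically with distance from a
    characteristic point (cy, cx), so that reads from the circular buffer
    form concentric bands of different ages, like a water ring."""
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)                       # radial distance
    # cyclic gradation of the time value along the radial direction
    phase = (np.sin(2.0 * np.pi * r / wavelength) + 1.0) / 2.0
    return np.round(phase * max_back).astype(int)
```

Feeding this map as the per-pixel delay into the second embodiment's read-pointer step would make the concentric rings of old and new image data appear to ripple outward from the touched point.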
Ninth Embodiment
The ninth embodiment of the present invention differs from the seventh and eighth embodiments, in which a touch panel serves as the instruction acquiring unit 72 for inputting the characteristic point, in that the characteristic point is determined according to information contained in the current frame 12.
The decision processing unit 62 according to the ninth embodiment determines the characteristic point according to the pixel values of the pixels contained in the current frame 12. For example, an LED blinking at high speed is attached to an object so as to become part of the object, and the decision processing unit 62 identifies the blinking part by locating, in the successively input current frames 12, the region whose pixel value alternates intermittently between two values. The decision processing unit 62 determines the coordinates of the blinking part as the characteristic point coordinates. As a modified example, the decision processing unit 62 may determine the characteristic point using fixed coordinates. As another modified example, the decision processing unit 62 may determine a pixel to be a characteristic point if any one of the following factors falls within a predetermined range: the pixel value, the Z value, the rank of approximation to a desired pattern, and the change in the pixel value.
Tenth Embodiment
The tenth embodiment of the present invention differs from the other embodiments in that the image input unit 52 acquires not only the original moving image but also audio data. The audio data acquired by the image input unit 52 are input in synchronization with the original moving image and sent to the circular buffer 56. According to the frequency distribution of the audio data, changes in its volume, and the like, the decision processing unit 62 determines at least one of the source frame, the read timing, the alpha value, and the characteristic point. For example, when the change in volume of the audio data exceeds a threshold, the decision processing unit 62 may determine the source frame and read the pixels in such a manner that a shape appears on the screen as described in the eighth embodiment. Also, for example, if the change in volume exceeds a threshold in a certain frequency band, the decision processing unit 62 may determine the characteristic point of the eighth and ninth embodiments according to that frequency band.
Eleventh Embodiment
According to the eleventh embodiment of the invention, predetermined graphics are composited near the positions of pixels according to the attribute values of the pixels contained in a target frame. Here, the attribute value is a numerical value indicating the degree of temporal change of an image region. For example, a region of the objects that moves quickly or significantly is displayed in such a manner that objects in the form of particles appear to diffuse from the pixels with large temporal change toward their surroundings, giving an artistic and vivid expression. In this way, rendering effects can be produced around a main object in the original moving image, such as a moving region or a trajectory, for example a paper-snowfall-like effect displayed on the screen.
An image generating apparatus 50 according to the eleventh embodiment has a structure similar to that of the apparatus shown in Fig. 3. The decision processing unit 62 detects, for each pixel, the degree of temporal change of the pixel values of the image regions constituting the frames, between the current frame 12 and the frame temporally preceding the current frame 12. If the degree of change of a pixel exceeds a predetermined threshold, the decision processing unit 62 treats that pixel as the center on which the rendered objects act. If a plurality of pixels whose degree of change exceeds the threshold are arranged adjacent to one another, the pixel having the largest degree of change among them may be determined as the center, and a plurality of rendered objects may be displayed in a diffusing manner around that center. In addition, the decision processing unit 62 may determine the moving direction of the rendered objects from the pixel in the current frame 12 and the pixel in the preceding frame, each of which has the largest degree of change in its respective frame.
Fig. 14 is a flowchart showing the steps of generating the rendered objects. First, the current frame 12 is input as the processing target (S150). If the current frame 12 is not the first frame input after reproduction processing started (S152N), the change in pixel value between the current frame 12 and its temporally preceding frame is extracted (S154), the position where the pixel value change is largest is detected (S156), and the vector toward the position of largest pixel value change is determined as the moving direction of the rendered objects (S158). If the current frame 12 is the first frame input after reproduction processing started, no such preceding frame exists, so the processing of S154 to S158 is omitted (S152Y). The current frame 12 is stored separately so that it can be compared with the next frame (S160). Images to be displayed around the position detected in S156, serving as the center of the rendered objects, are generated (S162). The rendered objects thus generated are superimposed on the current frame 12 to render the picture to be displayed (S164). By repeating the processing of S150 to S164 until display is terminated (S166Y), the rendered objects are displayed while moving in the moving direction determined in S158.
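The core of steps S154 to S156 — finding the pixel whose value changed most between consecutive frames, which becomes the particle-effect center — can be sketched as follows. Names are invented and the absolute difference is an assumed change measure.

```python
import numpy as np

def rendering_anchor(curr, prev):
    """Locate the position of largest pixel-value change between the current
    frame and its predecessor (steps S154-S156 of Fig. 14); that position is
    the centre around which the rendered particle objects are generated."""
    change = np.abs(curr.astype(float) - prev.astype(float))
    y, x = np.unravel_index(np.argmax(change), change.shape)
    return (int(y), int(x)), float(change[y, x])
```

Tracking this anchor from frame to frame also yields the vector of step S158, i.e. the drift direction along which the rendered objects are moved.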
The above example described a structure in which the position where the rendered objects are displayed is determined according to the pixel value change relative to the preceding frame. As another example, the decision processing unit 62 may determine the position at which the rendering effect is applied according to color components, contours, brightness, Z values, movement trajectories, or the like. For example, the position where the rendering effect is produced may be determined according to the magnitude of pixel values, such as "the position containing the largest red component in the picture", or the contour line whose pixel value difference from other adjacent contours is largest within a single frame may be determined as the rendering position. Here, "the position where the rendering effect is to be produced" is hereafter also abbreviated to "the rendering position". Further, for example, a part of a "red contour" in which the difference between neighboring pixel values exceeds a threshold and the color component also exceeds a threshold may be determined as the rendering position. Also, a part whose brightness exceeds a threshold may be determined as the rendering position, and a part having a specific range of Z values may be determined as the rendering position. If a plurality of past frames within a fixed time period are stored, the trajectory (locus) of a characteristic point extracted according to a specific criterion can be detected, and the rendering effect can thus be produced along such a trajectory. As the rendering effect, the decision processing unit 62 may display linear objects, characters to which glittering colors or other characteristics have been applied, or objects such as symbols. In addition, the decision processing unit 62 may produce a rendering effect in which a characteristic region extracted from the current frame 12 is made semi-transparent and superimposed onto a past frame.
The decision processing unit 62 may determine the size and moving speed of the rendered objects to be displayed according to attribute values such as the coordinates, the Z value, the pixel value of each pixel, the rank of approximation to a desired image pattern, and the rate of change in pixel value. The alpha value used to composite the rendered objects may be a fixed value or may differ for each pixel. For example, the alpha value may be set according to attribute values such as the coordinates, the Z value, the pixel value of each pixel, the rank of approximation to a desired image pattern, and the degree of change in pixel value.
Fig. 15 is a flowchart showing the steps of applying the rendering effect to the current frame 12. First, according to an instruction from the user, the type of rendering effect to be applied to the current frame 12 is determined (S200). Then the current frame 12 on which the rendering effect is to be produced is input (S202). Next, the parts of the current frame 12 where the difference between adjacent pixel values exceeds a threshold are extracted as contours (S204), and the part with the largest pixel value difference is determined as the part where the rendering effect is to be produced (S206). The rendering effect is then applied to the determined part (S208), and the picture for displaying the image on which the rendering effect has been produced is rendered (S210). The above processing of S202 to S210 is repeated (S212N) until display is terminated (S212Y), so that the rendering effect is applied to successive current frames.
Twelfth Embodiment
According to the twelfth embodiment, the reproduced frame rate is changed locally according to changes in the attribute values of the pixels contained in a target frame. That is, according to the attribute values of the image regions constituting the two-dimensional image, the cross-sectional surface 14 changes over time at a different speed for each image region, so that the frame rate of the new moving image output from the image data output unit 76 changes locally. For example, for a part whose degree of temporal change in pixel value exceeds a threshold, the time interval at which data are read from the frames is lengthened, thereby lowering the rendered frame rate. A mysterious and unique image is thereby produced in which the faster a region actually moves, the more slowly the region displayed on the screen moves, so that parts of the objects in the moving image move at speeds different from the normal speed.
An image generating apparatus 50 according to the twelfth embodiment has essentially the same structure as the apparatus shown in Fig. 3. The decision processing unit 62 according to this embodiment may change the frame rate pixel by pixel, or may change it per object extracted according to its pixel values. The decision processing unit 62 may extract an object in such a manner that some pixels surrounding the object's pixels are included in the range concerned. In this case, parts such as the edge of the object, where the pixel values change gradually, can also be included in the object and processed together with it.
For a region whose frame rate is to be changed, the decision processing unit 62 determines the frame rate after the change. For example, the frame rate may be set according to the degree of temporal change of the pixel values of the region, and a lower value may be set as the frame rate of a region whose speed of change is larger. According to the frame rate thus determined, the decision processing unit 62 determines, for each pixel, the time interval between one source frame and the next. The decision processing unit 62 may also change the frame rate over time. For example, the decision processing unit 62 may first set the frame rate to a low rate and then gradually increase it, so that the region catches up with the display timing of the pixels surrounding it.
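The per-region read interval described above can be sketched as a simple threshold rule: regions that change faster are re-read at a longer interval, i.e. a lower effective frame rate. The threshold, factor, and names are illustrative values, not from the patent.

```python
import numpy as np

def region_read_interval(change_degree, base=1, slow_factor=3, threshold=0.5):
    """Twelfth-embodiment sketch: per-pixel interval (in frames) between one
    source frame and the next. Pixels whose temporal change exceeds the
    threshold get a longer interval, so fast-moving regions play back slowly."""
    interval = np.full(change_degree.shape, base, dtype=int)
    interval[change_degree > threshold] = base * slow_factor
    return interval
```

A renderer would advance each pixel's read pointer only every `interval[y, x]` output frames, realizing the locally reduced frame rate.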
As a modified example, a structure may be provided in which the user can set, via the instruction acquiring unit 72 shown in Fig. 12, whether the processing is performed in such a manner that the edge of the object is included in the object's range. The frame rate of the pixels having a predetermined range of Z values in the current frame 12 may be changed. The frame rate of the positions in the current frame having a predetermined rank of approximation to a desired image pattern may also be changed. In other words, the frame rate is controlled pixel by pixel.
Thirteenth Embodiment
According to the thirteenth embodiment of the present invention, for pixels in the target frame having predetermined attribute values, data are read from a frame preceding the target frame in time rather than from the target frame itself. For example, by reading data from an old frame for pixels whose values correspond to black, a vivid effect can be produced, as if the past were being viewed through a window of trimmed shape (trimming shape).
The image generating apparatus 50 according to the thirteenth embodiment has substantially the same structure as the apparatus shown in FIG. 3. The decision processing unit 62 according to this embodiment extracts from the current frame 12 a region having pixel values within a predetermined range, and at the same time determines the source frame for that region. The source frame may be a past frame obtained by going back a predetermined period along the time axis, or it may be determined according to attribute values such as the coordinates, the Z value, the pixel value of each pixel, the degree of approximation to a desired image pattern, the magnitude of change in pixel value, and so on.
As a modified example, the older a frame is, the more it may be gradated in the arrangement, so that old things can be emphasized and vividly represented. The decision processing unit 62 may also extract a region that includes some surrounding pixels; for example, a region corresponding to a mouth together with some pixels around the mouth may be extracted from a person's face, so that portions such as object edges, where the pixel values change gradually, are extracted as well. In the above embodiment, data are read from a temporally preceding frame. However, if temporally succeeding frames are also stored in the circular buffer 56, data may be read from those future frames.
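The "window to the past" selection of this embodiment can be sketched as follows; the predicate, the fixed lag, and all names are illustrative assumptions rather than the patent's implementation.

```python
def window_to_the_past(frames, t, is_window, lag=5):
    # Pixels satisfying a predicate (e.g. near-black, acting as a
    # trimming-shape window) are read from a frame `lag` steps in the past;
    # all other pixels come from the target frame itself.
    src_t = max(t - lag, 0)
    h, w = len(frames[t]), len(frames[t][0])
    return [[frames[src_t][y][x] if is_window(frames[t][y][x]) else frames[t][y][x]
             for x in range(w)] for y in range(h)]
```

With a negative `lag`, and future frames available in the buffer, the same sketch reads from temporally succeeding frames instead.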
Fourteenth Embodiment
In the fourteenth embodiment of the present invention, a pixel value is added to a pixel according to the change in the attribute values of the pixels contained in the target frame, thereby changing its color. For example, a staging effect is applied to the original image in such a manner that regions of the subject that move significantly are displayed in red.
The image generating apparatus 50 according to the fourteenth embodiment has substantially the same structure as the apparatus shown in FIG. 3. The decision processing unit 62 according to this embodiment adds a predetermined value to the pixel values of pixels, so that pixels in the current frame 12 with a large degree of change are displayed in red. Thereafter, for the pixels to which the predetermined value has been added, the added value is reduced gradually over time; as a result, an afterimage trailing a red tail can be displayed.
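The red highlight with a gradually decaying added value might be updated frame by frame as in the following sketch (one color channel only; the gain, decay factor, and threshold are assumed parameters, not values from the patent):

```python
def decaying_red_trail(change, prev_added, gain=80, decay=0.5, threshold=30):
    # change: per-pixel temporal change; prev_added: red offset added on the
    # previous frame. Strong motion re-triggers the full red offset, while
    # the old offset fades, leaving a red tail behind moving regions.
    h, w = len(change), len(change[0])
    added = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            carried = prev_added[y][x] * decay            # fade the old tail
            fresh = gain if change[y][x] > threshold else 0
            added[y][x] = max(carried, fresh)             # re-trigger on motion
    return added
```

The returned offsets would be added to the red channel of the current frame before display.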
As a modified example of the image generating apparatus 50, the structure may be such that the data of pixels that change violently over time are composited with a predetermined alpha value, in such a manner that a pixel remains on the screen even after it has been displayed. A pixel value may further be added to the composited data so that its image is displayed in a desired color. Thereafter, the alpha value of the composited data is reduced gradually over time; as a result, an afterimage trailing a tail can be displayed. As another modified example, the structure may be such that the data of pixels with a high degree of change are composited, using a predetermined alpha value, with a screen in which the alpha values of all pixels are set to zero, and the whole screen is displayed gradually. As yet another modified example, the pixel value added to each pixel may be varied, and the color of a pixel may be changed by adding pixel values to that pixel cumulatively.
Fifteenth Embodiment
In the fifteenth embodiment of the present invention, a desired object is composited with the target frame according to the attribute values of the pixels contained in a future frame. That is, among the frames included in the original moving images stored in advance, a granular object such as that in the eleventh embodiment is displayed in a region of the frame to be displayed that is close to a predetermined image pattern. A staging effect resembling an announcement can thereby be produced: for example, an image such as paper snow is displayed on the screen before the main character, serving as the subject, appears there.
The image generating apparatus 50 according to the fifteenth embodiment has substantially the same structure as the apparatus shown in FIG. 3. The decision processing unit 62 according to this embodiment detects, from the frames of the original moving images stored in advance in the circular buffer 56, a region of a temporally succeeding frame whose degree of approximation to a predetermined image pattern falls within a preset range, relative to the frame to be displayed. The decision processing unit 62 composites granular objects around the detected region. The method of combining and compositing the images is similar to that described in the eleventh embodiment.
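The pattern-triggered particle compositing could be reduced to a sketch such as the following, where a per-pixel match score stands in for the degree of approximation and the particle simply overwrites the frame; both simplifications, and all names, are assumptions.

```python
def announce_particles(frame, score, threshold=0.8, particle=255):
    # score: per-pixel degree of approximation to the desired image pattern
    # in a future frame; positions that match closely receive a granular
    # "particle" value, foreshadowing where the subject will appear.
    h, w = len(frame), len(frame[0])
    return [[particle if score[y][x] > threshold else frame[y][x]
             for x in range(w)] for y in range(h)]
```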
As a modified example, the compositing of objects may be applied to real-time moving images, in which the live images are captured in parallel with the current reproduction of the images. That is, moving images obtained immediately after shooting are temporarily stored in a buffer, and each frame is then reproduced at a timing delayed by a fixed time from the shooting. The predetermined image pattern is extracted from the current frame obtained immediately after shooting, while frames are reproduced at the timing delayed by the fixed time, so that a staging effect resembling an announcement can be produced.
The present invention has been described above only with reference to exemplary embodiments. Those skilled in the art will understand that various other modifications to the combinations of the above-described components and processes exist, and that such modifications are encompassed by the scope of the present invention.
In the second embodiment, the frame from which data are read is determined according to the Z value. In a modified example, however, a plurality of frames set at fixed time intervals serve as source frames, and the plurality of frames may be composited at ratios determined by the Z value. To restate, a source frame is a frame from which data are read. In this case, the alpha value is determined according to the Z value: a region of the subject with a relatively large Z value, that is, a region near the camera, may be set so that its alpha value is larger. In this case, regions nearer the camera are projected more clearly and vividly, while rapidly moving regions are displayed in a lingering manner, as if they were afterimages.
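This Z-dependent alpha compositing can be sketched as follows; the linear mapping of Z to alpha and all names are assumptions (the text only requires that nearer regions, here those with larger Z values, receive larger alpha).

```python
def composite_by_depth(frames, z, t, lag=3, z_near=200):
    # z: per-pixel depth-like values (larger = nearer the camera, as in the
    # text). Near pixels show the current frame clearly; far pixels keep a
    # ghost of an older frame, like an afterimage.
    old = frames[max(t - lag, 0)]
    h, w = len(z), len(z[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            alpha = min(z[y][x] / z_near, 1.0)    # nearer -> larger alpha
            out[y][x] = alpha * frames[t][y][x] + (1 - alpha) * old[y][x]
    return out
```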
In the third embodiment, the alpha value is set according to the pixel value. In another modified example, however, the source frame may be determined according to the pixel value. For example, when pixels having red components are extracted, the data of regions with more red components are read from older frames, so that the display of regions containing more red components is delayed further.
In the fourth embodiment, the source frame is determined according to the degree of approximation to a desired image pattern. In another modified example, however, the alpha value may be set according to the degree of approximation. In this case, regions closer to the image pattern are displayed more clearly and vividly, and regions that move quickly and significantly are displayed so that afterimages linger further. Furthermore, a plurality of different image patterns may be prepared in advance, and the particular pattern to which a region approximates may determine the source frame from which data are read; alternatively, it may determine the alpha value. The recognition of images may be not only recognition within each frame but also recognition of a gesture spanning a plurality of frames.
In the fifth embodiment, the source frame is determined according to the degree of temporal change in an image region. In another modified example, however, the alpha value may be set according to the degree of change. In this case, regions with greater change are displayed more clearly and vividly, and are also displayed in a manner that leaves afterimages.
In each of the first to fifteenth embodiments, the correspondence of pixels across a plurality of frames is determined by identical coordinates (x, y). According to another modification, however, the correspondence may be determined by shifting the coordinates of particular pixels, and whether such a shift should be made may be determined according to an attribute or the amplitude of the pixel.
In the second to fifth embodiments, the source frame or the alpha value is determined according to a single attribute value. In another modification, however, the source frame or the alpha value may be determined according to a plurality of attribute values among the Z value, the pixel value, the degree of approximation, and the degree of change. For example, after the source frame has been determined for a particular pixel according to its Z value, pattern matching may be computed between that frame and the current frame 12, and a plurality of frames may then be composited with alpha values corresponding to the degree of approximation. In this case, if the subject is nearer the camera, data are read from older frames, and in addition, regions that move significantly are displayed in a manner that leaves afterimages.
In each of the first to fifteenth embodiments, a structure is provided such that, of the frames included in the original moving images, the source frames corresponding to a specific period are stored in the circular buffer 56. In another modification, however, the image input unit 52 may read the frames determined by the decision processing unit 62 from original moving images compressed in MPEG format, and the buffer control unit 54 may cause those frames to be stored in the circular buffer 56. The buffer control unit 54 may also refer to frames preceding and following the frame in question.
Hereinafter, further modifications will be described.
1-1. In the first embodiment, a structure is provided such that data are read from a different frame for each pixel row. In this modified example 1-1, the structure may be such that several past frames are set as source frames, and data for each pixel row are read from any one of these set frames. For example, two frames A and B may be set as source frames, and data read from A or B according to whether the order of the pixel row is odd or even. Alternatively, six frames A, B, C, D, E, and F may be set as source frames, with the data of the 0th to 79th pixel rows read from frame A, the data of the 80th to 159th pixel rows read from frame B, and so on. In other words, the pixel rows are divided in units of 80 lines, and the data of each divided unit are read from a different past frame. When data are output to the screen from a different frame for each such unit of pixel rows, a striped or similar pattern appears over moving areas.
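Modified example 1-1 amounts to banding the pixel rows over a fixed cycle of source frames, roughly as in this sketch (the function name and row-band representation are assumptions):

```python
def rows_from_alternating_frames(sources, band=80):
    # sources: the past frames set in advance (e.g. frames A and B).
    # Rows are split into bands of `band` lines, and each band is read from
    # the next source frame in turn, giving a striped look over motion.
    h = len(sources[0])
    return [sources[(y // band) % len(sources)][y] for y in range(h)]
```

With `band=1` this reduces to the odd/even alternation between two frames A and B described above.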
1-2. In the second embodiment, a structure is provided such that data are read from different frames according to the Z value. In this modified example 1-2, the structure may be such that only the data of pixels having Z values within a predetermined range are read from a past frame. For example, at least one of an upper and a lower limit of the Z value is set in advance, together with one or more past frames from which data are to be read. For pixels whose Z values fall within the set range (above the lower limit and below the upper limit), data are read from the set past frame; for pixels whose Z values lie outside the set range, data are read from the current frame 12. The past frames set as source frames here may be fixed to one or more particular frames, or the past frame serving as the source frame may be determined according to the pixel coordinates.
1-3. In the third embodiment, a structure is provided such that the data of pixels having predetermined pixel values are read from a plurality of past frames and composited with the current frame 12. In this modified example 1-3, the structure may be such that, for pixels of the current frame 12 having predetermined pixel values, data are read from a predetermined past frame, and for other pixels, data are read from the current frame 12. The past frames serving as source frames may be set in a fixed manner, or the source frame may be a past frame determined according to the pixel value.
1-4. In the fourth embodiment, a structure is provided such that data are read from a past frame corresponding to the degree of approximation to a desired image pattern. In this modified example 1-4, the structure may be such that only the data of pixels whose degree of approximation to the desired image pattern falls within a predetermined range are read from a past frame, and the data of the other pixels are read from the current frame 12. As the range of the degree of approximation, at least one of its upper and lower limits may be set in advance. The past frames serving as source frames may be set in a fixed manner, or the source frame may be a past frame obtained by going back along the time axis according to the degree of approximation.
1-5. In the fifth embodiment, a structure is provided such that data are read from a past frame according to the change in pixel value. In this modified example 1-5, the structure may be such that only the data of pixels whose change in pixel value falls within a preset range are read from a past frame, and the data of the other pixels are read from the current frame 12. The past frames serving as source frames may be set in a fixed manner, or the source frame may be a past frame obtained by going back along the time axis according to the change in pixel value.
1-6. In the first embodiment, the function t = t0 − y is defined between the time value t and the pixel coordinate y. In this modified example 1-6, the relation between the time t and the pixel coordinate y may be defined using a trigonometric function, such as t = sin y. In the figure depicted in FIG. 13, pixel rows whose data are read from past frames further back along the time axis alternate periodically with other pixel rows whose data are read from newer frames. Furthermore, as in modified example 1-1, the configuration may be such that several past frames are set as source frames in advance, and the data of each pixel row are read from any one of these set frames according to the time value t.
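A trigonometric row-to-time mapping such as t = sin y can be sketched as below; the scaling by `depth` and the divisor 4 are assumptions added so that the result is a usable frame index, and are not values from the patent.

```python
import math

def source_time_by_sine(t0, y, depth=8):
    # Replaces the linear mapping t = t0 - y: the source-frame time for
    # pixel row y oscillates with y, so bands of rows come alternately
    # from older and newer frames, as in FIG. 13.
    return t0 - int(depth * (1 + math.sin(y / 4.0)) / 2)
```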
2-1. In the first embodiment, a structure is provided such that data are read from a past frame for each pixel row, and these data are arranged in the vertical direction to construct one frame. In this modified example 2-1, the structure may be such that the data read from a past frame for each pixel row are composited with the current frame 12 to form one frame. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
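The per-row alpha compositing of modified example 2-1 reduces to a weighted blend such as the following (the names and the per-row alpha list are assumptions):

```python
def blend_rows_with_current(current, past, alphas):
    # For each pixel row y, blend the row read from a past frame with the
    # current frame using that row's alpha value (alpha = weight of past).
    return [[alphas[y] * past[y][x] + (1 - alphas[y]) * current[y][x]
             for x in range(len(current[0]))] for y in range(len(current))]
```

A constant `alphas` list gives the fixed-alpha variant; computing each entry from the row's Z value, pixel value, or degree of change gives the per-row variants described in the text.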
2-2. In the second embodiment, a structure is provided such that data are read from different frames according to the Z value. In this modified example 2-2, the structure may be such that the data read from different frames according to the Z value are composited with the current frame 12 to produce one frame. Alternatively, only the data of pixels of the current frame 12 having Z values within a preset range are read from a past frame, and these data are composited with the current frame 12 to form one frame. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
2-3. In the third embodiment, a structure is provided such that the data of pixels having predetermined pixel values are read from a plurality of past frames and composited with the current frame 12. In this modified example, the structure may be such that, for pixels of the current frame 12 having predetermined pixel values, their data are read from a predetermined past frame and composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
2-4. In the fourth embodiment, a structure is provided such that data are read from a past frame corresponding to the degree of approximation to a desired image pattern. In this modified example 2-4, the structure may be such that the data read from the past frame corresponding to the degree of approximation to the desired image pattern are composited with the current frame 12. Alternatively, only the data of pixels whose degree of approximation to the desired image pattern falls within a preset range are read from a past frame, and these data are composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
2-5. In the fifth embodiment, a structure is provided such that data are read from a past frame, obtained by going back along the time axis, according to the change in pixel value. In modified example 2-5, the structure may be such that the data read from the past frame according to the change in pixel value are composited with the current frame 12. Alternatively, only the data of pixels whose change in pixel value falls within a preset range are read from a past frame, and these data are composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
2-6. In modified example 1-6, the relation between the time t and the pixel coordinate y is defined using a trigonometric function, such as t = sin y. As a further modification thereof, the data read from frames ranging from the present to the past, according to a function using a trigonometric function such as t = sin y, may be composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
2-7. In the sixth embodiment, a structure is provided such that data are read from frames corresponding to the Bezier curve 86 set by the user on the setting screen 82. In this modified example 2-7, the structure may be such that the data read from frames according to the Bezier curve 86 set by the user on the setting screen 82 are composited with the current frame 12. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
3-1. In this modified example, the structure may be such that two or more of the first to fifteenth embodiments, modified examples 1-1 to 1-6, and modified examples 2-1 to 2-7 are combined, and the two or more pieces of data read for each pixel are composited. The alpha value in this case may be a fixed value, or it may differ for each pixel row. For example, the alpha value may be set according to the coordinates, the Z value, the pixel value of each pixel, its degree of approximation to a desired image pattern, the change in its pixel value, and so on.
Although the present invention has been shown and described with reference to certain preferred embodiments thereof, those skilled in the art will understand that various modifications in form and detail may be made to the present invention without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (12)

1. An image generating method comprising:
for each on-screen position of an image contained in a target frame of original moving images, determining, according to its Z value, at least one frame among a plurality of frames contained in the original moving images, and reading, from the determined frame, data corresponding to said on-screen position;
compositing the read data based on alpha values; and
forming new moving images by sequentially outputting frames formed by said compositing based on alpha values.
2. The image generating method according to claim 1, wherein the read data are composited based on alpha values corresponding to numerical values representing degrees of approximation, said degrees of approximation being between an image contained in at least one of said plurality of frames and a desired image pattern.
3. The image generating method according to claim 1, wherein the read data are composited based on alpha values corresponding to pixel values, said pixel values being the pixel values of an image contained in at least one of said plurality of frames.
4. An image generating apparatus comprising an image memory, an image converting unit, and an image data output unit,
wherein said image memory sequentially records original moving images frame by frame; said image converting unit, for each on-screen position of an image contained in a target frame, determines, according to its Z value, at least one frame recorded in said image memory, reads from the determined frame data corresponding to said on-screen position, and composites these data based on alpha values; and said image data output unit sequentially outputs the frames reconstructed by said compositing based on alpha values.
5. The image generating apparatus according to claim 4, wherein said image converting unit determines, at predetermined time intervals, a plurality of frames as said at least one frame, and said image converting unit composites the plurality of frames, for each on-screen position, based on alpha values corresponding to its Z value.
6. The image generating apparatus according to claim 4, wherein said image converting unit determines, at predetermined time intervals, a plurality of frames as said at least one frame, and said image converting unit composites said plurality of frames, for each on-screen position, based on alpha values corresponding to numerical values representing the degree of temporal change.
7. The image generating apparatus according to claim 4, wherein, for each on-screen position of the image contained in said target frame, said image converting unit applies a staging effect according to a numerical value representing the degree of approximation between that position and a desired image pattern.
8. The image generating apparatus according to claim 4, wherein, for each on-screen position of the image contained in said target frame, said image converting unit applies a staging effect according to a numerical value representing the degree of temporal change at that position.
9. The image generating apparatus according to claim 4, wherein, for each on-screen position of the image contained in said target frame, said image converting unit applies a staging effect according to the pixel value at that position.
10. The image generating apparatus according to claim 4, wherein said image converting unit determines, at predetermined time intervals, a plurality of frames as said at least one frame for on-screen positions having Z values within a predetermined range.
11. The image generating apparatus according to claim 4, wherein said at least one frame is at least one of a frame temporally preceding said target frame and a frame temporally succeeding it.
12. The image generating apparatus according to claim 4, further comprising an image input unit for acquiring images shot by a camera and sending these images to said image memory as original moving images.
CN2007101025576A 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis Expired - Lifetime CN101123692B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2002311631 2002-10-25
JP311631/02 2002-10-25
JP326771/03 2003-09-18
JP2003326771A JP4114720B2 (en) 2002-10-25 2003-09-18 Image generation method and image generation apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB2003801019768A Division CN100346638C (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis

Publications (2)

Publication Number Publication Date
CN101123692A CN101123692A (en) 2008-02-13
CN101123692B true CN101123692B (en) 2012-09-26

Family

ID=35581912

Family Applications (2)

Application Number Title Priority Date Filing Date
CN2007101025576A Expired - Lifetime CN101123692B (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis
CNB2003801019768A Expired - Lifetime CN100346638C (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis

Family Applications After (1)

Application Number Title Priority Date Filing Date
CNB2003801019768A Expired - Lifetime CN100346638C (en) 2002-10-25 2003-10-22 Method and apparatus for generating new images by using image data that vary along time axis

Country Status (1)

Country Link
CN (2) CN101123692B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010103972A (en) * 2008-09-25 2010-05-06 Sanyo Electric Co Ltd Image processing device and electronic appliance
JP6532393B2 (en) 2015-12-02 2019-06-19 株式会社ソニー・インタラクティブエンタテインメント Display control apparatus and display control method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5459830A (en) * 1991-07-22 1995-10-17 Sony Corporation Animation data index creation drawn from image data sampling composites
EP0684059B1 (en) * 1994-05-24 1999-08-25 Texas Instruments Incorporated Method and apparatus for the display of video images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997018668A1 (en) * 1995-11-14 1997-05-22 Sony Corporation Device and method for processing image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5459830A (en) * 1991-07-22 1995-10-17 Sony Corporation Animation data index creation drawn from image data sampling composites
EP0684059B1 (en) * 1994-05-24 1999-08-25 Texas Instruments Incorporated Method and apparatus for the display of video images

Also Published As

Publication number Publication date
CN101123692A (en) 2008-02-13
CN1708982A (en) 2005-12-14
CN100346638C (en) 2007-10-31

Similar Documents

Publication Publication Date Title
JP6785282B2 (en) Live broadcasting method and equipment by avatar
US6396491B2 (en) Method and apparatus for reproducing a shape and a pattern in a three-dimensional scene
CN1254954C (en) Digital camera capable of image processing
Crook et al. Motion graphics: Principles and practices from the ground up
US8947448B2 (en) Image processing device, image data generation device, image processing method, image data generation method, and data structure of image file
JPH05501184A (en) Method and apparatus for changing the content of continuous images
KR100700262B1 (en) Method and apparatus for generating new images by using image data that vary along time axis
CN101682765B (en) Method of determining an image distribution for a light field data structure
US20130251267A1 (en) Image creating device, image creating method and recording medium
US6959113B2 (en) Arbitrary-shape image-processing device and arbitrary-shape image-reproducing device
EP0903695A1 (en) Image processing apparatus
CN101123692B (en) Method and apparatus for generating new images by using image data that vary along time axis
CN103858421B (en) Image processor and image treatment method
US7012623B1 (en) Image processing method and apparatus
JPH07200868A (en) Picture processing method and device
JP2006323450A (en) Simulation image generator, simulation image generation method, computation program, and recording medium recorded with program
JP2843262B2 (en) Facial expression reproduction device
JP2000182063A (en) Processing method/system for extracting data on three- dimensional object
CN110021222B (en) Multimedia content deduction system of electronic sand table projection display system
JP2575705B2 (en) Architectural perspective drawing animation creation device
Higgins The moviemaker's workspace: towards a 3D environment for pre-visualization
CN111652793B (en) Tooth image processing method, tooth image live device, electronic equipment and storage medium
JP3157015B2 (en) Image processing method and image processing apparatus
CA2466377A1 (en) Methods and apparatus for synthesizing a three-dimensional image signal and producing a two-dimensional visual display therefrom
JPS6327234Y2 (en)

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20120926

CX01 Expiry of patent term