CN102254308A - Method and system for computing interpolation of realistic scene - Google Patents

Method and system for computing interpolation of realistic scene

Info

Publication number
CN102254308A
Authority
CN
China
Prior art keywords
time
motion model
interpolation
texture
key time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011102126890A
Other languages
Chinese (zh)
Other versions
CN102254308B (en)
Inventor
戴琼海 (Dai Qionghai)
武迪 (Wu Di)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN 201110212689 (granted as CN102254308B)
Publication of CN102254308A
Application granted
Publication of CN102254308B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a system for computing interpolation of a realistic scene. The method comprises the following steps: 1, selecting a key time that can represent the motion information within two adjacent frame times; 2, mapping the deblurring results at different moments onto the motion model corresponding to the same key time, to obtain the texture structure of the motion model corresponding to that key time; 3, performing optical flow matching on the texture structure to obtain the temporal motion information of the same spatial texture features; and 4, performing temporal interpolation on the temporal motion information to obtain the changing texture sequence of the motion model. The advantage of the invention is that, starting from a low-speed input image sequence, realistic interpolation yields a highly realistic high-speed texture that changes with the motion of the scene.

Description

Method and system for computing interpolation of a realistic scene
Technical field
The present invention relates to the field of image reconstruction, and in particular to a method and system for computing interpolation of a realistic scene.
Background technology
Image-based motion interpolation methods, such as optical flow and motion-path-based methods, are widely used in computer graphics and computer vision. Consider the high-speed video deblurring (deblur) problem for a fast-moving model: the deblur result recovered from each long exposure is the scene texture at some moment within that exposure time, and each long exposure yields only one such short-exposure result. Because a low-speed camera has a low frame rate, the sharp images obtained are merely discrete samples of the motion model on the time axis, and a continuous high-speed video of the motion cannot be obtained. A feasible solution to this problem of low-speed cameras is therefore urgently needed.
Summary of the invention
In view of the above problems in the prior art, the present invention provides a method and system for computing interpolation of a realistic scene.
The present invention provides a method for computing interpolation of a realistic scene, comprising:
Step 1: selecting a key time, where the key time can characterize the motion information within two adjacent frame times;
Step 2: mapping the deblurring results at different moments onto the motion model corresponding to the same key time, to obtain the texture structure of the motion model corresponding to that key time;
Step 3: performing optical flow matching on the texture structure to obtain the temporal motion information of the same spatial texture features;
Step 4: performing temporal interpolation on the temporal motion information to obtain the changing texture sequence of the motion model.
In one example, the method further comprises Step 5: using the changing texture sequence of the motion model to perform three-dimensional texture mapping onto the corresponding motion model, obtaining a free-viewpoint rendering result.
In one example, the key time is the time at which the camera reads out data.
In one example, in Step 2, the deblurring results at different moments are the deblurring results at different moments within two consecutive frame times.
In one example, the key time lies within the interval [((N-1)/N) × T, T], where N is the frame rate and T is the frame time.
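For illustration only (this numeric sketch is not part of the patent text), the key-time window above follows directly from the frame rate N and frame time T; the Python below is a minimal sketch with hypothetical names.

```python
def key_time_window(frame_rate: float, frame_time: float) -> tuple:
    """Interval [((N-1)/N) * T, T] in which the key time lies,
    with N the frame rate and T the frame time as defined above."""
    n, t = frame_rate, frame_time
    return ((n - 1.0) / n * t, t)

# Example: at N = 25 fps with frame time T = 1/25 s, the key time falls in
# the last T/N of the frame, i.e. just before readout.
low, high = key_time_window(25.0, 1.0 / 25.0)
print(low, high)  # 0.0384 0.04
```

This is consistent with Fig. 1 below, where the key time is chosen at the camera readout moment at the tail of each frame.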
The present invention also provides a system for computing interpolation of a realistic scene, comprising:
a key time selection module, configured to select a key time that can characterize the motion information within two adjacent frame times;
a texture structure acquisition module, configured to map the deblurring results at different moments onto the motion model corresponding to the same key time, obtaining the texture structure of the motion model corresponding to that key time;
a temporal motion information acquisition module, configured to perform optical flow matching on the texture structure to obtain the temporal motion information of the same spatial texture features;
an interpolation module, configured to perform temporal interpolation on the temporal motion information to obtain the changing texture sequence of the motion model.
In one example, the system further comprises a free-viewpoint rendering module, configured to use the changing texture sequence of the motion model to perform three-dimensional texture mapping onto the corresponding motion model, obtaining a free-viewpoint rendering result.
Based on a low-speed input image sequence, the present invention performs realistic interpolation to obtain a highly realistic high-speed texture that changes with the motion of the scene, yielding a robust, accurate, and efficient realistic scene.
Description of drawings
The present invention is described in further detail below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the key time selection principle;
Fig. 2a and Fig. 2b are schematic diagrams of the multi-selection texture technique;
Fig. 3 is a schematic diagram of computing texture motion information from the optical flow mapping results;
Fig. 4 illustrates temporal deformation fusion, i.e. the temporal interpolation of the optical flow computation result of Fig. 3; the middle arrow marks the key time, and the arrows on either side of it are the texture deformation results after optical flow matching based on the key time;
Fig. 5 shows the result of Fig. 4 mapped onto the model corresponding to the key time;
Fig. 6 is a schematic flowchart of performing temporal interpolation on images to obtain a high-speed video.
Embodiment
The present invention uses a spatio-temporally coupled deformation mechanism to realize sharp computational reconstruction of high-speed video, involving the following key techniques: temporal key time determination, multi-selection texture, spatial optical flow matching, temporal deformation fusion, and 3D model texture mapping with free-viewpoint rendering.
The method for computing interpolation of a realistic scene provided by the present invention comprises:
1) choosing the temporal key times, where each key time can characterize the scene motion information within the two adjacent long-exposure acquisition windows;
2) mapping the deblur results at different times onto the motion model corresponding to the same key time, obtaining the fine texture structures of that motion model under different motion effects, denoted for example Ref_img_A and Ref_img_B;
3) performing optical flow matching on the texture structures of the same key time's motion model under different motion effects, finding the temporal motion information of the same spatial texture features, i.e. finding the correspondence between Ref_img_A and Ref_img_B and computing the motion vectors (see the sketch after this list);
4) performing temporal interpolation on Ref_img_A and Ref_img_B according to the motion vectors computed above, obtaining a temporally super-resolved changing texture sequence matched to the high-temporal-resolution motion model;
5) using the super-resolved changing texture sequence of the motion model to perform three-dimensional texture mapping onto the motion model at each high-temporal-resolution moment, thereby obtaining the free-viewpoint rendering result.
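The patent does not name a specific optical flow algorithm for step 3. As an illustration only, the sketch below uses OpenCV's dense Farneback flow to recover per-pixel motion vectors between the two texture structures; Ref_img_A and Ref_img_B follow the notation above, and everything else is an assumption.

```python
import cv2
import numpy as np

def match_textures(ref_img_a: np.ndarray, ref_img_b: np.ndarray) -> np.ndarray:
    """Step 3 sketch: dense optical flow from Ref_img_A to Ref_img_B,
    i.e. the motion vectors of the same spatial texture features."""
    gray_a = cv2.cvtColor(ref_img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(ref_img_b, cv2.COLOR_BGR2GRAY)
    # Farneback parameters: pyramid scale, levels, window size, iterations,
    # polynomial neighborhood size, polynomial sigma, flags.
    return cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

# flow = match_textures(ref_img_a, ref_img_b); flow[y, x] is the (dx, dy)
# displacement of the texture feature at (x, y) from Ref_img_A to Ref_img_B.
```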
The technical solution of the present invention is described in detail below with reference to Figs. 1-5 through an embodiment, but the present invention is not limited to the following embodiment.
Fig. 1 is a schematic diagram of the key time selection principle. As shown in the figure, the exposure time used to acquire an image and the duration of one image frame differ by a small interval, which is the camera readout time. To ensure that the key time takes the information of both the previous and the next frame into account, it is chosen as the moment corresponding to this readout time.
As shown in Fig. 2, the multi-selection texture technique maps both the low-speed image at the middle moment of the period [0, T] and the low-speed image at the middle moment of the period [T, 2T] onto the key time within [((N-1)/N) × T, T]; the result is the textures of different images mapped onto the model at the same moment.
Fig. 3 is a schematic diagram of computing texture motion information from the optical flow mapping results.
As shown in Fig. 4, temporal deformation fusion is the temporal interpolation of the optical flow computation result of Fig. 3; the middle arrow marks the key time, and the arrows on either side of it are the texture deformation results after optical flow matching based on the key time.
As shown in Fig. 5, the result obtained in Fig. 4 is mapped onto the model corresponding to the key time.
Fig. 6 describes the temporal interpolation algorithm for high-speed video: 1) first, the key time is determined, namely the moment marked by the red frame in the middle of image d; 2) multi-selection texture is performed, i.e. image a (at moment 3T/4) and image b (at moment 5T/4) are mapped onto the motion model corresponding to the key time, as shown in the images at the positions corresponding to image e (vertical direction); 3) an optical flow algorithm is used to compute the motion information of the texture features, the texture images are deformed based on this motion information, and weighted fusion is performed according to the temporal distance to image a and image b (weighting by distance), interpolating the intermediate moments to obtain, as in the figure, the changing texture sequence of the motion model at moments 7T/8, T, and 9T/8; 4) the resulting high-speed video is mapped onto the corresponding motion model (the corresponding result is not shown in the figure).
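To make step 3 of this flow concrete (deformation of the texture images along the flow, followed by distance-weighted fusion), here is a minimal Python sketch assuming linear motion between image a and image b; it illustrates the weighting principle described above, not the patent's exact implementation.

```python
import cv2
import numpy as np

def interpolate_texture(img_a, img_b, flow_ab, w):
    """Synthesize the texture at a fraction w of the way from img_a (w = 0)
    to img_b (w = 1): deform both images along the flow, then fuse them
    with weights given by their temporal distance to the target moment."""
    h, width = flow_ab.shape[:2]
    gx, gy = np.meshgrid(np.arange(width, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Backward warping under a linear-motion assumption: sample img_a at
    # positions displaced by -w * flow, img_b at +(1 - w) * flow.
    warped_a = cv2.remap(img_a, gx - w * flow_ab[..., 0],
                         gy - w * flow_ab[..., 1], cv2.INTER_LINEAR)
    warped_b = cv2.remap(img_b, gx + (1.0 - w) * flow_ab[..., 0],
                         gy + (1.0 - w) * flow_ab[..., 1], cv2.INTER_LINEAR)
    # Distance-based weighted fusion: the nearer source contributes more.
    return cv2.addWeighted(warped_a, 1.0 - w, warped_b, w, 0.0)

# With image a at moment 3T/4 and image b at 5T/4, the intermediate moments
# 7T/8, T, and 9T/8 correspond to w = 0.25, 0.5, and 0.75 respectively.
```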
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited thereto. Any person skilled in the art may make appropriate changes or variations within the technical scope disclosed by the present invention, and such changes or variations shall all fall within the scope of protection of the present invention.

Claims (10)

1. A method for computing interpolation of a realistic scene, characterized by comprising:
Step 1: selecting a key time, where the key time can characterize the motion information within two adjacent frame times;
Step 2: mapping the deblurring results at different moments onto the motion model corresponding to the same key time, to obtain the texture structure of the motion model corresponding to that key time;
Step 3: performing optical flow matching on the texture structure to obtain the temporal motion information of the same spatial texture features;
Step 4: performing temporal interpolation on the temporal motion information to obtain the changing texture sequence of the motion model.
2. The method as claimed in claim 1, characterized by further comprising Step 5: using the changing texture sequence of the motion model to perform three-dimensional texture mapping onto the corresponding motion model, obtaining a free-viewpoint rendering result.
3. The method as claimed in claim 1 or 2, characterized in that the key time is the time at which the camera reads out data.
4. The method as claimed in claim 1 or 2, characterized in that, in Step 2, the deblurring results at different moments are the deblurring results at different moments within two consecutive frame times.
5. The method as claimed in claim 1 or 2, characterized in that the key time lies within [((N-1)/N) × T, T], where N is the frame rate and T is the frame time.
6. A system for computing interpolation of a realistic scene, characterized by comprising:
a key time selection module, configured to select a key time that can characterize the motion information within two adjacent frame times;
a texture structure acquisition module, configured to map the deblurring results at different moments onto the motion model corresponding to the same key time, obtaining the texture structure of the motion model corresponding to that key time;
a temporal motion information acquisition module, configured to perform optical flow matching on the texture structure to obtain the temporal motion information of the same spatial texture features;
an interpolation module, configured to perform temporal interpolation on the temporal motion information to obtain the changing texture sequence of the motion model.
7. The system for computing interpolation of a realistic scene as claimed in claim 6, characterized by further comprising a free-viewpoint rendering module, configured to use the changing texture sequence of the motion model to perform three-dimensional texture mapping onto the corresponding motion model, obtaining a free-viewpoint rendering result.
8. The system for computing interpolation of a realistic scene as claimed in claim 6 or 7, characterized in that the key time is the time at which the camera reads out data.
9. The system for computing interpolation of a realistic scene as claimed in claim 6 or 7, characterized in that the deblurring results at different moments are the deblurring results at different moments within two consecutive frame times.
10. The system for computing interpolation of a realistic scene as claimed in claim 6 or 7, characterized in that the key time lies within [((N-1)/N) × T, T], where N is the frame rate and T is the frame time.
CN 201110212689 2011-07-27 2011-07-27 Method and system for computing interpolation of realistic scene Active CN102254308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110212689 CN102254308B (en) 2011-07-27 2011-07-27 Method and system for computing interpolation of realistic scene


Publications (2)

Publication Number Publication Date
CN102254308A true CN102254308A (en) 2011-11-23
CN102254308B CN102254308B (en) 2013-01-30

Family

ID=44981551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110212689 Active CN102254308B (en) 2011-07-27 2011-07-27 Method and system for computing interpolation of realistic scene

Country Status (1)

Country Link
CN (1) CN102254308B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1920886A (en) * 2006-09-14 2007-02-28 浙江大学 Video flow based three-dimensional dynamic human face expression model construction method
CN101271579A (en) * 2008-04-10 2008-09-24 清华大学 Method for modeling high-speed moving object adopting ring shaped low frame rate camera array

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANDREW J. PATTI, ET AL: "Superresolution Video Reconstruction with Arbitrary Sampling Lattices and Nonzero Aperture Time", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》, vol. 6, no. 8, 31 August 1997 (1997-08-31), pages 1064 - 1076, XP011026196 *
BYEONG-DOO CHOI, ET AL: "Motion-Compensated Frame Interpolation Using Bilateral Motion Estimation and Adaptive Overlapped Block Motion Compensation", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》, vol. 17, no. 4, 30 April 2007 (2007-04-30), pages 407 - 416, XP011179771 *
ZHOU ZHIHENG, ET AL: "Error Concealment Based on Adaptive Robust Optical Flow", 《JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY (电子与信息学报)》, vol. 28, no. 10, 31 October 2006 (2006-10-31), pages 1888 - 1891 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408035A (en) * 2016-10-31 2017-02-15 东南大学 Haptic representation sense of reality objective evaluation method based on human haptic perception feature
CN106408035B (en) * 2016-10-31 2019-05-31 东南大学 Haptic feedback sense of reality method for objectively evaluating based on manpower tactilely-perceptible characteristic
CN110648281A (en) * 2019-09-23 2020-01-03 华南农业大学 Method, device and system for generating field panorama, server and storage medium

Also Published As

Publication number Publication date
CN102254308B (en) 2013-01-30

Similar Documents

Publication Publication Date Title
Xu et al. Quadratic video interpolation
Tulyakov et al. Time lens++: Event-based frame interpolation with parametric non-linear flow and multi-scale fusion
US11195314B2 (en) Artificially rendering images using viewpoint interpolation and extrapolation
US10733475B2 (en) Artificially rendering images using interpolation of tracked control points
US11636637B2 (en) Artificially rendering images using viewpoint interpolation and extrapolation
US10726593B2 (en) Artificially rendering images using viewpoint interpolation and extrapolation
EP3216216B1 (en) Methods and systems for multi-view high-speed motion capture
CN101719264B (en) Method for computing visual field of multi-view dynamic scene acquisition
US9565414B2 (en) Efficient stereo to multiview rendering using interleaved rendering
CN108932725B (en) Scene flow estimation method based on convolutional neural network
CN102982518A (en) Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
CN102270339B (en) Method and system for deblurring of space three-dimensional motion of different fuzzy cores
CN106875437A (en) A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN106056622B (en) A kind of multi-view depth video restored method based on Kinect cameras
US10861213B1 (en) System and method for automatic generation of artificial motion blur
Cheng et al. A dual camera system for high spatiotemporal resolution video acquisition
Do et al. Immersive visual communication
CN102254308B (en) Method and system for computing interpolation of realistic scene
CN104159098B (en) The translucent edge extracting method of time domain consistence of a kind of video
CN111767679A (en) Method and device for processing time-varying vector field data
CN114202564A (en) High-speed target tracking method and system based on event camera
Li et al. GGRt: Towards Generalizable 3D Gaussians without Pose Priors in Real-Time
Yu et al. Racking focus and tracking focus on live video streams: a stereo solution
Zhu et al. Fused network for view synthesis
CN117474761A (en) Supersampling imaging method based on intra-pixel quantum efficiency measurement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant