CN100448271C - Video editing method based on panorama sketch split joint - Google Patents
Abstract
The present invention discloses a video editing method based on panorama stitching, used to edit or repair a segment of a motion video sequence. The method comprises the following steps: matching feature points across the frame images of the video sequence, computing the projection matrices between frames, and compositing the images to obtain a video panorama; editing the video panorama through human-computer interaction according to the specific requirements of the video editing task; and restoring the video sequence from the edited panorama based on the projection relations. By using the video panorama as an intermediate representation, the present invention converts video editing into image editing on the panorama, which not only presents the whole scene directly to the user, but also reduces the amount of computation and human interaction required.
Description
Technical field
The present invention relates to a method for processing video sequences, and specifically to a new method that synthesizes a video panorama and replaces conventional frame-by-frame video editing with image editing on that panorama.
Background technology
In traditional video editing, footage was shot and cut with video tape as the storage medium. Because material is stored sequentially on tape, completing an edit requires repeated searching and copying, with the selected material re-recorded in its new order onto another tape; this approach is referred to as linear editing. After digital technology matured, dedicated non-linear editing computers appeared, which can conveniently process material independently of its linear position on tape. In fact, an ordinary PC can also serve as a non-linear editing computer: since all material is captured to disk, content at any position on the timeline can be accessed at any time.
Non-linear editing greatly simplifies video editing work, but it remains tedious, because existing non-linear editors expand a video file frame by frame and edit at frame precision. Since the volume of video data is very large, frame-by-frame editing consumes a great deal of human interaction and computation.
Because an object in a video usually appears in many frames, modifying the video directly must be done frame by frame, with a large amount of repeated work. If an entire video segment is instead represented by a single image, that image is edited as required, and the video is then regenerated from the edited image, the manual workload is greatly reduced (the regeneration can even be completed fully automatically by the computer), computation time is saved, and efficiency is improved. The present invention therefore proposes a new video editing method based on panorama stitching.
Summary of the invention
The object of the present invention is to provide a video editing method based on panorama stitching that overcomes the defects of existing methods, which edit video frame by frame and are unintuitive, computationally expensive, and time-consuming, and to provide a method that can edit video content quickly and intuitively.
To achieve this object, the technical solution adopted by the present invention is as follows:
1. The present invention provides a video content editing method based on panorama stitching, used to edit a segment of a motion video sequence. The method comprises:
1) generating, from a plurality of video frames, a video panorama that describes the overall appearance of the motion video;
2) performing image content editing on the resulting video panorama;
3) inversely projecting the edited video panorama back into the coordinate system of each video frame to generate the edited video sequence.
2. Generation of the video panorama comprises the following steps:
1) performing global motion estimation on the relative global motion between the video frames to obtain the planar projection relations between the frame images;
2) if the motion video sequence contains moving objects, first removing them;
3) according to the planar projection relations between the frame images, taking the first frame as the reference frame, establishing the panorama coordinate system, projecting each frame image into this coordinate system, and estimating the size of the panorama;
4) according to the planar projection relations between the frames, computing, for each point on the panorama, its corresponding points in the several frame images, sorting these corresponding values, and taking the median as the value on the panorama, thereby constituting the video panorama.
3. The global motion estimation comprises:
1) a matching step: extracting corner points from each frame image and performing correlation matching to obtain an initial set of matched points;
2) a parameter estimation step: using RANSAC to reject erroneous matches from the initial matched point set, and estimating the perspective projection transformation parameters by least squares.
4. The moving object removal method comprises:
1) using the frame difference method to determine the approximate extent of the moving object;
2) using color-based region segmentation to divide the image into regions of distinct color;
3) combining the two with a graph cut formulation, using the segmentation result of the previous frame as a constraint, and solving the resulting optimization.
5. The method also comprises performing color and brightness correction on each frame image, to eliminate color differences caused by differing exposure and white balance during capture.
6. The panorama content editing methods comprise:
1) image transplanting: manually selecting a region, placing the information of that region into the region to be filled, and adjusting the colors of the transplanted information according to the information outside the filled region so that the filling appears natural;
2) image editing based on information propagation: using the known information surrounding the boundary of the region to be edited, "propagating" the gray-level information on the boundary into the region along the direction of minimum gradient;
3) semi-automatic filling of texture images: automatically filling textured regions.
7. Generation of the edited video sequence comprises the following steps:
1) computing the inverse projection matrix from the video panorama to the coordinate system of each video frame;
2) generating each video frame image from the video panorama according to the inverse projection matrix, completing the video editing process.
The beneficial effects of the present invention are:
1. The present invention converts the frame-by-frame editing of conventional video editing methods into a single edit of the synthesized video panorama, greatly reducing the human interaction and computation required for editing;
2. Because the user's editing is performed on the generated video panorama, it is more intuitive and accurate.
Description of drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of image transplanting.
Fig. 3 is a schematic diagram of the semi-automatic filling of a texture image.
Fig. 4 is an example of applying the present invention to editing the content of a video scene, in which (a) shows frames of the original video sequence, (b) the generated video panorama, (c) the result after editing the video panorama, and (d) the final result, obtained by inversely projecting the edited panorama back into the coordinate system of each video frame.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 shows the flow chart of video editing according to the present invention.
During the capture of a motion video, camera movement causes motion of the video image background; this form of motion is called global motion. In contrast, local motion refers to foreground motion caused by the movement of objects in the scene. Since a moving object may be rigid or non-rigid, a temporal frame difference method is adopted for the initial segmentation. The frame difference method rests on the assumption that the background is stationary or undergoes a single uniform global motion, while moving objects exhibit motion that departs from it. When the background has a uniform global motion, only the global motion parameters need to be obtained, after which the regions of moving objects that do not obey these parameters can be identified.
As shown in Fig. 1, in step 101, global motion estimation is performed on the relative global motion between the frames of the video sequence to obtain the global motion parameters. For the purposes of video editing, the global motion between every two adjacent frames is estimated; the global motion of each frame with respect to the aforementioned reference frame can then be computed by recursion, yielding the transformation parameters of each frame's coordinate system relative to the reference frame's coordinate system. Generally, the first frame of the scene is chosen as the reference frame.
The global motion between adjacent image frames can be characterized by a set of global motion parameters, and different parameter models may be selected according to the scene. In the present invention, in order to capture variation in scene depth, the eight-degree-of-freedom perspective transformation model is used.
The matrix form of the perspective transform can be expressed as

$$x' = H_p x, \qquad H_p = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$

which is the general form of a non-singular linear transformation under homogeneous coordinates. The perspective transformation matrix has 9 entries, but in homogeneous coordinates only their ratios are meaningful, so the transformation actually has 8 parameters. For every two adjacent frame images of the same video sequence, the correspondence of only four pairs of points is needed to obtain these parameters.
To solve for the transformation parameters, the present invention adopts a feature point matching algorithm, which comprises the following steps:
1. Corner points are extracted from the adjacent video frame images, for example Harris corners or SUSAN corners.
2. Correlation matching on the neighborhood information of the extracted corners yields a coarse matching result.
For a feature point x in frame n, the correlation window is set to (2n+1) × (2m+1), and the search region in frame n+1 is set to (2d_u+1) × (2d_v+1). The correlation coefficient ρ(x, x') between x and every point x' in the search region is computed as

$$\rho(x, x') = \frac{\mathrm{Cov}(x, x')}{\sigma(x)\,\sigma(x')}$$

where Cov(x, x') is the covariance of the two correlation windows, and σ(x) and E(x) are, respectively, the standard deviation and the mean of the correlation window centered at the point x = (u, v). The point x' with the largest correlation coefficient ρ(x, x') is selected as the best match; to guarantee the correctness of the match, a threshold T is also set, and the correlation coefficient of the best match must exceed this threshold.
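The correlation matching described above can be sketched in NumPy as follows. This is an illustrative sketch only: the function names, default window half-sizes, and threshold are our own choices, not values fixed by the invention.

```python
import numpy as np

def ncc(win_a, win_b):
    """Normalized cross-correlation of two equal-sized windows.
    Computes rho(x, x') = Cov(x, x') / (sigma(x) sigma(x'))."""
    a = win_a.astype(float).ravel()
    b = win_b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0
    return float((a * b).sum() / denom)

def match_corner(img_n, img_n1, pt, n=3, m=3, du=8, dv=8, T=0.8):
    """For corner `pt` = (u, v) in frame n, search a (2du+1)x(2dv+1)
    region in frame n+1 and return the best match whose correlation
    coefficient exceeds the threshold T."""
    u, v = pt
    ref = img_n[v - m:v + m + 1, u - n:u + n + 1]
    best, best_rho = None, T
    for dy in range(-dv, dv + 1):
        for dx in range(-du, du + 1):
            uu, vv = u + dx, v + dy
            if uu - n < 0 or vv - m < 0:
                continue
            cand = img_n1[vv - m:vv + m + 1, uu - n:uu + n + 1]
            if cand.shape != ref.shape:
                continue
            rho = ncc(ref, cand)
            if rho > best_rho:
                best, best_rho = (uu, vv), rho
    return best, best_rho
```

On a frame pair related by a pure translation, the best match is the translated corner with correlation close to 1.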
3. Step 2 may produce false matches; moreover, even when a match is correct, if the matched point lies on a moving object, the camera motion estimated from such points is still wrong. A means of verifying the matching result is therefore necessary to guarantee its robustness. The method adopted by the present invention is to use the perspective transformation matrix as a constraint and vote with RANSAC, removing from the coarse matching result those point pairs that do not conform to the global camera motion parameters.
For the RANSAC algorithm, see reference [1]: Fischler M.A. and Bolles R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 1981, Vol. 24: 381-395.
The RANSAC algorithm is briefly described as follows.
Suppose the homogeneous coordinates of a point x are (x₁, x₂, 1)ᵀ; the coordinates after projection through the perspective matrix are x' = H_p x, with components

$$x_1' = h_{11}x_1 + h_{12}x_2 + h_{13}$$
$$x_2' = h_{21}x_1 + h_{22}x_2 + h_{23}$$
$$x_3' = h_{31}x_1 + h_{32}x_2 + h_{33}$$

The Euclidean distance between the projected point and its matched corresponding point y = (y₁, y₂) is then

$$d = \sqrt{\left(\frac{x_1'}{x_3'} - y_1\right)^2 + \left(\frac{x_2'}{x_3'} - y_2\right)^2}$$
Suppose the number of point pair groups obtained by corner matching is P. The maximum matched group count P_max is initialized to 0, and the initial number of iterations N is set to 200.
a) Randomly pick 4 point pairs from P and compute the perspective matrix H_p from them.
b) Compute the distance of every matched pair to this model; if the distance is less than a threshold d, mark the pair as true, otherwise as false; record the number P_g of pairs marked true under the current model.
c) If P_max < P_g, set P_max = P_g, save all pairs currently marked true, and go to step d; otherwise, return to step a.
d) Recalculate the required number of iterations k_p = log(1 − z) / log(1 − T⁴), where T is the predicted prior probability that a match in the coarse matching result is correct and z is the desired confidence (the exponent 4 corresponds to the four point pairs sampled in step a; this is the standard RANSAC termination criterion).
e) If N > k_p, set N = k_p; then compare with the current total iteration count k: if k < N, set k = k + 1 and return to step a; otherwise, proceed to step f.
f) If P_max ≥ 4, proceed to step 4, where the optimal perspective matrix H_p satisfying the current model is solved by least squares over the recorded point pairs; the recorded pairs are exactly the result remaining after the RANSAC method has removed all gross-error points.
Using the perspective transform as the constraint in RANSAC not only eliminates erroneous points from the matching, but also adds the constraint of the parametric equations to the matching itself.
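A minimal sketch of the RANSAC voting of steps a)-f) follows, under simplifying assumptions: a fixed iteration count is used instead of the adaptive k_p update, and step f is reduced to a direct least-squares refit on the consensus set. All names are illustrative.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 8 perspective parameters (h33 = 1) from >= 4
    point correspondences by linear least squares."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                        rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pts):
    """Apply H to 2-D points and dehomogenize."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, d=2.0, iters=200, rng=None):
    """Vote with randomly sampled 4-point models; keep the largest
    consensus set, then refit on all of its inliers."""
    rng = rng or np.random.default_rng(0)
    best_mask, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_from_points(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        mask = err < d
        if mask.sum() > best_count:
            best_mask, best_count = mask, int(mask.sum())
    H = homography_from_points(src[best_mask], dst[best_mask])
    return H, best_mask
```

With exact correspondences plus a few gross outliers, the consensus set recovers exactly the clean pairs.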
4. Generally, the number of matched point pairs exceeds the 4 pairs required by the 8-degree-of-freedom perspective transformation matrix being sought, so the present invention solves this overdetermined linear system by least squares, as follows.
Suppose there are n groups of corresponding matched points x, x', and the corresponding perspective matrix is H_p. The least-squares solution H_p minimizes

$$J(H_p) = \sum \| H_p x - x' \|^2$$

Differentiating J with respect to H_p reduces to differentiating with respect to each of its elements. For example, taking the partial derivative with respect to h₁₁ and expanding gives

$$\sum 2\,(h_{11} x_1 + h_{12} x_2 + h_{13} x_3 - x_1')\, x_1 = 0$$

Taking the partial derivative with respect to each element of H_p in the same way and combining the results yields

$$H_p \sum x\, x^T = \sum x'\, x^T$$

Denoting by A and B the 3 × 3 matrices obtained by multiplying the 3 × 1 column vectors and 1 × 3 row vectors on the two sides, i.e. A = Σ x xᵀ and B = Σ x' xᵀ, we have H_p A = B, whose solution is

$$H_p = B A^{-1}$$
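The closed-form solve H_p = B A⁻¹ can be written down directly. This is a sketch of the formula above, using homogeneous points with x₃ = 1; it minimizes the algebraic error used in the derivation, and the function name is ours.

```python
import numpy as np

def solve_hp(pts, pts_prime):
    """Closed-form least-squares solve H_p = B A^{-1}, where
    A = sum(x x^T) and B = sum(x' x^T) over homogeneous points."""
    A = np.zeros((3, 3))
    B = np.zeros((3, 3))
    for x, xp in zip(pts, pts_prime):
        x = np.append(x, 1.0)[:, None]    # 3x1 column vector
        xp = np.append(xp, 1.0)[:, None]
        A += x @ x.T                      # accumulate 3x3 outer products
        B += xp @ x.T
    return B @ np.linalg.inv(A)
```

For an affine transform (third row [0, 0, 1]) the homogeneous scale is exactly 1, so the matrix is recovered exactly.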
As shown in Fig. 1, in step 102, moving objects present in the video sequence are removed. The present invention uses the frame difference method to determine the approximate extent of the moving object and, taking this as the initial value, defines an energy equation combining the correlations between pixels within a frame and between frames; the graph cut method is used to solve this energy equation and obtain the final segmentation result.
Step 101 introduced the method for obtaining the global camera motion parameters. Using these parameters, a frame image I_{i+1} can be projected into the coordinate system of its neighboring frame I_i, producing a new image I'_{i+1} whose background is stationary with respect to I_i. Suppose images I_i and I_{i+1} satisfy the projection relation P, i.e. x_i = P x_{i+1}, where x_i and x_{i+1} denote the coordinates of corresponding points on images I_i and I_{i+1} respectively. The computation of the new image I'_{i+1} is then:
1. Generate a blank image I'_{i+1} of the same size as the original image.
2. For each point x of I'_{i+1}, obtain its corresponding coordinate in the original image, x' = P⁻¹x (P⁻¹ being the inverse projection of P).
3. Since the resulting coordinate x' is usually not an integer position, interpolate with bilinear interpolation.
It should be noted that, because the extents of the two frame images are not identical, "blind areas" can appear when one frame image is projected into the coordinate system of another (the computed coordinates fall outside the image boundary); the pixel values of blind-area points are set to 0.
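Steps 1-3 can be sketched as follows, assuming (as above) that P maps frame i+1 coordinates to frame i coordinates, so each target pixel samples the source at x' = P⁻¹x. The function name and array layout are illustrative.

```python
import numpy as np

def warp_inverse(img, P, out_shape):
    """Warp `img` (frame I_{i+1}) into the coordinate system of I_i,
    sampling with bilinear interpolation; out-of-range ("blind area")
    pixels are set to 0."""
    H, W = out_shape
    ys, xs = np.mgrid[0:H, 0:W]
    ones = np.ones_like(xs)
    # For each target pixel x, find its source coordinate x' = P^{-1} x
    src = np.linalg.inv(P) @ np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    sx = (src[0] / src[2]).reshape(H, W)
    sy = (src[1] / src[2]).reshape(H, W)
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    h, w = img.shape
    valid = (x0 >= 0) & (y0 >= 0) & (x0 < w - 1) & (y0 < h - 1)
    x0c = np.clip(x0, 0, w - 2)
    y0c = np.clip(y0, 0, h - 2)
    out = ((1 - fx) * (1 - fy) * img[y0c, x0c] +
           fx * (1 - fy) * img[y0c, x0c + 1] +
           (1 - fx) * fy * img[y0c + 1, x0c] +
           fx * fy * img[y0c + 1, x0c + 1])
    out[~valid] = 0.0   # blind-area pixels
    return out
```

For a pure translation P, each target pixel simply fetches the shifted source pixel, and pixels whose source falls outside the image are zeroed.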
Ideally, when there is no moving object in the sequence, I'_{i+1} and I_i are the same image. If the sequence contains a moving object, I'_{i+1} and I_i will differ in the regions occupied by it. The moving object region in I_i can therefore be defined as

$$\{\, x_i \mid x_i \in I_i,\; |f(x_i) - f(x'_{i+1})| > T \,\}$$

where f(x_i) denotes the pixel value at position x_i in I_i, f(x'_{i+1}) is the pixel value at coordinate x'_{i+1} in the new image I'_{i+1} generated by projecting I_{i+1} into I_i, and T is a set threshold. When the difference at corresponding points is greater than this threshold T, the point is considered part of the moving object; otherwise it is regarded as a static background point.
In order to keep as much of the static background information in each frame image as possible, the information of a third frame can be added: the moving object in the current frame is estimated using both the preceding and the following frame images, and the above formula is revised to

$$\{\, x_i \mid x_i \in I_i,\; |f(x_i) - f(x'_{i+1})| > T \;\wedge\; |f(x_i) - f(x'_{i-1})| > T \,\}$$
Affected by noise and other factors, the result obtained directly from the above formula often contains erroneous isolated points or small regions; these can be cleaned up with simple morphological operators.
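The two-sided frame difference and the morphological clean-up can be sketched as follows. The plus-shaped structuring element and the threshold value are illustrative choices, not specified by the text.

```python
import numpy as np

def shift(mask, dy, dx):
    """Shift a boolean mask by (dy, dx), filling exposed borders with False."""
    out = np.zeros_like(mask)
    h, w = mask.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def erode(mask):
    out = mask.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        out &= shift(mask, dy, dx)
    return out

def dilate(mask):
    out = mask.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        out |= shift(mask, dy, dx)
    return out

def moving_object_mask(prev_w, cur, next_w, T=25):
    """Two-sided frame difference: a pixel is moving only if it differs
    from BOTH the previous and next frames warped into the current
    coordinate system; an opening (erode, then dilate) removes
    isolated false detections."""
    m = (np.abs(cur - prev_w) > T) & (np.abs(cur - next_w) > T)
    return dilate(erode(m))
```

A block that occupies different positions in the three frames is detected, while a single noisy pixel is removed by the opening.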
If the graph cut formulation takes individual pixels as nodes, the amount of computation is large and the efficiency of the algorithm suffers. Therefore, before applying the graph cut algorithm, the present invention first pre-segments the image with the mean shift method and takes each resulting region as a node of the graph. This not only reduces computation but, because mean shift locates the edges of each color region fairly accurately, also preserves the accuracy of the segmentation result.
For the mean shift algorithm, see reference [2]: Fukunaga K. and Hostetler L.D. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 1975, 21(1): 32-40.
The color segmentation algorithm based on mean shift has two main steps. First, the image undergoes mean shift filtering in the joint spatial-range domain; this filtering preserves discontinuities, assigns each pixel to the nearest mode in the joint domain, and replaces each pixel's original value with the three color components of the corresponding mode. Then, an iterative procedure merges the domains of attraction of modes lying within h_r/2 of each other in color space until convergence, finally yielding the segmented image.
The mean-shift-based color segmentation algorithm uses color information together with spatial position to divide the image into small regions. But the segmentation result carries no semantic knowledge, and moving objects cannot be distinguished from the background on its basis alone. We therefore use the graph cut method to relate the moving object region to the result of the color mean shift segmentation, so that temporal and spatial information can be combined and a more accurate segmentation obtained.
For the graph cut algorithm, see reference [3]: Yuri Boykov, Olga Veksler, Ramin Zabih. Efficient Approximate Energy Minimization via Graph Cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(11): 1222-1239.
As shown in Fig. 1, in step 103, panorama stitching uses the planar mosaic method: the plane of one frame is chosen as the reference, and using the planar projection model all other frames are projected onto that frame's plane to construct the panorama. Step 101 described in detail the method of estimating the global motion parameters by matching. Let x_i denote the coordinates of a point on the i-th frame image and P_{i,i+1} the perspective projection matrix between frames i and i+1; the inter-frame projection relation is expressed as

$$x_i = P_{i,i+1}\, x_{i+1}$$
Using transitivity between consecutive frames, the projection relation between each frame image and the first frame image can be obtained. From

$$x_1 = P_{1,2}\, x_2$$
$$x_2 = P_{2,3}\, x_3$$
$$\vdots$$
$$x_{n-1} = P_{n-1,n}\, x_n$$

the projection relation between every frame image and the first frame image follows:

$$x_1 = P_{1,2}\, x_2 = P_{1,2} P_{2,3}\, x_3 = P_{1,3}\, x_3 \quad (\text{where } P_{1,3} = P_{1,2} P_{2,3})$$
$$\vdots$$
$$x_1 = P_{1,2} P_{2,3} \cdots P_{n-1,n}\, x_n = P_{1,n}\, x_n \quad (\text{where } P_{1,n} = P_{1,2} P_{2,3} \cdots P_{n-1,n})$$

Choosing the first frame image as the reference frame, the projection relation between every frame image and the panorama is thus obtained.
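The accumulation of pairwise projections into P_{1,n} is a simple chain of matrix products, e.g.:

```python
import numpy as np

def chain_to_reference(pairwise):
    """Accumulate pairwise projections P_{i,i+1} into projections to the
    reference (first) frame: P_{1,n} = P_{1,2} P_{2,3} ... P_{n-1,n}."""
    Ps = [np.eye(3)]            # P_{1,1} is the identity
    for P in pairwise:
        Ps.append(Ps[-1] @ P)   # P_{1,i+1} = P_{1,i} P_{i,i+1}
    return Ps
```

For two translations, the composite is the summed translation applied to a frame-3 point.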
After the projection relations between each frame and the panorama coordinate system are determined, the size of the panorama must be computed: the projected positions of the four vertices of every frame image in the panorama coordinate system are calculated, with coordinates denoted (x, y). Comparing these coordinates yields x_max, x_min and y_max, y_min, and the size of the panorama can then be defined as W × H, where W = x_max − x_min and H = y_max − y_min.
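The size computation can be sketched as follows. The sketch also returns the offset (x_min, y_min), which is needed to place frames inside the panorama; that detail is implicit in the text, and the names are ours.

```python
import numpy as np

def panorama_extent(frame_shapes, Ps):
    """Project the four corners of every frame into the reference
    coordinate system and take the bounding box, giving the panorama
    size W = x_max - x_min, H = y_max - y_min."""
    xs, ys = [], []
    for (h, w), P in zip(frame_shapes, Ps):
        corners = np.array([[0, 0, 1], [w, 0, 1],
                            [0, h, 1], [w, h, 1]], float).T
        proj = P @ corners
        xs.extend(proj[0] / proj[2])   # dehomogenized x coordinates
        ys.extend(proj[1] / proj[2])
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return (x_max - x_min, y_max - y_min), (x_min, y_min)
```

Two same-sized frames, the second translated 15 pixels to the right, give a panorama 15 pixels wider than one frame.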
Because the perspective transform is a linear invertible transformation, the projection relation from a point on the panorama to each frame can be expressed as

$$x_n = P_{1,n}^{-1}\, x$$

where the matrix P_{1,n}⁻¹ is the inverse of P_{1,n}. Under this projection relation, the coordinates {x_n, y_n} on the n-th frame image of a pixel {x, y} on the panorama can be obtained. Because the resulting {x_n, y_n} may not be integers, the pixel value at that point can be computed by bilinear interpolation.
When the camera motion between consecutive frames of the video sequence is small, a point on the panorama generally corresponds to points on many video frame images. Suppose the number of corresponding video frames is M; the median of the pixel values of these M corresponding points can be taken as the pixel value of the point on the panorama.
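The median composition can be sketched with a masked stack of warped frames; the NaN-masking is our implementation choice, not part of the text.

```python
import warnings
import numpy as np

def median_blend(layers, masks):
    """Each panorama pixel may be covered by M frames; take the median
    of the M covering values, suppressing ghosting from residual
    moving objects. Uncovered pixels come out as 0."""
    stack = np.stack(layers).astype(float)
    cover = np.stack(masks)
    stack[~cover] = np.nan            # mark non-covering frames
    with warnings.catch_warnings():   # all-NaN columns warn harmlessly
        warnings.simplefilter("ignore")
        out = np.nanmedian(stack, axis=0)
    return np.nan_to_num(out)
```

A pixel covered by three frames takes the middle value; a pixel covered by one frame keeps that frame's value.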
As shown in Fig. 1, in step 104, image content editing is performed on the panorama generated by step 103, according to the specific requirements of the video editing task. Many image editing methods exist; in the present invention, the following three are adopted:
1. Image transplanting. Fig. 2 shows a schematic diagram of image transplanting: the image g of the selected region on the left is transplanted into the region Ω on the right, with Ω' denoting the inner boundary of Ω. Let the composite image sought after transplanting be u, the transplanted image be g, and the original image of the target region Ω be f.
In order to carry over the information of the transplanted image, the first-order differential of the composite image is made identical to that of the transplanted image, that is,

$$\nabla u = \nabla g \quad \text{in } \Omega$$

At the same time, the composite image after transplanting is also constrained on the boundary of the transplanted region: generally, to maintain the continuity of the image, u = f is required on Ω'.
These two constraints can be expressed with a single energy function J(u):

$$J(u) = \iint_{\Omega} |\nabla u - \nabla g|^2 \,dx\,dy + \lambda \int_{\Omega'} |u - f|^2$$

The u that minimizes this energy function J(u) is the composite image. Here λ (λ > 0) is a Lagrange multiplier that adjusts the relative weight of the two conditions in the overall constraint.
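A discrete sketch of minimizing J(u): instead of a finite λ, the boundary condition u = f is imposed exactly (the large-λ limit), which turns the interior condition into the Poisson equation Δu = Δg, solved here by Gauss-Seidel iteration. This is a simplification of the formulation above, with illustrative names.

```python
import numpy as np

def transplant(f, g, mask, iters=500):
    """Inside `mask`, solve the Poisson equation Delta u = Delta g
    (gradients copied from g) with u clamped to f outside, by
    Gauss-Seidel sweeps."""
    u = f.astype(float).copy()
    lap_g = np.zeros_like(g, dtype=float)
    lap_g[1:-1, 1:-1] = (g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:] +
                         g[1:-1, :-2] - 4.0 * g[1:-1, 1:-1])
    ys, xs = np.nonzero(mask)
    for _ in range(iters):
        for y, x in zip(ys, xs):
            u[y, x] = (u[y + 1, x] + u[y - 1, x] +
                       u[y, x + 1] + u[y, x - 1] - lap_g[y, x]) / 4.0
    return u
```

With f constant and g zero on the mask boundary, the solution is simply f + g inside the mask: the transplanted detail rides on the target's color level, which is exactly the seamless behavior the energy asks for.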
2. Image editing based on information propagation. The known information surrounding the boundary of the region to be edited is used, and the gray-level information on the boundary is "propagated" into the region along the direction of minimum gradient.
Let I₀(i, j): [0, M] × [0, N] → R denote an image of size M × N. The image repair algorithm obtains by iteration a series of images I(i, j, n): [0, M] × [0, N] × N → R satisfying I(i, j, 0) = I₀(i, j) and converging to the output image I_R(i, j). Its mathematical expression is written as

$$I^{n+1}(i, j) = I^n(i, j) + \Delta t \cdot I_t^n(i, j)$$

In this formula, n denotes the repair time, i.e. the iteration number; (i, j) denotes the pixel coordinates; Δt is the step size of each iteration; I_t^n(i, j) is the update applied to the image I^n(i, j); and I^{n+1}(i, j) is the result obtained from I^n(i, j) after one iteration under the constraint of I_t^n(i, j). The effective range of this equation is the interior of the manually specified region Ω to be modified. After n iterations the repaired image is obtained.
The key of the algorithm is to find a suitable I_t^n(i, j). In manual restoration, people slowly extend information from outside the damaged region into it along its outer boundary. When the computer imitates manual restoration, the same idea can be used, smoothly extending the information outside Ω into its interior. Suppose L^n(i, j) is the information to be propagated and $\vec{N}$ is the direction of propagation; the expression for I_t^n(i, j) is

$$I_t^n(i, j) = \overrightarrow{\delta L^n}(i, j) \cdot \vec{N}(i, j)$$

where $\overrightarrow{\delta L^n}(i, j)$ is the variation of the information L^n(i, j). With this equation, the information L^n(i, j) of the image can be estimated and its variation along the direction $\vec{N}$ computed. When the algorithm converges to a steady state, I^{n+1}(i, j) = I^n(i, j), that is, $\overrightarrow{\delta L^n} \cdot \vec{N} = 0$, meaning that the information L has been fully extended along the direction $\vec{N}$.
Because the information should diffuse into the image smoothly, L^n(i, j) is taken to be a smoothing operator; the Laplacian can be chosen, expressed as L^n(i, j) = ∇²I^n(i, j). Of course, other smoothing operators are also applicable.
Because the continuity of isophotes is always along the normal direction of the boundary, the normal direction of the boundary is selected as the direction of smooth information change $\vec{N}$: for each point (i, j) in Ω, the direction of $\vec{N}$ is perpendicular to the boundary at that point. The region to be restored is arbitrary, so the direction of $\vec{N}$ must be independent of the original image itself. If the direction of the isophote is taken for $\vec{N}$, the best choice of direction is exactly the isophote direction. For an arbitrary point (i, j), the gradient ∇I(i, j) is the direction of maximum change, so the direction perpendicular to the gradient, ∇⊥I(i, j), is the direction of minimum change. Defining $\vec{N}$ as the direction of the isophote, the expression for the direction vector is

$$\vec{N}(i, j) = \nabla^{\perp} I(i, j)$$
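A heavily simplified stand-in for the propagation iteration: the sketch below diffuses boundary information isotropically into the hole, omitting the isophote-direction term entirely, so it illustrates only the iterative structure I^{n+1} = I^n + Δt·I_t^n, not the full isophote-driven transport described above. All names are ours.

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=400):
    """Iteratively replace masked pixels with the average of their four
    neighbors (isotropic diffusion), converging to the harmonic fill
    with the known pixels as boundary values."""
    u = img.astype(float).copy()
    u[mask] = 0.0
    for _ in range(iters):
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u[mask] = avg[mask]   # update only the region to be repaired
    return u
```

A hole punched into a linear intensity ramp is refilled exactly, since a linear function is harmonic.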
3. Semi-automatic filling of texture images. The two methods above are aimed primarily at smooth image regions; texture-rich regions are handled by the semi-automatic filling of texture images. The undamaged part of the image serves as the sampling source, and the image is repaired block by block. The size of each "block" is defined as w × w, and the damaged region is divided by this standard into n blocks {B₁, B₂, ..., Bₙ}, which are then repaired in turn.
As shown in Fig. 3, for the current damaged block B_k to be repaired, a band of width w_B is taken in the known region around it, shown shaded in Fig. 3. For the damaged block B_k shown in the figure, the image on its right still belongs to the damaged part, so the information of the band region on the right is unknown, and the band taken consists of the left, top, and bottom sides. Likewise, for other damaged blocks, only the fully known sides among the four are considered. For an arbitrary sampling block B_(x,y) in the sample region (i.e. the undamaged part of the image; B_(x,y) denotes the block whose lower-left corner is the point (x, y)), a band of the same position and size is taken, as shown by the unshaded part around the damaged region in Fig. 3. The distance between the two bands is computed, which yields the set ψ_B of blocks whose distance from the current damaged block B_k is less than a given threshold. The set ψ_B is defined as

$$\psi_B = \{\, B_{(x,y)} \mid d(B_{(x,y)}, B_k) < d_{max} \,\}$$

where d_max is the given threshold. One block in the set ψ_B is selected at random, and every gray value in it is copied in turn into the current damaged block B_k. The remaining damaged blocks are processed in the same way (a repaired region may provide boundary constraints for the region to be repaired next). When the last value is determined, the repair of the entire image is complete.
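The construction of the candidate set ψ_B can be sketched as follows. The band layout follows Fig. 3's left/top/bottom case, the block coordinate (y, x) is the top-left corner rather than the lower-left, and the Euclidean (root-SSD) band distance is an assumption; all names are illustrative.

```python
import numpy as np

def band(img, y, x, w, wb):
    """Band of width wb on the left, top and bottom of the w-by-w block
    at (y, x); the right side is assumed still damaged, as in Fig. 3."""
    left = img[y - wb:y + w + wb, x - wb:x]
    top = img[y - wb:y, x:x + w]
    bottom = img[y + w:y + w + wb, x:x + w]
    return np.concatenate([left.ravel(), top.ravel(), bottom.ravel()])

def candidate_blocks(img, known, y0, x0, w, wb, d_max):
    """Build the candidate set psi_B: sample blocks whose surrounding
    band lies within distance d_max of the damaged block's band."""
    ref = band(img, y0, x0, w, wb)
    h_, w_ = img.shape
    out = []
    for y in range(wb, h_ - w - wb):
        for x in range(wb, w_ - w - wb):
            # skip candidates whose block or band touches damaged pixels
            if not known[y - wb:y + w + wb, x - wb:x + w + wb].all():
                continue
            d = np.sqrt(((band(img, y, x, w, wb) - ref) ** 2).sum())
            if d < d_max:
                out.append((y, x))
    return out
```

On a strictly periodic texture, blocks aligned with the damaged block's phase match with distance 0, while misaligned blocks are rejected.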
As shown in Fig. 1, in step 105, the video sequence is recovered from the panorama edited in step 104, according to the specific requirements of the video editing task.
The projection relation between each video frame image and the panorama is expressed as

$$x = P_{1,n}\, x_n$$

Because the perspective transform is a linear invertible transformation, it follows that

$$x_n = P_{1,n}^{-1}\, x$$

Under this projection relation, the coordinate on the panorama of each pixel on each video frame can be obtained; if the transformed point is not at an integer position, the pixel value at that point can be computed by bilinear interpolation.
Fig. 4 shows an example of applying the present invention to editing the content of a video scene, in which (a) shows frames of the original video sequence, (b) the generated video panorama, (c) the result after editing the video panorama, and (d) the final result, obtained by inversely projecting the edited panorama back into the coordinate system of each video frame.
Claims (4)
1, a kind of video editing method based on Panoramagram montage is characterized in that, the step of this method is as follows:
1) generates a video panorama of describing the sport video overall picture with a plurality of frame of video;
2) video panorama that obtains is carried out the picture material editor;
3) return each frame of video coordinate system by the contrary projection of the video panorama behind the editor, generate the video sequence after editing;
The video panorama generation comprises the following steps:
(1) performing global motion estimation on the relative global motion between the video frames, obtaining the planar projection relation between the video frame images;
(2) if the motion video sequence contains moving objects, first removing them;
(3) according to the planar projection relation between the video frame images, taking the first frame image as the reference frame, establishing the panorama coordinate system, projecting each video frame image into this coordinate system, and estimating the size of the panorama;
(4) according to the planar projection relation between the video frames, computing for each panorama point its corresponding points in the plurality of video frame images, sorting these corresponding values and taking the median as the value on the panorama, thus forming the video panorama;
The panorama content editing methods comprise:
(1) image cloning: manually selecting a region, placing the information of that region into the region to be filled, and adjusting the color of the cloned information according to the information outside the filled region, so that the fill appears natural;
(2) image editing based on information propagation: using the known information around the boundary of the region to be edited, "propagating" the grayscale information on the boundary into the region along the direction of minimum gradient;
(3) semi-automatic texture filling: automatically filling textured regions;
The edited video sequence generation comprises the following steps:
(1) computing the inverse projection matrix from the video panorama to the coordinate system of each video frame;
(2) generating each video frame image from the video panorama according to the inverse projection matrix, completing the video editing process.
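The median-blend step (4) of claim 1 can be sketched compactly once the frames have already been warped into panorama coordinates. This is a minimal illustration, not the patent's code; the array layout and mask convention are our assumptions.

```python
import numpy as np

def median_blend(warped, masks):
    """Per-pixel median over the frames that cover each panorama location.
    warped: (N, H, W) frames already projected into panorama coordinates;
    masks:  (N, H, W) booleans, True where a frame actually covers a pixel."""
    stack = np.where(masks, warped, np.nan)   # exclude uncovered pixels
    return np.nanmedian(stack, axis=0)        # median is robust to outliers
```

Taking the median (rather than the mean) is what makes the blend robust: a moving object that covers a pixel in only a minority of frames is discarded as an outlier.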
2. The video editing method based on panorama stitching according to claim 1, characterized in that the global motion estimation comprises:
1) a matching step: extracting the corner points of each video frame image and performing correlation matching to obtain an initial set of matched points;
2) a parameter estimation step: using RANSAC to reject erroneous matches from the initial matched point set, and estimating the transformation parameters under perspective projection by least squares.
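The reject-then-refine pattern of claim 2 can be illustrated on a toy motion model. The patent estimates a full perspective homography; the sketch below uses a 2-D translation instead, purely to keep the RANSAC hypothesize/score loop and the final least-squares refinement short and readable. All names here are illustrative.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iter=100, thresh=2.0, seed=0):
    """Estimate a 2-D translation from matched points containing outliers:
    RANSAC rejects mismatches, then least squares refines on the inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts_a), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                        # model from a minimal sample
        inliers = np.linalg.norm(pts_a + t - pts_b, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # least-squares refinement over the inlier set (for a translation: the mean)
    t = (pts_b[best] - pts_a[best]).mean(axis=0)
    return t, best
```

For the perspective case the same loop samples four correspondences per iteration and fits a 3x3 homography, but the hypothesize/score/refine structure is identical.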
3. The video editing method based on panorama stitching according to claim 1, characterized in that the moving object removal method comprises:
1) using the frame difference method to determine the approximate extent of the moving object;
2) using color-based region segmentation to divide the image into regions of different colors;
3) combining the two with a graph-cut method, with the segmentation result of the previous frame as a constraint on the optimization.
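Step 1) of claim 3 is the simplest part of that pipeline and can be sketched directly; the segmentation and graph-cut refinement of steps 2)-3) then clean up this coarse mask. A minimal sketch, assuming the two frames have already been globally aligned:

```python
import numpy as np

def frame_difference_mask(frame_a, frame_b, thresh=20):
    """Coarse moving-object mask: pixels whose absolute difference between
    two aligned consecutive frames exceeds a threshold."""
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    return diff > thresh
```

The cast to a signed integer type avoids wrap-around when subtracting unsigned 8-bit images.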
4. The video editing method based on panorama stitching according to claim 1, characterized in that the method further comprises performing color and brightness correction on each video frame image, to eliminate color differences caused by different exposure and white balance at capture time.
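One common way to realize the correction of claim 4 is a per-frame multiplicative gain estimated over the region where a frame overlaps a reference frame. This is a much-simplified stand-in for a full color/white-balance correction, shown only to make the idea concrete:

```python
import numpy as np

def exposure_gain(frame, reference, overlap):
    """Single multiplicative gain that aligns a frame's exposure to a
    reference frame over their overlap region (a boolean mask)."""
    return reference[overlap].mean() / frame[overlap].mean()
```

Multiplying the frame by the returned gain before blending removes the brightness jump at stitch seams; per-channel gains extend the same idea to white-balance differences.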
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2007100707436A CN100448271C (en) | 2007-08-10 | 2007-08-10 | Video editing method based on panorama sketch split joint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101119442A CN101119442A (en) | 2008-02-06 |
CN100448271C true CN100448271C (en) | 2008-12-31 |
Family
ID=39055353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2007100707436A Expired - Fee Related CN100448271C (en) | 2007-08-10 | 2007-08-10 | Video editing method based on panorama sketch split joint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100448271C (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102426705B (en) * | 2011-09-30 | 2013-10-30 | 北京航空航天大学 | Behavior splicing method of video scene |
US9712746B2 (en) | 2013-03-14 | 2017-07-18 | Microsoft Technology Licensing, Llc | Image capture and ordering |
GB2512621A (en) | 2013-04-04 | 2014-10-08 | Sony Corp | A method and apparatus |
CN104092998B (en) * | 2014-07-18 | 2018-04-06 | 深圳英飞拓科技股份有限公司 | A kind of panoramic video processing method and its device |
CN104537659B (en) * | 2014-12-23 | 2017-10-27 | 金鹏电子信息机器有限公司 | The automatic calibration method and system of twin camera |
CN104966063A (en) * | 2015-06-17 | 2015-10-07 | 中国矿业大学 | Mine multi-camera video fusion method based on GPU and CPU cooperative computing |
CN105894443B (en) * | 2016-03-31 | 2019-07-23 | 河海大学 | A kind of real-time video joining method based on improved SURF algorithm |
JP6953961B2 (en) * | 2017-09-27 | 2021-10-27 | カシオ計算機株式会社 | Image processing equipment, image processing methods and programs |
CN108198181B (en) * | 2018-01-23 | 2019-12-27 | 电子科技大学 | Infrared thermal image processing method based on region segmentation and image fusion |
CN108319958A (en) * | 2018-03-16 | 2018-07-24 | 福州大学 | A kind of matched driving license of feature based fusion detects and recognition methods |
KR20240050468A (en) * | 2019-01-18 | 2024-04-18 | 스냅 아이엔씨 | Systems and methods for template-based generation of personalized videos |
CN110717430A (en) * | 2019-09-27 | 2020-01-21 | 聚时科技(上海)有限公司 | Long object identification method and identification system based on target detection and RNN |
CN113962867B (en) * | 2021-12-22 | 2022-03-15 | 深圳思谋信息科技有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Non-Patent Citations (4)
Title |
---|
A fully automatic and robust image stitching and blending algorithm. Zhao Xiangyang, Du Limin. Journal of Image and Graphics, Vol. 9, No. 4. 2004 * |
Panorama stitching technology for video sequences. Zhu Yunfang et al. Journal of Image and Graphics, Vol. 11, No. 8. 2006 * |
Also Published As
Publication number | Publication date |
---|---|
CN101119442A (en) | 2008-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100448271C (en) | Video editing method based on panorama sketch split joint | |
JP4074062B2 (en) | Semantic object tracking in vector image sequences | |
US7609888B2 (en) | Separating a video object from a background of a video sequence | |
CN110853026B (en) | Remote sensing image change detection method integrating deep learning and region segmentation | |
US7636128B2 (en) | Poisson matting for images | |
Radke | Computer vision for visual effects | |
CN104616286B (en) | Quick semi-automatic multi views depth restorative procedure | |
Vicente et al. | Balloon shapes: Reconstructing and deforming objects with volume from images | |
CN104715451B (en) | A kind of image seamless fusion method unanimously optimized based on color and transparency | |
CN101459843B (en) | Method for precisely extracting broken content region in video sequence | |
CN101930614B (en) | Drawing rendering method based on video sub-layer | |
Zhang et al. | Critical regularizations for neural surface reconstruction in the wild | |
CN103955945B (en) | Self-adaption color image segmentation method based on binocular parallax and movable outline | |
CN102096915B (en) | Camera lens cleaning method based on precise image splicing | |
CN103279961A (en) | Video segmentation method based on depth recovery and motion estimation | |
CN105701515A (en) | Face super-resolution processing method and system based on double-layer manifold constraint | |
CN104159098A (en) | Time-domain consistent semi-transparent edge extraction method for video | |
Woodford et al. | On New View Synthesis Using Multiview Stereo. | |
Saunders et al. | Dyna-dm: Dynamic object-aware self-supervised monocular depth maps | |
CN104835161B (en) | A kind of global image editor transmission method and system | |
CN112164009B (en) | Depth map structure repairing method based on two-layer full-connection condition random field model | |
Liu et al. | Recent development in image completion techniques | |
Lai et al. | Surface-based background completion in 3D scene | |
Fleishman et al. | Video operations in the gradient domain | |
CN106355642A (en) | Three-dimensional reconstruction method, based on depth map, of green leaf |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2008-12-31; Termination date: 2010-08-10