CN103051915A - Production method and production device for interactive three-dimensional video key frames - Google Patents

Production method and production device for interactive three-dimensional video key frames

Info

Publication number
CN103051915A
Authority
CN
China
Legal status
Granted
Application number
CN201310013059XA
Other languages
Chinese (zh)
Other versions
CN103051915B (en)
Inventor
戴琼海 (Dai Qionghai)
李振尧 (Li Zhenyao)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201310013059.XA
Publication of CN103051915A
Application granted
Publication of CN103051915B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses a production method and a production device for interactive three-dimensional video key frames. The production method comprises the following steps: an image sequence is obtained and a key frame in the sequence is selected; after the key frame is denoised, a K-means clustering algorithm is invoked to merge similar pixels in the key frame into regions, and the region information is recorded; the assigned weights of each region are computed, and a GraphCut algorithm is invoked with these weights to segment the key frame into regions, obtaining a region segmentation result; image dilation and erosion operations are applied to the segmentation result to construct a trimap of the key frame, and a Bayesian matting algorithm is invoked to obtain a refined segmentation result; the image foreground and background regions are constructed from the refined segmentation result and assigned depths separately, and the foreground and background depth maps are merged to obtain the key frame depth map, which is output. The production method reduces the cost of key frame production while achieving high production speed and accuracy.

Description

Production method and production device for interactive three-dimensional video key frames
Technical field
The present invention relates to the field of computer image processing, and in particular to a production method and a production device for interactive three-dimensional video key frames.
Background technology
Three-dimensional video, as an important form of expression for current film and television works, is widely regarded as the main future direction of the film and television industry. Three-dimensional video offers a clearly layered, colorful display with strong visual impact, leaving a deep impression on viewers. Moreover, its display is more realistic: scenes and characters appear lifelike, giving viewers a strong sense of immersion, and it has high artistic value. Because three-dimensional video has these characteristics that planar (2D) video lacks, it has broad market prospects and commercial value in fields such as terminal display, robot navigation, aerospace, military training, medical education, and gaming media.
Three-dimensional video is an important mode of expressing visual information, particularly in fields such as computer vision, image/video processing, and pattern recognition. Methods for producing three-dimensional video have long fallen into three categories. The first is direct shooting with a stereo camera. This requires professional stereo capture equipment and a complete post-processing pipeline, and is costly. In addition, the stereo camera must be calibrated to match its different viewpoints during shooting, which constrains the shooting environment and camera motion; these factors greatly restrict the popularization of direct stereo shooting. The second is producing three-dimensional video with 3D modeling software. This can generate three-dimensional video of any scene, but requires professionals to spend great effort modeling the scenes and objects, at daunting financial and time cost. The third is 2D-to-3D conversion technology, which converts planar video directly into three-dimensional video. Its cost is far lower than that of the first two approaches, and any existing planar video can be converted into a corresponding three-dimensional video. Given the abundant resources of existing planar video, converting it to three-dimensional video not only provides a better viewing experience but also advances the popularization of stereoscopic display technology, as represented by stereoscopic televisions and cinemas.
Current 2D-to-3D conversion based on human-computer interaction can be divided into three stages: key frame production, non-key-frame diffusion, and depth-image-based rendering. In the key frame production stage, high-precision key frame depth maps are generated by introducing manual operations on the key frames. In the non-key-frame diffusion stage, a depth propagation algorithm diffuses the key frame depth maps to the non-key frames, yielding depth maps for the whole planar video sequence. In the rendering stage, a depth-image-based rendering (DIBR) algorithm generates the corresponding three-dimensional video from the planar video sequence and its depth maps. Among the three stages, key frame production is the first and the most important: only when the key frame depth maps are sufficiently accurate can diffusion produce accurate non-key-frame depth maps and, in turn, high-quality three-dimensional video.
Key frame production can be divided into two stages: image segmentation and depth assignment. Traditional image segmentation methods perform a 0-1 binary segmentation, in which each pixel in the image belongs either to the foreground or to the background. The segmentation edges obtained by such methods are rather rigid; in particular, when objects occlude one another or fine structures such as hair edges appear in the image, edge errors become more obvious. Inaccurate segmentation directly causes inaccurate depth assignment, degrades the quality of the key frame depth map, and ultimately produces visible flaws such as jittering object edges in the generated three-dimensional video. These flaws cause viewer discomfort and, to some extent, limit the popularization of 2D-to-3D conversion.
Summary of the invention
The purpose of the present invention is to solve at least one of the above technical deficiencies.
To this end, one object of the present invention is to propose a production method for interactive three-dimensional video key frames. The method reduces the production cost of three-dimensional video, improves the speed and accuracy of key frame production, eliminates the edge jitter and viewer discomfort caused by traditional production methods, and improves the efficiency of 2D-to-3D conversion. Another object of the present invention is to propose a production device for interactive three-dimensional video key frames.
To achieve the above objects, an embodiment of one aspect of the present invention proposes a production method for interactive three-dimensional video key frames, comprising the following steps. S1: obtain an image sequence and select a key frame in the sequence; after denoising the key frame, invoke a K-means clustering algorithm on the denoised key frame to merge similar pixels into regions, and record the region information. S2: compute the assigned weights of each region, and invoke a GraphCut algorithm with these weights to segment the key frame into regions, obtaining a region segmentation result. S3: apply image dilation and erosion operations to the region segmentation result to construct a trimap of the key frame, and invoke a Bayesian matting algorithm to obtain a refined segmentation result. S4: construct the image foreground and background regions from the refined segmentation result, assign depth to each separately, and merge the foreground and background depth maps to obtain and output the key frame depth map.
According to the production method for interactive three-dimensional video key frames of the embodiment of the invention, the image segmentation operation is refined step by step to quickly generate a high-precision segmentation result, and depth assignment of foreground and background can be carried out conveniently to obtain a high-quality key frame depth map. This reduces the production cost of three-dimensional video, improves the speed and accuracy of key frame production, and improves the efficiency of 2D-to-3D conversion.
In one embodiment of the invention, step S1 further comprises: denoising the key frame with a Gaussian filtering algorithm.
In one embodiment of the invention, the region-assigned weights comprise region connection weights and region mark weights, and the region mark weights comprise foreground mark weights and background mark weights.
In one embodiment of the invention, step S3 further comprises the following steps: read in the region segmentation result, convert it into a single-channel eight-bit mark image, and save at least two copies of the mark image; apply an image dilation operation to one of the copies and an image erosion operation to another; construct the trimap of the key frame from the dilation and erosion results; and, taking the key frame image and the trimap as input parameters, invoke a Bayesian matting algorithm to obtain the refined segmentation result, which is saved in the form of an alpha channel.
In one embodiment of the invention, step S4 further comprises: performing the depth assignment by drawing or by model, wherein the depth assignment models comprise a single depth model, a linear gradient depth model, and a spherical gradient depth model.
A further embodiment of the present invention proposes a production device for interactive three-dimensional video key frames, comprising: a preprocessing module for obtaining an image sequence, denoising a selected key frame in the sequence, invoking a K-means clustering algorithm on the denoised key frame to merge similar pixels into regions, and recording the region information; a region segmentation module for computing the assigned weights of each region and invoking a GraphCut algorithm with these weights to segment the key frame into regions, obtaining a region segmentation result; a refined segmentation module for applying image dilation and erosion operations to the region segmentation result to construct a trimap of the key frame, and invoking a Bayesian matting algorithm to obtain a refined segmentation result; and a depth assignment module for constructing the image foreground and background regions from the refined segmentation result, assigning depth to each separately, and merging the foreground and background depth maps to obtain and output the key frame depth map.
According to the production device for interactive three-dimensional video key frames of the embodiment of the invention, the region segmentation module and the refined segmentation module perform step-by-step refined image segmentation, quickly generating a high-precision segmentation result; depth assignment of foreground and background can be carried out conveniently to obtain a high-quality key frame depth map. This reduces the production cost of three-dimensional video, improves the speed and accuracy of key frame production, and thereby improves the efficiency of 2D-to-3D conversion.
In one embodiment of the invention, the preprocessing module denoises the key frame with a Gaussian filtering algorithm.
In one embodiment of the invention, the region-assigned weights comprise region connection weights and region mark weights, and the region mark weights comprise foreground mark weights and background mark weights.
In one embodiment of the invention, the refined segmentation result is saved in the form of an alpha channel.
In one embodiment of the invention, the depth assignment module performs the depth assignment by drawing or by model, wherein the depth assignment models comprise a single depth model, a linear gradient depth model, and a spherical gradient depth model.
Additional aspects and advantages of the present invention will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present invention.
Description of drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the production method for interactive three-dimensional video key frames according to an embodiment of the invention;
Fig. 2 is a flow chart of the preprocessing process for the interactive three-dimensional video key frame according to the invention;
Fig. 3 is a block diagram of the region segmentation process for the interactive three-dimensional video key frame according to the invention;
Fig. 4 is a block diagram of the refined segmentation of the interactive three-dimensional video key frame according to the invention;
Fig. 5 is a block diagram of the depth assignment process for the interactive three-dimensional video key frame according to the invention;
Fig. 6 is a block diagram of the production device for interactive three-dimensional video key frames according to an embodiment of the invention; and
Fig. 7 is a flow block diagram of the technical scheme of the production device for interactive three-dimensional video key frames according to the invention.
Embodiment
Embodiments of the invention are described in detail below, and examples of the embodiments are shown in the drawings, in which identical or similar labels throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting it.
The disclosure below provides many different embodiments or examples for realizing different structures of the present invention. To simplify the disclosure, the components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. In addition, the invention may repeat reference numerals and/or letters in different examples; this repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. Furthermore, the invention provides examples of various specific processes and materials, but those of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials. In addition, a structure described below in which a first feature is "on" a second feature may include embodiments in which the first and second features are formed in direct contact, as well as embodiments in which another feature is formed between them, so that the first and second features may not be in direct contact.
In the description of the invention, it should be noted that, unless otherwise specified and limited, the terms "mounted", "joined", and "connected" should be understood broadly; for example, a connection may be mechanical or electrical, may be internal to two elements, and may be direct or indirect through an intermediary. For those of ordinary skill in the art, the specific meaning of the above terms can be understood according to the circumstances.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and drawings. The description and drawings specifically disclose some particular implementations of embodiments of the invention to indicate some ways of implementing the principles of those embodiments, but it should be understood that the scope of embodiments of the invention is not thereby limited. On the contrary, embodiments of the invention include all changes, modifications, and equivalents that fall within the spirit and scope of the appended claims.
The production method and production device for interactive three-dimensional video key frames proposed by the embodiments of the invention are described below with reference to the drawings.
As shown in Fig. 1, the production method for interactive three-dimensional video key frames proposed by the embodiment of the invention comprises the following steps:
Step S1: obtain an image sequence and select a key frame in the sequence; after denoising the key frame, invoke a K-means clustering algorithm on the denoised key frame to merge similar pixels into regions, and record the region information.
In one embodiment of the invention, step S1 further comprises: denoising the key frame with a Gaussian filtering algorithm. This weakens the influence of image noise on the K-means clustering algorithm.
Specifically, step S1 is the preprocessing process of the production method for interactive three-dimensional video key frames, and further comprises the steps shown in Fig. 2:
Step S201: obtain the image sequence. Key frames are selected manually; the frames in the image sequence at which a shot cut occurs or an object's motion changes critically can be selected as key frames.
Step S202: image denoising. The selected key frame of the image sequence is denoised to weaken the influence of image noise on subsequent algorithms. In one embodiment of the invention, a Gaussian filtering algorithm performs the denoising operation.
Steps S203 to S205 constitute the K-means clustering algorithm. Its effect is to cluster the pixels in the image according to their five-dimensional (x, y, r, g, b) coordinates (pixel position coordinates and color space coordinates), laying the groundwork for the region generation of subsequent step S206.
Step S203: set the initial cluster centers. In one example of the invention, the image is divided into several rectangular blocks of fixed length and width, and the mean of the five-dimensional coordinates of all pixels in each block is computed as an initial cluster center. If the length/width of the image cannot be divided exactly by the length/width of a block, the remainder after division is used as the length/width of the last block.
Step S204: cluster the pixels according to their five-dimensional coordinates. For each pixel in the image, compute the five-dimensional distance between the pixel and each cluster center within its search range, and assign it to the class of the nearest cluster center.
The five-dimensional distance is computed as follows:
dist_color = sqrt((R_p - R_s)^2 + (G_p - G_s)^2 + (B_p - B_s)^2)
dist_pos = sqrt((X_p - X_s)^2 + (Y_p - Y_s)^2)
dist_ps = sqrt(dist_color^2 + dist_pos^2)
min_dist_p = min_s dist_ps
where (X_p, Y_p, R_p, G_p, B_p) and (X_s, Y_s, R_s, G_s, B_s) are the five-dimensional coordinates of pixel p and cluster center s respectively; dist_color is the color space distance between pixel p and cluster center s; dist_pos is their positional distance; dist_ps is their five-dimensional distance; and min_dist_p is the minimum five-dimensional distance from pixel p to any cluster center s.
Then update the cluster center information: collect the pixels included in each class and compute the mean of their five-dimensional coordinates as the new cluster center coordinates.
Step S205: determine whether the clustering termination condition is reached. First compute, over all pixels in the image, the sum of the minimum five-dimensional distances to the cluster centers:
total_dist = Σ_p min_dist_p
where total_dist is the sum of the minimum five-dimensional distances of all pixels. Let total_dist_current be the distance sum obtained in the current iteration and total_dist_previous the distance sum obtained in the previous iteration; the termination condition holds when either of the following two inequalities holds:
total_dist_previous - total_dist_current ≤ dist_threshold
iter_num > max_iter
where dist_threshold is a given threshold, iter_num is the current iteration count, and max_iter is the maximum number of iterations. If the termination condition does not hold, return to step S204 and continue iterating; if it holds, enter the region generation phase of step S206.
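Steps S203 to S205 can be sketched as the following minimal five-dimensional K-means loop. This is an illustration, not the patented implementation: concrete values for dist_threshold and max_iter are not specified in the text, and for simplicity every center is searched rather than only those in a local search range.

```python
import math

def kmeans_5d(pixels, centers, dist_threshold=1.0, max_iter=20):
    """Cluster (x, y, r, g, b) pixels around initial centers (steps S204-S205).

    pixels  : list of 5-tuples (x, y, r, g, b)
    centers : list of 5-tuples, the initial cluster centers from step S203
    Returns (labels, centers) once the termination test of step S205 holds.
    """
    prev_total = float("inf")
    labels = [0] * len(pixels)
    for iter_num in range(1, max_iter + 1):
        total_dist = 0.0
        # Assignment: each pixel joins its nearest center in 5-D space.
        for i, p in enumerate(pixels):
            best, best_d = 0, float("inf")
            for j, s in enumerate(centers):
                d_pos = math.hypot(p[0] - s[0], p[1] - s[1])
                d_col = math.sqrt((p[2] - s[2]) ** 2
                                  + (p[3] - s[3]) ** 2
                                  + (p[4] - s[4]) ** 2)
                d = math.hypot(d_col, d_pos)          # dist_ps
                if d < best_d:
                    best, best_d = j, d
            labels[i] = best
            total_dist += best_d                       # accumulates min_dist_p
        # Update: each center moves to the 5-D mean of its members.
        for j in range(len(centers)):
            members = [p for i, p in enumerate(pixels) if labels[i] == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
        # Termination test of step S205.
        if prev_total - total_dist <= dist_threshold:
            break
        prev_total = total_dist
    return labels, centers
```

With two well-separated pixel groups and one initial center per group, the loop converges after the second pass, since the distance sum stops decreasing.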
Step S206: region generation. First, a four-neighborhood region growing algorithm converts each cluster into connected regions, and the number of pixels in each region is counted. If a region's pixel count exceeds a given upper threshold, two or more cluster centers are set within it and the K-means clustering algorithm is invoked again to split the region into two or more subregions; if a region's pixel count is below a given lower threshold, the region is merged into the neighboring region nearest in the five-dimensional space. Finally, each region's average five-dimensional coordinates and the inter-region adjacency information are recorded.
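The four-neighborhood region growing of step S206 can be sketched as a flood fill over the per-pixel cluster-label map; the split/merge post-passes driven by the pixel-count thresholds, whose exact values the text leaves open, would follow afterwards and are omitted here.

```python
from collections import deque

def grow_regions(label_map):
    """Convert a per-pixel cluster-label map into 4-connected regions.

    label_map : 2-D list of cluster ids, one per pixel.
    Returns a same-shaped map of region ids: two pixels share a region id
    only if they share a cluster id AND are 4-connected.
    """
    h, w = len(label_map), len(label_map[0])
    region = [[-1] * w for _ in range(h)]
    next_id = 0
    for y in range(h):
        for x in range(w):
            if region[y][x] != -1:
                continue
            # Breadth-first flood fill over the 4-neighborhood.
            cluster = label_map[y][x]
            region[y][x] = next_id
            queue = deque([(y, x)])
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and region[ny][nx] == -1
                            and label_map[ny][nx] == cluster):
                        region[ny][nx] = next_id
                        queue.append((ny, nx))
            next_id += 1
    return region
```

Pixels of the same cluster that are not spatially connected correctly end up in different regions, which is the point of this step.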
Step S2: compute the assigned weights of each region, and invoke the GraphCut algorithm with these weights to segment the key frame into regions, obtaining a region segmentation result.
The region-assigned weights comprise region connection weights and region mark weights, and the region mark weights comprise foreground mark weights and background mark weights.
Specifically, step S2 is the region segmentation process for the interactive three-dimensional video key frame, and further comprises the steps shown in Fig. 3:
Step S301: compute the region connection weights. Connection weights are not considered for spatially non-adjacent regions; for spatially adjacent regions, the connection weight is computed as follows:
diff_ab = sqrt((R_a - R_b)^2 + (G_a - G_b)^2 + (B_a - B_b)^2)
weight_ab = 1 / (diff_ab + 1)
where (R_a, G_a, B_a) is the average color of region a; (R_b, G_b, B_b) is the average color of adjacent region b; diff_ab is the color difference between regions a and b; and weight_ab is the connection weight between regions a and b.
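The connection weight of step S301 is a direct transcription of the two formulas above; similar average colors give a large weight (the regions are expensive to separate in the cut), dissimilar colors a small one.

```python
import math

def connection_weight(color_a, color_b):
    """Connection weight between two spatially adjacent regions (step S301).

    color_a, color_b : (R, G, B) average colors of the two regions.
    """
    diff_ab = math.sqrt(sum((ca - cb) ** 2 for ca, cb in zip(color_a, color_b)))
    return 1.0 / (diff_ab + 1.0)
```

The +1 in the denominator keeps the weight finite when two adjacent regions happen to have identical average colors.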
Step S302: input the manual marks. The operator can mark foreground objects with the left mouse button and background objects with the right mouse button. If the number of foreground mark points or background mark points exceeds a clustering threshold cluster_num, the K-means algorithm clusters them into cluster_num classes and the cluster center of each class is taken as a final mark point; if the number of mark points is less than or equal to cluster_num, no clustering is performed.
Step S303: compute the region mark weights. For each region in the image, compute its weights with respect to the foreground marks and the background marks. The foreground mark weight is computed as follows:
diff_color = (R_k - R_s)^2 + (G_k - G_s)^2 + (B_k - B_s)^2
diff_pos = (X_k - X_s)^2 + (Y_k - Y_s)^2
diff_ks = diff_color + diff_pos
fore_diff_k = min_s diff_ks
fore_weight_k = 1 / fore_diff_k
where (X_k, Y_k, R_k, G_k, B_k) and (X_s, Y_s, R_s, G_s, B_s) are the five-dimensional coordinates of region k and foreground mark point s respectively; diff_color is the color space distance between region k and foreground mark point s; diff_pos is their positional distance; diff_ks is their five-dimensional distance; fore_diff_k is the minimum five-dimensional distance from region k to any foreground mark point; and fore_weight_k is the foreground mark weight of region k.
The background mark weight is computed as follows:
diff_color = (R_k - R_t)^2 + (G_k - G_t)^2 + (B_k - B_t)^2
diff_pos = (X_k - X_t)^2 + (Y_k - Y_t)^2
diff_kt = diff_color + diff_pos
back_diff_k = min_t diff_kt
back_weight_k = 1 / back_diff_k
where (X_k, Y_k, R_k, G_k, B_k) and (X_t, Y_t, R_t, G_t, B_t) are the five-dimensional coordinates of region k and background mark point t respectively; diff_color is the color space distance between region k and background mark point t; diff_pos is their positional distance; diff_kt is their five-dimensional distance; back_diff_k is the minimum five-dimensional distance from region k to any background mark point; and back_weight_k is the background mark weight of region k.
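The foreground and background mark weights of step S303 share one formula, differing only in which mark set is passed in; the following sketch computes both with one function. Note that, per the formulas above, these distances are squared sums without a square root. The zero-distance guard is an addition of this sketch, since a region whose mean coincides with a mark point would otherwise divide by zero.

```python
def mark_weight(region, marks):
    """Mark weight of a region w.r.t. a set of mark points (step S303).

    region : (x, y, r, g, b) average coordinates of the region.
    marks  : list of (x, y, r, g, b) mark points (foreground OR background).
    """
    diffs = []
    for m in marks:
        diff_pos = (region[0] - m[0]) ** 2 + (region[1] - m[1]) ** 2
        diff_color = sum((region[i] - m[i]) ** 2 for i in (2, 3, 4))
        diffs.append(diff_color + diff_pos)        # diff_ks / diff_kt
    min_diff = min(diffs)                           # fore_diff_k / back_diff_k
    return 1.0 / min_diff if min_diff > 0 else float("inf")
```

Calling it with the foreground mark points yields fore_weight_k, with the background mark points back_weight_k; both then feed the GraphCut invocation of step S304 alongside the connection weights.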
Step S304: GraphCut segmentation. Taking the region connection weights and region mark weights as input parameters, invoke the GraphCut algorithm to obtain the region segmentation result.
Step S305: determine whether the region segmentation result meets the requirements. Specifically, the operator judges whether the result has isolated the foreground and background of the image accurately enough. If the result does not meet the requirements, return to step S302, continue adding manual marks, and redo the region segmentation; if it does, proceed to the subsequent refined segmentation.
Thus, in the region segmentation process, the operator can perform region segmentation in real time through manual marking. Compared with conventional pixel-level image segmentation, this greatly reduces the amount of computation and speeds up the response, making real-time human-computer interaction possible. The operator can conveniently inspect the segmentation result and modify the manual marks to improve it.
Step S3: apply image dilation and erosion operations to the region segmentation result to construct a trimap of the key frame, and invoke a Bayesian matting algorithm to obtain a refined segmentation result.
In one embodiment of the invention, step S3 further comprises the following steps: read in the region segmentation result, convert it into a single-channel eight-bit mark image, and save at least two copies of the mark image; apply an image dilation operation to one of the copies and an image erosion operation to another; construct the trimap of the key frame from the dilation and erosion results; and, taking the key frame image and the trimap as input parameters, invoke a Bayesian matting algorithm to obtain the refined segmentation result, which is saved in the form of an alpha channel.
In a concrete example of the invention, as shown in Fig. 4, the refined segmentation process of step S3 further comprises the steps:
Step S401: read in the region segmentation result and convert it into a single-channel eight-bit mark image, in which the mark value of the foreground region is 255 and that of the background region is 0. Save two copies of the mark image.
Step S402: apply an image dilation operation to one copy of the mark image. Depending on the key frame image, the size of the dilation kernel can be specified by the operator; in general, the dilation kernel can be set to a 6x6 square.
Step S403: apply an image erosion operation to the other copy of the mark image. Depending on the key frame image, the size of the erosion kernel can be specified by the operator; in general, the erosion kernel can be set to a 6x6 square.
Step S404: construct the trimap. Read in the two copies of the mark image; the background region of the dilated copy becomes the background region of the trimap; the foreground region of the eroded copy becomes the foreground region of the trimap; and the remaining unfilled area becomes the unknown region of the trimap.
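Steps S401 to S404 can be sketched with plain NumPy, implementing binary dilation and erosion as sliding-window max and min over a square structuring element. The 128 value for the unknown band and the parameter name k are conventions of this sketch, not values fixed by the text.

```python
import numpy as np

def make_trimap(mask, k=6):
    """Build a trimap from a 0/255 binary mark image (steps S401-S404).

    mask : 2-D uint8 array, 255 = foreground, 0 = background.
    k    : side length of the square structuring element (6x6 in the text).
    Returns 255 = sure foreground (eroded mask), 0 = sure background
    (outside the dilated mask), 128 = unknown band in between.
    """
    fg = mask == 255
    pad = k // 2
    padded = np.pad(fg, pad)
    # Sliding-window any() = dilation, all() = erosion, over k x k windows.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    windows = windows[: fg.shape[0], : fg.shape[1]]   # crop to image size
    dilated = windows.any(axis=(2, 3))
    eroded = windows.all(axis=(2, 3))
    trimap = np.full(mask.shape, 128, dtype=np.uint8)  # unknown by default
    trimap[~dilated] = 0      # background region of the dilated copy
    trimap[eroded] = 255      # foreground region of the eroded copy
    return trimap
```

In practice a library routine such as OpenCV's dilate/erode would replace the sliding-window construction; the trimap semantics stay the same.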
Step S405, Bayes scratches figure.Key frame images and three components as input parameter, are called Bayes and scratch the result that nomography obtains becoming more meticulous and cuts apart, this result is preserved with the form of alpha passage.
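The trimap construction of steps S401-S404 can be sketched in pure NumPy; the 6x6 kernel follows the default suggested above, while the function name `make_trimap` and the value 128 for the unknown region are illustrative choices, not part of the patent:

```python
import numpy as np

def make_trimap(mask, k=6):
    """Build a trimap from a 0/255 single-channel label image.

    Dilation grows the marked foreground and erosion shrinks it; a pixel
    that is background even after dilation is certain background, a pixel
    that is foreground even after erosion is certain foreground, and the
    remaining band around the boundary is the unknown region."""
    h, w = mask.shape
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    dil = np.zeros((h, w), np.uint8)
    ero = np.full((h, w), 255, np.uint8)
    for dy in range(k):            # max/min over every k x k window
        for dx in range(k):
            win = padded[dy:dy + h, dx:dx + w]
            dil = np.maximum(dil, win)
            ero = np.minimum(ero, win)
    trimap = np.full((h, w), 128, np.uint8)   # unknown by default
    trimap[dil == 0] = 0                      # certain background
    trimap[ero == 255] = 255                  # certain foreground
    return trimap
```

With a 6x6 kernel the unknown band is roughly three pixels wide on each side of the region boundary; the Bayesian matting of step S405 then only has to estimate alpha values inside this band.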
The present invention introduces the concept of an alpha channel into the refined segmentation procedure. Adopting the Bayesian matting algorithm overcomes the limitation of traditional 0-1 binary segmentation, in which a pixel can only belong to the foreground (alpha value 1) or to the background (alpha value 0). In the Bayesian matting algorithm, the degree to which each pixel belongs to the foreground or background is described by a real number in the range [0,1], i.e. the alpha channel. The alpha channel characterizes transparency: a pixel belongs neither entirely to the foreground nor entirely to the background, but to the foreground to the degree alpha and to the background to the degree 1-alpha. Traditional 0-1 binary segmentation is thus the special case in which alpha can only take the values 0 or 1, and it cannot handle the translucency common in real scenes. Translucency tends to appear at object edges, and is especially pronounced where the scene contains fine structures such as hair. Traditional 0-1 binary segmentation cannot obtain satisfactory results in such cases, whereas the Bayesian matting algorithm, by extending the value range of the segmentation to the whole alpha channel, characterizes the degree of transparency well.
Step S4: construct the image foreground area and image background area according to the refined segmentation result, perform depth assignment on each of them, and merge the foreground depth map and the background depth map to obtain the key frame depth map for output.

In one embodiment of the invention, step S4 also comprises: performing the depth assignment by drawing or by model, wherein the depth assignment models include a single depth model, a linear gradient depth model, and a spherical gradient depth model.

Particularly, step S4, the depth assignment procedure in the manufacture method of the interactive three-dimensional video key frame, further comprises the steps shown in Figure 5:

Step S501: construct the image foreground. According to the alpha channel obtained from the refined segmentation, a pixel whose alpha value is greater than 0.1 is considered to belong to the image foreground. Traverse all pixels of the key frame; the set of pixels satisfying this condition is the image foreground.

Step S502: foreground depth assignment. For the image foreground obtained in step S501, the operator can choose to perform depth assignment by drawing or by model. The drawing method requires the operator to paint the depth map of the image foreground with a brush tool; its result is relatively accurate, but the production cost is higher. The model method requires the operator to construct the depth map of the image foreground with one of several depth assignment models; it is simple to operate, but the quality of the depth map is inferior to drawing. Commonly used depth assignment models include the single depth model, the linear gradient depth model, and the spherical gradient depth model, as follows:

Single depth model: the operator specifies a global depth value, and the depth of the whole image foreground is set to this value.

Linear gradient depth model: the operator specifies the start point and end point of a line segment and the depth values of these two points, and the whole image foreground generates its depth map according to the depth gradient along this segment. Particularly, each foreground pixel is projected onto the segment. If the projected point falls on the extension of the segment, the depth value of the segment endpoint nearest to the projected point is used as the depth value of the pixel; if the projected point falls within the segment, the depth value of the pixel is calculated as follows:
d = ((x - x_a) / (x_b - x_a)) × d_b + ((x_b - x) / (x_b - x_a)) × d_a
where x_b and x_a are the horizontal coordinates of the two segment endpoints; d_b and d_a are the depth values of the two segment endpoints; x is the horizontal coordinate of the projected point; and d is the pixel depth value to be calculated.
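The linear gradient model can be sketched as follows. This expresses the patent's formula through the projection parameter t along the segment, which is equivalent to the horizontal-coordinate form above whenever the segment is not vertical; the function name and signature are illustrative:

```python
import numpy as np

def linear_gradient_depth(px, py, a, b, d_a, d_b):
    """Depth of pixel (px, py) under the linear gradient depth model.

    a and b are the (x, y) endpoints of the operator-drawn segment, with
    depth values d_a and d_b.  The pixel is projected onto the line through
    a and b; projections beyond an endpoint take that endpoint's depth,
    projections inside the segment interpolate linearly."""
    ax, ay = a
    bx, by = b
    seg = np.array([bx - ax, by - ay], dtype=float)
    t = np.dot([px - ax, py - ay], seg) / np.dot(seg, seg)
    if t <= 0.0:           # projection on the extension beyond a
        return d_a
    if t >= 1.0:           # projection on the extension beyond b
        return d_b
    return (1.0 - t) * d_a + t * d_b
```

Substituting x = x_a + t (x_b - x_a) into the formula above gives d = (1 - t) d_a + t d_b, which is what the last line computes.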
Spherical gradient depth model: the operator specifies the spherical center, the radius, and the depth values of the center and the edge, and the whole image foreground generates its depth map according to the spherical depth gradient. Particularly, if the distance from a foreground pixel to the spherical center is greater than the radius, the depth value of the spherical edge is used as the depth value of that pixel; if the distance from the pixel to the spherical center is less than the radius, the depth value of the pixel is calculated as follows:
l = sqrt((x - x_center)^2 + (y - y_center)^2)

d = (sqrt(R^2 - l^2) / R) × (d_center - d_rim) + d_rim
where (x_center, y_center) and (x, y) are the coordinates of the spherical center and of the pixel, respectively; l is the distance from the pixel to the spherical center; R is the spherical radius; d_center is the depth value of the spherical center; d_rim is the depth value of the spherical edge; and d is the pixel depth value to be calculated.
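A sketch of the spherical gradient model (the function name and argument order are illustrative):

```python
import math

def spherical_depth(x, y, cx, cy, R, d_center, d_rim):
    """Depth of pixel (x, y) under the spherical gradient depth model.

    (cx, cy) is the spherical center specified by the operator, R the
    radius, d_center and d_rim the depth values of the center and the edge.
    Pixels outside the sphere take the rim depth; inside, the depth follows
    the height profile of a hemisphere of radius R."""
    l = math.hypot(x - cx, y - cy)
    if l >= R:
        return d_rim
    return math.sqrt(R * R - l * l) / R * (d_center - d_rim) + d_rim
```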
It should be noted that the depth assignment models include, but are not limited to, the three models above. The operator can decide, according to the scene information contained in the key frame image, whether to use the drawing method or the model method, and which depth assignment model to use.
Step S503: construct the image background. According to the alpha channel obtained from the refined segmentation, a pixel whose alpha value is less than 0.9 is considered to belong to the image background. Traverse all pixels of the key frame; the set of pixels satisfying this condition is the image background.
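Steps S501 and S503 can be sketched together. Note that the two thresholds (0.1 and 0.9) deliberately make the foreground and background sets overlap wherever 0.1 < alpha < 0.9, so that semi-transparent pixels receive both a foreground and a background depth and are blended in step S505; the function name `split_regions` is an illustrative choice:

```python
import numpy as np

def split_regions(alpha, fg_thresh=0.1, bg_thresh=0.9):
    """Boolean foreground / background masks from the alpha channel.

    A pixel with alpha > 0.1 belongs to the image foreground (step S501);
    a pixel with alpha < 0.9 belongs to the image background (step S503)."""
    return alpha > fg_thresh, alpha < bg_thresh
```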
Step S504: background depth assignment. For the image background obtained in step S503, the operator can choose to perform depth assignment by drawing or by model. These two methods are the same as described in step S502.

Step S505: depth map merging. Take the foreground depth map, the background depth map, and the alpha channel obtained from the refined segmentation as input parameters and call the depth map blending algorithm. The concrete formula is as follows:
d_i = alpha_i × fd_i + (1 - alpha_i) × bd_i
where fd_i and bd_i are the depth values of the i-th pixel in the foreground depth map and the background depth map, respectively; alpha_i is the value of the i-th pixel in the alpha channel; and d_i is the depth value of the i-th pixel after merging.

Finally, the key frame depth map obtained by merging the depth maps is exported as the final result.
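The merging of step S505 is a per-pixel alpha blend of the two depth maps; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def merge_depth(fg_depth, bg_depth, alpha):
    """Depth map merging: d_i = alpha_i * fd_i + (1 - alpha_i) * bd_i.

    alpha holds values in [0, 1]; semi-transparent edge pixels get a depth
    between the foreground and background values, keeping the merged map
    continuous across object boundaries."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha * fg_depth + (1.0 - alpha) * bg_depth
```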
In the present invention, the depth assignment operation in three-dimensional video key frame production is combined with the alpha channel obtained from the refined segmentation. The introduction of the alpha channel thus improves not only the precision of the image segmentation but also the precision of the depth assignment. Traditional 0-1 binary segmentation cannot express degrees of transparency and produces rather hard segmentation edges, so the assigned depth map tends to show large depth jumps at edges, causing viewer discomfort. By introducing the alpha channel, the operator can perform depth assignment on the image foreground and background separately and then use the alpha channel to merge the foreground and background depth maps; the resulting depth map has softer edges and more continuous depth values, eliminating the viewer's discomfort.

According to the manufacture method of the interactive three-dimensional video key frame of the embodiment of the invention, the image segmentation operation can be refined step by step to generate high-precision segmentation results quickly, and the depth assignment of foreground and background can be carried out conveniently to obtain a high-quality key frame depth map. This reduces the production cost of three-dimensional video, improves the speed and precision of three-dimensional video key frame production, eliminates the edge jitter and viewer discomfort caused by traditional production methods, and improves the efficiency of 2D-to-3D video conversion.
The producing device of the interactive three-dimensional video key frame further proposed by the present invention is described below with reference to the accompanying drawings.

As shown in Figure 6, the producing device of the interactive three-dimensional video key frame proposed by a further embodiment of the present invention comprises: a preprocessing module 110, a region segmentation module 120, a refined segmentation module 130, and a depth assignment module 140.

The preprocessing module 110 is used for obtaining the picture sequence, performing image denoising on the selected key frames of the picture sequence, merging similar pixels of the denoised key frame into regions by calling the K-means clustering algorithm, and recording the region information. The preprocessing module 110 performs the image denoising with a Gaussian filtering algorithm.

The region segmentation module 120 is used for calculating the allocation weights of each region and calling the GraphCut algorithm according to the region allocation weights to segment the key frame into regions and obtain the region segmentation result. The region allocation weights comprise region connection weights and region marking weights, and the region marking weights comprise foreground marking weights and background marking weights.

The refined segmentation module 130 is used for performing the image dilation and image erosion operations according to said region segmentation result to construct the trimap of the key frame, and calling the Bayesian matting algorithm to obtain the refined segmentation result, which is saved in the form of an alpha channel.

The depth assignment module 140 is used for constructing the image foreground area and image background area according to the refined segmentation result, performing depth assignment on each, and merging the foreground depth map and the background depth map to obtain the key frame depth map for output. Particularly, the depth assignment module 140 performs the depth assignment by drawing or by model, wherein the depth assignment models include a single depth model, a linear gradient depth model, and a spherical gradient depth model.
The technical flow of the producing device of the interactive three-dimensional video key frame of the present invention is described in general terms below in conjunction with Figure 7.

As shown in Figure 7, the technical flow of the producing device of the interactive three-dimensional video key frame comprises the following steps:

Step S701: obtain the picture sequence. Key frames are selected manually; frames of the image sequence at which a shot switch occurs, or at which the object motion changes critically, can be selected as key frames.

Step S702: image denoising. The preprocessing module 110 denoises the selected key frames of the picture sequence to weaken the influence of image noise on the subsequent algorithms. In one embodiment of the invention, the preprocessing module 110 performs the denoising with a Gaussian filtering algorithm.
Step S703: K-means clustering. The K-means clustering algorithm clusters the pixels of the image according to their five-dimensional (x, y, r, g, b) coordinates (pixel position plus color-space coordinates). The preprocessing module 110 calls the K-means clustering algorithm: it first sets the initial cluster centers, then clusters the pixels according to their five-dimensional coordinates, and finally merges similar pixels of the denoised key frame into regions and records the region information.
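The clustering of step S703 can be sketched with a plain K-means over the five-dimensional pixel features; the deterministic initialization and the parameter defaults here are illustrative simplifications, not the patent's choices:

```python
import numpy as np

def kmeans_pixels(img, k=2, iters=10):
    """Cluster the pixels of an (H, W, 3) image in (x, y, r, g, b) space.

    Returns an (H, W) label map; similar nearby pixels end up in the same
    cluster, which step S703 then records as regions."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([xs.ravel(), ys.ravel(),
                             img.reshape(-1, 3)]).astype(float)
    # deterministic init: k evenly spaced pixels as initial centers
    centers = feats[np.linspace(0, len(feats) - 1, k).astype(int)].copy()
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            members = feats[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return labels.reshape(h, w)
```

Because position and color live in one feature space, clusters tend to be both spatially compact and color-homogeneous, which is what makes them usable as regions for the subsequent GraphCut step.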
Step S704: input manual marks. The operator can mark foreground objects with the left mouse button and background objects with the right mouse button.

After step S703, the region segmentation module 120 calculates the allocation weights of each region: according to the input manual marks, it assigns weights to each region, comprising region connection weights and region marking weights, the region marking weights comprising foreground marking weights and background marking weights.

Step S705: GraphCut segmentation. The region segmentation module 120 takes the region connection weights and region marking weights as input parameters and calls the GraphCut algorithm to obtain the region segmentation result.
Step S706: judge whether the segmentation result meets the requirements. If it does not, return to step S704; if it does, continue with step S707.

Particularly, the operator judges whether the region segmentation result has separated the foreground and background of the image accurately enough. If the segmentation result is unsatisfactory, return to step S704, add further manual marks, and repeat the region segmentation operation; if the segmentation result is satisfactory, continue with step S707.
Step S707: Bayesian matting. The refined segmentation module 130 performs the image dilation and image erosion operations according to the region segmentation result to construct the trimap of the key frame, and calls the Bayesian matting algorithm to obtain the refined segmentation result, which is saved in the form of an alpha channel.

Step S708: depth assignment. The depth assignment module 140 constructs the image foreground area and image background area according to the refined segmentation result, performs depth assignment on each, and merges the foreground depth map and the background depth map to obtain the key frame depth map for output.

According to the producing device of the interactive three-dimensional video key frame of the embodiment of the invention, the image segmentation is refined step by step by the region segmentation module and the refined segmentation module to generate high-precision segmentation results quickly, and the depth assignment of foreground and background can be carried out conveniently to obtain a high-quality key frame depth map. This reduces the production cost of three-dimensional video, improves the speed and precision of three-dimensional video key frame production, and thereby improves the efficiency of 2D-to-3D video conversion.
Any process or method described in the flow charts or otherwise herein can be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as will be understood by those skilled in the art to which the embodiments of the invention pertain.
The logic and/or steps represented in the flow charts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport a program for use by or in connection with such an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). The computer-readable medium could even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for example by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present invention can be implemented in hardware, software, firmware, or a combination thereof. In the embodiments above, multiple steps or methods can be implemented in software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art can be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit (ASIC) having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will appreciate that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program, which can be stored in a computer-readable storage medium; when executed, the program performs one of the steps of the method embodiment or a combination thereof.

In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, or each unit can exist physically on its own, or two or more units can be integrated in one module. The integrated module can be implemented either in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.

The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a concrete example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic use of these terms does not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been illustrated and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the claims and their equivalents.

Claims (10)

1. A manufacture method of an interactive three-dimensional video key frame, characterized in that it comprises the steps:
S1: obtaining a picture sequence and selecting the key frames of said picture sequence, performing image denoising on the key frames, and then calling a K-means clustering algorithm to merge similar pixels of the denoised key frame into regions and record the region information;
S2: calculating the allocation weights of each region, and calling a GraphCut algorithm according to said region allocation weights to segment the key frame into regions and obtain a region segmentation result;
S3: performing an image dilation operation and an image erosion operation according to said region segmentation result to construct a trimap of the key frame, and calling a Bayesian matting algorithm to obtain a refined segmentation result;
S4: constructing an image foreground area and an image background area according to said refined segmentation result, performing depth assignment on each of them, and merging the foreground depth map and the background depth map to obtain a key frame depth map for output.
2. The manufacture method of the interactive three-dimensional video key frame as claimed in claim 1, characterized in that said step S1 also comprises:
performing the image denoising of said key frame with a Gaussian filtering algorithm.
3. The manufacture method of the interactive three-dimensional video key frame as claimed in claim 1, characterized in that said region allocation weights comprise region connection weights and region marking weights, and said region marking weights comprise foreground marking weights and background marking weights.
4. The manufacture method of the interactive three-dimensional video key frame as claimed in claim 1, characterized in that said step S3 further comprises the steps:
reading in said region segmentation result, converting it into a single-channel eight-bit label image, and saving at least two copies of the label image;
performing an image dilation operation on one of said at least two copies of the label image, and an image erosion operation on another of said at least two copies;
constructing the trimap of the key frame from the image dilation result and the image erosion result;
taking the key frame image and said trimap as input parameters and calling the Bayesian matting algorithm to obtain the refined segmentation result, wherein said refined segmentation result is saved in the form of an alpha channel.
5. The manufacture method of the interactive three-dimensional video key frame as claimed in claim 1, characterized in that said step S4 also comprises:
performing said depth assignment by drawing or by model, wherein the depth assignment models comprise a single depth model, a linear gradient depth model, and a spherical gradient depth model.
6. A producing device of an interactive three-dimensional video key frame, characterized in that it comprises:
a preprocessing module, used for obtaining a picture sequence, performing image denoising on the selected key frames of said picture sequence, merging similar pixels of the denoised key frame into regions by calling a K-means clustering algorithm, and recording the region information;
a region segmentation module, used for calculating the allocation weights of each region and calling a GraphCut algorithm according to said region allocation weights to segment the key frame into regions and obtain a region segmentation result;
a refined segmentation module, used for performing an image dilation operation and an image erosion operation according to said region segmentation result to construct a trimap of the key frame, and calling a Bayesian matting algorithm to obtain a refined segmentation result;
a depth assignment module, used for constructing an image foreground area and an image background area according to said refined segmentation result, performing depth assignment on each of them, and merging the foreground depth map and the background depth map to obtain a key frame depth map for output.
7. The producing device of the interactive three-dimensional video key frame as claimed in claim 6, characterized in that said preprocessing module performs the image denoising of said key frame with a Gaussian filtering algorithm.
8. The producing device of the interactive three-dimensional video key frame as claimed in claim 6, characterized in that said region allocation weights comprise region connection weights and region marking weights, and said region marking weights comprise foreground marking weights and background marking weights.
9. The producing device of the interactive three-dimensional video key frame as claimed in claim 6, characterized in that said refined segmentation result is saved in the form of an alpha channel.
10. The producing device of the interactive three-dimensional video key frame as claimed in claim 6, characterized in that said depth assignment module performs said depth assignment by drawing or by model, wherein the depth assignment models comprise a single depth model, a linear gradient depth model, and a spherical gradient depth model.
CN201310013059.XA 2013-01-14 2013-01-14 Manufacture method and manufacture device for interactive three-dimensional video key frame Expired - Fee Related CN103051915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310013059.XA CN103051915B (en) 2013-01-14 2013-01-14 Manufacture method and manufacture device for interactive three-dimensional video key frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310013059.XA CN103051915B (en) 2013-01-14 2013-01-14 Manufacture method and manufacture device for interactive three-dimensional video key frame

Publications (2)

Publication Number Publication Date
CN103051915A true CN103051915A (en) 2013-04-17
CN103051915B CN103051915B (en) 2015-02-18

Family

ID=48064398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310013059.XA Expired - Fee Related CN103051915B (en) 2013-01-14 2013-01-14 Manufacture method and manufacture device for interactive three-dimensional video key frame

Country Status (1)

Country Link
CN (1) CN103051915B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581196A (en) * 2014-12-30 2015-04-29 北京像素软件科技股份有限公司 Video image processing method and device
CN104994368A (en) * 2015-07-10 2015-10-21 孙建德 Non-critical frame ordering method in 2D-3D video switch
CN105100773A (en) * 2015-07-20 2015-11-25 清华大学 Three-dimensional video manufacturing method, three-dimensional view manufacturing method and manufacturing system
CN105590312A (en) * 2014-11-12 2016-05-18 株式会社理光 Foreground image segmentation method and apparatus
CN105631868A (en) * 2015-12-25 2016-06-01 清华大学深圳研究生院 Depth information extraction method based on image classification
CN105740623A (en) * 2016-02-01 2016-07-06 南昌大学 High-immersion visual presentation method applicable to brain surgery virtual surgery simulation
CN107610041A (en) * 2017-08-16 2018-01-19 南京华捷艾米软件科技有限公司 Video portrait based on 3D body-sensing cameras scratches drawing method and system
CN108154086A (en) * 2017-12-06 2018-06-12 北京奇艺世纪科技有限公司 A kind of image extraction method, device and electronic equipment
CN109151444A (en) * 2018-11-13 2019-01-04 盎锐(上海)信息科技有限公司 3D intelligence pixel enhances engine
CN112200756A (en) * 2020-10-09 2021-01-08 电子科技大学 Intelligent bullet special effect short video generation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102316352A (en) * 2011-08-08 2012-01-11 清华大学 Stereo video depth image manufacturing method based on area communication image and apparatus thereof
CN102592268A (en) * 2012-01-06 2012-07-18 清华大学深圳研究生院 Method for segmenting foreground image
CN102663748A (en) * 2012-03-27 2012-09-12 电子科技大学 Low depth of field image segmentation method based on frequency domain
CN102724530A (en) * 2012-05-29 2012-10-10 清华大学 Three-dimensional method for plane videos based on feedback control
CN102724532A (en) * 2012-06-19 2012-10-10 清华大学 Planar video three-dimensional conversion method and system using same


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590312A (en) * 2014-11-12 2016-05-18 株式会社理光 Foreground image segmentation method and apparatus
CN105590312B (en) * 2014-11-12 2018-05-18 株式会社理光 Foreground image dividing method and device
CN104581196A (en) * 2014-12-30 2015-04-29 北京像素软件科技股份有限公司 Video image processing method and device
CN104994368A (en) * 2015-07-10 2015-10-21 孙建德 Non-critical frame ordering method in 2D-3D video switch
CN104994368B (en) * 2015-07-10 2017-10-27 孙建德 Non-key frame sort method in 2D 3D Video Quality Metrics
CN105100773A (en) * 2015-07-20 2015-11-25 清华大学 Three-dimensional video manufacturing method, three-dimensional view manufacturing method and manufacturing system
CN105100773B (en) * 2015-07-20 2017-07-28 清华大学 Three-dimensional video-frequency preparation method, three-dimensional view preparation method and manufacturing system
CN105631868A (en) * 2015-12-25 2016-06-01 清华大学深圳研究生院 Depth information extraction method based on image classification
CN105740623B (en) * 2016-02-01 2018-05-25 南昌大学 A kind of high-immersion visual presentation method suitable for cranial surgery virtual teach-in
CN105740623A (en) * 2016-02-01 2016-07-06 南昌大学 High-immersion visual presentation method applicable to brain surgery virtual surgery simulation
CN107610041A (en) * 2017-08-16 2018-01-19 南京华捷艾米软件科技有限公司 Video portrait matting method and system based on 3D somatosensory cameras
CN107610041B (en) * 2017-08-16 2020-10-27 南京华捷艾米软件科技有限公司 Video portrait matting method and system based on 3D somatosensory camera
CN108154086A (en) * 2017-12-06 2018-06-12 北京奇艺世纪科技有限公司 Image extraction method and device, and electronic equipment
CN108154086B (en) * 2017-12-06 2022-06-03 北京奇艺世纪科技有限公司 Image extraction method and device and electronic equipment
CN109151444A (en) * 2018-11-13 2019-01-04 盎锐(上海)信息科技有限公司 3D intelligent pixel enhancement engine
CN112200756A (en) * 2020-10-09 2021-01-08 电子科技大学 Intelligent bullet special effect short video generation method

Also Published As

Publication number Publication date
CN103051915B (en) 2015-02-18

Similar Documents

Publication Publication Date Title
CN103051915B (en) Manufacture method and manufacture device for interactive three-dimensional video key frame
Taniai et al. Graph cut based continuous stereo matching using locally shared labels
CN104424634B (en) Object tracking method and device
CN100355272C (en) Synthesis method of virtual viewpoint in interactive multi-viewpoint video system
JP2019525515A (en) Multiview scene segmentation and propagation
CN104820990A (en) Interactive image segmentation system
CN102196292B (en) Human-computer-interaction-based video depth map sequence generation method and system
CN105144234A (en) Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
US11184558B1 (en) System for automatic video reframing
CN102263979B (en) Depth map generation method and device for plane video three-dimensional conversion
Djelouah et al. Sparse multi-view consistency for object segmentation
US20100067863A1 (en) Video editing methods and systems
EP2849426A1 (en) Color video processing system and method, and corresponding computer program
CN110047139B (en) Three-dimensional reconstruction method and system for specified target
CN104899563A (en) Two-dimensional face key feature point positioning method and system
CN107871321B (en) Image segmentation method and device
CN101930367B (en) Implementation method of switching images and mobile terminal
Yan et al. Depth map generation for 2d-to-3d conversion by limited user inputs and depth propagation
CN107122792A (en) Indoor layout estimation method and system based on learned prediction
Xue et al. Boundary-induced and scene-aggregated network for monocular depth prediction
CN115937461B (en) Multi-source fusion model construction and texture generation method, device, medium and equipment
CN111091151A (en) Generative adversarial network method for target detection data augmentation
CN116503836A (en) 3D target detection method based on depth completion and image segmentation
CN104700384B (en) Display system and exhibition method based on augmented reality
CN105590327A (en) Motion estimation method and apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20150218)