CN101777180A - Complex background real-time alternating method based on background modeling and energy minimization - Google Patents
- Publication number: CN101777180A (application CN200910243733A)
- Authority: CN (China)
- Prior art keywords: background, value, pixel, model, foreground
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Image Analysis (AREA)
Abstract
The invention provides a real-time complex-background replacement method based on background modeling and energy minimization, comprising the steps of: first, segmenting the current frame into a foreground pixel set, a background pixel set and an unknown-label pixel set using the proposed fused background model; second, constructing a target energy function from color and contrast information within a dynamic graph-cut framework, designing a data term based on the fused background model with temporal-continuity information and a contrast smoothness term based on local binary patterns, and solving for the binary labels of all pixels by minimizing the energy function with a graph-cut algorithm; and finally, seamlessly fusing the segmented foreground objects into a virtual background using post-processing methods such as boundary smoothing and alpha-value estimation. Experimental results show that the method segments objects against complex backgrounds in real time and convincingly composites the segmented objects into a virtual background. The invention is characterized by automatic, real-time segmentation of complex backgrounds and their replacement with a virtual background, yielding a high-quality sequence of virtual-effect images.
Description
Technical field
The present invention relates to the fields of image processing and computer vision, and in particular to a method for accurately segmenting and replacing a complex background in a video stream in real time.
Background art
Current video object segmentation methods can be broadly divided into interactive and automatic segmentation algorithms. Interactive video object segmentation means that during segmentation the result is improved through user intervention. Although interactive segmentation technology has made great progress, some problems remain: the interaction is tedious, the workload is large, and a certain amount of craft is involved, so the quality of the interaction directly affects the final segmentation. Automatic segmentation algorithms require no manual intervention during segmentation and complete the extraction of a specific target automatically.
Classified by background complexity, automatic target segmentation can be divided into segmentation against a solid-color background and segmentation against a complex background. Video segmentation against a solid-color background means segmenting the target and replacing the background in real time under conditions of a solid-color backdrop and homogeneous lighting; background replacement under such conditions is by now a mature, commercialized technology, e.g. blue-screen matting has been applied in virtual studios and film production. However, solid-color backdrops impose many restrictions on the scene and the performers' costumes; how to segment and detect targets automatically against a complex natural background is a problem receiving more and more attention from scholars at home and abroad.
A monocular video segmentation system is cheap and easy to build, and against complex backgrounds it has high utility and broad application prospects in fields such as human-computer interaction, video editing, intelligent surveillance and digital entertainment. Although much effort has been devoted to monocular video segmentation and some results have been achieved, common background replacement methods still cannot segment a complex background in real time and accurately, nor automatically synthesize a high-accuracy virtual-background video.
Summary of the invention
To solve the problem of real-time, accurate background replacement with a single camera against a complex background, the purpose of the invention is to provide a real-time complex-background replacement method based on background modeling and energy minimization, which can segment foreground and background in real time and accurately, and replace the segmented real background with a virtual background image to produce a video with a virtual-background effect.
To achieve these goals, the method provided by the invention first uses the proposed fused background model to divide each frame of the video into three parts: a background region, a foreground region and an unknown region; next it constructs a target energy function from color and spatial-contrast features and finds the extremum of the target energy function by the dynamic graph-cut method, thereby dividing each frame into foreground and background in real time; finally it applies post-processing methods such as frequency-filtering-based edge smoothing and alpha (transparency) value estimation so that the segmented foreground objects are seamlessly fused into the virtual background. The method specifically comprises the following steps:
Step 1: the system first acquires the video stream data and learns the background of its images, establishing a luminance-based eigen-background model and a chrominance-based Gaussian model for each image; the two models are linearly fused to obtain a pixel-level fused background model that is robust to lighting and supports model updating. According to the fused background model, the pixels of each frame are divided into three label classes: foreground, background and unknown;
Step 2: in the current frame, the system determines from step 1 that some pixels have a definite foreground/background label; for the pixels of unknown type, the temporal color feature of the fused background model and the spatial-contrast feature based on local binary patterns (LBP) are used to construct a target energy function based on spatio-temporal information, so that the foreground or background type of the unknown-label pixels can be determined in step 3;
Step 3: the extremum of the target energy function established in step 2 is found by the dynamic graph-cut algorithm, based on the max-flow/min-cut theorem, yielding binary segmentation labels; thus all pixels of the current frame possess a definite background or foreground label;
Step 4: the foreground segmentation result is post-processed: area-labeling and morphological-filtering post-processing algorithms remove the noise and narrow gaps of the segmentation result, and a frequency-filtering method based on the Fourier transform is designed to smooth the foreground boundary, giving the final background segmentation result;
Step 5: in order to fuse the segmented foreground objects seamlessly into the virtual background, alpha (transparency) values are solved for the foreground boundary to achieve a feathering effect on the foreground boundary; the segmented real background is then replaced with the virtual background image according to the alpha values, producing a video with a virtual-background effect.
Wherein, step 1 constructs the fused background model as follows:
Step 11: first learn a video sequence of several frames that contain only background;
Step 12: establish an eigen-background model by principal component analysis, and reconstruct the background image by background compensation, so as to update the eigen-background model based on pixel luminance information;
Step 13: establish a single-Gaussian background model based on chrominance information for each pixel, and update its parameters in real time;
Step 14: linearly fuse the established eigen-background model and single-Gaussian background model, with the fusion coefficient computed dynamically from the pixel luminance information, to obtain the fused background model;
Step 15: using the above fused background model, divide the current frame into three pixel sets: foreground, background and unknown label.
Wherein, the target energy function of step 2 is constructed as follows:
Step 21: according to the pixel classification obtained in step 15, set the concrete form of each term of the target function;
Step 22: compute the spatial-contrast information of neighboring pixels with local binary patterns (LBP), to characterize the smoothness between two adjacent pixels;
Step 23: according to the fused background model and the LBP spatial-contrast information, set the data term and the smoothness term of the Gibbs energy function, obtaining the target energy function based on temporal-continuity information.
Wherein, the binary segmentation of all pixels of the current frame proceeds as follows:
Step 31: construct a weighted graph for each frame, so that the graph-cut algorithm can be executed;
Step 32: exploiting the correlation between consecutive frames' segmentation results, update the edge weights of the weighted graph using the data term and smoothness term of step 23 computed from the previous frame's segmentation result;
Step 33: find the minimum cut set of the current weighted graph by the dynamic graph-cut algorithm, to determine the extremum of the target energy function and the binary labels of the unknown pixels.
Wherein, the post-processing of the foreground segmentation result in step 4 proceeds as follows:
Step 41: using area labeling and the morphological closing algorithm, remove small interference regions from the foreground segmentation result obtained in step 33, and fill narrow breaks and small holes;
Step 42: find the boundary of the foreground objects with a boundary-tracing algorithm;
Step 43: sample the boundary curve of the foreground objects at equal intervals to obtain a point sequence, and obtain the Fourier descriptor of the boundary by the Fourier transform;
Step 44: design the cutoff frequency of the low-pass filter to smooth the foreground boundary obtained in step 42; finally, the inverse Fourier transform yields the smoothed boundary-point coordinates, determining the final boundary of the segmented foreground objects.
Wherein, step 5 first computes the alpha (transparency) values of the foreground boundary strip obtained in step 44: a boundary strip of a certain width is obtained with dilation and erosion, the alpha values of the strip are computed from the result of Gaussian smoothing of the boundary, and they are normalized into the range 0 to 1; the blending ratio of foreground to background is then set according to the alpha values, determining the color value of each pixel in the composite picture, and finally the video with the virtual-background effect is generated.
Compared with the prior art, the invention has the following advantages:
1. The target energy function is constructed from color and contrast information; a fused-background-model data term based on temporal-continuity information and a contrast smoothness term based on local binary patterns (LBP) are designed. In this way color, contrast, and spatial and temporal continuity information are fully exploited to establish a more robust and concise target energy function.
2. The binary labels of all pixels are found by minimizing the energy function with a dynamic graph-cut algorithm, improving the efficiency of segmenting the current video frame; a low-pass filter is designed to smooth the foreground boundary, and post-processing methods such as area labeling and alpha (transparency) value estimation make the segmented foreground objects fuse seamlessly into the virtual background.
In summary, the system of the invention is simple to build, and the algorithm is fully automatic, real-time and accurate.
Description of drawings
Fig. 1 is the flowchart of the real-time complex-background replacement method of the invention.
Fig. 2 is a schematic diagram of the real-time complex-background replacement process of the invention.
Fig. 3(a) and Fig. 3(b) show real-time complex-background segmentation results of the invention, wherein:
Fig. 3(a): original input video frames;
Fig. 3(b): complex-background segmentation results.
Fig. 4(a)-Fig. 4(d) show complex-background replacement results of the invention.
The foreground is segmented from the original videos in rows Fig. 4(a) and Fig. 4(c), and superimposed onto the virtual scenes in rows Fig. 4(b) and Fig. 4(d), respectively.
Embodiment
The invention is described in detail below with reference to the drawings; note that the described embodiments are intended only to aid understanding of the invention and do not limit it in any way.
Fig. 1 is the flowchart of the real-time complex-background replacement method. Following the flow, the specific implementation of each step of the method is as follows:
1. Read in the video stream
The system first acquires video stream data. The video data can come from one of two sources: an image sequence captured by a camera in real time, or a previously recorded video file.
2. Background learning: construct the fused background model
(21) Learn an image sequence of N frames; generally 40-80 frames are learned, taking about 2 s;
(22) Establish the eigen-background: by learning the N background frames based on luminance information, establish a luminance-based eigen-subspace. Each frame of dimension m x n is regarded as a random vector; its orthogonal K-L basis is obtained with the Karhunen-Loeve transform, and the basis vectors corresponding to large eigenvalues constitute the eigen-subspace W, which characterizes the static background. The background image B_p at pixel p of the current frame is reconstructed from the eigen-subspace, and the difference between a new frame I_p and the background image is computed as D_p(E) = |I_p - B_p|; this difference D_p(E) represents the cost, computed by the eigen-background model, of pixel p in the new frame belonging to the background;
(23) Establish the single-Gaussian background model: by learning the N background frames based on chrominance information, establish a chrominance-based Gaussian model, giving the cost D_p(G);
(24) Establish the fused background model: linearly fuse the eigen-background and Gaussian background models established above, D_p = lambda * D_p(E) + (1 - lambda) * D_p(G). Here D_p is the fused background cost; D_p(E) is computed as in (22); D_p(G) is the Gaussian cost established in (23); lambda is the fusion ratio, determined from the pixel luminance difference.
(25) Using the above fused background model, divide the current frame into three pixel sets, foreground, background and unknown label:
X_p = B if D_p < T_b; X_p = F if D_p > T_f; X_p = U otherwise.
Here X_p denotes the label of pixel p: B denotes background, F denotes foreground, and U denotes the unknown type still to be decided as foreground or background. D_p is the fused background cost, and T_b, T_f are the discrimination thresholds for deciding that pixel p belongs to the background or the foreground, respectively.
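As an illustration, the three-way classification of (25) can be sketched in NumPy as follows (a minimal sketch, assuming the per-pixel costs D_p(E) and D_p(G) have already been computed; the values of lambda, T_b and T_f below are illustrative, not those of the patent):

```python
import numpy as np

def fuse_and_label(d_eigen, d_gauss, lam=0.5, t_b=10.0, t_f=40.0):
    """Linearly fuse the eigen-background cost D_p(E) and the Gaussian
    chrominance cost D_p(G), then three-way label each pixel:
    B (background), F (foreground) or U (unknown)."""
    d = lam * d_eigen + (1.0 - lam) * d_gauss    # fused cost D_p
    labels = np.full(d.shape, 'U', dtype='<U1')  # default: unknown
    labels[d < t_b] = 'B'                        # confidently background
    labels[d > t_f] = 'F'                        # confidently foreground
    return d, labels

# toy 2x2 cost maps: low, middling, high and low fused costs
d_e = np.array([[2.0, 25.0], [60.0, 5.0]])
d_g = np.array([[4.0, 35.0], [80.0, 3.0]])
d, labels = fuse_and_label(d_e, d_g)
```

Only the pixels labelled U need to be resolved by the energy minimization of the following steps, which is what keeps the per-frame graph small.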
3. Construct the energy function; set the data term and the smoothness term of the energy function
(31) Under the Markov random field / maximum a posteriori (MRF-MAP) framework, the Gibbs energy function
E(X) = Σ_i D_i(x_i) + γ Σ_(i,j) V_ij(x_i, x_j)
is adopted as the objective function. Here D_i(x_i) is the data term of the objective function, expressing the cost of assigning label x_i to vertex i; V_ij(x_i, x_j) is the smoothness term, expressing the cost of assigning labels x_i, x_j to adjacent vertices i, j; γ is the weight coefficient of V_ij(x_i, x_j) relative to D_i(x_i). In this way the MRF-based segmentation problem is converted into finding the labeling corresponding to the minimum of the posterior Gibbs energy;
(32) Set the data term D_i(x_i) of the energy function according to the fused background model;
(33) Compute the smoothness term V_ij(x_i, x_j) of the energy function for neighboring pixels i, j according to local binary patterns (LBP). The LBP-based contrast value involves no exponential operation, is simple to compute, and is insensitive to illumination changes;
(34) Set the ratio coefficient γ of the energy function according to the relative importance of the smoothness term and the data term.
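The LBP computation of (33) and the Gibbs energy of (31) can be sketched as follows. This is a hedged illustration: the text does not give the exact form of V_ij, so an exponential-free stand-in penalty of 1/(1 + Hamming distance of the LBP codes) is used here, consistent with the stated goal of avoiding exponentials:

```python
import numpy as np

def lbp_codes(img):
    """8-bit local binary pattern: compare each pixel with its 8 neighbours.
    Border pixels are skipped for brevity."""
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy, x + dx] >= img[y, x]:
                    c |= 1 << bit
            codes[y, x] = c
    return codes

def gibbs_energy(labels, data_cost, codes, gamma=1.0):
    """E(X) = sum_i D_i(x_i) + gamma * sum_{4-neighbours} V_ij, where a label
    change across a strong LBP contrast (large Hamming distance between the
    codes) is penalised less -- no exponentials involved."""
    h, w = labels.shape
    e = sum(data_cost[y, x, labels[y, x]] for y in range(h) for x in range(w))
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):       # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y, x] != labels[ny, nx]:
                    ham = bin(codes[y, x] ^ codes[ny, nx]).count('1')
                    e += gamma / (1.0 + ham)      # cheaper cut across contrast
    return e
```

With this form, a foreground/background transition placed on a strong texture edge costs little, while a transition inside a flat region is heavily penalised, which is the qualitative behaviour the smoothness term is meant to encode.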
4. Find the extremum of the energy function
(41) The graph-cut algorithm is used to find the minimum min E(X) of the target energy function (see (31)). A weighted graph is first constructed, each node of the graph corresponding to a pixel of the current frame, and the weight of each edge of the weighted graph is set according to the data term and smoothness term of the energy function;
(42) The dynamic graph-cut method accelerates the energy minimization: exploiting the continuity between consecutive video frames, the residual-graph information of the previous frame is fully used to update the edge weights of the current frame's weighted graph, reducing the rebuilding of search trees in the current frame's augmenting-path algorithm and shortening the min-cut search time; the minimum cut among all cut sets is then found by the max-flow/min-cut algorithm, which corresponds to the minimum of the energy function E(X) and determines the binary label (foreground/background) of the pixels of unknown type (U). In this way the binary segmentation result of all pixels of the current frame is obtained;
With the dynamic graph-cut algorithm, for a picture of size 320 x 240 the per-frame segmentation time falls from about 120 milliseconds (ms) with the original graph-cut algorithm to about 65 ms, nearly doubling the segmentation speed. The smaller the foreground object to be segmented in the current frame, the more pronounced the speed-up.
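The graph construction of (41)-(42) can be illustrated on a tiny s-t graph. This is a sketch using networkx's `minimum_cut` for clarity only; a real-time system would use a dedicated Boykov-Kolmogorov-style (dynamic) max-flow implementation, and the toy costs below are invented:

```python
import networkx as nx

def segment_min_cut(data_cost_bg, data_cost_fg, pair_w):
    """Solve min_X sum_i D_i(x_i) + sum_ij w_ij * [x_i != x_j] exactly for a
    binary MRF by a single s-t min cut: terminal edges carry the data costs,
    neighbour edges carry the pairwise weights."""
    g = nx.DiGraph()
    n = len(data_cost_bg)
    for i in range(n):
        # edge s->i is cut when i lands on the sink side, i.e. is labelled B
        g.add_edge('s', i, capacity=data_cost_bg[i])
        # edge i->t is cut when i lands on the source side, i.e. is labelled F
        g.add_edge(i, 't', capacity=data_cost_fg[i])
    for (i, j), w in pair_w.items():
        g.add_edge(i, j, capacity=w)   # cut when i, j get different labels
        g.add_edge(j, i, capacity=w)
    cut_value, (src_side, _) = nx.minimum_cut(g, 's', 't')
    labels = ['F' if i in src_side else 'B' for i in range(n)]
    return cut_value, labels

# two pixels: pixel 0 prefers foreground, pixel 1 prefers background,
# weakly coupled by a pairwise weight of 0.5
cut, labels = segment_min_cut([5.0, 1.0], [1.0, 5.0], {(0, 1): 0.5})
```

The dynamic variant described in (42) amounts to warm-starting this max-flow computation from the previous frame's residual graph instead of rebuilding it from scratch.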
5. Post-processing: edge smoothing, area labeling, etc.
(51) Label the regions of the segmentation result with an area-labeling algorithm, and remove isolated regions of small area;
(52) Fill the aforementioned small holes and breaks with a morphological closing operation;
(53) Find the foreground boundary points with a boundary-tracing algorithm, and write them in complex-number form;
(54) Apply the Fourier transform to the boundary points, and set the cutoff frequency r of the low-pass filter, which determines the smoothness of the edge;
(55) Obtain the smoothed planar coordinates of the boundary points by the inverse Fourier transform.
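Steps (53)-(55) amount to low-pass filtering the boundary's Fourier descriptor, which can be sketched with NumPy's FFT (the parameter `keep` plays the role of the cutoff frequency r; the ideal rectangular filter used here is one possible low-pass design, assumed for illustration):

```python
import numpy as np

def smooth_boundary(points, keep=8):
    """Low-pass filter a closed boundary via its Fourier descriptor:
    represent each (x, y) point as the complex number x + iy, FFT, zero
    all but the `keep` lowest-frequency coefficients (and their
    negative-frequency mirrors), then inverse-FFT back to coordinates."""
    z = points[:, 0] + 1j * points[:, 1]   # complex-number form of the boundary
    coeffs = np.fft.fft(z)
    mask = np.zeros_like(coeffs)
    mask[:keep] = 1                        # low positive frequencies
    mask[-keep:] = 1                       # matching negative frequencies
    smooth = np.fft.ifft(coeffs * mask)
    return np.stack([smooth.real, smooth.imag], axis=1)
```

A jagged pixel-accurate contour fed through this filter comes back as a smooth closed curve; lowering `keep` (i.e. the cutoff frequency) smooths more aggressively.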
6. Compute the alpha (transparency) values of the segmented foreground boundary, and replace the background to synthesize the virtual image
(61) Erode the determined foreground boundary several times and then dilate it the same number of times, obtaining a long, narrow boundary strip of a certain width. The width is related to the size of the structuring element of the morphological operation;
(62) Apply Gaussian smoothing within the boundary strip, so that the pixel values inside the strip change from the two original values 0 and 255 to discrete values between 0 and 255; normalizing these discrete values into the range 0 to 1 yields the alpha (transparency) value of each pixel in the strip. The alpha value of each background point outside the strip is 0, and the alpha value of each foreground point inside the strip is 1;
(63) Select a virtual background image or virtual video sequence, replace the segmented background with the virtual background image, and blend the virtual background with the segmented foreground objects according to the alpha (transparency) values, synthesizing a video sequence with the virtual background.
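The compositing of (63) is standard alpha blending; a minimal sketch (the array shapes and the rounding convention are assumptions, not specified by the text):

```python
import numpy as np

def composite(frame, virtual_bg, alpha):
    """Blend the segmented foreground into the virtual background:
    out = alpha * frame + (1 - alpha) * virtual_bg, with alpha in [0, 1]
    (1 inside the foreground, 0 in the background, intermediate values in
    the smoothed boundary strip give the feathered edge)."""
    a = alpha[..., None].astype(np.float64)   # broadcast over color channels
    out = a * frame.astype(np.float64) + (1.0 - a) * virtual_bg.astype(np.float64)
    return np.clip(out + 0.5, 0, 255).astype(np.uint8)  # round to 8-bit
```

Because alpha varies smoothly across the boundary strip, the foreground edge blends into the virtual background without the hard halo a binary mask would produce.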
Fig. 2 is a schematic diagram of the background replacement process. The system obtains the current frame by reading a video or a camera-captured image sequence; using the proposed fused background model it computes the probability that each pixel belongs to the background, dividing the pixels of the current frame into three classes: background, foreground and unknown (remaining to be decided as foreground or background); it builds the energy function and determines the binary labels of the unknown-class pixels with the dynamic graph-cut algorithm, finally assigning every pixel of the current frame a binary foreground/background label; it computes the alpha (transparency) values of the foreground boundary strip, replaces the segmented background with the virtual background, and synthesizes one frame of the virtual image.
Fig. 3 shows test results on a sequence of images. Fig. 3(a) gives the original images of two frames in the sequence. Fig. 3(b) gives the background segmentation results obtained with the proposed fusion model and the dynamic graph-cut method; the background is accurately discriminated and the foreground objects are completely extracted.
Fig. 4 gives background replacement results on different image sequences. The foreground is segmented automatically in real time from the original videos in rows Fig. 4(a) and Fig. 4(c), and superimposed onto the virtual scenes in rows Fig. 4(b) and Fig. 4(d), respectively.
The invention provides a complex-background replacement algorithm based on background modeling and energy minimization. The algorithm has good real-time performance: for an input video of size 320x240, tested on a machine with a P4 2.8 GHz CPU and 512 MB of memory, it reaches 16-18 frames/s, basically meeting the real-time requirement; the algorithm has also been verified experimentally on a large number of image sequences, and the results show that it has high accuracy.
The above are only embodiments of the invention, and the scope of the invention should not be limited by this description. Those skilled in the art should appreciate that any modification or partial replacement that does not depart from the scope of the invention falls within the scope defined by the claims of the invention.
Claims (6)
1. A real-time complex-background image replacement method based on background modeling and energy minimization, characterized in that the proposed fused background model is first used to divide each frame of the video into three parts: a background region, a foreground region and an unknown region; next, a target energy function is constructed from color and spatial-contrast features and its extremum is found by the dynamic graph-cut method, thereby dividing each frame into foreground and background in real time; finally, post-processing methods such as frequency-filtering-based edge smoothing and alpha (transparency) value estimation are applied so that the segmented foreground objects are seamlessly fused into the virtual background; the method specifically comprises the following steps:
Step 1: the system first acquires the video stream data and learns the background of its images, establishing a luminance-based eigen-background model and a chrominance-based Gaussian model for each image; the two models are linearly fused to obtain a pixel-level fused background model that is robust to lighting and supports model updating; according to the fused background model, the pixels of each frame are divided into three label classes: foreground, background and unknown;
Step 2: in the current frame, the system determines from step 1 that some pixels have a definite foreground/background label; for the pixels of unknown type, the temporal color feature of the fused background model and the spatial-contrast feature based on local binary patterns are used to construct a target energy function based on spatio-temporal information, so that the foreground or background type of the unknown-label pixels can be determined in step 3;
Step 3: the extremum of the target energy function established in step 2 is found by the dynamic graph-cut algorithm, based on the max-flow/min-cut theorem, yielding binary segmentation labels; thus all pixels of the current frame possess a definite background or foreground label;
Step 4: the foreground segmentation result is post-processed: area-labeling and morphological-filtering post-processing algorithms remove the noise and narrow gaps of the segmentation result, and a frequency-filtering method based on the Fourier transform is designed to smooth the foreground boundary, giving the final background segmentation result;
Step 5: in order to fuse the segmented foreground objects seamlessly into the virtual background, alpha (transparency) values are solved for the foreground boundary to achieve a feathering effect on the foreground boundary; the segmented real background is then replaced with the virtual background image according to the alpha values, producing a video with a virtual-background effect.
2. The method according to claim 1, characterized in that step 1 constructs the fused background model as follows:
Step 11: first learn a video sequence of several frames that contain only background;
Step 12: establish an eigen-background model by principal component analysis, and reconstruct the background image by background compensation, so as to update the eigen-background model based on pixel luminance information;
Step 13: establish a single-Gaussian background model based on chrominance information for each pixel, and update its parameters in real time;
Step 14: linearly fuse the established eigen-background model and single-Gaussian background model, with the fusion coefficient computed dynamically from the pixel luminance information, to obtain the fused background model;
Step 15: using the above fused background model, divide the current frame into three pixel sets: foreground, background and unknown label.
3. The method according to claim 1, characterized in that the target energy function of step 2 is constructed as follows:
Step 21: according to the pixel classification obtained in step 15, set the concrete form of each term of the target function;
Step 22: compute the spatial-contrast information of neighboring pixels with the local binary pattern model, to characterize the smoothness between two adjacent pixels;
Step 23: according to the fused background model and the local-binary-pattern spatial-contrast information, set the data term and the smoothness term of the Gibbs energy function, obtaining the target energy function based on temporal-continuity information.
4. The method according to claim 1, characterized in that the binary segmentation of all pixels of the current frame proceeds as follows:
Step 31: construct a weighted graph for each frame image, on which the graph-cut algorithm is executed;
Step 32: exploiting the temporal correlation between adjacent video frames, update the edge weights of the weighted graph from the previous frame's segmentation result, using the data term and smoothness term obtained in step 23;
Step 33: find the minimum cut set of the current weighted graph with the dynamic graph-cut algorithm, so as to determine the extremum of the energy objective function and the binary labels of the unknown pixels.
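The minimum cut of step 33 can be obtained with any max-flow algorithm; below is a minimal Edmonds-Karp sketch for a small weighted graph. The dynamic graph cut named in the claim additionally reuses the previous frame's flow when edge weights are updated in step 32, which is not shown here.

```python
from collections import deque

def min_cut(cap, s, t):
    """cap: {(u, v): edge weight}. Returns the set of nodes on the source
    side of a minimum s-t cut, via Edmonds-Karp max-flow."""
    res = dict(cap)                          # residual capacities
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)      # reverse edge for residual flow
        res.setdefault((v, u), 0)
    while True:
        # breadth-first search for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break                            # no augmenting path left
        # collect the path edges and their bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[e] for e in path)
        for (u, v) in path:                  # push flow along the path
            res[(u, v)] -= b
            res[(v, u)] += b
    # nodes still reachable from s in the residual graph form the source side
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in side and res[(u, v)] > 0:
                side.add(v)
                q.append(v)
    return side
```

In the segmentation graph, unknown pixels that end up on the source side of the cut receive the foreground label and the remainder the background label, which minimizes the energy objective function.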
5. The method according to claim 1, characterized in that step 4 post-processes the foreground target segmentation result as follows:
Step 41: using connected-region identification and the morphological closing algorithm, remove small interference regions from the foreground segmentation result obtained in step 33, and fill narrow gaps and small holes;
Step 42: obtain the boundary of the foreground target with a contour-tracing algorithm;
Step 43: sample points at equal intervals along the boundary curve of the foreground object, and obtain the Fourier descriptors of the boundary by a Fourier transform;
Step 44: design the cutoff frequency of a low-pass filter to smooth the foreground target boundary obtained in step 42; finally, apply an inverse Fourier transform to obtain the smoothed boundary point coordinates, which determine the final boundary of the segmented foreground target.
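Steps 43–44 amount to low-pass filtering the boundary's Fourier descriptors. A minimal sketch using a naive O(n²) DFT for clarity; the ideal cutoff `n_keep` stands in for the low-pass filter design of step 44:

```python
import cmath

def smooth_boundary(points, n_keep=4):
    """points: (x, y) samples along a closed contour (step 43's point sequence)."""
    z = [complex(x, y) for x, y in points]
    n = len(z)
    # Forward DFT: Fourier descriptors of the boundary (step 43).
    coeffs = [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / n)
                  for k in range(n)) / n
              for u in range(n)]
    # Ideal low-pass filter: frequencies are symmetric around index 0,
    # so keep only indices near 0 and near n.
    for u in range(n):
        if min(u, n - u) > n_keep:
            coeffs[u] = 0
    # Inverse DFT: smoothed boundary point coordinates (step 44).
    zs = [sum(coeffs[u] * cmath.exp(2j * cmath.pi * u * k / n)
              for u in range(n))
          for k in range(n)]
    return [(w.real, w.imag) for w in zs]
```

Keeping only the lowest few descriptors removes jagged high-frequency detail while preserving the overall shape of the foreground boundary.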
6. The method according to claim 1, characterized in that step 5 first computes the alpha (transparency) values of the foreground boundary strip obtained in step 44: a boundary strip of a certain width is obtained with the dilation and erosion algorithms, the alpha values of the strip are computed from the result of Gaussian smoothing and normalized to the range 0 to 1; the blending ratio of foreground to background is then set according to the alpha values so obtained, the color value of each pixel in the composite picture is determined, and the final video with the virtual-background effect is generated.
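The compositing in claim 6 is per-pixel alpha blending of the foreground frame with the virtual background. A minimal sketch; the Gaussian alpha profile across the boundary strip and its signed-distance parameterization are illustrative assumptions:

```python
import math

def alpha_from_distance(d, sigma=2.0):
    """Alpha for a pixel at signed distance d from the foreground boundary
    (d > 0 inside the foreground): a Gaussian-smoothed step in [0, 1]."""
    g = 0.5 * math.exp(-d * d / (2.0 * sigma * sigma))
    return 1.0 - g if d >= 0 else g

def composite(fg, vbg, alpha):
    """Blend one RGB pixel: alpha * foreground + (1 - alpha) * virtual background."""
    return tuple(alpha * f + (1.0 - alpha) * b for f, b in zip(fg, vbg))
```

Deep inside the foreground, alpha approaches 1 and the original pixel is kept; far into the background it approaches 0 and the virtual background shows through, giving the feathered transition of step 5.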
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102437337A CN101777180B (en) | 2009-12-23 | 2009-12-23 | Complex background real-time alternating method based on background modeling and energy minimization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101777180A true CN101777180A (en) | 2010-07-14 |
CN101777180B CN101777180B (en) | 2012-07-04 |
Family
ID=42513635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009102437337A Expired - Fee Related CN101777180B (en) | 2009-12-23 | 2009-12-23 | Complex background real-time alternating method based on background modeling and energy minimization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101777180B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101154265A (en) * | 2006-09-29 | 2008-04-02 | 中国科学院自动化研究所 | Method for recognizing iris with matched characteristic and graph based on partial binary mode |
2009-12-23: application CN2009102437337A filed in China; patent CN101777180B (en); status: not active, Expired - Fee Related
Cited By (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9361702B2 (en) | 2011-06-28 | 2016-06-07 | Zte Corporation | Image detection method and device |
CN102855634A (en) * | 2011-06-28 | 2013-01-02 | 中兴通讯股份有限公司 | Image detection method and image detection device |
WO2013000404A1 (en) * | 2011-06-28 | 2013-01-03 | 中兴通讯股份有限公司 | Image detection method and device |
CN102855634B (en) * | 2011-06-28 | 2017-03-22 | 中兴通讯股份有限公司 | Image detection method and image detection device |
CN102496163A (en) * | 2011-11-03 | 2012-06-13 | 长安大学 | Background reconstruction method based on gray extremum |
CN103136722A (en) * | 2011-11-25 | 2013-06-05 | 北京大学 | Color gamut analysis based image partition method and system |
CN103136722B (en) * | 2011-11-25 | 2016-06-29 | 北京大学 | A kind of image partition method based on colour gamut analysis and system |
CN102663398A (en) * | 2012-03-31 | 2012-09-12 | 上海博康智能信息技术有限公司 | Color image color feature extraction method and device thereof |
CN102819841A (en) * | 2012-07-30 | 2012-12-12 | 中国科学院自动化研究所 | Global threshold partitioning method for partitioning target image |
CN102819841B (en) * | 2012-07-30 | 2015-01-28 | 中国科学院自动化研究所 | Global threshold partitioning method for partitioning target image |
CN103686050A (en) * | 2012-09-18 | 2014-03-26 | 联想(北京)有限公司 | Method and electronic equipment for simulating call scenes, |
CN103065327A (en) * | 2012-12-31 | 2013-04-24 | 合肥寰景信息技术有限公司 | Gait region partitioning algorithm based on novel space-time fusion |
CN104463839B (en) * | 2013-09-18 | 2018-03-30 | 卡西欧计算机株式会社 | Image processing apparatus, image processing method and recording medium |
CN103500447B (en) * | 2013-09-18 | 2015-03-18 | 中国石油大学(华东) | Video foreground and background partition method based on incremental high-order Boolean energy minimization |
CN103500447A (en) * | 2013-09-18 | 2014-01-08 | 中国石油大学(华东) | Video foreground and background partition method based on incremental high-order Boolean energy minimization |
CN104463839A (en) * | 2013-09-18 | 2015-03-25 | 卡西欧计算机株式会社 | Image processing device and image processing method |
CN103559690A (en) * | 2013-11-05 | 2014-02-05 | 北京京东尚科信息技术有限公司 | Method for achieving smoothness of image edges |
CN103559690B (en) * | 2013-11-05 | 2017-02-01 | 北京京东尚科信息技术有限公司 | Method for achieving smoothness of image edges |
CN104933694A (en) * | 2014-03-17 | 2015-09-23 | 华为技术有限公司 | Method and equipment for segmenting foreground and background |
WO2015139453A1 (en) * | 2014-03-17 | 2015-09-24 | 华为技术有限公司 | Foreground and background segmentation method and device |
CN106233716B (en) * | 2014-04-22 | 2019-12-24 | 日本电信电话株式会社 | Dynamic illusion presenting device, dynamic illusion presenting method, and program |
CN106233716A (en) * | 2014-04-22 | 2016-12-14 | 日本电信电话株式会社 | Video display devices, video projection, dynamic illusion present device, video-generating device, their method, data structure, program |
CN104902189A (en) * | 2015-06-24 | 2015-09-09 | 小米科技有限责任公司 | Picture processing method and picture processing device |
CN104967855A (en) * | 2015-06-25 | 2015-10-07 | 华侨大学 | Coding method suitable for monitoring video |
CN106558043B (en) * | 2015-09-29 | 2019-07-23 | 阿里巴巴集团控股有限公司 | A kind of method and apparatus of determining fusion coefficients |
CN106558043A (en) * | 2015-09-29 | 2017-04-05 | 阿里巴巴集团控股有限公司 | A kind of method and apparatus for determining fusion coefficients |
CN105608716A (en) * | 2015-12-21 | 2016-05-25 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105608716B (en) * | 2015-12-21 | 2020-12-18 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105608699B (en) * | 2015-12-25 | 2019-03-29 | 联想(北京)有限公司 | A kind of image processing method and electronic equipment |
CN105608699A (en) * | 2015-12-25 | 2016-05-25 | 联想(北京)有限公司 | Image processing method and electronic device |
CN105654513A (en) * | 2015-12-30 | 2016-06-08 | 电子科技大学 | Moving target detection method based on sampling strategy |
CN106204426A (en) * | 2016-06-30 | 2016-12-07 | 广州华多网络科技有限公司 | A kind of method of video image processing and device |
CN106254849A (en) * | 2016-08-08 | 2016-12-21 | 深圳迪乐普数码科技有限公司 | The method of a kind of foreground object local displacement and terminal |
CN106254849B (en) * | 2016-08-08 | 2018-11-13 | 深圳迪乐普数码科技有限公司 | A kind of method and terminal that foreground object is locally replaced |
CN106296683A (en) * | 2016-08-09 | 2017-01-04 | 深圳迪乐普数码科技有限公司 | A kind of generation method of virtual screen curtain wall and terminal |
CN106446749A (en) * | 2016-08-30 | 2017-02-22 | 西安小光子网络科技有限公司 | Optical label shooting and optical label decoding relay work method |
CN106446749B (en) * | 2016-08-30 | 2019-08-13 | 西安小光子网络科技有限公司 | A kind of shooting of optical label and optical label decode relay working method |
CN107979754A (en) * | 2016-10-25 | 2018-05-01 | 百度在线网络技术(北京)有限公司 | A kind of test method and device based on camera application |
WO2018141232A1 (en) * | 2017-02-06 | 2018-08-09 | 腾讯科技(深圳)有限公司 | Image processing method, computer storage medium, and computer device |
WO2018166289A1 (en) * | 2017-03-15 | 2018-09-20 | 北京京东尚科信息技术有限公司 | Image generation method and device |
CN108629815A (en) * | 2017-03-15 | 2018-10-09 | 北京京东尚科信息技术有限公司 | image generating method and device |
CN107025457A (en) * | 2017-03-29 | 2017-08-08 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device |
CN110023989A (en) * | 2017-03-29 | 2019-07-16 | 华为技术有限公司 | A kind of generation method and device of sketch image |
CN107025457B (en) * | 2017-03-29 | 2022-03-08 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN107194949A (en) * | 2017-05-18 | 2017-09-22 | 华中科技大学 | A kind of interactive video dividing method and system for being matched based on block and strengthening Onecut |
CN107194949B (en) * | 2017-05-18 | 2019-09-24 | 华中科技大学 | A kind of interactive video dividing method and system matched based on block and enhance Onecut |
CN107301655A (en) * | 2017-06-16 | 2017-10-27 | 上海远洲核信软件科技股份有限公司 | A kind of video movement target method for detecting based on background modeling |
CN107920213A (en) * | 2017-11-20 | 2018-04-17 | 深圳市堇茹互动娱乐有限公司 | Image synthesizing method, terminal and computer-readable recording medium |
CN110099209A (en) * | 2018-01-30 | 2019-08-06 | 佳能株式会社 | Image processing apparatus, image processing method and storage medium |
US11100655B2 (en) | 2018-01-30 | 2021-08-24 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method for hiding a specific object in a captured image |
CN110099209B (en) * | 2018-01-30 | 2021-03-23 | 佳能株式会社 | Image processing apparatus, image processing method, and storage medium |
CN108830780A (en) * | 2018-05-09 | 2018-11-16 | 北京京东金融科技控股有限公司 | Image processing method and device, electronic equipment, storage medium |
CN109737961A (en) * | 2018-05-23 | 2019-05-10 | 哈尔滨理工大学 | A kind of robot optimization area Dian Dao paths planning method with probability completeness |
CN108765278A (en) * | 2018-06-05 | 2018-11-06 | Oppo广东移动通信有限公司 | A kind of image processing method, mobile terminal and computer readable storage medium |
CN109118507A (en) * | 2018-08-27 | 2019-01-01 | 明超 | Shell bears air pressure real-time alarm system |
CN111210450A (en) * | 2019-12-25 | 2020-05-29 | 北京东宇宏达科技有限公司 | Method for processing infrared image for sea-sky background |
CN111080748A (en) * | 2019-12-27 | 2020-04-28 | 北京工业大学 | Automatic picture synthesis system based on Internet |
CN111080748B (en) * | 2019-12-27 | 2023-06-02 | 北京工业大学 | Automatic picture synthesizing system based on Internet |
CN112070656A (en) * | 2020-08-10 | 2020-12-11 | 上海明略人工智能(集团)有限公司 | Frame data modification method and device |
CN112070656B (en) * | 2020-08-10 | 2023-08-25 | 上海明略人工智能(集团)有限公司 | Frame data modification method and device |
WO2022033312A1 (en) * | 2020-08-11 | 2022-02-17 | 北京芯海视界三维科技有限公司 | Image processing apparatus and terminal |
TWI827960B (en) * | 2020-08-11 | 2024-01-01 | 大陸商北京芯海視界三維科技有限公司 | Image processing devices and terminals |
CN112163589A (en) * | 2020-11-10 | 2021-01-01 | 中国科学院长春光学精密机械与物理研究所 | Image processing method, device, equipment and storage medium |
WO2022142219A1 (en) * | 2020-12-31 | 2022-07-07 | 上海商汤智能科技有限公司 | Image background processing method, apparatus, electronic device, and storage medium |
CN112954398A (en) * | 2021-02-07 | 2021-06-11 | 杭州朗和科技有限公司 | Encoding method, decoding method, device, storage medium and electronic equipment |
CN113947523A (en) * | 2021-10-18 | 2022-01-18 | 杭州研极微电子有限公司 | Method and device for replacing background image |
Also Published As
Publication number | Publication date |
---|---|
CN101777180B (en) | 2012-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101777180B (en) | Complex background real-time alternating method based on background modeling and energy minimization | |
CN102567727B (en) | Method and device for replacing background target | |
Berjón et al. | Real-time nonparametric background subtraction with tracking-based foreground update | |
CN109558806B (en) | Method for detecting high-resolution remote sensing image change | |
Dornaika et al. | Building detection from orthophotos using a machine learning approach: An empirical study on image segmentation and descriptors | |
CN106664417A (en) | Content adaptive background-foreground segmentation for video coding | |
US20130251260A1 (en) | Method and system for segmenting an image | |
CN104408429A (en) | Method and device for extracting representative frame of video | |
CN106548160A (en) | A kind of face smile detection method | |
CN110084782B (en) | Full-reference image quality evaluation method based on image significance detection | |
CN103198479B (en) | Based on the SAR image segmentation method of semantic information classification | |
CN104166983A (en) | Motion object real time extraction method of Vibe improvement algorithm based on combination of graph cut | |
Rahman et al. | Segmentation of color image using adaptive thresholding and masking with watershed algorithm | |
WO2019197021A1 (en) | Device and method for instance-level segmentation of an image | |
CN108596923A (en) | Acquisition methods, device and the electronic equipment of three-dimensional data | |
CN102034247A (en) | Motion capture method for binocular vision image based on background modeling | |
CN106991686A (en) | A kind of level set contour tracing method based on super-pixel optical flow field | |
CN106447656B (en) | Rendering flaw image detecting method based on image recognition | |
KR20100091864A (en) | Apparatus and method for the automatic segmentation of multiple moving objects from a monocular video sequence | |
Szemenyei et al. | Real-time scene understanding using deep neural networks for RoboCup SPL | |
CN114463173A (en) | Hyperspectral remote sensing image subgraph reconstruction method based on superpixel segmentation | |
US20230343017A1 (en) | Virtual viewport generation method and apparatus, rendering and decoding methods and apparatuses, device and storage medium | |
CN102486827B (en) | Extraction method of foreground object in complex background environment and apparatus thereof | |
Li et al. | Edge-based split-and-merge superpixel segmentation | |
CN104077788A (en) | Moving object detection method fusing color and texture information for performing block background modeling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20120704; Termination date: 20151223 |
EXPY | Termination of patent right or utility model |