CN105488777A - System and method for generating panoramic picture in real time based on moving foreground - Google Patents

System and method for generating panoramic picture in real time based on moving foreground

Info

Publication number
CN105488777A
CN105488777A (application CN201510784947.0A)
Authority
CN
China
Prior art keywords
image
color space
foreground
background
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510784947.0A
Other languages
Chinese (zh)
Inventor
吴沂楠
兰雨晴
梁堉
黄彬洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201510784947.0A
Publication of CN105488777A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The invention provides a system and a method for generating a panoramic picture in real time in the presence of a moving foreground. The system adopts a dynamic foreground removal algorithm based on background subtraction, so that the static background can be restored well from eight frames of images without being affected by moving objects; the approach is fast, requires little computation, and produces a clear background model. In combination with the panoramic camera mode of existing camera applications, an image stitching method based on SIFT feature points exploits the good invariance of SIFT feature points to scale, rotation, illumination and the like to solve the image matching problem during stitching, and uses the RANSAC algorithm to remove mismatched points, thereby improving matching efficiency and accuracy and yielding a panoramic picture with pedestrians removed.

Description

System and method for real-time generation of a panorama in the presence of a moving foreground
Technical field
The invention belongs to the technical field of digital photography, and in particular relates to a system and a method for generating a panoramic image in real time in the presence of a moving foreground.
Background art
With the rise in living standards and the improvement of China's holiday system, people enjoy more and longer vacations, and travelling has become the first choice for spending them, a fashionable pattern of consumption. Photographing the scenery at tourist attractions is inevitable, but the moving objects that weave through the scene, such as passing crowds and vehicles, greatly reduce the quality of the pictures: unintended figures "steal the show" and the scenery is blocked or spoiled. Existing camera applications offer no good solution to this problem.
In addition, while generating a panorama, existing real-time panorama generation systems are usually disturbed by moving objects such as crowds and vehicles in the surroundings, which "steal the show" and block the scenery, greatly reducing the quality of the panorama.
The existing remedy for this problem is essentially post-hoc retouching of the pictures (for example in Photoshop); digital cameras and mobile phone cameras currently have no function or application that can perform the processing in real time.
Summary of the invention
To solve the above technical problems, the main purpose of the present invention is to provide a real-time panorama generation method, a real-time panorama generation system for use in the presence of a moving foreground, and a dynamic foreground removal method based on background subtraction that is suitable for computation on mobile terminals.
The real-time panorama generation method in the presence of a moving foreground of the present invention is applied to the panoramic photography mode of a camera and comprises:
1) moving a mobile photographing device to an angle;
2) capturing several frames of images with the camera of the mobile photographing device, and, in step 12, performing a foreground removal operation on the several frames by means of a background extraction step;
wherein the background extraction step is a dynamic foreground removal algorithm based on background subtraction, which uses the conversion formula between the YUV and RGB color spaces:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
to convert each of the collected frames into a YUV image; a Gaussian statistical mean is then applied to the luminance of the obtained YUV images to segment out the luminance of the background target, the corresponding U and V components are added back, and the reconstruction formula between the YUV and RGB color spaces:
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
is used to restore the RGB background model and obtain a picture with the foreground removed.
Wherein the number of frames is at least 4, and preferably 8.
Wherein processing the luminance of the obtained YUV images with a Gaussian statistical mean comprises:
collecting statistics on the luminance values in the YUV color space, wherein within any time period t the luminance values of a point cluster in a very small concentration zone that approximates the background luminance of that point; and, to ensure the accuracy of the value, further selecting N consecutive time periods and averaging each period to give the final background luminance;
wherein the computing formula based on Gaussian statistics is:
B = \frac{1}{N} \sum_{k=1}^{N} B_k
where
B_k = [\mu_t, \delta_t^2]
\mu_0(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} f_i(x, y)
\delta_0^2(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} [f_i(x, y) - \mu_0(x, y)]^2
Wherein the real-time panorama generation method in the presence of a moving foreground further comprises the steps of:
3) repeating steps 1) and 2) to obtain pictures with the foreground removed at several different angles;
4) a picture stitching step, which comprises:
41) image preprocessing;
42) feature extraction;
this step uses the SIFT algorithm: feature detection is first performed in scale space, and the position of each feature point and the scale at which it lies are determined; the principal direction of the gradient in the neighborhood of the feature point is then taken as the directional attribute of that point, so that the operator is independent of scale and orientation; the generation of the SIFT feature vectors of an image comprises scale-space extremum detection, refinement of feature-point positions, assignment of feature-point orientations, generation of SIFT feature descriptors, and feature-point matching;
43) solving the transformation matrix;
this step uses the RANSAC algorithm to solve and refine the image transformation matrix H; assuming the images to be stitched are I(x, y) and I'(x', y'), the projective transformation relation between them is:
\begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = H \begin{pmatrix} x_i' \\ y_i' \\ 1 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_i' \\ y_i' \\ 1 \end{pmatrix}
where (x_i, y_i, 1) and (x_i', y_i', 1) are the homogeneous coordinates of the i-th pair of matched feature points on images I(x, y) and I'(x', y') respectively; the transformation matrix H has 8 degrees of freedom, and since every pair of matched feature points yields 2 linear equations, in theory only 4 pairs of matched feature points are needed to compute H; the error is generally assumed to follow a normal distribution, and a threshold d is set: matched pairs whose error exceeds d are classified as outliers, discarded and excluded from the solution of H, while matched pairs whose error is below d are kept as inliers; finally, the transformation matrix H with the largest number of inliers is taken as the initial value for nonlinear optimization;
44) image fusion, to obtain the panorama;
after the transformation matrix between the images to be stitched has been computed in the preceding step, the corresponding images can be transformed, the overlapping regions between the images determined, and the registered images to be fused composited into a new blank image to form the stitched picture.
The present invention also provides a real-time panorama generation system for use in the presence of a moving foreground, which comprises:
a mobile photographing device;
a background extraction and passerby removal module, which captures several frames of images with the mobile photographing device and obtains a picture with the foreground removed; and
a picture stitching module, which fuses the above pictures with the foreground removed to obtain and output a panorama.
The dynamic foreground removal method based on background subtraction suitable for computation on mobile terminals of the present invention comprises:
1) a preprocessing stage, in which the RGB color space is first converted into the YUV color space so that computation is performed only on the Y component,
using the conversion formula between the YUV and RGB color spaces:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
to convert each of the collected frames into a YUV image;
2) Gaussian statistical mean processing: statistics are collected on the luminance values in the YUV color space; within any time period t the luminance values of a point cluster in a very small concentration zone that approximates the background luminance of that point; to ensure the accuracy of the value, N consecutive time periods are further selected and each period is averaged to give the final background luminance;
wherein the computing formula based on Gaussian statistics is:
B = \frac{1}{N} \sum_{k=1}^{N} B_k
where
B_k = [\mu_t, \delta_t^2]
\mu_0(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} f_i(x, y)
\delta_0^2(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} [f_i(x, y) - \mu_0(x, y)]^2
3) restoring the RGB background model: after the above averaging, the luminance of the segmented background target is used, the corresponding U and V components are added back, and the reconstruction formula between the YUV and RGB color spaces:
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
is used to obtain the restored RGB background model.
The beneficial effects of the present invention are as follows:
For background extraction, the motion detection algorithm based on background subtraction overcomes the influence of background changes more effectively when extracting the background model, and has low algorithmic complexity, a small amount of computation and a short running time, so it is highly suitable for image processing on mobile terminals. For image stitching, the stitching method based on SIFT feature points has a high matching rate: features are extracted with the SIFT algorithm, the matched pairs are then further purified with the RANSAC method and the homography matrix is obtained, so matching precision is higher and the resulting stitched image is smoother. Combining these two aspects, the present invention obtains a panoramic picture with the foreground removed.
Brief description of the drawings
Fig. 1 is a schematic diagram of the input RGB image in an embodiment of the invention;
Fig. 2 is a schematic diagram of the output YUV image in an embodiment of the invention;
Fig. 3 is a schematic diagram of the 1st input frame awaiting background extraction;
Fig. 4 is a schematic diagram of the 4th input frame awaiting background extraction;
Fig. 5 is a schematic diagram of the 7th input frame awaiting background extraction;
Fig. 6 is a schematic diagram of the output after background extraction;
Fig. 7 is a schematic diagram of the feature points of the left image;
Fig. 8 is a schematic diagram of the feature points of the right image;
Fig. 9 is a schematic diagram of the feature-point matching;
Fig. 10 is a schematic diagram of the input image to be stitched (left);
Fig. 11 is a schematic diagram of the input image to be stitched (middle);
Fig. 12 is a schematic diagram of the input image to be stitched (right);
Fig. 13 is a schematic diagram of the stitched background output;
Fig. 14 is a schematic flow chart of a specific embodiment of the present invention.
Detailed description of the embodiments
To make the purpose, technical solution and beneficial effects of the present invention clearer, the invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.
Referring to Fig. 14, a schematic flow chart of a specific embodiment of the present invention: the invention proposes an innovative shooting mode, intended mainly for mobile photographing devices such as mobile phones, that combines a passerby removal mode with a panoramic photography mode, so that a clean landscape picture can be obtained even at attractions with heavy traffic and large crowds. It also proposes a background model extraction method that extracts the background faster and with better results and runs smoothly on existing mobile photographing devices. The picture stitching algorithm is optimized at the same time, so that the application scenarios of the present invention can be extended to settings such as circular shooting by unmanned aerial vehicles.
In this embodiment, a panoramic image is obtained from images taken at 3 different, partially overlapping angles.
When the system is used, the following steps are performed:
1) step 10: moving the mobile phone (or other mobile photographing device) to angle 1;
2) step 11: capturing 8 pictures with the camera; step 12: performing the foreground removal operation on the 8 pictures;
3) step 20: translating the mobile phone to angle 2 and ensuring that the images of angles 1 and 2 partially overlap;
4) step 21: capturing 8 pictures with the camera; step 22: performing the foreground removal operation on the 8 pictures;
5) step 30: capturing 8 pictures with the camera; step 32: performing the foreground removal operation on the 8 pictures;
6) step 41: stitching the three foreground-removed pictures obtained in steps 12, 22 and 32 to obtain the panorama of step 42.
The realization of the main functions comprises two parts:
1. using the several collected pictures (8 in this embodiment) to remove the foreground (passers-by, passing vehicles and the like), that is, to extract the background model;
2. stitching the foreground-removed pictures (background models).
Specifically:
1. Extraction of the background model
Whether the background model is extracted accurately directly determines the accuracy of the final stitching result. The background model is a descriptive model of the background image and is the basis for segmenting foreground targets by background subtraction. Background models are of two kinds, unimodal and multimodal: in the former, the color distribution at each background point is concentrated and can be described by a single probability distribution model; in the latter the distribution is more dispersed, as with rippling water, and must be described jointly by several distribution models.
At present, the common methods for obtaining a background model include optical flow analysis, RGB color space models, the median method, probability density estimation, single-Gaussian and multi-Gaussian models, and background subtraction. These models either yield a blurred background model, or require too much computation, or are easily affected by background changes. Because the data are to be processed on a mobile terminal, the extraction of the background model must allow for the temporal change of the environment while also reducing the amount of data to be processed. The present invention therefore proposes a preprocessing method that uses a YUV-type color space.
In an embodiment of the present invention, referring to Figs. 1 and 2, the background model is extracted by first performing a YUV color space conversion and then applying Gaussian statistical mean processing.
(1) YUV color space conversion step:
In general, for a relatively static background, such as a campus or an office building, the flow of people can be controlled manually, so a background model can be extracted quickly; but for places with heavy traffic and large crowds, such as highways, stations and airports, it is difficult to obtain a static background model. Given the characteristics of such scenes, it can be assumed that moving targets such as vehicles and people are in motion at every moment and do not stay in one position for long, whereas trees, public facilities and similar objects are relatively stationary. It is therefore assumed that the luminance value Y of any pixel in the background is θ (θ ∈ (0, 255)); when a moving object passes, the luminance at that position changes.
As shown in Fig. 1, in steps 12, 22 and 32 the captured RGB image is first input and converted into a YUV image, and the YUV image of Fig. 2 is output.
The conversion relation between the YUV and RGB color spaces is:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
where R, G, B ∈ [0, 255].
(2) Processing the luminance of the obtained YUV images with a Gaussian statistical mean:
Statistics are collected on the luminance values in the YUV color space: within any time period t, the luminance values of a point cluster in a very small concentration zone that approximates the background luminance of that point. To ensure the accuracy of the value, N consecutive time periods are further selected and each period is averaged to give the final background luminance.
The computing formula based on Gaussian statistics is:
B = \frac{1}{N} \sum_{k=1}^{N} B_k
where
B_k = [\mu_t, \delta_t^2]
\mu_0(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} f_i(x, y)
\delta_0^2(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} [f_i(x, y) - \mu_0(x, y)]^2
Through this averaging, the Y component of the color space effectively segments out the luminance of the background target; the corresponding U and V components are added back, and the reconstruction formula between the YUV and RGB color spaces:
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
can then be used to restore the RGB background model effectively, so that the background is extracted without being affected by moving targets.
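For illustration only, the background extraction step described above can be sketched in Python with OpenCV and NumPy as follows. In this sketch the Gaussian statistical mean over N time segments is approximated by a per-pixel temporal mean of the Y channel, and the background U and V components are taken as per-pixel temporal medians of the frames; these choices, and all function and variable names, are assumptions of the sketch rather than details fixed by the patent.

```python
# Minimal sketch of the background-extraction step (foreground removal).
# Assumes OpenCV (cv2) and NumPy; frames_bgr is a list of same-size BGR frames.
import cv2
import numpy as np

def remove_foreground(frames_bgr):
    """Estimate a static background picture from the ~8 frames taken at one angle."""
    # 1) Preprocessing: convert every captured RGB (BGR in OpenCV) frame to YUV.
    yuv = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2YUV) for f in frames_bgr]).astype(np.float32)

    # 2) Gaussian statistical mean on the luminance: average the Y channel over
    #    the frames, which suppresses pixels briefly covered by moving objects.
    y_bg = yuv[..., 0].mean(axis=0)

    # Chrominance of the background: per-pixel temporal median of U and V
    # (an assumption of this sketch, not a detail given in the patent).
    u_bg = np.median(yuv[..., 1], axis=0)
    v_bg = np.median(yuv[..., 2], axis=0)

    # 3) Restore the RGB background model via the YUV -> RGB reconstruction.
    yuv_bg = np.stack([y_bg, u_bg, v_bg], axis=-1).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(yuv_bg, cv2.COLOR_YUV2BGR)
```

With roughly eight frames per angle, as the embodiment suggests, the function returns one foreground-removed picture per angle for the later stitching step.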
Referring to Figs. 3 to 6, which illustrate this embodiment: they show, respectively, input images awaiting background extraction (the 1st, 4th and 7th frames) and the output image after background extraction (the picture with the foreground removed).
In steps 12, 22 and 32, the image of the first input frame is taken as the base, and the subsequent images are superimposed so that the foreground to be removed (moving passers-by or other moving objects) can be segmented out accurately. To ensure that moving objects are identified, several frames must be superimposed; because passers-by move slowly, at least 4 frames are captured each time in order to obtain enough information. Repeated experiments show that 8 frames work better, while too many images overload the processor of the mobile photographing device.
2. Stitching of the background models
To obtain the panorama of step 42, the scene must be photographed from several angles with partial overlap and the pictures then stitched. In this step, image stitching is the technique of spatially matching and aligning a group of mutually overlapping images and, after resampling, synthesizing them into a complete, high-definition wide-angle image that contains the information of the whole sequence; its two most important parts are image matching and image fusion.
For image matching, the present invention adopts an image stitching method based on SIFT feature points. The method exploits the good invariance of SIFT feature points to scale, rotation, illumination and the like to solve the image matching problem in image stitching, and uses the RANSAC algorithm to eliminate mismatched points, thereby improving matching efficiency and accuracy.
For image fusion, the present invention uses simple weighted pixel fusion to smooth the registration seam, and finally achieves seamless image stitching under varying illumination and scale.
In an embodiment of the present invention, the stitching of the background models comprises image preprocessing, feature extraction with the SIFT algorithm, solving the transformation matrix, and image fusion.
Wherein:
(1) Image preprocessing. The purpose of image preprocessing is to ensure the accuracy of image matching; it can be roughly divided into two parts, geometric correction of the image and image denoising.
When the lens turns to the next scene, the parameters set in the lens do not all stay constant, so mismatches can occur when the same region is matched in the captured images, and the images therefore need geometric correction. The present invention uses bilinear interpolation; experiments show that, for mobile terminals, it strikes a good balance between quality and running time.
Shooting also introduces a certain amount of noise, which would affect the image registration process, so noise must be suppressed during preprocessing to reduce its influence on registration. Since the dominant noise in the images is Gaussian noise, and the mean filter handles Gaussian noise well and runs fast, the present invention uses mean filtering for denoising.
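A rough sketch of this preprocessing, in the same Python/OpenCV setting as above: the geometric correction is reduced here to a bilinear resize to a common working resolution, and both the working resolution and the 3x3 box kernel are illustrative values rather than parameters given in the patent.

```python
# Sketch of the preprocessing step: bilinear resampling plus mean filtering.
import cv2

def preprocess(img, work_size=(1280, 720)):
    """Geometric normalisation plus denoising before feature matching."""
    # Bilinear interpolation for the geometric correction / resampling step.
    img = cv2.resize(img, work_size, interpolation=cv2.INTER_LINEAR)
    # Mean (box) filtering to suppress the mostly Gaussian sensor noise cheaply.
    return cv2.blur(img, (3, 3))
```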
(2) Feature extraction. Referring to Figs. 7, 8 and 9, which are, respectively, schematic diagrams of the feature points of the left image, the feature points of the right image, and the feature-point matching: the SIFT algorithm used for feature extraction first performs feature detection in scale space and determines the position of each feature point and the scale at which it lies; the principal direction of the gradient in the neighborhood of the feature point is then taken as the directional attribute of that point, so that the operator is independent of scale and orientation. The generation of the SIFT feature vectors of an image comprises scale-space extremum detection, refinement of feature-point positions, assignment of feature-point orientations, generation of SIFT feature descriptors, and feature-point matching.
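The SIFT detection and feature-point matching of two overlapping background images might look roughly as follows, assuming the opencv-python bindings (cv2.SIFT_create is available in OpenCV 4.4 and later); the brute-force matcher and the 0.75 ratio-test threshold are common defaults, not values taken from the patent.

```python
# Sketch of SIFT feature extraction and feature-point matching.
import cv2

def match_sift(img_left, img_right):
    """Detect SIFT feature points in both images and keep the good matches."""
    sift = cv2.SIFT_create()
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    kp_l, des_l = sift.detectAndCompute(gray_l, None)
    kp_r, des_r = sift.detectAndCompute(gray_r, None)

    # Brute-force matching with a k-nearest-neighbour search and Lowe's ratio
    # test (0.75) to discard ambiguous correspondences.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2)
            if m.distance < 0.75 * n.distance]
    return kp_l, kp_r, good
```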
(3) Solving the transformation matrix. To improve the precision of image registration, this embodiment uses the RANSAC algorithm to solve and refine the image transformation matrix H. Assuming the images to be stitched are I(x, y) and I'(x', y'), the projective transformation relation between them is:
\begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = H \begin{pmatrix} x_i' \\ y_i' \\ 1 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_i' \\ y_i' \\ 1 \end{pmatrix}
where (x_i, y_i, 1) and (x_i', y_i', 1) are the homogeneous coordinates of the i-th pair of matched feature points on images I(x, y) and I'(x', y') respectively. The transformation matrix H has 8 degrees of freedom, and since every pair of matched feature points yields 2 linear equations, in theory only 4 pairs of matched feature points are needed to compute H. The error is generally assumed to follow a normal distribution, and a threshold d is set: matched pairs whose error exceeds d are classified as outliers, discarded and excluded from the solution of H, while matched pairs whose error is below d are kept as inliers. Finally, the transformation matrix H with the largest number of inliers is taken as the initial value for nonlinear optimization.
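OpenCV's findHomography already implements a RANSAC scheme of this kind, with a reprojection threshold d, rejection of outliers and estimation from the largest inlier set, so a minimal sketch reusing the keypoints and matches from the previous block could read as follows; the threshold d = 4.0 pixels is an illustrative value, not one given in the patent.

```python
# Sketch of solving the transformation matrix H with RANSAC.
import cv2
import numpy as np

def estimate_homography(kp_l, kp_r, good_matches, d=4.0):
    """Estimate H mapping right-image points into the left image's frame."""
    src = np.float32([kp_r[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    # RANSAC: pairs whose reprojection error exceeds d are rejected as outliers;
    # H is estimated from the largest inlier set (inlier_mask marks the inliers).
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, d)
    return H, inlier_mask
```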
(4) Image fusion. After the transformation matrix between the images to be stitched has been computed as above, the corresponding images can be transformed, the overlapping regions between the images determined, and the registered images to be fused composited into a new blank image to form the stitched picture.
Because ordinary hand-held cameras choose the exposure automatically when taking pictures, there can be brightness differences between the input images, which makes obvious light-dark changes appear on either side of the seam of the stitched image. The seam therefore has to be treated during fusion.
A simple weighted smoothing algorithm is used to handle the stitching seam quickly, namely the function cvAddWeighted(const CvArr* src1, double alpha, const CvArr* src2, double beta, double gamma, CvArr* dst) of the OpenCV library is used to perform the fusion operation on the images to be stitched, where src1 is the first source array, alpha is the weight of the elements of the first array, src2 is the second source array, beta is the weight of the elements of the second array, dst is the output array, and gamma is a constant added to the sum.
The computation is:
dst(I) = src1(I)*alpha + src2(I)*beta + gamma
where all arrays must have the same type and the same size.
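Putting the transformation and the weighted fusion together gives the minimal sketch below; cv2.addWeighted is the Python counterpart of the cvAddWeighted call quoted above, while the double-width canvas, the equal 0.5 weights and the flat blend over the whole overlap are simplifications assumed for the sketch.

```python
# Sketch of warping one image with H and blending the overlap by weighted sum.
import cv2
import numpy as np

def fuse(img_left, img_right, H):
    """Warp the right image with H and blend the overlap by weighted averaging."""
    h, w = img_left.shape[:2]
    canvas_size = (2 * w, h)                       # (width, height) of the new blank image
    warped = cv2.warpPerspective(img_right, H, canvas_size)

    canvas = np.zeros((h, 2 * w, 3), dtype=np.uint8)
    canvas[:, :w] = img_left

    # dst = src1*alpha + src2*beta + gamma, here with alpha = beta = 0.5, gamma = 0.
    blended = cv2.addWeighted(canvas, 0.5, warped, 0.5, 0.0)

    # Use the blend only where both images contribute; elsewhere keep whichever
    # image covers the pixel.
    overlap = (canvas.sum(axis=2) > 0) & (warped.sum(axis=2) > 0)
    return np.where(overlap[..., None], blended, np.maximum(canvas, warped))
```

Chained pairwise over the three foreground-removed pictures of steps 12, 22 and 32, this gives a rough version of the panorama of step 42.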
As shown in Figs. 10 to 12, which are schematic diagrams of the input images to be stitched (left, middle and right), the stitched background output obtained with the method of the present invention is shown in Fig. 13.
The background extraction step of the present invention is a motion detection algorithm based on background subtraction; it overcomes the influence of background changes more effectively when extracting the background model, has low algorithmic complexity, a small amount of computation and a short running time, and is highly suitable for image processing on mobile terminals.
In the image stitching step, the stitching method based on SIFT feature points has a high matching rate: features are extracted with the SIFT algorithm, the matched pairs are then further purified with the RANSAC method and the homography matrix is obtained, so matching precision is higher and the resulting stitched image is smoother. The present invention thus restores the static background well from several frames (eight frames of images) without being affected by moving objects, and is fast, computationally light and produces a clear background model. The technical solution of the present invention enables a camera application to obtain a panoramic picture with the foreground removed.

Claims (7)

1. A real-time panorama generation method for use in the presence of a moving foreground, applied to the panoramic photography mode of a camera, characterized by comprising:
1) moving a mobile photographing device to an angle;
2) capturing several frames of images with the camera of the mobile photographing device, and, in step 12, performing a foreground removal operation on the several frames by means of a background extraction step;
wherein the background extraction step is a dynamic foreground removal algorithm based on background subtraction, which uses the conversion formula between the YUV and RGB color spaces:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
so that, through the YUV color space conversion, each of the collected frames is converted into a YUV image; a Gaussian statistical mean is then applied to the luminance of the obtained YUV images to segment out the luminance of the background target, the corresponding U and V components are added back, and the reconstruction formula between the YUV and RGB color spaces:
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
is used to restore the RGB background model and obtain a picture with the foreground removed.
2. The real-time panorama generation method in the presence of a moving foreground according to claim 1, characterized in that the number of frames is at least 4.
3. The real-time panorama generation method in the presence of a moving foreground according to claim 1, characterized in that the number of frames is 8.
4. The real-time panorama generation method in the presence of a moving foreground according to claim 1, characterized in that processing the luminance of the obtained YUV images with a Gaussian statistical mean comprises:
collecting statistics on the luminance values in the YUV color space, wherein within any time period t the luminance values of a point cluster in a very small concentration zone that approximates the background luminance of that point; and, to ensure the accuracy of the value, further selecting N consecutive time periods and averaging each period to give the final background luminance;
wherein the computing formula based on Gaussian statistics is:
B = \frac{1}{N} \sum_{k=1}^{N} B_k
where
B_k = [\mu_t, \delta_t^2]
\mu_0(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} f_i(x, y)
\delta_0^2(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} [f_i(x, y) - \mu_0(x, y)]^2.
5. The real-time panorama generation method in the presence of a moving foreground according to claim 1, characterized in that it further comprises the steps of:
3) repeating steps 1) and 2) to obtain pictures with the foreground removed at several different angles;
4) a picture stitching step, which comprises:
41) image preprocessing;
42) feature extraction;
wherein this step uses the SIFT algorithm: feature detection is first performed in scale space, and the position of each feature point and the scale at which it lies are determined; the principal direction of the gradient in the neighborhood of the feature point is then taken as the directional attribute of that point, so that the operator is independent of scale and orientation; the generation of the SIFT feature vectors of an image comprises scale-space extremum detection, refinement of feature-point positions, assignment of feature-point orientations, generation of SIFT feature descriptors, and feature-point matching;
43) solving the transformation matrix;
wherein this step uses the RANSAC algorithm to solve and refine the image transformation matrix H; assuming the images to be stitched are I(x, y) and I'(x', y'), the projective transformation relation between them is:
\begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = H \begin{pmatrix} x_i' \\ y_i' \\ 1 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_i' \\ y_i' \\ 1 \end{pmatrix}
wherein (x_i, y_i, 1) and (x_i', y_i', 1) are the homogeneous coordinates of the i-th pair of matched feature points on images I(x, y) and I'(x', y') respectively; the transformation matrix H has 8 degrees of freedom, and since every pair of matched feature points yields 2 linear equations, in theory only 4 pairs of matched feature points are needed to compute H; the error is generally assumed to follow a normal distribution, and a threshold d is set: matched pairs whose error exceeds d are classified as outliers, discarded and excluded from the solution of H, while matched pairs whose error is below d are kept as inliers; finally, the transformation matrix H with the largest number of inliers is taken as the initial value for nonlinear optimization;
44) image fusion, to obtain the panorama;
wherein, after the transformation matrix between the images to be stitched has been computed in the preceding step, the corresponding images are transformed, the overlapping regions between the images are determined, and the registered images to be fused are composited into a new blank image to form the stitched picture.
6. A real-time panorama generation system for use in the presence of a moving foreground that applies the method of any one of the preceding claims, characterized in that the system comprises:
a mobile photographing device;
a background extraction and passerby removal module, which captures several frames of images with the mobile photographing device and obtains a picture with the foreground removed; and
a picture stitching module, which fuses the above pictures with the foreground removed to obtain and output a panorama.
7. A dynamic foreground removal method based on background subtraction suitable for computation on mobile terminals, characterized by comprising:
1) a preprocessing stage, in which the RGB color space is first converted into the YUV color space so that computation is performed only on the Y component,
using the conversion formula between the YUV and RGB color spaces:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
so that, through the YUV color space conversion, each of the collected frames is converted into a YUV image;
2) Gaussian statistical mean processing: statistics are collected on the luminance values in the YUV color space; within any time period t the luminance values of a point cluster in a very small concentration zone that approximates the background luminance of that point; to ensure the accuracy of the value, N consecutive time periods are further selected and each period is averaged to give the final background luminance;
wherein the computing formula based on Gaussian statistics is:
B = \frac{1}{N} \sum_{k=1}^{N} B_k
where
B_k = [\mu_t, \delta_t^2]
\mu_0(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} f_i(x, y)
\delta_0^2(x, y) = \frac{1}{t} \sum_{i=0}^{t-1} [f_i(x, y) - \mu_0(x, y)]^2;
3) restoring the RGB background model,
wherein, after the above averaging, the luminance of the segmented background target is used, the corresponding U and V components are added back, and the reconstruction formula between the YUV and RGB color spaces:
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U
is used to obtain the restored RGB background model.
CN201510784947.0A 2015-04-22 2015-11-16 System and method for generating panoramic picture in real time based on moving foreground Pending CN105488777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510784947.0A CN105488777A (en) 2015-04-22 2015-11-16 System and method for generating panoramic picture in real time based on moving foreground

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2015101948769 2015-04-22
CN201510194876 2015-04-22
CN201510784947.0A CN105488777A (en) 2015-04-22 2015-11-16 System and method for generating panoramic picture in real time based on moving foreground

Publications (1)

Publication Number Publication Date
CN105488777A 2016-04-13

Family

ID=55675743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510784947.0A Pending CN105488777A (en) 2015-04-22 2015-11-16 System and method for generating panoramic picture in real time based on moving foreground

Country Status (1)

Country Link
CN (1) CN105488777A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877140A (en) * 2009-12-18 2010-11-03 北京邮电大学 Panorama-based panoramic virtual tour method
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
杨珺 et al., "Traffic background extraction based on an improved single Gaussian model method", 《光子学报》 (Acta Photonica Sinica) *
金克琼 et al., "Moving target detection based on adaptive Gaussian background modeling", 《中国科技论文在线》 (Sciencepaper Online) *
韩亚伟, "Research on background extraction of video traffic flow and tracking and monitoring of moving targets", 《中国优秀硕士学位论文全文数据库信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204456A (en) * 2016-07-18 2016-12-07 电子科技大学 Panoramic video sequences estimation is crossed the border folding searching method
CN107423409B (en) * 2017-07-28 2020-03-31 维沃移动通信有限公司 Image processing method, image processing device and electronic equipment
CN107423409A (en) * 2017-07-28 2017-12-01 维沃移动通信有限公司 A kind of image processing method, image processing apparatus and electronic equipment
CN108921848A (en) * 2018-09-29 2018-11-30 长安大学 Bridge Defect Detecting device and detection image joining method based on more mesh cameras
CN110377259A (en) * 2019-07-19 2019-10-25 深圳前海达闼云端智能科技有限公司 A kind of hidden method of equipment, electronic equipment and storage medium
CN110377259B (en) * 2019-07-19 2023-07-07 深圳前海达闼云端智能科技有限公司 Equipment hiding method, electronic equipment and storage medium
CN110443771A (en) * 2019-08-16 2019-11-12 同济大学 It is vehicle-mounted to look around panoramic view brightness and colour consistency method of adjustment in camera system
CN110443771B (en) * 2019-08-16 2023-07-21 同济大学 Method for adjusting consistency of brightness and color of annular view in vehicle-mounted annular view camera system
CN110796629A (en) * 2019-10-28 2020-02-14 杭州涂鸦信息技术有限公司 Image fusion method and system
CN110796629B (en) * 2019-10-28 2022-05-17 杭州涂鸦信息技术有限公司 Image fusion method and system
CN111062984A (en) * 2019-12-20 2020-04-24 广州市鑫广飞信息科技有限公司 Method, device and equipment for measuring area of video image region and storage medium
CN111062984B (en) * 2019-12-20 2024-03-15 广州市鑫广飞信息科技有限公司 Method, device, equipment and storage medium for measuring area of video image area
CN113139480A (en) * 2021-04-28 2021-07-20 艾拉物联网络(深圳)有限公司 Gesture detection method based on improved VIBE
CN116246085A (en) * 2023-03-07 2023-06-09 北京甲板智慧科技有限公司 Azimuth generating method and device for AR telescope
CN116246085B (en) * 2023-03-07 2024-01-30 北京甲板智慧科技有限公司 Azimuth generating method and device for AR telescope


Legal Events

Code Title/Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20160413)