CN102831580B - Method for restoring image shot by cell phone based on motion detection


Publication number
CN102831580B
Authority
CN
China
Legal status
Expired - Fee Related
Application number
CN201210245614.7A
Other languages
Chinese (zh)
Other versions
CN102831580A (en)
Inventor
田玉敏
唐铭谦
蒙安魁
李玥江
冯艳
蔡苗苗
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201210245614.7A
Publication of CN102831580A
Application granted
Publication of CN102831580B


Abstract

The invention discloses a method for restoring images shot by a cell phone based on motion detection. It solves the prior art's problem of incorrect image restoration in scenes with rich texture, complex structure, and distinct contours. The method comprises the following steps: capturing preparatory frames during a short period before the shutter frame is taken; training a mixture-of-Gaussians background model on these frames to obtain a background model close to the real scene; performing contour extraction, false-detection correction, and moving-target sequence extraction in a motion detection module to obtain the sequence of moving targets to be erased by the user; and finally, within the repair region selected by the user, replacing the pixel values inside each target with the pixel values at the corresponding positions in the background model frame, completing the restoration of the image information and yielding a shutter-frame image free of moving obstacles. The method correctly restores the target-erasure region of images taken in such scenes, and is applicable to photography on smart phones.

Description

Method for restoring images shot by a cell phone based on motion detection
Technical field
The invention belongs to the field of video and image processing technology and relates to image restoration. It is a method that uses target detection to erase a moving target that has suddenly intruded into an image, and is mainly used for restoring images shot by smart phones.
Background art
With the widespread use of smart phones, users place higher demands on their camera functions. When a photo is taken in a crowded environment, an uninvited passer-by often intrudes into the picture, making the photo look cluttered and producing motion smear that spoils its overall composition. Such intruders therefore need to be erased as unwanted secondary objects.
Traditional techniques for erasing unwanted objects estimate the pixel values that should fill the erased region from the pixel information surrounding it. There are three basic approaches:
1. Methods based on partial differential equations. These use the edge information of the region to be repaired to determine the information to diffuse and the diffusion direction, and diffuse anisotropically inward from the region boundary. They work for mending gap-like defects but perform poorly on larger image blocks.
2. Methods based on image decomposition. These decompose the image into a structure part and a texture part; the structure part is repaired with a basic image-restoration scheme and the texture part is filled by texture synthesis. They are prone to leaving local regions unrepaired.
3. Sample-based texture synthesis. This picks a pixel on the boundary of the region to be repaired, takes it as the center of a suitably sized texture block chosen according to the textural characteristics of the image, then searches around the repair region for the closest-matching texture block and substitutes it. This method struggles when contours are distinct and the texture is rich.
All three basic methods above repair the image from single-frame information alone, and all can produce spurious repairs; when the texture is rich in particular, they generate wrong texture information and the repair result is unsatisfactory.
Summary of the invention
The object of the invention is to address the above deficiencies of the prior art by proposing a method for restoring images shot by a cell phone based on motion detection, which quickly deletes moving obstacles other than the shooting subject from the shutter-frame image, repairs the image information accurately and effectively, and improves the quality of the photograph.
To achieve the above object, the technical scheme of the invention comprises the following steps:
(1) Use the mixture-of-Gaussians background modeling method to build a Gaussian background model of the scene photographed by the cell phone; match the current frame of the cell phone's shot frame sequence against the Gaussian background model frame by differencing to perform motion-foreground detection and obtain the motion foreground map;
(2) Extract moving targets from the detected motion foreground map and generate the detected-blob list:
(2a) Detect the outermost contours of the targets:
(2a1) Starting from the upper-left corner of the motion foreground image, search for a new boundary point in left-to-right, top-to-bottom order; if none is found, the outermost-contour detection of all targets in the foreground image is complete; otherwise, take this new boundary point as the initial boundary point S, complete the outermost-contour detection of the target containing S by contour tracing, and let S be the current boundary point D;
(2a2) Record the coordinates of the current boundary point D and, following the boundary chain-code direction of D, find its adjacent boundary point N;
(2a3) Check whether an adjacent boundary point N was found; if not, the outermost-contour detection of the target containing S is complete, so return to step (2a1); otherwise, check whether N coincides with the initial boundary point S: if it does, the outermost-contour detection of the target containing S is complete, so return to step (2a1); otherwise take N as the current boundary point D and return to step (2a2);
(2b) Compute the feature information of each detected outermost target contour:
From the coordinates of the points on each target's outermost contour, obtain three features of the target, its centroid $(\bar{i}, \bar{j})$, width $W$, and height $H$:
$$\bar{i} = \frac{\sum_{n=1}^{N} p_n^i\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}, \qquad \bar{j} = \frac{\sum_{n=1}^{N} p_n^j\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)},$$
$$W = 4 \times \sqrt{\frac{\sum_{n=1}^{N} (p_n^i)^2\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)} - \left[\frac{\sum_{n=1}^{N} p_n^i\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}\right]^2},$$
$$H = 4 \times \sqrt{\frac{\sum_{n=1}^{N} (p_n^j)^2\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)} - \left[\frac{\sum_{n=1}^{N} p_n^j\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}\right]^2},$$
where $p_n^i$ and $p_n^j$ are respectively the x-axis and y-axis coordinates of the $n$-th point, $f(p_n^i, p_n^j)$ is the pixel gray value of the $n$-th point, and $N$ is the number of pixels on the target's outermost contour;
(2c) Aggregate targets according to the proximity principle, using the outermost-contour feature information obtained above:
For any two targets obtained in step (2a), say target A and target B, compute their centroid distance $Dis_{AB}$:
$$Dis_{AB} = \sqrt{(\bar{i}_A - \bar{i}_B)^2 + (\bar{j}_A - \bar{j}_B)^2},$$
If $Dis_{AB}$ is less than the aggregation threshold obtained by scaling the sizes of A and B with the centroid-distance coefficient $\alpha_{dis}$, merge target A and target B into a target C, delete targets A and B, and obtain the centroid $(\bar{i}_C, \bar{j}_C)$, width $W_C$, and height $H_C$ of target C as:
$$\bar{i}_C = \frac{\bar{i}_A + \bar{i}_B}{2}, \qquad \bar{j}_C = \frac{\bar{j}_A + \bar{j}_B}{2},$$
$$W_C = \frac{W_A + W_B}{2},$$
$$H_C = \frac{H_A + H_B}{2},$$
where $\bar{i}_A$ and $\bar{j}_A$ are respectively the x-axis and y-axis coordinates of target A's centroid, $\bar{i}_B$ and $\bar{j}_B$ are those of target B's centroid, $W_A$ and $H_A$ are the width and height of target A, $W_B$ and $H_B$ are the width and height of target B, and $\alpha_{dis}$ is the centroid-distance coefficient, with range $1.2 \le \alpha_{dis} \le 1.4$;
(2d) Describe the outermost-contour feature information of each target as a blob, arrange all blobs obtained from the current frame in order, and generate the detected-blob list;
(3) Track the moving blobs in the detected-blob list by feature matching and update the tracked-blob list:
(3a) If the tracked-blob list is empty, add the detected-blob list of the current frame directly to the tracked-blob list;
(3b) If the tracked-blob list is not empty, traverse the detected-blob list of the current frame and the tracked-blob list, and match blobs by computing the similarity between each blob in the detected-blob list and each blob in the tracked-blob list;
(4) Use the tracked-blob list, the motion foreground image, and the background model to complete the image-information repair of the moving targets.
The invention has the following advantages:
1) By building a background model of the photographed scene and achieving the repair through replacement of shutter-frame pixels with background-model pixels, the invention can correctly repair the target-erasure region even in scenes with rich texture, complex structure, and distinct contours.
2) By detecting and tracking targets in the photographed scene through feature matching between the detected-blob list and the tracked-blob list, and re-judging newly detected blobs, the invention can detect moving targets accurately even when a moving target casts a shadow or suddenly becomes stationary.
Brief description of the drawings
Fig. 1 is the overall flowchart of the invention;
Fig. 2 is the sub-flowchart of the motion-foreground detection in Fig. 1;
Fig. 3 is the sub-flowchart of extracting moving targets and generating the detected-blob list in Fig. 1;
Fig. 4 is the sub-flowchart of updating the tracked-blob list in Fig. 1;
Fig. 5 is the sub-flowchart of the image-information repair in Fig. 1;
Fig. 6 shows an original image used in the simulation of the invention and the extracted motion foreground image;
Fig. 7 shows the original image and the contour-tracing result of the contour-tracing simulation;
Fig. 8 shows the shutter frame used in the simulation of the invention and the result after repair.
Detailed implementation
The invention is described in further detail below with reference to the accompanying drawings.
With reference to Fig. 1, the specific implementation steps of the invention are as follows:
Step 1: use the mixture-of-Gaussians background modeling method to build a Gaussian background model of the scene photographed by the cell phone.
Over a short time, the value of a natural scene background pixel obeys a Gaussian random probability distribution, and the same pixel may take different values at different times, so the background at a pixel is characterized by several means and standard deviations. The invention describes the scene background by establishing M Gaussian models for each pixel, where the value of M is chosen according to scene complexity and ranges from 3 to 5.
The mixture Gaussian background model is defined mathematically as follows:
The probability density function $f(X_t)$ can be expressed by the following weighted sum of M Gaussian model functions:
$$f(X_t) = \sum_{i=1}^{M} \omega_{i,t} \times \eta(X_t, \mu_{i,t}, \Sigma_{i,t})$$
$$\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_{i,t}|^{1/2}}\, e^{-\frac{1}{2}(X_t - \mu_{i,t})^T \Sigma_{i,t}^{-1} (X_t - \mu_{i,t})}$$
where $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the $i$-th Gaussian distribution at time $t$, with mean $\mu_{i,t}$ and covariance matrix $\Sigma_{i,t}$; $\omega_{i,t}$ is the weight of each Gaussian model; $X_t$ is the pixel value at time $t$; and $n$ is the dimension of $X_t$.
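To make the definition concrete, the sketch below evaluates $f(X_t)$ for a single grayscale pixel, so that $n = 1$ and the covariance matrix reduces to a scalar variance; the parameter values at the end are hypothetical and only illustrate the call, they are not taken from the patent.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density eta(x, mu, sigma^2) for a grayscale pixel (n = 1)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def mixture_density(x, weights, means, sigmas):
    """f(X_t) = sum_i w_i * eta(X_t, mu_i, sigma_i^2) over the M Gaussian models."""
    return sum(w * gaussian_pdf(x, mu, s) for w, mu, s in zip(weights, means, sigmas))

# Hypothetical example: M = 3 models for one pixel, weights summing to 1.
weights = [0.7, 0.2, 0.1]
means = [120.0, 60.0, 200.0]
sigmas = [10.0, 10.0, 10.0]
print(mixture_density(118.0, weights, means, sigmas))  # density of observing gray value 118
```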
According to the above principle, the concrete steps for building the Gaussian background model of the photographed scene are as follows:
(1a) Establish M Gaussian models for each pixel of the video frame image. Initialize the mean $\mu_0$ of the first Gaussian model to the pixel value of the first frame and the means $\mu_i$ ($0 < i < M$) of the remaining models to random values; initialize all standard deviations $\sigma$ to 10. Set the weight $\omega_0$ of the first Gaussian model according to the update coefficient $\alpha$, whose range is $0.001 < \alpha < 0.01$, and set the weights $\omega_i$ of the remaining models correspondingly;
(1b) For each of K frames, compute the absolute difference $\Delta_i = |v - \mu_i|$ between the color value $v$ of each pixel of the frame image and the mean $\mu_i$ of each of its corresponding Gaussian models, and train the mean $\mu_i$, standard deviation $\sigma_i$, and weight $\omega_i$ of each Gaussian model by the following formulas:
Let the learning rate be $\delta = \frac{1}{K}$, $20 \le K \le 50$;
Weight training:
$$\omega_i = \begin{cases}(1-\delta)\,\omega_i + \delta, & \Delta_i < 3\sigma_i\\ (1-\delta)\,\omega_i, & \Delta_i \ge 3\sigma_i\end{cases}$$
Mean training: $\mu_i = (1-\rho)\,\mu_i + \rho X_t$,
Standard-deviation training: $\sigma_i = \sqrt{(1-\rho)\,\sigma_i^2 + \rho\,(X_t - \mu_i)^2}$,
where $X_t$ is the pixel color value of frame $t$ and $\rho$ is an intermediate variable;
(1c) After training ends, compute for each pixel the average $\bar{\mu}$ of the means of its models, then sort the M Gaussian models of each pixel by weight $\omega_i$ in descending order; replace the mean of the M-th Gaussian model with the mean of the (M-1)-th, replace the mean of the (M-1)-th with the mean of the (M-2)-th, and so on, finally replacing the mean of the first Gaussian model with $\bar{\mu}$.
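The training procedure (1a)-(1c) can be sketched per pixel with NumPy as follows. Two points are assumptions rather than statements of the patent: the exact weight-initialization expressions are not recoverable from the text, so the sketch puts a dominant weight on the first model, and the intermediate variable $\rho$ is simply taken equal to the learning rate $\delta$; the means and standard deviations are updated only for matched models, a common implementation choice.

```python
import numpy as np

def train_background(frames, M=3, K=30, alpha=0.005):
    """Steps (1a)-(1c): train M Gaussian models per pixel on the first K grayscale frames.

    frames: array of shape (K, H, W) with float gray values.
    """
    H, W = frames[0].shape
    rng = np.random.default_rng(0)
    mu = rng.uniform(0, 255, size=(M, H, W))  # random means for models 1..M-1
    mu[0] = frames[0]                         # first model: first-frame pixel values
    sigma = np.full((M, H, W), 10.0)          # all standard deviations start at 10
    omega = np.full((M, H, W), alpha)         # hypothetical initialization: small weights,
    omega[0] = 1.0 - (M - 1) * alpha          # dominant weight on the first model

    delta = 1.0 / K                           # learning rate, 20 <= K <= 50
    rho = delta                               # assumption: rho taken equal to delta
    for X in frames[:K]:
        diff = np.abs(X[None] - mu)           # Delta_i = |v - mu_i| for every model
        match = diff < 3.0 * sigma
        omega = np.where(match, (1 - delta) * omega + delta, (1 - delta) * omega)
        mu = np.where(match, (1 - rho) * mu + rho * X[None], mu)
        sigma = np.where(match,
                         np.sqrt((1 - rho) * sigma ** 2 + rho * (X[None] - mu) ** 2),
                         sigma)

    # Step (1c): sort by weight, shift the means down one slot, and put the
    # per-pixel mean of the means into the first model.
    mu_bar = mu.mean(axis=0)
    order = np.argsort(-omega, axis=0)
    mu = np.take_along_axis(mu, order, axis=0)
    sigma = np.take_along_axis(sigma, order, axis=0)
    omega = np.take_along_axis(omega, order, axis=0)
    mu[1:] = mu[:-1].copy()
    mu[0] = mu_bar
    return mu, sigma, omega
```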
Step 2: match the current frame of the cell phone's shot frame sequence against the Gaussian background model frame by differencing to perform motion-foreground detection and obtain the motion foreground map.
With reference to Fig. 2, this step is implemented as follows:
(2a) Compute the absolute difference $\Delta_i = |v - \mu_i|$ between the color value $v$ of each pixel of the current frame image and the mean $\mu_i$ of each of its corresponding Gaussian models, and compare it with the difference threshold $T_c = 3 \times \sigma_i$. If some $\Delta_i < T_c$, update the mean and standard deviation of the Gaussian model corresponding to that $\Delta_i$; otherwise, replace the mean of the lowest-weight Gaussian model with $v$. In either case, update the weights of all Gaussian models. The concrete update is as follows:
Let $\gamma$ be an intermediate variable determined by the update coefficient $\alpha$ of step (1a) and the weight $\omega_i$ of the $i$-th Gaussian model;
Mean update: $\mu_i = (1-\gamma)\,\mu_i + \gamma X$,
Standard-deviation update: $\sigma_i = \sqrt{(1-\gamma)\,\sigma_i^2 + \gamma\,(X - \mu_i)^2}$,
Weight update:
$$\omega_i = \begin{cases}(1-\alpha)\,\omega_i + \alpha, & \Delta_i < T_c\\ (1-\alpha)\,\omega_i, & \Delta_i \ge T_c\end{cases}$$
where $X$ is the pixel color value of the current frame image;
(2b) Sort the M Gaussian models of each pixel by weight in descending order and sum the weights of the Gaussian models from front to back, $U = \sum_j \omega_j$, where $\omega_j$ is the weight of the $j$-th sorted Gaussian model, $0 \le j \le B$. When the running sum $U$ exceeds the preset weight-sum threshold $T_w$, $0.5 < T_w < 0.8$, record the number B of weights that participated in the summation. Among the first B-1 Gaussian models, if some mean $\mu_j$ differs from the color value $v$ by less than the difference threshold $T_c$, i.e. $|v - \mu_j| < T_c$, the pixel of the current frame is judged to be a background point; otherwise it is judged to be a foreground point. This completes the motion-foreground detection and yields the motion foreground map, as shown in Fig. 6, where Fig. 6(a) is the current frame image and Fig. 6(b) is the detected motion foreground map.
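A sketch of the per-pixel decision of step (2b), assuming the three model arrays are already sorted by weight in descending order (as produced by the training sketch above) and that the weights of each pixel sum to about 1, so the running sum always crosses $T_w$:

```python
import numpy as np

def classify_foreground(X, mu, sigma, omega, T_w=0.7):
    """Step (2b): boolean foreground mask for a grayscale frame X of shape (H, W).

    mu, sigma, omega: arrays of shape (M, H, W), sorted by weight descending.
    """
    M = mu.shape[0]
    cum = np.cumsum(omega, axis=0)           # running weight sum U
    B = np.argmax(cum > T_w, axis=0) + 1     # number of weights summed when U first exceeds T_w
    diff = np.abs(X[None] - mu)
    near_model = diff < 3.0 * sigma          # |v - mu_j| < T_c = 3 * sigma_j
    idx = np.arange(M)[:, None, None]
    considered = idx < (B - 1)[None]         # only the first B-1 models are checked
    background = np.any(near_model & considered, axis=0)
    return ~background                       # True where the pixel is a foreground point
```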
Step 3: extract moving targets from the detected motion foreground map and generate the detected-blob list.
With reference to Fig. 3, this step is implemented as follows:
(3a) Detect the outermost contours of the targets:
(3a1) Starting from the upper-left corner of the motion foreground image, search for a new boundary point in left-to-right, top-to-bottom order; if none is found, the outermost-contour detection of all targets in the foreground image is complete; otherwise, take this new boundary point as the initial boundary point S, complete the outermost-contour detection of the target containing S by contour tracing, and let S be the current boundary point D;
(3a2) Record the coordinates of the current boundary point D and, following the boundary chain-code direction of D, find its adjacent boundary point N. With reference to Fig. 7, the adjacent boundary point of the current boundary point is found as follows:
If the current point is the initial boundary point, take chain-code direction 4 as the initial direction and rotate counterclockwise around the current boundary point to find a boundary point in its eight-neighborhood; otherwise, rotate the direction in which the current boundary point was found counterclockwise by one step, take the result as the initial direction, and rotate counterclockwise around the current point to find a boundary point in its eight-neighborhood. For example, if the current point was found in direction 5, take direction 6 as the initial direction and search counterclockwise around the current point for a boundary point in its eight-neighborhood;
(3a3) Check whether an adjacent boundary point N was found; if not, the outermost-contour detection of the target containing S is complete, so return to step (3a1); otherwise, check whether N coincides with the initial boundary point S: if it does, the outermost-contour detection of the target containing S is complete, so return to step (3a1); otherwise take N as the current boundary point D and return to step (3a2);
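The neighbor search of steps (3a1)-(3a3) amounts to Moore-style contour tracing with an 8-direction chain code. The sketch below assumes a counterclockwise direction numbering starting at "east"; the exact numbering of Fig. 7 is not recoverable from the text, and the trace stops when it returns to S, a simplified stopping criterion:

```python
# Chain-code direction -> (row, col) offset for the eight neighbors
# (assumed counterclockwise ordering starting at "east").
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def trace_contour(fg, start):
    """Trace one outermost contour from the initial boundary point S = start.

    fg: 2-D boolean foreground mask (NumPy array); start: (row, col) tuple.
    """
    contour = [start]
    current = start
    prev_dir = 3                        # so the first probe uses direction 4, as in (3a2)
    for _ in range(4 * fg.size):        # safety bound for this sketch
        found = None
        for k in range(8):
            d = (prev_dir + 1 + k) % 8      # rotate counterclockwise, starting one step
            r = current[0] + OFFSETS[d][0]  # past the direction that found the current point
            c = current[1] + OFFSETS[d][1]
            if 0 <= r < fg.shape[0] and 0 <= c < fg.shape[1] and fg[r, c]:
                found, prev_dir = (r, c), d
                break
        if found is None or found == start:  # isolated point, or the contour closed at S
            break
        contour.append(found)
        current = found
    return contour
```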
(3b) Compute the feature information of each detected outermost target contour:
From the coordinates of the points on each target's outermost contour, obtain three features of the target, its centroid $(\bar{i}, \bar{j})$, width $W$, and height $H$:
$$\bar{i} = \frac{\sum_{n=1}^{N} p_n^i\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}, \qquad \bar{j} = \frac{\sum_{n=1}^{N} p_n^j\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)},$$
$$W = 4 \times \sqrt{\frac{\sum_{n=1}^{N} (p_n^i)^2\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)} - \left[\frac{\sum_{n=1}^{N} p_n^i\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}\right]^2},$$
$$H = 4 \times \sqrt{\frac{\sum_{n=1}^{N} (p_n^j)^2\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)} - \left[\frac{\sum_{n=1}^{N} p_n^j\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}\right]^2},$$
where $p_n^i$ and $p_n^j$ are respectively the x-axis and y-axis coordinates of the $n$-th point, $f(p_n^i, p_n^j)$ is the pixel gray value of the $n$-th point, and $N$ is the number of pixels on the target's outermost contour;
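These three formulas transcribe directly into code; the square roots in $W$ and $H$ follow the reconstruction above, which reads them as four times the gray-value-weighted standard deviation of each coordinate:

```python
import numpy as np

def contour_features(points, gray):
    """Centroid (i_bar, j_bar), width W, and height H of one outermost contour.

    points: (N, 2) integer array of contour point coordinates;
    gray: the grayscale image supplying the weights f(p^i, p^j).
    """
    f = gray[points[:, 0], points[:, 1]].astype(float)  # f(p_n^i, p_n^j)
    total = f.sum()
    i_bar = (points[:, 0] * f).sum() / total
    j_bar = (points[:, 1] * f).sum() / total
    # Weighted variance of each coordinate; W and H span +/-2 standard deviations.
    var_i = (points[:, 0] ** 2 * f).sum() / total - i_bar ** 2
    var_j = (points[:, 1] ** 2 * f).sum() / total - j_bar ** 2
    W = 4.0 * np.sqrt(max(var_i, 0.0))
    H = 4.0 * np.sqrt(max(var_j, 0.0))
    return i_bar, j_bar, W, H
```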
(3c) Aggregate targets according to the proximity principle, using the outermost-contour feature information obtained above (see the sketch after this step):
For any two targets obtained in step (3a), say target A and target B, compute their centroid distance $Dis_{AB}$:
$$Dis_{AB} = \sqrt{(\bar{i}_A - \bar{i}_B)^2 + (\bar{j}_A - \bar{j}_B)^2},$$
If $Dis_{AB}$ is less than the aggregation threshold obtained by scaling the sizes of A and B with the centroid-distance coefficient $\alpha_{dis}$, merge target A and target B into a target C, delete targets A and B, and obtain the centroid $(\bar{i}_C, \bar{j}_C)$, width $W_C$, and height $H_C$ of target C as:
$$\bar{i}_C = \frac{\bar{i}_A + \bar{i}_B}{2}, \qquad \bar{j}_C = \frac{\bar{j}_A + \bar{j}_B}{2},$$
$$W_C = \frac{W_A + W_B}{2},$$
$$H_C = \frac{H_A + H_B}{2},$$
where $\bar{i}_A$ and $\bar{j}_A$ are respectively the x-axis and y-axis coordinates of target A's centroid, $\bar{i}_B$ and $\bar{j}_B$ are those of target B's centroid, $W_A$ and $H_A$ are the width and height of target A, $W_B$ and $H_B$ are the width and height of target B, and $\alpha_{dis}$ is the centroid-distance coefficient, with range $1.2 \le \alpha_{dis} \le 1.4$;
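A sketch of the proximity aggregation. The exact merge condition is not recoverable from the text, so the threshold below scales the mean size of the two blobs by $\alpha_{dis}$, which is an assumption; only the merged-feature formulas are taken from the patent:

```python
import numpy as np

def aggregate_blobs(blobs, alpha_dis=1.3):
    """Step (3c): merge blobs whose centroids are close, until no pair qualifies.

    blobs: list of dicts {'i': i_bar, 'j': j_bar, 'W': W, 'H': H}.
    Assumed condition: Dis_AB < alpha_dis * (W_A + H_A + W_B + H_B) / 4.
    """
    merged = True
    while merged:
        merged = False
        for a in range(len(blobs)):
            for b in range(a + 1, len(blobs)):
                A, B = blobs[a], blobs[b]
                dis = np.hypot(A['i'] - B['i'], A['j'] - B['j'])
                if dis < alpha_dis * (A['W'] + A['H'] + B['W'] + B['H']) / 4.0:
                    C = {'i': (A['i'] + B['i']) / 2.0,  # averaged centroid,
                         'j': (A['j'] + B['j']) / 2.0,  # width, and height,
                         'W': (A['W'] + B['W']) / 2.0,  # per the formulas above
                         'H': (A['H'] + B['H']) / 2.0}
                    blobs = [x for k, x in enumerate(blobs) if k not in (a, b)] + [C]
                    merged = True
                    break
            if merged:
                break
    return blobs
```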
(3d) Describe the outermost-contour feature information of each target as a blob, arrange all blobs obtained from the current frame in order, and generate the detected-blob list.
Step 4: track the moving blobs in the detected-blob list by feature matching and update the tracked-blob list.
With reference to Fig. 4, this step is implemented as follows:
(4a) If the tracked-blob list is empty, add the detected-blob list of the current frame directly to the tracked-blob list;
(4b) If the tracked-blob list is not empty, traverse the detected-blob list of the current frame and the tracked-blob list, and match blobs by computing the similarity between each blob in the detected-blob list and each blob in the tracked-blob list:
(4b1) If the similarity between a blob A in the detected-blob list and a blob B in the tracked-blob list is greater than the similarity threshold $T_s$, $0.4 \le T_s \le 0.6$, replace the feature information of blob B with that of blob A and mark blob A as a matched blob;
(4b2) If a blob A in the detected-blob list matches none of the blobs in the tracked-blob list, blob A is either a blob that has newly entered the scene or a falsely detected blob; mark blob A as an unmatched blob;
(4b3) If a blob B in the tracked-blob list matches none of the blobs in the detected-blob list, blob B is a blob that has left the scene; remove blob B from the tracked-blob list;
(4c) Compute the re-judgment threshold $T_r$:
$$T_r = \frac{\sum_{m=1}^{M} |diff(p_m^i, p_m^j)|}{M} + \sigma(p_m^i, p_m^j) \times \frac{\beta_r}{2},$$
where $p_m^i$ and $p_m^j$ are respectively the x-axis and y-axis coordinates of the $m$-th point, $diff(p_m^i, p_m^j)$ is the difference between the gray pixel value of the current frame and that of the background frame at the point $(p_m^i, p_m^j)$, M is the number of all pixels in the matched blobs, $\sigma(p_m^i, p_m^j)$ is the average of the standard deviations of the Gaussian models at the point $(p_m^i, p_m^j)$, and $\beta_r$ is the re-judgment threshold coefficient, with range $3 \le \beta_r \le 4.5$;
(4d) Using the re-judgment threshold $T_r$ and the pixel-judging method of step (2b), re-judge the foreground pixels in the blobs marked as unmatched in step (4b2), and obtain the re-judgment ratio $S_r = M_r / M_a$, where $M_r$ is the number of pixels still judged as foreground after re-judgment and $M_a$ is the number of all pixels in the unmatched blob;
(4e) Use the re-judgment ratio of step (4d) to decide whether the unmatched blob corresponding to it should be added to the tracked-blob list. Let $T_s$ here be the re-judgment ratio threshold, $0 \le T_s \le 0.4$: if $S_r \le T_s$, discard the unmatched blob; otherwise add it to the tracked-blob list.
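The re-judgment of steps (4c)-(4e) can be sketched as below. Two readings are assumptions: the $\beta_r/2$ scaling in $T_r$ follows the reconstruction above, and the per-pixel re-judgment simply thresholds the background difference against $T_r$, a simplified stand-in for "the pixel-judging method of step (2b)". Note that the $T_s$ here is the re-judgment ratio threshold ($0$ to $0.4$), not the blob-similarity threshold of step (4b1):

```python
import numpy as np

def rejudge_unmatched_blob(pixels, frame, background, sigma_mean,
                           matched_diffs, beta_r=3.5, T_s=0.3):
    """Steps (4c)-(4e): decide whether an unmatched blob enters the tracked list.

    pixels: (M_a, 2) coordinates of the unmatched blob's foreground pixels;
    matched_diffs: |current - background| gray differences over all pixels of the
    matched blobs; sigma_mean: (H, W) mean of the Gaussian models' standard deviations.
    """
    # (4c) re-judgment threshold from the matched blobs
    T_r = (matched_diffs.mean()
           + sigma_mean[pixels[:, 0], pixels[:, 1]].mean() * beta_r / 2.0)

    # (4d) re-judge each pixel: still foreground if it differs from the background by more than T_r
    diff = np.abs(frame[pixels[:, 0], pixels[:, 1]].astype(float)
                  - background[pixels[:, 0], pixels[:, 1]].astype(float))
    M_r = int((diff > T_r).sum())   # pixels still judged as foreground
    S_r = M_r / len(pixels)         # re-judgment ratio S_r = M_r / M_a

    # (4e) keep the blob only if enough of it survives re-judgment
    return S_r > T_s
```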
Step 5: use the tracked-blob list, the motion foreground image, and the background model to complete the image-information repair of the moving targets.
With reference to Fig. 5, this step is implemented as follows:
(5a) Extract the rectangular target-erasure region selected by the user and use the tracked-blob list of the current frame to divide the pixels in this region into two classes: motion pixels, which lie inside blobs in the tracked list, and static pixels, which do not; the motion pixels are the pixels on which target erasure is to be performed;
(5b) Replace the motion pixels in the shutter frame with the gray values of the pixels at the corresponding positions in the background-model frame, completing the image-information repair of the moving target.
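The repair itself reduces to a masked copy. A minimal sketch, assuming a boolean motion mask has already been rendered from the tracked-blob list:

```python
import numpy as np

def repair_image(shutter_frame, background_frame, motion_mask, erase_rect):
    """Steps (5a)-(5b): replace motion pixels inside the user-selected rectangle.

    erase_rect: (top, left, bottom, right) of the user-selected erasure region;
    motion_mask: (H, W) boolean mask, True where a pixel belongs to a tracked blob.
    """
    top, left, bottom, right = erase_rect
    repaired = shutter_frame.copy()
    region = np.zeros_like(motion_mask)
    region[top:bottom, left:right] = True           # restrict to the selected rectangle
    replace = motion_mask & region                  # (5a): motion pixels inside the region
    repaired[replace] = background_frame[replace]   # (5b): background-model values
    return repaired
```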
The effect of the invention is further illustrated by the following simulation:
An outdoor scene with rich texture, complex structure, and distinct contours was selected arbitrarily; a smart phone was held in a fixed direction to shoot a short video-frame sequence by hand, and the method of the invention was then used to erase and repair the secondary moving target in the shutter frame. The results are shown in Fig. 8, where Fig. 8(a) is the shutter-frame image containing the secondary moving target and Fig. 8(b) is the repaired result image after erasing the target.
As can be seen from Fig. 8(b), the invention repairs the image region left after erasing the moving target from the shutter-frame image well.

Claims (5)

1. A method for restoring images shot by a cell phone based on motion detection, comprising the steps of:
(1) using the mixture-of-Gaussians background modeling method to build a Gaussian background model of the scene photographed by the cell phone, and matching the current frame of the cell phone's shot frame sequence against the Gaussian background model frame by differencing to perform motion-foreground detection and obtain the motion foreground map;
(2) extracting moving targets from the detected motion foreground map and generating the detected-blob list:
(2a) detecting the outermost contours of the targets:
(2a1) starting from the upper-left corner of the motion foreground image, searching for a new boundary point in left-to-right, top-to-bottom order; if none is found, the outermost-contour detection of all targets in the foreground image is complete; otherwise, taking this new boundary point as the initial boundary point S, completing the outermost-contour detection of the target containing S by contour tracing, and letting S be the current boundary point D;
(2a2) recording the coordinates of the current boundary point D and, following the boundary chain-code direction of D, finding its adjacent boundary point N';
(2a3) checking whether an adjacent boundary point N' was found; if not, the outermost-contour detection of the target containing S is complete, so returning to step (2a1); otherwise checking whether N' coincides with the initial boundary point S: if it does, the outermost-contour detection of the target containing S is complete, so returning to step (2a1); otherwise taking N' as the current boundary point D and returning to step (2a2);
(2b) computing the feature information of each detected outermost target contour:
from the coordinates of the points on each target's outermost contour, obtaining three features of the target, its centroid $(\bar{i}, \bar{j})$, width $W$, and height $H$:
$$\bar{i} = \frac{\sum_{n=1}^{N} p_n^i\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}, \qquad \bar{j} = \frac{\sum_{n=1}^{N} p_n^j\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)},$$
$$W = 4 \times \sqrt{\frac{\sum_{n=1}^{N} (p_n^i)^2\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)} - \left[\frac{\sum_{n=1}^{N} p_n^i\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}\right]^2},$$
$$H = 4 \times \sqrt{\frac{\sum_{n=1}^{N} (p_n^j)^2\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)} - \left[\frac{\sum_{n=1}^{N} p_n^j\, f(p_n^i, p_n^j)}{\sum_{n=1}^{N} f(p_n^i, p_n^j)}\right]^2},$$
where $p_n^i$ and $p_n^j$ are respectively the x-axis and y-axis coordinates of the $n$-th point, $f(p_n^i, p_n^j)$ is the pixel gray value of the $n$-th point, and $N$ is the number of pixels on the target's outermost contour;
(2c) aggregating targets according to the proximity principle, using the outermost-contour feature information obtained:
for any two targets obtained in step (2a), say target A and target B, computing their centroid distance $Dis_{AB}$:
$$Dis_{AB} = \sqrt{(\bar{i}_A - \bar{i}_B)^2 + (\bar{j}_A - \bar{j}_B)^2},$$
if $Dis_{AB}$ is less than the aggregation threshold obtained by scaling the sizes of A and B with the centroid-distance coefficient $\alpha_{dis}$, merging target A and target B into a target C, deleting targets A and B, and obtaining the centroid $(\bar{i}_C, \bar{j}_C)$, width $W_C$, and height $H_C$ of target C as:
$$\bar{i}_C = \frac{\bar{i}_A + \bar{i}_B}{2}, \qquad \bar{j}_C = \frac{\bar{j}_A + \bar{j}_B}{2},$$
$$W_C = \frac{W_A + W_B}{2},$$
$$H_C = \frac{H_A + H_B}{2},$$
where $\bar{i}_A$ and $\bar{j}_A$ are respectively the x-axis and y-axis coordinates of target A's centroid, $\bar{i}_B$ and $\bar{j}_B$ are those of target B's centroid, $W_A$ and $H_A$ are the width and height of target A, $W_B$ and $H_B$ are the width and height of target B, and $\alpha_{dis}$ is the centroid-distance coefficient, with range $1.2 \le \alpha_{dis} \le 1.4$;
(2d) describing the outermost-contour feature information of each target as a blob, arranging all blobs obtained from the current frame in order, and generating the detected-blob list;
(3) tracking the moving blobs in the detected-blob list by feature matching and updating the tracked-blob list:
(3a) if the tracked-blob list is empty, adding the detected-blob list of the current frame directly to the tracked-blob list;
(3b) if the tracked-blob list is not empty, traversing the detected-blob list of the current frame and the tracked-blob list, and matching blobs by computing the similarity between each blob in the detected-blob list and each blob in the tracked-blob list;
(4) using the tracked-blob list, the motion foreground image, and the background model to complete the image-information repair of the moving targets.
2. The method for restoring images shot by a cell phone as claimed in claim 1, wherein building the Gaussian background model of the photographed scene with the mixture-of-Gaussians background modeling method in step (1) is carried out as follows:
(1a) establishing M Gaussian models for each pixel of the video frame image: initializing the mean $\mu_0$ of the first Gaussian model to the pixel value of the first frame and the means $\mu_i$ ($0 < i < M$) of the remaining models to random values; initializing all standard deviations $\sigma$ to 10; setting the weight $\omega_0$ of the first Gaussian model according to the update coefficient $\alpha$, $0.001 < \alpha < 0.01$, and setting the weights $\omega_i$ ($0 < i < M$) of the remaining models correspondingly;
(1b) training the mean $\mu_i$, standard deviation $\sigma_i$, and weight $\omega_i$ of each Gaussian model over K frames, $20 \le K \le 50$; when training ends, computing for each pixel the average $\bar{\mu}$ of the means of its models, then sorting the M Gaussian models of each pixel by weight $\omega_i$ in descending order; replacing the mean of the M-th Gaussian model with the mean of the (M-1)-th, replacing the mean of the (M-1)-th with the mean of the (M-2)-th, and so on, finally replacing the mean of the first Gaussian model with $\bar{\mu}$.
3. The method for restoring images shot by a cell phone as claimed in claim 1, wherein matching the current frame of the cell phone's shot frame sequence against the Gaussian background model frame by differencing to perform motion-foreground detection and obtain the motion foreground map in step (1) is carried out as follows:
(1c) computing the absolute difference $\Delta_i = |v - \mu_i|$ between the color value $v$ of each pixel of the current frame and the mean $\mu_i$ of each of its corresponding Gaussian models, and comparing it with the difference threshold $T_c = 3 \times \sigma_i$; if some $\Delta_i < T_c$, updating the mean and standard deviation of the Gaussian model corresponding to that $\Delta_i$; otherwise replacing the mean of the lowest-weight Gaussian model with $v$; then updating the weights of all Gaussian models;
(1d) sorting the M Gaussian models of each pixel by weight in descending order and summing the weights of the Gaussian models from front to back, $U = \sum_j \omega_j$, where $\omega_j$ is the weight of the $j$-th sorted Gaussian model, $0 \le j < B$; when the running sum $U$ exceeds the preset weight-sum threshold $T_w$, $0.5 < T_w < 0.8$, recording the number B of weights that participated in the summation; among the first B-1 Gaussian models, if some mean $\mu_j$ differs from the color value $v$ by less than the difference threshold $T_c$, i.e. $|v - \mu_j| < T_c$, judging the pixel of the current frame to be a background point, otherwise judging it to be a foreground point; this completes the motion-foreground detection and yields the motion foreground map.
4. The method for restoring images shot by a cell phone as claimed in claim 1, wherein matching blobs by computing the similarity between each blob in the detected-blob list and each blob in the tracked-blob list in step (3b) is carried out as follows:
(3b1) if the similarity between a blob A in the detected-blob list and a blob B in the tracked-blob list is greater than the similarity threshold $T_s$, $0.4 \le T_s \le 0.6$, replacing the feature information of blob B with that of blob A and marking blob A as a matched blob;
(3b2) if a blob A in the detected-blob list matches none of the blobs in the tracked-blob list, blob A being either a blob that has newly entered the scene or a falsely detected blob, marking blob A as an unmatched blob;
(3b3) if a blob B in the tracked-blob list matches none of the blobs in the detected-blob list, blob B being a blob that has left the scene, removing blob B from the tracked-blob list;
(3b4) computing the re-judgment threshold $T_r$:
$$T_r = \frac{\sum_{m=1}^{M} |diff(p_m^i, p_m^j)|}{M} + \sigma(p_m^i, p_m^j) \times \frac{\beta_r}{2},$$
where $p_m^i$ and $p_m^j$ are the x-axis and y-axis coordinates of the $m$-th point, $diff(p_m^i, p_m^j)$ is the difference between the gray pixel value of the current frame and that of the background frame at the point $(p_m^i, p_m^j)$, M is the number of all pixels in the matched blobs, $\sigma(p_m^i, p_m^j)$ is the average of the standard deviations of the Gaussian models at the point $(p_m^i, p_m^j)$, and $\beta_r$ is the re-judgment threshold coefficient, with range $3 \le \beta_r \le 4.5$;
(3b5) using the re-judgment threshold $T_r$ and the pixel-judging method of step (1d), re-judging the foreground pixels in the blobs marked as unmatched in step (3b2) and obtaining the re-judgment ratio $S_r = M_r / M_a$, where $M_r$ is the number of pixels still judged as foreground after re-judgment and $M_a$ is the number of all pixels in the unmatched blob;
(3b6) using the re-judgment ratio of step (3b5) to decide whether the unmatched blob corresponding to it should be added to the tracked-blob list: letting $T_s$ here be the re-judgment ratio threshold, $0 \le T_s \le 0.4$, if $S_r \le T_s$, discarding the unmatched blob, otherwise adding it to the tracked-blob list.
5. The method for restoring images shot by a cell phone as claimed in claim 1, wherein using the tracked-blob list, the motion foreground image, and the background model to complete the image-information repair of the moving targets in step (4) is carried out as follows:
(4a) extracting the rectangular target-erasure region selected by the user and using the tracked-blob list of the current frame to divide the pixels in this region into two classes: motion pixels, which lie inside blobs in the tracked list, and static pixels, which do not, the motion pixels being the pixels on which target erasure is to be performed;
(4b) replacing the motion pixels in the shutter frame with the gray values of the pixels at the corresponding positions in the background-model frame, completing the image-information repair of the moving target.
CN201210245614.7A 2012-07-17 2012-07-17 Method for restoring image shot by cell phone based on motion detection Expired - Fee Related CN102831580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210245614.7A CN102831580B (en) 2012-07-17 2012-07-17 Method for restoring image shot by cell phone based on motion detection


Publications (2)

Publication Number Publication Date
CN102831580A CN102831580A (en) 2012-12-19
CN102831580B 2015-04-08

Family

ID=47334697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210245614.7A Expired - Fee Related CN102831580B (en) 2012-07-17 2012-07-17 Method for restoring image shot by cell phone based on motion detection

Country Status (1)

Country Link
CN (1) CN102831580B (en)


Also Published As

Publication number Publication date
CN102831580A (en) 2012-12-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150408

Termination date: 20210717
