CN102946523B - De-interlacing method for unattended substation surveillance video based on CLG and AVS - Google Patents


Info

Publication number: CN102946523B (grant); other version: CN102946523A (application, in Chinese)
Application number: CN201210427457.1A
Authority: CN (China)
Prior art keywords: block, AVS, surveillance video, de-interlacing
Legal status: Active (assumed; not a legal conclusion)
Inventors: 王翀, 崔恒志, 丁正阳, 缪巍巍, 赵俊峰
Assignees (original and current): State Grid Corp of China (SGCC); State Grid Jiangsu Electric Power Co Ltd; Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Application filed by the assignees above; application granted; priority to CN201210427457.1A.
Abstract

Disclosed is a de-interlacing method for unattended-substation surveillance video based on CLG and AVS. The method: the input original surveillance frames are first partitioned by mode division; one part of the divided frames is processed by combined local and global (CLG) optical-flow motion estimation, while the other part undergoes AVS block-based motion estimation; the frames from both branches are then motion-compensated and reconstructed into a de-interlaced video, which is output as the de-interlaced surveillance video on the monitoring screen. By de-interlacing the surveillance video with CLG and AVS, the invention improves the reliability and coding efficiency of motion estimation and guarantees the clarity and real-time performance of the processed picture. The method is highly practical and highly extensible: individual modules can be upgraded and redeployed rapidly, so it meets the video-surveillance requirements of substations well.

Description

De-interlacing method for unattended substation surveillance video based on CLG and AVS
Technical field
The present invention relates to surveillance techniques for unattended substations in the power industry, and in particular to a de-interlacing method for unattended-substation surveillance video based on CLG and AVS.
Background art
In 2009 the State Grid Corporation of China formally proposed building a strong smart grid, with ultra-high-voltage transmission as its backbone and coordinated development of grids at all levels, characterized by automation, informatization and interactivity. The unattended substation is an important component of the smart grid. In an unattended substation, information technologies such as video surveillance and image processing enable comprehensive monitoring and centralized management of unmanned or lightly manned front-end systems: the existing network is used to monitor and manage the front end's imagery, environment, equipment operating state, access control and perimeter defense, greatly improving the timeliness and effectiveness of front-end monitoring while reducing personnel and management costs.
Constrained by existing network conditions, video surveillance in unattended substations mainly uses standard-definition interlaced cameras based on the NTSC (National Television Standards Committee) and PAL (Phase Alternating Line) standards; 1080i high-definition interlaced cameras are also used to some extent. The drawback is that when an object moves in the monitored scene or the camera pans, the picture exhibits severe jagged edges, insufficiently smooth contours, poor coded image quality and inflated bit rates, all of which degrade the surveillance. Moreover, substation video surveillance must both observe the surface condition of important operational equipment such as transformers and circuit breakers and provide automatic anti-theft monitoring, with perimeter, indoor and access-control alarms and security deployment, which places high demands on real-time performance and picture quality. All of this calls for a suitable de-interlacing technique.
Motion estimation is the core of existing de-interlacing techniques and falls into four classes: block-matching algorithms, phase-plane correlation algorithms, optical-flow algorithms and Bayesian algorithms. Block matching is the most widely used of the four and can meet moderate real-time requirements, but it produces artifacts on moving pictures. Phase-plane correlation has excessive computational complexity. The advantage of optical flow is that it carries not only the motion information of moving objects but also rich information about the three-dimensional structure of the scene, and it can detect moving objects without any prior knowledge of the scene; however, it has non-trivial computational complexity and has difficulty recovering the correct flow field under occlusion or multiple light sources. Bayesian algorithms perform poorly without auxiliary interpolation. Because the temporal and spatial correlation they exploit is comparatively weak, all four classes have difficulty producing accurate motion estimates, and interlaced coding suffers from high computational complexity and low efficiency. The conventional framework for evaluating interlaced video performance, shown in Fig. 1, computes the peak signal-to-noise ratio (PSNR) of the interlaced and progressive videos separately; because it compares against progressive video rather than the original footage, it measures with two different yardsticks (interlaced versus progressive), so its evaluation precision is low and its results are unconvincing.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention provides a de-interlacing method for unattended-substation surveillance video based on CLG and AVS, which eliminates subjective bias in assessing the de-interlacing effect, improves the reliability and coding efficiency of motion estimation, and guarantees the real-time performance and clarity of the video.
To achieve the above objectives, the present invention is realized through the following technical solutions:
The de-interlacing method for unattended-substation surveillance video based on CLG and AVS is characterized by the following steps: (1) a mode-division step, in which the input original surveillance frames are partitioned by mode division; (2) a motion-estimation step, in which one part of the divided frames enters combined local and global (CLG) optical-flow motion estimation while the other part undergoes block-based motion estimation per the AVS (Audio Video coding Standard); (3) a motion-estimation correction step, in which the frames from both branches are motion-compensated and reconstructed into a de-interlaced video, output as the de-interlaced surveillance video on the monitoring screen.
Further, in the mode-division step, the original surveillance frames are divided according to the magnitude of pixel motion into smooth blocks (little pixel motion) and non-smooth blocks (large pixel motion); smooth blocks are processed by the AVS block-based motion estimation module, and non-smooth blocks by the CLG optical-flow motion estimation module.
Further, in the mode-division step, the criterion separating smooth from non-smooth blocks is: each frame is divided into 8 × 8 blocks; within each block, the absolute differences between every pixel and the block's mean gray level are summed; if this sum is close to zero the block is smooth, otherwise it is non-smooth.
Further, the CLG optical-flow motion estimation proceeds in two steps: first, the original surveillance frames are pre-filtered to remove potential noise effects; then the combined local and global optical-flow method is used to compute the flow vectors.
Further, the pre-filtering method treats a short stretch of the video sequence as a segment. For a segment spanning times T_1 to T_2, illumination pre-filtering is applied by formula (1) to obtain the pre-filtered gray level u'(x, y, t) of frame t:

(1)  u'(x,y,t) = \frac{\bar{u}_{T_1 T_2}}{\bar{u}(t)}\, u(x,y,t)

where T_1 \le t \le T_2, \bar{u}(t) is the mean gray level of all pixels of frame t, and the mean gray level \bar{u}_{T_1 T_2} of the segment is computed as

\bar{u}_{T_1 T_2} = \frac{1}{T_2 - T_1 + 1} \sum_{t=T_1}^{T_2} \bar{u}(t).
The de-interlacing method further comprises a performance evaluation method for the de-interlaced surveillance video: first, part of the original interlaced picture passes through the de-interlacing method of any of claims 1-8 to obtain a progressive picture, which is progressively encoded, reconstructed, and re-interlaced (secondary interleaving) into a new interlaced video, from which an interlaced PSNR is computed; finally, this is compared with the interlaced PSNR computed directly from the original interlaced picture.
By de-interlacing the surveillance video with CLG and AVS, the present invention improves the reliability and coding efficiency of motion estimation and guarantees the clarity and real-time performance of the processed picture. The de-interlacing method is highly practical, with the following specific advantages:
(1) experimental results show that the proposed method works well on both standard-format test videos and actual surveillance videos;
(2) the method makes full use of existing substation equipment and bandwidth; no additional facilities are needed, only installation of the de-interlacing software described herein on a computer;
(3) the method has been piloted in the video surveillance system of unattended substations in Jiangsu, and the feedback shows it meets substation video-surveillance requirements well;
(4) the method uses a modular design and is highly extensible: if a better algorithm appears in the future, it can be deployed rapidly by upgrading the corresponding module.
Brief description of the drawings
The present invention is described in detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flow chart of the existing method for evaluating interlaced-video coding performance;
Fig. 2 is a flow chart of the performance evaluation method for de-interlaced surveillance video according to the present invention;
Fig. 3 is a flow chart of the present invention;
Fig. 4 is a schematic diagram of smooth type I in this embodiment;
Fig. 5 is a schematic diagram of smooth type II in this embodiment;
Fig. 6 is a schematic diagram of interlaced-video generation in this embodiment;
Fig. 7 shows the original surveillance picture;
Fig. 8 shows the surveillance picture after de-interlacing.
Embodiment
To make the technical means, creative features, objectives and effects of the present invention easy to understand, the invention is further described below in conjunction with specific embodiments.
Fig. 3 is the flow chart of the present invention. In this embodiment the de-interlacing method based on CLG and AVS takes the original interlaced surveillance video as input. One part of the interlaced video is pre-filtered and then subjected to CLG optical-flow motion estimation, using the combined local and global optical-flow method to compute flow vectors, before entering motion-estimation correction. The other part goes through the AVS block-based motion estimation module: it is first video-coded, then enters block-based motion estimation, and finally also enters motion-estimation correction. The correction stage motion-compensates the images estimated by CLG and AVS and reconstructs the video, producing a de-interlaced progressive video that is output as the de-interlaced surveillance video on the monitoring screen. Motion-compensated de-interlacing uses motion estimation (ME) on neighboring fields to predict the movement of objects between those fields; the motion vector (MV) of each block in the picture is estimated dynamically, a new field is reconstructed from the previous field and the motion vectors, and the two fields are then merged to complete de-interlacing.
The concrete steps of the method are as follows:
(1) The original interlaced surveillance video first undergoes mode selection.
The mode-division principle is as follows:
Each frame is divided into 8 × 8 blocks, which are then classified as smooth or non-smooth. Smoothness here is judged per block: let \bar{u}(x,y) denote the mean gray level of the block; the absolute differences between each pixel of the block and this mean are summed, and the closer the resulting value \delta(x,y) is to 0, the smoother the block. The formulas are

\bar{u}(x,y) = \frac{1}{16} \sum_{(\xi,\eta)} f(x+\xi,\, y+\eta)

\delta(x,y) = \frac{1}{16} \sum_{(\xi,\eta)} \left| f(x+\xi,\, y+\eta) - \bar{u}(x,y) \right|

The threshold in this embodiment is set to 0.5: if \delta(x,y) < 0.5, the 8 × 8 block is deemed smooth; otherwise it is non-smooth.
A non-smooth block indicates large pixel motion within the block; a smooth block indicates small motion. Since optical flow is better suited to images with larger motion, after mode division non-smooth blocks are handed to CLG and smooth blocks to AVS. The processed results are reconstructed by the motion-estimation correction module into a de-interlaced progressive video. To support later recovery, each block is numbered in the form "frame number + class + pixel position": for example, 15C(3,5) denotes the pixel at row 3, column 5 of a non-smooth block in frame 15, and 37A(18,7) the pixel at row 18, column 7 of a smooth block in frame 37; the top-left pixel of every frame has coordinates (1,1).
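As a rough illustration of the mode-division step, the sketch below classifies 8 × 8 blocks by their mean absolute deviation from the block mean. It is a minimal interpretation under stated assumptions, not the patented implementation: the patent prints a 1/16 normalization and a threshold of 0.5, while here the deviation is simply averaged over the 64 pixels of the block, and the frame layout is assumed row-major grayscale.

```python
import numpy as np

def classify_blocks(frame, block=8, threshold=0.5):
    """Split a grayscale frame into block x block tiles and label each tile
    smooth (True) or non-smooth (False) by its mean absolute deviation."""
    h, w = frame.shape
    h -= h % block
    w -= w % block
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3)            # (block_row, block_col, 8, 8)
    mean = tiles.mean(axis=(2, 3), keepdims=True)  # per-block average gray level
    delta = np.abs(tiles - mean).mean(axis=(2, 3))
    return delta < threshold                       # True => smooth block
```

A constant block yields delta = 0 and is labeled smooth, while a high-contrast block is labeled non-smooth and would be routed to the CLG branch.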
(2) Explanation of the CLG (Combining Local and Global) optical-flow motion estimation.
The improved CLG optical-flow algorithm proposed in this embodiment has two steps: pre-filtering first removes potential noise effects, and the combined local and global optical-flow method then computes the flow vectors, as shown in Fig. 3.
Optical-flow computation estimates the temporal and spatial derivatives of the image. The techniques fall into two classes, local methods and global methods. Local methods optimize a local energy-like expression; representative examples include the LK method proposed by Lucas and Kanade in 1981, Lucas's 1984 method, and the structure-tensor method proposed by Bigun and Granlund in 1988. Global methods minimize a global energy functional; the canonical example is the HS method proposed by Horn and Schunck in 1981. Local methods are robust to noise but cannot produce dense flow; global methods, conversely, yield a flow field of 100% density but are very sensitive to noise. Bruhn, Weickert and colleagues analyzed the strengths and weaknesses of the LK and HS algorithms, found them complementary, and proposed the CLG algorithm, which cleverly combines the noise robustness of LK with the "filling-in" effect of HS.
The illumination pre-filtering treats a short stretch of the video sequence as a segment and, for a segment spanning times T_1 to T_2, applies illumination pre-filtering by the formula below. In a typical sequence the illumination changes only slightly over such a short interval; by slightly adjusting the brightness of every pixel of each frame in the interval so that the illumination-constancy assumption holds, the accuracy of the optical-flow estimate is greatly improved.
The pre-filtering principle is:

u'(x,y,t) = \frac{\bar{u}_{T_1 T_2}}{\bar{u}(t)}\, u(x,y,t)

where T_1 \le t \le T_2 and \bar{u}(t) is the mean gray level of all pixels of frame t; the mean gray level \bar{u}_{T_1 T_2} of the segment is computed as

\bar{u}_{T_1 T_2} = \frac{1}{T_2 - T_1 + 1} \sum_{t=T_1}^{T_2} \bar{u}(t)
Before the CLG algorithm is applied, illumination filtering is first used to remove illumination variation between successive frames. Note that (T_2 - T_1 + 1) must not be too large, so that the pre-filter removes illumination changes while preserving the character of each pixel. In the present invention (T_2 - T_1 + 1) is set to 2.
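The segment-wise illumination pre-filter above fits in a few lines. This is an illustrative reading of the formula, with the segment length and array layout as assumptions (the patent uses segments of T_2 - T_1 + 1 = 2 frames):

```python
import numpy as np

def prefilter_illumination(frames):
    """Rescale each frame of a short segment so that its mean gray level
    u_bar(t) matches the segment mean u_bar_{T1,T2}."""
    frames = np.asarray(frames, dtype=np.float64)
    frame_means = frames.mean(axis=(1, 2))   # \bar{u}(t), one value per frame
    segment_mean = frame_means.mean()        # \bar{u}_{T1 T2}
    return frames * (segment_mean / frame_means)[:, None, None]
```

After filtering, every frame in the segment has the same mean brightness, which is the illumination-constancy condition the optical flow relies on.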
The CLG algorithm is described as follows.
Suppose the pixel f(x, y, t) moves to f(x + dx, y + dy, t + dt) after time dt; then

(1)  f(x + dx,\, y + dy,\, t + dt) = f(x, y, t)

Expanding the left side of (1) by Taylor's formula gives

(2)  f(x + dx,\, y + dy,\, t + dt) = f(x,y,t) + f_x\,dx + f_y\,dy + f_t\,dt + O(\theta^2)

where f_x, f_y and f_t are the partial derivatives of f and O(\theta^2) denotes the higher-order terms in dx, dy and dt.
When dx, dy and dt are very small, (2) is equivalent to

(3)  f_x\,dx + f_y\,dy + f_t\,dt = 0

Dividing by dt and writing u = dx/dt, v = dy/dt for the flow components, (3) becomes

(4)  f_x u + f_y v + f_t = 0

The solution of (4) is obtained by minimizing

(5)  E_{CLG}(u,v) = \int_\Omega \left( K_\rho * (f_x u + f_y v + f_t)^2 + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \right) dx\,dy

where K_\rho is a Gaussian kernel over a neighborhood of size \rho and \alpha is a smoothness weight; a larger \alpha reduces the flow gradient and makes the flow field smoother. In the present invention \alpha is set to 150.
Minimizing (5) can be decomposed, via the Euler-Lagrange equations, into solving

(6)  \nabla^2 u = \frac{1}{\alpha} \left( K_\rho*(f_x^2)\,u + K_\rho*(f_x f_y)\,v + K_\rho*(f_x f_t) \right)
     \nabla^2 v = \frac{1}{\alpha} \left( K_\rho*(f_y^2)\,v + K_\rho*(f_x f_y)\,u + K_\rho*(f_y f_t) \right)

Equation (6) is solved by discretization; the finite-difference equations at pixel i are

(7)  \sum_{j \in N(i)} (u_j - u_i) = \frac{1}{\alpha} \left( [K_\rho*(f_x^2)]_i\, u_i + [K_\rho*(f_x f_y)]_i\, v_i + [K_\rho*(f_x f_t)]_i \right)
     \sum_{j \in N(i)} (v_j - v_i) = \frac{1}{\alpha} \left( [K_\rho*(f_y^2)]_i\, v_i + [K_\rho*(f_x f_y)]_i\, u_i + [K_\rho*(f_y f_t)]_i \right)

where N(i) is the neighborhood of pixel i; the present invention uses the four-neighborhood.
Using successive over-relaxation (SOR), solving (7) is converted into iterating

(8)  u_i^{k+1} = (1-w)\,u_i^k + w\,\frac{\alpha \left( \sum_{j<i} u_j^{k+1} + \sum_{j>i} u_j^k \right) - [K_\rho*(f_x f_y)]_i\, v_i^k - [K_\rho*(f_x f_t)]_i}{4\alpha + [K_\rho*(f_x^2)]_i}
     v_i^{k+1} = (1-w)\,v_i^k + w\,\frac{\alpha \left( \sum_{j<i} v_j^{k+1} + \sum_{j>i} v_j^k \right) - [K_\rho*(f_x f_y)]_i\, u_i^{k+1} - [K_\rho*(f_y f_t)]_i}{4\alpha + [K_\rho*(f_y^2)]_i}

where k is the iteration index and w is a constant controlling the convergence speed, set to 1.5 in the present invention. Iteration terminates when the largest change in the flow magnitudes falls below a preset threshold. To guarantee timeliness, a maximum iteration count is also imposed; by experiment it is set to 80 in the present invention.
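The relaxation of equation (8) can be sketched as below. This is not the patented implementation and makes several stated assumptions: the Gaussian neighborhood K_rho is replaced by a 3 × 3 box average, derivatives come from np.gradient, and the sweep is a Jacobi-style simultaneous update rather than an in-place Gauss-Seidel pass, so the relaxation factor defaults to w = 1 instead of the patent's 1.5, and a fixed iteration budget stands in for the convergence test.

```python
import numpy as np

def box3(a):
    """3x3 box average, standing in for the Gaussian neighborhood K_rho."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def nsum(a):
    """Sum over the 4-neighborhood of every pixel (edge-replicated)."""
    p = np.pad(a, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]

def clg_flow(f1, f2, alpha=150.0, w=1.0, iters=80):
    """Minimal CLG optical-flow sketch: relax equation (8) on two frames."""
    f1 = np.asarray(f1, dtype=np.float64)
    f2 = np.asarray(f2, dtype=np.float64)
    avg = (f1 + f2) / 2.0
    fx = np.gradient(avg, axis=1)   # spatial derivatives
    fy = np.gradient(avg, axis=0)
    ft = f2 - f1                    # temporal derivative
    # Neighborhood-averaged products, i.e. the [K_rho * (.)]_i terms of eq. (8)
    J11, J22 = box3(fx * fx), box3(fy * fy)
    J12, J13, J23 = box3(fx * fy), box3(fx * ft), box3(fy * ft)
    u = np.zeros_like(avg)
    v = np.zeros_like(avg)
    for _ in range(iters):
        u_new = (alpha * nsum(u) - J12 * v - J13) / (4.0 * alpha + J11)
        v_new = (alpha * nsum(v) - J12 * u_new - J23) / (4.0 * alpha + J22)
        u = (1.0 - w) * u + w * u_new
        v = (1.0 - w) * v + w * v_new
    return u, v
```

On a horizontal intensity ramp shifted one pixel to the right, the fixed point of the iteration is the uniform flow u = 1, v = 0 (a small alpha is used in the check below so the fixed iteration budget suffices for convergence).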
(3) Explanation of the AVS (Audio Video coding Standard) block-based motion estimation.
The video part of the national standard "Information technology - Advanced audio and video coding" (AVS standard for short), numbered GB/T 20090.2-2006, was formally promulgated by the Standardization Administration of China in February 2006 and took effect on 1 March 2006. AVS arose at a historic moment: facing the expensive patent royalties of standards such as MPEG and H.264, China urgently needed audio and video standards with independent intellectual property, which also strengthens the core competitiveness of China's digital audio and video industry.
The AVS video standard is a streamlined, efficient video codec standard whose coding tools balance performance against implementation complexity. Compared with other standards, AVS is more heavily optimized and has lower complexity. AVS is mainly used in broadcasting, HD-DVD and broadband video networks. AVS encodes video data in progressive-scan format: for the same perceived quality, progressive content can be coded at a markedly lower bit rate than interlaced content, and the complexity of motion compensation drops substantially, which is an important part of how AVS reduces complexity.
The AVS block-based motion estimation of this embodiment includes a block-based motion-estimation computation; in the block-based AVS motion estimation of the present invention, an improved SAD (Sum of Absolute Differences) method is designed:

SAD = \sum_{x=1}^{m} \sum_{y=1}^{n} \left| f_k(x, y) - f_{k-1}(x + i,\, y + j) \right|

where (i, j) is the displacement vector, f_k and f_{k-1} are the gray values of the current and previous frames, and m × n is the macroblock size. Each sub-block needs only one motion vector: the global minimum of this function corresponds to the optimal motion vector.
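A plain (unimproved) exhaustive-search version of the SAD criterion above can be sketched as follows; the search range and border handling are assumptions for illustration, not details fixed by the patent:

```python
import numpy as np

def block_sad(cur, prev, bx, by, bs, search):
    """Exhaustive-search motion vector for one bs x bs block: return the
    displacement (i, j) minimizing the sum of absolute differences against
    the previous frame, together with the minimal SAD cost."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int64)
    best, best_mv = None, (0, 0)
    for j in range(-search, search + 1):
        for i in range(-search, search + 1):
            y, x = by + j, bx + i
            if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue   # candidate would leave the frame
            cand = prev[y:y + bs, x:x + bs].astype(np.int64)
            cost = int(np.abs(block - cand).sum())
            if best is None or cost < best:
                best, best_mv = cost, (i, j)
    return best_mv, best
```

For a frame that is simply a shifted copy of the previous one, the minimizing displacement points back to the block's original position, with SAD cost zero.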
In this embodiment the improved SAD algorithm proceeds as follows: first partition into blocks and distinguish smooth from non-smooth blocks; then subdivide the smooth blocks; finally perform motion estimation. The first step, partitioning and smooth/non-smooth classification, is completed in the mode-division stage; the second step, subdividing smooth blocks, is detailed as follows:
Referring to Figs. 4 and 5, smooth blocks fall into two classes by their spatial distribution, smooth type I and smooth type II. For each frame, a "1" in the figures marks a non-smooth block and a "0" a smooth block. Smooth type I is characterized by a neighborhood dominated by non-smooth blocks: the smooth block is largely surrounded by non-smooth blocks. Smooth type II is characterized by smooth blocks clustering together into a connected region.
Smooth type I is handled as follows: the non-smooth blocks compute motion vectors (MV, Motion Vector) by the optical-flow method, and from the MVs of the non-smooth blocks in the smooth block's neighborhood, the MV giving that smooth block the least matching cost is selected as its motion vector. Because motion estimation computed directly on a smooth block has a high error rate, motion information about the block is instead drawn from the adjacent blocks, and the smooth block's motion vector is corrected accordingly.
Since the MVs within a smooth type II region are identical with high probability, the processing selects the connected region, takes each MV occurring in the region as a candidate for the whole connected domain, computes the matching cost of the whole region for each candidate, and selects the MV minimizing that cost as the MV of the whole region.
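The candidate-selection rule shared by both smooth-block types, keeping whichever candidate MV gives the least matching cost, can be illustrated as below. This is a hedged sketch: the patent fixes only "least matching cost", so the SAD cost function and the candidate list are assumptions; for type I the candidates would come from neighboring non-smooth blocks, for type II from the MVs occurring inside the connected region.

```python
import numpy as np

def pick_mv(cur, prev, bx, by, bs, candidates):
    """Among candidate motion vectors, return the one whose displaced
    bs x bs block in the previous frame matches the current block with
    the least SAD cost."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int64)

    def cost(mv):
        i, j = mv
        y, x = by + j, bx + i
        if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
            return float("inf")   # candidate leaves the frame
        cand = prev[y:y + bs, x:x + bs].astype(np.int64)
        return int(np.abs(block - cand).sum())

    return min(candidates, key=cost)
```

With a frame that is a pure shift of the previous one, the candidate matching the true displacement wins against arbitrary alternatives.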
(4) Motion-estimation correction.
The module that adaptively selects estimated vectors for correction works as follows: in the motion estimation of Fig. 3, the pre-filtering result from the optical-flow branch is reused, and a suitable motion vector is adaptively selected to carry out the de-interlacing.
(5) Experimental results and analysis.
To assess the objective effect, six standard progressive CIF videos (Foreman, News, Bus, Mobile Calendar, Paris and Tempete) and nine progressive HD videos (City, Crew, Cyclists, Jets, Night, Optis, Sailormen, Sheriff and SpinCalendar) were selected and converted to interlaced format per Fig. 6 for testing. Interlaced coding normally achieves interlacing by discarding one field of each progressive frame; in Fig. 6 the white squares represent the discarded part and the gray squares the retained part.
Because the temporal and spatial correlation is comparatively weak, accurate motion estimates are hard to obtain, and interlaced coding suffers from high computational complexity and low efficiency. De-interlacing greatly strengthens the spatio-temporal association and improves the performance of interlaced coding, particularly at low bit rates. To measure picture quality after processing, performance is checked and evaluated by the PSNR (Peak Signal-to-Noise Ratio) value. To improve precision, this embodiment provides an improved evaluation flow, shown in Fig. 2, that uses a single yardstick, namely the original interlaced video, making its results more convincing. The performance evaluation of the de-interlaced picture proceeds as follows:
first, part of the original interlaced picture passes through the de-interlacing method to obtain a progressive picture, which is progressively encoded, reconstructed, and re-interlaced (secondary interleaving) into a new interlaced video, from which an interlaced PSNR is computed; finally, this is compared with the interlaced PSNR computed directly from the original interlaced picture.
By re-interlacing the reconstructed progressive video, the evaluation compares interlaced video against interlaced video, so it can be compared directly with the original interlaced footage and an accurate PSNR computed, improving the accuracy of the assessment. The specific experimental test of the present invention is as follows:
first, the interlaced picture is interlaced-coded and reconstructed into an interlaced picture, and an interlaced PSNR is computed; second, the interlaced picture is de-interlaced into a progressive picture, progressively encoded, reconstructed, and re-interlaced into a new interlaced video, and another interlaced PSNR is computed; meanwhile, the interlaced picture's directly computed interlaced PSNR serves as the reference for comparison. The secondary interlaced coding of this embodiment has no effect on the subjective video quality after de-interlacing, and it greatly improves the reliability and accuracy of the interlaced-picture evaluation.
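Both measurement legs above end in an interlaced-domain PSNR. A minimal sketch of that metric and of the secondary interleaving follows; the field order (odd lines from the first progressive frame, even lines from the second) is an assumption, since Fig. 6 is not reproduced here:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two same-sized gray images."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak * peak / mse)

def reinterlace(progressive_pair):
    """Secondary interleaving sketch: keep the even-indexed lines of the
    first progressive frame and the odd-indexed lines of the second."""
    top, bottom = progressive_pair
    out = np.empty_like(top)
    out[0::2] = top[0::2]
    out[1::2] = bottom[1::2]
    return out
```

Re-interlacing the reconstructed progressive frames this way yields a signal directly comparable, line for line, with the original interlaced footage, which is exactly the single-yardstick property the improved evaluation relies on.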
Table 1 compares the average peak signal-to-noise ratio (PSNR) obtained with different methods. From top to bottom the de-interlacing methods are: line averaging, linear spatio-temporal filtering, adaptive recursion, adaptive MVR de-interlacing, and, at the bottom, the method proposed by the present invention; Table 1 compares the PSNR of these de-interlacing methods over the 15 videos.
Table 1
The results in Table 1 show that the average PSNR of the proposed de-interlacing method based on CLG and AVS is better than that of the other classical and state-of-the-art methods.
To verify whether the proposed method meets the real-time requirement of surveillance video, it was applied to images from a live camera at SD D1 (720 × 576), 25 f/s; Table 2 lists the time consumed by the proposed de-interlacing method on different D1 sequences.
Table 2
As can be seen from Table 2, the average time per frame lies between 0.5 and 1.4 seconds; this processing speed fully meets the real-time requirement of unattended-substation video surveillance.
Figs. 7 and 8 show surveillance pictures of an unattended substation in Jiangsu before and after processing with the proposed de-interlacing method. The monitored scenes are the substation gate, the main control room, the 220 kV battery-limits area and main transformer 2#, covering both of the main monitoring classes: moving objects and near-stationary equipment. As the figures show, the subjective visual quality after de-interlacing is good and meets the clarity requirement of unattended-substation video surveillance.
By de-interlacing surveillance video on the basis of CLG and AVS, the present invention improves the reliability and coding efficiency of motion estimation and guarantees the definition and real-time performance of the processed pictures. The proposed de-interlacing method is highly practical. First, experimental results show that it performs well on both standard-format videos and actual surveillance videos. Second, it makes full use of existing substation equipment and bandwidth; no additional facilities are required, and only the software implementing the de-interlacing algorithm described herein needs to be installed on a computer. Third, the method has been piloted in the Jiangsu unattended-substation video monitoring system, and the feedback shows that it satisfies the video monitoring requirements of substations well. Fourth, the method adopts a modular design with high extensibility; if better algorithms appear in the future, they can be deployed rapidly by upgrading the corresponding modules.
The foregoing shows and describes the general principles, principal features and advantages of the present invention. Those skilled in the art should understand that the present invention is not restricted to the described embodiments; the embodiments and the specification merely illustrate its principles. Various changes and modifications may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the claimed scope. The scope of protection claimed is defined by the appended claims and their equivalents.

Claims (5)

1. A de-interlacing method for unattended-substation surveillance video based on CLG and AVS, characterized in that the method comprises the following steps: (1) a mode division step: performing mode division on an input original surveillance video image; (2) a motion estimation step: one part of the mode-divided original image enters mixed optical-flow motion estimation processing, while the other part undergoes AVS block-based motion estimation processing; (3) a motion estimation correction step: after motion compensation of the pictures produced by the mixed optical-flow motion estimation and the AVS block-based motion estimation, the de-interlaced video is reconstructed and output as the de-interlaced surveillance video on the computer screen; in the mode division step, the original surveillance video image is divided, according to the magnitude of pixel motion change, into smooth blocks with small pixel motion change and non-smooth blocks with large pixel motion change, the smooth blocks being processed by the AVS block-based motion estimation module and the non-smooth blocks by the mixed optical-flow motion estimation module; in the mode division step, the division into smooth and non-smooth blocks is performed as follows: each frame is divided into 8 × 8 blocks, and for each block the absolute values of the differences between every pixel and the average gray level of the block are summed; if the sum is close to zero, the block is a smooth block; otherwise, the block is a non-smooth block;
In the AVS block-based motion estimation processing, the block-based motion estimation criterion SAD is computed as:
$$SAD = \sum_{x=1}^{m} \sum_{y=1}^{n} \left| f_k(x, y) - f_{k-1}(x+i, y+j) \right|$$
where $(i, j)$ is the displacement vector, $f_k$ and $f_{k-1}$ are the gray values of the current frame and the previous frame respectively, and $m \times n$ is the macroblock size.
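As an illustration of the mode division rule and the SAD criterion of claim 1, the following Python/numpy sketch classifies 8 × 8 blocks by their summed absolute deviation from the block mean and performs a full-search block matching. The function names, the smoothness threshold of 64 and the ±4-pixel search range are illustrative assumptions, not values fixed by the patent (which only says the deviation sum should be close to zero).

```python
import numpy as np

# Assumed threshold: the patent only says the sum should be "close to zero".
SMOOTH_THRESHOLD = 64.0

def is_smooth_block(block):
    """Sum of |pixel - block mean| over an 8x8 block; near zero => smooth."""
    return float(np.abs(block - block.mean()).sum()) <= SMOOTH_THRESHOLD

def sad(cur, prev, x0, y0, i, j, m=8, n=8):
    """SAD between the m x n block of the current frame at (x0, y0)
    and the previous-frame block displaced by the vector (i, j)."""
    a = cur[y0:y0 + n, x0:x0 + m].astype(np.int64)
    b = prev[y0 + j:y0 + j + n, x0 + i:x0 + i + m].astype(np.int64)
    return int(np.abs(a - b).sum())

def best_motion_vector(cur, prev, x0, y0, search=4):
    """Full search over displacements in [-search, search]^2 that stay
    inside the previous frame; returns the (i, j) minimising SAD."""
    h, w = prev.shape
    cands = [(i, j)
             for j in range(-search, search + 1)
             for i in range(-search, search + 1)
             if 0 <= x0 + i and x0 + i + 8 <= w
             and 0 <= y0 + j and y0 + j + 8 <= h]
    return min(cands, key=lambda d: sad(cur, prev, x0, y0, d[0], d[1]))

# Tiny demo: an 8x8 gradient pattern shifted by (+2, +2) between frames,
# so the best match for the current block at (6, 6) is displacement (-2, -2).
PREV = np.zeros((24, 24), dtype=np.uint8)
PREV[4:12, 4:12] = np.arange(64, dtype=np.uint8).reshape(8, 8)
CUR = np.zeros_like(PREV)
CUR[6:14, 6:14] = np.arange(64, dtype=np.uint8).reshape(8, 8)
```

A flat block has zero deviation sum and is classified smooth; a strong gradient block exceeds the threshold and would be routed to the optical-flow branch instead.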
2. The de-interlacing method for unattended-substation surveillance video based on CLG and AVS according to claim 1, characterized in that the mixed optical-flow motion estimation processing is performed as follows: first, the original surveillance video image is pre-filtered to remove potential noise effects; then, the mixed optical-flow method is used to compute the optical-flow vectors.
3. The de-interlacing method for unattended-substation surveillance video based on CLG and AVS according to claim 2, characterized in that the pre-filtering is performed as follows: a short piece of the video sequence within a very short period of time is called a segment; for a segment between times $T_1$ and $T_2$, illumination pre-filtering is performed according to formula (1) to obtain the pre-filtered gray level $u'(x, y, t)$ of frame $t$:
$$u'(x, y, t) = \left( \bar{u}_{T_1 T_2} / \bar{u}(t) \right) u(x, y, t) \qquad (1)$$
where $T_1 \le t \le T_2$, $\bar{u}(t)$ is the average gray level over all pixels of frame $t$, and the average gray level $\bar{u}_{T_1 T_2}$ of the segment is computed as $\bar{u}_{T_1 T_2} = \left( \sum_{t=T_1}^{T_2} \bar{u}(t) \right) / (T_2 - T_1 + 1)$.
4. The de-interlacing method for unattended-substation surveillance video based on CLG and AVS according to claim 2, characterized in that the computing formula of the mixed optical-flow method is:
$$u_i^{k+1} = (1 - w)\, u_i^k + w \cdot \frac{\alpha \left( \sum_{j<i} u_j^{k+1} + \sum_{j>i} u_j^k \right) - [K_\rho * (f_x f_y)]_i \, v_i^k - [K_\rho * (f_x f_t)]_i}{4\alpha + [K_\rho * (f_x^2)]_i}$$
$$v_i^{k+1} = (1 - w)\, v_i^k + w \cdot \frac{\alpha \left( \sum_{j<i} v_j^{k+1} + \sum_{j>i} v_j^k \right) - [K_\rho * (f_x f_y)]_i \, u_i^{k+1} - [K_\rho * (f_y f_t)]_i}{4\alpha + [K_\rho * (f_y^2)]_i}$$
where $f_x$, $f_y$ and $f_t$ are the partial derivatives of $f$ with respect to $x$, $y$ and $t$; $K_\rho *$ denotes convolution with a Gaussian kernel over a neighborhood of size $\rho$; $\alpha$ is the smoothing weight; $k$ is the iteration index; and $w$ is a constant used to control the convergence speed.
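The two update equations above are successive-over-relaxation (SOR) iterations over the horizontal and vertical flow fields: sweeping the pixels in order, already-visited neighbours contribute their new values ($j < i$) and not-yet-visited ones their old values ($j > i$). The sketch below performs one in-place sweep over the interior pixels of a small grid with a 4-neighbourhood. The arrays `Jxx`, `Jxy`, `Jyy`, `Jxt`, `Jyt` stand for the smoothed products $[K_\rho * (f_x^2)]$, $[K_\rho * (f_x f_y)]$, etc., assumed precomputed; the parameter values are illustrative only.

```python
import numpy as np

def sor_sweep(u, v, Jxx, Jxy, Jyy, Jxt, Jyt, alpha=100.0, w=1.9):
    """One in-place SOR sweep of the CLG update equations over the
    interior pixels (4-neighbourhood). Because the update is in place,
    neighbours already visited in the scan supply their k+1 values and
    the rest their k values, exactly as in the two formulas above."""
    H, W = u.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            nb_u = u[y - 1, x] + u[y + 1, x] + u[y, x - 1] + u[y, x + 1]
            nb_v = v[y - 1, x] + v[y + 1, x] + v[y, x - 1] + v[y, x + 1]
            u[y, x] = (1 - w) * u[y, x] + w * (
                alpha * nb_u - Jxy[y, x] * v[y, x] - Jxt[y, x]
            ) / (4 * alpha + Jxx[y, x])
            # the v update uses the freshly updated u (i.e. u_i^{k+1})
            v[y, x] = (1 - w) * v[y, x] + w * (
                alpha * nb_v - Jxy[y, x] * u[y, x] - Jyt[y, x]
            ) / (4 * alpha + Jyy[y, x])
    return u, v
```

With all structure-tensor entries zero, any constant flow field is a fixed point of the sweep; a nonzero temporal term pushes the flow away from zero, as the data term requires.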
5. The de-interlacing method for unattended-substation surveillance video based on CLG and AVS according to claim 1, characterized in that the de-interlacing method further comprises a performance evaluation method for the de-interlaced surveillance video, which is as follows: first, a part of the interlaced original pictures is converted into progressive pictures by the de-interlacing method of any one of claims 1 to 4; the progressive pictures are progressively encoded and reconstructed, then interlace-encoded a second time into a new interlaced video, and the interlaced peak signal-to-noise ratio is computed; finally, this is compared with the peak signal-to-noise ratio computed directly from the interlaced original pictures.
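The evaluation in claim 5 relies on the peak signal-to-noise ratio between two pictures. A standard PSNR routine for 8-bit gray images, given here as an assumed reference implementation (the patent does not prescribe a particular one):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two gray images,
    assuming an 8-bit peak value of 255."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Comparing the PSNR of the re-interlaced video against the PSNR of the directly interlaced original then quantifies how much detail the de-interlacing pipeline preserves.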
CN201210427457.1A 2012-10-31 2012-10-31 Based on the interlace-removing method of the unattended operation transformer station monitor video of CLG and AVS Active CN102946523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210427457.1A CN102946523B (en) 2012-10-31 2012-10-31 Based on the interlace-removing method of the unattended operation transformer station monitor video of CLG and AVS


Publications (2)

Publication Number Publication Date
CN102946523A CN102946523A (en) 2013-02-27
CN102946523B true CN102946523B (en) 2016-04-27

Family

ID=47729427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210427457.1A Active CN102946523B (en) 2012-10-31 2012-10-31 Based on the interlace-removing method of the unattended operation transformer station monitor video of CLG and AVS

Country Status (1)

Country Link
CN (1) CN102946523B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3539078A1 (en) * 2016-11-14 2019-09-18 Google LLC Video frame synthesis with deep learning
CN106780559B (en) * 2016-12-28 2019-12-24 中国科学院长春光学精密机械与物理研究所 Moving target detection method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1615652A (en) * 2002-01-17 2005-05-11 Koninklijke Philips Electronics N.V. Unit for and method of estimating a current motion vector
CN1706189A (en) * 2002-10-22 2005-12-07 Koninklijke Philips Electronics N.V. Image processing unit with fall-back
US7587091B2 (en) * 2004-10-29 2009-09-08 Intel Corporation De-interlacing using decoder parameters

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080018788A1 (en) * 2006-07-20 2008-01-24 Samsung Electronics Co., Ltd. Methods and systems of deinterlacing using super resolution technology


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video de-interlacing method based on optical flow; Chong Wang, et al.; Wireless Communications & Signal Processing (WCSP), 2012 International Conference on; 2012-10-27; pp. 1-7 *


Similar Documents

Publication Publication Date Title
CN101236656B (en) Movement target detection method based on block-dividing image
CN110324626B (en) Dual-code-stream face resolution fidelity video coding and decoding method for monitoring of Internet of things
Kang et al. Dual motion estimation for frame rate up-conversion
US8199252B2 (en) Image-processing method and device
US20030189980A1 (en) Method and apparatus for motion estimation between video frames
CN102131058B (en) Speed conversion processing module and method of high definition digital video frame
CN104539962A (en) Layered video coding method fused with visual perception features
Yao et al. Detecting video frame-rate up-conversion based on periodic properties of edge-intensity
CN102946505B (en) Self-adaptive motion detection method based on image block statistics
CN1706189A (en) Image processing unit with fall-back
CN105120290A (en) Fast coding method for depth video
CN102984541B (en) Video quality assessment method based on pixel domain distortion factor estimation
CN101621683A (en) Fast stereo video coding method based on AVS
CN102611893B (en) DMVC (distributed multi-view video coding) side-information integration method on basis of histogram matching and SAD (security association database) judgment
CN102509311B (en) Motion detection method and device
CN104537685B (en) One kind carries out automatic passenger flow statisticses analysis method based on video image
CN102946523B (en) Based on the interlace-removing method of the unattended operation transformer station monitor video of CLG and AVS
US8891609B2 (en) System and method for measuring blockiness level in compressed digital video
CN108921147B (en) Black smoke vehicle identification method based on dynamic texture and transform domain space-time characteristics
CN101931820A (en) Spatial error concealing method
US20140098879A1 (en) Method and apparatus for motion estimation in a video system
CN101765015B (en) Method and device for approximating a discrete cosine coefficient of a block of pixels of a frame
CN101340539A (en) Deinterlacing video processing method and system by moving vector and image edge detection
CN101902642B (en) Quick decision method for H.264 interframe SKIP modes
CN104796581A (en) Video denoising system based on noise distribution feature detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant