CN102637293B - Moving image processing device and moving image processing method - Google Patents


Info

Publication number
CN102637293B
CN102637293B CN201110037718.4A
Authority
CN
China
Prior art keywords
frame
subframe
parameter
distance parameter
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110037718.4A
Other languages
Chinese (zh)
Other versions
CN102637293A (en)
Inventor
三好雅则
伊藤诚也
李媛
沙浩
王瑾绢
吕越峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to CN201110037718.4A priority Critical patent/CN102637293B/en
Priority to JP2012012013A priority patent/JP2012168936A/en
Publication of CN102637293A publication Critical patent/CN102637293A/en
Application granted granted Critical
Publication of CN102637293B publication Critical patent/CN102637293B/en


Abstract

The invention relates to a moving image processing device and a moving image processing method that, based on an atmospheric model, sharpen foggy-weather video, improve image visibility, and still meet real-time processing requirements. In the video defogging process, the device divides the video into core frames (main frames) and ordinary frames (sub-frames). For each core frame, the distance parameter t(X) and the sky point parameter A are computed afresh. For an ordinary frame, A is not recomputed: the A of the preceding core frame is reused, the background part reuses the t(X) of the corresponding region of that core frame, and t(X) is recomputed only for the foreground part. The device and method thus speed up the application of atmospheric-model-based single-frame defogging algorithms to moving images such as video, achieve a good defogging effect, and preserve the real-time character of the moving image.

Description

Moving image processing device and moving image processing method
Technical field
The present invention relates to a moving image processing device and a moving image processing method capable of sharpening moving images captured in fog, sand-dust, or similar weather (hereinafter referred to as "foggy-weather video").
Background technology
Outdoor video surveillance is routinely affected by severe weather such as dense fog and sandstorms, and the resulting loss of visibility destroys fine detail and long-range scene information in the video. Outdoor surveillance scenarios are numerous, climate conditions vary widely, and fog and sand-dust weather occur frequently, particularly on urban roads and highways. Improving video clarity under fog and sand-dust conditions has therefore become a pressing need in the video surveillance field. Although some existing camera products offer a video sharpening (defogging) function, the prior art generally relies on simple image enhancement techniques such as histogram stretching, and the results are poor.
In 2002, Narasimhan et al. first proposed an atmospheric-model-based defogging method in the paper "Vision and the Atmosphere". However, the effect of that method is unsatisfactory, and it requires two images of the scene taken under different weather conditions as input before the scene information needed for defogging can be obtained, so its preconditions for application are restrictive.
In 2008-2009, Fattal, Kaiming He, and others achieved a breakthrough with new atmospheric-model-based defogging methods that need no multi-image input: the current image alone suffices to complete the defogging process, and the defogging effect is better than that of existing simple image enhancement techniques. All of these new methods are based on the atmospheric model. The "atmospheric model" describes the optical principle by which a camera photographs, or the human eye observes, an object when suspended particles are present in the air.
The atmospheric model can be expressed by the following formula (1):
I(X)=J(X)t(X)+A(1-t(X)) (1)
Formula (1) acts on each of the three RGB color channels of the image.
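Once t(X) and A are known, formula (1) can be inverted per pixel to recover J(X). A minimal sketch in Python/NumPy (the lower clamp t_min is an illustrative safeguard against amplifying noise where transmission is tiny, not part of the formula itself):

```python
import numpy as np

def recover_scene_radiance(I, t, A, t_min=0.1):
    """Invert the atmospheric model I = J*t + A*(1-t) to recover J.

    I: HxWx3 fogged image in [0,1]; t: HxW transmission map;
    A: length-3 sky point (atmospheric light) vector.
    """
    t = np.clip(t, t_min, 1.0)[..., None]       # broadcast t over RGB channels
    return np.clip((I - A) / t + A, 0.0, 1.0)   # J = (I - A)/t + A
```

Synthesizing a fogged image from a known J, t, and A and then inverting it recovers J exactly wherever t >= t_min.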
Here, I(X) denotes the fogged image captured by the imaging device or observed by the human eye; it is the input image. X=(x, y) denotes the pixel coordinates.
J(X) is the light reflected by the object, i.e. the image without fog; it is the result image of the defogging process.
A denotes the sky point parameter, the RGB vector of an arbitrary point in the sky region of the image (hereinafter the "sky point"). When the input image contains no sky, the point of densest fog in the image is taken as the sky point. All pixels of an image share a single sky point parameter A. Like I(X) and J(X), A is RGB vector data.
t(X) is the transmission function of the atmospheric medium. It describes the fraction of the object's reflected light that survives scattering by airborne particles and reaches the imaging device, i.e. how much of the reflected light arrives at the camera or the eye after atmospheric attenuation. It is a scalar greater than 0 and less than 1, and each pixel of the image has its own t(X).
Formula (1) above is explained below with reference to Fig. 6.
Fig. 6 is a schematic diagram of the atmospheric model formula. The image on the left of Fig. 6 is the image I(X) observed by the eye or the imaging device. This image consists of two parts: the first part, J(X)t(X), is the portion of the object's reflected light that survives scattering by airborne particles; the second part, A(1-t(X)), is the atmospheric ambient light produced by the particles scattering sunlight.
The transmission function t(X) in formula (1) is a function of the distance between the subject (object) and the imaging device (or human eye), expressed by the following formula (2):
t(X)=exp(-βd(X)) (2)
Here, d(X) is the distance between the object point X in the image and the imaging device, so t(X) is also called the "distance parameter". β is the atmospheric scattering coefficient, a constant.
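Formula (2) can be sketched directly; the value beta=0.1 below is an illustrative scattering coefficient, not one given in the text:

```python
import math

def transmission(d, beta=0.1):
    """Formula (2): t = exp(-beta * d), the fraction of object light
    surviving atmospheric scatter over distance d."""
    return math.exp(-beta * d)
```

At d = 0 all light survives (t = 1), and t decays exponentially toward 0 as d grows, matching the observation that distant objects fade into the ambient light.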
From formulas (1) and (2) it can be seen that the intensity J(X)t(X) of the object's reflected light reaching the imaging device decreases as the distance d(X) between the object and the device increases: the farther the object, the more severely its light is attenuated. Conversely, the intensity A(1-t(X)) of the atmospheric ambient light reaching the device increases with d(X): the farther away, the stronger this component, which is why the scene appears white at infinity.
Next, an existing atmospheric-model-based defogging method is described with reference to Fig. 7.
As shown in Fig. 7, Fig. 7(A) is the input image I(X), Fig. 7(B) is the output image J(X) after defogging, Fig. 7(C) is the computed sky point parameter A, and Fig. 7(D) is the computed t(X). Per Fig. 7, the atmospheric-model-based defogging algorithm can be summarized simply: given a single fogged input image I(X), obtain t(X) and the sky point parameter A, then obtain the defogged result image J(X) via formula (1).
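As one concrete way of obtaining t(X) and A, a minimal sketch in the style of the dark channel prior (one of the single-frame methods cited below) follows. The patch size, the omega factor, and the "top 0.1% of dark-channel pixels" rule are the usual choices from that literature, used here as assumptions rather than values given in this text:

```python
import numpy as np

def dark_channel(I, patch=15):
    """Per-pixel min over RGB, then a min filter over a patch x patch window."""
    mins = I.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    H, W = mins.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_A_and_t(I, omega=0.95, patch=15):
    """Estimate sky point A and transmission t from one fogged image I (HxWx3, [0,1])."""
    dc = dark_channel(I, patch)
    # A: average colour of the haziest pixels (largest dark-channel values)
    n = max(1, dc.size // 1000)
    idx = np.argsort(dc.ravel())[-n:]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # t: 1 - omega * dark channel of the A-normalised image
    t = 1.0 - omega * dark_channel(I / np.maximum(A, 1e-6), patch)
    return A, t
```

On a uniformly fog-coloured image, A is recovered as that colour and t collapses to 1 - omega everywhere, as expected.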
Existing single-frame defogging methods differ in how they obtain t(X) and A, and these algorithms currently achieve a good defogging effect, far better than algorithms based on simple image enhancement. Table 1 below describes three methods for obtaining t(X) and A.
Table 1: new atmospheric-model-based defogging algorithms
Compared with traditional defogging by simple image enhancement, these methods produce a better defogging effect. Their one large weakness, however, is that the algorithms are too slow: real-time performance is very poor. In the video surveillance field, defogging is a preprocessing step; compared with video compression or video content analysis, defogging must complete as quickly as possible and with minimal system cost. Table 2 lists processing times for defogging with some existing algorithms.
Table 2: processing times of atmospheric-model-based defogging
Practical video surveillance and video compression usually involve many frames, whereas the prior art mainly studies how to improve the defogging effect for a single frame and has not further studied applying single-frame defogging algorithms to multi-frame video. As Table 2 shows, a single 600x400 frame takes at least 10 seconds, so directly applying the prior-art single-frame defogging methods to moving images such as video would severely compromise real-time performance.
In addition, patent document 1 (Chinese patent CN101290680A) discloses a foggy-weather video sharpening method based on over-corrected histogram equalization. Specifically, it proposes a way to accelerate the application of histogram equalization defogging to video: the histogram equalization mapping table is reused across successive frames to improve processing speed. However, the single-frame defogging algorithm in patent document 1 is simple histogram equalization image enhancement, which is not designed for the defogging setting, and its effect is unsatisfactory.
That invention differs from the present invention in two respects. (1) Its single-frame defogging algorithm is simple histogram equalization image enhancement, not designed for the defogging setting, with unsatisfactory results; the single-frame defogging algorithm of the present invention is a recent atmospheric-model-based algorithm designed specifically for defogging, with better results than traditional image enhancement algorithms. (2) That invention reuses a histogram equalization mapping table between frames; the present invention exploits the physical fact that the scene depth and the atmospheric environment do not change globally between frames, and reuses the unchanged part of t(X), yielding a better defogging effect.
Summary of the invention
The object of the present invention is to provide a moving image processing device and a moving image processing method that, based on the atmospheric model, sharpen foggy-weather video, improve image visibility, and satisfy real-time processing requirements.
The present invention aims at applying the existing single-frame defogging algorithms quickly and effectively to multi-frame video surveillance and general video processing. Starting from the physical meaning of t(X), the inventors observed that t(X) depends only on the depth of the current scene and the atmospheric environment, so over most of the image t(X) does not change greatly between successive frames; only the foreground regions containing moving objects change, because an object's motion may directly change its distance to the imaging device. The sky point parameter A represents the RGB value of the sky point, or of the densest-fog point in the image, and likewise does not change between successive frames. That is, the key parameters t(X) and A of formula (1) depend on the current scene depth and the atmospheric environment, and over the whole image they are updated only locally and slowly. The present invention exploits this property of t(X) and A: in an atmospheric-model-based video defogging method, parts of t(X) and A are reused between successive frames. For the background of the image, the part that does not change between frames, t(X) is reused; for the foreground, the part that changes between frames, t(X) is recomputed. Both background and foreground reuse the sky point parameter A.
To achieve this object, the moving image processing device of the present invention generates an output moving image from an input moving image, and is characterized by comprising: an input unit that inputs a moving image of a subject taken by an external or built-in imaging device; a processing unit that performs image processing on the input moving image; and an output unit that outputs the processed moving image. The processing unit analyzes the frames of the input moving image and judges, for each frame, whether a scene change has occurred; frames with a scene change are treated as main frames and frames without a scene change as sub-frames. It compares each sub-frame with the main frame preceding it and divides the sub-frame into a changed foreground part and an unchanged background part. It processes each main frame using the distance parameter determined by the distance between the imaging device and the subject in that frame; it processes the background part of each sub-frame using the distance parameter of the preceding main frame; and it processes the foreground part of each sub-frame using a distance parameter computed from the change within that sub-frame.
The moving image processing device and method of the present invention are designed for the defogging setting and outperform traditional image enhancement algorithms. Exploiting the physical fact that the scene depth and atmospheric environment do not change globally across frames, they reuse the unchanged part of t(X) and thereby achieve a better defogging effect. The invention thus overcomes the slowness of existing atmospheric-model-based single-frame defogging methods when applied to video: it improves image visibility while satisfying real-time processing requirements, so it is especially suited to the video surveillance field and, of course, applicable to general video processing as well.
Brief description of the drawings
Fig. 1 is a block diagram showing the structure of a system having the moving image processing device of the present invention.
Fig. 2 illustrates embodiment 1 of the present invention: Fig. 2(a) is a flowchart, and Fig. 2(b) is a schematic diagram of dividing the frames of a video into core frames and normal frames.
Fig. 3 compares the results of image processing by the prior art and by embodiment 1.
Fig. 4 illustrates embodiment 2 of the present invention: Fig. 4(a) is the depth map of a core frame, Fig. 4(b) is the depth map of a normal frame, and Fig. 4(c) is the flowchart of embodiment 2.
Fig. 5 illustrates embodiment 3 of the present invention: Fig. 5(a) and Fig. 5(b) show the foreground motion region of the foreground part of a normal frame, and Fig. 5(c) is the flowchart of embodiment 3.
Fig. 6 is a schematic diagram of the atmospheric model formula.
Fig. 7 schematically shows the effect of atmospheric-model-based defogging.
Embodiment
First, the moving image processing device of the present invention is described with reference to Fig. 1.
Fig. 1 is a block diagram showing the structure of the moving image processing device of the present invention.
As shown in Fig. 1, the moving image processing device of the present invention comprises an input unit 100, a processing unit 200, an output unit 300, and a shared memory 90. The input unit 100 inputs a moving image, such as video, of a subject taken by an external or built-in imaging device (not shown). The processing unit 200 performs image processing on the video input from the input unit 100. The output unit 300 is, for example, a display, and shows the video processed by the processing unit 200. The shared memory 90 stores various data.
The processing unit 200 comprises a frame separation unit 10, a core frame parameter calculation unit 20, a core frame parameter storage unit 30, a normal frame parameter reuse unit 40, an image defogging unit 50, and a global control unit 60.
The frame separation unit 10 analyzes each frame of the video input from the input unit 100 and judges whether a scene change has occurred; frames with a scene change are treated as core frames (main frames) and frames without a scene change as normal frames (sub-frames).
The core frame parameter calculation unit 20 calculates the parameters of each input core frame, namely the transmission function t(X) serving as the distance parameter and the sky point parameter A, using an existing single-frame atmospheric-model-based defogging algorithm.
The core frame parameter storage unit 30 stores the core frame parameters calculated by the core frame parameter calculation unit 20.
The normal frame parameter reuse unit 40 divides each normal frame into a changed foreground part and an unchanged background part, and determines the parameters of each part separately. For the changed foreground part of a normal frame, it recomputes the transmission function t(X), while the sky point parameter A of the core frame is reused. For the unchanged background part, it recomputes neither t(X) nor A, but takes the t(X) and A of the core frame stored in the core frame parameter storage unit 30 (generally the most recent core frame before this normal frame) as the t(X) and A of the normal frame.
The image defogging unit 50 performs defogging and sharpening on core frames using the parameters calculated by the core frame parameter calculation unit 20, and on normal frames using the parameters obtained by the normal frame parameter reuse unit 40. The defogging processing itself uses the prior art.
The global control unit 60 globally controls the constituent units or modules of the processing unit 200.
The above describes one example of the moving image processing device of the present invention, but the present invention is of course not limited to it, and various modifications can be made within its spirit. For example, the processing unit 200 of Fig. 1 is composed of multiple units, but these units may also be integrated into a single module.
Next, embodiment 1 of the present invention is described with reference to Fig. 2, in which Fig. 2(a) is a flowchart and Fig. 2(b) is a schematic diagram of dividing the frames of a video into core frames and normal frames.
The atmospheric-model-based video defogging flow of the moving image processing device of the present invention is shown in Fig. 2(a). First, the input unit 100 inputs, from an external or built-in imaging device, a video of the subject taken by that device; this video is hereinafter called the input image I(X) (step S0).
Then, the frame separation unit 10 judges whether a scene change occurs at the current frame of the input video (step S1). This scene-change judgment can be implemented with existing algorithms.
In step S1, if the frame separation unit 10 judges that a scene change occurred, the current frame is taken as a core frame and step S2 follows; if no scene change occurred, the current frame is taken as a normal frame and step S3 follows.
Step S1 of Fig. 2(a) is explained further with reference to Fig. 2(b).
As shown in Fig. 2(b), the video comprises frames 1, 2, ..., N+2, where frame 1 and frame N, drawn with heavy solid lines, are core frames and the remaining frames are normal frames. Core frames and normal frames are defogged with different algorithms. The division into core frames and normal frames can be made in two ways. The first way chooses core frames at fixed intervals in the video, for example one core frame every 300 frames, the rest being normal frames. The second way takes the current frame as a core frame when a scene switch occurs and as a normal frame otherwise. A "scene switch" means the current scene has changed, i.e. the background of the environment has changed; many mature techniques and methods already exist for detecting scene switches.
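The two ways of choosing core frames can be combined in a small sketch. The interval of 300 frames comes from the text; the histogram-difference scene-cut test and its threshold are one of the many existing techniques alluded to, used here only as an illustrative assumption:

```python
def is_core_frame(frame_idx, hist, prev_hist, interval=300, threshold=0.5):
    """Core frame if at a fixed interval OR a scene switch is detected.

    hist / prev_hist: normalised grey-level histograms of the current
    and previous frame (lists summing to 1); a large L1 difference is
    taken as a scene switch.
    """
    if frame_idx % interval == 0:       # first way: fixed interval
        return True
    if prev_hist is None:               # no previous frame to compare with
        return True
    diff = sum(abs(a - b) for a, b in zip(hist, prev_hist))
    return diff > threshold             # second way: scene switch
```

Frame 0 and every 300th frame are core frames regardless; in between, only a large histogram change promotes a frame to core frame.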
Returning to Fig. 2(a): in step S2, the core frame parameter calculation unit 20 computes the transmission function t(X) and the sky point parameter A of the core frame using an existing single-frame atmospheric-model-based defogging algorithm. Then, per formula (1), defogging is performed on the core frame to obtain the image J(X) (step S7). Finally, the output unit 300 outputs the processed image J(X) (step S8).
In step S3, the normal frame parameter reuse unit 40 divides the normal frame into an unchanged background part (the static region) and a changed foreground part (the moving region). The segmentation into background and foreground can be realized with existing motion detection techniques; a fairly simple approach is to subtract successive frames to find the changed, moving part of the current frame.
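The simple frame-differencing approach just mentioned can be sketched as follows; the threshold value is an illustrative assumption, not one given in the text:

```python
import numpy as np

def foreground_mask(frame, prev_frame, thresh=0.08):
    """Split a normal frame into changed (foreground, True) and unchanged
    (background, False) pixels by subtracting the previous frame."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    if diff.ndim == 3:                # colour input: average over channels
        diff = diff.mean(axis=2)
    return diff > thresh
```

Pixels whose intensity change exceeds the threshold form the moving foreground; everything else is the static background whose t(X) can be reused.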
For the background part of the normal frame, step S4 is executed: the normal frame parameter reuse unit 40 reuses the t(X) of the corresponding region of the most recent core frame as the transmission function of this background part, denoted here t1(X). For the foreground part of the normal frame, step S5 is executed: the transmission function t(X) of this part is recomputed, denoted here t2(X), again using an existing single-frame atmospheric-model-based defogging algorithm.
After steps S4 and S5 complete, step S6 is executed: the transmission function t(X) of the current normal frame is obtained from the t1(X) of step S4 and the t2(X) of step S5 (for example by adding t1(X) and t2(X), each defined on its own region), and the sky point parameter of the core frame is used as the sky point parameter A of the normal frame. Then, per the atmospheric model formula (1), defogging of the normal frame is completed to obtain the image J(X) (step S7). Finally, the output unit 300 outputs the processed image J(X) (step S8).
Next, the processing speed of the moving image processing device of the present invention shown in Fig. 1 is described with reference to Table 3.
Table 3: comparison of processing speed between the prior art and the present invention
The figures in Table 3 are estimates. In the "prior art" row, the method of the paper "Single Image Haze Removal Using Dark Channel Prior" is assumed; its processing time for a single 600x400 image is 10 seconds.
In the "present invention" row, since scene switches are infrequent in video surveillance, assume one scene switch every 10 seconds and a capture rate of 30 frames/second; then within these 300 frames there is one core frame and 299 normal frames (in actual surveillance the average interval between scene switches exceeds 10 seconds, so core frames occur even less often). If the single-frame atmospheric-model-based defogging algorithm were applied to every frame, each frame would take 10 seconds. According to the present invention, the core frame takes 10 seconds; for a normal frame, considering that parts of a surveillance scene are static and often contain no moving object at all, assume moving objects occupy on average 5% of the image area, so a normal frame takes 10 x 5% = 0.5 seconds. The average time over the 300 frames is then (1 x 10 + 299 x 0.5) / 300 = 0.53 seconds. In actual surveillance, both the scene-switch frequency and the average area fraction of moving objects are lower, so the speed advantage of the present invention can improve further.
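The average-time estimate above can be checked with a few lines; the parameter values are exactly the assumptions stated in the text:

```python
def average_frame_time(core_time=10.0, frames=300, moving_fraction=0.05):
    """Estimated average per-frame time: one core frame per `frames`,
    each normal frame costing core_time * moving_fraction."""
    normal_time = core_time * moving_fraction
    return (core_time + (frames - 1) * normal_time) / frames
```

With the stated assumptions this gives about 0.53 seconds per frame, matching Table 3; setting moving_fraction to 1.0 degenerates to the prior-art 10 seconds per frame.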
As Table 3 shows, the moving image processing device of the present invention improves the speed with which atmospheric-model-based single-frame defogging methods can be applied to video.
Next, the image quality of the processing by the moving image processing device of the present invention shown in Fig. 1 is described with reference to Fig. 3.
Fig. 3 shows the images arising in the processing of the moving image processing device of the present invention shown in Fig. 1.
Fig. 3(a) is a core frame: the original fogged image, i.e. the input image I(X). Fig. 3(b) is a normal frame, assumed to be the 8th frame after the core frame; the region inside the black box at the bottom is the foreground part that has changed relative to the core frame of Fig. 3(a), and the regions outside the box are the unchanged background part. Fig. 3(c) is the result of applying an existing single-frame defogging algorithm to the normal frame of Fig. 3(b). Fig. 3(d) is the result obtained by the moving image processing device of Fig. 1 via the flow shown in Fig. 2.
Comparing Fig. 3(c), processed with the prior art, and Fig. 3(d), processed with the present invention, the visual effects are similar. Computing the PSNR between Fig. 3(c) and Fig. 3(d) gives PSNR = 31.365. PSNR is commonly used in image processing and video compression to quantify the difference between an original and a processed image; in video compression, quality is considered good when the PSNR lies between 30 and 50. That is, in defogging quality, the moving image processing device of the present invention shows no obvious degradation relative to the single-frame defogging algorithm.
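The PSNR figure quoted above is the standard definition, which can be sketched as follows (peak = 255 assumes 8-bit images):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images, in dB:
    10 * log10(peak^2 / MSE)."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give infinite PSNR; a uniform difference of 16 grey levels gives roughly 24 dB, so the 31.365 reported above indicates the two outputs are quite close.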
As described above, the moving image processing device of embodiment 1 of the present invention not only improves image visibility but also satisfies real-time processing requirements.
Next, embodiment 2 of the present invention is described with reference to Fig. 4, in which Fig. 4(a) is the depth map of a core frame, Fig. 4(b) is the depth map of a normal frame, and Fig. 4(c) is the flowchart of embodiment 2.
Embodiment 2 improves on embodiment 1; the difference lies in the processing of the foreground part of a normal frame. In embodiment 1, t(X) is recomputed for the entire foreground part of a normal frame. In embodiment 2, t(X) is recomputed for the foreground part only when a certain condition (detailed below) is met; otherwise the foreground is treated like the background, reusing the t(X) of the corresponding region of the core frame.
The condition is: whether the object's motion changes its depth in the image (i.e. its distance to the imaging device). If an object moves from a region far from the imaging device to a region near it, its motion changes its image depth, and t(X) is recomputed for that region; if the motion does not change the image depth, the corresponding t(X) of the core frame is reused for that region. The principle follows from formula (2) above.
In formula (2), the transmission function t(X) is a function of the depth d(X) and the atmospheric scattering coefficient β; each pixel of the image has its own d(X), while β is a constant determined by the atmospheric environment. From the t(X) obtained during defogging, the relative depth information -βd(X) of the current scene can be derived: although the absolute distance from an object to the imaging device or eye cannot be known, the relative depth -βd(X) can. Defogging therefore not only sharpens the image but also yields the relative depth of the scene. Using the relative scene depth obtained from defogging the core frame, it can be determined whether the motion of a moving object in the current normal frame produces a depth change, and the corresponding processing can be chosen.
In Fig. 4(a), the first (leftmost) image is the original fogged core frame; the second is the transmission function t(X) obtained during defogging of the core frame; the third is the relative depth map, i.e. the -βd(X) map obtained from t(X) via formula (2). The fourth image is the relative depth segmentation map (the "depth map"), obtained by dividing the pixel values of the third image into five regions 130, 131, 132, 133, 134, whose brightness from light to dark indicates distance changing from far to near. When embodiment 2 processes a normal frame, it uses this relative depth map of the scene obtained from the core frame.
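Deriving the relative depth from t(X) and quantising it into bands, as in the five-region depth map of Fig. 4(a), can be sketched as follows. The use of quantiles to place the five band boundaries is an illustrative assumption; the text does not specify how the regions are delimited:

```python
import numpy as np

def relative_depth(t, eps=1e-6):
    """-beta*d(X) = ln t(X), from formula (2); values nearer 0 mean
    nearer objects, more negative values mean farther objects."""
    return np.log(np.clip(t, eps, 1.0))

def depth_regions(t, n_regions=5):
    """Quantise the relative depth map into n_regions bands (labels
    0..n_regions-1), like the five-band segmentation of Fig. 4(a)."""
    d = relative_depth(t)
    edges = np.quantile(d, np.linspace(0, 1, n_regions + 1)[1:-1])
    return np.digitize(d, edges)
```

An object that stays within one band label has not changed depth, so its t(X) can be reused; crossing into another band triggers recomputation.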
In Fig. 4(b), the first figure on the left is an ordinary frame, in which box 140 marks a moving object in the foreground part of the frame. The position corresponding to box 140 (the white area) is marked in the second figure of Fig. 4(b). When box 140 moves within the same depth-of-field region 133, i.e. along the direction of the black arrow, no depth change occurs; in that case the transmission function t(X) of the foreground part represented by box 140 is not recalculated, and that part is processed as if it were background. If box 140 moves along the direction of the white arrow, i.e. from region 133 of the depth map to region 134, a depth change occurs; the part represented by box 140 is then processed as foreground, and its transmission function t(X) is recalculated.
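The decision of Fig. 4(b) — reuse t(X) when the object stays inside one depth region, recompute when it crosses into another — can be sketched with a hypothetical helper; the interface below (bounding boxes over a label map) is illustrative, not from the patent:

```python
import numpy as np

def needs_recompute(region_map, box_before, box_after):
    """True if a moving foreground box crosses into a new depth region.

    region_map: 2-D array of depth-region labels (cf. Fig. 4(a)).
    box_before/box_after: (row0, col0, row1, col1) bounding boxes of
    the object before and after its motion (exclusive end indices).
    Motion within the same region(s) -> False (reuse core-frame t(X));
    motion into a region not previously occupied -> True (recompute).
    """
    def covered(box):
        r0, c0, r1, c1 = box
        return set(np.unique(region_map[r0:r1, c0:c1]).tolist())
    return not covered(box_after) <= covered(box_before)
```

Movement along the "black arrow" of Fig. 4(b) keeps the covered label set unchanged and returns False; movement along the "white arrow" introduces a new label and returns True.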
Fig. 4(c) is the flowchart of embodiment 2 of the present invention. Compared with the flowchart of Fig. 2(a), there are two differences: first, step S2 of Fig. 2(a) is replaced by step S2-1; second, step S5 of Fig. 2(a) is replaced by steps S5-1 to S5-3. The other, identical steps are therefore omitted, and only steps S2-1 and S5-1 to S5-3 are described.
As shown in Fig. 4(c), in step S2-1, in addition to computing t(X) for the full image and the atmospheric light parameter A (the RGB vector of a sky point) as in step S2 of Fig. 2(b), the depth map of the current scene must also be computed.
In step S5-1, the ordinary-frame parameter reuse unit 40 judges whether the motion of the foreground part of the ordinary frame produces a depth change. If it does (YES in step S5-1), step S5-2 is then executed: t(X) is recalculated for the foreground part of the ordinary frame in which the depth change occurs, yielding t2(X). If it does not (NO in step S5-1), step S5-3 is then executed: the transmission function t(X) of the core frame is reused for the foreground part without depth change, yielding t2(X).
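Steps S5-1 to S5-3 amount to assembling t2(X) for each ordinary frame from the core frame's t(X) plus an optional foreground recomputation. A sketch under assumed interfaces (the function and argument names below are illustrative):

```python
import numpy as np

def transmission_for_subframe(t_core, fg_mask, depth_changed, recompute_t):
    """Assemble t2(X) for an ordinary frame (steps S5-1..S5-3).

    t_core: the core frame's transmission map t(X).
    fg_mask: boolean mask of the frame's foreground (moving) part.
    depth_changed: result of the S5-1 depth-change test.
    recompute_t: callable returning a recomputed transmission map for
    the foreground (stand-in for the single-frame algorithm of S5-2).
    """
    t2 = t_core.copy()                 # background always reuses core t(X)
    if depth_changed:                  # S5-2: recompute the foreground
        t2[fg_mask] = recompute_t()[fg_mask]
    # S5-3: no depth change -> foreground also reuses the core t(X)
    return t2
```

Only the `depth_changed` branch ever pays the cost of the single-frame algorithm, which is where embodiment 2 gains its speed.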
Compared with embodiment 1 of the present invention, embodiment 2 can further improve the processing speed, but the moving image processing device shown in Fig. 1 requires additional memory to store the depth-map information of the current scene. Moreover, because the depth of field must be segmented, and whether the foreground motion crosses depth-map regions is used as the criterion for recalculating the ordinary frame's t(X), the t(X) finally obtained for the ordinary frame is slightly less precise.
Embodiment 3 of the present invention will now be described with reference to Fig. 5. Fig. 5(a) and Fig. 5(b) show the foreground moving regions of the foreground part of an ordinary frame, and Fig. 5(c) is the flowchart of embodiment 3.
In practical applications, video defogging is a preprocessing step, and it is usually followed by more critical system tasks such as video compression and network transmission. In such a multitask system, the computation time each part may consume is limited, so the area for which t(X) is recalculated in each ordinary frame is also limited. Let T(budget) be the algorithm time allocated to the defogging and sharpening of an ordinary frame in the multitask system; from this parameter, the maximum pixel area for which t(X) may be updated (the maximum update area) can be obtained, denoted by the parameter Max_Update_Size (0 <= Max_Update_Size <= image size). If the total area of the foreground moving regions in an ordinary frame exceeds Max_Update_Size, only the larger moving regions are chosen, selected in order from largest to smallest so as to satisfy the Max_Update_Size limit; t(X) is recalculated and updated only for the chosen regions, while for the smaller-area moving regions t(X) is not recalculated and the corresponding part of the core frame is reused.
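The largest-first selection under the Max_Update_Size budget can be sketched as follows (a hypothetical interface; the patent specifies only the selection rule, and whether selection stops at the first region that does not fit or keeps trying smaller ones is not stated — the greedy variant below keeps trying):

```python
def select_regions(regions, max_update_size):
    """Pick moving regions to recompute t(X) under a per-frame budget.

    regions: list of (region_id, area_in_pixels) pairs.
    Regions are taken largest-first while the cumulative selected area
    stays within max_update_size (0 <= max_update_size <= image size);
    the remaining regions reuse the core frame's t(X).
    """
    chosen, total = [], 0
    for rid, area in sorted(regions, key=lambda r: r[1], reverse=True):
        if total + area <= max_update_size:
            chosen.append(rid)
            total += area
    return chosen
```

With max_update_size = 0 the function selects nothing, which reproduces the extreme case described below in which every ordinary frame reuses the core frame's parameters entirely.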
As shown in Fig. 5(a), the foreground part of this ordinary frame contains two larger moving regions, indicated by white boxes. To satisfy the Max_Update_Size requirement, as shown in Fig. 5(b), the transmission function t(X) is recalculated only for the larger left region of the two, while the smaller region on the right is treated as background and reuses the transmission function t(X) of the corresponding part of the core frame.
In addition, a common extreme case is Max_Update_Size = 0: the defogging and sharpening process then computes t(X) and the atmospheric light parameter A only once, for the core frame, and all remaining ordinary frames reuse the core frame's t(X) and A.
As shown in Fig. 5(c), the flowchart of embodiment 3 differs from that of embodiment 1 (Fig. 2(a)) in only one place: steps S5-4, S5-5 and S5-6 replace the original step S5. The other, identical steps are therefore omitted, and only steps S5-4 to S5-6 are described.
In step S5-4, according to the sizes of the different moving regions in the foreground part, regions are selected in order from largest to smallest, for as long as the total area of the selected regions does not exceed the parameter Max_Update_Size.
Then, step S5-5 is executed: the transmission function t(X) of the larger regions selected in step S5-4 is recalculated, yielding the t2(X) corresponding to those regions.
Then, step S5-6 is executed: for the smaller regions not selected in step S5-4, the transmission function t(X) of the core frame is reused, yielding the t2(X) corresponding to those regions.
According to embodiment 3, the running time spent on each part of the foreground of an ordinary frame can be controlled, which has great practical value.
The moving image processing device and moving image processing method of the present invention are especially suitable for the field of video surveillance, and can also be used in any device related to images or video, such as imaging devices, decoders, and cameras.

Claims (8)

1. A moving image processing device that performs image processing on an input moving image and outputs the result, characterized in that
the device comprises: an input unit that inputs a moving image of a subject captured by an external or built-in imaging device; a processing unit that performs image processing on the input moving image; and an output unit that outputs the moving image after image processing;
wherein the processing unit:
analyzes a plurality of frames in the input moving image, judges whether there is a scene change in each frame, takes a frame with a scene change as a prime frame and a frame without a scene change as a subframe,
compares the subframe with the prime frame preceding the subframe, and divides the subframe into a changed foreground part and an unchanged background part,
for the prime frame, calculates a distance parameter related to the distance between the imaging device and the subject in the prime frame and an atmospheric light parameter of the prime frame, and performs image processing according to the distance parameter and the atmospheric light parameter, the distance parameter related to the distance between the imaging device and the subject defining a transmission function of the air medium, and the atmospheric light parameter being vector data of the image RGB values of an arbitrary point in the sky,
and, for the background part of the subframe, performs image processing according to the distance parameter of the prime frame preceding the subframe and the atmospheric light parameter of that prime frame, and, for the foreground part of the subframe, performs image processing according to a distance parameter calculated on the basis of the change in the subframe and the atmospheric light parameter of the prime frame.
2. The moving image processing device according to claim 1, characterized in that
for the changed part of the foreground part of the subframe, the processing unit, according to a depth map having a plurality of regions, recalculates the distance parameter of the foreground part of the subframe when the movement of the subject in the foreground part of the subframe is from a certain region of the depth map to a region other than that certain region, and uses the same distance parameter as the distance parameter of the prime frame preceding the subframe when the movement of the subject in the foreground part of the subframe is within the same region,
wherein the depth map is formed by segmenting the foreground part of the subframe according to the depth of the distance from the imaging device to the subject.
3. The moving image processing device according to claim 2, characterized in that
when there are a plurality of changed parts in the foreground part of the subframe, the processing unit selects the part with the largest area among the plurality of changed parts, and preferentially calculates the distance parameter for that largest-area part.
4. The moving image processing device according to claim 3, characterized in that
the processing unit can preset a maximum update area, being the maximum value of the area for which the distance parameter can be calculated in the foreground part of the subframe, and when the area of a changed part of the foreground part exceeds the maximum update area, the changed part of the foreground part exceeding the maximum update area uses the same distance parameter as the distance parameter of the prime frame preceding the subframe.
5. A moving image processing method that performs image processing on an input moving image and outputs the result, characterized by comprising the following steps:
analyzing a plurality of frames in the input moving image, judging whether there is a scene change in each frame, taking a frame with a scene change as a prime frame and a frame without a scene change as a subframe;
comparing the subframe with the prime frame preceding the subframe, and dividing the subframe into a changed foreground part and an unchanged background part;
for the prime frame, calculating a distance parameter related to the distance between an imaging device and a subject in the prime frame and an atmospheric light parameter of the prime frame, and performing image processing according to the distance parameter and the atmospheric light parameter, the distance parameter related to the distance between the imaging device and the subject defining a transmission function of the air medium, and the atmospheric light parameter being vector data of the image RGB values of an arbitrary point in the sky;
for the background part of the subframe, performing image processing according to the distance parameter of the prime frame preceding the subframe and the atmospheric light parameter of that prime frame, and, for the foreground part of the subframe, performing image processing according to a distance parameter calculated on the basis of the change in the subframe and the atmospheric light parameter of the prime frame.
6. The moving image processing method according to claim 5, characterized in that
for the changed part of the foreground part of the subframe, according to a depth map having a plurality of regions, the distance parameter of the foreground part of the subframe is recalculated when the movement of the subject in the foreground part of the subframe is from a certain region of the depth map to a region other than that certain region, and the same distance parameter as the distance parameter of the prime frame preceding the subframe is used when the movement of the subject in the foreground part of the subframe is within the same region,
wherein the depth map is formed by segmenting the foreground part of the subframe according to the depth of the distance from the imaging device to the subject.
7. The moving image processing method according to claim 6, characterized in that
when there are a plurality of changed parts in the foreground part of the subframe, the part with the largest area among the plurality of changed parts is selected, and the distance parameter is preferentially calculated for that largest-area part.
8. The moving image processing method according to claim 7, characterized in that
a maximum update area, being the maximum value of the area for which the distance parameter can be calculated in the foreground part of the subframe, can be preset, and when the area of a changed part of the foreground part exceeds the maximum update area, the changed part of the foreground part exceeding the maximum update area uses the same distance parameter as the distance parameter of the prime frame preceding the subframe.
CN201110037718.4A 2011-02-12 2011-02-12 Moving image processing device and moving image processing method Expired - Fee Related CN102637293B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110037718.4A CN102637293B (en) 2011-02-12 2011-02-12 Moving image processing device and moving image processing method
JP2012012013A JP2012168936A (en) 2011-02-12 2012-01-24 Animation processing device and animation processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110037718.4A CN102637293B (en) 2011-02-12 2011-02-12 Moving image processing device and moving image processing method

Publications (2)

Publication Number Publication Date
CN102637293A CN102637293A (en) 2012-08-15
CN102637293B true CN102637293B (en) 2015-02-25

Family

ID=46621679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110037718.4A Expired - Fee Related CN102637293B (en) 2011-02-12 2011-02-12 Moving image processing device and moving image processing method

Country Status (2)

Country Link
JP (1) JP2012168936A (en)
CN (1) CN102637293B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226809B (en) * 2012-01-31 2015-11-25 株式会社日立制作所 Image demister and image haze removal method
KR101394361B1 (en) 2012-11-21 2014-05-14 중앙대학교 산학협력단 Apparatus and method for single image defogging using alpha matte estimation and image fusion
CN103077500B (en) * 2012-12-30 2016-03-30 贺江涛 The defogging method capable of view data and device
CN104112251A (en) * 2013-04-18 2014-10-22 信帧电子技术(北京)有限公司 Method and device for defogging video image data
JP6324192B2 (en) * 2014-04-25 2018-05-16 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
JP6228670B2 (en) * 2014-06-12 2017-11-08 Eizo株式会社 Fog removing device and image generation method
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse
CN107451969B (en) * 2017-07-27 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107845078B (en) * 2017-11-07 2020-04-14 北京航空航天大学 Unmanned aerial vehicle image multithreading sharpening method assisted by metadata
CN107945546A (en) * 2017-11-17 2018-04-20 嘉兴四维智城信息科技有限公司 Expressway visibility early warning system and method for traffic video automatic identification
CN107808368A (en) * 2017-11-30 2018-03-16 中国电子科技集团公司第三研究所 A kind of color image defogging method under sky and ocean background
CN109166081B (en) * 2018-08-21 2020-09-04 安徽超远信息技术有限公司 Method for adjusting target brightness in video visibility detection process
JP7421273B2 (en) 2019-04-25 2024-01-24 キヤノン株式会社 Image processing device and its control method and program
CN110866486B (en) * 2019-11-12 2022-06-10 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN116249018B (en) * 2023-05-11 2023-09-08 深圳比特微电子科技有限公司 Dynamic range compression method and device for image, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7403202B1 (en) * 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models
CN101290680A (en) * 2008-05-20 2008-10-22 西安理工大学 Foggy day video frequency image clarification method based on histogram equalization overcorrection restoration
CN101699509A (en) * 2009-11-11 2010-04-28 耿则勋 Method for recovering atmosphere fuzzy remote image with meteorological data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007042040A (en) * 2005-07-29 2007-02-15 Hexagon:Kk Three-dimensional stereoscopic vision generator
US20090251468A1 (en) * 2008-04-03 2009-10-08 Peled Nachshon Animating of an input-image to create personal worlds

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dong Huiying et al. Image restoration method for deteriorated weather based on a physical model and its application. Journal of Northeastern University (Natural Science). 2005, Vol. 26, No. 3, pp. 217-219. *

Also Published As

Publication number Publication date
CN102637293A (en) 2012-08-15
JP2012168936A (en) 2012-09-06

Similar Documents

Publication Publication Date Title
CN102637293B (en) Moving image processing device and moving image processing method
CN102750674B (en) Video image defogging method based on self-adapting allowance
Jiang et al. Night video enhancement using improved dark channel prior
CN105631831B (en) Video image enhancing method under the conditions of a kind of haze
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
US9361670B2 (en) Method and system for image haze removal based on hybrid dark channel prior
CN102831591B (en) Gaussian filter-based real-time defogging method for single image
CN103186887B (en) Image demister and image haze removal method
CN107451966B (en) Real-time video defogging method implemented by guiding filtering through gray level image
EP2740100A1 (en) Method and system for removal of fog, mist or haze from images and videos
CN104867121B (en) Image Quick demisting method based on dark primary priori and Retinex theories
CN103747213A (en) Traffic monitoring video real-time defogging method based on moving targets
CN103218778A (en) Image and video processing method and device
CN102082896B (en) Method for treating video of liquid crystal display device
CN102231791A (en) Video image defogging method based on image brightness stratification
CN102881018B (en) Method for generating depth maps of images
CN103226809B (en) Image demister and image haze removal method
CN108305225A (en) Traffic monitoring image rapid defogging method based on dark channel prior
CN105023246B (en) A kind of image enchancing method based on contrast and structural similarity
CN109345479B (en) Real-time preprocessing method and storage medium for video monitoring data
CN108629750A (en) A kind of night defogging method, terminal device and storage medium
CN110738624B (en) Area-adaptive image defogging system and method
Luan et al. Fast video dehazing using per-pixel minimum adjustment
CN111028184B (en) Image enhancement method and system
CN114897720A (en) Image enhancement device and method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150225

Termination date: 20190212