CN104537618A - Image processing method and device - Google Patents
Abstract
The invention provides an image processing method and device, applied to a gate system or an electronic police system. The method comprises: capturing a vehicle in a monitored area to obtain a captured image; determining the termination pixel position of the vehicle in the captured image at the end of capture, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge of the monitored area and that point's pixel position in the corresponding captured image, together with the vehicle's displacement during capture; calculating the moved pixel value corresponding to the displacement; and performing directed deblurring on the captured image according to the vehicle's travel direction at capture and the moved pixel value. With this technical scheme, a sharp image of the vehicle can be obtained under weak-light conditions.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and device.
Background technology
In intelligent traffic control systems, gate systems or electronic police systems installed at intersections can automatically capture vehicle images, from which information such as vehicle characteristics and traffic violations is obtained by analysis.
To achieve round-the-clock traffic monitoring and prevent the dark images caused by low-light conditions at night from impairing analysis, the related art proposes equipping gate systems and electronic police systems with devices such as fill lights and flash lamps to supplement the lighting during image capture.

However, fill-light equipment requires additional hardware cost, consumes considerable energy, and causes serious light pollution.
Summary of the invention
In view of this, the invention provides an image processing method and device capable of obtaining a sharp image of a vehicle under low-light conditions.

To achieve the above object, the invention provides the following technical scheme:
According to a first aspect of the invention, an image processing method is proposed, applied to a gate system or an electronic police system, comprising:

capturing the vehicle when it travels to a preset capture position in a monitored area, to obtain a captured image, wherein a predetermined spacing exists between the preset capture position and the lower edge of the monitored area;

determining the termination pixel position of the vehicle in the captured image at the end of capture, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the corresponding captured image, together with the displacement of the vehicle during capture;

calculating the moved pixel value corresponding to the displacement, from the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position;

performing directed deblurring on the captured image according to the vehicle's travel direction at capture and the moved pixel value.
According to a second aspect of the invention, an image processing apparatus is proposed, applied to a gate system or an electronic police system, comprising:

a capture unit, configured to capture the vehicle when it travels to a preset capture position in a monitored area, to obtain a captured image, wherein a predetermined spacing exists between the preset capture position and the lower edge of the monitored area;

a determining unit, configured to determine the termination pixel position of the vehicle in the captured image at the end of capture, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the corresponding captured image, together with the displacement of the vehicle during capture;

a computing unit, configured to calculate the moved pixel value corresponding to the displacement, from the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position;

a processing unit, configured to perform directed deblurring on the captured image according to the vehicle's travel direction at capture and the moved pixel value.
It can be seen from the above technical scheme that, by obtaining in advance the correspondence between the distance from any point in the monitored area to the lower edge of the monitored area and that point's pixel position in the captured image, and by acquiring in real time the vehicle's travel direction and displacement at capture, the invention can accurately calculate the moved pixel value of the pixels corresponding to the vehicle in the captured image. Directed deblurring then yields a sharp vehicle image, so that even in low-light conditions a sharp vehicle image can still be obtained by extending the shutter time.
Brief description of the drawings
Fig. 1 is a schematic diagram of captures at different vehicle speeds and shutter times;

Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment of the invention;

Fig. 3 is a flowchart of another image processing method according to an exemplary embodiment of the invention;

Figs. 4A-4B are schematic diagrams of a camera and its monitored area according to an exemplary embodiment of the invention;

Fig. 5 is a schematic diagram of pixel movement according to an exemplary embodiment of the invention;

Fig. 6 is a schematic diagram of directed blurring according to an exemplary embodiment of the invention;

Fig. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the invention;

Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment of the invention.
Embodiment
Fig. 1 is a schematic diagram of captures at different vehicle speeds and shutter times. Fig. 1(a) shows the captured image at a vehicle speed of 0 km/h and a shutter time of 10000 μs: because the speed is low and the shutter time long, the resulting image is both bright and sharp. Fig. 1(b) shows the captured image at 60 km/h and 10000 μs: because the speed is high and the shutter time long, the image is bright but visibly blurred, so its sharpness is very low. Fig. 1(c) shows the captured image at 60 km/h and 4000 μs: because the speed is high and the shutter time short, the image is free of blur but very dark, which likewise impairs the sharpness of the image content.
It can be seen that obtaining higher brightness requires a longer shutter time, which in turn blurs the image. Therefore, to reconcile the brightness and the blurring of the captured image, the present invention deblurs the captured image, so that a bright, sharp captured image can be obtained under low-light conditions.
To describe the present invention further, the following embodiments are provided:
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment of the invention. As shown in Fig. 2, the method is applied to a gate system or an electronic police system and may comprise the following steps:

Step 202: when the vehicle travels to a preset capture position in the monitored area, capture the vehicle to obtain a captured image, wherein a predetermined spacing exists between the preset capture position and the lower edge of the monitored area.

In the present embodiment, by predefining the preset capture position in the monitored area and performing the capture when the vehicle travels to this position, the pixel movement of the vehicle in the captured image can be deduced from the vehicle's displacement during capture, and the corresponding directed deblurring can then be performed.
Step 204: determine the termination pixel position of the vehicle in the captured image at the end of capture, according to the predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the corresponding captured image, together with the displacement of the vehicle during capture.

In the present embodiment, the predefined correspondence can be calculated from the specification parameters and installation parameters of the camera of the gate system or electronic police system.

The specification parameters may comprise: the focal length of the camera, the dimension of the camera's photosensitive element along the vehicle's direction of travel, and the unit pixel size of the photosensitive element along that direction. The installation parameters may comprise: the mounting height and tilt angle of the camera.

In the present embodiment, the displacement of the vehicle during capture can be obtained in several ways. In one exemplary embodiment, the vehicle's travel speed at capture is detected and combined with the shutter time used for the capture, giving: displacement = travel speed × shutter time. The travel speed may be measured by peripherals such as speed radar or a vehicle detector, or obtained directly by the camera through video-based speed measurement or similar means.
In the present embodiment, the correspondence can be expressed as a formula in the following quantities:

Wherein, D is the pixel position of the point in the corresponding captured image, d is the distance from the point to the lower edge, h is the mounting height of the camera, A is the tilt angle between the camera's shooting direction and the horizontal, f is the focal length of the camera, v_h is the dimension of the photosensitive element along the vehicle's direction of travel, and σ is the unit pixel size of the photosensitive element along that direction. Further, a is the distance between the camera and the lower edge, u is the object distance, v is the image distance, B is the base angle of the isosceles triangle whose waist is the line segment between the camera and the lower edge and whose height is the object distance u, b is the distance between the camera and a specified point on the base of that isosceles triangle, the line to this specified point passing between the camera and the preset capture position, x is the distance between the lower edge and the specified point, and X is the angle opposite x in the triangle whose sides are a, b and x.
Step 206: calculate the moved pixel value corresponding to the displacement, from the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position.

Step 208: perform directed deblurring on the captured image according to the vehicle's travel direction at capture and the moved pixel value.

In the present embodiment, the blur in the captured image arises because the pixels corresponding to the vehicle move during capture, causing pixel superposition between neighbouring pixels. Directed deblurring can therefore be realised as follows: according to the moved pixel value, determine the number of pixel superpositions, along the travel direction, for each pixel of the vehicle in the captured image; in the direction opposite to the travel direction, determine all superposed pixels corresponding to that superposition count for each pixel; and subtract from each pixel's value the values of all its corresponding superposed pixels.

In the present embodiment, since the camera and the ground of the monitored area remain stationary relative to each other, only the pixels corresponding to the vehicle move and blur. These blurred pixels can be identified, and deblurring realised, as follows: subtract from the value of each pixel in the captured image the value of its neighbouring pixel in the travel direction, to obtain a processed image; then measure the variation in sharpness of the processed image statistically, and determine all pixels of the vehicle in the captured image from the statistics.

In the present embodiment, based on the sharp captured image obtained after deblurring, image recognition operations such as license-plate recognition and vehicle-logo recognition can further be performed on the vehicle, realising automated intelligent traffic management.

It can be seen from the above embodiment that, by obtaining in advance the correspondence between the distance from any point in the monitored area to the lower edge of the monitored area and that point's pixel position in the captured image, and by acquiring in real time the vehicle's travel direction and displacement at capture, the invention can accurately calculate the moved pixel value of the pixels corresponding to the vehicle in the captured image. Directed deblurring then yields a sharp vehicle image, so that even in low-light conditions a sharp vehicle image can still be obtained by extending the shutter time.
Fig. 3 is a flowchart of another image processing method according to an exemplary embodiment of the invention; as shown in Fig. 3, it may comprise the following steps:

1. Pre-processing stage

Step 302: according to the specification parameters and installation parameters of the camera in the gate system or electronic police system, calculate the predefined correspondence, i.e. the correspondence between the distance from any point in the monitored area to the lower edge of the monitored area and that point's pixel position in the corresponding captured image.

As shown in Fig. 4A, the camera lens points from upper left toward lower right, so the extension of the lens field intersects the ground and defines the corresponding monitored area; point O in the figure marks the lower edge of this monitored area (i.e. the lower edge of this camera's captured image).
In this embodiment, the camera's specification parameters may comprise: the focal length f of the camera (not shown), the dimension v_h of the camera's photosensitive element along the vehicle's direction of travel (i.e. the "right to left" direction in Fig. 4A, which appears as the "top to bottom" direction in the captured image), and the unit pixel size σ of the photosensitive element along that direction (for example, when the travel direction is "top to bottom" in the image, the unit pixel size σ is the height of each pixel in the vertical direction). The camera's installation parameters comprise: the mounting height h and the tilt angle A (expressed here as the angle between the lens direction and the horizontal; other representations, such as the angle between the lens direction and the vertical, could obviously also be used).
Then, for any point S in the monitored area at distance d_S from point O, the correspondence between this distance d_S and the pixel position D_S of point S in the captured image (i.e. the pixel row counted downward from the upper edge of the captured image) can be calculated from the camera's specification and installation parameters above.
Specifically, the computation in one exemplary embodiment is as follows:

1) from the mounting height h and the tilt angle A of the camera, the length of the corresponding hypotenuse a can be obtained;

2) combining the object distance u and the image distance v of the camera with the dimension v_h of the photosensitive element along the travel direction, the object distance u and the image distance v can then be calculated;

3) since the straight lines corresponding to v_h and x_S are parallel to each other, the correspondence between D_S and x_S can be obtained;

4) from the triangle relations, the correspondence between d_S and x_S can be obtained;

5) the correspondence between D_S and d_S is thereby determined.
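For illustration only, the following Python sketch shows how a ground-distance-to-pixel-row correspondence of this kind can be derived from camera parameters. It uses a plain pinhole-camera model rather than the isosceles-triangle construction above, and every name in it is hypothetical, including the assumed parameter `a_ground` (the horizontal distance from the camera base to point O):

```python
import math

def ground_to_pixel_row(d, h, tilt_deg, f, sigma, a_ground):
    """Map a ground distance d (metres from the lower edge O) to a
    pixel-row offset relative to O's image position.

    Pinhole sketch under assumed conventions: h = mounting height,
    tilt_deg = depression of the optical axis below horizontal,
    f = focal length, sigma = pixel pitch along the travel direction,
    a_ground = horizontal distance from the camera base to point O
    (all lengths in metres)."""
    def image_y(ground_x):
        depression = math.atan2(h, ground_x)        # angle down to the point
        phi = math.radians(tilt_deg) - depression   # angle off the optical axis
        return f * math.tan(phi)                    # offset on the image plane
    return (image_y(a_ground + d) - image_y(a_ground)) / sigma
```

Under this convention point O itself maps to row offset 0, and points farther from the lower edge map to larger offsets.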
Step 304: determine the preset capture position in the monitored area, and the starting pixel position corresponding to this preset capture position in the captured image.

In the present embodiment, assume the preset capture position is the point M shown in Fig. 4B, and the actual distance between point M and point O is d_M. Setting d_S = d_M and D_S = D_M, and substituting the actual distance d_M into the predefined correspondence above, the corresponding starting pixel position D_M can be obtained.
2. Real-time processing stage
Step 306: when the vehicle travels to the preset capture position, capture it and obtain the captured image.

Step 308: according to the travel direction and displacement during capture, determine the termination pixel position of the vehicle in the captured image.

In the present embodiment, since the camera's installation position is fixed, the corresponding vehicle travel direction is in practice also fixed. The travel direction can therefore be pre-configured on the camera or on a background device in advance; alternatively, it can be generated automatically from the lane lines, road edges and the like in the monitored area as acquired by the camera.

In the present embodiment, by collecting the vehicle's travel speed at the time of capture and the shutter time used for the capture, the travel distance can be calculated as: travel distance = travel speed × shutter time. Referring to Fig. 4B, since the camera captures when the vehicle reaches the preset capture position M, the travel distance is MN; that is, point M has moved to point N by the end of the capture, which causes the image blur.
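The travel-distance product can be sketched in a couple of lines; the unit conversions (km/h to m/s, μs to s) are the only subtlety:

```python
def displacement_m(speed_kmh, shutter_us):
    """Travel distance = travel speed x shutter time, converting the
    speed from km/h to m/s and the shutter time from microseconds
    to seconds."""
    return (speed_kmh / 3.6) * (shutter_us * 1e-6)

mn = displacement_m(60, 10000)   # 60 km/h over 10000 us: about 0.167 m
```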
Therefore, similarly to step 304, when calculating the termination pixel position D_N, the actual distance between point N and point O is d_N = d_M − MN. Setting d_S = d_N and D_S = D_N, and substituting the actual distance d_N into the predefined correspondence above, the corresponding termination pixel position D_N can be obtained.
It should be noted that, as mentioned previously, since each pixel corresponding to the vehicle undergoes the travel distance "from point M to point N" during capture, all of these pixels are "directionally blurred" along the travel direction.

For example, Fig. 5 illustrates pixel movement in the vertical direction. Assume that in the vertical direction there are pixel rows y, y+1, y+2, y+3, y+4, y+5 and y+6, and that the first five rows contain the corresponding pixels (x,y), (x,y+1), (x,y+2), (x,y+3) and (x,y+4), as shown in Table 1.
| Line number | Pixels |
|---|---|
| y | (x,y) |
| y+1 | (x,y+1) |
| y+2 | (x,y+2) |
| y+3 | (x,y+3) |
| y+4 | (x,y+4) |

Table 1
If these pixels move down by 2 pixels during the capture, then after moving the first pixel they occupy rows y+1, y+2, y+3, y+4 and y+5; but since their earlier positions shown in Table 1 have already left traces on the captured image, pixels are superposed in each pixel row, as shown in Table 2.
| Line number | Pixels |
|---|---|
| y | (x,y) |
| y+1 | (x,y), (x,y+1) |
| y+2 | (x,y+1), (x,y+2) |
| y+3 | (x,y+2), (x,y+3) |
| y+4 | (x,y+3), (x,y+4) |
| y+5 | (x,y+4) |

Table 2
After moving the second pixel, these pixels occupy rows y+2, y+3, y+4, y+5 and y+6, superposing pixels in each pixel row once again, as shown in Table 3.
| Line number | Pixels |
|---|---|
| y | (x,y) |
| y+1 | (x,y), (x,y+1) |
| y+2 | (x,y+1), (x,y+2), (x,y+3) |
| y+3 | (x,y+2), (x,y+3), (x,y+4) |
| y+4 | (x,y+3), (x,y+4), (x,y+5) |
| y+5 | (x,y+4), (x,y+5) |
| y+6 | (x,y+5) |

Table 3
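The superposition of Tables 1 to 3 can be reproduced with a short simulation over a single pixel column. Note that, following the tables, pixel values are summed rather than averaged; the function below is an illustrative sketch, not part of the original disclosure:

```python
def directional_blur_column(orig, d):
    """Blur one pixel column along the travel direction: when the
    vehicle moves d rows during exposure, each output row accumulates
    every original pixel that passes through it (rows are summed, as
    in Tables 1-3)."""
    blurred = [0] * (len(orig) + d)      # motion extends the column by d rows
    for r in range(len(blurred)):
        for k in range(d + 1):           # original pixel that started k rows up
            if 0 <= r - k < len(orig):
                blurred[r] += orig[r - k]
    return blurred

print(directional_blur_column([10, 20, 30, 40, 50], 2))
# rows y..y+6: [10, 30, 60, 90, 120, 90, 50]
```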
Therefore, because the vehicle moved during capture, the pixels that originally occupied only rows y, y+1, y+2, y+3 and y+4 are shifted and superposed onto other pixel rows.

Meanwhile, due to this pixel movement, the original five rows y to y+4 extend to seven rows y to y+6, wherein:
Rows y and y+1 lie on the starting side of the travel direction, and the number of additional pixels superposed on the original pixel is less than the moved pixel value. For example, with a moved pixel value of 2, the pixel (x,y) in row y has no other pixel superposed on it, while the pixel (x,y+1) in row y+1 has one pixel, (x,y), superposed on it. The pixels in rows y and y+1 of the captured image are therefore called start-edge pixels; they share the same blur-generation and deblurring treatment.

In rows y+2 to y+4, the number of additional pixels superposed on the original pixel equals the moved pixel value. For example, with a moved pixel value of 2, each pixel in these rows has pixels from 2 other rows superposed on it; e.g. the pixel (x,y+2) in row y+2 has (x,y+1) and (x,y) superposed on it. The pixels in rows y+2 to y+4 of the captured image are therefore called region pixels; they share the same blur-generation and deblurring treatment.

Rows y+5 and y+6 lie on the terminating side of the travel direction, and the number of additional pixels superposed on the original pixel is less than the moved pixel value. For example, with a moved pixel value of 2, row y+5 originally contains no vehicle pixel but has pixels (x,y+4) and (x,y+3) superposed on it; similarly, row y+6 originally contains no vehicle pixel but has pixel (x,y+4) superposed on it. The pixels in rows y+5 and y+6 of the captured image are therefore called end-edge pixels; they share the same blur-generation and deblurring treatment.
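The three classes above can be read off mechanically from the row index; the following is a small illustrative sketch, assuming the moved pixel value is smaller than the number of vehicle rows:

```python
def classify_rows(n_rows, d):
    """Label each row of the blurred column (n_rows vehicle rows
    moved by d pixels): the first d rows are start-edge pixels, the
    last d rows end-edge pixels, and the rest region pixels, which
    each superpose exactly d additional pixels."""
    labels = []
    for r in range(n_rows + d):
        if r < d:
            labels.append("start-edge")
        elif r >= n_rows:
            labels.append("end-edge")
        else:
            labels.append("region")
    return labels

print(classify_rows(5, 2))   # rows y..y+6 of the Fig. 5 example
```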
Thus, from the characteristics of each class of pixel (start-edge pixels, end-edge pixels and region pixels) and the pixel-movement and summation rules of Tables 1 to 3, the corresponding directed blurring formulas can be obtained.
For start-edge pixels, the directed blurring formula employed is taken over N = {0, 1, …, D−1};

For end-edge pixels, the directed blurring formula employed is taken over N = {M+1, …, M+D}, where M is the position, along the travel direction, of the pixel whose distance from the terminating edge equals the moved pixel value (for example, in the embodiment shown in Fig. 5, (x_M, y_M) is the pixel whose distance from the pixel row y+6 at the terminating edge equals the moved pixel value 2, i.e. pixel (x, y+4));

For region pixels, the directed blurring formula employed requires (x_k, y_k) to satisfy three conditions:

1) it moves along the directed straight line y = ax + b, where a and b are coefficients;

2) (x − x_k)² + (y − y_k)² ≤ D²;

3) (x − x_k) × a > 0, i.e. (x_k, y_k) lies on the ray from (x, y) in the direction of motion.
Taking a license-plate image as an example, the vehicle travelling during capture is equivalent to applying the blurring described by the directed blurring formulas above. Fig. 6(a) shows the image corresponding to the situation of Table 1, before any pixel movement and superposition has occurred; Fig. 6(b) shows the blurred image corresponding to the pixel superposition of Table 3.
Step 310: calculate the moved pixel value corresponding to the displacement, from the starting pixel position and the termination pixel position.

Step 312: determine all pixels corresponding to the vehicle in the captured image.

In the present embodiment, since the pixels corresponding to the vehicle are all blurred while the ground, as background, is not, a rough deblurring pass can be applied to the whole captured image, and the pixels corresponding to the vehicle identified from the resulting change in sharpness of the processed image.

Specifically, the pixel value of each pixel in the captured image can be reduced by the pixel value of its neighbouring pixel in the travel direction, to obtain a processed image; the variation in sharpness of the processed image is then measured statistically, and all pixels of the vehicle in the captured image are determined from the statistics.
Here, the neighbouring pixel in the travel direction may be the adjacent pixel in either the positive or the opposite travel direction. For example, row y+3 in Fig. 5 holds the superposed pixel value of (x,y+4), (x,y+3) and (x,y+2); its neighbour may then be row y+2, holding the superposed value of (x,y+3), (x,y+2) and (x,y+1), or row y+4, holding the superposed value of (x,y+5), (x,y+4) and (x,y+3).

For example, taking the neighbour to be row y+2 with the superposed value of (x,y+3), (x,y+2) and (x,y+1), subtracting the superposed values of rows y+3 and y+2 from each other yields the difference (x,y+4) − (x,y+1).
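The neighbour subtraction can be sketched as a simple backward difference along one column; this is an illustrative reading of step 312, assuming the previous row as the neighbour in the travel direction:

```python
def subtract_travel_neighbour(col):
    """Subtract from each pixel the value of its neighbouring pixel
    in the travel direction, producing the rough 'processed image'
    whose sharpness statistics mark the blurred vehicle pixels."""
    return [col[i] - col[i - 1] for i in range(1, len(col))]

diffs = subtract_travel_neighbour([10, 30, 60, 90, 120, 90, 50])
```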
Step 314: according to the moving direction and the moved pixel value, perform directed deblurring on each pixel corresponding to the vehicle in the captured image.

In the present embodiment, based on the directed blurring model above, the reverse directed deblurring in fact proceeds as follows: according to the moved pixel value, determine the number of pixel superpositions, along the travel direction, for each pixel of the vehicle in the captured image; in the direction opposite to the travel direction, determine all superposed pixels corresponding to that superposition count for each pixel; and subtract from each pixel's value the values of all its corresponding superposed pixels.

Specifically, for each type of pixel, i.e. start-edge pixels, end-edge pixels and region pixels, the following formulas can be used to perform directed deblurring.
A. Directed deblurring is performed on each start-edge pixel of the vehicle in the captured image according to the following formula, with n = {1, …, D−1}:

Wherein the distance between a start-edge pixel and the starting edge of the travel direction is not greater than the moved pixel value; (x_n, y_n) is a start-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel reached from the start-edge pixel (x_n, y_n) after moving k pixels along the travel direction, and D is the moved pixel value;
B. Directed deblurring is performed on each end-edge pixel of the vehicle in the captured image according to the following formula, with n = {M+1, …, M+D}:

Wherein the distance between an end-edge pixel and the terminating edge of the travel direction is not greater than the moved pixel value; (x_n, y_n) is an end-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel reached from the end-edge pixel (x_n, y_n) after moving k pixels along the travel direction, D is the moved pixel value, and M is the position, along the travel direction, of the pixel whose distance from the terminating edge equals the moved pixel value;
C. Directed deblurring is performed on each region pixel of the vehicle in the captured image according to the following formula:

Wherein the distances between a region pixel and both the starting edge and the terminating edge of the travel direction are greater than the moved pixel value; (x, y) is a region pixel, R(x, y) is its pixel value after deblurring, and S(x, y) is its pixel value before deblurring.
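Taken together, formulas A to C amount to a forward recursion along the travel direction: each row's original value is its captured value minus the already-recovered values superposed onto it. The following sketch assumes the summed (not averaged) blur model of Tables 1 to 3 and is an illustrative reconstruction, not the literal claimed formulas:

```python
def directional_deblur_column(blurred, d):
    """Recover the original column by subtracting, at each row, the
    up-to-d already-recovered pixels superposed onto it from the
    opposite side of the travel direction."""
    recovered = [0] * len(blurred)
    for r in range(len(blurred)):
        s = blurred[r]
        for k in range(1, d + 1):
            if r - k >= 0:
                s -= recovered[r - k]
        recovered[r] = s
    return recovered[:len(blurred) - d]      # drop the d trailing edge rows

print(directional_deblur_column([10, 30, 60, 90, 120, 90, 50], 2))
# -> [10, 20, 30, 40, 50], the original rows y..y+4
```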
In addition, the vehicle pixels obtained after deblurring can be extracted and combined with the non-vehicle pixels of the original captured image, yielding an image in which all pixels are sharp.
Fig. 7 shows a schematic structural diagram of an electronic device according to an exemplary embodiment of the application. Referring to Fig. 7, at the hardware level the electronic device comprises a processor, an internal bus, a network interface, memory and non-volatile storage, and may of course also comprise hardware required for other services. The processor reads the corresponding computer program from the non-volatile storage into memory and runs it, forming the image processing apparatus at the logical level. Besides such a software implementation, the application does not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the executing agents of the following processing flow are not limited to logical units, and may also be hardware or logic devices.
Referring to Fig. 8, in a software implementation the image processing apparatus may comprise a capture unit, a determining unit, a computing unit and a processing unit. Wherein:

the capture unit is configured to capture the vehicle when it travels to the preset capture position in the monitored area, to obtain the captured image, wherein a predetermined spacing exists between the preset capture position and the lower edge of the monitored area;

the determining unit is configured to determine the termination pixel position of the vehicle in the captured image at the end of capture, according to the predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the corresponding captured image, together with the displacement of the vehicle during capture;

the computing unit is configured to calculate the moved pixel value corresponding to the displacement, from the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position;

the processing unit is configured to perform directed deblurring on the captured image according to the vehicle's travel direction at capture and the moved pixel value.
Optionally, the predefined correspondence is calculated from the specification parameters and installation parameters of the camera of the gate system or electronic police system.
Optionally,
the specification parameters comprise: the focal length of the camera, the size of the camera's photosensitive element along the vehicle travel direction, and the unit pixel size of the photosensitive element along the vehicle travel direction; and
the installation parameters comprise: the mounting height and the tilt angle of the camera.
Optionally, the correspondence is expressed by the following formula:
Here, D is the pixel value corresponding to the arbitrary point in the captured image; d is the distance from that point to the lower edge; h is the mounting height of the camera; A is the tilt angle between the camera's shooting direction and the horizontal; f is the focal length of the camera; v_h is the size of the photosensitive element along the vehicle travel direction; and σ is the unit pixel size of the photosensitive element along the vehicle travel direction. Further, a is the distance between the camera and the lower edge; u is the object distance; v is the image distance; B is the base angle of the isosceles triangle whose waist is the line segment between the camera and the lower edge and whose height is the object distance u; b is the distance between the camera and a specified point on the base of that isosceles triangle, the preset capture position lying on the line between the camera and this specified point; x is the distance between the lower edge and the specified point; and X is the angle opposite x in the triangle whose sides are a, b and x.
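Such a correspondence ultimately maps a ground distance to a pixel coordinate through the camera geometry. As an illustration only (a generic pinhole-camera sketch under assumed parameter names, not the patent's actual formula), the mapping could look like:

```python
import math

def ground_to_pixel_offset(d_ground: float, h: float, A_deg: float,
                           f_mm: float, sigma_mm: float) -> float:
    """Map a ground distance (metres from the point directly below the
    camera) to a vertical pixel offset from the image centre.

    A standard pinhole model for a camera mounted at height h with tilt
    angle A below the horizontal; an illustrative stand-in for the
    patent's correspondence formula, which is not reproduced here.
    """
    # Depression angle from the horizontal down to the ground point.
    theta = math.atan2(h, d_ground)
    # Angle between the optical axis and the ray to the point.
    delta = math.radians(A_deg) - theta
    # Project onto the sensor and convert millimetres to pixels.
    return f_mm * math.tan(delta) / sigma_mm
```

A point lying exactly on the optical axis (d_ground = h / tan A) maps to offset 0; more distant points map to increasingly positive offsets.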
Optionally, the displacement of the vehicle during the capture is calculated from the vehicle's travel speed at the time of capture and the shutter time used for the capture.
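For example (a trivial sketch with assumed names; the patent does not give code), the displacement and the resulting movement pixel value can be computed as:

```python
def displacement_m(speed_mps: float, shutter_s: float) -> float:
    """Distance the vehicle travels while the shutter is open."""
    return speed_mps * shutter_s

def movement_pixel_value(start_px: int, end_px: int) -> int:
    """Movement pixel value D: the pixel span between the starting pixel
    position (the preset capture position) and the ending pixel position."""
    return abs(end_px - start_px)

# Example: 20 m/s (72 km/h) with a 10 ms shutter gives 0.2 m of travel.
d_metres = displacement_m(20.0, 0.010)
```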
Optionally, the processing unit is configured to:
determine, according to the movement pixel value, the superposition count of each pixel of the vehicle in the captured image along the travel direction;
determine, in the direction opposite to the travel direction, all superposed pixels corresponding to each pixel's superposition count; and
subtract from the pixel value of each pixel the pixel values of all its corresponding superposed pixels.
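A minimal one-dimensional sketch of this superposition-subtraction scheme (the function names and the box blur model are assumptions, not the patent's): each blurred pixel is treated as the sum of up to D true pixel values behind it in the travel direction, and the true values are recovered front to back by subtracting the already-recovered superposed pixels:

```python
def blur_row(R, D):
    """Forward model: each blurred pixel is the sum of up to D
    consecutive true pixels ending at that position."""
    return [sum(R[max(0, n - D + 1):n + 1]) for n in range(len(R))]

def deblur_row(S, D):
    """Recover the true row R from the blurred row S by subtracting,
    at each position, the up-to-(D-1) previously recovered pixels that
    were superposed onto it (start-edge pixels have fewer overlaps,
    region pixels exactly D - 1)."""
    R = []
    for n, s in enumerate(S):
        overlap = sum(R[max(0, n - D + 1):n])
        R.append(s - overlap)
    return R
```

With this forward model, deblur_row(blur_row(r, D), D) recovers r exactly, mirroring the start-edge case (fewer than D overlaps) and the region case (exactly D overlaps) described above.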
Optionally,
the processing unit performs directional deblurring on each start-edge pixel of the vehicle in the captured image according to the following equation:
n ∈ {1, …, D−1};
where a start-edge pixel is a pixel whose distance from the start-side edge along the travel direction is not greater than the movement pixel value; (x_n, y_n) is a start-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel obtained by moving the start-edge pixel (x_n, y_n) by k pixels along the travel direction, and D is the movement pixel value;
the processing unit performs directional deblurring on each end-edge pixel of the vehicle in the captured image according to the following equation:
n ∈ {M+1, …, M+D};
where an end-edge pixel is a pixel whose distance from the end-side edge along the travel direction is not greater than the movement pixel value; (x_n, y_n) is an end-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel obtained by moving the end-edge pixel (x_n, y_n) by k pixels along the travel direction, D is the movement pixel value, and M is the index of the pixel whose distance from the end-side edge along the travel direction equals the movement pixel value;
the processing unit performs directional deblurring on each region pixel of the vehicle in the captured image according to the following equation:
where a region pixel is a pixel whose distances from both the start-side edge and the end-side edge along the travel direction are greater than the movement pixel value; (x, y) is a region pixel, R(x, y) is its pixel value after deblurring, and S(x, y) is its pixel value before deblurring.
Optionally, the apparatus further comprises:
an image processing unit, configured to subtract from the pixel value of each pixel in the captured image the pixel value of its neighbouring pixel along the travel direction, to obtain a processed image; and
a sharpness statistics unit, configured to collect statistics on the sharpness variation of the processed image and to determine, from the statistical result, all pixels of the vehicle in the captured image.
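As a hedged sketch of this difference-and-statistics step (the names and the column-wise statistic are assumptions), the processed image can be formed by subtracting each pixel's neighbour along the travel direction, and abrupt changes in the resulting values indicate where the vehicle's pixels lie:

```python
def directional_difference(img):
    """Subtract from each pixel the value of its neighbour one step
    along the travel direction (here: the next pixel in the row)."""
    return [
        [row[i] - row[i + 1] for i in range(len(row) - 1)]
        for row in img
    ]

def column_sharpness(diff):
    """Sum of absolute differences per column, a crude sharpness
    statistic: columns with abrupt value changes are candidates for
    the vehicle's edges."""
    cols = len(diff[0])
    return [sum(abs(row[c]) for row in diff) for c in range(cols)]
```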
The above are merely preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (14)
1. An image processing method, applied to a gate system or an electronic police system, characterized by comprising:
when a vehicle travels to a preset capture position in a monitored area, capturing the vehicle to obtain a captured image, wherein there is a predetermined separation distance between the preset capture position and the lower edge of the monitored area;
determining the ending pixel position corresponding to the vehicle in the captured image when the capture ends, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the corresponding captured image, and according to the displacement of the vehicle during the capture;
calculating the movement pixel value corresponding to the displacement, according to the starting pixel position corresponding to the preset capture position in the captured image and the ending pixel position; and
performing directional deblurring on the captured image according to the vehicle's travel direction at the time of capture and the movement pixel value.
2. The method according to claim 1, characterized in that the predefined correspondence is calculated from the specification parameters and installation parameters of the camera of the gate system or electronic police system;
wherein the specification parameters comprise: the focal length of the camera, the size of the camera's photosensitive element along the vehicle travel direction, and the unit pixel size of the photosensitive element along the vehicle travel direction; and
the installation parameters comprise: the mounting height and the tilt angle of the camera.
3. The method according to claim 2, characterized in that the correspondence is expressed by the following formula:
Here, D is the pixel value corresponding to the arbitrary point in the captured image; d is the distance from that point to the lower edge; h is the mounting height of the camera; A is the tilt angle between the camera's shooting direction and the horizontal; f is the focal length of the camera; v_h is the size of the photosensitive element along the vehicle travel direction; and σ is the unit pixel size of the photosensitive element along the vehicle travel direction. Further, a is the distance between the camera and the lower edge; u is the object distance; v is the image distance; B is the base angle of the isosceles triangle whose waist is the line segment between the camera and the lower edge and whose height is the object distance u; b is the distance between the camera and a specified point on the base of that isosceles triangle, the preset capture position lying on the line between the camera and this specified point; x is the distance between the lower edge and the specified point; and X is the angle opposite x in the triangle whose sides are a, b and x.
4. The method according to claim 1, characterized in that the displacement of the vehicle during the capture is calculated from the vehicle's travel speed at the time of capture and the shutter time used for the capture.
5. The method according to claim 1, characterized in that performing directional deblurring on the captured image according to the vehicle's travel direction at the time of capture and the movement pixel value comprises:
determining, according to the movement pixel value, the superposition count of each pixel of the vehicle in the captured image along the travel direction;
determining, in the direction opposite to the travel direction, all superposed pixels corresponding to each pixel's superposition count; and
subtracting from the pixel value of each pixel the pixel values of all its corresponding superposed pixels.
6. The method according to claim 5, characterized in that
directional deblurring is performed on each start-edge pixel of the vehicle in the captured image according to the following equation:
n ∈ {1, …, D−1};
where a start-edge pixel is a pixel whose distance from the start-side edge along the travel direction is not greater than the movement pixel value; (x_n, y_n) is a start-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel obtained by moving the start-edge pixel (x_n, y_n) by k pixels along the travel direction, and D is the movement pixel value;
directional deblurring is performed on each end-edge pixel of the vehicle in the captured image according to the following equation:
n ∈ {M+1, …, M+D};
where an end-edge pixel is a pixel whose distance from the end-side edge along the travel direction is not greater than the movement pixel value; (x_n, y_n) is an end-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel obtained by moving the end-edge pixel (x_n, y_n) by k pixels along the travel direction, D is the movement pixel value, and M is the index of the pixel whose distance from the end-side edge along the travel direction equals the movement pixel value;
directional deblurring is performed on each region pixel of the vehicle in the captured image according to the following equation:
where a region pixel is a pixel whose distances from both the start-side edge and the end-side edge along the travel direction are greater than the movement pixel value; (x, y) is a region pixel, R(x, y) is its pixel value after deblurring, and S(x, y) is its pixel value before deblurring.
7. The method according to claim 5, characterized by further comprising:
subtracting from the pixel value of each pixel in the captured image the pixel value of its neighbouring pixel along the travel direction, to obtain a processed image; and
collecting statistics on the sharpness variation of the processed image, and determining, from the statistical result, all pixels of the vehicle in the captured image.
8. An image processing apparatus, applied to a gate system or an electronic police system, characterized by comprising:
a capture unit, configured to capture the vehicle and obtain a captured image when a vehicle travels to a preset capture position in a monitored area, wherein there is a predetermined separation distance between the preset capture position and the lower edge of the monitored area;
a determining unit, configured to determine the ending pixel position corresponding to the vehicle in the captured image when the capture ends, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the corresponding captured image, and according to the displacement of the vehicle during the capture;
a computing unit, configured to calculate the movement pixel value corresponding to the displacement, according to the starting pixel position corresponding to the preset capture position in the captured image and the ending pixel position; and
a processing unit, configured to perform directional deblurring on the captured image according to the vehicle's travel direction at the time of capture and the movement pixel value.
9. The apparatus according to claim 8, characterized in that the predefined correspondence is calculated from the specification parameters and installation parameters of the camera of the gate system or electronic police system;
wherein the specification parameters comprise: the focal length of the camera, the size of the camera's photosensitive element along the vehicle travel direction, and the unit pixel size of the photosensitive element along the vehicle travel direction; and
the installation parameters comprise: the mounting height and the tilt angle of the camera.
10. The apparatus according to claim 9, characterized in that the correspondence is expressed by the following formula:
Here, D is the pixel value corresponding to the arbitrary point in the captured image; d is the distance from that point to the lower edge; h is the mounting height of the camera; A is the tilt angle between the camera's shooting direction and the horizontal; f is the focal length of the camera; v_h is the size of the photosensitive element along the vehicle travel direction; and σ is the unit pixel size of the photosensitive element along the vehicle travel direction. Further, a is the distance between the camera and the lower edge; u is the object distance; v is the image distance; B is the base angle of the isosceles triangle whose waist is the line segment between the camera and the lower edge and whose height is the object distance u; b is the distance between the camera and a specified point on the base of that isosceles triangle, the preset capture position lying on the line between the camera and this specified point; x is the distance between the lower edge and the specified point; and X is the angle opposite x in the triangle whose sides are a, b and x.
11. The apparatus according to claim 8, characterized in that the displacement of the vehicle during the capture is calculated from the vehicle's travel speed at the time of capture and the shutter time used for the capture.
12. The apparatus according to claim 8, characterized in that the processing unit is configured to:
determine, according to the movement pixel value, the superposition count of each pixel of the vehicle in the captured image along the travel direction;
determine, in the direction opposite to the travel direction, all superposed pixels corresponding to each pixel's superposition count; and
subtract from the pixel value of each pixel the pixel values of all its corresponding superposed pixels.
13. The apparatus according to claim 12, characterized in that
the processing unit performs directional deblurring on each start-edge pixel of the vehicle in the captured image according to the following equation:
n ∈ {1, …, D−1};
where a start-edge pixel is a pixel whose distance from the start-side edge along the travel direction is not greater than the movement pixel value; (x_n, y_n) is a start-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel obtained by moving the start-edge pixel (x_n, y_n) by k pixels along the travel direction, and D is the movement pixel value;
the processing unit performs directional deblurring on each end-edge pixel of the vehicle in the captured image according to the following equation:
n ∈ {M+1, …, M+D};
where an end-edge pixel is a pixel whose distance from the end-side edge along the travel direction is not greater than the movement pixel value; (x_n, y_n) is an end-edge pixel, R(x_n, y_n) is its pixel value after deblurring, S(x_n, y_n) is its pixel value before deblurring, (x_k, y_k) is the pixel obtained by moving the end-edge pixel (x_n, y_n) by k pixels along the travel direction, D is the movement pixel value, and M is the index of the pixel whose distance from the end-side edge along the travel direction equals the movement pixel value;
the processing unit performs directional deblurring on each region pixel of the vehicle in the captured image according to the following equation:
where a region pixel is a pixel whose distances from both the start-side edge and the end-side edge along the travel direction are greater than the movement pixel value; (x, y) is a region pixel, R(x, y) is its pixel value after deblurring, and S(x, y) is its pixel value before deblurring.
14. The apparatus according to claim 12, characterized by further comprising:
an image processing unit, configured to subtract from the pixel value of each pixel in the captured image the pixel value of its neighbouring pixel along the travel direction, to obtain a processed image; and
a sharpness statistics unit, configured to collect statistics on the sharpness variation of the processed image and to determine, from the statistical result, all pixels of the vehicle in the captured image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410817295.1A CN104537618B (en) | 2014-12-24 | 2014-12-24 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104537618A true CN104537618A (en) | 2015-04-22 |
CN104537618B CN104537618B (en) | 2018-01-16 |
Family
ID=52853137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410817295.1A Active CN104537618B (en) | 2014-12-24 | 2014-12-24 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104537618B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102075678A (en) * | 2009-11-20 | 2011-05-25 | 鸿富锦精密工业(深圳)有限公司 | System and method for deblurring motion blurred images |
CN102131079A (en) * | 2011-04-20 | 2011-07-20 | 杭州华三通信技术有限公司 | Method and device for eliminating motion blur of image |
CN102436639A (en) * | 2011-09-02 | 2012-05-02 | 清华大学 | Image acquiring method for removing image blurring and image acquiring system |
CN102752484A (en) * | 2012-06-25 | 2012-10-24 | 清华大学 | Fast non-global uniform image shaking blur removal algorithm and system thereof |
US20130271615A1 (en) * | 2009-03-27 | 2013-10-17 | Canon Kabushiki Kaisha | Method of removing an artefact from an image |
Non-Patent Citations (2)
Title |
---|
李沛秦, 谢剑斌, 陈章永, 程永茂, 刘通: "A fast deblurring algorithm for target regions", Signal Processing (《信号处理》) * |
程姝, 赵志刚, 蒋静, 陈莹莹, 潘振宽: "Restoration of single motion-blurred images and its application", Journal of Qingdao University (Natural Science Edition) (《青岛大学学报(自然科学版)》) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107633490A (en) * | 2017-09-19 | 2018-01-26 | 北京小米移动软件有限公司 | Image processing method, device and storage medium |
CN107633490B (en) * | 2017-09-19 | 2023-10-03 | 北京小米移动软件有限公司 | Image processing method, device and storage medium |
CN114208164A (en) * | 2019-08-16 | 2022-03-18 | 影石创新科技股份有限公司 | Method for dynamically controlling video coding rate, intelligent equipment and motion camera |
CN114208164B (en) * | 2019-08-16 | 2024-02-09 | 影石创新科技股份有限公司 | Method for dynamically controlling video coding rate, intelligent device and moving camera |
CN114882709A (en) * | 2022-04-22 | 2022-08-09 | 四川云从天府人工智能科技有限公司 | Vehicle congestion detection method and device and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN104537618B (en) | 2018-01-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||