CN104537618B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN104537618B
CN104537618B (application CN201410817295.1A)
Authority
CN
China
Prior art keywords
pixel
vehicle
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410817295.1A
Other languages
Chinese (zh)
Other versions
CN104537618A (en)
Inventor
丁志杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201410817295.1A priority Critical patent/CN104537618B/en
Publication of CN104537618A publication Critical patent/CN104537618A/en
Application granted granted Critical
Publication of CN104537618B publication Critical patent/CN104537618B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The present invention provides an image processing method and device, applied to a checkpoint (bayonet) system or an electronic police system. The method includes: capturing a vehicle in a monitored area to obtain a captured image; determining the termination pixel position of the vehicle in the captured image at the end of the capture, based on a predefined correspondence between the distance from any point in the monitored area to the area's lower edge and that point's pixel position in the captured image, together with the vehicle's displacement during the capture; calculating the moved-pixel count corresponding to that displacement; and performing directional de-blurring on the captured image according to the vehicle's travel direction and the moved-pixel count. With this technical scheme, a clear image of the vehicle can be obtained under low-light conditions.

Description

Image processing method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and device.
Background technology
In intelligent traffic control systems, a checkpoint system or electronic police system installed at an intersection can automatically photograph vehicles, so that vehicle features, violations, and other information can be obtained by analysis.
To achieve round-the-clock traffic monitoring and prevent the low-light environment at night from darkening images and impairing analysis, the related art proposes equipping checkpoint and electronic police systems with fill lights, flash lamps, and similar devices to illuminate the scene during image capture.
However, fill-light equipment adds hardware cost, consumes considerable energy, and causes serious light pollution.
The content of the invention
In view of this, the present invention provides an image processing method and device that can obtain a clear image of a vehicle under low-light conditions.
To achieve the above object, the present invention provides the following technical scheme:
According to a first aspect of the present invention, an image processing method is proposed, applied to a checkpoint system or an electronic police system, including:
when a vehicle travels to a preset capture position in a monitored area, capturing the vehicle to obtain a captured image, where there is a preset separation distance between the preset capture position and the lower edge of the monitored area;
determining the termination pixel position of the vehicle in the captured image at the end of the capture, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the captured image, together with the vehicle's displacement during the capture;
calculating the moved-pixel count corresponding to the displacement, according to the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position;
performing directional de-blurring on the captured image according to the vehicle's travel direction during the capture and the moved-pixel count.
According to a second aspect of the present invention, an image processing device is proposed, applied to a checkpoint system or an electronic police system, including:
a capture unit, configured to capture a vehicle when it travels to a preset capture position in a monitored area and obtain a captured image, where there is a preset separation distance between the preset capture position and the lower edge of the monitored area;
a determining unit, configured to determine the termination pixel position of the vehicle in the captured image at the end of the capture, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the captured image, together with the vehicle's displacement during the capture;
a computing unit, configured to calculate the moved-pixel count corresponding to the displacement, according to the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position;
a processing unit, configured to perform directional de-blurring on the captured image according to the vehicle's travel direction during the capture and the moved-pixel count.
It can be seen from the above technical scheme that, by obtaining in advance the correspondence between the distance from any point in the monitored area to the area's lower edge and that point's pixel position in the captured image, and by obtaining in real time the vehicle's travel direction and displacement during the capture, the present invention can accurately calculate the moved-pixel count of the pixels corresponding to the vehicle in the captured image, so that a clear vehicle image can be obtained through directional de-blurring. Even in a low-light environment, a clear vehicle image can still be obtained by extending the shutter time.
Brief description of the drawings
Fig. 1 is a schematic diagram of captures under different vehicle speeds and shutter times;
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment of the present invention;
Fig. 3 is a flowchart of another image processing method according to an exemplary embodiment of the present invention;
Figs. 4A-4B are schematic diagrams of a video camera and its monitored area according to an exemplary embodiment of the present invention;
Fig. 5 is a schematic diagram of pixel movement according to an exemplary embodiment of the present invention;
Fig. 6 is a schematic diagram of directional blurring according to an exemplary embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic equipment according to an exemplary embodiment of the present invention;
Fig. 8 is a block diagram of an image processing device according to an exemplary embodiment of the present invention.
Embodiment
Fig. 1 is a schematic diagram of captures under different vehicle speeds and shutter times. Fig. 1(a) shows a captured image at a speed of 0 km/h and a shutter time of 10000 μs; because the speed is low and the shutter time long, both the brightness and the sharpness of the captured image are high. Fig. 1(b) shows a captured image at a speed of 60 km/h and a shutter time of 10000 μs; because the speed is high and the shutter time long, the captured image is bright but exhibits obvious blurring, so its sharpness is very low. Fig. 1(c) shows a captured image at a speed of 60 km/h and a shutter time of 4000 μs; because the speed is high and the shutter time short, the captured image is free of blur but very dark, which likewise impairs the sharpness of the picture content.
It can be seen that obtaining higher brightness requires a longer shutter time, but this causes the image to blur. Therefore, in order to reconcile the brightness of the captured image with the blurring problem, the present invention de-blurs the captured image, so that a bright and sharp captured image can be obtained in a low-light environment.
To further describe the present invention, the following embodiments are provided:
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment of the present invention. As shown in Fig. 2, the method is applied to a checkpoint system or an electronic police system and may include the following steps:
Step 202: when a vehicle travels to a preset capture position in a monitored area, the vehicle is captured to obtain a captured image, where there is a preset separation distance between the preset capture position and the lower edge of the monitored area.
In this embodiment, by predefining the preset capture position in the monitored area so that the capture operation is performed when the vehicle travels to that position, the pixel movement of the vehicle in the captured image can be deduced from the vehicle's displacement during the capture, and the corresponding directional de-blurring can then be performed.
Step 204: according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and that point's pixel position in the captured image, together with the vehicle's displacement during the capture, the termination pixel position of the vehicle in the captured image at the end of the capture is determined.
In this embodiment, the predefined correspondence can be calculated from the specification parameters and installation parameters of the video camera of the checkpoint system or electronic police system.
The specification parameters may include: the focal length of the video camera, the dimension of the camera's photosensitive element along the vehicle's travel direction, and the unit pixel dimension of the photosensitive element along the vehicle's travel direction. The installation parameters may include: the mounting height and tilt angle of the video camera.
In this embodiment, the vehicle's displacement during the capture can be obtained in several ways. As an exemplary embodiment, it can be calculated from the detected travel speed of the vehicle at the time of capture and the shutter time used for the capture, according to "displacement = travel speed × shutter time". The travel speed can be measured by peripherals such as a velocity radar or a vehicle detector, or obtained directly by the video camera through video speed measurement.
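As a rough illustration of this arithmetic (the function name and unit choices are ours, not the patent's), the displacement during the exposure can be computed as:

```python
def displacement_m(speed_kmh: float, shutter_us: float) -> float:
    """Displacement = travel speed x shutter time.

    speed_kmh: vehicle speed (e.g. from radar or video speed measurement), km/h.
    shutter_us: shutter (exposure) time in microseconds.
    Returns the distance travelled during the exposure, in meters.
    """
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    shutter_s = shutter_us / 1_000_000  # us -> s
    return speed_ms * shutter_s

# A car at 60 km/h exposed for 10000 us moves about 0.167 m.
print(round(displacement_m(60, 10000), 3))  # -> 0.167
```

This makes concrete why the long shutter times of Fig. 1 smear a moving vehicle over many pixels.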
In this embodiment, the correspondence can be expressed as an equation in the following quantities:
where D is the pixel position of any point in the corresponding captured image; d is the distance between that point and the lower edge; h is the mounting height of the video camera; A is the tilt angle between the camera's shooting direction and the horizontal direction; f is the focal length of the video camera; v_h is the dimension of the photosensitive element along the vehicle's travel direction; and σ is the unit pixel dimension of the photosensitive element along the vehicle's travel direction. Further, a is the distance between the video camera and the lower edge; u is the object distance and v the image distance; B is the base angle of the isosceles triangle whose waist is the segment of length a between the video camera and the lower edge and whose height is the object distance u; b is the distance between the video camera and a specific point on the base of that isosceles triangle, the specific point lying on the line between the video camera and the preset capture position; x is the distance between the lower edge and the specific point; and X is the angle opposite x in the triangle with sides a, b and x.
Step 206: according to the starting pixel position corresponding to the preset capture position in the captured image and the termination pixel position, the moved-pixel count corresponding to the displacement is calculated.
Step 208: according to the vehicle's travel direction during the capture and the moved-pixel count, directional de-blurring is performed on the captured image.
In this embodiment, the captured image is blurred because the pixels corresponding to the vehicle move during the capture, causing neighboring pixels to be superposed on one another. The directional de-blurring can therefore be accomplished as follows: according to the moved-pixel count, determine the superposition count, along the travel direction, of each pixel of the vehicle in the captured image; in the direction opposite to the travel direction, determine all superposed pixels corresponding to each pixel's superposition count; and subtract from each pixel's value the values of all its corresponding superposed pixels.
In this embodiment, because the video camera and the ground of the monitored area remain stationary relative to each other, only the pixels corresponding to the vehicle are moved and blurred. These blurred pixels can be identified, and the de-blurring realized, as follows: subtract from the value of each pixel in the captured image the value of its neighboring pixel along the travel direction, obtaining a processed image; then collect statistics on the sharpness-value variation of the processed image, and determine from the statistics all pixels of the vehicle in the captured image.
In this embodiment, based on the resulting sharp, de-blurred captured image, image recognition processing such as license plate recognition and vehicle logo recognition can further be performed on the vehicle in the image, realizing automated intelligent traffic management.
It can be seen from the above embodiment that, by obtaining in advance the correspondence between the distance from any point in the monitored area to the area's lower edge and that point's pixel position in the captured image, and by obtaining in real time the vehicle's travel direction and displacement during the capture, the present invention can accurately calculate the moved-pixel count of the pixels corresponding to the vehicle in the captured image, so that a clear vehicle image can be obtained through directional de-blurring. Even in a low-light environment, a clear vehicle image can still be obtained by extending the shutter time.
Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment of the present invention. As shown in Fig. 3, the method may include the following steps:
1. Pre-processing stage
Step 302: according to the specification parameters and installation parameters of the video camera in the checkpoint system or electronic police system, calculate the predefined correspondence, i.e. the correspondence between the distance from any point in the monitored area to the area's lower edge and that point's pixel position in the corresponding captured image.
As shown in Fig. 4A, the camera lens points from the upper left toward the lower right, and the extension of the lens field intersects the ground to form the corresponding monitored area; point O marks the lower edge of the camera's monitored area (i.e. the lower edge of the captured image).
In this embodiment, the specification parameters of the video camera may include: the focal length f (not shown); the dimension v_h of the camera's photosensitive element along the vehicle's travel direction (i.e. the direction "from right to left" in Fig. 4A, which appears as the "top to bottom" direction in the captured image); and the unit pixel dimension σ of the photosensitive element along the vehicle's travel direction (e.g. when the travel direction is "top to bottom" in the captured image, σ is the height of each pixel in the vertical direction). The installation parameters of the video camera include: the mounting height h and the tilt angle A (expressed here as the angle between the lens direction and the horizontal direction; other representations, such as the angle between the lens direction and the vertical direction, could obviously also be used).
Then, for the distance d_S between any point S in the monitored area and point O, the correspondence between d_S and the pixel position D_S of point S in the captured image (i.e. the pixel row counted downward from the upper edge of the captured image) can be calculated from the above specification parameters and installation parameters of the video camera.
Specifically, the calculation of an exemplary embodiment proceeds as follows:
1) From the mounting height h and tilt angle A of the video camera, the length of the corresponding hypotenuse a is obtained;
2) Combining the object distance u and image distance v of the video camera (related to the focal length f by the lens equation 1/f = 1/u + 1/v) with the dimension v_h of the photosensitive element along the travel direction, u and v are calculated;
3) Since v_h and x_S lie along lines that are parallel to each other, a proportional relation is obtained, giving the correspondence between D_S and x_S;
4) From the triangle relations, the correspondence between d_S and x_S is obtained;
5) Combining these determines the correspondence between D_S and d_S.
Step 304: determine the preset capture position in the monitored area, and the starting pixel position corresponding to the preset capture position in the captured image.
In this embodiment, suppose the preset capture position is point M shown in Fig. 4B, and the actual distance between point M and point O is d_M. Then, setting d_S = d_M and D_S = D_M and substituting the actual distance d_M into the predefined correspondence yields the corresponding starting pixel position D_M.
2. Real-time processing stage
Step 306: when the vehicle travels to the preset capture position, obtain the captured image.
Step 308: according to the travel direction and displacement during the capture, determine the termination pixel position corresponding to the vehicle in the captured image.
In this embodiment, because the installation position of the video camera is fixed, the corresponding vehicle travel direction is in practice also fixed. The vehicle travel direction can therefore be pre-configured in advance on the video camera or on a background device; alternatively, it can be generated automatically from the lane lines, road edges, etc. in the monitored area collected by the camera.
In this embodiment, the travel distance can be calculated as "travel distance = travel speed × shutter time" by collecting the vehicle's travel speed at the time of capture and the shutter time used for the capture. In Fig. 4B, because the camera captures when the vehicle travels to the preset capture position point M, the travel distance is MN; that is, point M moves to point N by the end of the capture, which is what blurs the image.
Therefore, similarly to step 304, when calculating the termination pixel position D_N, the actual distance between point N and point O is d_N = d_M − MN; setting d_S = d_N and D_S = D_N and substituting the actual distance d_N into the predefined correspondence yields the corresponding termination pixel position D_N.
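A minimal sketch of steps 304-308, assuming the distance-to-pixel-row correspondence has already been derived. The patent's actual mapping depends on the camera geometry above; here we substitute a purely hypothetical linear mapping, and all names are our own:

```python
def make_pixel_row_mapping(rows_per_meter: float):
    """Placeholder for the patent's correspondence D(d) between the
    ground distance d to the lower edge and the pixel row D. The real
    mapping comes from focal length, sensor size, mounting height and
    tilt; a linear stand-in is used here purely to show how it is used."""
    return lambda d: rows_per_meter * d

def moved_pixels(pixel_row, d_m: float, travel: float):
    """Start row D_M at the preset capture position (distance d_m from
    the lower edge), termination row D_N after the vehicle has moved
    `travel` toward the lower edge (d_N = d_M - MN), and the resulting
    moved-pixel count."""
    D_M = pixel_row(d_m)
    D_N = pixel_row(d_m - travel)
    return D_M, D_N, round(abs(D_N - D_M))

pixel_row = make_pixel_row_mapping(rows_per_meter=120.0)  # assumed density
D_M, D_N, count = moved_pixels(pixel_row, d_m=2.0, travel=0.1667)
print(D_M, count)  # D_M = 240.0, moved-pixel count = 20
```

The moved-pixel count computed this way is the quantity D used by the de-blurring formulas below in the text.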
It should be noted that:
As mentioned above, each pixel corresponding to the vehicle experiences the travel distance "from point M to point N" during the capture, so these pixels are "directionally blurred" along the travel direction.
For example, as shown in Fig. 5, take pixel movement in the vertical direction. Suppose that in the vertical direction there are pixel rows y, y+1, y+2, y+3, y+4, y+5 and y+6, and that the first five rows hold the pixels (x, y), (x, y+1), (x, y+2), (x, y+3) and (x, y+4), as shown in Table 1.
Line number    Pixel
y              (x, y)
y+1            (x, y+1)
y+2            (x, y+2)
y+3            (x, y+3)
y+4            (x, y+4)
Table 1
If, during the capture, these pixels move downward by 2 pixels, then after moving the first pixel they are moved into rows y+1, y+2, y+3, y+4 and y+5; but because these pixels were at the positions shown in Table 1, their images remain in the captured image, so pixels are superposed within each pixel row, as shown in Table 2.
Line number    Pixel
y              (x, y)
y+1            (x, y), (x, y+1)
y+2            (x, y+1), (x, y+2)
y+3            (x, y+2), (x, y+3)
y+4            (x, y+3), (x, y+4)
y+5            (x, y+4)
Table 2
After moving the second pixel, these pixels are moved into rows y+2, y+3, y+4, y+5 and y+6, again superposing pixels within each pixel row, as shown in Table 3.
Line number    Pixel
y              (x, y)
y+1            (x, y), (x, y+1)
y+2            (x, y), (x, y+1), (x, y+2)
y+3            (x, y+1), (x, y+2), (x, y+3)
y+4            (x, y+2), (x, y+3), (x, y+4)
y+5             (x, y+3), (x, y+4)
y+6            (x, y+4)
Table 3
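The superposition bookkeeping of Tables 1-3 can be reproduced programmatically. This sketch assumes raw (un-normalized) accumulation of pixel rows, and the names are our own:

```python
def superpose(start, count, shift):
    """Move `count` vehicle pixel rows beginning at row `start` down by
    `shift` rows, recording every original row that lands on (or stays
    in) each destination row during the exposure."""
    table = {}
    for i in range(count):            # each original vehicle row
        for s in range(shift + 1):    # s = 0 is the original position
            table.setdefault(start + i + s, []).append(start + i)
    return table

# Five vehicle rows y..y+4 (here y = 0) moved down by 2 pixels:
for row, sources in sorted(superpose(start=0, count=5, shift=2).items()):
    print(row, sources)
# Row 2 accumulates originals [0, 1, 2]; rows 5 and 6 hold only the
# trailing copies [3, 4] and [4].
```

Running it reproduces the row contents of Table 3 row by row.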
Thus, because the vehicle moved during the capture, the pixels that originally occupied only rows y, y+1, y+2, y+3 and y+4 are moved and superposed onto other pixel rows.
At the same time, the pixel movement extends the original 5 pixel rows, y through y+4, to 7 pixel rows, y through y+6, where:
Rows y and y+1 lie on the starting side of the travel direction, and the number of other pixels superposed on their original pixels is less than the moved-pixel count. For example, with a moved-pixel count of 2, pixel (x, y) in row y has no other pixel superposed on it, while pixel (x, y+1) in row y+1 has one pixel, (x, y), superposed on it. The pixels in rows y and y+1 of the captured image are therefore called start-edge pixels, and they share the same way of producing and eliminating blurring.
For rows y+2 through y+4, the number of other pixels superposed on the original pixel equals the moved-pixel count. For example, with a moved-pixel count of 2, the pixel of each of these rows has the pixels of 2 other rows superposed on it; e.g. pixel (x, y+2) of row y+2 is superposed with (x, y+1) and (x, y). The pixels in rows y+2 through y+4 of the captured image are therefore called region pixels, and they share the same way of producing and eliminating blurring.
Rows y+5 and y+6 lie on the terminating side of the travel direction, and the number of other pixels superposed on their original pixels is less than the moved-pixel count. For example, with a moved-pixel count of 2, row y+5 originally holds no vehicle pixel but is superposed with pixels (x, y+4) and (x, y+3); similarly, row y+6 originally holds no vehicle pixel but is superposed with pixel (x, y+4). The pixels in rows y+5 and y+6 of the captured image are therefore called end-edge pixels, and they share the same way of producing and eliminating blurring.
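The three categories can be separated mechanically from the row indices alone. A small sketch (our own naming) under the same downward-motion assumption:

```python
def classify_rows(first, last, shift):
    """Partition the blurred span of rows [first, last + shift] into the
    patent's three categories for a move of `shift` rows: start-edge
    rows (fewer than `shift` superposed pixels), region rows (exactly
    `shift`), and end-edge rows (trailing copies only)."""
    start_edge = list(range(first, first + shift))
    region = list(range(first + shift, last + 1))
    end_edge = list(range(last + 1, last + shift + 1))
    return start_edge, region, end_edge

# Rows y..y+4 (y = 0) moved by 2: start-edge y, y+1; region y+2..y+4;
# end-edge y+5, y+6 -- matching the discussion of Fig. 5.
print(classify_rows(first=0, last=4, shift=2))
```

Each category then gets its own blurring and de-blurring formula, as described next in the text.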
Then, according to the characteristics of these pixel categories (start-edge pixels, end-edge pixels and region pixels), and the pixel-movement and summation rules of Tables 1 through 3, the corresponding directional blurring formulas can be obtained.
For a start-edge pixel, the directional blurring formula used is:
S(x_N, y_N) = Σ_{k=0..N} P(x_{N−k}, y_{N−k}), where N = {0, 1, …, D−1}, P denotes the un-blurred pixel value, D is the moved-pixel count, and (x_{N−k}, y_{N−k}) is the pixel k steps from (x_N, y_N) against the travel direction;
For an end-edge pixel, the directional blurring formula used is:
S(x_N, y_N) = Σ_{k=N−D..M} P(x_k, y_k), where N = {M+1, …, M+D}, and M is the position, along the travel direction, of the pixel whose distance from the terminating edge equals the moved-pixel count (e.g. in the embodiment shown in Fig. 5, (x_M, y_M) is the pixel whose distance from the end-edge row y+6 equals the moved-pixel count 2, i.e. pixel (x, y+4));
For a region pixel, the directional blurring formula used is:
S(x, y) = Σ_{k=0..D} P(x_k, y_k), where (x_0, y_0) = (x, y);
here each (x_k, y_k) with k ≥ 1 must meet three conditions:
1) it lies on the orientation line y = ax + b, where a and b are coefficients;
2) (x − x_k)² + (y − y_k)² ≤ D²;
3) (x − x_k) · a > 0, i.e. (x_k, y_k) lies on the motion-direction ray through (x, y).
Taking a license plate image as an example, the vehicle travelling during the capture is equivalent to performing the blurring described by the above directional blurring formulas. Fig. 6(a) shows the image corresponding to the situation of Table 1, before any pixel movement and superposition has occurred; Fig. 6(b) shows the blurred image after the pixel superposition of Table 3.
Step 310: according to the starting pixel position and the termination pixel position, calculate the moved-pixel count corresponding to the displacement.
Step 312: determine all pixels corresponding to the vehicle in the captured image.
In this embodiment, all pixels corresponding to the vehicle are blurred, whereas the ground region serving as background is not. Therefore, a rough de-blurring pass can be applied to the whole captured image, and the pixels corresponding to the vehicle identified from how the sharpness of the processed image changes.
Specifically, the value of each pixel in the captured image can be reduced by the value of its neighboring pixel along the travel direction, yielding a processed image; then statistics are collected on the sharpness-value variation of the processed image, and all pixels of the vehicle in the captured image are determined from the statistics.
Here the neighboring pixel along the travel direction can be the adjacent pixel in either the forward or the reverse travel direction. For example, row y+3 in Fig. 5 corresponds to the superposed value of pixels (x, y+1), (x, y+2) and (x, y+3); its neighbor can then be the superposed value of pixels (x, y), (x, y+1) and (x, y+2) corresponding to row y+2, or the superposed value of pixels (x, y+2), (x, y+3) and (x, y+4) corresponding to row y+4.
For example, taking the neighbor as the superposed value of pixels (x, y), (x, y+1) and (x, y+2) corresponding to row y+2, subtracting row y+2's superposed value from row y+3's leaves the difference (x, y+3) − (x, y).
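A one-dimensional sketch of this rough pass, under our assumption that small neighbour differences flag the smeared (vehicle) rows while the static background keeps its sharp transitions; values and threshold are illustrative only:

```python
def directional_difference(col, step=1):
    """Subtract each pixel's neighbour `step` rows ahead along the
    travel direction (a 1-D column sketch of the patent's rough
    de-blurring pass). Motion-blurred rows vary smoothly between
    neighbours; static background rows change sharply."""
    return [col[i] - col[i + step] for i in range(len(col) - step)]

# Background rows (alternating values) around a smeared, smooth middle:
column = [10, 80, 10, 80, 50, 52, 54, 52, 50, 10, 80, 10]
diff = directional_difference(column)
blurred_rows = [i for i, d in enumerate(diff) if abs(d) < 10]
print(blurred_rows)  # -> [4, 5, 6, 7]
```

In the patent the statistics are gathered over sharpness variation of the whole processed image; this sketch only shows the shift-and-subtract core.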
Step 314: according to the moving direction and the moved-pixel count, perform directional de-blurring on each pixel corresponding to the vehicle in the captured image.
In this embodiment, based on the directional-blurring process above, performing the reverse directional de-blurring actually uses the following approach: according to the moved-pixel count, determine the superposition count, along the travel direction, of each pixel of the vehicle in the captured image; in the direction opposite to the travel direction, determine all superposed pixels corresponding to each pixel's superposition count; and subtract from each pixel's value the values of all its corresponding superposed pixels.
Specifically, for each type of pixel (start-edge, end-edge and region pixels), the following formulas can respectively be used to perform the directional de-blurring.
A. According to the following formula, directional de-blurring is performed on each start-edge pixel of the vehicle in the captured image:
R(x_N, y_N) = S(x_N, y_N) − Σ_{k=1..N} R(x_k, y_k), N = {1, …, D−1};
where the distance between the start-edge pixel and the starting edge of the travel direction does not exceed the moved-pixel count; (x_N, y_N) is the start-edge pixel; R(x_N, y_N) is its pixel value after de-blurring; S(x_N, y_N) is its pixel value before de-blurring; (x_k, y_k) is the pixel reached from (x_N, y_N) by moving k pixels against the travel direction; and D is the moved-pixel count.
B. According to the following formula, directional de-blurring is performed on each end-edge pixel of the vehicle in the captured image:
R(x_N, y_N) = S(x_N, y_N) − Σ_{k=N−D..M} R(x_k, y_k), N = {M+1, …, M+D};
where the distance between the end-edge pixel and the terminating edge of the travel direction does not exceed the moved-pixel count; (x_N, y_N) is the end-edge pixel; R(x_N, y_N) is its pixel value after de-blurring; S(x_N, y_N) is its pixel value before de-blurring; (x_k, y_k) is the pixel at position k along the travel direction; D is the moved-pixel count; and M is the position, along the travel direction, of the pixel whose distance from the terminating edge equals the moved-pixel count.
C. According to the following formula, directional de-blurring is performed on each region pixel of the vehicle in the captured image:
R(x, y) = S(x, y) − Σ_{k=1..D} R(x_k, y_k);
where the distances between the region pixel and both the starting edge and the terminating edge of the travel direction are greater than the moved-pixel count; (x, y) is the region pixel; R(x, y) is its pixel value after de-blurring; S(x, y) is its pixel value before de-blurring; (x_k, y_k) is the pixel reached from (x, y) by moving k pixels against the travel direction; and D is the moved-pixel count.
Furthermore, the de-blurred vehicle pixels can be extracted and combined with the original non-vehicle pixels of the captured image to obtain an image in which all pixels are sharp.
Fig. 7 shows a schematic structural diagram of an electronic equipment according to an exemplary embodiment of the present application. Referring to Fig. 7, at the hardware level the electronic equipment includes a processor, an internal bus, a network interface, memory and non-volatile storage, and of course may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into memory and runs it, forming the image processing device at the logical level. Besides a software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the executor of the following processing flow is not limited to logic units, and may also be hardware or logic devices.
Referring to Fig. 8, in a software implementation the image processing device can include a capture unit, a determining unit, a computing unit and a processing unit, where:
Unit is captured, for when vehicle travels the default camera site to monitored area, being grabbed to the vehicle Clap and obtain capture image, wherein between the default camera site and the lower edge of the monitored area have predetermined interval away from From;
Determining unit, for according to the distance between any point in the predefined monitored area to the lower edge with Corresponding relation between location of pixels of this in corresponding candid photograph image, and movement of the vehicle during candid photograph Distance, it is determined that the vehicle corresponding termination location of pixels in the candid photograph image at the end of capturing;
Computing unit, for according to the default camera site it is described candid photograph image in corresponding starting pixels position and The termination location of pixels, calculate the mobile pixel value corresponding to the displacement;
Processing unit, for according to the vehicle capture when travel direction and the mobile pixel value, grabbed to described Clap image and be oriented de-fuzzy processing.
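The determining and computing steps above can be sketched as a small helper class. This is an illustrative assumption, not the patent's actual code: the class name, field names, and the simple linear distance-to-pixel slope `px_per_m` are all hypothetical stand-ins for the predefined correspondence.

```python
from dataclasses import dataclass

@dataclass
class SnapshotDeblurParams:
    """Hypothetical mirror of the determining/computing units: derive the
    displacement during the exposure and the resulting mobile pixel value."""
    speed_mps: float   # vehicle travel speed when captured (m/s)
    shutter_s: float   # shutter time used for the snapshot (s)
    px_per_m: float    # assumed slope of the distance-to-pixel correspondence

    def displacement_m(self) -> float:
        # determining unit: distance the vehicle moves during the exposure
        return self.speed_mps * self.shutter_s

    def mobile_pixel_value(self) -> int:
        # computing unit: pixels between starting and termination positions
        return round(self.displacement_m() * self.px_per_m)
```

For example, a vehicle at 20 m/s captured with a 5 ms shutter moves 0.1 m; at an assumed 100 px/m that corresponds to a mobile pixel value of 10.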
Optionally, the predefined correspondence is calculated from the specification parameters and installation parameters of the camera of the bayonet system or electronic police system.
Optionally,
the specification parameters include: the focal length of the camera, the dimension of the camera's photosensitive element along the vehicle travel direction, and the unit pixel dimension of the photosensitive element along the vehicle travel direction; and
the installation parameters include: the installation height and tilt angle of the camera.
Optionally, the correspondence is expressed as the following equations (as given in claim 1):
D = (v/u) × σ × (b/sin B) × (sin(A − X)/a) × d,
a = h/sin A,
1/f = 1/u + 1/v,
u/a = v/√(v² + (v_h/2)²),
b/sin B = x/sin X,
b² = a² + x² − 2ax·sin B,
B = atan(2v/v_h),
Wherein, D is the pixel position of the point in the corresponding snapshot image, d is the distance from the point to the lower edge, h is the installation height of the camera, A is the tilt angle between the shooting direction of the camera and the horizontal direction, f is the focal length of the camera, v_h is the dimension of the photosensitive element along the vehicle travel direction, and σ is the unit pixel dimension of the photosensitive element along the vehicle travel direction; further, a is the distance between the camera and the lower edge, u is the object distance, v is the image distance, B is the base angle of the isosceles triangle that takes the line segment between the camera and the lower edge as a leg and the object distance u as its height, b is the distance between the camera and a specific point on the base of that isosceles triangle, where the specific point lies on the line between the camera and the preset capture position, x is the distance between the lower edge and the specific point, and X is the angle opposite x in the triangle with sides a, b, and x.
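The correspondence above can be evaluated numerically. The sketch below is a hedged reading, not the patent's implementation: it solves the coupled lens pair (u, v) by fixed-point iteration and then applies the stated product formula verbatim. Treating the specific point's distance x as equal to d, and reading σ as pixels per metre on the sensor, are our assumptions; the sin term in the law-of-cosines-like relation is kept exactly as the text writes it.

```python
import math

def distance_to_pixel(d, h, A, f, v_h, sigma):
    """Evaluate D(d) from the patent's equations (symbols as in the text).
    d: distance from the point to the lower edge; h: installation height;
    A: tilt angle (rad); f: focal length; v_h: sensor dimension along the
    travel direction; sigma: assumed pixels per metre on the sensor."""
    a = h / math.sin(A)                 # camera-to-lower-edge distance
    c = v_h / 2.0
    v = f                               # fixed-point solve of 1/f = 1/u + 1/v
    for _ in range(50):                 # with u = a*v / sqrt(v^2 + c^2)
        v = f * (math.sqrt(v * v + c * c) + a) / a
    u = a * v / math.sqrt(v * v + c * c)
    B = math.atan(2.0 * v / v_h)
    x = d                               # assumption: specific point at distance d
    b = math.sqrt(max(a * a + x * x - 2.0 * a * x * math.sin(B), 0.0))
    X = math.asin(min(1.0, x * math.sin(B) / b)) if b > 0 else 0.0
    return (v / u) * sigma * (b / math.sin(B)) * (math.sin(A - X) / a) * d
```

At d = 0 the mapping yields 0, consistent with the lower edge being the pixel origin.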
Optionally, the displacement of the vehicle during capture is calculated from the travel speed of the vehicle when captured and the shutter time used during capture.
Optionally, the processing unit is configured to:
according to the mobile pixel value, determine the pixel superposition count of each pixel of the vehicle in the snapshot image along the travel direction;
in the direction opposite to the travel direction, determine all superimposed pixels corresponding to the pixel superposition count of each pixel; and
subtract from the pixel value of each pixel the pixel values of all its corresponding superimposed pixels.
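The three steps above amount to, per image row, subtracting the already-recovered pixels stacked onto each blurred pixel while scanning against the travel direction. A minimal 1-D sketch under a simplifying assumption: the blur is a plain additive superposition of D shifted copies (the patent's edge formulas weight start/terminating regions separately, which is omitted here).

```python
import numpy as np

def blur_row(row, d):
    """Simulate the motion blur: each sample is the sum of itself and the
    d-1 samples behind it along the travel direction."""
    return np.array([row[max(0, i - d + 1):i + 1].sum() for i in range(len(row))])

def deblur_row(blurred, d):
    """Directional deblurring: recover each pixel by subtracting the
    previously recovered superimposed pixels (reverse-direction scan)."""
    r = np.zeros(len(blurred))
    for i in range(len(blurred)):
        r[i] = blurred[i] - r[max(0, i - d + 1):i].sum()
    return r
```

On this idealised model the recovery is exact: deblurring `blur_row(row, d)` returns `row`.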
Optionally,
the processing unit deblurs each start edge pixel of the vehicle in the snapshot image in the travel direction according to the following equation: {N = 1, …, D−1};
wherein the distance between each start edge pixel and the start-side edge in the travel direction is no more than the mobile pixel value; (xN, yN) is a start edge pixel, R(xN, yN) is the pixel value of the start edge pixel after deblurring, S(xN, yN) is its pixel value before deblurring, (xk, yk) is the pixel corresponding to the start edge pixel (xN, yN) after moving k pixels in the travel direction, and D is the mobile pixel value;
the processing unit deblurs each terminating edge pixel of the vehicle in the snapshot image in the travel direction according to the following equation: {N = M+1, …, M+D};
wherein the distance between each terminating edge pixel and the terminal-side edge in the travel direction is no more than the mobile pixel value; (xN, yN) is a terminating edge pixel, R(xN, yN) is the pixel value of the terminating edge pixel after deblurring, S(xN, yN) is its pixel value before deblurring, (xk, yk) is the pixel corresponding to the terminating edge pixel (xN, yN) after moving k pixels in the travel direction, D is the mobile pixel value, and M is the position, in the travel direction, of the pixel whose distance from the terminal-side edge equals the mobile pixel value;
the processing unit deblurs each region pixel of the vehicle in the snapshot image in the travel direction according to the following equation:
wherein the distances between each region pixel and both the start-side edge and the terminal-side edge in the travel direction are greater than the mobile pixel value; (x, y) is a region pixel, R(x, y) is the pixel value of the region pixel after deblurring, and S(x, y) is its pixel value before deblurring.
Optionally, the apparatus further includes:
an image processing unit, configured to subtract from the pixel value of each pixel in the snapshot image the pixel value of its neighboring pixel in the travel direction, to obtain a processed image; and
a sharpness statistics unit, configured to collect statistics on the sharpness variation of the processed image, and determine all pixels of the vehicle in the snapshot image according to the statistical result.
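A rough sketch of these two units, under stated assumptions: the travel direction is horizontal, the per-column sum of absolute neighbour differences serves as the sharpness statistic, and the 0.5 threshold ratio is an arbitrary illustrative choice rather than anything the patent specifies.

```python
import numpy as np

def vehicle_columns(snapshot, ratio=0.5):
    """Subtract each pixel's neighbour along the (horizontal) travel
    direction, then flag columns whose sharpness statistic exceeds a
    fraction of the peak as candidate vehicle pixels."""
    diff = np.abs(np.diff(snapshot.astype(float), axis=1))  # processed image
    energy = diff.sum(axis=0)                               # per-column sharpness
    return np.where(energy > ratio * energy.max())[0]
```

On a flat background, only the columns around a textured (vehicle) block carry sharpness energy, so the returned indices localise the vehicle along the travel direction.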
The foregoing descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. An image processing method, applied to a bayonet system or an electronic police system, characterized by comprising:
    when a vehicle travels to a preset capture position in a monitored area, capturing the vehicle and obtaining a snapshot image, wherein there is a predetermined spacing distance between the preset capture position and the lower edge of the monitored area;
    determining, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and the pixel position of that point in the corresponding snapshot image, together with the displacement of the vehicle during capture, the termination pixel position corresponding to the vehicle in the snapshot image at the end of capture;
    calculating the mobile pixel value corresponding to the displacement, according to the starting pixel position corresponding to the preset capture position in the snapshot image and the termination pixel position;
    performing directional deblurring on the snapshot image according to the travel direction of the vehicle when captured and the mobile pixel value; wherein the correspondence is expressed as the following equations:
    D = (v/u) × σ × (b/sin B) × (sin(A − X)/a) × d,
    a = h/sin A,
    1/f = 1/u + 1/v,
    u/a = v/√(v² + (v_h/2)²),
    b/sin B = x/sin X,
    b² = a² + x² − 2ax·sin B,
    B = atan(2v/v_h),
    Wherein, D is the pixel position of the point in the corresponding snapshot image, d is the distance from the point to the lower edge, h is the installation height of the camera, A is the tilt angle between the shooting direction of the camera and the horizontal direction, f is the focal length of the camera, v_h is the dimension of the photosensitive element along the vehicle travel direction, and σ is the unit pixel dimension of the photosensitive element along the vehicle travel direction; further, a is the distance between the camera and the lower edge, u is the object distance, v is the image distance, B is the base angle of the isosceles triangle that takes the line segment between the camera and the lower edge as a leg and the object distance u as its height, b is the distance between the camera and a specific point on the base of that isosceles triangle, where the specific point lies on the line between the camera and the preset capture position, x is the distance between the lower edge and the specific point, and X is the angle opposite x in the triangle with sides a, b, and x.
  2. The method according to claim 1, characterized in that the displacement of the vehicle during capture is calculated from the travel speed of the vehicle when captured and the shutter time used during capture.
  3. The method according to claim 1, characterized in that performing directional deblurring on the snapshot image according to the travel direction of the vehicle when captured and the mobile pixel value comprises:
    according to the mobile pixel value, determining the pixel superposition count of each pixel of the vehicle in the snapshot image along the travel direction;
    in the direction opposite to the travel direction, determining all superimposed pixels corresponding to the pixel superposition count of each pixel; and
    subtracting from the pixel value of each pixel the pixel values of all its corresponding superimposed pixels.
  4. The method according to claim 3, characterized in that:
    each start edge pixel of the vehicle in the snapshot image is deblurred in the travel direction according to the following equation: {N = 1, …, D−1};
    wherein the distance between each start edge pixel and the start-side edge in the travel direction is no more than the mobile pixel value; (xN, yN) is a start edge pixel, R(xN, yN) is the pixel value of the start edge pixel after deblurring, S(xN, yN) is its pixel value before deblurring, (xk, yk) is the pixel corresponding to the start edge pixel (xN, yN) after moving k pixels in the travel direction, and D is the mobile pixel value;
    each terminating edge pixel of the vehicle in the snapshot image is deblurred in the travel direction according to the following equation: {N = M+1, …, M+D};
    wherein the distance between each terminating edge pixel and the terminal-side edge in the travel direction is no more than the mobile pixel value; (xN, yN) is a terminating edge pixel, R(xN, yN) is the pixel value of the terminating edge pixel after deblurring, S(xN, yN) is its pixel value before deblurring, (xk, yk) is the pixel corresponding to the terminating edge pixel (xN, yN) after moving k pixels in the travel direction, D is the mobile pixel value, and M is the position, in the travel direction, of the pixel whose distance from the terminal-side edge equals the mobile pixel value;
    each region pixel of the vehicle in the snapshot image is deblurred in the travel direction according to the following equation:
    wherein the distances between each region pixel and both the start-side edge and the terminal-side edge in the travel direction are greater than the mobile pixel value; (x, y) is a region pixel, R(x, y) is the pixel value of the region pixel after deblurring, and S(x, y) is its pixel value before deblurring.
  5. The method according to claim 3, characterized by further comprising:
    subtracting from the pixel value of each pixel in the snapshot image the pixel value of its neighboring pixel in the travel direction, to obtain a processed image; and
    collecting statistics on the sharpness variation of the processed image, and determining all pixels of the vehicle in the snapshot image according to the statistical result.
  6. An image processing apparatus, applied to a bayonet system or an electronic police system, characterized by comprising:
    a capturing unit, configured to, when a vehicle travels to a preset capture position in a monitored area, capture the vehicle and obtain a snapshot image, wherein there is a predetermined spacing distance between the preset capture position and the lower edge of the monitored area;
    a determining unit, configured to determine, according to a predefined correspondence between the distance from any point in the monitored area to the lower edge and the pixel position of that point in the corresponding snapshot image, together with the displacement of the vehicle during capture, the termination pixel position corresponding to the vehicle in the snapshot image at the end of capture;
    a computing unit, configured to calculate the mobile pixel value corresponding to the displacement, according to the starting pixel position corresponding to the preset capture position in the snapshot image and the termination pixel position;
    a processing unit, configured to perform directional deblurring on the snapshot image according to the travel direction of the vehicle when captured and the mobile pixel value;
    wherein the correspondence is expressed as the following equations:
    D = (v/u) × σ × (b/sin B) × (sin(A − X)/a) × d,
    a = h/sin A,
    1/f = 1/u + 1/v,
    u/a = v/√(v² + (v_h/2)²),
    b/sin B = x/sin X,
    b² = a² + x² − 2ax·sin B,
    B = atan(2v/v_h),
    Wherein, D is the pixel position of the point in the corresponding snapshot image, d is the distance from the point to the lower edge, h is the installation height of the camera, A is the tilt angle between the shooting direction of the camera and the horizontal direction, f is the focal length of the camera, v_h is the dimension of the photosensitive element along the vehicle travel direction, and σ is the unit pixel dimension of the photosensitive element along the vehicle travel direction; further, a is the distance between the camera and the lower edge, u is the object distance, v is the image distance, B is the base angle of the isosceles triangle that takes the line segment between the camera and the lower edge as a leg and the object distance u as its height, b is the distance between the camera and a specific point on the base of that isosceles triangle, where the specific point lies on the line between the camera and the preset capture position, x is the distance between the lower edge and the specific point, and X is the angle opposite x in the triangle with sides a, b, and x.
  7. The apparatus according to claim 6, characterized in that the displacement of the vehicle during capture is calculated from the travel speed of the vehicle when captured and the shutter time used during capture.
  8. The apparatus according to claim 6, characterized in that the processing unit is configured to:
    according to the mobile pixel value, determine the pixel superposition count of each pixel of the vehicle in the snapshot image along the travel direction;
    in the direction opposite to the travel direction, determine all superimposed pixels corresponding to the pixel superposition count of each pixel; and
    subtract from the pixel value of each pixel the pixel values of all its corresponding superimposed pixels.
  9. The apparatus according to claim 8, characterized in that:
    the processing unit deblurs each start edge pixel of the vehicle in the snapshot image in the travel direction according to the following equation: {N = 1, …, D−1};
    wherein the distance between each start edge pixel and the start-side edge in the travel direction is no more than the mobile pixel value; (xN, yN) is a start edge pixel, R(xN, yN) is the pixel value of the start edge pixel after deblurring, S(xN, yN) is its pixel value before deblurring, (xk, yk) is the pixel corresponding to the start edge pixel (xN, yN) after moving k pixels in the travel direction, and D is the mobile pixel value;
    the processing unit deblurs each terminating edge pixel of the vehicle in the snapshot image in the travel direction according to the following equation: {N = M+1, …, M+D};
    wherein the distance between each terminating edge pixel and the terminal-side edge in the travel direction is no more than the mobile pixel value; (xN, yN) is a terminating edge pixel, R(xN, yN) is the pixel value of the terminating edge pixel after deblurring, S(xN, yN) is its pixel value before deblurring, (xk, yk) is the pixel corresponding to the terminating edge pixel (xN, yN) after moving k pixels in the travel direction, D is the mobile pixel value, and M is the position, in the travel direction, of the pixel whose distance from the terminal-side edge equals the mobile pixel value;
    the processing unit deblurs each region pixel of the vehicle in the snapshot image in the travel direction according to the following equation:
    wherein the distances between each region pixel and both the start-side edge and the terminal-side edge in the travel direction are greater than the mobile pixel value; (x, y) is a region pixel, R(x, y) is the pixel value of the region pixel after deblurring, and S(x, y) is its pixel value before deblurring.
  10. The apparatus according to claim 8, characterized by further comprising:
    an image processing unit, configured to subtract from the pixel value of each pixel in the snapshot image the pixel value of its neighboring pixel in the travel direction, to obtain a processed image; and
    a sharpness statistics unit, configured to collect statistics on the sharpness variation of the processed image, and determine all pixels of the vehicle in the snapshot image according to the statistical result.
CN201410817295.1A 2014-12-24 2014-12-24 Image processing method and device Active CN104537618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410817295.1A CN104537618B (en) 2014-12-24 2014-12-24 Image processing method and device


Publications (2)

Publication Number Publication Date
CN104537618A CN104537618A (en) 2015-04-22
CN104537618B true CN104537618B (en) 2018-01-16

Family

ID=52853137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410817295.1A Active CN104537618B (en) 2014-12-24 2014-12-24 Image processing method and device

Country Status (1)

Country Link
CN (1) CN104537618B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633490B (en) * 2017-09-19 2023-10-03 北京小米移动软件有限公司 Image processing method, device and storage medium
CN114208164B (en) * 2019-08-16 2024-02-09 影石创新科技股份有限公司 Method for dynamically controlling video coding rate, intelligent device and moving camera
CN114882709B (en) * 2022-04-22 2023-05-30 四川云从天府人工智能科技有限公司 Vehicle congestion detection method, device and computer storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075678A (en) * 2009-11-20 2011-05-25 鸿富锦精密工业(深圳)有限公司 System and method for deblurring motion blurred images
CN102131079A (en) * 2011-04-20 2011-07-20 杭州华三通信技术有限公司 Method and device for eliminating motion blur of image
CN102436639A (en) * 2011-09-02 2012-05-02 清华大学 Image acquiring method for removing image blurring and image acquiring system
CN102752484A (en) * 2012-06-25 2012-10-24 清华大学 Fast non-global uniform image shaking blur removal algorithm and system thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8520083B2 (en) * 2009-03-27 2013-08-27 Canon Kabushiki Kaisha Method of removing an artefact from an image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A fast deblurring algorithm for target regions; Li Peiqin, Xie Jianbin, Chen Zhangyong, Cheng Yongmao, Liu Tong; Journal of Signal Processing (《信号处理》); 2010-08-31; Vol. 26, No. 8, pp. 1240-1244 *
Restoration and application of a single motion-blurred image; Cheng Shu, Zhao Zhigang, Jiang Jing, Chen Yingying, Pan Zhenkuan; Journal of Qingdao University (Natural Science Edition) (《青岛大学学报(自然科学版)》); 2012-08-31; Vol. 25, No. 3, pp. 50-55 *

Also Published As

Publication number Publication date
CN104537618A (en) 2015-04-22

Similar Documents

Publication Publication Date Title
WO2021259344A1 (en) Vehicle detection method and device, vehicle, and storage medium
CN104246821B (en) Three-dimensional body detection device and three-dimensional body detection method
US8041079B2 (en) Apparatus and method for detecting obstacle through stereovision
US7512494B2 (en) Vehicle mounted image processor and method of use
US20200160561A1 (en) Vehicle-mounted camera pose estimation method, apparatus, and system, and electronic device
CN103473554B (en) Artificial abortion&#39;s statistical system and method
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN104952060B (en) A kind of infrared pedestrian&#39;s area-of-interest adaptivenon-uniform sampling extracting method
WO2015012219A1 (en) Vehicle monitoring device and vehicle monitoring method
CN105632186A (en) Method and device for detecting vehicle queue jumping behavior
WO2006126490A1 (en) Vehicle, image processing system, image processing method, image processing program, method for configuring image processing system, and server
CN112163543A (en) Method and system for detecting illegal lane occupation of vehicle
CN107392139A (en) A kind of method for detecting lane lines and terminal device based on Hough transformation
CN112368756A (en) Method for calculating collision time of object and vehicle, calculating device and vehicle
CN104537618B (en) Image processing method and device
CN114419874B (en) Target driving safety risk early warning method based on road side sensing equipment data fusion
CN103021179B (en) Based on the Safe belt detection method in real-time monitor video
US11430226B2 (en) Lane line recognition method, lane line recognition device and non-volatile storage medium
CN107316463A (en) A kind of method and apparatus of vehicle monitoring
CN109791607A (en) It is detected from a series of images of video camera by homography matrix and identifying object
CN113536935A (en) Safety monitoring method and equipment for engineering site
JP6820075B2 (en) Crew number detection system, occupant number detection method, and program
CN110111582A (en) Multilane free-flow vehicle detection method and system based on TOF camera
Dehghani et al. Single camera vehicles speed measurement
JP7052265B2 (en) Information processing device, image pickup device, device control system, mobile body, information processing method, and information processing program

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant