CN101982825B - Method and device for processing video image under intelligent transportation monitoring scene - Google Patents


Info

Publication number
CN101982825B
CN101982825B CN201010535397A
Authority
CN
China
Prior art keywords
video image
shade
moving target
value
track direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201010535397
Other languages
Chinese (zh)
Other versions
CN101982825A (en)
Inventor
车军
张继霞
简武宁
胡扬忠
邬伟琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN 201010535397 priority Critical patent/CN101982825B/en
Publication of CN101982825A publication Critical patent/CN101982825A/en
Application granted granted Critical
Publication of CN101982825B publication Critical patent/CN101982825B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for processing a video image in an intelligent traffic monitoring scene, which comprises the following steps: A. determining a lane detection region, and determining the lane direction by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames of video; B. determining, according to the lane direction, whether a shadow exists in each subsequent frame, and executing step C if shadows are determined to exist in N consecutive frames; C. suppressing, according to the hue, saturation, value (HSV) color model, the shadow in each subsequent frame, determining whether the degree of overlap between the foreground images of each frame before and after suppression meets the requirement, and repeating step B once L consecutive frames meet it; M, N and L are positive integers greater than 1. The invention further discloses a device for processing a video image in an intelligent traffic monitoring scene. The method and device of the invention can effectively suppress the shadows in video images.

Description

Method and device for processing a video image in an intelligent traffic monitoring scene
Technical field
The present invention relates to intelligent traffic monitoring technology, and particularly to a method and device for processing a video image in an intelligent traffic monitoring scene.
Background technology
In recent years, investment in infrastructure has kept growing, and road construction is one part of it. Because the construction period of a road is generally long, the growth of road capacity falls far behind the growth in the number of vehicles, so traffic in many cities keeps getting worse. Building a fully functional intelligent traffic monitoring system is an effective way for the vehicle administration of each city to improve the current traffic situation. Specifically, the video images of each road are collected by cameras, every collected frame is analyzed, and traffic-flow statistics and the like are produced from the analysis results, thereby realizing traffic control and management.
Because of illumination, a shadow is cast on the side of a vehicle away from the light source whenever the incident light is occluded. Since a shadow often shares the same motion features as the moving target (the vehicle), the shadow region may be mistakenly merged into the moving target during subsequent analysis of the video image. This distorts the shape of the moving target, for example by enlarging it, and may cause several moving targets to stick together, which affects subsequent processing. The shadows therefore need to be suppressed, but the prior art offers no effective suppression method.
Summary of the invention
In view of this, a main purpose of the present invention is to provide a method for processing a video image in an intelligent traffic monitoring scene that can effectively suppress the shadows in the video image.
Another purpose of the present invention is to provide a device for processing a video image in an intelligent traffic monitoring scene that can effectively suppress the shadows in the video image.
To achieve the above purposes, the technical scheme of the present invention is realized as follows:
A method for processing a video image in an intelligent traffic monitoring scene comprises:
A. determining a lane detection region, and determining the lane direction by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames of video;
B. determining, according to the lane direction, whether a shadow exists in each subsequent frame; if shadows are determined to exist in N consecutive frames, executing step C;
C. suppressing, according to the hue, saturation, value (HSV) color model, the shadow in each subsequent frame, and determining whether the degree of overlap between the foreground images of each frame before and after suppression meets the requirement; if L consecutive frames meet it, repeating step B; M, N and L are positive integers greater than 1.
A device for processing a video image in an intelligent traffic monitoring scene comprises:
a lane direction determination module, configured to determine the lane detection region, determine the lane direction by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames of video, and send the determined lane direction to a shadow detection module;
the shadow detection module, configured to determine, according to the lane direction, whether a shadow exists in each subsequent frame and, if shadows are determined to exist in N consecutive frames, notify a shadow suppression module to perform its own function;
the shadow suppression module, configured to suppress, according to the hue, saturation, value (HSV) color model, the shadow in each subsequent frame, determine whether the degree of overlap between the foreground images of each frame before and after suppression meets the requirement and, if L consecutive frames meet it, notify the shadow detection module to repeat its own function; M, N and L are positive integers greater than 1.
As can be seen, the technical scheme of the present invention can effectively suppress the shadows in a video image, which facilitates subsequent processing; moreover, the scheme is simple and convenient to implement and easy to popularize.
Description of drawings
Fig. 1 is a flow chart of the method embodiment of the present invention.
Fig. 2 is a schematic diagram of a lane direction from the lower left to the upper right in the method embodiment of the present invention.
Fig. 3 is a schematic diagram of a lane direction from the lower right to the upper left in the method embodiment of the present invention.
Fig. 4 is a schematic diagram of a detected target region in the method embodiment of the present invention.
Fig. 5 is a schematic diagram of a target region equally divided into left and right subregions in the method embodiment of the present invention.
Fig. 6 is a schematic diagram of the structure of the device embodiment of the present invention.
Embodiment
To address the problems of the prior art, the present invention proposes a brand-new video image processing scheme for an intelligent traffic monitoring scene; using this scheme, the shadows in a video image can be effectively suppressed.
To make the technical scheme of the present invention clearer and easier to understand, the scheme is described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flow chart of the method embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 11: determine the lane detection region, and determine the lane direction by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames of video.
Given how cameras are installed in existing intelligent traffic monitoring scenes, there are only three possible lane directions, namely: 1) from the lower left to the upper right; 2) from the lower right to the upper left; 3) from the bottom straight up.
Fig. 2 is a schematic diagram of a lane direction from the lower left to the upper right in the method embodiment. Fig. 3 is a schematic diagram of a lane direction from the lower right to the upper left in the method embodiment.
In this step, the lane detection region is determined first. This region should contain the straight part of the lane and avoid bends and junctions as far as possible. As shown in Figs. 2 and 3, extending the white lines at both ends to the edges of the video image defines a rectangular region, which is taken as the lane detection region. For each camera, once it is installed, the lane detection region is the same in every frame it collects.
Afterwards, the lane direction can be determined by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames (from the 1st to the Mth frame). How to analyze the tracks of moving targets is prior art. M is a positive integer greater than 1, and its value is generally 200 to 500.
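The trajectory analysis itself is treated as prior art above. As a minimal illustrative sketch only, one plausible reading is to vote over the net displacement of each track, in image coordinates where y grows downward; the function name, the voting scheme and the `dx_ratio` margin are assumptions, not the patent's method:

```python
def lane_direction(trajectories, dx_ratio=0.2):
    """Classify the lane direction from target trajectories.

    trajectories: list of tracks, each a list of (x, y) centroids over
    M consecutive frames, in image coordinates (y grows downward).
    """
    votes = {"lower-left to upper-right": 0,
             "lower-right to upper-left": 0,
             "bottom to top": 0}
    for traj in trajectories:
        (x0, y0), (x1, y1) = traj[0], traj[-1]
        dx, dy = x1 - x0, y1 - y0
        if dy > 0:                       # flip downward tracks to the upward sense
            dx, dy = -dx, -dy
        if dx > dx_ratio * abs(dy):      # net rightward drift while moving up
            votes["lower-left to upper-right"] += 1
        elif dx < -dx_ratio * abs(dy):   # net leftward drift while moving up
            votes["lower-right to upper-left"] += 1
        else:                            # essentially vertical motion
            votes["bottom to top"] += 1
    return max(votes, key=votes.get)
```

Both upward- and downward-moving vehicles describe the same lane axis, which is why the sketch flips downward tracks before voting.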
Step 12: determine, according to the lane direction, whether a shadow exists in each subsequent frame; if shadows are determined to exist in N consecutive frames, execute step 13.
In this step, the type of moving target to detect is first determined according to the lane direction. Because occlusion differs with the lane direction, the shadows produced also differ. For example, if vehicles are required to keep to the right, then when the lane direction is from the lower left to the upper right, the upward-moving targets (vehicles traveling from the lower left to the upper right) are closer to the camera and, because of the projection angle, cast more obvious shadows; on this lane direction the upward-moving targets therefore deserve more attention, i.e. the type of moving target to detect is the upward-moving target, while for a lane direction from the lower right to the upper left the type to detect is the downward-moving target. If vehicles are required to keep to the left, the assignments are reversed: for a lane direction from the lower left to the upper right the type to detect is the downward-moving target, and for a lane direction from the lower right to the upper left it is the upward-moving target. As a special case, when the lane direction is straight up, the type to detect can be either the upward-moving or the downward-moving target.
In addition, the texture features of a shadow differ considerably from those of a moving target: the texture of a shadow is smoother and weaker, while that of a moving target is richer. Therefore, in this step, whether a shadow exists in each frame can be determined from texture features.
Specifically, for each frame X, whether a shadow exists can be determined as follows:
1) Detect, in the lane detection region of frame X, the target region of each moving target of the type to detect; a target region is the minimum rectangular area containing the moving target.
If the type of moving target to detect is the downward-moving target, then the target region of each downward-moving target is detected in the lane detection region of frame X.
With existing detection methods, a target region contains the moving target together with its shadow (if any). Fig. 4 is a schematic diagram of a detected target region in the method embodiment.
2) Divide each target region equally into a left and a right subregion, compute the average gradient magnitudes G_L and G_R of the two subregions, and compute the ratio ρ of the smaller of G_L and G_R to the larger.
Fig. 5 is a schematic diagram of a target region equally divided into left and right subregions in the method embodiment.
The Sobel operator can be used to compute the average gradient magnitudes G_L and G_R of the subregions; how to compute them is prior art. Afterwards, the ratio ρ of the smaller of G_L and G_R to the larger is computed.
3) Compute the mean value ρ_avg of the ρ values of all target regions.
Several target regions may be detected in frame X, and each yields a ρ in the manner of step 2); in this step, the mean ρ_avg over all of them is computed.
4) Compare ρ_avg with a preset first threshold T_G; if ρ_avg is less than T_G, a shadow is determined to exist in frame X.
The value of T_G is generally 0.4 to 0.7.
If shadows are determined to exist in N consecutive frames in the manner described above, step 13 is executed. N is a positive integer greater than 1, and its value is generally 500 to 2000.
Step 13: suppress, according to the hue, saturation, value (HSV) color model, the shadow in each subsequent frame, and determine whether the degree of overlap between the foreground images of each frame before and after suppression meets the requirement; if L consecutive frames meet it, repeat step 12.
In the HSV color model, the value (V) component represents the perceived brightness that light produces on the human eye and is related to the luminous intensity of the object. The hue (H) component represents the color sensation produced when the human eye sees light of one or more wavelengths; it reflects the kind of color, such as red or brown, and is the fundamental characteristic that determines a color. The saturation (S) component represents the purity of the color, that is, the degree to which it is diluted with white light; for light of the same hue, the higher the saturation, the more vivid or pure the color. The hue and saturation components are usually referred to together as the chrominance components.
In addition, cameras normally collect video images in the luminance, chrominance (YUV) format.
In this step, for each frame Y, shadow suppression can be performed as follows:
1) Traverse each pixel of frame Y and find the pixels whose luminance component is less than that of the corresponding pixel in a background image, also in YUV format, obtained in advance.
Corresponding pixels are pixels with the same coordinates. For instance, for the pixel at coordinates (1, 1) in frame Y, the pixel at coordinates (1, 1) in the background image is its corresponding pixel. How to extract the background image is prior art.
Suppose Lum_curr(i) denotes the luminance component of a pixel in frame Y and Lum_bg(i) denotes the luminance component of its corresponding pixel in the background image; the condition is then Lum_bg(i) − Lum_curr(i) > T_lum. T_lum is greater than 0, a larger value indicating a larger luminance difference between the two pixels; its value is generally 10 to 60.
Because of the nature of a shadow, the luminance component of each of its pixels (shadow points) is usually smaller than that of the corresponding pixel in the background image. Therefore, in this step, the pixels satisfying Lum_bg(i) − Lum_curr(i) > T_lum are taken as suspected shadow points, and only these suspected shadow points receive the subsequent processing.
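The candidate selection of step 1) reduces to one vectorized comparison on the Y (luminance) planes; a minimal sketch, with the function name and the `t_lum=30` default chosen as mid-range assumptions:

```python
import numpy as np

def candidate_shadow_mask(y_curr, y_bg, t_lum=30):
    """Boolean mask of suspected shadow points.

    y_curr, y_bg: luminance (Y) planes of the current frame and the
    background image as integer arrays of the same shape. A pixel is a
    candidate when the background is brighter by more than t_lum
    (T_lum is generally 10 to 60 per the description).
    """
    diff = y_bg.astype(int) - y_curr.astype(int)   # avoid uint8 wrap-around
    return diff > t_lum
```

Only pixels flagged by this mask go on to the HSV tests of step 3), which keeps the per-frame cost low.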
2) Convert frame Y and the background image to the HSV format, and set up three parameters.
How to convert is prior art.
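Since the conversion is prior art, any standard variant works; as one hedged example, a single 8-bit YUV pixel can go through the common BT.601 full-range equations to RGB and then to HSV with Python's standard `colorsys` module (h, s, v each returned in [0, 1]):

```python
import colorsys

def yuv_to_hsv(y, u, v):
    """Convert one 8-bit YUV pixel to HSV, each component in [0, 1].

    Uses the widely used BT.601 full-range equations; the patent treats
    the conversion itself as prior art, so this is just one example.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: min(max(c / 255.0, 0.0), 1.0)   # normalize and clip
    return colorsys.rgb_to_hsv(clamp(r), clamp(g), clamp(b))
```

In practice the conversion is applied once to the whole frame and once to the background image, after which the three per-pixel parameters of step 3) can be evaluated.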
3) For each pixel Z found in step 1), perform the following:
3.1) Compute the ratio of the value (brightness) component of pixel Z in the HSV model to that of its corresponding pixel; if the ratio lies within a preset interval (α, β), the first parameter is set to 1, otherwise to 0. Compute the difference between the saturation component of pixel Z in the HSV model and that of its corresponding pixel; if the difference is less than a preset second threshold T_S, the second parameter is set to 1, otherwise to 0. Compute the absolute difference between the hue component of pixel Z in the HSV model and that of its corresponding pixel; if the absolute difference is less than a preset third threshold T_H, the third parameter is set to 1, otherwise to 0.
Here α reflects the strength of the light: the stronger the light, the larger α should be, but it must not be too large, or the shadow will be over-suppressed. β reflects robustness to noise: the larger its value, the stronger the adaptability to noise, but it must not be too large either, or the shadow will be under-suppressed. Usually α ranges from 0.3 to 0.7 and β from 0.9 to 1.0.
In addition, T_S is generally 0.04 to 0.1, and T_H is generally 20 to 40.
3.2) Multiply the value of each parameter by its preset weight and sum the products; if the sum is greater than a preset fourth threshold, pixel Z is determined to be a shadow point and is filtered out.
Because the hue, saturation and value components differ in importance, they need different weights: the value component has the largest weight, the saturation component the next largest, and the hue component the smallest. For example, the weights of the hue, saturation and value components can be set to 1, 3, 5 or 1, 4, 5 respectively. In practice, the weight of the saturation component may also equal that of the value component; for example, the three weights may be set to 1, 5, 5.
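Steps 3.1) and 3.2) can be combined into one per-pixel decision function. The sketch below uses mid-range thresholds from the description (α = 0.5, β = 0.95, T_S = 0.06, T_H = 30, weights 1, 3, 5); the hue is assumed to be in degrees so that the 20–40 range of T_H applies, and the fourth-threshold value `t4=7` is an assumption, since the description leaves it to actual requirements:

```python
def is_shadow_pixel(hsv_curr, hsv_bg,
                    alpha=0.5, beta=0.95, t_s=0.06, t_h=30,
                    weights=(1, 3, 5), t4=7):
    """Weighted vote of the three HSV tests of steps 3.1)/3.2).

    hsv_curr, hsv_bg: (hue, saturation, value) of pixel Z and of its
    corresponding background pixel; hue in degrees, s and v in [0, 1].
    """
    h, s, v = hsv_curr
    h_bg, s_bg, v_bg = hsv_bg
    p_hue = 1 if abs(h - h_bg) < t_h else 0                  # third parameter
    p_sat = 1 if (s - s_bg) < t_s else 0                     # second parameter
    p_val = 1 if (v_bg > 0 and alpha < v / v_bg < beta) else 0  # first parameter
    w_h, w_s, w_v = weights                                  # hue < sat < value
    return p_hue * w_h + p_sat * w_s + p_val * w_v > t4
```

With weights 1, 3, 5 and t4 = 7, a pixel must at least pass the value test plus the saturation test (5 + 3 = 8 > 7) before it is filtered out, which matches the stated ordering of component importance.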
How to filter out a pixel is prior art.
After shadow suppression has been completed for a frame in the manner described above, the degree of overlap between the foreground images of that frame before and after suppression is computed; how to compute it is prior art. If there is no shadow, or only a small one, in the frame before suppression, the overlap before and after suppression will be very high. The computed overlap is therefore compared with a preset fifth threshold: if the overlap is greater than the fifth threshold, it is determined to meet the requirement, and once L consecutive frames all meet the requirement, step 12 is repeated. L is a positive integer greater than 1, and its value is generally 500 to 2000.
The specific values of the fourth and fifth thresholds can be chosen according to actual requirements.
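The overlap computation itself is prior art and the exact formula is not given; as one plausible reading, it can be taken as the fraction of the pre-suppression foreground that survives suppression, computed on boolean masks (the function name and formula are assumptions):

```python
import numpy as np

def overlap_degree(fg_before, fg_after):
    """Fraction of the foreground before suppression that remains after it.

    fg_before, fg_after: boolean foreground masks of the same frame before
    and after shadow suppression. Returns a value in [0, 1]; close to 1
    means little foreground was removed (little or no shadow present).
    """
    before = int(np.count_nonzero(fg_before))
    if before == 0:
        return 1.0   # no foreground at all: trivially coincident
    both = int(np.count_nonzero(fg_before & fg_after))
    return both / before
```

Comparing this value against the fifth threshold then implements the "meets the requirement" test: a frame with a large removed shadow yields a low overlap and fails, which is what eventually triggers the return to step 12.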
This concludes the introduction of the method embodiment of the present invention.
Based on the above, Fig. 6 is a schematic diagram of the structure of the device embodiment of the present invention. As shown in Fig. 6, the device comprises:
a lane direction determination module 61, configured to determine the lane detection region, determine the lane direction by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames, and send the determined lane direction to a shadow detection module 62;
the shadow detection module 62, configured to determine, according to the lane direction, whether a shadow exists in each subsequent frame and, if shadows are determined to exist in N consecutive frames, notify a shadow suppression module 63 to perform its own function;
the shadow suppression module 63, configured to suppress, according to the HSV color model, the shadow in each subsequent frame, determine whether the degree of overlap between the foreground images of each frame before and after suppression meets the requirement and, if L consecutive frames meet it, notify the shadow detection module 62 to repeat its own function; M, N and L are positive integers greater than 1.
The shadow detection module 62 may specifically comprise (not shown, to simplify the drawing):
a first processing unit, configured to determine, according to the lane direction, the type of moving target to detect, the types comprising upward-moving targets and downward-moving targets, and to perform the following for each frame X: detect, in the lane detection region of frame X, the target region of each moving target of the type to detect, a target region being the minimum rectangular area containing the moving target; divide each target region equally into a left and a right subregion, compute the average gradient magnitudes G_L and G_R of the two subregions, and compute the ratio ρ of the smaller of G_L and G_R to the larger; compute the mean value ρ_avg of the ρ values of all target regions; compare ρ_avg with the preset first threshold and, if ρ_avg is less than the first threshold, determine that a shadow exists in frame X;
a second processing unit, configured to notify the shadow suppression module 63 to perform its own function when the first processing unit determines that shadows exist in N consecutive frames.
If vehicles are required to keep to the right, then when the lane direction is from the lower left to the upper right, the type of moving target to detect is the upward-moving target, and when the lane direction is from the lower right to the upper left, it is the downward-moving target;
if vehicles are required to keep to the left, then when the lane direction is from the lower left to the upper right, the type of moving target to detect is the downward-moving target, and when the lane direction is from the lower right to the upper left, it is the upward-moving target.
The shadow suppression module 63 may specifically comprise (not shown, to simplify the drawing):
a third processing unit, configured to perform the following for each frame Y in YUV format: traverse each pixel of frame Y and find the pixels whose luminance component is less than that of the corresponding pixel in a background image in YUV format obtained in advance, corresponding pixels being pixels with the same coordinates; convert frame Y and the background image to the HSV format and set up three parameters; for each pixel Z found, compute the ratio of the value component of pixel Z in the HSV model to that of its corresponding pixel and set the first parameter to 1 if the ratio lies within a preset interval, otherwise to 0; compute the difference between the saturation component of pixel Z and that of its corresponding pixel and set the second parameter to 1 if the difference is less than a preset second threshold, otherwise to 0; compute the absolute difference between the hue component of pixel Z and that of its corresponding pixel and set the third parameter to 1 if the absolute difference is less than a preset third threshold, otherwise to 0; multiply the value of each parameter by its preset weight and sum the products, and, if the sum is greater than a preset fourth threshold, determine that pixel Z is a shadow point and filter it out; afterwards, compute the degree of overlap between the foreground images of each frame before and after suppression, compare the overlap with a preset fifth threshold and, if the overlap is greater than the fifth threshold, determine that it meets the requirement;
a fourth processing unit, configured to notify the shadow detection module 62 to repeat its own function when the third processing unit determines that L consecutive frames all meet the requirement.
For the specific working flow of the device embodiment shown in Fig. 6, please refer to the corresponding description of the method embodiment shown in Fig. 1, which is not repeated here.
The above are merely preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (5)

1. A method for processing a video image in an intelligent traffic monitoring scene, characterized by comprising:
A. determining a lane detection region, and determining the lane direction by analyzing the running tracks of the moving targets in the lane detection region over M consecutive frames of video;
B. determining, according to the lane direction, whether a shadow exists in each subsequent frame; if shadows are determined to exist in N consecutive frames, executing step C;
C. suppressing, according to the hue, saturation, value (HSV) color model, the shadow in each subsequent frame, and determining whether the degree of overlap between the foreground images of each frame before and after suppression meets the requirement; if L consecutive frames meet it, repeating step B; M, N and L being positive integers greater than 1;
wherein determining whether a shadow exists in each subsequent frame comprises:
determining, according to the lane direction, the type of moving target to detect, the types comprising upward-moving targets and downward-moving targets; and, for each frame X, performing the following:
detecting, in the lane detection region of frame X, the target region of each moving target of the type to detect, a target region being the minimum rectangular area containing the moving target;
dividing each target region equally into a left and a right subregion, computing the average gradient magnitudes G_L and G_R of the two subregions, and computing the ratio ρ of the smaller of G_L and G_R to the larger;
computing the mean value ρ_avg of the ρ values of all target regions, comparing ρ_avg with a preset first threshold and, if ρ_avg is less than the first threshold, determining that a shadow exists in frame X;
wherein suppressing, according to the HSV color model, the shadow in each subsequent frame comprises:
for each frame Y in the luminance, chrominance YUV format, performing the following:
traversing each pixel of frame Y and finding the pixels whose luminance component is less than that of the corresponding pixel in a background image in YUV format obtained in advance, corresponding pixels being pixels with the same coordinates; converting frame Y and the background image to the HSV format and setting up three parameters;
for each pixel Z found, performing the following:
computing the ratio of the value component of pixel Z in the HSV model to that of its corresponding pixel and, if the ratio lies within a preset interval, setting the first parameter to 1, otherwise to 0; computing the difference between the saturation component of pixel Z in the HSV model and that of its corresponding pixel and, if the difference is less than a preset second threshold, setting the second parameter to 1, otherwise to 0; computing the absolute difference between the hue component of pixel Z in the HSV model and that of its corresponding pixel and, if the absolute difference is less than a preset third threshold, setting the third parameter to 1, otherwise to 0;
multiplying the value of each parameter by its preset weight and summing the products; if the sum is greater than a preset fourth threshold, determining that pixel Z is a shadow point and filtering it out.
2. The method according to claim 1, characterized in that said determining, according to said lane direction, the moving target type to be detected comprises:
If vehicles are required to keep to the right, then when said lane direction is from the lower left to the upper right, said moving target type to be detected is the upward-moving target, and when said lane direction is from the lower right to the upper left, said moving target type to be detected is the downward-moving target;
If vehicles are required to keep to the left, then when said lane direction is from the lower left to the upper right, said moving target type to be detected is the downward-moving target, and when said lane direction is from the lower right to the upper left, said moving target type to be detected is the upward-moving target.
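The mapping in this claim reduces to a small lookup; the string labels and function name below are hypothetical:

```python
def target_type(lane_direction, keep_right=True):
    """Map the lane direction to the moving-target type to detect.

    lane_direction: 'lower_left_to_upper_right' or 'lower_right_to_upper_left'
    keep_right: True where traffic keeps to the right, False where it keeps left.
    """
    toward_upper_right = (lane_direction == 'lower_left_to_upper_right')
    if keep_right:
        return 'upward' if toward_upper_right else 'downward'
    # Keep-left traffic reverses the mapping.
    return 'downward' if toward_upper_right else 'upward'
```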
3. The method according to claim 1, characterized in that said determining whether the registration of the foreground images in each frame of video image before and after the suppression meets the requirement comprises:
Calculating the registration of the foreground images in each frame of video image before and after the suppression, and comparing said registration with a preset fifth threshold; if said registration is greater than said fifth threshold, determining that said registration meets the requirement.
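A minimal sketch of this registration check, assuming "registration" means the fraction of pre-suppression foreground pixels that remain foreground after suppression (the claim does not define the measure), with an assumed fifth threshold:

```python
def foreground_registration(mask_before, mask_after, fifth_thresh=0.6):
    """Overlap ratio of foreground masks before and after shadow suppression.

    mask_before, mask_after: equal-length flat lists of 0/1 foreground flags.
    fifth_thresh is an assumed value for the preset fifth threshold.
    Returns (registration, meets_requirement).
    """
    # Pixels that are foreground in both masks
    overlap = sum(1 for b, a in zip(mask_before, mask_after) if b and a)
    before = sum(mask_before)
    registration = overlap / before if before else 0.0
    return registration, registration > fifth_thresh
```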
4. A video image processing device under an intelligent transportation monitoring scene, characterized by comprising:
A lane direction determination module, configured to determine a lane detection zone, determine the lane direction by analysing the running trajectories of the moving targets within the lane detection zone in M consecutive frames of video image, and send the determined lane direction to a shadow detection module;
Said shadow detection module, configured to determine respectively, according to said lane direction, whether a shadow exists in each subsequent frame of video image, and, if a shadow is determined to exist in N consecutive frames of video image, notify a shadow suppression module to perform its own function;
Said shadow suppression module, configured to perform shadow suppression respectively on each subsequent frame of video image according to the hue-saturation-value HSV colour model, and to determine whether the registration of the foreground images in each frame of video image before and after the suppression meets the requirement, and, if L consecutive frames all meet the requirement, notify said shadow detection module to repeat its own function; said M, N and L each being a positive integer greater than 1;
Wherein, said shadow detection module comprises:
A first processing unit, configured to determine, according to said lane direction, the moving target type to be detected, said moving target type comprising the upward-moving target and the downward-moving target; and, for each frame of video image X, to perform the following processing respectively: detecting from said video image X the target areas where the moving targets of the respective type are located within the lane detection zone, said target area being the minimum rectangular area containing a moving target; dividing each target area equally into a left sub-region and a right sub-region, and calculating the average gradient magnitudes G1 and G2 of the two sub-regions; calculating the ratio ρ of the smaller of G1 and G2 to the larger; calculating the mean value ρ̄ of the ρ values corresponding to all the target areas; and comparing said ρ̄ with a preset first threshold, and if said ρ̄ is less than said first threshold, determining that a shadow exists in said video image X;
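The gradient-symmetry test performed by the first processing unit can be sketched as below, using simple finite differences for the gradient; the value of the first threshold is an assumption, and a cast shadow attached to one side of a vehicle makes that half of the bounding box much smoother than the other, pulling the ratio toward 0:

```python
def shadow_score(gray_rois, first_thresh=0.5):
    """Gradient-symmetry shadow cue over detected target areas.

    gray_rois: list of 2-D lists (grayscale target rectangles).
    first_thresh is an assumed value for the preset first threshold.
    Returns (mean ratio, shadow_present).
    """
    def avg_gradient(block):
        # Mean magnitude of horizontal/vertical finite differences.
        h, w = len(block), len(block[0])
        total, count = 0.0, 0
        for y in range(h - 1):
            for x in range(w - 1):
                gx = block[y][x + 1] - block[y][x]
                gy = block[y + 1][x] - block[y][x]
                total += (gx * gx + gy * gy) ** 0.5
                count += 1
        return total / count if count else 0.0

    rhos = []
    for roi in gray_rois:
        w = len(roi[0])
        # Split each target area into equal left and right sub-regions
        left = [row[:w // 2] for row in roi]
        right = [row[w // 2:] for row in roi]
        g1, g2 = avg_gradient(left), avg_gradient(right)
        if max(g1, g2) > 0:
            rhos.append(min(g1, g2) / max(g1, g2))
    mean_rho = sum(rhos) / len(rhos) if rhos else 1.0
    return mean_rho, mean_rho < first_thresh  # True -> shadow present
```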
A second processing unit, configured to notify said shadow suppression module to perform its own function when said first processing unit determines that a shadow exists in N consecutive frames of video image;
Said shadow suppression module comprises:
A third processing unit, configured, for each frame of video image Y in luminance-chrominance YUV format, to perform the following processing respectively: traversing each pixel in said video image Y in YUV format, and finding the pixels whose luminance component is less than the luminance component of the corresponding pixel in a pre-obtained background image in YUV format, said corresponding pixel being the pixel with the same coordinates; converting both said video image Y in YUV format and said background image in YUV format to HSV format, and setting three parameters; for each pixel Z found, performing the following processing respectively: calculating the ratio of the luminance component of said pixel Z in the HSV model to the luminance component of said corresponding pixel in the HSV model, and setting the value of the first parameter to 1 if this ratio falls within a preset interval, and to 0 otherwise; calculating the difference between the saturation component of said pixel Z in the HSV model and the saturation component of said corresponding pixel in the HSV model, and setting the value of the second parameter to 1 if this difference is less than a preset second threshold, and to 0 otherwise; calculating the absolute value of the difference between the hue component of said pixel Z in the HSV model and the hue component of said corresponding pixel in the HSV model, and setting the value of the third parameter to 1 if this absolute value is less than a preset third threshold, and to 0 otherwise; multiplying the value of each parameter by the weight preset for it, and summing the products; and, if the sum is greater than a preset fourth threshold, determining that said pixel Z is a shadow point and filtering it out;
Afterwards, calculating the registration of the foreground images in each frame of video image before and after the suppression, and comparing said registration with a preset fifth threshold; if said registration is greater than said fifth threshold, determining that said registration meets the requirement;
A fourth processing unit, configured to notify said shadow detection module to repeat its own function when said third processing unit determines that L consecutive frames of video image all meet the requirement.
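The trajectory analysis performed by the lane direction determination module might be sketched as follows. The claims do not specify how the running trajectories are analysed, so the displacement-sign vote below is only one plausible reading; names and labels are hypothetical:

```python
def lane_direction(trajectories):
    """Infer the lane's slant from target trajectories over M frames.

    trajectories: list of [(x, y), ...] centroid tracks, with image y
    growing downward.  A track whose net dx and dy have opposite signs
    slants from the lower left toward the upper right; same signs slant
    the other way.  Each track votes, regardless of its travel sense.
    """
    votes = 0
    for t in trajectories:
        dx = t[-1][0] - t[0][0]
        dy = t[-1][1] - t[0][1]
        votes += 1 if dx * dy < 0 else -1
    return ('lower_left_to_upper_right' if votes > 0
            else 'lower_right_to_upper_left')
```

Note that a target moving down-screen along a lower-left-to-upper-right lane still has dx and dy of opposite signs, so upward- and downward-moving targets vote consistently for the same lane slant.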
5. The device according to claim 4, characterized in that:
If vehicles are required to keep to the right, then when said lane direction is from the lower left to the upper right, said moving target type to be detected is the upward-moving target, and when said lane direction is from the lower right to the upper left, said moving target type to be detected is the downward-moving target;
If vehicles are required to keep to the left, then when said lane direction is from the lower left to the upper right, said moving target type to be detected is the downward-moving target, and when said lane direction is from the lower right to the upper left, said moving target type to be detected is the upward-moving target.
CN 201010535397 2010-11-04 2010-11-04 Method and device for processing video image under intelligent transportation monitoring scene Active CN101982825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010535397 CN101982825B (en) 2010-11-04 2010-11-04 Method and device for processing video image under intelligent transportation monitoring scene

Publications (2)

Publication Number Publication Date
CN101982825A CN101982825A (en) 2011-03-02
CN101982825B true CN101982825B (en) 2013-01-09

Family

ID=43619723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010535397 Active CN101982825B (en) 2010-11-04 2010-11-04 Method and device for processing video image under intelligent transportation monitoring scene

Country Status (1)

Country Link
CN (1) CN101982825B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632351B (en) * 2013-12-16 2017-01-11 武汉大学 All-weather traffic image enhancement method based on brightness datum drift
CN104156727B (en) * 2014-08-26 2017-05-10 中电海康集团有限公司 Lamplight inverted image detection method based on monocular vision
CN104899881B (en) * 2015-05-28 2017-11-28 湖南大学 Moving vehicle shadow detection method in a kind of video image
CN111488780A (en) * 2019-08-30 2020-08-04 杭州海康威视系统技术有限公司 Duration obtaining method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101141633A (en) * 2007-08-28 2008-03-12 湖南大学 Moving object detecting and tracing method in complex scene
EP1909230A1 (en) * 2005-07-06 2008-04-09 HONDA MOTOR CO., Ltd. Vehicle and lane mark recognition apparatus
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring
CN101447082A (en) * 2008-12-05 2009-06-03 华中科技大学 Detection method of moving target on a real-time basis

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7965875B2 (en) * 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images



Similar Documents

Publication Publication Date Title
CN105005758B (en) Image processing apparatus
US8036427B2 (en) Vehicle and road sign recognition device
US9424462B2 (en) Object detection device and object detection method
US11700457B2 (en) Flicker mitigation via image signal processing
CN103679733B (en) A kind of signal lamp image processing method and its device
US9811746B2 (en) Method and system for detecting traffic lights
CA2609533C (en) Vehicle and road sign recognition device
CN105812674A (en) Signal lamp color correction method, monitoring method, and device thereof
WO2007061779A1 (en) Shadow detection in images
CN101982825B (en) Method and device for processing video image under intelligent transportation monitoring scene
CN107085707A (en) A kind of license plate locating method based on Traffic Surveillance Video
CN101739827A (en) Vehicle detecting and tracking method and device
US20160171314A1 (en) Unstructured road boundary detection
TWI394096B (en) Method for tracking and processing image
CN114143940A (en) Tunnel illumination control method, device, equipment and storage medium
CN102610104A (en) Onboard front vehicle detection method
JP2000048211A (en) Movile object tracking device
US10380743B2 (en) Object identifying apparatus
CN104361566A (en) Picture processing method for optimizing dark region
KR20130012749A (en) Video enhancer and video image processing method
CN110428411B (en) Backlight plate detection method and system based on secondary exposure
CN105704363A (en) Image data processing method and device
CN100391265C (en) A pseudo color inhibiting method with recursive protection
US9092876B2 (en) Object-tracking apparatus and method in environment of multiple non-overlapping cameras
CN102375980A (en) Image processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant