WO2019076326A1 - Shadow detection method and system for surveillance video images, and shadow removal method - Google Patents

Shadow detection method and system for surveillance video images, and shadow removal method

Info

Publication number
WO2019076326A1
Authority
WO
WIPO (PCT)
Prior art keywords
shadow
value
threshold
candidate
detection value
Prior art date
Application number
PCT/CN2018/110701
Other languages
English (en)
French (fr)
Inventor
晋兆龙
邹文艺
陈卫东
Original Assignee
苏州科达科技股份有限公司 (Suzhou Keda Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州科达科技股份有限公司
Priority to GB2007386.2A (GB2583198B)
Priority to DE112018004661.3T (DE112018004661T5)
Publication of WO2019076326A1
Priority to US16/852,597 (US20200250840A1)

Classifications

    • G06T7/49 — Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G06T5/94 — Dynamic range modification of images based on local image properties, e.g. for local contrast enhancement
    • G06T5/70 — Denoising; Smoothing
    • G06T7/11 — Region-based segmentation
    • G06T7/136 — Segmentation involving thresholding
    • G06T7/187 — Segmentation involving region growing, region merging, or connected component labelling
    • G06T7/194 — Segmentation involving foreground-background segmentation
    • G06T7/269 — Analysis of motion using gradient-based methods
    • G06T7/60 — Analysis of geometric attributes
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 — Determination of colour characteristics
    • G06T2207/10016 — Video; Image sequence
    • G06T2207/10024 — Color image
    • G06T2207/20032 — Median filtering
    • G06T2207/30232 — Surveillance

Definitions

  • The present application relates to the field of image processing, and in particular to a shadow detection method for surveillance video images, a shadow detection system for surveillance video images, and a method for removing shadows from surveillance video images that uses the shadow detection method.
  • Surveillance systems are among the most widely used security systems.
  • Shadows in the surveillance scene (both the shadow of the monitored target and the shadows of other background objects) have always been an important source of interference in target detection. In particular, under lighting, the shadow cast by a moving target follows it "like a shadow": it shares the target's motion properties while differing strongly from the corresponding background region, so it is easily detected together with the moving target.
  • When a shadow is detected as part of the target, it readily causes adhesion and fusion between targets and distortion of the target's geometric properties. How to detect moving targets in a surveillance video scene while eliminating the interference of cast shadows, preserving target integrity as much as possible, is therefore of great significance for intelligent video analysis.
  • The object of the present application is to provide a shadow detection method for surveillance video images, a shadow detection system for surveillance video images, and a shadow removal method for surveillance video images that uses the shadow detection method.
  • The shadow detection method, the shadow detection system, and the shadow removal method can effectively detect and remove shadows, minimizing the influence of shadows on the integrity of the monitored target.
  • According to one aspect, a shadow detection method for surveillance video images comprises the following steps. S10: acquire a current frame and a background frame from the source data. S20: acquire first candidate shadow regions from the current frame, where the brightness of a first candidate shadow region is lower than the brightness of the corresponding region in the background frame. S30: calculate the local-ternary-pattern (LTP) shadow detection value of every first candidate shadow region, and select the first candidate shadow regions whose LTP shadow detection value is greater than a first threshold as second candidate shadow regions. S40: calculate the hue-and-saturation detection value and the gradient detection value of each second candidate shadow region. S50: estimate the LTP shadow threshold, the hue-and-saturation threshold, and the gradient threshold from the LTP shadow detection values, hue-and-saturation detection values, and gradient detection values calculated for the second candidate shadow regions. S60: calculate the LTP shadow detection value, the hue-and-saturation detection value, and the gradient detection value of each first candidate shadow region. S70: select as shadow regions the first candidate shadow regions whose three detection values fall within the LTP shadow threshold, hue-and-saturation threshold, and gradient threshold ranges.
  • According to another aspect, a method for removing shadows from a surveillance video image at least comprises the steps S10 to S70 of the shadow detection method described above; the shadow regions detected by those steps are then removed, leaving the monitored target.
  • According to another aspect, a shadow detection system for surveillance video images is provided.
  • The shadow detection system includes: an extraction module configured to acquire a current frame, a background frame, or a foreground frame from the source data; a first candidate shadow region acquiring module configured to acquire first candidate shadow regions from the current frame, where the brightness of a first candidate shadow region is lower than the brightness of the corresponding region in the background frame; a second candidate shadow region acquiring module configured to calculate the local-ternary-pattern (LTP) shadow detection value of every first candidate shadow region and to select those whose LTP shadow detection value is greater than the first threshold as second candidate shadow regions; a calculation module configured to calculate the hue-and-saturation detection value and the gradient detection value of each second candidate shadow region; and a threshold estimation module configured to estimate the LTP shadow threshold, the hue-and-saturation threshold, and the gradient threshold from the LTP shadow detection values, hue-and-saturation detection values, and gradient detection values calculated for the second candidate shadow regions.
  • Compared with the prior art, the shadow detection method, the shadow detection system, and the shadow removal method provided by the present application proceed as follows.
  • A rough first candidate shadow region is obtained first, and a small set of reliably true second candidate shadow regions is then extracted from it and used to estimate the threshold parameters of the three subsequent shadow detectors.
  • Then, based on the principle that a shadow region and its corresponding background region have consistent texture and constant chromaticity, the three shadow detectors extract more accurate shadow regions from the first candidate shadow regions in parallel, and all of these more accurate regions are jointly screened to obtain still more accurate shadow regions.
  • The shadow detection method of the present application gives a marked detection effect on the shadow regions of moving targets in most common indoor scenes, and the detected shadow regions are highly accurate.
  • The algorithm can be applied as a stand-alone module in a surveillance scenario, combined with a background-modeling or background-difference algorithm, and operates on the real-time video frame (current frame), the foreground frame, and the background frame. It minimizes the influence of shadows on target integrity, so that the monitored target obtained after the subsequent removal of the shadow regions is more accurate and complete, which is more conducive to monitoring the target.
  • FIG. 1 is a flowchart of a shadow detection method for an image according to an embodiment of the present application;
  • FIG. 2 is a flowchart of the steps of acquiring a first candidate shadow region in the shadow detection method for an image according to an embodiment of the present application;
  • FIG. 3 is a flowchart of calculating the shadow detection value of the improved local ternary pattern in the shadow detection method for an image according to an embodiment of the present application;
  • FIG. 4 is a flowchart of calculating the value of the improved local ternary pattern in the shadow detection method for an image according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the calculation result of the value of the improved local ternary pattern in the shadow detection method for an image according to an embodiment of the present application.
  • The shadow detection method for surveillance video images of the present application includes the steps of: acquiring a current frame and a background frame from the source data; acquiring first candidate shadow regions from the current frame, the brightness of a first candidate shadow region being lower than the brightness of the corresponding region in the background frame; calculating the local-ternary-pattern (LTP) shadow detection value of every first candidate shadow region, and selecting those whose LTP shadow detection value is greater than the first threshold as second candidate shadow regions; calculating the hue-and-saturation detection value and the gradient detection value of each second candidate shadow region; estimating the LTP shadow threshold, the hue-and-saturation threshold, and the gradient threshold from the detection values calculated for the second candidate shadow regions; calculating the LTP shadow detection value, the hue-and-saturation detection value, and the gradient detection value of each first candidate shadow region; and selecting as shadow regions the first candidate shadow regions whose LTP shadow detection value, hue-and-saturation detection value, and gradient detection value fall within the LTP shadow threshold, hue-and-saturation threshold, and gradient threshold ranges.
  • The shadow detection method of the present application mainly operates in two color spaces — the hue, saturation, value (HSV) color space and the red, green, blue (RGB) color space — and uses two texture descriptors: the gradient and the local ternary pattern.
  • The main idea of the algorithm is to first extract candidate shadow regions (the first and second candidate shadow regions described below) and then extract the shadow regions from these candidates, so that the extracted shadow regions are more accurate.
  • the shadow detection method of the surveillance video image includes the following steps:
  • Step S10 Obtain the current frame and the background frame from the source data.
  • Here, the source data refers to the original image or video data acquired by the monitoring device; the current frame is the image currently collected in real time; and the background frame is extracted from the surveillance footage or video by background modeling or a background-difference algorithm.
  • In one embodiment of the present application, step S10 further includes simultaneously acquiring a foreground frame from the source data, where the foreground frame refers to a surveillance image recorded earlier than the current frame during the operation of the monitoring device.
  • Step S20 Acquire a first candidate shadow region from the current frame.
  • the brightness of the first candidate shadow area is smaller than the brightness of the corresponding area in the background frame.
  • This step mainly relies on the assumption that a shadow region is darker than the corresponding background region. This assumption holds in most cases, so it can be used to extract a rough candidate shadow region (the first candidate shadow region mentioned above); accordingly, the brightness of the acquired first candidate shadow region is lower than the brightness of the corresponding region in the background frame.
  • The background frame is an image containing no monitoring target; that is, the regions other than the monitoring target and the shadow region are the same in the current frame as in the background frame. The first candidate shadow region in the current frame therefore occupies essentially the same position as the corresponding region in the background frame.
  • The first candidate shadow region actually acquired in step S20 contains most of the true shadow region together with parts of the monitoring target falsely detected as shadow; if the target itself is dark, the area falsely detected under this assumption can be large. Further, in the embodiment of the present application, the inventors' statistical analysis of monitoring targets and shadow regions found that, in the red, green, blue (RGB) color space, the per-channel ratios of the spectral frequencies of a shadow region to those of the corresponding background region are close to the corresponding ratios computed from the foreground frame, which motivates the screening in the following steps.
  • step S20 further includes the following steps:
  • Step S201 Calculate the brightness of each area in the current frame and the background frame, and select an area in the current frame that is less than the brightness of the corresponding area in the background frame as the first area.
  • Step S202: Calculate the three first ratios of the spectral frequencies, in the red, green, and blue channels respectively, of the first region and of the second region (the region of the background frame corresponding to the first region); and calculate the three second ratios of the spectral frequencies, in the red, green, and blue channels respectively, of the third region (the region of the foreground frame corresponding to the first region) and of the second region.
  • The first region, the second region, and the third region occupy substantially the same position in their respective frames.
  • The three first ratios are calculated as follows:
  • α_r = C_r / B_r, α_g = C_g / B_g, α_b = C_b / B_b
  • where α_r, α_g, and α_b are the first ratios of the spectral frequencies in the red, green, and blue channels respectively; C_r, C_g, and C_b are the spectral frequencies of the current frame in the red, green, and blue channels; and B_r, B_g, and B_b are the spectral frequencies of the background frame in the red, green, and blue channels.
  • The three second ratios — the ratios of the spectral frequencies of the third region (in the foreground frame) to those of the second region in the red, green, and blue channels — are calculated in the same manner as the first ratios: the current-frame parameters are replaced with the corresponding foreground-frame parameters, and the background-frame parameters are kept.
  • For example, C_r is replaced with the spectral frequency of the foreground frame in the red channel; the other current-frame parameters are replaced similarly, and details are not repeated here.
  • Step S203 Select the first region whose difference between the first ratio and the second ratio is smaller than the second threshold as the first candidate shadow region.
  • the second threshold can be set and adjusted according to actual needs.
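The ratio test of steps S202–S203 can be sketched in Python. This is an illustrative interpretation rather than the patented implementation: "spectral frequency" is read here as the mean channel intensity of a region, and the function names and the sample second threshold are assumptions.

```python
import numpy as np

def channel_ratios(region, background_region, eps=1e-6):
    """Per-channel ratios of the "spectral frequencies" (interpreted here as
    mean channel intensities) of a region to the corresponding background
    region, as in step S202: alpha_r = C_r / B_r, and likewise for g and b."""
    c = region.reshape(-1, 3).mean(axis=0)             # C_r, C_g, C_b
    b = background_region.reshape(-1, 3).mean(axis=0)  # B_r, B_g, B_b
    return c / (b + eps)

def is_first_candidate(cur, bg, fg, second_threshold=0.1):
    """Step S203: a darker-than-background region is kept as a first candidate
    shadow region when its current/background ratios are close to its
    foreground/background ratios in every channel."""
    first_ratios = channel_ratios(cur, bg)
    second_ratios = channel_ratios(fg, bg)
    return bool(np.all(np.abs(first_ratios - second_ratios) < second_threshold))
```

For a region that is uniformly 80 in the current frame, 100 in the background frame, and 82 in the foreground frame, the two ratio triples are about 0.80 and 0.82 in every channel, so the region passes; a foreground value of 30 would fail the test.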
  • Step S30: Calculate the local-ternary-pattern (LTP) shadow detection value of every first candidate shadow region, and select the first candidate shadow regions whose LTP shadow detection value is greater than the first threshold as second candidate shadow regions.
  • In the present application, shadow regions are mainly detected by three shadow detectors, each with its own parameter threshold. Because scenes in surveillance video vary, requiring a separate set of hand-tuned thresholds for each scene would limit the applicability of the algorithm; relatively accurate thresholds therefore need to be estimated in advance.
  • Specifically, the present application uses an improved local ternary pattern detector (hereinafter, ILTP detector) to screen all the first candidate shadow regions and select highly reliable shadow regions (regions that meet a stricter detection criterion and are essentially the final shadow regions). Based on these reliable regions, the threshold parameters of the three shadow detectors used on the remaining first candidate shadow regions — the ILTP detector, the hue-and-saturation (HS) detector, and the gradient detector — are estimated.
  • It should be noted that the ILTP detector is chosen for this step because it detects shadow regions with higher accuracy and less target interference than the HS detector and the gradient detector.
  • FIG. 3 shows a flowchart of the calculation of the shadow detection value of the improved local ternary pattern in the shadow detection method of one embodiment of the present application.
  • Specifically, the calculation of the shadow detection value of the improved local ternary pattern of the present application includes the following steps.
  • Step S301: Calculate the local-ternary-pattern value of every pixel of the first candidate shadow region or the second candidate shadow region in the current frame. Specifically, for step S30 above, the improved local-ternary-pattern (ILTP) value is calculated for the pixels of the first candidate shadow region.
  • Step S302: Calculate the local-ternary-pattern value of each same-position pixel in the background frame.
  • Step S303: Count the pixels of the first candidate shadow region or the second candidate shadow region in the current frame whose local-ternary-pattern value equals that of the same-position pixel in the background frame, and take this count as the LTP shadow detection value.
  • Specifically, the ILTP values calculated in steps S301 and S302 are compared pixel by pixel: if the ILTP value of a pixel of the current frame from step S301 equals the ILTP value of the same-position pixel from step S302, that pixel is counted as one matching pixel.
  • The resulting count is the LTP shadow detection value.
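As a minimal sketch (names assumed), the counting in step S303 is simply a comparison of the per-pixel code maps of the region in the two frames:

```python
import numpy as np

def ltp_shadow_detection_value(codes_current, codes_background):
    """Step S303: count the pixels of a candidate region whose LTP value in
    the current frame equals the value of the same-position background pixel."""
    codes_current = np.asarray(codes_current)
    codes_background = np.asarray(codes_background)
    return int(np.count_nonzero(codes_current == codes_background))
```

A region is then promoted to a second candidate shadow region when this count exceeds the first threshold.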
  • FIG. 4 shows a flowchart of the calculation of the value of the improved local ternary pattern in the shadow detection method of one embodiment of the present application.
  • Specifically, the calculation of the local-ternary-pattern value includes at least the following steps.
  • Step S3001: Set a noise tolerance value.
  • Step S3002: Compare the gray value of each neighborhood pixel surrounding the pixel with the gray value of the pixel itself.
  • The comparison yields only three possible values. Specifically, if the absolute difference between the gray value of a neighborhood pixel and that of the pixel is smaller than the noise tolerance value, the neighborhood pixel is marked with the first value; if the gray value of a neighborhood pixel is greater than or equal to the gray value of the pixel plus the noise tolerance value, it is marked with the second value; and if the gray value of a neighborhood pixel is less than or equal to the gray value of the pixel minus the noise tolerance value, it is marked with the third value.
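The three-way marking of step S3002 can be written as a small helper; the bit strings 01/10/00 follow the example of FIG. 5 below, and the function name is an assumption:

```python
def ternary_code(value, reference, t):
    """Mark a gray value against a reference gray value with noise tolerance t:
    the second value 10 if value >= reference + t, the third value 00 if
    value <= reference - t, and the first value 01 otherwise (|diff| < t)."""
    if value >= reference + t:
        return "10"
    if value <= reference - t:
        return "00"
    return "01"
```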
  • Referring to FIG. 5, there is shown a schematic diagram of the calculation result of the value of the improved local ternary pattern in the shadow detection method of one embodiment of the present application.
  • The detected pixel and its neighborhood pixels are arranged in a nine-square (3×3) grid: the eight neighborhood pixels are the adjacent pixels surrounding the detected pixel.
  • In FIG. 5, the detected pixel has a gray value of 90, the noise tolerance value t is 6, the first value is 01, the second value is 10, and the third value is 00.
  • Accordingly, the neighborhood pixel in the upper-left corner of the detected pixel is marked 01, the neighborhood pixel to the left of the detected pixel is marked 00, and the neighborhood pixel above the detected pixel is marked 10; the remaining neighborhood pixels are marked similarly (see the nine-square grid in FIG. 5), after which step S3003 is performed.
  • Step S3003: Arrange the first, second, and third values of all the neighborhood pixels into a first array in a first order.
  • In one embodiment, the first order starts from the neighborhood pixel in the upper-left corner of the nine-square grid formed by the eight neighborhood pixels and proceeds clockwise to form the first array. Since every neighborhood pixel is marked with the first value 01, the second value 10, or the third value 00, the first array is essentially a string composed of 01, 10, and 00. In the example of FIG. 5, the first array formed after step S3003 is 0110010010010000.
  • Step S3004: Compare the gray value of each neighborhood pixel with that of the neighborhood pixel farthest from it (i.e., diametrically opposite it), using the same rule: if the absolute difference between the gray values of the two neighborhood pixels is smaller than the noise tolerance value, the first value is formed; if the gray value of the one neighborhood pixel is greater than or equal to the gray value of the opposite neighborhood pixel plus the noise tolerance value, the second value is formed; and if its gray value is less than or equal to the gray value of the opposite neighborhood pixel minus the noise tolerance value, the third value is formed.
  • Without this step, the association information between the neighborhood pixels would be ignored; adding it enhances the expressive capability of the local ternary pattern.
  • The neighborhood pixels are compared in pairs along the two diagonal directions, the vertical direction, and the horizontal direction through the detected pixel, and the comparison results are marked in a 2×2 (field-shaped) table, as shown in FIG. 5.
  • The value in the upper-left cell of the table is the result of comparing the neighborhood pixel in the upper-left corner of the nine-square grid with the neighborhood pixel in the lower-right corner: gray value 89 is compared with gray value 91, and since the difference between 89 and 91 is smaller than the noise tolerance value 6, the upper-left cell is marked with the first value 01.
  • The value in the upper-right cell is the result of comparing the neighborhood pixel in the upper-right corner of the nine-square grid with the neighborhood pixel in the lower-left corner; the value in the lower-left cell is the result of comparing the two neighborhood pixels in the horizontal direction (to the left and right of the detected pixel); and the value in the lower-right cell is the result of comparing the two neighborhood pixels in the vertical direction (above and below the detected pixel).
  • Step S3005: Arrange all the first, second, and third values so formed into a second array in a second order.
  • In one embodiment, the second order likewise starts from the upper-left cell of the field-shaped table and proceeds clockwise.
  • The second array thus includes four values; in the example of FIG. 5, the second array is 01100010.
  • Step S3006: Concatenate the first array and the second array as the value of the local ternary pattern.
  • That is, the resulting string of digits is used as the local-ternary-pattern value; for the example of FIG. 5 it is 011001001001000001100010.
  • It can be seen that the local-ternary-pattern value in FIG. 5 is composed of 12 two-bit values. If the three color channels of the RGB color space are considered together, the final ILTP value includes 36 values.
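Steps S3001–S3006 can be put together for a single 3×3 patch as a sketch; the clockwise orderings and the pairing of the 2×2 table cells follow the reading of FIG. 5 above and are assumptions:

```python
def ternary_code(value, reference, t):
    """Three-way comparison with noise tolerance t (steps S3002/S3004)."""
    if value >= reference + t:
        return "10"  # second value
    if value <= reference - t:
        return "00"  # third value
    return "01"      # first value

def iltp_value(patch, t=6):
    """Improved LTP value of the center pixel of a 3x3 grayscale patch."""
    center = patch[1][1]
    # First array: the eight neighbors, clockwise from the upper-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    first_array = "".join(ternary_code(patch[i][j], center, t) for i, j in order)
    # Second array: opposite-neighbor pairs, clockwise from the upper-left
    # cell of the 2x2 table: UL-DR diagonal, UR-DL diagonal, vertical, horizontal.
    pairs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((0, 1), (2, 1)), ((1, 0), (1, 2))]
    second_array = "".join(ternary_code(patch[a][b], patch[c][d], t)
                           for (a, b), (c, d) in pairs)
    return first_array + second_array
```

The result is a 24-digit string (12 two-bit values) per channel, matching the count stated above; computed over the three RGB channels this yields 36 values.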
  • For step S303 above, the local-ternary-pattern value is calculated for each detected pixel in the current frame and for its same-position pixel in the background frame, and the number of pixels whose two values are identical is counted. This count is the LTP shadow detection value of a first candidate shadow region finally obtained in step S30; if the LTP shadow detection value of a first candidate shadow region is greater than the first threshold, the region is taken as a second candidate shadow region.
  • It should be noted that FIG. 5 is only an example and is not limiting.
  • In other embodiments, the first order, the second order, and the first, second, and third values, among other parameters, may be set according to actual requirements.
  • The detected pixel and its neighborhood pixels need not even form a nine-square grid; for example, the neighborhood pixels may surround the detected pixel in a ring. Details are not repeated here.
  • Step S40 Calculate the hue and saturation detection values and the gradient detection values of the respective second candidate shadow regions.
  • The hue detection value of a second candidate shadow region is the average of the differences between the hue values of all pixels of the second candidate shadow region and those of the same-position pixels in the background frame; similarly, the saturation detection value of a second candidate shadow region is the average of the differences between the saturation values of all pixels of the second candidate shadow region and those of the same-position pixels in the background frame.
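The hue and saturation detection values of step S40 can be sketched as mean per-pixel differences in HSV space; the function name and the signedness of the difference are assumptions, since the original text only says "difference":

```python
import numpy as np

def hs_detection_values(hsv_current, hsv_background):
    """Hue and saturation detection values of a region: the averages of the
    per-pixel hue and saturation differences between the region in the
    current frame and the same-position pixels of the background frame."""
    diff = hsv_current.astype(float) - hsv_background.astype(float)
    hue_value = float(diff[..., 0].mean())
    sat_value = float(diff[..., 1].mean())
    return hue_value, sat_value
```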
Step S50: Estimate the shadow threshold of the local ternary pattern, the hue and saturation thresholds, and the gradient threshold from the shadow detection values of the local ternary pattern, the hue and saturation detection values, and the gradient detection values computed for the second candidate shadow regions. Specifically, since the computation of this application adds association information between neighboring pixels in step S30 above, the expressive power of the local ternary pattern is enhanced; the second candidate shadow regions thus obtained are very accurate and are essentially the final shadow regions. The thresholds used to test all first candidate shadow regions can therefore be estimated from the values computed on the second candidate shadow regions: the average of the local-ternary-pattern shadow detection values of all second candidate shadow regions is taken as the shadow threshold of the local ternary pattern; the average of the hue and saturation detection values of all second candidate shadow regions is taken as the hue and saturation thresholds; and the average of the gradient detection values of all second candidate shadow regions is taken as the gradient threshold. These averages may also be adjusted according to actual needs before being used as the final thresholds, and details are not repeated here.
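The averaging in step S50 amounts to a few lines; the sketch below assumes the detection values have already been collected per second candidate region, with hue and saturation carried as pairs (all names are illustrative, and as noted the averages may be further adjusted to the scene):

```python
def estimate_thresholds(iltp_vals, hs_vals, grad_vals):
    """Step S50 sketch: per-detector averages over the second candidate
    shadow regions become the thresholds applied to every first candidate
    region. hs_vals holds (hue, saturation) pairs, one per region."""
    avg = lambda xs: sum(xs) / len(xs)
    hues, sats = zip(*hs_vals)
    return {
        "shadow": avg(iltp_vals),      # local-ternary-pattern shadow threshold
        "hue": avg(hues),
        "saturation": avg(sats),
        "gradient": avg(grad_vals),
    }
```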
Because the second candidate shadow regions are selected with high accuracy and little target interference, the threshold parameters of the individual shadow detectors determined from them for all subsequent first candidate shadow regions have better representativeness and accuracy.
Step S60: Calculate the shadow detection value of the local ternary pattern, the hue and saturation detection values, and the gradient detection value of each first candidate shadow region. These values are computed in the same manner as in steps S30 and S50 above.
Step S70: Select as shadow regions the first candidate shadow regions whose local-ternary-pattern shadow detection value, hue and saturation detection values, and gradient detection value all fall within the shadow threshold of the local ternary pattern, the hue and saturation thresholds, and the gradient threshold, respectively. Specifically, whether the local-ternary-pattern shadow detection value of a first candidate shadow region is within the shadow threshold range can be determined with the method of step S30 above, simply replacing the first threshold with the shadow threshold of the local ternary pattern obtained in step S50.
For the hue and saturation detection: when the average hue difference of a first candidate shadow region is smaller than the hue threshold and its average saturation difference is smaller than the saturation threshold, the hue and saturation detection values of the region are within the hue and saturation threshold range and the output value is 1; otherwise, they exceed the hue and saturation threshold range and the output value is 0. Here, the average hue difference of a first candidate shadow region is the average of the differences between the hue values of all pixels in the region and those of the corresponding pixels in the background frame; similarly, its average saturation difference is the average of the differences between the saturation values of all pixels in the region and those of the corresponding pixels in the background frame.
Compared with the traditional hue-saturation-value (HSV) detector, which computes and analyzes the H, S, and V channels of both the current frame and the background frame, the hue and saturation detection proposed in this application removes the V-channel computation: the H and S channels are used jointly to express chrominance invariance, and the neighborhood information of the H and S channels (i.e., the neighboring pixels) is fully exploited. Because the hue threshold and the saturation threshold are computed from the second candidate shadow regions, they adapt from scene to scene. The combined use of single isolated pixels and neighborhood information reduces the interference caused by sudden illumination changes, lowers missed detections, and improves detection accuracy.
For the gradient detection, the gradient magnitude of a pixel is $G=\sqrt{G_x^2+G_y^2}$ and its angle is $\theta=\arctan(G_y/G_x)$, where $G_x$ is the horizontal gradient value and $G_y$ the vertical gradient value of the pixel. Let $G^{F}$ denote the gradient value of a pixel of the current frame in a color channel and $G^{B}$ that of the corresponding background-frame pixel in the same channel, with $\tau_G$ the gradient threshold; likewise, let $\theta^{F}$ and $\theta^{B}$ denote the angle values of the two pixels in the same channel, with $\tau_\theta$ the angle threshold. When the average of all gradient differences between the pixels of the current frame and the corresponding pixels of the background frame over the red, green, and blue channels is smaller than the gradient threshold, and the average of all angle differences between the same pixels over the three channels is smaller than the angle threshold, the gradient detection value of the first candidate shadow region is within the gradient threshold range and the output value is 1; otherwise, it exceeds the gradient threshold range and the output value is 0. The output value of 1 or 0 thus determines whether the gradient detection values of a first candidate shadow region are all within the gradient threshold range.
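A sketch of this gradient test for a single channel follows; the patent averages over the R, G, and B channels, and central differences (`np.gradient`) stand in here for whatever gradient operator an implementation actually uses, so the function is illustrative rather than normative:

```python
import numpy as np

def gradient_detector(cur, bg, tau_g, tau_theta):
    """Gradient detector sketch for one colour channel: output 1 when both
    the mean gradient-magnitude difference and the mean angle difference
    between current frame and background frame fall below their thresholds."""
    def grad_and_angle(img):
        gy, gx = np.gradient(img.astype(float))   # axis-0 then axis-1 derivative
        mag = np.sqrt(gx ** 2 + gy ** 2)          # G = sqrt(Gx^2 + Gy^2)
        ang = np.arctan2(gy, gx)                  # theta
        return mag, ang

    mc, ac = grad_and_angle(cur)
    mb, ab = grad_and_angle(bg)
    mag_ok = np.abs(mc - mb).mean() < tau_g
    ang_ok = np.abs(ac - ab).mean() < tau_theta
    return 1 if (mag_ok and ang_ok) else 0
```

A full implementation would run this per channel and compare the channel-averaged differences against the thresholds estimated in step S50.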
Further, the present application also provides a method for removing shadows from a surveillance video image, which includes at least the shadow detection method of FIGS. 1 to 5. Specifically, after the shadow regions are selected, the method further includes the following steps: obtaining the foreground frame from the source data; and removing the shadow regions from the current frame by median filtering and hole filling in combination with the foreground frame.
Because this shadow removal method uses the shadow detection method of FIGS. 1 to 5, the detected shadow regions are very accurate; with post-processing such as median filtering and hole filling added, the shadow regions can be separated from the monitoring target. With the interference of the shadow regions removed, the shape and contour of the monitoring target are relatively complete and accurate, providing accurate and effective data for further pattern-recognition algorithms such as recognition and classification.
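A minimal sketch of this post-processing on binary masks might look as follows, assuming SciPy's `ndimage` as the filtering backend; the exact filter sizes and mask conventions are assumptions, not the patent's specification:

```python
import numpy as np
from scipy import ndimage

def remove_shadow(foreground, shadow):
    """Post-processing sketch: knock the detected shadow pixels out of the
    foreground mask, then median-filter and fill holes so the remaining
    target silhouette stays closed. Boolean masks in, boolean mask out."""
    target = foreground & ~shadow
    target = ndimage.median_filter(target.astype(np.uint8), size=3).astype(bool)
    return ndimage.binary_fill_holes(target)
```

The median filter suppresses speckle left by per-pixel misclassifications, and hole filling closes small gaps inside the target that shadow removal may have opened.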
Further, the present application also provides a shadow detection system for surveillance video images, which implements the shadow detection method described above. The shadow detection system mainly includes: an extraction module, a first candidate shadow region acquisition module, a second candidate shadow region acquisition module, a first calculation module, a threshold estimation module, a second calculation module, and a shadow region selection module.
The extraction module is configured to obtain the current frame, the background frame, or the foreground frame from the source data. The first candidate shadow region acquisition module is configured to obtain first candidate shadow regions from the current frame, the brightness of a first candidate shadow region being smaller than that of the corresponding region in the background frame.
The second candidate shadow region acquisition module is configured to calculate the shadow detection values of the local ternary pattern of all first candidate shadow regions and to select, as second candidate shadow regions, the first candidate shadow regions whose local-ternary-pattern shadow detection value is greater than the first threshold. The first calculation module is configured to calculate the hue and saturation detection values and the gradient detection values of each second candidate shadow region. The threshold estimation module is configured to estimate the shadow threshold of the corresponding local ternary pattern, the hue and saturation thresholds, and the gradient threshold from the calculated shadow detection values of the local ternary pattern, hue and saturation detection values, and gradient detection values of the second candidate shadow regions. The second calculation module is configured to calculate the shadow detection value of the local ternary pattern, the hue and saturation detection values, and the gradient detection value of each first candidate shadow region.
The shadow region selection module is configured to select, as shadow regions, the first candidate shadow regions whose local-ternary-pattern shadow detection value, hue and saturation detection values, and gradient detection value all fall within the shadow threshold of the local ternary pattern, the hue and saturation thresholds, and the gradient threshold.
In summary, in the shadow detection method for surveillance video images, the corresponding shadow detection system, and the shadow removal method that uses this detection method provided by embodiments of the present application, the first candidate shadow regions (rough shadow candidates) are obtained first, and a small set of true second candidate shadow regions is extracted from them to estimate the threshold parameters of the three subsequent shadow detectors. Then, based on the principle that a shadow region and its corresponding background region exhibit texture consistency and chrominance constancy, the three shadow detectors extract fairly accurate shadow regions from the first candidate shadow regions in parallel, and all of these are jointly filtered to obtain still more accurate shadow regions. The shadow regions detected by the shadow detection method of this application are therefore very accurate, with markedly good results on the shadow regions of moving monitoring targets in most common indoor scenes. Moreover, the algorithm can be applied as a stand-alone module in a surveillance scenario: combined with background modeling or a background subtraction algorithm, it can be implemented and applied once the real-time video frame (current frame), the foreground frame, and the background frame are available. It minimizes the influence of shadows on target integrity, so that the monitoring target obtained after subsequent shadow removal is more accurate and complete, which facilitates monitoring of the target.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present application discloses a shadow detection method and system for surveillance video images, and a shadow removal method. The shadow detection method for surveillance video images comprises the following steps: obtaining a current frame and a background frame from source data; obtaining first candidate shadow regions from the current frame; calculating the shadow detection values of the local ternary pattern of all first candidate shadow regions and selecting second candidate shadow regions; calculating the hue and saturation detection values and the gradient detection values of each second candidate shadow region; estimating the shadow threshold of the local ternary pattern, the hue and saturation thresholds, and the gradient threshold; calculating the shadow detection value of the local ternary pattern, the hue and saturation detection values, and the gradient detection value of each first candidate shadow region; and selecting, as shadow regions, the first candidate shadow regions whose local-ternary-pattern shadow detection value, hue and saturation detection values, and gradient detection value all fall within the threshold ranges.

Description

监控视频图像的阴影检测方法及其系统、阴影去除方法 技术领域
本申请涉及图像处理技术领域,尤其涉及一种监控视频图像的阴影检测方法、监控视频图像的阴影检测系统以及使用该监控视频图像的阴影检测方法的监控视频图像的阴影的去除方法。
背景技术
监控系统是安防系统中应用最多的系统之一。对于监控技术来说,监控场景下的阴影(包括监控目标的阴影和其他背景物体的阴影等)一直是干扰监控目标监控和检测的重要因素,尤其是在光照条件下,处于运动状态的监控目标投射的阴影与监控目标“如影随行”,即监控目标投射的阴影与监控目标有着相似的运动属性,再加上同样对应背景区域具有较大的区分度,很容易连同运动状态的监控目标一起被检测出来。
若阴影被误检为监控目标而被同时检测出来,则容易造成监控目标的粘连、融合、几何属性畸变等情况。因此,如何在监控视频场景下对处于运动状态的监控目标进行检测、消除其投射的阴影的干扰,尽可能保证监控目标的完整度对智能视频分析有着重要的意义。
发明内容
针对现有技术中的缺陷,本申请的目的是提供一种监控视频图像的阴影检测方法、监控视频图像的阴影检测系统以及使用该监控视频图像的阴影检测方法的监控视频图像的阴影的去除方法。该监控视频图像的阴影检测方法、阴影检测系统以及监控视频图像的阴影的去除方法可以有效地对阴影进行检测和去除,最大限度减少阴影对监控目标的完整性造成的影响。
根据本申请的一个方面提供一种监控视频图像的阴影检测方法,所述监控视频图像的阴影检测方法包括如下步骤:S10:从源数据中获取当前 帧和背景帧;S20:由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;S30:计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;S40:计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值;S50:根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;S60:计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;S70:选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
根据本申请的另一个方面,还提供一种监控视频图像的阴影的去除方法,所述监控视频图像的阴影的去除方法至少包括实现监控视频图像的阴影检测方法的如下步骤:S10:从源数据中获取当前帧和背景帧;S20:由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;S30:计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;S40:计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值;S50:根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;S60:计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;S70:选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
根据本申请的又一个方面,还提供一种监控视频图像的阴影检测系统,所述监控视频图像的阴影检测系统包括:提取模块,用于从源数据中获取当前帧、背景帧或者前景帧;第一候选阴影区域获取模块,用于由所 述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;第二候选阴影区域获取模块,用于计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;第一计算模块,用于计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值;阈值估算模块,用于根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;第二计算模块,用于计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;阴影区域选取模块,用于选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
相比于现有技术,本申请实施例提供的监控视频图像的阴影检测方法、监控视频图像的阴影检测系统以及使用该监控视频图像的阴影检测方法的监控视频图像的阴影的去除方法中由于先获取了第一候选阴影区域(粗糙的阴影候选区)从第一候选阴影区域中提取出少部分真实的第二候选阴影区域,用于估计后续三个阴影检测子的阈值参数,进而,基于阴影区域和对应的背景区域存在纹理一致性和色度恒常性的原理,利用三个阴影检测子并行从第一候选阴影区域中提取出较为准确的阴影区域来,接着将所有较为准确的阴影区域进行联合筛选,获得更加准确的阴影区域。因此,本申请的监控视频图像的阴影检测方法检测得到的阴影区域针对多数常见室内场景中处于运动状态的监控目标的阴影区域的检测效果显著,检测得到的阴影区域十分准确。此外,该算法可以作为独立的模块应用在监控场景下,结合背景建模或背景差分算法,在获得实时的视频帧(当前帧)、前景帧和背景帧的基础上,即可实现和应用该算法,最大限度减少阴影对目标完整性的影响,使后续去除阴影区域后得到的监控目标也比较准确、完整,更有利于对监控目标的监控。
附图说明
通过阅读参照以下附图对非限制性实施例所作的详细描述,本申请的其它特征、目的和优点将会变得更明显:
图1为本申请的一个实施例的图像的阴影检测方法的流程图;
图2为本申请的一个实施例的图像的阴影检测方法中获取第一候选阴影区域的各个步骤流程图;
图3为本申请的一个实施例的图像的阴影检测方法中改进的局部三元模式的阴影检测值的计算流程图;
图4为本申请的一个实施例的图像的阴影检测方法中改进的局部三元模式的计算值的计算流程图;以及
图5为本申请的一个实施例的图像的阴影检测方法中改进的局部三元模式的计算值的计算结果示意图。
具体实施方式
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的实施方式;相反,提供这些实施方式使得本申请将全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。在图中相同的附图标记表示相同或类似的结构,因而将省略对它们的重复描述。
依据本申请的主旨构思,本申请的一种监控视频图像的阴影检测方法包括如下步骤:从源数据中获取当前帧和背景帧;由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值;根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及 饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
下面结合附图和实施例对本申请的技术内容进行进一步地说明。
请参见图1,其示出了本申请的一个实施例的图像的阴影检测方法的流程图。具体来说,本申请的监控视频图像的阴影检测方法中主要针对应用到两个颜色空间:色调、饱和度、明度(HSV)颜色空间以及红绿蓝(RGB)颜色空间;两种纹理:梯度和局部空间模式。该监控视频图像的阴影检测方法的算法中主要思想是先提取出候选阴影区域(可参见下文的第一候选阴影区域和第二候选阴影区域等),然后从候选阴影区域中提取出阴影区域,该提取出的阴影区域较为准确。具体来说,如图1所示,在本申请的实施例中,该监控视频图像的阴影检测方法包括如下步骤:
步骤S10:从源数据中获取当前帧和背景帧。其中,源数据是指监控设备所获取的原始图像或视频数据,当前帧是指实时采集的当前图像,背景帧是通过背景建模或背景差分算法等方式从监控画面或视频中提取出的不具有监控目标的背景图像。进一步地,在本申请的优选实施例中,该所述步骤S10中还包括同时从源数据中获取前景帧的步骤,其中,前景帧是指在监控设备运行的过程中早于当前帧所在时间而记录的监控图像。
步骤S20:由所述当前帧中获取第一候选阴影区域。所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度。具体来说,该步骤主要基于阴影区域比对应的背景区域暗这一假设,这种假设在绝大多数情况下是成立,因此,可以利用该假设提取出粗糙的候选阴影区域(即上述的第一候选阴影区域),因此,获取到的第一候选阴影区域的亮度小于背景帧中对应区域的亮度。在此需要说明的是,背景帧是不具有监控目标的图像,即当前帧中除了监控目标和阴影区域以外的区域的图像均与背景帧中相同,因此,当前帧中的第一候选阴影区域与背景中的对应区域实质上是相同的位置。
进一步地,由于阴影区域有可能受到噪声的干扰。因此,步骤S20中实际获取的第一候选阴影区域包括大部分的实际阴影区域和误检为阴影区域的监控目标两个部分,若单纯地用色度暗这一假设来判断会导致误检为阴影区域的面积较大。进而,在本申请实施例中,由于发明人在对监控目标和阴影区域进行统计分析时发现,阴影区域在红绿蓝(RGB)颜色 空间内的各颜色通道的光谱频率的比值相比对应的背景区域在各颜色通道的光谱频率的比值变化较小,而监控目标在各颜色通道的光谱频率的比值相比对应的背景区域的各颜色通道值的光谱频率的比值变化较大,这一特性有助于将大部分误检为阴影区域的监控目标从检测到的阴影候选区域中区分出来。因此,请参见图2,其示出了本申请的一个实施例的图像的阴影检测方法中获取第一候选阴影区域的各个步骤流程图。具体来说,在本申请的优选实施例中,步骤S20还包括如下步骤:
步骤S201:计算所述当前帧和所述背景帧中各区域的亮度,选取所述当前帧中亮度小于所述背景帧中对应区域的亮度的区域作为第一区域。
步骤S202:计算所述第一区域与所述背景帧中对应所述第一区域的第二区域分别在红色、绿色和蓝色三个颜色通道内的光谱频率的三个第一比值以及所述前景帧中对应所述第一区域的第三区域与所述第二区域分别在红色、绿色和蓝色三个通道内的光谱频率的三个第二比值。其中,第一区域、第二区域以及第三区域实质上为图像中的同一区域。
具体来说,在所述步骤S202中,三个所述第一比值的计算方式分别为:
$\Psi_r = C_r / B_r$、$\Psi_g = C_g / B_g$、$\Psi_b = C_b / B_b$；
其中，$\Psi_r$为红色通道内的光谱频率的第一比值、$\Psi_g$为绿色通道内的光谱频率的第一比值、$\Psi_b$为蓝色通道内的光谱频率的第一比值；$C_r$为红色通道内当前帧的光谱频率、$C_g$为绿色通道内当前帧的光谱频率、$C_b$为蓝色通道内当前帧的光谱频率；$B_r$为红色通道内背景帧的光谱频率、$B_g$为绿色通道内背景帧的光谱频率、$B_b$为蓝色通道内背景帧的光谱频率。
相应地，前景帧中对应第一区域的第三区域与第二区域分别在红色、绿色和蓝色三个通道内的光谱频率的三个第二比值的计算方式与第一比值的计算方式相同，仅仅将相应当前帧参数进行替换，而背景帧的相关参数保留，例如，将$C_g$替换为绿色通道内前景帧的光谱频率，其他当前帧的参数类似替换，在此不予赘述。
步骤S203:选取所述第一比值与所述第二比值之间的差值小于第二 阈值的所述第一区域作为第一候选阴影区域。其中,第二阈值可以根据实际的需求进行设置和调整。
步骤S30:计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域。具体来说,本申请中主要通过三个阴影检测子来对阴影区域进行检测,对于每一个阴影检测子来说,都有其对应的参数阈值,但是由于监控视频中场景是多变的,若对每一个场景都需要去设置一组参数阈值会使得算法的应用受到限制,因此,需要事先预测较为准确的参数阈值。进而,在上述步骤S20的获取到第一候选阴影区域的基础上,本申请利用改进的局部三元模式(Improved Local Ternary Pattern)检测子(以下称为ILTP检测子),对提取出的所有第一候选阴影区域进行筛选,选取出准确的阴影区域(即这些阴影区域的检测标准较高,进而,选取的区域基本均为最终的阴影区域),并依据这些准确的阴影区域估测针对其他第一候选阴影区域的检测的三个阴影检测子的阈值参数(色调及饱和度检测子和梯度检测子)。需要说明的是,在此步骤中,之所以选择ILTP检测子是由于ILTP检测子较色调及饱和度(HS)检测子检测子和梯度(Gradient)检测子检测到阴影区域的准确率高且目标干扰少。
进一步地,请参见图3,其示出了本申请的一个实施例的图像的阴影检测方法中改进的局部三元模式的阴影检测值的计算流程图。具体来说,本申请改进的局部三元模式的阴影检测值的计算包括如下步骤:
步骤S301:计算所述当前帧中的第一候选阴影区域或第二候选阴影区域内所有像素点的局部三元模式的计算值。具体来说,针对上述步骤S30即为对第一候选阴影区域内的像素点进行本申请的改进的局部三元模式的计算值(ILTP计算值)。
步骤S302:计算背景帧中位置相同的每个对应像素点的局部三元模式的计算值。
步骤S303:计算当前帧中第一候选阴影区域或第二候选阴影区域中具有与背景帧中对应像素点的局部三元模式的计算值相同的像素点的数量,并将该像素点的数量作为局部三元模式的阴影检测值。具体来说,在 此步骤中,即为根据上述步骤S301和步骤S302中计算得到的每个像素点的ILTP计算值进行比对,若步骤S301中当前帧的某个像素点的ILTP计算值与步骤S302中对应的(即位置相同的)像素点的ILTP计算值相同,则可将该像素点计为1个像素点。进而,类似地计算第一候选区域中所有像素点,将符合上述条件的像素点进行累加,即可得到所述的局部三元模式的阴影检测值。
进一步地,请参见图4,其示出了本申请的一个实施例的图像的阴影检测方法中改进的局部三元模式的计算值的计算流程图。如图4所示,在上述步骤S301和S302中,局部三元模式的计算值的计算方式至少包括如下步骤:
步骤S3001:设定一噪声容忍值。
步骤S3002:将环绕所述像素点的各个邻域像素点与所述像素点的灰度值进行比较。其中,比较的结果为如下三种,即仅仅计算得到三种值。具体来说,若一个邻域像素点与像素点的灰度值的差值小于噪声容忍值,则将该邻域像素点标记为第一数值;若一个邻域像素点的灰度值大于等于述像素点的灰度值与噪声容忍值之和,则将该邻域像素点标记为第二数值;若一个邻域像素点的灰度值小于等于像素点的灰度值与噪声容忍值的差值,则将该邻域像素点标记为第三数值。
请参见图5,其示出了本申请的一个实施例的图像的阴影检测方法中改进的局部三元模式的计算值的计算结果示意图。在图5所示的实施例中,检测的像素点与其多个邻域像素点之间呈九宫格排布,像素点的周围包括环绕其设置的八个邻域像素点。图5中检测的像素点的灰度值为90、噪声容忍值t为6、第一数值为01、第二数值为10、第三数值为00。进而,按照上述步骤S3002中的比较方法,位于检测的像素点的左上角的邻域像素点标记为01、位于检测的像素点的左侧的邻域像素点标记为00、位于检测的像素点的上方的邻域像素点标记为10,类似地对周围八个邻域像素点进行标记后(可参见图5标记后的九宫格),执行步骤S3003。
步骤S3003:按照第一顺序将所有邻域像素点标记的第一数值、第二数值、第三数值组成第一数组。在图5所示的实施例中,第一顺序为由八个邻域像素点形成的九宫格中位于左上角的一个邻域像素点开始,顺时针 依次进行排列形成第一数组。由于所有邻域像素点均由第一数值01、第二数值10以及第三数值00所标记,因此,第一数组实质上即为由01、10和00组成的一串数字。如图5所示,完成步骤S3003后形成的第一数组为0110011001001000。
步骤S3004：将每个所述邻域像素点与距离该邻域像素点距离最远的另一个所述邻域像素点的灰度值进行比较。若两个所述邻域像素点的灰度值的差值小于所述噪声容忍值，则形成所述第一数值；若一个所述邻域像素点的灰度值大于等于距离该邻域像素点距离最远的另一个所述邻域像素点的灰度值与所述噪声容忍值之和，则形成所述第二数值；若一个所述邻域像素点的灰度值小于等于距离该邻域像素点距离最远的另一个所述邻域像素点的灰度值与所述噪声容忍值的差值，则形成所述第三数值。具体来说，由于现有技术中的局部三元模式的计算值仅仅是对检测的像素点与周围邻域像素点进行比较，忽略了邻域像素点之间的关联信息，而这恰恰能增强局部三元模式的表达能力。因此，在本申请中，将邻域像素点之间的关联信息也包括在内，以此提升现有的局部三元模式的计算值的表达能力，进而，可使检测到的阴影区域更为准确。进而，在此步骤中比较的方式与上述步骤S3002中相同，区别在于比较的像素点不同，步骤S3004中为多个邻域像素点之间的比较。在图5所示的实施例中，用于比较的邻域像素点分别为待检测的像素点两条对角线方向、竖直方向以及水平方向的邻域像素点之间的比较，如图5所示比较后标记在田字形的表格内，首先，该田字形的表格中位于左上角的空格内标记的数值为九宫格中位于左上角的邻域像素点与位于右下角的邻域像素点进行比较后的结果，即灰度值89与灰度值91比较，由于89与91的差值小于噪声容忍值6，因此，在田字形的表格中左上角的数值标记为第一数值01；类似地，田字形的表格中右上角的数值为九宫格中位于右上角的邻域像素点与位于左下角的邻域像素点进行比较后的结果；田字形的表格中左下角的数值为九宫格中水平方向(即位于检测的像素点的左侧和右侧)的两个邻域像素点之间进行比较后的结果；田字形的表格中右下角的数值为九宫格中竖直方向(即位于检测的像素点的上方和下方)的两个邻域像素点之间进行比较后的结果。
步骤S3005:按照第二顺序将所有形成的所述第一数值、第二数值、第三数值组成第二数组。具体来说,在图5所示实施例中,第二顺序同样为由所述田字形的表格中左上角开始,沿顺时针方向依次排列形成。进而,在此实施例中,与上述第一数组类似的,第二数组包括四个数值,可参见图5,第二数组为01100010。
步骤S3006：叠加所述第一数组和所述第二数组后作为局部三元模式的计算值。在图5所示的实施例中，即为将第二数组直接叠加至第一数组后，将该串数字作为局部三元模式的计算值（如图5所示的局部三元模式的计算值为011001100100100001100010）。图5中的局部三元模式的计算值由12个数值组成，若在RGB颜色空间内综合考虑三个颜色通道，则最终的ILTP计算值则包括36个数值。
进而,分别对当前帧中的检测的像素点和背景帧中对应的一个像素点进行局部三元模式的计算值的计算,判断上述两个像素点的局部三元模式的计算值是否相同,并且计算相同的像素点的个数(步骤S303)。该个数即为步骤S30中最终得到的一个第一候选阴影区域的局部三元模式的阴影检测值。若一个第一候选阴影区域的局部三元模式的阴影检测值大于第一阈值,则将其作为第二候选阴影区域。
需要说明的是,图5中仅仅作为一种举例,并不限于此,在实际进行检测的过程中,可以根据实际的要求来设置上述第一顺序、第二顺序以及第一数值、第二数值和第三数值等参数。并且,检测的像素点与其邻域像素点之间甚至可以不是九宫格状的,例如,在一些实施例中,邻域像素点也可以是呈圆环状环绕检测像素点,在此不予赘述。
步骤S40:计算各个第二候选阴影区域的色调及饱和度检测值和梯度检测值。具体来说,第二候选阴影区域的色调检测值为第二候选阴影区域内所有像素点与背景帧中所有对应像素点的色调值的差值的平均值;类似地,第二候选阴影区域的饱和度检测值为第二候选阴影区域内所有像素点与背景帧中所有对应像素点的饱和度值的差值的平均值。
步骤S50:根据计算得到的第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值。具体来说,由于根据上述步 骤S30,本申请的计算方式增加了邻域像素点之间的关联信息,增强了局部三元模式的表达能力,因此,获取到的第二候选阴影区域非常准确,基本均为最终的阴影区域。进而,可以根据第二候选阴影区域中计算得到的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值来估算用于检测所有第一候选阴影区域的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值。其中,估算的方式可以是将所有第二候选阴影区域的局部三元模式的阴影检测值的平均值作为局部三元模式的阴影阈值;将所有第二候选阴影区域的色调及饱和度检测值的平均值作为色调及饱和度阈值;将所有第二候选阴影区域的梯度检测值的平均值作为梯度阈值。或者也可以根据实际的需求对上述的平均值进行调整后作为最终的阈值,在此不予赘述。
由于第二候选阴影区域是利用本申请的改进的局部三元模式的阴影检测值进行检测的,因此,选出的第二候选阴影区域准确高且目标干扰少,用于确定后续所有第一候选阴影区域的各个阴影检测子的阈值参数将具有更好的代表性和准确性。
步骤S60:计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值。在此步骤中,对于局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值的计算方式与上述步骤S30和步骤S50相同。
步骤S70:选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。具体来说,在此步骤中,判断第一候选阴影区域的局部三元模式的阴影检测值是否在局部三元模式的阴影阈值范围内可以使用上述步骤S30中的方法,其中,仅仅将第一阈值替换为步骤S50中的局部三元模式的阴影阈值即可。
进一步地,色调及饱和度检测的方式为:
$\frac{1}{N}\sum\lvert H^{F}-H^{B}\rvert<\tau_h$ 且 $\frac{1}{N}\sum\lvert S^{F}-S^{B}\rvert<\tau_s$；
其中，$H^{F}$为当前帧中像素点的色调值、$H^{B}$为背景帧中像素点的色调值、$S^{F}$为当前帧中像素点的饱和度值、$S^{B}$为背景帧中像素点的饱和度值、$\tau_h$为色调阈值、$\tau_s$为饱和度阈值；
当第一候选阴影区域的色调平均值小于色调阈值且饱和度平均值小于饱和度阈值时,则第一候选阴影区域的色调及饱和度检测值在色调及饱和度阈值范围内输出值为1;否则,所述第一候选阴影区域的色调及饱和度检测值超出所述色调及饱和度阈值范围,输出值为0。其中,第一候选阴影区域的色调平均值即为第一候选阴影区域内所有像素点与背景帧中所有对应像素点的色调值的差值的平均值;类似地,第一候选阴影区域的饱和度平均值为第一候选阴影区域内所有像素点与背景帧中所有对应像素点的饱和度值的差值的平均值。根据输出值为1或者为0即可判断一个第一候选阴影区域的色调及饱和度检测值是否均在色调及饱和度阈值范围内。需要说明的是,相比传统的色调、饱和度、明度(HSV)检测子针对当前帧和背景帧的H、S、V三个通道进行计算分析来说,本申请提出的色调及饱和度检测去掉了V通道的计算,主要利用H和S通道统一表达的是色度不变性,并充分利用H和S通道的邻域信息(如邻域像素点)。色调阈值和饱和度阈值是根据第二候选阴影区域计算得到的,因此,会因场景不同而改变。单一孤立像素点、邻域信息的使用可以减少突然光照变化造成的干扰,降低漏检,提高检测的准确度。
进一步地,梯度检测的方式为:
$G=\sqrt{G_x^2+G_y^2}$、$\theta=\arctan(G_y/G_x)$；
其中，$G_x$为像素点的水平梯度值、$G_y$为像素点的垂直梯度值、$G$为像素点的梯度值、$\theta$为角度值、$G^{F}$为当前帧中的一个像素点在一个颜色通道内的梯度值、$G^{B}$为背景帧中的一个对应像素点在同一个颜色通道内的梯度值、$\tau_G$为梯度阈值、$\theta^{F}$为当前帧中的一个像素点在一个颜色通道内的角度值、$\theta^{B}$为背景帧中的一个对应像素点在同一个颜色通道内的角度值、$\tau_\theta$为角度阈值；
当所述当前帧中所有像素点与背景帧中对应像素点在红色、绿色和蓝色三个通道内的所有梯度差值的平均值小于所述梯度阈值,且所述当前帧中所有像素点与背景帧中对应像素点在红色、绿色和蓝色三个通道内的所有角度差值的平均值小于所述角度阈值时,则所述第一候选阴影区域的梯度检测值在所述梯度阈值范围内、输出值为1;否则,所述第一候选阴影区域的梯度检测值超出所述梯度阈值范围,输出值为0。根据输出值为1或者为0即可判断一个第一候选阴影区域的梯度检测值是否均在梯度阈值范围内。
进一步地,本申请还提供一种监控视频图像的阴影的去除方法,所述监控视频图像的阴影的去除方法至少包括上述图1至图5所示的监控视频图像的阴影检测方法。具体来说,在选取出阴影区域后,还包括如下步骤:
从源数据中获取前景帧;
结合所述前景帧通过中值滤波和空洞填充去除所述当前帧中的所述阴影区域。
上述监控视频图像的阴影的去除方法中由于使用了上述图1至图5所示的监控视频图像的阴影检测方法,因此,检测得到的阴影区域非常准确,加入中值滤波、空洞填充等后处理算法后可以达到阴影区域和监控目标的分离,在去除了阴影区域干扰的监控目标的形状、轮廓就比较完整和准确,为进一步识别分类等模式识别算法提供准确有效数据。
进一步地,本申请还提供一种监控视频图像的阴影检测系统,用于实现上述监控视频图像的阴影检测方法。所述监控视频图像的阴影检测系统主要包括:提取模块、第一候选阴影区域获取模块、第二候选阴影区域获取模块、第一计算模块、阈值估算模块、第二计算模块以及阴影区域选取模块。
提取模块用于从源数据中获取当前帧、背景帧或者前景帧。
第一候选阴影区域获取模块用于由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度。
第二候选阴影区域获取模块用于计算所有所述第一候选阴影区域的 局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域。
第一计算模块用于计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值。
阈值估算模块用于根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值。
第二计算模块用于计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值。
阴影区域选取模块用于选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
综上所述,本申请实施例提供的监控视频图像的阴影检测方法、监控视频图像的阴影检测系统以及使用该监控视频图像的阴影检测方法的监控视频图像的阴影的去除方法中由于先获取了第一候选阴影区域(粗糙的阴影候选区)从第一候选阴影区域中提取出少部分真实的第二候选阴影区域,用于估计后续三个阴影检测子的阈值参数,进而,基于阴影区域和对应的背景区域存在纹理一致性和色度恒常性的原理,利用三个阴影检测子并行从第一候选阴影区域中提取出较为准确的阴影区域来,接着将所有较为准确的阴影区域进行联合筛选,获得更加准确的阴影区域。因此,本申请的监控视频图像的阴影检测方法检测得到的阴影区域针对多数常见室内场景中处于运动状态的监控目标的阴影区域的检测效果显著,检测得到的阴影区域十分准确。此外,该算法可以作为独立的模块应用在监控场景下,结合背景建模或背景差分算法,在获得实时的视频帧(当前帧)、前景帧和背景帧的基础上,即可实现和应用该算法,最大限度减少阴影对目标完整性的影响,使后续去除阴影区域后得到的监控目标也比较准确、完整,更有利于对监控目标的监控。
虽然本申请已以可选实施例揭示如上,然而其并非用以限定本申请。本申请所属技术领域的技术人员,在不脱离本申请的精神和范围内,当可 作各种的更动与修改。因此,本申请的保护范围当视权利要求书所界定的范围为准。

Claims (11)

  1. 一种监控视频图像的阴影检测方法,其特征在于,所述监控视频图像的阴影检测方法包括如下步骤:
    S10:从源数据中获取当前帧和背景帧;
    S20:由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;
    S30:计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;
    S40:计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值;
    S50:根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;
    S60:计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;
    S70:选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
  2. 如权利要求1所述的监控视频图像的阴影检测方法,其特征在于,在所述步骤S10中还从源数据中获取前景帧;所述步骤S20包括如下步骤:
    S201:计算所述当前帧和所述背景帧中各区域的亮度,选取所述当前帧中亮度小于所述背景帧中对应区域的亮度的区域作为第一区域;
    S202:计算所述第一区域与所述背景帧中对应所述第一区域的第二区域分别在红色、绿色和蓝色三个颜色通道内的光谱频率的三个第一比值以及所述前景帧中对应所述第一区域的第三区域与所述第二区域分别在红色、绿色和蓝色三个通道内的光谱频率的三个第二比值;
    S203:选取所述第一比值与所述第二比值之间的差值小于第二阈值的所述第一区域作为第一候选阴影区域。
  3. 如权利要求2所述的监控视频图像的阴影检测方法,其特征在于,在所述步骤S202中,三个所述第一比值的计算方式分别为:
    $\Psi_r = C_r / B_r$、$\Psi_g = C_g / B_g$、$\Psi_b = C_b / B_b$；
    其中，$\Psi_r$为红色通道内的光谱频率的第一比值、$\Psi_g$为绿色通道内的光谱频率的第一比值、$\Psi_b$为蓝色通道内的光谱频率的第一比值；$C_r$为红色通道内当前帧的光谱频率、$C_g$为绿色通道内当前帧的光谱频率、$C_b$为蓝色通道内当前帧的光谱频率；$B_r$为红色通道内背景帧的光谱频率、$B_g$为绿色通道内背景帧的光谱频率、$B_b$为蓝色通道内背景帧的光谱频率。
  4. 如权利要求1所述的监控视频图像的阴影检测方法,其特征在于,所述局部三元模式的阴影检测值的计算包括如下步骤:
    计算所述当前帧中的所述第一候选阴影区域或所述第二候选阴影区域内所有像素点的局部三元模式的计算值;
    计算所述背景帧中位置相同的每个对应像素点的局部三元模式的计算值;
    计算所述当前帧中所述第一候选阴影区域或所述第二候选阴影区域中具有与所述背景帧中所述对应像素点的局部三元模式的计算值相同的所述像素点的数量,并将该像素点的数量作为所述局部三元模式的阴影检测值。
  5. 如权利要求4所述的监控视频图像的阴影检测方法,其特征在于,所述局部三元模式的计算值的计算至少包括如下步骤:
    设定一噪声容忍值;
    将环绕所述像素点的各个邻域像素点与所述像素点的灰度值进行比较;
    若一个所述邻域像素点与所述像素点的灰度值的差值小于所述噪声容忍值,则将该邻域像素点标记为第一数值;
    若一个所述邻域像素点的灰度值大于等于所述像素点的灰度值与所述噪声容忍值之和,则将该邻域像素点标记为第二数值;
    若一个所述邻域像素点的灰度值小于等于所述像素点的灰度值与所述噪声容忍值的差值,则将该邻域像素点标记为第三数值;
    按照第一顺序将所有所述邻域像素点标记的第一数值、第二数值、第 三数值组成第一数组;
    将每个所述邻域像素点与距离该邻域像素点距离最远的另一个所述邻域像素点的灰度值进行比较;
    若两个所述邻域像素点的灰度值的差值小于所述噪声容忍值,则形成所述第一数值;
    若一个所述邻域像素点的灰度值大于等于距离该邻域像素点距离最远的另一个所述邻域像素点的灰度值与所述噪声容忍值之和，则形成所述第二数值；
    若一个所述邻域像素点的灰度值小于等于距离该邻域像素点距离最远的另一个所述邻域像素点的灰度值与所述噪声容忍值的差值,则形成所述第三数值;
    按照第二顺序将所有形成的所述第一数值、第二数值、第三数值组成第二数组;
    叠加所述第一数组和所述第二数组后形成作为所述局部三元模式的计算值。
  6. 如权利要求5所述的监控视频图像的阴影检测方法,其特征在于,所述像素点与多个所述邻域像素点之间呈九宫格排布,每个像素点的周围包括环绕其设置的八个所述邻域像素点。
  7. 如权利要求1所述的监控视频图像的阴影检测方法,其特征在于,所述色调及饱和度检测的方式为:
    $\frac{1}{N}\sum\lvert H^{F}-H^{B}\rvert<\tau_h$ 且 $\frac{1}{N}\sum\lvert S^{F}-S^{B}\rvert<\tau_s$；
    其中，$H^{F}$为当前帧中像素点的色调值、$H^{B}$为背景帧中像素点的色调值、$S^{F}$为当前帧中像素点的饱和度值、$S^{B}$为背景帧中像素点的饱和度值、$\tau_h$为色调阈值、$\tau_s$为饱和度阈值；
    当所述第一候选阴影区域的色调平均值小于所述色调阈值且饱和度平均值小于所述饱和度阈值时,则所述第一候选阴影区域的色调及饱和度检测值在所述色调及饱和度阈值范围内输出值为1;否则,所述第一候选阴影区域的色调及饱和度检测值超出所述色调及饱和度阈值范围,输出值为0。
  8. 如权利要求1所述的监控视频图像的阴影检测方法,其特征在于,所述梯度检测的方式为:
    $G=\sqrt{G_x^2+G_y^2}$、$\theta=\arctan(G_y/G_x)$；
    其中，$G_x$为像素点的水平梯度值、$G_y$为像素点的垂直梯度值、$G$为像素点的梯度值、$\theta$为角度值、$G^{F}$为当前帧中的一个像素点在一个颜色通道内的梯度值、$G^{B}$为背景帧中的一个对应像素点在同一个颜色通道内的梯度值、$\tau_G$为梯度阈值、$\theta^{F}$为当前帧中的一个像素点在一个颜色通道内的角度值、$\theta^{B}$为背景帧中的一个对应像素点在同一个颜色通道内的角度值、$\tau_\theta$为角度阈值；
    当所述当前帧中所有像素点与背景帧中对应像素点在红色、绿色和蓝色三个通道内的所有梯度差值的平均值小于所述梯度阈值,且所述当前帧中所有像素点与背景帧中对应像素点在红色、绿色和蓝色三个通道内的所有角度差值的平均值小于所述角度阈值时,则所述第一候选阴影区域的梯度检测值在所述梯度阈值范围内、输出值为1;否则,所述第一候选阴影区域的梯度检测值超出所述梯度阈值范围,输出值为0。
  9. 一种监控视频图像的阴影的去除方法,其特征在于,所述监控视频图像的阴影的去除方法至少包括实现监控视频图像的阴影检测方法的如下步骤:
    S10:从源数据中获取当前帧和背景帧;
    S20:由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;
    S30:计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;
    S40:计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检 测值;
    S50:根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;
    S60:计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;
    S70:选取所述局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
  10. 如权利要求9所述的监控视频图像的阴影的去除方法,其特征在于,在选取出阴影区域后,还包括如下步骤:
    从源数据中获取前景帧;
    结合所述前景帧通过中值滤波和空洞填充去除所述当前帧中的所述阴影区域。
  11. 一种监控视频图像的阴影检测系统,其特征在于,所述监控视频图像的阴影检测系统包括:
    提取模块,用于从源数据中获取当前帧、背景帧或者前景帧;
    第一候选阴影区域获取模块,用于由所述当前帧中获取第一候选阴影区域,所述第一候选阴影区域的亮度小于所述背景帧中对应区域的亮度;
    第二候选阴影区域获取模块,用于计算所有所述第一候选阴影区域的局部三元模式的阴影检测值,选取局部三元模式的阴影检测值大于第一阈值的第一候选阴影区域作为第二候选阴影区域;
    第一计算模块,用于计算各个所述第二候选阴影区域的色调及饱和度检测值和梯度检测值;
    阈值估算模块,用于根据计算得到的所述第二候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值估算对应的局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值;
    第二计算模块,用于计算各个所述第一候选阴影区域的局部三元模式的阴影检测值、色调及饱和度检测值和梯度检测值;
    阴影区域选取模块,用于选取所述局部三元模式的阴影检测值、色调 及饱和度检测值和梯度检测值均在所述局部三元模式的阴影阈值、色调及饱和度阈值和梯度阈值范围内的所述第一候选阴影区域作为阴影区域。
PCT/CN2018/110701 2017-10-20 2018-10-17 监控视频图像的阴影检测方法及其系统、阴影去除方法 WO2019076326A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB2007386.2A GB2583198B (en) 2017-10-20 2018-10-17 Shadow detection method and system for monitoring video images, and shadow removal method
DE112018004661.3T DE112018004661T5 (de) 2017-10-20 2018-10-17 Schattenerkennungsverfahren für ein Überwachungsvideobild, System davon undSchattenentfernungsverfahren
US16/852,597 US20200250840A1 (en) 2017-10-20 2020-04-20 Shadow detection method and system for surveillance video image, and shadow removing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710986529.9A CN107767390B (zh) 2017-10-20 2017-10-20 监控视频图像的阴影检测方法及其系统、阴影去除方法
CN201710986529.9 2017-10-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/852,597 Continuation US20200250840A1 (en) 2017-10-20 2020-04-20 Shadow detection method and system for surveillance video image, and shadow removing method

Publications (1)

Publication Number Publication Date
WO2019076326A1 true WO2019076326A1 (zh) 2019-04-25

Family

ID=61269788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/110701 WO2019076326A1 (zh) 2017-10-20 2018-10-17 监控视频图像的阴影检测方法及其系统、阴影去除方法

Country Status (5)

Country Link
US (1) US20200250840A1 (zh)
CN (1) CN107767390B (zh)
DE (1) DE112018004661T5 (zh)
GB (1) GB2583198B (zh)
WO (1) WO2019076326A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115351426A (zh) * 2022-08-11 2022-11-18 莆田市雷腾激光数控设备有限公司 一种鞋底激光打标方法及系统

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN107767390B (zh) * 2017-10-20 2019-05-28 苏州科达科技股份有限公司 监控视频图像的阴影检测方法及其系统、阴影去除方法
CN109068099B (zh) * 2018-09-05 2020-12-01 济南大学 基于视频监控的虚拟电子围栏监控方法及系统
CN109463894A (zh) * 2018-12-27 2019-03-15 蒋梦兰 配置半月型刷头的全防水式牙刷
CN113628153A (zh) * 2020-04-22 2021-11-09 北京京东乾石科技有限公司 阴影区域检测的方法和装置
CN113111866B (zh) * 2021-06-15 2021-10-26 深圳市图元科技有限公司 一种基于视频分析的智能监控管理系统及方法
CN113870237B (zh) * 2021-10-09 2024-03-08 西北工业大学 一种基于水平扩散的复合材料图像阴影检测方法
CN114187219B (zh) * 2021-12-06 2024-06-25 广西科技大学 基于红绿蓝二重差分的移动目标阴影实时消除方法
CN117152167B (zh) * 2023-10-31 2024-03-01 海信集团控股股份有限公司 一种目标移除、基于分割大模型的目标移除方法及设备

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105528794A (zh) * 2016-01-15 2016-04-27 上海应用技术学院 基于混合高斯模型与超像素分割的运动目标检测方法
US20160140397A1 (en) * 2012-01-17 2016-05-19 Avigilon Fortress Corporation System and method for video content analysis using depth sensing
CN107220949A (zh) * 2017-05-27 2017-09-29 安徽大学 公路监控视频中运动车辆阴影的自适应消除方法
CN107230188A (zh) * 2017-04-19 2017-10-03 湖北工业大学 一种视频运动阴影消除的方法
CN107767390A (zh) * 2017-10-20 2018-03-06 苏州科达科技股份有限公司 监控视频图像的阴影检测方法及其系统、阴影去除方法

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8666117B2 (en) * 2012-04-06 2014-03-04 Xerox Corporation Video-based system and method for detecting exclusion zone infractions
CN107220943A (zh) * 2017-04-02 2017-09-29 南京大学 融合区域纹理梯度的船舶阴影去除方法
CN107146210A (zh) * 2017-05-05 2017-09-08 南京大学 一种基于图像处理的检测去除阴影方法

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20160140397A1 (en) * 2012-01-17 2016-05-19 Avigilon Fortress Corporation System and method for video content analysis using depth sensing
CN105528794A (zh) * 2016-01-15 2016-04-27 上海应用技术学院 基于混合高斯模型与超像素分割的运动目标检测方法
CN107230188A (zh) * 2017-04-19 2017-10-03 湖北工业大学 一种视频运动阴影消除的方法
CN107220949A (zh) * 2017-05-27 2017-09-29 安徽大学 公路监控视频中运动车辆阴影的自适应消除方法
CN107767390A (zh) * 2017-10-20 2018-03-06 苏州科达科技股份有限公司 监控视频图像的阴影检测方法及其系统、阴影去除方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115351426A (zh) * 2022-08-11 2022-11-18 莆田市雷腾激光数控设备有限公司 Shoe sole laser marking method and system

Also Published As

Publication number Publication date
GB2583198A (en) 2020-10-21
CN107767390B (zh) 2019-05-28
GB202007386D0 (en) 2020-07-01
DE112018004661T5 (de) 2020-06-10
US20200250840A1 (en) 2020-08-06
CN107767390A (zh) 2018-03-06
GB2583198B (en) 2022-04-06

Similar Documents

Publication Publication Date Title
WO2019076326A1 (zh) Shadow detection method and system for surveillance video images, and shadow removal method
US20100104175A1 (en) Integrated image processor
CN110322522B (zh) Vehicle color recognition method based on cropping of the target recognition region
JP4767240B2 (ja) Method and apparatus for detecting video boundaries, and computer-readable recording medium embodying the same
WO2015070723A1 (zh) Eye image processing method and apparatus
JP2007097178A (ja) Red-eye removal method using face detection
CN112887693B (zh) Image purple-fringing elimination method, device, and storage medium
CN110930321A (zh) Blue/green-screen digital image matting method with automatic target region selection
Chen et al. Robust license plate detection in nighttime scenes using multiple intensity IR-illuminator
KR101875891B1 (ko) Face detection apparatus and method using multiple detection schemes
CN106558044B (zh) Resolution measurement method for a camera module
CN111369529B (zh) Method and system for detecting lost and abandoned objects
Ye et al. Removing shadows from high-resolution urban aerial images based on color constancy
US20110069891A1 (en) Edge detection apparatus and computing circuit employed in edge detection apparatus
CN113744326B (zh) Fire detection method based on seed region growing in the YCrCb color space
US9235882B2 (en) Method for detecting existence of dust spots in digital images based on locally adaptive thresholding
JP2005165387A (ja) Method and apparatus for detecting streak defects on a screen, and display device
CN105957067B (zh) Color image edge detection method based on color difference
TWI530913B (zh) Moving object detection system and method
CN109493361B (zh) Fire smoke image segmentation method
KR20070027929A (ko) Shadow removal method using object symmetry and distance vectors
Chondagar et al. A review: Shadow detection and removal
CN114882401A (zh) Flame detection method and system based on the RGB-HSI model and initial flame growth characteristics
Sebastian et al. Tracking using normalized cross correlation and color space
Ji et al. Moving cast shadow detection using joint color and texture features based on direction and distance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18868201

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 202007386

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20181017

122 Ep: pct application non-entry in european phase

Ref document number: 18868201

Country of ref document: EP

Kind code of ref document: A1