CN105261021A - Method and apparatus for removing shadows from foreground detection results - Google Patents

Method and apparatus for removing shadows from foreground detection results

Info

Publication number
CN105261021A
CN105261021A (application CN201510679083.6A)
Authority
CN
China
Prior art keywords
shadow
pixel
split position
statistical feature
feature value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510679083.6A
Other languages
Chinese (zh)
Other versions
CN105261021B (en)
Inventor
李婵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN201510679083.6A
Publication of CN105261021A
Application granted
Publication of CN105261021B
Legal status: Active (granted)

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for removing shadows from foreground detection results, comprising: obtaining a gray-scale map and a foreground detection map from an input original image, and obtaining the shadow candidate regions in the gray-scale map; removing the background from the gray-scale map to obtain a foreground gray-scale map; calculating, from the foreground gray-scale map, a texture statistical feature value for each pixel row in the horizontal direction and for each pixel column in the vertical direction; finding a split position using these texture statistical feature values; and calculating, for each side of the split position in the gray-scale map, the ratio of the shadow candidate region to the foreground pixels, then removing the side with the larger ratio as shadow. The invention also discloses a corresponding apparatus. The method and apparatus can effectively reduce the shadow false-detection rate in foreground detection results, avoid falsely detecting non-shadow regions as shadow regions, and increase shadow identification accuracy, thereby helping to improve the efficiency of subsequent processing.

Description

Method and device for removing shadows from foreground detection results
Technical field
The present invention relates to the field of image processing, and in particular to a method and device for removing shadows from foreground detection results.
Background art
Because a shadow exhibits the same motion characteristics as the moving object that casts it, it is usually detected as foreground by mistake. Shadows in video images are therefore one of the key factors degrading moving-object detection, and shadow detection and elimination has become an important research topic in motion detection. If a shadow is merged with the moving object, the geometric features of the target are distorted; if the shadow is separated from the moving target, it is easily detected as a new, spurious target. Both kinds of error strongly affect higher-level tasks such as moving-object classification, tracking, and behavior analysis, so eliminating shadows is significant in moving-target detection. At present there is no simple and effective way to eliminate shadows from foreground targets; it remains one of the difficult problems in the imaging field.
Most current shadow detection and removal methods are built on features of the shadow itself, such as its gray-level features, chromaticity features, and texture statistical features. One prior-art shadow removal method proceeds as follows: the image is converted to gray scale and foreground detection is performed to reject the background; horizontal and vertical projections of the binarized foreground detection map are computed, yielding a horizontal projection histogram and a vertical projection histogram; a threshold is set, and the parts falling below the threshold are judged to be shadow and removed.
There are also pixel-based shadow removal methods, which set thresholds on features such as shadow brightness and judge a pixel to be shadow, and remove it, when the pixel exceeds the corresponding threshold.
Because every scene is different, the appropriate thresholds or threshold ranges differ as well; no universally applicable threshold exists across scenes, which easily leads to shadow misjudgments. Whether pixel-based methods or simple projection methods are used, parts of the shadow can indeed be detected under some threshold, but local regions whose characteristics resemble shadow, such as vehicle windows, pedestrians, motor vehicles, or bicycles, are easily misdetected as shadow and removed along with it, causing problems such as holes in the foreground and segmentation errors.
The prior art therefore suffers from the problem that non-shadow regions are falsely detected as shadow regions.
Summary of the invention
To solve the prior art's problem of a high false-detection rate for shadow regions, the invention provides a method that reduces the probability of regions being falsely detected as shadow.
A method for removing shadows from foreground detection results comprises: obtaining a gray-scale map and a foreground detection map from an input original image, and obtaining the shadow candidate regions in the image foreground; the method further comprises:
removing the background from the gray-scale map to obtain a foreground gray-scale map, and calculating, from the foreground gray-scale map, the texture statistical feature value of each pixel row in the horizontal direction and of each pixel column in the vertical direction;
using the pixel-row texture statistical feature values to find a split position in the horizontal direction, and the pixel-column texture statistical feature values to find a split position in the vertical direction, the split position being the pixel position that maximizes the difference between the moving target and the shadow;
calculating, for each of the horizontal and vertical directions, the ratio of the shadow candidate region on each side of the split position to the corresponding foreground part, and removing the side of the split position with the larger ratio as shadow.
From the projections based on texture statistical features, the method finds the split point that maximizes the difference between the moving target and the shadow, and combines this segmentation result with the shadow candidate regions to narrow down the detection range of the target's shadow. Regions that merely resemble shadow are thus prevented from being treated as shadow and removed, improving the accuracy of shadow detection. Because the method itself segments the moving object and the shadow part accurately, there is no need to set strict per-scene thresholds when selecting candidate shadow regions; a single threshold applicable to most scenes suffices, which improves the method's general applicability. A pixel position denotes a pixel-row position in the horizontal direction and a pixel-column position in the vertical direction. The gray-scale map contains the foreground's texture statistical features, but it also contains those of the background; the background features are useless for projection and would interfere with the subsequent segmentation. By multiplying the gray-scale map with the foreground detection map, the background is removed from the gray-scale map, so the foreground's texture statistical features can be obtained directly from the resulting foreground gray-scale map without background interference, improving segmentation accuracy.
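As an illustrative sketch only (not part of the patent text), this background-removal step can be written in a few lines of NumPy; the function name, array names, and the 0/nonzero encoding of the foreground detection map are assumptions:

```python
import numpy as np

def foreground_gray(gray: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """Multiply the gray-scale map by the binarized foreground detection
    map (assumed 0 = background, nonzero = foreground): background pixels
    become 0, foreground pixels keep their gray value."""
    return gray * (fg_mask > 0).astype(gray.dtype)
```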
The image from which the shadow candidate regions are obtained may be the foreground detection map or the original image. For example, if the candidate regions are obtained by projecting the foreground detection map in the horizontal and vertical directions, they lie in the foreground detection map; if they are obtained from chromaticity and brightness combined with saturation, they lie in the original image. Depending on the subsequent processing, the shadow candidate regions may also lie in the gray-scale map.
Because the split position is not necessarily the center, judging each part by the number of pixel rows or columns classified as shadow would bias the result: the part with the larger area would tend to contain more shadow-classified rows, columns, or pixels than the smaller part, so the larger part would always be rejected as the shadow region. To prevent this, the likelihood that a part contains the shadow is judged by the proportion of that part's foreground occupied by shadow candidate regions. Shadow candidate regions obtained by different computations take different forms: those obtained by projection consist of pixel rows or columns, while those obtained from chromaticity, brightness, and saturation consist of pixels. If the candidate region consists of pixel rows or columns, the ratio of candidate rows or columns to foreground rows or columns within each divided part is computed; if it consists of pixels, the ratio of candidate pixels to foreground pixels within each divided part is computed. The entire part with the higher ratio is rejected as the shadow region.
In the horizontal direction each pixel position represents one pixel row, and in the vertical direction each pixel position represents one pixel column.
Furthermore, the calculated texture statistical feature values comprise at least one of: gradient, variance, Sobel operator, entropy, Laplace operator, and LBP feature value.
When one of these is adopted, the selected feature is used to calculate the texture statistical feature value of each pixel row in the horizontal direction and of each pixel column in the vertical direction. When several are adopted, each selected feature is used to calculate a texture statistical feature value for each pixel row in the horizontal direction and for each pixel column in the vertical direction.
Furthermore, when the texture statistical feature value is the variance, then for a foreground gray-scale map with p rows and q columns the feature values in the horizontal and vertical directions are obtained as follows:

In the horizontal direction, the variance HorizontalVariance(i) of the i-th pixel row is the variance of the gray values of all non-zero pixels in the i-th row and the r adjacent rows on each side, where r is a specified value:

$$\mathrm{HorizontalVariance}(i)=\frac{1}{m}\sum_{k=i-r}^{i+r}\sum_{j}\bigl(FG'(k,j)-\overline{FG'}\bigr)^{2}$$

where FG'(i, j) denotes the gray value of the pixel in row i, column j of the foreground gray-scale map, j ranges from 1 to q, the row index ranges from i-r to i+r, $\overline{FG'}$ is the mean gray value of the pixels in the 2r+1 rows from i-r to i+r, and m is the total number of pixels in the 2r+1 rows;
In the vertical direction, the variance VerticalVariance(j) of the j-th pixel column is the variance of all non-zero pixels in the j-th column and the c adjacent columns on each side:

$$\mathrm{VerticalVariance}(j)=\frac{1}{m}\sum_{l=j-c}^{j+c}\sum_{i}\bigl(FG'(i,l)-\overline{FG'}\bigr)^{2}$$

where FG'(i, j) denotes the gray value of the pixel in row i, column j of the foreground gray-scale map, the column index ranges from j-c to j+c, i ranges from 1 to p, $\overline{FG'}$ is the mean gray value of the pixels in the 2c+1 columns from j-c to j+c, and m is the total number of pixels in the 2c+1 columns.
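A minimal sketch of this variance computation, assuming a NumPy foreground gray-scale map in which background pixels are 0; per the text, the statistics here are taken over the non-zero pixels of the 2r+1 rows (or 2c+1 columns):

```python
import numpy as np

def horizontal_variance(fg_gray: np.ndarray, i: int, r: int = 1) -> float:
    """Variance of the non-zero gray values in rows i-r .. i+r
    (row indices clipped to the image; returns 0 if the band is empty)."""
    band = fg_gray[max(i - r, 0):min(i + r + 1, fg_gray.shape[0])]
    vals = band[band > 0].astype(np.float64)
    return float(vals.var()) if vals.size else 0.0

def vertical_variance(fg_gray: np.ndarray, j: int, c: int = 1) -> float:
    """Variance of the non-zero gray values in columns j-c .. j+c."""
    band = fg_gray[:, max(j - c, 0):min(j + c + 1, fg_gray.shape[1])]
    vals = band[band > 0].astype(np.float64)
    return float(vals.var()) if vals.size else 0.0
```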
The variance of a single pixel row or column may be anomalous because of noise or other causes and fail to exhibit normal statistical behavior, and the subsequent split-position search might then mistake that pixel position for the split position. To avoid such errors, the information of neighboring rows or columns is taken into account, producing an effect similar to filtering, so the computed variance has good statistical properties. Likewise, when a statistically derived value such as entropy is used as the texture statistical feature value, the information of neighboring pixel rows or columns must also be taken into account.
Furthermore, when the texture statistical feature value is the Sobel operator, then for the pixel in row i, column j of the foreground gray-scale map:

the horizontal Sobel result is:

S(i,j) = FG'(i-1,j-1) + 2·FG'(i-1,j) + FG'(i-1,j+1) - FG'(i+1,j-1) - 2·FG'(i+1,j) - FG'(i+1,j+1)

the vertical Sobel result is:

S(i,j) = FG'(i-1,j+1) + 2·FG'(i,j+1) + FG'(i+1,j+1) - FG'(i-1,j-1) - 2·FG'(i,j-1) - FG'(i+1,j-1)

where FG' is the gray value of a pixel. In the horizontal direction, the texture statistical feature value of each pixel row is the maximum Sobel value among the pixels in that row; in the vertical direction, the texture statistical feature value of each pixel column is the maximum Sobel value among the pixels in that column.
When the Sobel operator is used as the feature value, neighboring-pixel information is taken into account. When other local texture statistics are used for projection, such as the gradient, the Laplace operator, or the LBP feature value, the projection feature value of a pixel row or column is likewise the maximum of the per-pixel texture statistics computed in that row or column.
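For illustration only, the per-row and per-column Sobel feature values might be computed as below; this is a sketch under the assumption that border rows and columns, where the operator is undefined, are skipped:

```python
import numpy as np

def sobel_row_features(fg_gray: np.ndarray) -> np.ndarray:
    """Max of the horizontal Sobel response over each interior pixel row."""
    g = fg_gray.astype(np.float64)
    s = (g[:-2, :-2] + 2 * g[:-2, 1:-1] + g[:-2, 2:]     # row i-1 terms
         - g[2:, :-2] - 2 * g[2:, 1:-1] - g[2:, 2:])     # row i+1 terms
    return s.max(axis=1)

def sobel_col_features(fg_gray: np.ndarray) -> np.ndarray:
    """Max of the vertical Sobel response over each interior pixel column."""
    g = fg_gray.astype(np.float64)
    s = (g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:]        # column j+1 terms
         - g[:-2, :-2] - 2 * g[1:-1, :-2] - g[2:, :-2])  # column j-1 terms
    return s.max(axis=0)
```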
Furthermore, the method for finding the split position in the horizontal and vertical directions comprises, for the n-th pixel position in a single direction, calculating:

$$\sigma_1=\sum_{i=1}^{n}\left|H(i)-\overline{H}_1\right|$$

$$\sigma_2=\sum_{i=n}^{k}\left|H(i)-\overline{H}_2\right|$$

where k is the total length in pixel rows or pixel columns, H(i) is the texture statistical feature value of the i-th pixel position, $\overline{H}_1$ is the mean feature value of all pixel positions up to the n-th, and $\overline{H}_2$ is the mean feature value of all pixel positions from the n-th onward; the n-th pixel position is the n-th pixel row in the horizontal direction and the n-th pixel column in the vertical direction;

and taking the pixel position at which σ₁ + σ₂ attains its minimum as the split position where the moving target and the shadow differ most.
σ₁ is the sum of the deviations of the feature values at pixel positions before the n-th from their mean, and σ₂ is the sum of the deviations of the feature values at pixel positions after the n-th from their mean. When σ₁ + σ₂ is minimal, the feature values on each side of the current n are as close to their own side's mean as possible, which indicates that the moving target and the shadow differ most at this pixel position, so that it separates the moving target from the shadow.
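A direct sketch of this search, assuming H is a 1-D NumPy array of texture statistical feature values (one per pixel row or column); the function name and 1-based return convention are illustrative choices:

```python
import numpy as np

def split_position(H: np.ndarray) -> int:
    """Return the 1-based position n minimizing sigma1 + sigma2, where
    sigma1 sums |H(i) - mean| over i = 1..n and sigma2 over i = n..k."""
    k = len(H)
    best_n, best_cost = 1, np.inf
    for n in range(1, k + 1):
        left, right = H[:n], H[n - 1:]
        cost = (np.abs(left - left.mean()).sum()
                + np.abs(right - right.mean()).sum())
        if cost < best_cost:
            best_cost, best_n = cost, n
    return best_n
```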
Furthermore, the method for the shade candidate regions obtained in display foreground comprises following one:
Respectively horizontal and vertical projection carried out to the prospect in image and limits threshold value, obtaining being less than the location of pixels of described threshold value region corresponding in the picture as described shade candidate regions;
The threshold value of predetermined luminance, colourity and saturation degree respectively, and in image, brightness, colourity and saturation degree will be exceeded simultaneously the pixel of threshold value as described shade candidate regions.
When horizontal and vertical projection of the foreground detection map with a threshold is adopted, the shadow candidate regions are the pixel rows whose value in the horizontal projection histogram is below the threshold and the pixel columns whose value in the vertical projection histogram is below the threshold.
When the preset brightness, chromaticity, and saturation threshold method is adopted, the shadow candidate region consists of the individual pixels so obtained.
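As a sketch of the first, projection-based method, assuming a 0/1 binary foreground detection map and a preset threshold (the function name and threshold semantics are assumptions):

```python
import numpy as np

def projection_candidates(fg_mask: np.ndarray, thresh: float):
    """Pixel rows and columns whose foreground projection (number of
    foreground pixels in the row/column) falls below the threshold are
    taken as shadow candidate rows/columns."""
    cand_rows = np.where(fg_mask.sum(axis=1) < thresh)[0]
    cand_cols = np.where(fg_mask.sum(axis=0) < thresh)[0]
    return cand_rows, cand_cols
```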
Furthermore, when several texture statistical feature values are calculated, several corresponding split positions are obtained in each of the horizontal and vertical directions. For a single direction, after the ratio of shadow candidate region to foreground pixels is calculated on both sides of each split position, the split position whose two sides differ most in this ratio is first selected as the final split position, and then the side of the final split position with the larger ratio is removed as shadow.
Combining several texture statistical feature values improves the accuracy of the shadow-region judgment. Different split positions divide the image into two parts in which the shadow candidate regions occupy different proportions of the foreground; a larger difference between the two proportions indicates higher discrimination between shadow and non-shadow, so the split position with the larger difference can be used as the final split position to reject the shadow more accurately.
To implement the method, the invention also provides a corresponding device that reduces the shadow false-detection rate in images.
A device for removing shadows from foreground detection results comprises: a preprocessing module that obtains a gray-scale map and a foreground detection map from an input original image, and a shadow-candidate acquisition module that obtains the shadow candidate regions in the image foreground; the device further comprises:
a texture statistical feature value calculation module that removes the background from the gray-scale map to obtain a foreground gray-scale map and calculates texture statistical feature values in the horizontal and vertical directions from the foreground gray-scale map;
a split-position search module that uses the horizontal-direction feature values to find the split position in the horizontal direction and the vertical-direction feature values to find the split position in the vertical direction, each split position maximizing the difference between the moving target and the shadow in its direction;
a shadow removal module that, for each of the horizontal and vertical directions, calculates the ratio of the shadow candidate region on each side of the split position to the corresponding foreground part, and removes the side of the split position with the larger ratio as shadow.
The rationale for these modules is the same as described above for the method: the split point that maximizes the difference between the moving target and the shadow is found from the texture-feature projections, and combining it with the shadow candidate regions narrows the detection range of the target's shadow, so regions that merely resemble shadow are not removed and no strict per-scene thresholds are needed. As before, multiplying the gray-scale map by the foreground detection map removes the background's texture statistical features, which would otherwise interfere with segmentation; a pixel position denotes a pixel row in the horizontal direction and a pixel column in the vertical direction; and the shadow candidate regions may lie in the foreground detection map, the original image, or the gray-scale map, depending on how they are obtained. The shadow side is judged by the proportion of the foreground occupied by shadow candidate regions within each divided part (rows or columns for projection-based candidates, pixels for brightness, chromaticity, and saturation based candidates) rather than by raw counts, so that the larger part is not systematically rejected.
Furthermore, the texture statistical feature values calculated by the texture statistical feature value calculation module comprise at least one of: gradient, variance, Sobel operator, entropy, Laplace operator, and LBP feature value; one or several features are used in the same way as in the method described above.
Furthermore, the split-position search module finds the split positions in the horizontal and vertical directions by the computation given above for the method: for the n-th pixel position in a single direction it calculates σ₁ and σ₂ as defined there, and takes the pixel position at which σ₁ + σ₂ attains its minimum as the split position where the moving target and the shadow differ most.
Furthermore, when the texture statistical feature value calculation module calculates several feature values, the split-position search module correspondingly obtains several split positions in each of the horizontal and vertical directions. For a single direction, the shadow removal module first calculates the candidate-to-foreground ratio on both sides of each split position, selects the split position whose two sides differ most in this ratio as the final split position (a larger difference indicating higher discrimination between shadow and non-shadow), and then removes the side of the final split position with the larger ratio as shadow.
The outstanding effects of the method and device of the invention are: they effectively reduce the shadow false-detection rate in foreground detection results, avoid falsely detecting non-shadow regions as shadow regions, and improve shadow identification accuracy, thereby helping to raise the efficiency of subsequent processing; no scene-specific thresholds need to be set, improving the method's general applicability; and combining several texture statistical feature values improves the accuracy of the shadow-region judgment.
Brief description of the drawings
Fig. 1 is a flowchart of the method of one embodiment of the invention;
Fig. 2 is the vertical-direction texture projection histogram of the present example;
Fig. 3 is a schematic diagram of the result obtained by the split-position search of the present example.
Detailed description of the embodiments
The invention is now explained in detail with reference to the embodiments and the accompanying drawings.
Based on an analysis of shadow conditions in actual use, the invention combines texture information with geometric information in a simple and effective way to remove shadows, segment correctly, and improve the accuracy of later recognition and tracking. In one embodiment, the device for removing shadows from foreground detection results comprises: a preprocessing module, a texture statistical feature value calculation module, a split-position search module, a shadow-candidate acquisition module, and a shadow removal module.
The present example uses this device to remove shadows from a foreground detection result by the method shown in Fig. 1, which comprises the following steps:
A. The preprocessing module obtains a gray-scale map and a foreground detection map from the input original image.
In the present example, the preprocessing module converts the input original image to gray scale to obtain the gray-scale map, and performs foreground detection on the gray-scale map to obtain a binarized foreground detection map.
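For illustration, step A could be sketched with OpenCV as below. The patent does not prescribe a particular foreground detector, so the MOG2 background subtractor here is an assumption standing in for whatever detector is actually used:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def preprocess(bgr_frame):
    """Step A sketch: gray-scale map plus binarized foreground detection map."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    fg = subtractor.apply(gray)                     # 0..255 foreground mask
    _, fg_bin = cv2.threshold(fg, 127, 1, cv2.THRESH_BINARY)
    return gray, fg_bin
```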
B. The texture statistical feature value calculation module computes the dot product of the gray-scale map and the foreground detection map to obtain the foreground gray-scale map, and from it calculates the texture statistical feature value of each pixel row in the horizontal direction and of each pixel column in the vertical direction.
The subsequent texture-feature calculation needs gray-level information, and the foreground detection map is a binary image without it. The gray-scale map, on the other hand, contains the texture statistical features of the background as well as those of the foreground, and the background features are useless for projection and would interfere with the subsequent segmentation. The roughly segmented foreground detection map is therefore used as a mask image: the corresponding regions of the gray-scale map are extracted by taking the dot product of the foreground detection map and the gray-scale map at identical coordinates, yielding the foreground gray-scale map, i.e. the gray-scale map with the background removed.
Image texture can be described in several ways, e.g. by gradient, variance, Sobel, or entropy, all of which characterize the richness of image detail. One of these descriptions may be used alone, or several may be used and their results combined, trading performance against effect. The present example uses the variance of each pixel row and each pixel column within a specified range: the variance of a pixel row is represented by the variance over that row and the r adjacent rows on each side, and the variance of a pixel column by the variance over that column and the c adjacent columns on each side. In the present example r = c = 1, i.e. the variance for pixel row i is the gray-value variance over rows i-1, i, i+1, and the variance for pixel column j is the gray-value variance over columns j-1, j, j+1.
In the horizontal direction, the variance HorizontalVariance(i) of the i-th pixel row is the gray-value variance over all non-zero pixels in rows i-1, i, i+1:

$$\mathrm{HorizontalVariance}(i)=\frac{1}{m}\sum_{k=i-1}^{i+1}\sum_{j}\bigl(FG'(k,j)-\overline{FG'}\bigr)^{2}$$

where FG'(i, j) is the gray value of the pixel in row i, column j of the foreground gray-scale map, j ranges from 1 to q, the row index takes the values i-1, i, i+1, $\overline{FG'}$ is the mean gray value over rows i-1, i, i+1, and m is the total number of pixels in these three rows;
In the vertical direction, the variance VerticalVariance(j) of the j-th pixel column is the variance over all non-zero pixels in columns j-1, j, j+1:

$$\mathrm{VerticalVariance}(j)=\frac{1}{m}\sum_{l=j-1}^{j+1}\sum_{i}\bigl(FG'(i,l)-\overline{FG'}\bigr)^{2}$$

where FG'(i, j) is the gray value of the pixel in row i, column j of the foreground gray-scale map, the column index takes the values j-1, j, j+1, i ranges from 1 to p, $\overline{FG'}$ is the mean gray value over columns j-1, j, j+1, and m is the total number of pixels in these three columns.
The texture statistical feature value can also be computed in other ways, e.g. using the Sobel operator. For the pixel in row i, column j, the horizontal Sobel result is:

S(i,j) = FG'(i-1,j-1) + 2·FG'(i-1,j) + FG'(i-1,j+1) - FG'(i+1,j-1) - 2·FG'(i+1,j) - FG'(i+1,j+1)
For the pixel in row i, column j, the vertical Sobel result is:
S(i,j)=FG′(i-1,j+1)+2·FG′(i,j+1)+FG′(i+1,j+1)
-FG′(i-1,j-1)-2·FG′(i,j-1)-FG′(i+1,j-1)
where FG' denotes the gray value of a pixel and the parenthesized indices give its row and column position.
In the present example, the texture statistical feature values of the horizontal-direction pixel rows are represented by a horizontal texture projection histogram, and those of the vertical-direction pixel columns by a vertical texture projection histogram.
Taking the variance as an example, the horizontal texture projection histogram HorizontalProject(i) is:
HorizontalProject(i) = HorizontalVariance(i);
that is, the abscissa of the horizontal texture projection histogram is the pixel-row index and the ordinate is the variance of each pixel row, here the HorizontalVariance(i) computed above.
The vertical texture projection histogram VerticalProject(j) is:
VerticalProject(j) = VerticalVariance(j).
Likewise, the abscissa of the vertical texture projection histogram is the pixel-column index and the ordinate is the variance of each pixel column, here the VerticalVariance(j) computed above.
With the variance as the texture statistical feature value, the vertical texture projection histogram obtained by projecting in the vertical direction is shown in Fig. 2.
If the Sobel operator is adopted, then in the horizontal texture projection histogram the feature value of any pixel row is the maximum Sobel value among that row's pixels, and in the vertical texture projection histogram the feature value of any pixel column is the maximum Sobel value among that column's pixels. For other local texture statistics, such as the Laplace operator, gradient, or LBP feature value, the feature value of a given pixel row or column is likewise the maximum of the per-pixel texture statistics in that row or column.
C. The split-position search module finds, in the horizontal texture projection and the vertical texture projection respectively, the split position at which the moving target and the shadow part differ most in the projection.
Since step B describes the image information well with texture statistical features and yields the texture projection histograms, the split position of maximum difference need only be found from the histogram information. For the n-th pixel position of a texture projection histogram, calculate:

$$\sigma_1=\sum_{i=1}^{n}\left|H(i)-\overline{H}_1\right|,\qquad \sigma_2=\sum_{i=n}^{k}\left|H(i)-\overline{H}_2\right|$$

where k is the total length of the texture projection histogram, i.e. the total number of pixel rows in the horizontal histogram or the total number of pixel columns in the vertical histogram, $\overline{H}_1$ is the mean texture statistical feature value of all pixel positions up to the n-th, and $\overline{H}_2$ is the mean texture statistical feature value of all pixel positions from the n-th onward. When σ₁ + σ₂ attains its minimum, the moving target and the shadow part are distinguished to the greatest extent at that pixel position.
The coordinate position corresponding to the minimum is the required split position; the present example uses the Otsu method (Da-Jin algorithm) to search for the split position of maximum difference between the moving target and the shadow part in the projection. The result of processing the vertical texture projection histogram of Fig. 2 with the Otsu method is shown in Fig. 3, whose abscissa is the pixel column and whose ordinate is the value of σ₁ + σ₂ for that column; the split position found in the vertical texture projection histogram is j = 38. Other known classification techniques, such as the K-means clustering algorithm, can also locate this minimum.
If several texture statistical feature values are calculated, e.g. both the variance and the Sobel operator, then each projection direction correspondingly has several texture projection histograms, here one for the variance and one for the Sobel operator, and a split position is obtained from each. For example, if the variance yields a split position of j = 38 in the vertical texture projection while the Sobel operator yields j = 42, further processing is needed in the subsequent segmentation.
D. The shadow-candidate acquisition module obtains the shadow candidate regions in the image foreground.
Step D uses an existing shadow-judgment method to obtain the shadow candidate regions. It can be carried out at any point after step A completes: concurrently with step B or C, or before or after either of them. In the present example, step D is carried out after step C completes.
Existing methods for obtaining shadow candidate regions include projecting the foreground detection map horizontally and vertically, or combining brightness with chrominance information. When brightness is combined with chrominance information to obtain the shadow candidate regions in the gray-scale map, the pixels that simultaneously satisfy the following three inequalities are taken as the shadow candidate regions:
α ≤ L(x,y)/L_B ≤ β;
|H(x,y) - H_B| ≤ τ_H;
S(x,y) - S_B ≤ τ_S.
For the pixel at coordinates (x,y), L(x,y) denotes its brightness, H(x,y) its chromaticity, and S(x,y) its saturation; L_B, H_B, and S_B denote the background brightness, background chromaticity, and background saturation in the gray-scale map, and once the foreground detection map is obtained they can be computed from the background. α, β, and τ_H are preset values greater than 0 and less than 1, and τ_S is a preset value greater than -1 and less than 0. Because the chromaticity, brightness, and saturation information must be computed from the corresponding region of the original image (which carries the RGB information) together with the background, these shadow candidate regions are obtained in the foreground of the original image.
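A pixel-wise sketch of these three inequalities, assuming the brightness, chromaticity, and saturation planes and their background counterparts are available as NumPy arrays; the function name and the guard against a zero background brightness are illustrative additions:

```python
import numpy as np

def hsl_candidates(L, H, S, Lb, Hb, Sb, alpha, beta, tau_h, tau_s):
    """Boolean mask of pixels satisfying all three shadow inequalities:
    alpha <= L/Lb <= beta, |H - Hb| <= tau_h, S - Sb <= tau_s."""
    ratio = L / np.maximum(Lb, 1e-6)            # guard against Lb == 0
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(H - Hb) <= tau_h)
            & (S - Sb <= tau_s))
```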
The present example uses the foreground-detection-map projection method, so the quantities processed are based on rows and columns: the number of rows or columns satisfying the condition on each side of the split position and the total number of rows or columns on each side. Horizontal and vertical projections of the foreground detection map are computed, giving a horizontal projection histogram and a vertical projection histogram, and a threshold is set. The abscissa of the horizontal projection histogram corresponds to the pixel rows of the gray-scale map, the abscissa of the vertical projection histogram to its pixel columns, and the ordinate of each histogram to the projected gray value of the pixel row or column. Pixel rows and columns below the threshold are judged to be shadow candidate regions, so the shadow candidate regions in the image foreground are obtained from the foreground of the foreground detection map.
E. The shadow removal module calculates, for each of the horizontal and vertical directions, the ratio of the shadow candidate region on each side of the split position to the foreground part, and removes the side of the split position with the larger ratio as shadow.
A split position corresponds to a pixel row or pixel column in the image. In this step, the image is first divided at the split position, and the shadow candidate regions and the foreground parts are divided accordingly. Taking the vertical texture projection histogram as an example, the vertical-direction split position obtained in the present example from the variance feature is at column j = 38, so the image is divided into two parts at that column. In this direction, the proportion of each part's foreground occupied by shadow candidate regions is computed, and the part with the larger proportion is removed as the shadow part. In the present example, in the part left of column 38 the pixel columns below the threshold (i.e. judged to be shadow candidates) account for 38% of the foreground pixel columns, while on the right side the figure is 24%; the left part is therefore more likely to be the shadow part, and everything left of column 38 is rejected as the shadow region.
If several texture statistical feature values are calculated, e.g. the variance and the Sobel operator, there are several split positions in each direction, and the final split position used for division must be established by comparison. Suppose the Sobel operator yields column 42 as the split position in the vertical texture projection histogram; the candidate-to-foreground proportions are likewise computed for the parts to its left and right, say 43% on the left and 20% on the right. The difference between the two proportions for the variance-based split position (38% vs. 24%) is then smaller than for the Sobel-based split position (43% vs. 20%), so the Sobel-based split position is taken as the final split position for division, and the part in which candidates occupy the larger proportion of the foreground, i.e. the left part, is rejected as the shadow part. The split position and shadow in the horizontal direction are determined in the same way.
If the shadow candidate regions are judged from pixel brightness, chromaticity, and saturation information, the procedure is analogous. Suppose that to the left of the vertical split position there are X_L foreground pixels, of which Y_L are judged to be shadow candidates; the left-side shadow proportion is then P_L = Y_L / X_L. Likewise, with X_R foreground pixels and Y_R candidate pixels to the right of the split position, the right-side proportion is P_R = Y_R / X_R. P_L and P_R are compared: the side with the larger proportion is judged to be shadow and removed, and the side with the smaller proportion is judged to be the moving object and retained. If a single projection direction has several split positions, the final split position is again determined by comparing, for each split position, the difference between the shadow proportions of the two parts it divides, taking the split position with the larger difference as the final one, after which the shadow is rejected.
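Putting step E together for the pixel-based candidate case, a sketch assuming 0/1 NumPy masks and a vertical (column) split position; the function name is an illustrative choice:

```python
import numpy as np

def remove_shadow_side(fg_mask, cand_mask, split_col):
    """Compute P_L and P_R (candidate pixels / foreground pixels on each
    side of the split column) and zero out the side with the larger ratio."""
    def ratio(cols):
        fg = fg_mask[:, cols].sum()
        return cand_mask[:, cols].sum() / fg if fg else 0.0
    p_left = ratio(slice(None, split_col))
    p_right = ratio(slice(split_col, None))
    cleaned = fg_mask.copy()
    if p_left > p_right:
        cleaned[:, :split_col] = 0   # left side judged shadow, removed
    else:
        cleaned[:, split_col:] = 0   # right side judged shadow, removed
    return cleaned
```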
In summary, the method and device of the invention effectively reduce the shadow false-detection rate in foreground detection results, avoid falsely detecting non-shadow regions as shadow, and improve shadow identification accuracy without requiring scene-specific thresholds, thereby aiding subsequent processing.

Claims (11)

1. A method for removing shadows from foreground detection results, comprising: obtaining a gray-scale map and a foreground detection map from an input original image, and obtaining the shadow candidate regions in the image foreground, characterized in that the method further comprises:
removing the background from the gray-scale map to obtain a foreground gray-scale map, and calculating, from the foreground gray-scale map, the texture statistical feature value of each pixel row in the horizontal direction and of each pixel column in the vertical direction;
using the pixel-row texture statistical feature values to find a split position in the horizontal direction, and the pixel-column texture statistical feature values to find a split position in the vertical direction, the split position being the pixel position that maximizes the difference between the moving target and the shadow;
calculating, for each of the horizontal and vertical directions, the ratio of the shadow candidate region on each side of the split position to the corresponding foreground part, and removing the side of the split position with the larger ratio as shadow.
2. The method for removing shadows from foreground detection results according to claim 1, characterized in that the calculated texture statistical feature values comprise at least one of: gradient, variance, Sobel operator, entropy, Laplace operator, and LBP feature value.
3. The method for removing shadows from foreground detection results according to claim 1, characterized in that the texture statistical feature value is the variance, and for a foreground gray-scale map with p rows and q columns the feature values in the horizontal and vertical directions are obtained as follows:

in the horizontal direction, the variance HorizontalVariance(i) of the i-th pixel row is the variance of the gray values of all non-zero pixels in the i-th row and the r adjacent rows on each side, where r is a specified value:

$$\mathrm{HorizontalVariance}(i)=\frac{1}{m}\sum_{k=i-r}^{i+r}\sum_{j}\bigl(FG'(k,j)-\overline{FG'}\bigr)^{2}$$

where FG'(i, j) denotes the gray value of the pixel in row i, column j of the foreground gray-scale map, j ranges from 1 to q, the row index ranges from i-r to i+r, $\overline{FG'}$ is the mean gray value of the pixels in the 2r+1 rows from i-r to i+r, and m is the total number of pixels in the 2r+1 rows;

in the vertical direction, the variance VerticalVariance(j) of the j-th pixel column is the variance of all non-zero pixels in the j-th column and the c adjacent columns on each side:

$$\mathrm{VerticalVariance}(j)=\frac{1}{m}\sum_{l=j-c}^{j+c}\sum_{i}\bigl(FG'(i,l)-\overline{FG'}\bigr)^{2}$$

where FG'(i, j) denotes the gray value of the pixel in row i, column j of the foreground gray-scale map, the column index ranges from j-c to j+c, i ranges from 1 to p, $\overline{FG'}$ is the mean gray value of the pixels in the 2c+1 columns from j-c to j+c, and m is the total number of pixels in the 2c+1 columns.
4. The method for removing shadows from foreground detection results according to claim 1, characterized in that the texture statistical feature value is the Sobel operator, and for the pixel in row i, column j of the foreground gray-scale map:

the horizontal Sobel result is:

S(i,j) = FG'(i-1,j-1) + 2·FG'(i-1,j) + FG'(i-1,j+1) - FG'(i+1,j-1) - 2·FG'(i+1,j) - FG'(i+1,j+1)

the vertical Sobel result is:

S(i,j) = FG'(i-1,j+1) + 2·FG'(i,j+1) + FG'(i+1,j+1) - FG'(i-1,j-1) - 2·FG'(i,j-1) - FG'(i+1,j-1)

where FG' is the gray value of a pixel; in the horizontal direction the texture statistical feature value of each pixel row is the maximum Sobel value among the pixels in that row, and in the vertical direction the texture statistical feature value of each pixel column is the maximum Sobel value among the pixels in that column.
5. The method for removing shadows from foreground detection results according to claim 1, characterized in that the method for finding the split position in the horizontal and vertical directions comprises, for the n-th pixel position in a single direction, calculating:

$$\sigma_1=\sum_{i=1}^{n}\left|H(i)-\overline{H}_1\right|,\qquad \sigma_2=\sum_{i=n}^{k}\left|H(i)-\overline{H}_2\right|$$

where k is the total length in pixel rows or pixel columns, H(i) is the texture statistical feature value of the i-th pixel position, $\overline{H}_1$ is the mean feature value of all pixel positions up to the n-th, and $\overline{H}_2$ is the mean feature value of all pixel positions from the n-th onward; the n-th pixel position is the n-th pixel row in the horizontal direction and the n-th pixel column in the vertical direction;

and taking the pixel position at which σ₁ + σ₂ attains its minimum as the split position where the moving target and the shadow differ most.
6. The method for removing shadows from foreground detection results according to claim 1, characterized in that the shadow candidate regions in the image foreground are obtained by one of the following methods:

performing horizontal and vertical projections of the foreground in the image and setting a threshold, and taking the image regions corresponding to pixel positions whose projection falls below the threshold as the shadow candidate regions;

presetting thresholds for brightness, chromaticity, and saturation, and taking as the shadow candidate regions the pixels in the image that simultaneously satisfy the brightness, chromaticity, and saturation threshold conditions.
7. The method for removing shadows from foreground detection results according to claim 1 or 2, characterized in that, when several texture statistical feature values are calculated, several corresponding split positions are obtained in each of the horizontal and vertical directions; for a single direction, after the ratio of shadow candidate region to foreground pixels is calculated on both sides of each split position, the split position whose two sides differ most in this ratio is first selected as the final split position, and then the side of the final split position with the larger ratio is removed as shadow.
8. A device for removing shadows from foreground detection results, comprising: a preprocessing module that obtains a gray-scale map and a foreground detection map from an input original image, and a shadow-candidate acquisition module that obtains the shadow candidate regions in the image foreground, characterized in that the device further comprises:

a texture statistical feature value calculation module that removes the background from the gray-scale map to obtain a foreground gray-scale map and calculates texture statistical feature values in the horizontal and vertical directions from the foreground gray-scale map;

a split-position search module that uses the horizontal-direction feature values to find the split position in the horizontal direction and the vertical-direction feature values to find the split position in the vertical direction, each split position maximizing the difference between the moving target and the shadow in its direction;

a shadow removal module that, for each of the horizontal and vertical directions, calculates the ratio of the shadow candidate region on each side of the split position to the corresponding foreground part, and removes the side of the split position with the larger ratio as shadow.
9. The device for removing shadows from foreground detection results according to claim 8, characterized in that the texture statistical feature values calculated by the texture statistical feature value calculation module comprise at least one of: gradient, variance, Sobel operator, entropy, Laplace operator, and LBP feature value.
10. The device for removing shadows from foreground detection results according to claim 8, characterized in that the split-position search module finds the split position in the horizontal and vertical directions by, for the n-th pixel position in a single direction, calculating:

$$\sigma_1=\sum_{i=1}^{n}\left|H(i)-\overline{H}_1\right|,\qquad \sigma_2=\sum_{i=n}^{k}\left|H(i)-\overline{H}_2\right|$$

where k is the total length in pixel rows or pixel columns, H(i) is the texture statistical feature value of the i-th pixel position, $\overline{H}_1$ is the mean feature value of all pixel positions up to the n-th, and $\overline{H}_2$ is the mean feature value of all pixel positions from the n-th onward; the n-th pixel position is the n-th pixel row in the horizontal direction and the n-th pixel column in the vertical direction;

and taking the pixel position at which σ₁ + σ₂ attains its minimum as the split position where the moving target and the shadow differ most.
11. The device for removing shadows from foreground detection results according to claim 8 or 9, characterized in that, when the texture statistical feature value calculation module calculates several feature values, the split-position search module correspondingly obtains several split positions in each of the horizontal and vertical directions; for a single direction, after calculating the ratio of shadow candidate region to foreground pixels on both sides of each split position, the shadow removal module first selects the split position whose two sides differ most in this ratio as the final split position, and then removes the side of the final split position with the larger ratio as shadow.
CN201510679083.6A 2015-10-19 2015-10-19 Method and device for removing shadows from foreground detection results Active CN105261021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510679083.6A CN105261021B (en) Method and device for removing shadows from foreground detection results


Publications (2)

Publication Number Publication Date
CN105261021A true CN105261021A (en) 2016-01-20
CN105261021B CN105261021B (en) 2019-03-08

Family

ID=55100693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510679083.6A Active CN105261021B (en) 2015-10-19 2015-10-19 Remove the method and device of foreground detection result shade

Country Status (1)

Country Link
CN (1) CN105261021B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6999620B1 (en) * 2001-12-10 2006-02-14 Hewlett-Packard Development Company, L.P. Segmenting video input using high-level feedback
CN101098462A (en) * 2007-07-12 2008-01-02 上海交通大学 Chroma deviation and brightness deviation combined video moving object detection method
CN101324927A (en) * 2008-07-18 2008-12-17 北京中星微电子有限公司 Method and apparatus for detecting shadows
CN102184553A (en) * 2011-05-24 2011-09-14 杭州华三通信技术有限公司 Moving shadow detecting method and device
CN102298781A (en) * 2011-08-16 2011-12-28 长沙中意电子科技有限公司 Motion shadow detection method based on color and gradient characteristics
CN103810722A (en) * 2014-02-27 2014-05-21 云南大学 Moving target detection method combining improved LBP (Local Binary Pattern) texture and chrominance information
CN104537695A (en) * 2015-01-23 2015-04-22 贵州现代物流工程技术研究有限责任公司 Anti-shadow and anti-covering method for detecting and tracing multiple moving targets

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127744A (en) * 2016-06-17 2016-11-16 Image foreground and background border salience estimation method and system
CN106127744B (en) * 2016-06-17 2019-05-14 Image foreground and background border salience estimation method and system
CN106339995A (en) * 2016-08-30 2017-01-18 电子科技大学 Space-time multiple feature based vehicle shadow eliminating method
CN107168049A (en) * 2017-05-19 2017-09-15 浙江工业大学 Method for acquiring photovoltaic array output characteristic curve in real time
CN107168049B (en) * 2017-05-19 2019-11-29 浙江工业大学 Method for acquiring photovoltaic array output characteristic curve in real time
CN111161299A (en) * 2018-11-08 2020-05-15 深圳富泰宏精密工业有限公司 Image segmentation method, computer program, storage medium, and electronic device
CN111161299B (en) * 2018-11-08 2023-06-30 深圳富泰宏精密工业有限公司 Image segmentation method, storage medium and electronic device
CN110659687A (en) * 2019-09-24 2020-01-07 华南农业大学 Yellow trapping plate-based Phyllotreta striolata detection method, medium and equipment
CN113192101A (en) * 2021-05-06 2021-07-30 影石创新科技股份有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113192101B (en) * 2021-05-06 2024-03-29 影石创新科技股份有限公司 Image processing method, device, computer equipment and storage medium
CN113487502A (en) * 2021-06-30 2021-10-08 中南大学 Shadow removing method for hollow image
CN113487502B (en) * 2021-06-30 2022-05-03 中南大学 Shadow removing method for hollow image
WO2023284313A1 * 2021-07-16 2023-01-19 稿定(厦门)科技有限公司 Automatic slicing method and device for PSD pictures
CN113870237A (en) * 2021-10-09 2021-12-31 西北工业大学 Composite material image shadow detection method based on horizontal diffusion
CN113870237B (en) * 2021-10-09 2024-03-08 西北工业大学 Composite material image shadow detection method based on horizontal diffusion

Also Published As

Publication number Publication date
CN105261021B (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN105261021A (en) Method and apparatus of removing foreground detection result shadows
Babu et al. Vehicle number plate detection and recognition using bounding box method
CN102609686B (en) Pedestrian detection method
US8340420B2 (en) Method for recognizing objects in images
Kong et al. General road detection from a single image
CN108681693B (en) License plate recognition method based on trusted area
Prabhakar et al. Automatic vehicle number plate detection and recognition
US7783106B2 (en) Video segmentation combining similarity analysis and classification
US20210150182A1 (en) Cloud detection from satellite imagery
CN108052904B (en) Method and device for acquiring lane line
CN103530600A (en) License plate recognition method and system under complicated illumination
CN105303153A (en) Vehicle license plate identification method and apparatus
CN104408711A (en) Multi-scale region fusion-based salient region detection method
CN106600955A (en) Method and apparatus for detecting traffic state and electronic equipment
CN110706235A (en) Far infrared pedestrian detection method based on two-stage cascade segmentation
US9508018B2 (en) Systems and methods for object detection
CN101369312B (en) Method and equipment for detecting intersection in image
CN105447834A (en) Correction method for non-uniform illumination of mahjong images based on figure classification
CN102930292B (en) Object recognition method based on p-SIFT features
CN107247967B (en) Vehicle window annual inspection mark detection method based on R-CNN
Devadethan et al. Face detection and facial feature extraction based on a fusion of knowledge based method and morphological image processing
Aarthi et al. Vehicle detection in static images using color and corner map
CN107832732B (en) Lane line detection method based on triple traversal
CN102129569B (en) Human body detection device and method based on multi-scale contrast features
WO2019148362A1 (en) Object detection method and apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant