CN106408529A - Shadow removal method and apparatus - Google Patents
- Publication number
- CN106408529A (application number CN201610797642.8A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- super
- segmentation
- seed point
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/94
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20081—Training; Learning
Abstract
The present invention discloses a shadow removal method and apparatus. The method includes the following steps: an input image to be detected is pre-segmented using a superpixel algorithm, and seed points are assigned to the pre-segmented superpixels; the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels is obtained, the seed point giving the minimum distance measure determines the class label of the pixel, the coordinate mean of the pixels under each class label is obtained, and the coordinate means are taken as new seed points; the distance measure is iteratively recomputed using the weights of the color distance and the spatial distance until the new seed points no longer change, at which point the new seed points are determined as the final seed points and define the superpixels; material classification is performed on the superpixels, and the shadow material is removed. The method does not need to consider the complexity of the background information: the materials in the image are decomposed directly into superpixels, the superpixels are described by a fusion of multiple features, the superpixels containing shadow are removed from the pixel classes, and the accurate coordinate position of the target is thereby obtained.
Description
Technical field
The present application relates to the technical field of image processing, and in particular to a shadow removal method. The application also relates to a shadow removal apparatus.
Background art
Video surveillance is an important component of road-safety and crime-prevention systems. A traditional surveillance system consists of a front-end camera, a transmission cable, and a video monitoring platform. Cameras, which may be network digital cameras or analog cameras, perform the front-end acquisition of the video signal, and the whole forms a strongly preventive integrated system. Because video surveillance is intuitive, accurate, timely, and rich in information, it is widely used in many settings.
For the increasingly valued intelligent transportation systems of today, video surveillance is an indispensable component. In the practical monitoring of roads and campuses, interfering shadows, such as tree shadows or the shadows cast by the targets themselves (vehicles, pedestrians, etc.), are often mistaken for the target or for part of the target, which distorts the estimated size and position of the target and impairs the correct operation of the intelligent transportation system. For example, a patch of swaying tree shadow may trigger up to hundreds of false alarms, interfering heavily with both storage and event review, while a moving target inside a shadow may merge with the shadow into one piece and be missed altogether.
In the course of making the present application, the applicant found that an important prerequisite of current shadow removal methods is the ability to extract a reliable background, from which effective feature vectors for foreground and background can be derived. For the monitoring of actual scenes, environmental complexity often makes the background established by traditional background modeling inaccurate, affecting the final classification. The Gaussian mixture background model described in the prior art is suitable only for distant, large, low-speed scenes; for the real-time monitoring of close-range, small scenes the reliability of the background drops sharply. In addition, the single or weakly discriminative shadow features used in the prior art reduce the accuracy of the shadow judgment and make it relatively difficult to guarantee the completeness of the extracted target, thereby affecting the precise localization of objects.
Summary of the invention
The embodiments of the present application provide a superpixel-based shadow removal method and apparatus that do not need to consider the complexity of the background information: the materials in the image are decomposed directly into superpixels, each superpixel block is described by a fusion of multiple features, and the shadow superpixels are finally classified out and removed, so that the accurate position of the target is extracted from the materials. Localization of the shadow region is more accurate, adaptability to different scenes is better, and the completeness of target extraction is improved.
To achieve the above technical purpose, the present application provides a shadow removal method, the method comprising:
pre-segmenting an input image to be detected using a superpixel algorithm to obtain pre-segmented superpixels, and assigning a seed point to each of the pre-segmented superpixels;
obtaining the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, taking the seed point giving the minimum distance measure as the class label of each pixel, obtaining the coordinate mean of the pixels under each class label, and taking the coordinate mean as a new seed point;
iteratively recomputing the distance measure using the weights of the color distance and the spatial distance until the new seed points no longer change, determining the new seed points as the final seed points, and determining the superpixels from the final seed points;
performing material classification on the superpixels using pre-trained material classifiers, removing the superpixels classified as shadow material, and correcting the position of the target remaining in the image to be detected after the superpixels corresponding to the shadow material are removed.
Preferably, before the pre-segmentation of the input image to be detected using the superpixel algorithm, the method further comprises:
performing superpixel segmentation on each input image to obtain superpixels of each material as training samples;
obtaining a joint feature from the pixel-value histogram and the gradient orientation histogram of the images in the training samples;
inputting the joint feature into an SVM trainer for training to obtain a classifier for each material.
Preferably, pre-segmenting the input image to be detected using the superpixel algorithm, obtaining the pre-segmented superpixels, and assigning a seed point to each pre-segmented superpixel specifically comprises:
obtaining the image coordinates from the input image to be detected;
pre-segmenting the image using the superpixel algorithm into a number of pre-segmented superpixels of equal size, distributing seed points uniformly over the image of the pre-segmented superpixels, and assigning a class label to each pixel in each pre-segmented superpixel.
Preferably, obtaining the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, taking the seed point giving the minimum distance measure as the class label of each pixel, obtaining the coordinate mean of the pixels under each class label, and taking the coordinate mean as a new seed point specifically comprises:
obtaining, according to the number of seed points, the gradient values of all pixels in each pre-segmented superpixel, and moving the seed point of each pre-segmented superpixel to the location of minimum gradient;
obtaining the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, taking the seed point giving the minimum distance measure as the class label of each pixel, updating the class labels of the pixels and clustering them, obtaining the coordinate mean of the pixels under each class label, and taking the coordinate mean as the new seed point.
Preferably, performing material classification on the superpixels using the pre-trained material classifiers, removing the superpixels classified as shadow material, and correcting the position of the target remaining in the image to be detected specifically comprises:
removing materials with abnormal classification from the materials using mathematical correlation and shape information, and preliminarily correcting the position and shape of the target;
obtaining a target motion-information histogram of the superpixels from the motion information and, according to the target motion-information histogram, removing from the materials the shadow material at the target boundary that lacks motion information, using the maximum between-class variance method;
correcting the position of the target remaining in the image to be detected after the superpixels corresponding to the shadow material are removed, and displaying the corrected target position.
In addition, the application also provides a shadow removal apparatus, characterized in that the apparatus comprises:
an extraction module, configured to pre-segment an input image to be detected using a superpixel algorithm, obtain pre-segmented superpixels, and assign a seed point to each of the pre-segmented superpixels;
an acquisition module, configured to obtain the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, take the seed point giving the minimum distance measure as the class label of each pixel, obtain the coordinate mean of the pixels under each class label, and take the coordinate mean as a new seed point;
a processing module, configured to iteratively recompute the distance measure using the weights of the color distance and the spatial distance until the new seed points no longer change, determine the new seed points as the final seed points, and determine the superpixels from the final seed points;
a positioning module, configured to perform material classification on the superpixels using pre-trained material classifiers, remove the superpixels classified as shadow material, and correct the position of the target remaining in the image to be detected after the superpixels corresponding to the shadow material are removed.
Preferably, the apparatus further comprises a classification module, configured to:
perform superpixel segmentation on each input image to obtain superpixels of each material as training samples;
obtain a joint feature from the pixel-value histogram and the gradient orientation histogram of the images in the training samples;
input the joint feature into an SVM trainer for training to obtain a classifier for each material.
Preferably, the extraction module is specifically configured to:
obtain the image coordinates from the input image to be detected;
pre-segment the image using the superpixel algorithm into a number of pre-segmented superpixels of equal size, distribute seed points uniformly over the image of the pre-segmented superpixels, and assign a class label to each pixel in each pre-segmented superpixel.
Preferably, the acquisition module is specifically configured to:
obtain, according to the number of seed points, the gradient values of all pixels in each pre-segmented superpixel, and move the seed point of each pre-segmented superpixel to the location of minimum gradient;
obtain the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, take the seed point giving the minimum distance measure as the class label of each pixel, update the class labels of the pixels and cluster them, obtain the coordinate mean of the pixels under each class label, and take the coordinate mean as the new seed point.
Preferably, the positioning module is specifically configured to:
remove materials with abnormal classification from the materials using mathematical correlation and shape information, and preliminarily correct the position and shape of the target;
obtain a target motion-information histogram of the superpixels from the motion information and, according to the target motion-information histogram, remove from the materials the shadow material at the target boundary that lacks motion information, using the maximum between-class variance method;
correct the position of the target remaining in the image to be detected after the superpixels corresponding to the shadow material are removed, and display the corrected target position.
Compared with the prior art, the advantageous technical effects of the solution proposed by the embodiments of the present application include the following.
The embodiments of the present application disclose a shadow removal method and apparatus. The method pre-segments the input image to be detected using a superpixel algorithm and assigns a seed point to each pre-segmented superpixel; obtains the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, takes the seed point giving the minimum distance measure as the class label of each pixel, obtains the coordinate mean of the pixels under each class label, and takes the coordinate mean as the new seed point; iterates the distance measure using the weights of the color distance and the spatial distance until the new seed points no longer change, at which point the new seed points are the final seed points and define the superpixels; performs material classification on the superpixels and removes the shadow material. The method does not need to consider the complexity of the background information: the materials in the image are decomposed directly into superpixels, each superpixel is described by a fusion of multiple features, and the superpixels containing shadow are finally removed from their class, so that the accurate coordinate position of the target is obtained.
Brief description of the drawings
To illustrate the technical solution of the application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a shadow removal method proposed by the invention;
Fig. 2 is a schematic flowchart of a shadow removal method proposed by an embodiment of the application;
Fig. 3 is a schematic flowchart of a superpixel segmentation module proposed by an embodiment of the application;
Fig. 4 is a schematic flowchart of an offline training module for shadow, target, and road-surface classifiers proposed by an embodiment of the application;
Fig. 5 is a schematic flowchart of a target relocation module proposed by an embodiment of the application;
Fig. 6 is a schematic diagram of a shadow removal result proposed by an embodiment of the application;
Fig. 7 is a schematic diagram of a shadow removal apparatus proposed by an embodiment of the application.
Specific embodiments
Shadows have become one of the key factors affecting monitoring quality in video surveillance. The prior art therefore includes image shadow detection methods based on superpixels and support vector machines (Support Vector Machine, SVM). A superpixel generally refers to an image block formed by neighboring pixels with similar texture, color, brightness, and other features. Superpixels are widely used in image segmentation and target recognition. Compared with traditional pixel-level processing, superpixels simplify the original image and improve the efficiency of image characterization. In target recognition tasks, processing the image with superpixels is convenient and efficient: it significantly simplifies the task and forms a more concise representation of the image.
As described in the background of the invention, the prior-art image shadow detection methods based on superpixels and support vector machines (Support Vector Machine, SVM) commonly exhibit the following problems:
1. An important prerequisite of such methods is the extraction of a reliable background, from which effective feature vectors for foreground and background can be derived. For the monitoring of actual scenes, environmental complexity often makes the background established by traditional background modeling inaccurate, affecting the final classification.
2. The Gaussian mixture background model is suitable only for distant, large, low-speed scenes; for the real-time monitoring of close-range, small scenes, the reliability of the background drops sharply.
3. The various feature types of shadow and target must be recognized and distinguished; a single or weakly discriminative feature reduces the accuracy of the shadow judgment.
4. It is relatively difficult to guarantee the completeness of the extracted target.
Therefore, existing technology still cannot accurately and effectively remove interfering shadows from the image and finally determine the exact position of the object.
In view of the above problems in the prior art, the present invention proposes a shadow removal method. The method decomposes the target, the shadow, and the background in the image to be detected into superpixels, describes each superpixel with a fusion of multiple features, and finally separates the shadow superpixel blocks from their class and removes them. The method adapts better to complex environments.
Fig. 1 is a schematic flowchart of a shadow removal method proposed by the invention, in which:
Step 101: pre-segment the input image to be detected using a superpixel algorithm, and assign a seed point to each pre-segmented superpixel.
In a particular embodiment, before this step it is also necessary to obtain a classifier for each material through SVM training, used to classify the materials contained in the superpixels after the image is segmented. Specifically, each input image is first superpixel-segmented, and superpixels of each material are obtained as training samples; a joint feature is then formed from the pixel-value histogram and the gradient orientation histogram of the images in the training samples; finally, the joint feature is input into an SVM for training, yielding a classifier for each material.
Of course, the images used to obtain the classifiers may be the images currently to be detected, or may be obtained by segmenting, screening, and training on several previously input images; this does not affect the scope of protection of the present invention.
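As a rough illustration of the joint feature described above, the sketch below builds, for one superpixel, a pixel-value histogram and a gradient-orientation histogram and concatenates them. The grayscale simplification, the bin count of 8, and the central-difference gradient are assumptions of this sketch, not details given in the patent; the resulting vectors, labeled per material (shadow, target, road surface), would then be passed to an SVM trainer such as `sklearn.svm.SVC`, which the patent does not name.

```python
import math

def joint_feature(image, coords, bins=8):
    """Joint feature for one superpixel: a pixel-value histogram concatenated
    with a gradient-orientation histogram, both L1-normalized.
    `image` is a 2-D list of gray values in [0, 255]; `coords` lists the
    (x, y) pixels belonging to the superpixel."""
    h, w = len(image), len(image[0])
    val_hist = [0.0] * bins
    ori_hist = [0.0] * bins
    for (x, y) in coords:
        v = image[y][x]
        val_hist[min(int(v * bins / 256), bins - 1)] += 1.0
        # central-difference gradient, clamped at the image border
        gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
        gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
        theta = math.atan2(gy, gx) % (2 * math.pi)  # orientation in [0, 2*pi)
        ori_hist[min(int(theta * bins / (2 * math.pi)), bins - 1)] += 1.0
    n = float(len(coords))
    return [c / n for c in val_hist] + [c / n for c in ori_hist]
```

Each superpixel thus yields a fixed-length (here 16-dimensional) vector regardless of its pixel count, which is what makes per-superpixel SVM classification straightforward.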
Further, image coordinates are obtained from the input image to be detected; the image is pre-segmented by the superpixel algorithm into a number of pre-segmented superpixels of equal size, seed points are distributed uniformly over the image of the pre-segmented superpixels, and a class label is assigned to each pixel in each pre-segmented superpixel.
Step 102: obtain the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels, take the seed point giving the minimum distance measure as the class label of each pixel, obtain the coordinate mean of the pixels under each class label, and take the coordinate mean as the new seed point.
This step determines new seed points through the distance measure; the distance measure is iteratively recomputed with the added weights of the color distance and the spatial distance until the new seed points no longer change, at which point the new seed points are determined as the final seed points, and the required superpixels are determined from them.
In a particular embodiment of the invention, the gradient values of all pixels in each pre-segmented superpixel are first obtained according to the number of seed points, and the seed point of each pre-segmented superpixel is moved to the location of minimum gradient. The distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels is then obtained; the seed point giving the minimum distance measure determines the class label of each pixel; the class labels are updated and the pixels clustered; the coordinate mean of the pixels under each class label is obtained and taken as the new seed point. Finally, the distance measure is iterated using the weights of the color distance and the spatial distance, and the required superpixels are determined.
Step 103: iteratively recompute the distance measure using the weights of the color distance and the spatial distance until the new seed points no longer change; determine the new seed points as the final seed points, and determine the superpixels from the final seed points.
This step uses the added weights of the color distance and the spatial distance to make the distance measure converge faster and to further determine accurate seed points and the required superpixels, converting the clustering problem between pixels, through the weight changes, from one dominated by the color distance into one dominated by the spatial distance. Specifically, the distance measure is iterated with the color-distance and spatial-distance weights until the new seed points no longer change; the new seed points are then determined as the final seed points, from which the superpixels are determined.
Step 104: perform material classification on the superpixels using the pre-trained material classifiers, and remove the shadow material from the materials.
In this step, the superpixels determined in step 103 are fed into the classifiers for classification. First, materials with abnormal classification are removed using mathematical correlation and shape information, and the position and shape of the target are preliminarily corrected. Then the target motion-information histogram of the superpixels is obtained from the motion information and, according to this histogram, the maximum between-class variance method is used to remove from the materials the shadow material at the target boundary that lacks motion information, finally yielding the accurate coordinate position of the required target.
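The maximum between-class variance (Otsu) step can be sketched on a generic histogram, a minimal version of the standard algorithm rather than the patent's exact procedure. Applied to the target motion-information histogram described above, bins below the returned threshold would correspond to motionless superpixels (candidate shadow material); the histogram layout is an assumption of this sketch.

```python
def otsu_threshold(hist):
    """Maximum between-class variance (Otsu) threshold on a histogram.
    Returns the bin index t that maximizes the between-class variance for
    the split [0..t] versus [t+1..end]."""
    total = float(sum(hist))
    total_sum = sum(i * h for i, h in enumerate(hist))  # sum of i * hist[i]
    best_t, best_var = 0, -1.0
    w0 = 0.0  # weight (pixel count) of the low class
    m0 = 0.0  # cumulative sum of i * hist[i] for the low class
    for t in range(len(hist) - 1):
        w0 += hist[t]
        m0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = m0 / w0                 # mean of the low class
        mu1 = (total_sum - m0) / w1   # mean of the high class
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a strongly bimodal motion histogram the returned threshold falls in the valley between the motionless and the moving bins, which is exactly the separation this step needs.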
As can be seen, compared with the prior art, the advantageous technical effects of the solution proposed by the embodiment of the present invention include the following.
The input image to be detected is pre-segmented using a superpixel algorithm, and a seed point is assigned to each pre-segmented superpixel; the distance measure between each pixel in a pre-segmented superpixel and the seed points of the adjacent pre-segmented superpixels is obtained, the seed point giving the minimum distance measure is taken as the class label of each pixel, and the coordinate mean of the pixels under each class label is obtained and taken as the new seed point; the distance measure is iterated using the weights of the color distance and the spatial distance until the new seed points no longer change, at which point the new seed points are the final seed points and define the superpixels; material classification is performed on the superpixels and the shadow material is removed. The method does not need to consider the complexity of the background information: the materials in the image are decomposed directly into superpixels, each superpixel is described by a fusion of multiple features, and the superpixels containing shadow are finally removed from their class, so that the accurate coordinate position of the target is obtained.
The technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the invention without creative effort fall within the scope of protection of the invention.
As described above, the image shadow detection method known in the prior art first extracts the foreground region through a Gaussian mixture background model, then applies superpixel segmentation to the foreground region, computes within each superpixel the mean differences in brightness, color, and gradient between foreground and background, combines these into a 20-dimensional feature vector, and finally classifies the image to be detected with a support vector machine to extract the required target. However, the restricted use scenes of the Gaussian background model, the inconvenience of the feature-vector extraction, and the incompleteness of the target extraction all affect the practical performance of this method and degrade the user experience.
To solve the above problems, the embodiment of the present invention focuses its optimization on the superpixel segmentation strategy and proposes the method shown in Fig. 2, which comprises the following steps:
Step 201: superpixel segmentation.
In a practical application scenario, the image must be segmented in this step and seed points, i.e., cluster centers, assigned to it. To fully illustrate the embodiment, the procedure is described in detail step by step below.
S11: input the image to be detected fsrc(x, y) and obtain the image coordinates ObjLoc: (Xp, Yp, Width, Height). The coordinates obtained here fall into two cases:
Case 1: from the input image, take all regions of the image and use their coordinate values as the image coordinates ObjLoc: (Xp, Yp, Width, Height);
Case 2: from the input image, obtain the region of interest of the image and then obtain the coordinates ObjLoc: (Xp, Yp, Width, Height) of that region.
It should be noted that in the cases above, the specific values of the coordinates ObjLoc: (Xp, Yp, Width, Height) change with the image region, and the particular selection method does not affect the scope of protection of the present invention.
S12: apply the SLIC super-pixel algorithm to pre-segment the image into K super-pixels of equal size. According to the chosen super-pixel number K, seed points are distributed uniformly over the image, and each pixel in every segmentation block is assigned a class label (the cluster center it belongs to). If the region of interest contains N = Width*Height pixels and is pre-segmented into K equal-size super-pixels, then each super-pixel covers N/K pixels and the distance (step length) between neighboring seed points is approximately S = sqrt(N/K).
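The seed layout of step S12 can be sketched as follows (an illustrative sketch, not part of the claimed invention; the function name and half-step grid offset are our assumptions):

```python
import math

def seed_grid(width, height, k):
    """Lay out roughly k seed points on a uniform grid over a
    width x height image, as in pre-segmentation step S12: the image
    has N = width*height pixels, each super-pixel covers about N/k
    pixels, and neighboring seeds lie ~S = sqrt(N/k) apart."""
    n = width * height
    s = math.sqrt(n / k)
    seeds = []
    y = s / 2                      # offset by half a step so seeds
    while y < height:              # sit at super-pixel centers
        x = s / 2
        while x < width:
            seeds.append((int(x), int(y)))
            x += s
        y += s
    return s, seeds
```

For a 640x480 image with K = 300, this gives a step of exactly S = 32 and a 20x15 grid of 300 seeds.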
S13: correct the seed points. For each seed point, compute the gradient values of all pixels in its n*n neighborhood (for convenience of computation, n = 3 is used in the present invention) and move the seed point to the location with the smallest gradient. This prevents seed points from falling on strong contour edges, which would degrade the subsequent clustering.
S14: within a search range of 2S*2S around each seed point (chosen in the present invention for convenience of computation), determine the distance metric D between each pixel and the seed point. In the standard SLIC form,

D = sqrt((dc/Nc)^2 + (ds/Ns)^2)  (3)

where dc is the color distance, i.e. the difference between the RGB colors of the pixel and the seed point, and ds is the spatial distance, i.e. the Euclidean distance between the pixel and the seed point. Nc and Ns are normalization parameters: Nc varies from image to image and from cluster to cluster, and is usually fixed to the constant 10; Ns is the maximum spatial distance within a class, defined as Ns = S.
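The distance metric of step S14 can be sketched directly (an illustrative sketch; pixels and seeds are given as (x, y, r, g, b) tuples, and the defaults for Nc and Ns are the values stated above):

```python
import math

def slic_distance(p, q, n_c=10.0, n_s=32.0):
    """Distance metric D between a pixel p and a seed q, each given as
    (x, y, r, g, b).  d_c is the RGB color distance, d_s the spatial
    Euclidean distance; N_c (fixed constant, typically 10) and
    N_s = S normalize the two terms, per formula (3):
        D = sqrt((d_c/N_c)^2 + (d_s/N_s)^2)
    """
    d_c = math.sqrt(sum((a - b) ** 2 for a, b in zip(p[2:], q[2:])))
    d_s = math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    return math.sqrt((d_c / n_c) ** 2 + (d_s / n_s) ** 2)
```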
S15: cluster within the pre-segmentation blocks. Specifically, each pixel takes as its new class label the seed point that minimizes the distance metric D between them; the pixel coordinates within each class are then averaged to obtain the new seed points, i.e. the new cluster centers.
S16: repeat steps S14 and S15 iteratively until the seed points no longer change, yielding the final super-pixel segmentation result. In the present invention, 10 iterations are enough to obtain a satisfactory super-pixel segmentation. It should be noted that in practical scenarios the iteration count may also be chosen according to actual conditions, and such a choice does not affect the protection scope of the present invention.
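The assign-and-update loop of steps S14–S16 can be sketched as follows (an illustrative sketch under simplifying assumptions: the whole image is searched rather than a 2S*2S window, and a seed's color is sampled at its rounded position):

```python
import numpy as np

def slic_iterate(img, seeds, n_c=10.0, n_iter=10):
    """Sketch of steps S14-S16: assign each pixel to the seed that
    minimizes the joint color/space metric D, then move each seed to
    the mean coordinate of its cluster; repeat until the seeds stop
    changing or n_iter iterations are reached.
    img: HxWx3 float array; seeds: list of (y, x) tuples."""
    h, w, _ = img.shape
    n_s = np.sqrt(h * w / len(seeds))          # N_s = S = sqrt(N/K)
    ys, xs = np.mgrid[0:h, 0:w]
    seeds = np.array(seeds, dtype=float)
    labels = np.zeros((h, w), dtype=int)
    for _ in range(n_iter):
        dist = np.full((h, w), np.inf)
        for k, (sy, sx) in enumerate(seeds):
            color = img[int(sy), int(sx)]
            d_c = np.sqrt(((img - color) ** 2).sum(axis=2))
            d_s = np.sqrt((ys - sy) ** 2 + (xs - sx) ** 2)
            d = np.sqrt((d_c / n_c) ** 2 + (d_s / n_s) ** 2)
            better = d < dist
            labels[better] = k
            dist[better] = d[better]
        new_seeds = np.array([[ys[labels == k].mean(), xs[labels == k].mean()]
                              for k in range(len(seeds))])
        if np.allclose(new_seeds, seeds):      # seeds no longer change
            break
        seeds = new_seeds
    return labels, seeds
```

On a toy image whose left half is black and right half white, two seeds converge to one cluster per half.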
By adjusting the relative weight of the color distance and the spatial distance, the present invention makes the iteration converge faster. Specifically, a weight αl whose value changes with the iteration number l is introduced, turning formula (3) into formula (4):

D = sqrt((dc/Nc)^2 + αl*(ds/Ns)^2)  (4)

With this weighting, only 3 iterations are needed to obtain an optimized super-pixel segmentation, reducing the overall segmentation time by about 40%. The weight αl can be determined by statistical fitting as formula (5):

αl = λ1*atan(λ2*l), (l = 1, 2, 3, …)  (5)

where λ1 and λ2 are obtained statistically from the iteration results, with λ1 > 0 and λ2 > 0.

It should be noted that αl increases with the iteration number and finally levels off to 1. This matches conventional intuition: at the start, clustering is driven mainly by color information, and as the iteration proceeds, the spacing between pixels of the same color takes over and the problem becomes a distance-led clustering problem. The specific values chosen in this step are optima selected for convenience of computation; choosing other values does not change the protection scope of the present invention.
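The weight schedule of formula (5) can be sketched as follows (the patent obtains λ1 and λ2 by statistical fitting; the values below are illustrative assumptions chosen only so that αl grows with l and levels off near 1, since λ1 = 2/π makes the atan limit exactly 1):

```python
import math

def alpha(l, lam1=2 / math.pi, lam2=1.5):
    """Iteration-dependent weight alpha_l = lambda1 * atan(lambda2 * l)
    per formula (5).  lambda1, lambda2 > 0 are fit from iteration
    statistics; these defaults are illustrative assumptions."""
    return lam1 * math.atan(lam2 * l)
```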
The steps described above are shown in Figure 3, a schematic flowchart of a super-pixel segmentation module proposed by an embodiment of the present application: the image seed points are first initialized and corrected, the distance metric is then computed from the color distance and the spatial distance, and clustering with updates to the distance-metric weight yields the cluster centers.
Step 202, feature extraction.
This step extracts feature values and quickly compares them with the class images in the trainer, so as to quickly filter out the required super-pixels.
Each super-pixel block image obtained by the method of step 201 serves as a training sample. First, using a pixel-value bin width of 32, the histogram of each RGB channel of the sample image is computed; each channel yields a 256/32 = 8-dimensional histogram, so the 3 RGB channels together give a 24-dimensional histogram feature. Next, using an angular bin width of 20°, the gradient orientation histogram of the sample image is computed, giving a 180°/20° = 9-dimensional histogram feature. Finally, the color histogram and the gradient orientation histogram are concatenated into a 33-dimensional joint feature.
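The 33-dimensional joint feature can be sketched as follows (an illustrative sketch; the exact gradient operator and orientation convention are not specified in the text, so the ones below are assumptions):

```python
import numpy as np

def joint_feature(img):
    """33-dimensional joint feature of one super-pixel block image:
    an 8-bin histogram per RGB channel (bin width 32 over 0..255,
    256/32 = 8, so 3*8 = 24 dims) concatenated with a 9-bin gradient
    orientation histogram (20-degree bins over 0..180, 180/20 = 9).
    img: HxWx3 uint8 array."""
    color = []
    for c in range(3):
        h, _ = np.histogram(img[:, :, c], bins=8, range=(0, 256))
        color.append(h)
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)                  # assumed gradient operator
    ang = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned orientation
    hog, _ = np.histogram(ang, bins=9, range=(0, 180))
    return np.concatenate(color + [hog]).astype(float)
```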
It should be noted that the classifier used in this step can be trained in advance from the individual images: specifically, some target, shadow and road-surface super-pixel block images can be filtered out of several segmented input images and used as training samples to train the classifier. This does not affect the protection scope of the present invention.
Step 203, super-pixel classification.
Before this step classifies the super-pixels, a classifier must be obtained, specifically as follows:
S21: prepare the data. Each super-pixel block image obtained by the method of steps S11~S16 serves as a training sample. From the segmentation of 2000 input images, 10,000 target super-pixel blocks (covering people, vehicles, etc.), 30,000 shadow super-pixel blocks and 30,000 road-surface super-pixel block images are filtered out as training samples and labeled with the class labels {-1, 0, 1} respectively.
S22: extract features. First, using a pixel-value bin width of 32, compute the histogram of each RGB channel of the sample image; each channel yields a 256/32 = 8-dimensional histogram, and the 3 RGB channels together give a 24-dimensional histogram feature. Next, using an angular bin width of 20°, compute the gradient orientation histogram of the sample image, giving a 180°/20° = 9-dimensional histogram feature. Finally, concatenate the color histogram and the gradient orientation histogram into a 33-dimensional joint feature.
S23: train the classifier. The features extracted in step S22, each tagged with its class label, are input into an SVM for training, producing the target/shadow/road-surface classifier.
In a particular embodiment of the present invention, the classifier obtained in this step from the features of the training sample images is better suited to the current image. Of course, multiple classifiers may also be preset in the system and a different classifier selected for each image; such a change does not affect the protection scope of the present invention.
The training and classification in the steps described above are shown in Figure 4, a schematic flowchart of an offline training module for the shadow/target/road-surface classifier proposed by an embodiment of the present application: RGB color information and gradient orientation information are extracted from positive and negative shadow, target and road-surface samples and used for SVM training to obtain the classifier.
After the target/shadow/road-surface classifier has been obtained, the joint feature of each super-pixel block is input into it for SVM classification. It should be noted that an SVM can only perform two-class classification, whereas three classes (target, shadow, road surface) must be distinguished; after pairwise classification, the maximum-classification-confidence rule can be used to decide the class of the super-pixel block. On the test set, the target recall rate is 90.3%, the shadow recall rate is 96.4%, and the road-surface recall rate is 95.6%.
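The pairwise-then-vote decision can be sketched as follows (an illustrative sketch: the tie-breaking by summed margin is our reading of the "maximum classification confidence" rule, and the score convention is an assumption):

```python
def pairwise_vote(scores):
    """Combine binary SVMs into a 3-way decision: one classifier per
    class pair, each votes for one side, and the class with the most
    votes wins; ties are broken by the summed decision confidence.
    scores maps a class pair (a, b) to a signed margin: positive
    favors a, negative favors b.  Classes: -1 target, 0 shadow,
    1 road surface, per the labels of step S21."""
    classes = [-1, 0, 1]
    votes = {c: 0 for c in classes}
    conf = {c: 0.0 for c in classes}
    for (a, b), s in scores.items():
        winner = a if s >= 0 else b
        votes[winner] += 1
        conf[winner] += abs(s)
    return max(classes, key=lambda c: (votes[c], conf[c]))
```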
Step 204, target relocation.
This step precisely locates the classified targets using correlation, shape information and motion information, specifically as follows:
S31: preliminarily correct the classification results according to correlation. Super-pixel blocks are not independent: a super-pixel block belonging to a target is necessarily adjacent to another super-pixel block of that target, and likewise for shadow and background super-pixel blocks. If the classification result of a super-pixel block is P and the classification results of its 8-neighborhood super-pixel blocks are Pi (i = 0, 1, …, 7), then whenever P ≠ Pi for all i, the classification of this super-pixel can be considered wrong, and it is reset to the class with the maximum count among the 8 neighbors. Using this prior knowledge, any super-pixel whose classification result differs from all of its neighbors' can be preliminarily corrected.
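The neighborhood correction of step S31 can be sketched as follows (an illustrative sketch; labels use the {-1, 0, 1} convention of step S21):

```python
from collections import Counter

def correct_by_neighbors(label, neighbors):
    """Preliminary correction of step S31: a super-pixel whose class
    disagrees with all 8 of its neighbors is assumed misclassified
    and is reassigned the most common class among those neighbors;
    otherwise its label is kept."""
    if label in neighbors:
        return label    # at least one neighbor agrees: keep as-is
    return Counter(neighbors).most_common(1)[0][0]
```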
S32: correct the target shape according to shape information. Step S31 roughly determines the position of the target, but some super-pixels remain hard to judge and can make the shape of the target abnormal. For example, a vehicle is roughly square while non-motor vehicles and pedestrians are roughly rectangular; that is, the aspect ratio and size of a target lie within certain limits. Statistics show that the aspect ratio of a general target lies in the interval [β1, β2], taken as [0.4, 1.2] in a preferred embodiment. If the aspect ratio is abnormal, the super-pixels at the target boundary whose neighborhood results are classified as shadow or road surface are mostly judged to be classification errors; each is reset to the class with the maximum count among its 8 neighbors, correcting the target shape.
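The shape check of step S32 can be sketched as follows (an illustrative sketch; [0.4, 1.2] is the preferred-embodiment interval quoted above, and the function name is ours):

```python
def aspect_ratio_ok(width, height, lo=0.4, hi=1.2):
    """Step S32 shape check: the width-to-height ratio of a detected
    target is expected to fall in [beta1, beta2], taken as [0.4, 1.2]
    in the preferred embodiment.  An out-of-range ratio flags likely
    misclassified boundary super-pixels for re-correction."""
    return lo <= width / height <= hi
```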
S33: finely position the target according to motion information. Step S32 determines a fairly accurate position and size for the target, but it is still not precise enough, especially for classes such as motorcycles, bicycles and pedestrians, and for applications such as detecting target sub-feature attributes (e.g. hats, clothes, backpacks), accurate target positioning is essential. To this end, a histogram of the target motion information (e.g. inter-frame difference information) can be computed, and the maximum between-class variance method used to eliminate interfering motionless super-pixels at the border; the top, bottom, left and right extents of the target are then further corrected, and the accurate coordinate position of the target after shadow removal is finally output: ObjLoc':(Xp',Yp',Width',Height').
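The maximum between-class variance method (Otsu's method) used in step S33 can be sketched on a histogram of inter-frame difference magnitudes (an illustrative sketch; the histogram binning is an assumption):

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's method on a histogram: return the bin index t that
    maximizes the between-class variance
        sigma_b^2 = w0 * w1 * (mu0 - mu1)^2,
    where w0, w1 are the class masses below/above t and mu0, mu1
    their mean bin values.  In step S33 this separates moving-target
    super-pixels from motionless (shadow/road) ones."""
    hist = np.asarray(hist, dtype=float)
    total = hist.sum()
    bins = np.arange(len(hist))
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (bins[:t] * hist[:t]).sum() / w0
        mu1 = (bins[t:] * hist[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2 / total ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a bimodal histogram, the returned threshold falls in the valley between the two modes.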
The steps described above are shown in Figure 5, a schematic flowchart of a target relocation module proposed by an embodiment of the present application, which precisely locates the classified targets using correlation, shape information and motion information. Finally, the precisely located target is separated out in the image, as shown in Figure 6, a schematic diagram of a shadow removal result proposed by an embodiment of the present application.
Based on the same inventive concept as the method above, an embodiment of the present application further proposes a shadow removal apparatus, characterized in that the apparatus includes:
an extraction module 71, configured to pre-segment the input image to be detected using a super-pixel algorithm, obtain pre-segmentation super-pixels, and assign a seed point to each pre-segmentation super-pixel;
an acquisition module 72, configured to obtain the distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, take the minimum of the distance metric as the class label of each pixel, obtain the coordinate mean of the pixels in each class label, and take that coordinate mean as the new seed point;
a processing module 73, configured to iterate the distance metric computation using the weights of the color distance and the spatial distance until the new seed points no longer change, determine the new seed points as the final seed points, and determine the super-pixels according to the final seed points;
a locating module 74, configured to classify the super-pixels by material using a preset material classifier, remove the super-pixels classified as shadow material, and correct the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed.
Preferably, the apparatus further includes a classification module 75, configured to:
perform super-pixel segmentation on each input image to obtain super-pixels of each material as training samples;
obtain a joint feature from the pixel-value histogram and the gradient orientation histogram of the images in the training samples;
input the joint feature into an SVM trainer for training to obtain a classifier for each material.
Preferably, the extraction module 71 is specifically configured to:
obtain image coordinates from the input image to be detected;
pre-segment the image using the super-pixel algorithm into a number of equal-size pre-segmentation super-pixels, distribute seed points uniformly over the image of the pre-segmentation super-pixels, and assign a class label to each pixel in each pre-segmentation super-pixel.
Preferably, the acquisition module 72 is specifically configured to:
obtain, according to the seed point count, the gradient values of all pixels in each pre-segmentation super-pixel, and move the seed point in each pre-segmentation super-pixel to the location with the smallest gradient;
obtain the distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, take the minimum of the distance metric as the class label of each pixel, update the class label of each pixel and perform clustering, obtain the coordinate mean of the pixels in each class label, and take that coordinate mean as the new seed point.
Preferably, the locating module 74 is specifically configured to:
remove abnormally classified materials from the materials using correlation and shape information, and preliminarily correct the position and shape of the target;
obtain the target motion-information histogram of the super-pixels from the motion information and, according to that histogram, remove the motionless shadow material at the target boundary from the materials using the maximum between-class variance method;
correct the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed, and display the corrected target position.
In specific embodiments of the present invention, the modules may be integrated into one or deployed separately; the above modules may be merged into one module or further split into multiple sub-modules.
It can be seen that, by applying the technical solution of the present application, the input image to be detected is pre-segmented using a super-pixel algorithm and a seed point is assigned to each pre-segmentation super-pixel; the distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels is obtained, the minimum of the distance metric is taken as each pixel's class label, and the coordinate mean of the pixels in each class label is taken as the new seed point; the distance metric computation is iterated using the weights of the color distance and spatial distance until the new seed points no longer change, at which point they are the final seed points and define the super-pixels; the super-pixels are then classified by material and the shadow material is removed. The method does not need to consider the complexity of the background information: it decomposes the materials in the image directly into super-pixels, describes each super-pixel with fused features, and finally removes the super-pixels containing shadow from the classification, thereby obtaining the accurate coordinate position of the target.
From the description of the embodiments above, those skilled in the art can clearly understand that the embodiments of the present invention can be implemented in hardware, or in software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (e.g. a CD-ROM, USB flash drive or portable hard drive) and includes instructions that cause a computing device (a personal computer, server, network device, etc.) to execute the methods described in the implementation scenarios of the embodiments of the present invention.
Those skilled in the art will appreciate that the accompanying drawings are schematic diagrams of preferred implementation scenarios, and that the modules or flows in the drawings are not necessarily required for implementing the embodiments of the present invention.
Those skilled in the art will appreciate that the modules of the apparatus in an implementation scenario may be distributed across the apparatus as described, or may be changed accordingly and placed in one or more apparatuses other than that of the scenario. The modules of the above implementation scenarios may be merged into one module or further split into multiple sub-modules.
The numbering of the embodiments of the present invention is for description only and does not represent the merits of the implementation scenarios.
Only several specific implementation scenarios of the embodiments of the present invention are disclosed above; however, the embodiments of the present invention are not limited thereto, and any variation conceivable by a person skilled in the art shall fall within the scope defined by the embodiments of the present invention.
Claims (10)
1. A shadow removal method, characterized in that the method comprises:
pre-segmenting an input image to be detected using a super-pixel algorithm to obtain pre-segmentation super-pixels, and assigning a seed point to each pre-segmentation super-pixel;
obtaining a distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, taking the minimum of the distance metric as the class label of each pixel, obtaining the coordinate mean of the pixels in each class label, and taking the coordinate mean as a new seed point;
iterating the distance metric computation using weights of a color distance and a spatial distance until the new seed points no longer change, determining the new seed points as final seed points, and determining the super-pixels according to the final seed points;
classifying the super-pixels by material using a preset material classifier, removing the super-pixels classified as shadow material, and correcting the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed.
2. The method of claim 1, characterized in that before pre-segmenting the input image to be detected using the super-pixel algorithm, the method further comprises:
performing super-pixel segmentation on each input image to obtain super-pixels of each material as training samples;
obtaining a joint feature from a pixel-value histogram and a gradient orientation histogram of the images in the training samples;
inputting the joint feature into an SVM trainer for training to obtain a classifier for each material.
3. The method of claim 1, characterized in that pre-segmenting the input image to be detected using the super-pixel algorithm to obtain the pre-segmentation super-pixels, and assigning a seed point to each pre-segmentation super-pixel, specifically comprises:
obtaining image coordinates from the input image to be detected;
pre-segmenting the image using the super-pixel algorithm into a number of equal-size pre-segmentation super-pixels, distributing seed points uniformly over the image of the pre-segmentation super-pixels, and assigning a class label to each pixel in each pre-segmentation super-pixel.
4. The method of claim 1, characterized in that obtaining the distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, taking the minimum of the distance metric as the class label of each pixel, obtaining the coordinate mean of the pixels in each class label, and taking the coordinate mean as the new seed point, specifically comprises:
obtaining, according to the seed point count, the gradient values of all pixels in each pre-segmentation super-pixel, and moving the seed point in each pre-segmentation super-pixel to the location with the smallest gradient;
obtaining the distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, taking the minimum of the distance metric as the class label of each pixel, updating the class label of each pixel and clustering, obtaining the coordinate mean of the pixels in each class label, and taking the coordinate mean as the new seed point.
5. The method of claim 1, characterized in that classifying the super-pixels by material using the preset material classifier, removing the super-pixels classified as shadow material, and correcting the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed, specifically comprises:
removing abnormally classified materials from the materials using correlation and shape information, and preliminarily correcting the position and shape of the target;
obtaining a target motion-information histogram of the super-pixels from motion information and, according to the target motion-information histogram, removing the motionless shadow material at the target boundary from the materials using a maximum between-class variance method;
correcting the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed, and displaying the corrected target position.
6. A shadow removal apparatus, characterized in that the apparatus comprises:
an extraction module, configured to pre-segment an input image to be detected using a super-pixel algorithm to obtain pre-segmentation super-pixels, and assign a seed point to each pre-segmentation super-pixel;
an acquisition module, configured to obtain a distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, take the minimum of the distance metric as the class label of each pixel, obtain the coordinate mean of the pixels in each class label, and take the coordinate mean as a new seed point;
a processing module, configured to iterate the distance metric computation using weights of a color distance and a spatial distance until the new seed points no longer change, determine the new seed points as final seed points, and determine the super-pixels according to the final seed points;
a locating module, configured to classify the super-pixels by material using a preset material classifier, remove the super-pixels classified as shadow material, and correct the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed.
7. The apparatus of claim 6, characterized in that it further comprises a classification module configured to:
perform super-pixel segmentation on each input image to obtain super-pixels of each material as training samples;
obtain a joint feature from a pixel-value histogram and a gradient orientation histogram of the images in the training samples;
input the joint feature into an SVM trainer for training to obtain a classifier for each material.
8. The apparatus of claim 6, characterized in that the extraction module is specifically configured to:
obtain image coordinates from the input image to be detected;
pre-segment the image using the super-pixel algorithm into a number of equal-size pre-segmentation super-pixels, distribute seed points uniformly over the image of the pre-segmentation super-pixels, and assign a class label to each pixel in each pre-segmentation super-pixel.
9. The apparatus of claim 6, characterized in that the acquisition module is specifically configured to:
obtain, according to the seed point count, the gradient values of all pixels in each pre-segmentation super-pixel, and move the seed point in each pre-segmentation super-pixel to the location with the smallest gradient;
obtain the distance metric between each pixel in a pre-segmentation super-pixel and the seed points of adjacent pre-segmentation super-pixels, take the minimum of the distance metric as the class label of each pixel, update the class label of each pixel and perform clustering, obtain the coordinate mean of the pixels in each class label, and take the coordinate mean as the new seed point.
10. The apparatus of claim 6, characterized in that the locating module is specifically configured to:
remove abnormally classified materials from the materials using correlation and shape information, and preliminarily correct the position and shape of the target;
obtain a target motion-information histogram of the super-pixels from motion information and, according to the target motion-information histogram, remove the motionless shadow material at the target boundary from the materials using a maximum between-class variance method;
correct the position of the target in the image to be detected after the super-pixels corresponding to the shadow material have been removed, and display the corrected target position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610797642.8A CN106408529A (en) | 2016-08-31 | 2016-08-31 | Shadow removal method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610797642.8A CN106408529A (en) | 2016-08-31 | 2016-08-31 | Shadow removal method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106408529A true CN106408529A (en) | 2017-02-15 |
Family
ID=58000566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610797642.8A Pending CN106408529A (en) | 2016-08-31 | 2016-08-31 | Shadow removal method and apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106408529A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107016691A (en) * | 2017-04-14 | 2017-08-04 | 南京信息工程大学 | Moving target detecting method based on super-pixel feature |
CN107610040A (en) * | 2017-09-25 | 2018-01-19 | 郑州云海信息技术有限公司 | A kind of method, apparatus and system of the segmentation of super-pixel image |
CN107808366A (en) * | 2017-10-21 | 2018-03-16 | 天津大学 | A kind of adaptive optical transfer single width shadow removal method based on Block- matching |
CN108305269A (en) * | 2018-01-04 | 2018-07-20 | 北京大学深圳研究生院 | A kind of image partition method and system of binocular image |
CN108446707A (en) * | 2018-03-06 | 2018-08-24 | 北方工业大学 | Remote sensing image airplane detection method based on key point screening and DPM confirmation |
CN108596257A (en) * | 2018-04-26 | 2018-09-28 | 深圳市唯特视科技有限公司 | A kind of preferential scene analytic method in position based on space constraint |
CN109472794A (en) * | 2018-10-26 | 2019-03-15 | 北京中科晶上超媒体信息技术有限公司 | A kind of pair of image carries out the method and system of super-pixel segmentation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104680538A (en) * | 2015-03-09 | 2015-06-03 | 西安电子科技大学 | SAR image CFAR target detection method on basis of super pixels |
CN105678797A (en) * | 2016-03-04 | 2016-06-15 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Image segmentation method based on visual saliency model |
- 2016-08-31: application CN201610797642.8A filed in China (CN) — status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104680538A (en) * | 2015-03-09 | 2015-06-03 | 西安电子科技大学 | SAR image CFAR target detection method on basis of super pixels |
CN105678797A (en) * | 2016-03-04 | 2016-06-15 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | Image segmentation method based on visual saliency model |
Non-Patent Citations (1)
Title |
---|
Zhu Bo et al.: "Vehicle shadow detection method based on superpixels and support vector machine", Journal of Southeast University (Natural Science Edition) *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107016691A (en) * | 2017-04-14 | 2017-08-04 | 南京信息工程大学 | Moving target detecting method based on super-pixel feature |
CN107016691B (en) * | 2017-04-14 | 2019-09-27 | 南京信息工程大学 | Moving target detecting method based on super-pixel feature |
CN107610040A (en) * | 2017-09-25 | 2018-01-19 | 郑州云海信息技术有限公司 | A kind of method, apparatus and system of the segmentation of super-pixel image |
CN107808366A (en) * | 2017-10-21 | 2018-03-16 | 天津大学 | A kind of adaptive optical transfer single width shadow removal method based on Block- matching |
CN107808366B (en) * | 2017-10-21 | 2020-07-10 | 天津大学 | Self-adaptive light transfer single shadow removing method based on block matching |
CN108305269A (en) * | 2018-01-04 | 2018-07-20 | 北京大学深圳研究生院 | A kind of image partition method and system of binocular image |
CN108305269B (en) * | 2018-01-04 | 2022-05-10 | 北京大学深圳研究生院 | Image segmentation method and system for binocular image |
CN108446707A (en) * | 2018-03-06 | 2018-08-24 | 北方工业大学 | Remote sensing image airplane detection method based on key point screening and DPM confirmation |
CN108446707B (en) * | 2018-03-06 | 2020-11-24 | 北方工业大学 | Remote sensing image airplane detection method based on key point screening and DPM confirmation |
CN108596257A (en) * | 2018-04-26 | 2018-09-28 | 深圳市唯特视科技有限公司 | A kind of preferential scene analytic method in position based on space constraint |
CN109472794A (en) * | 2018-10-26 | 2019-03-15 | 北京中科晶上超媒体信息技术有限公司 | A kind of pair of image carries out the method and system of super-pixel segmentation |
CN109472794B (en) * | 2018-10-26 | 2021-03-09 | 北京中科晶上超媒体信息技术有限公司 | Method and system for performing superpixel segmentation on image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106408529A (en) | Shadow removal method and apparatus | |
CN107491762B (en) | Pedestrian detection method |
CN104951784B (en) | Real-time detection method for unlicensed vehicles and occluded license plates |
CN102542289B (en) | Pedestrian flow statistics method based on multiple Gaussian counting models |
CN103839065B (en) | Extraction method for dynamic crowd gathering characteristics | |
CN109447169A (en) | Image processing method, model training method, apparatus and electronic system |
CN110059581A (en) | People counting method based on depth information of scene | |
CN108009473A (en) | Video structuring processing method, system and storage device based on target behavior attributes |
CN109214420A (en) | High-texture image classification method and system based on visual saliency detection |
CN105975929A (en) | Fast pedestrian detection method based on aggregated channel features | |
CN106529448A (en) | Method for performing multi-view face detection by means of integral channel features |
CN104978567B (en) | Vehicle detection method based on scene classification |
CN104408429A (en) | Method and device for extracting representative frame of video | |
CN108171196A (en) | Face detection method and device |
CN103258432A (en) | Traffic accident automatic identification processing method and system based on videos | |
CN103914702A (en) | System and method for boosting object detection performance in videos | |
CN104202547A (en) | Method for extracting target object in projection picture, projection interaction method and system thereof | |
CN103035013A (en) | Accurate moving shadow detection method based on multi-feature fusion | |
CN104134079A (en) | Vehicle license plate recognition method based on extremal regions and extreme learning machine | |
CN110321769A (en) | Multi-size on-shelf commodity detection method |
CN106897681A (en) | Remote sensing image comparative analysis method and system |
CN104050684B (en) | Video moving object classification method and system based on online training |
CN104123529A (en) | Human hand detection method and system |
CN109918971A (en) | People counting method and device in surveillance video |
CN110378324A (en) | Face recognition algorithm evaluation method based on quality dimensions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170215