CN101882217A - Target classification method of video image and device - Google Patents

Target classification method of video image and device

Info

Publication number
CN101882217A
CN101882217A (application CN201010124317A; granted publication CN101882217B)
Authority
CN
China
Prior art keywords
moving target
target
value
color histogram
video image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010124317
Other languages
Chinese (zh)
Other versions
CN101882217B (en)
Inventor
蔡巍伟 (Cai Weiwei)
贾永华 (Jia Yonghua)
朱勇 (Zhu Yong)
胡扬忠 (Hu Yangzhong)
邬伟琪 (Wu Weiqi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Software Co Ltd filed Critical Hangzhou Hikvision Software Co Ltd
Priority to CN2010101243178A priority Critical patent/CN101882217B/en
Publication of CN101882217A publication Critical patent/CN101882217A/en
Application granted granted Critical
Publication of CN101882217B publication Critical patent/CN101882217B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The embodiments of the present application disclose a target classification method and device for video images. The method comprises the following steps: receiving a video image, filtering the foreground blobs obtained from the video image, and taking the foreground blobs that meet preset filter conditions as moving targets; tracking each moving target with a mean-shift iterative algorithm and extracting the moving target at the result position of the tracking; normalizing the extracted moving target and scanning the contour of the normalized moving target to obtain feature statistics; and determining the type of the moving target according to the feature statistics. The embodiments classify targets by their contour features, which improves classification accuracy; by using scale factors to normalize the size of the moving target, they overcome the inaccurate aspect-ratio features produced by existing normalization methods; and by computing the color histogram through a joint probability distribution, they reduce the amount of color-histogram data.

Description

Target classification method and device for video images
Technical field
The present application relates to the field of computer image processing, and in particular to a target classification method and device for video images.
Background technology
Intelligent video analysis refers to automatic content analysis of input video images by a computer, for example to determine whether a vehicle or a person appears in the video picture, or whether a target has entered a preset alarm region. In intelligent video analysis applications, targets are usually divided into two classes, people and vehicles. For example, if an alarm region is set in the video picture, the intelligent video analysis system should automatically output alarm information when a vehicle drives into this region, but not when a person enters it. The system therefore needs to track the targets appearing in the video picture, automatically classify the tracking results to determine the type of each target, and then selectively output alarm information.
When classifying targets, a prior-art intelligent video analysis system extracts the moving targets in a video image by analyzing the difference between the input video image and a background image, taking the regions with large differences as target positions; it tracks the extracted moving targets to determine their positions at different points in time; it extracts features describing each target's shape and motion characteristics, which generally include shape features such as the aspect ratio and the contour-perimeter ratio, as well as features describing the target's motion over time; finally, the extracted target features are input into a pre-trained classifier, which outputs the classification result.
In researching and practicing the prior art, the inventors found the following problems. When tracking a video target, the prior art needs both image feature information and motion information of the target, and usually takes the target's color histogram as one feature of the tracking process; because a conventional histogram occupies a large amount of memory, it hinders fast execution of the tracking algorithm. In addition, classification accuracy depends mainly on which features are selected: shape features such as the aspect ratio and the contour-perimeter ratio cannot maintain good classification when the viewing angle of the video capture device changes; and as for motion characteristics, when the target is small, the swing of a person's limbs is not reflected in the image, so the person's motion characteristics are essentially the same as a vehicle's, making accurate classification difficult.
Summary of the invention
The purpose of the embodiments of the present application is to provide a target classification method and device for video images, so as to solve the prior-art problems that the tracking algorithm is difficult to execute quickly during target classification, and that classification is inaccurate in particular cases.
To solve the above technical problems, the embodiments of the present application provide the following technical solutions:
A target classification method for video images comprises:
after receiving a video image, filtering the foreground blobs obtained from the video image, and taking the foreground blobs that meet preset filter conditions as moving targets;
tracking each moving target with a mean-shift iterative algorithm, and extracting the moving target at the result position of the tracking;
after normalizing the extracted moving target, scanning the contour of the normalized moving target to obtain feature statistics;
determining the type of the moving target according to the feature statistics.
After receiving the video image, the method further comprises:
comparing pixels in the video image with background pixels in a preset background image model to obtain foreground pixels;
extracting sets of pixels with closed contours from the foreground pixels as foreground blobs.
The preset filter conditions comprise at least one of the following:
a preset threshold for how long the foreground blob's trajectory persists in time;
a preset motion characteristic that the foreground blob's trajectory should meet;
a preset threshold for the foreground blob's movement speed.
Tracking the moving target with the mean-shift iterative algorithm comprises:
initializing the moving target, including updating the Kalman filter corresponding to the moving target and taking the moving target's color histogram as the target color histogram;
predicting the movement position of the moving target with the Kalman filter to obtain a predicted position;
taking the predicted position as the current iteration position, computing the color histogram at the current iteration position, and calculating the gradient between that histogram and the target color histogram, where the color histogram is a compressed color histogram calculated through a joint probability distribution;
judging whether the gradient satisfies a preset iteration condition: if so, outputting the tracking position of the moving target; otherwise, shifting to the next iteration position according to the gradient and returning to the step of computing the color histogram at the current iteration position.
Normalizing the extracted moving target comprises:
obtaining the width and height of a preset target template, and the width and height of the moving target;
computing a width scale factor from the template width and the target width, and a height scale factor from the template height and the target height;
scaling the moving target by the width scale factor and the height scale factor.
Scanning the contour of the normalized moving target to obtain the statistics comprises:
setting the bounding rectangle of the moving target according to its contour;
taking points on the four sides of the bounding rectangle as starting points, in counter-clockwise order, and drawing contour scan lines perpendicular to the four sides;
recording the scan-line distance from each starting point to the target's contour, and taking these distances as the features of the moving target to obtain its feature statistics.
Determining the type of the moving target according to the feature statistics comprises:
inputting the feature statistics into a pre-trained support vector machine classifier, in which the moving-target types corresponding to different feature statistics are stored;
obtaining the type of the moving target from the comparison result output by the support vector machine classifier.
A target classification device for video images comprises:
a receiving unit, configured to receive a video image;
a filtering unit, configured to filter the foreground blobs obtained from the video image and take the foreground blobs that meet preset filter conditions as moving targets;
a tracking unit, configured to track each moving target with a mean-shift iterative algorithm and extract the moving target at the result position of the tracking;
a normalization unit, configured to normalize the extracted moving target;
a scanning unit, configured to scan the contour of the normalized moving target to obtain feature statistics;
a determining unit, configured to determine the type of the moving target according to the feature statistics.
The device further comprises:
a comparing unit, configured to compare pixels in the video image received by the receiving unit with background pixels in a preset background image model to obtain foreground pixels;
an extraction unit, configured to extract sets of pixels with closed contours from the foreground pixels as foreground blobs.
The tracking unit comprises:
an initialization subunit, configured to initialize the moving target, including updating the Kalman filter corresponding to the moving target and taking the moving target's color histogram as the target color histogram;
a position-prediction subunit, configured to predict the movement position of the moving target with the Kalman filter to obtain a predicted position;
an iterative-calculation subunit, configured to take the predicted position as the current iteration position, compute the color histogram at the current iteration position, and calculate the gradient between that histogram and the target color histogram, where the color histogram is a compressed color histogram calculated through a joint probability distribution;
an iteration-judgment subunit, configured to judge whether the gradient satisfies a preset iteration condition;
a result-execution subunit, configured to output the tracking position of the moving target when the judgment result is yes, and otherwise to shift to the next iteration position according to the gradient and return to the iterative-calculation subunit.
The normalization unit comprises:
a boundary-value acquisition subunit, configured to obtain the width and height of a preset target template, and the width and height of the moving target;
a scale-factor calculation subunit, configured to compute a width scale factor from the template width and the target width, and a height scale factor from the template height and the target height;
a target-scaling subunit, configured to scale the moving target by the width scale factor and the height scale factor.
The scanning unit comprises:
a contour-setting subunit, configured to set the bounding rectangle of the moving target according to its contour;
a scan-execution subunit, configured to take points on the four sides of the bounding rectangle as starting points, in counter-clockwise order, and draw contour scan lines perpendicular to the four sides;
a statistics-recording subunit, configured to record the scan-line distance from each starting point to the target's contour and take these distances as the features of the moving target to obtain its feature statistics.
The determining unit comprises:
a feature-input subunit, configured to input the feature statistics into a pre-trained support vector machine classifier, in which the moving-target types corresponding to different feature statistics are stored;
a target-type acquisition subunit, configured to obtain the type of the moving target from the comparison result output by the support vector machine classifier.
As can be seen, in the embodiments of the present application, after a video image is received, the foreground blobs obtained from the video image are filtered and those meeting the preset filter conditions are taken as moving targets; each moving target is tracked with a mean-shift iterative algorithm and extracted at the result position of the tracking; after the extracted moving target is normalized, its contour is scanned to obtain feature statistics, and the type of the moving target is determined from these statistics. The embodiments classify targets by their contour features, which improves classification accuracy: contour features are highly discriminative, allow accurate classification, and require little computation, so they run fast in real-time systems. Scaling the moving target with scale factors preserves its aspect-ratio features and overcomes the inaccurate width-to-height features produced by existing normalization methods. In addition, computing the color histogram through a joint probability distribution reduces the amount of histogram data and correspondingly increases computation speed.
Description of drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a first embodiment of the target classification method for video images of the present application;
Fig. 2 is a flowchart of a second embodiment of the target classification method for video images of the present application;
Fig. 3 is a block diagram of a first embodiment of the target classification device for video images of the present application;
Fig. 4A is a block diagram of a second embodiment of the target classification device for video images of the present application;
Fig. 4B is a block diagram of the tracking unit in the second device embodiment of the present application;
Fig. 4C is a block diagram of the normalization unit in the second device embodiment of the present application;
Fig. 4D is a block diagram of the scanning unit in the second device embodiment of the present application;
Fig. 4E is a block diagram of the determining unit in the second device embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application provide a target classification method for video images and a target classification device for video images.
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, and to make the above purposes, features, and advantages of the embodiments more apparent, the technical solutions are described in further detail below with reference to the drawings.
Referring to Fig. 1, a flowchart of the first embodiment of the target classification method for video images of the present application:
Step 101: after receiving a video image, filter the foreground blobs obtained from the video image, and take the foreground blobs that meet preset filter conditions as moving targets.
The preset filter conditions may comprise at least one of the following: a preset threshold for how long the foreground blob's trajectory persists in time; a preset motion characteristic that the foreground blob's trajectory should meet; and a preset threshold for the foreground blob's movement speed.
Step 102: track each moving target with a mean-shift iterative algorithm, and extract the moving target at the result position of the tracking.
Specifically, the filtered moving target is initialized, which includes updating the Kalman filter corresponding to the moving target and taking the moving target's color histogram as the target color histogram. The Kalman filter then predicts the movement position of the moving target to obtain a predicted position. Taking the predicted position as the current iteration position, the color histogram at the current iteration position is computed, and the gradient between that histogram and the target color histogram is calculated; this color histogram is a compressed color histogram calculated through a joint probability distribution. It is then judged whether the gradient satisfies the preset iteration condition: if so, the tracking position of the moving target is output; otherwise, the position is shifted to the next iteration position according to the gradient, and the procedure returns to the step of computing the color histogram at the current iteration position, until iteration finishes and the tracking position is output.
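The iterate-until-converged structure of step 102 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 2-D position, the caller-supplied `gradient_at` callback, and the threshold and iteration budget are all assumptions introduced here.

```python
# Minimal sketch of the step-102 tracking loop: start from the Kalman-predicted
# position and shift along the histogram gradient until it is small enough or
# the iteration budget is spent. `gradient_at` is a hypothetical callback.
def track(predicted_pos, gradient_at, eps=0.5, max_iter=20):
    pos = predicted_pos
    for _ in range(max_iter):
        gx, gy = gradient_at(pos)
        # Preset iteration condition: stop once the gradient is small enough.
        if abs(gx) < eps and abs(gy) < eps:
            break
        # Otherwise shift to the next iteration position along the gradient.
        pos = (pos[0] + gx, pos[1] + gy)
    return pos
```

With a gradient that pulls toward a fixed point, the loop converges to (and outputs) a position near that point, mirroring the "output tracking position when the gradient is small" termination rule.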
Step 103: after normalizing the extracted moving target, scan the contour of the normalized moving target to obtain feature statistics.
Specifically, obtain the width and height of the preset target template and the width and height of the moving target; compute a width scale factor from the template width and the target width, and a height scale factor from the template height and the target height; and scale the moving target by the computed width and height scale factors. Then set the bounding rectangle of the scaled moving target according to its contour, take points on the four sides of the bounding rectangle as starting points in counter-clockwise order, draw contour scan lines perpendicular to the four sides, and record the scan-line distance from each starting point to the target's contour; these distances are taken as the features of the moving target to obtain its feature statistics.
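A sketch of the two parts of step 103 follows, under simplifying assumptions: the blob is a binary 2-D mask, and for brevity only the scan from the left side of the bounding rectangle is shown (the patent scans from all four sides, counter-clockwise). The function names are illustrative.

```python
# Step 103 sketch: per-axis scale factors, then scan-line distances to the contour.
def scale_factors(template_w, template_h, target_w, target_h):
    """Separate width/height factors, so the aspect-ratio information survives."""
    return template_w / target_w, template_h / target_h

def scanline_features(mask):
    """For each row of a binary mask, the distance from the left edge of the
    bounding rectangle to the first contour (foreground) pixel."""
    features = []
    for row in mask:
        d = next((i for i, v in enumerate(row) if v), len(row))
        features.append(d)
    return features
```

Rows with no foreground pixel report the full width, i.e. the scan line crossed the rectangle without meeting the contour.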
Step 104: determine the type of the moving target according to the feature statistics, completing the current flow.
Specifically, the feature statistics are input into a pre-trained support vector machine classifier, in which the moving-target types corresponding to different feature statistics are stored; the type of the moving target is obtained from the comparison result output by the classifier.
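As a stand-in for the pre-trained SVM, here is only the decision rule of a linear SVM over the scan-line feature vector; the weights, bias, and class labels are hypothetical, and a real system would obtain them by training on labeled person/vehicle samples (the patent does not disclose them).

```python
# Hypothetical linear-SVM decision rule for the person/vehicle two-class case.
def classify(features, weights, bias):
    """Sign of the decision function w.x + b selects the class label."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "vehicle" if score >= 0 else "person"
```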
Referring to Fig. 2, a flowchart of the second embodiment of the target classification method for video images of the present application; this embodiment shows in detail the process of classifying the moving targets in a video image:
Step 201: receive a video image.
Step 202: compare pixels in the video image with background pixels in a preset background image model to obtain foreground pixels.
In the embodiments of the present application, a background image model describing the fixed scene and the regular motion in the video needs to be set in advance. The background image model can be built from a number of pre-collected video images and updated at preset time intervals. For example, the background model may contain fixed scene elements such as a vehicle parked on the road surface, and regular motion such as the rotation of an electric fan. The background model must be updated because, for instance, a car may drive into the initially built background scene and, once parked, become part of the fixed scene; the background model then needs to be updated to add this car.
The video image in the background model consists of background pixels. To track a moving target, the foreground pixels in the current video image must first be determined. A foreground pixel is a pixel in the video image that has changed relative to the corresponding background pixel in the background model, i.e. a moving pixel. Moving foreground pixels can be identified by computing the difference between each pixel in the current input video image and the corresponding background pixel: a difference threshold is preset, and when the difference exceeds this threshold the pixel is judged to be a foreground pixel. Assuming the current pixel of the video image is X, the judgment can be made by the following formula (1):
F(x) = { 1, if (x − b(x)) ≥ T
       { 0, if (x − b(x)) < T        (1)
In formula (1), x denotes the pixel value of the current pixel X, b(x) denotes the pixel value of the corresponding background pixel, and T denotes the difference threshold. For each pixel in the video image, the difference from its corresponding background pixel is evaluated: when the difference is not less than T, the current pixel is confirmed to be a foreground pixel and F(x) takes the value 1; when the difference is less than T, the current pixel is not a foreground pixel and F(x) takes the value 0. By formula (1), the foreground and background pixels in the video image can be determined.
In addition, the background image model can absorb illumination changes in the environment, filter out interference from rain and snow, and maintain itself by combining information from multiple frames. After foreground detection with the background image model, white pixels in the output video image represent foreground pixels and black pixels represent background pixels.
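The per-pixel test of formula (1) can be sketched as follows, assuming grayscale pixel values stored as nested lists; the function name is illustrative. The signed difference is used exactly as formula (1) writes it (an absolute difference is a common variant, but not what the formula states).

```python
# Sketch of formula (1): mark a pixel foreground (1) when its difference from
# the background model reaches the preset threshold T, else background (0).
def foreground_mask(frame, background, threshold):
    mask = []
    for row_f, row_b in zip(frame, background):
        mask.append([1 if (x - b) >= threshold else 0
                     for x, b in zip(row_f, row_b)])
    return mask
```

The resulting binary mask corresponds to the white/black foreground image described above.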
Step 203: extract sets of pixels with closed contours from the foreground pixels as foreground blobs.
After foreground detection has determined the foreground pixels in the input video image, the targets to be tracked must be further determined from these foreground pixels. The foreground pixels corresponding to a moving target are usually spatially continuous and appear in the video image as a foreground blob. The contours of these blobs are normally closed, and each blob corresponds to a unique contour; by tracing these contours, the foreground blobs in the video image can be marked.
Step 204: judge whether a foreground blob meets the preset filter conditions; if so, execute step 205; otherwise, filter the blob out.
Not all of the marked foreground blobs are moving targets that need to be tracked. In many application scenarios, background disturbances in the video image can produce foreground blobs in the foreground detection output, so the blobs containing spurious-target information need to be filtered out; otherwise the system will produce a large amount of false alarm information.
When filtering foreground blobs, the generated blobs are tracked, their trajectories are recorded, and it is judged whether these trajectories meet preset motion rules; when a blob meets the set motion rules, it corresponds to a real moving target. Motion rules that may be set in the embodiments of the present application are illustrated as follows:
Rule one: judge whether the duration of the trajectory in time meets a preset time threshold. For example, with a preset threshold of 2 seconds, if the trajectory of a tracked foreground blob lasts more than 2 seconds, the blob is determined to be a moving target; otherwise it is determined to be a short-term disturbance in the video image and is no longer tracked;
Rule two: judge whether the trajectory's motion characteristics match those of a moving target. For example, with a preset motion rule of five pixels, if the trajectory of a tracked foreground blob moves more than five pixels, the blob is determined to be a moving target; otherwise it is filtered out;
Rule three: judge whether the movement speed meets a preset speed threshold. For example, with a preset speed of 20 pixels per second, if the movement speed of a tracked foreground blob is less than 20 pixels per second, the blob is determined to be a moving target; otherwise it is filtered out.
The motion rules in the embodiments of the present application are not limited to the three rules exemplified above; other motion rules can be set according to actual needs, and the set of motion rules may also be any combination of these rules.
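Rules one to three can be combined into one filter as sketched below. The thresholds reuse the example values from the rules (2 seconds, 5 pixels, 20 pixels/second), and the trajectory representation as (time, x, y) samples is an assumption introduced here.

```python
import math

# Trajectory-based blob filter combining rules one to three; `track` is a list
# of (t, x, y) samples of the blob's recorded trajectory.
def is_moving_target(track, min_duration=2.0, min_displacement=5.0, max_speed=20.0):
    if len(track) < 2:
        return False
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    duration = t1 - t0                          # rule one: persistence in time
    displacement = math.hypot(x1 - x0, y1 - y0)  # rule two: enough motion
    speed = displacement / duration if duration > 0 else float("inf")
    # Rule three: speed must stay below the preset threshold.
    return duration >= min_duration and displacement > min_displacement and speed < max_speed
```

A blob failing any enabled rule is filtered out, as in step 204.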
Step 205: initialize the moving target.
A foreground blob remaining after disturbance filtering is taken as a moving target. Before tracking a moving target, it can first be initialized, which includes updating the Kalman filter corresponding to the moving target and taking the moving target's color histogram as the target color histogram, saved as the target feature template.
Step 206: predict the movement position of the moving target with the Kalman filter to obtain a predicted position.
The purpose of tracking a moving target is to establish the correspondence of its positions in time, i.e. its trajectory. When determining the position of a moving target in the current video image, the Kalman filter corresponding to the target is first used to predict its position. The Kalman filter stores the target's velocity and direction information and can predict the target's position at the next moment, as shown in the following formula:
state_post = T × state_pre    (2)

In formula (2), state_pre = [x′, y′, vx′, vy′, ax′, ay′]ᵀ is the Kalman filter's corrected state for the moving target at the previous moment (coordinates, velocity, and acceleration), state_post is the predicted state of the moving target, whose first two components give the predicted position, and T is the transition matrix of the Kalman filter. For a unit time step, T takes the constant-acceleration form:

T = | 1 0 1 0 0.5 0   |
    | 0 1 0 1 0   0.5 |
    | 0 0 1 0 1   0   |
    | 0 0 0 1 0   1   |
    | 0 0 0 0 1   0   |
    | 0 0 0 0 0   1   |
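The predict step of formula (2) is a single matrix-vector product, sketched below. The matrix values assume a unit time step and the state layout [x, y, vx, vy, ax, ay] (position, velocity, acceleration), reconstructed from the description above; the function name is illustrative.

```python
# Constant-acceleration transition matrix T for a unit time step,
# acting on state_pre = [x, y, vx, vy, ax, ay].
T = [
    [1, 0, 1, 0, 0.5, 0],
    [0, 1, 0, 1, 0, 0.5],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
]

def predict(state_pre):
    """state_post = T x state_pre; the first two entries are the predicted x, y."""
    return [sum(t * s for t, s in zip(row, state_pre)) for row in T]
```

For example, a target at the origin with velocity (2, 3) and acceleration (2, 4) is predicted one step ahead at position (3, 5), i.e. x + vx + 0.5·ax and y + vy + 0.5·ay.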
Step 207: take the predicted position as the current iteration position, compute the color histogram at the current iteration position, and calculate the gradient between that histogram and the target color histogram.
In the embodiments of the present application, mean-shift iterative tracking is performed starting at the predicted position of the moving target: at the position predicted by the Kalman filter, the YUV color histogram at the current iteration position is computed, and the gradient between this histogram and the target histogram is calculated. When the gradient is less than a preset threshold, or the number of iterations exceeds a preset maximum, tracking ends and the tracking position is output; otherwise the position is shifted by the gradient to the next iteration position, and the procedure returns to the step of computing the YUV color histogram at the current iteration position, until the gradient falls below the preset threshold or the iteration count exceeds the maximum, at which point tracking ends and the tracking position is output.
The YUV color histogram represents the probability with which each YUV value occurs in the image. Suppose a YUV value (y, u, v); in the prior art, the probability P(y, u, v) of this value occurring in the image is computed by the following formula:

$$P(y,u,v)=\frac{1}{M\times N}\sum_{x=0}^{N}\sum_{j=0}^{M} I\big(YUV(x,j)=(y,u,v)\big)\qquad(3)$$

In formula (3), I(·) is an indicator function that equals 1 when the pixel at position (x, j) has the YUV value (y, u, v) and 0 otherwise; YUV(x, j) denotes the YUV value of the image at position (x, j), and M and N are the height and width of the image respectively. As the prior art shows, representing the probability of every YUV value in an image requires 256 × 256 × 256 = 16777216 storage locations in total, i.e. a large amount of storage space. Considering the independence of the components in YUV space, this embodiment of the present application uses an approximate description of the color histogram, as shown in formula (4):
P(y,u,v) = P(y) · P(u) · P(v)    (4)

In formula (4),

$$P(y)=\frac{1}{M\times N}\sum_{x=0}^{N}\sum_{j=0}^{M} I\big(Y(x,j)=y\big)$$

$$P(u)=\frac{1}{M\times N}\sum_{x=0}^{N}\sum_{j=0}^{M} I\big(U(x,j)=u\big)$$

$$P(v)=\frac{1}{M\times N}\sum_{x=0}^{N}\sum_{j=0}^{M} I\big(V(x,j)=v\big)$$

where I(·) is an indicator function as in formula (3), and Y(x, j), U(x, j) and V(x, j) denote the Y, U and V component values of the video image at position (x, j) respectively. In formula (4), M and N are the height and width of the image respectively. Because this embodiment of the present application computes a compressed color histogram through the joint probability distribution of the three marginals, describing a color histogram requires only 256 + 256 + 256 = 768 storage locations. It follows that the color histogram of the moving target obtained in this embodiment occupies far less memory than the prior-art color histogram, and the amount of data involved in the computation is greatly reduced.
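A minimal sketch of the compressed histogram of formula (4): three independent 256-bin marginals (768 values in total) instead of the 256³-entry joint table. The function name and array layout below are our own choices for illustration:

```python
import numpy as np

def marginal_yuv_histogram(yuv: np.ndarray) -> np.ndarray:
    """Compressed color histogram from formula (4): three independent
    256-bin marginals (768 values) instead of one 256^3 joint table.
    `yuv` is an HxWx3 uint8 image in YUV color space."""
    h = np.concatenate([
        np.bincount(yuv[..., c].ravel(), minlength=256) for c in range(3)
    ]).astype(float)
    return h / yuv[..., 0].size  # each 256-bin marginal sums to 1

img = np.zeros((4, 4, 3), dtype=np.uint8)  # every pixel is YUV (0, 0, 0)
hist = marginal_yuv_histogram(img)
print(hist.shape)  # (768,)
```

The joint probability is then approximated as P(y, u, v) ≈ hist[y] · hist[256 + u] · hist[512 + v].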
Step 208: judge, according to the gradient value, whether the preset iteration condition is satisfied; if so, execute step 209; otherwise shift to the next iteration position according to the gradient value and return to step 207.
Step 209: output the tracking position of the moving target, and extract the moving target at the tracking position. A foreground blob is searched for at the final position of the aforementioned mean shift tracking. If no foreground blob exists at that position, tracking has failed; if a foreground blob exists, its size and center position are used as the tracking result of the moving target. The tracking result of the moving target can then be used to correct the parameters and the state_pre value of the Kalman filter, and the YUV color histogram of the moving target over its current region is computed and used to update the target feature template.
Step 210: calculate the width zoom factor and the height zoom factor of the moving target from the width and height values of the preset target template and the width and height values of the moving target.
The moving target obtained after target extraction and tracking may differ considerably in size from the target image, so the moving target must be size-normalized. For example, targets can be uniformly scaled to a target template 40 pixels wide and 40 pixels high. To preserve the moving target's aspect ratio during scaling, the zoom factors in the width and height directions are chosen according to the target's aspect ratio, which guarantees that the target's aspect ratio is identical before and after size normalization. Suppose the current moving target has width value w and height value h; it is size-normalized according to the following formula (5):

    if (w > h):
        scale_w = 40 / w
        scale_h = 40 / w          (5)
    if (w <= h):
        scale_w = 40 / h
        scale_h = 40 / h

In formula (5), scale_w and scale_h denote the zoom factors in the width and height directions respectively.
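Formula (5) reduces to a single shared factor derived from the larger dimension. A small sketch (the helper name is illustrative, not from the patent):

```python
def scale_factors(w: int, h: int, template: int = 40):
    """Formula (5): one common zoom factor derived from the larger
    dimension, so the target fits the 40x40 template while keeping
    its aspect ratio."""
    s = template / w if w > h else template / h
    return s, s  # (scale_w, scale_h)

# An 80x20 target shrinks by 40/80 = 0.5 in both directions, giving
# 40x10 -- the 4:1 aspect ratio survives normalization.
sw, sh = scale_factors(80, 20)
print(sw, sh)  # 0.5 0.5
```

Using the same factor in both directions is exactly what preserves the width-to-height ratio that the classifier later relies on.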
Step 211: scale the moving target by the width zoom factor and the height zoom factor.
Step 212: set the bounding rectangle of the moving target according to the target's contour, and, proceeding counterclockwise, take the points on the four edges of the bounding rectangle as starting points and draw contour scan lines perpendicular to the four edges.
After the normalized moving target is obtained, its bounding rectangle is constructed from the target's contour, with all four edges of the rectangle tangent to the contour. Starting from the upper-left corner of the target's bounding rectangle and proceeding counterclockwise, a contour scan line perpendicular to the rectangle edge is drawn at each point of the four edges.
Step 213: record the scan-line distance from each starting point to the contour of the moving target, and take these distances as the target's features to obtain the feature statistic values of the moving target.
The distance along each scan line from the rectangle edge to the target contour is recorded, and these distances serve as the contour feature statistic values of the target. In practical applications, to reduce the number of feature statistic values, one feature value can be kept per four scan lines, i.e. the mean of four contour scan-line lengths, which greatly reduces the amount of data.
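One of the four scan directions of step 213 can be sketched as follows, assuming the normalized target is available as a binary silhouette mask. The function and its simplifications are our own (the real device scans all four edges and additionally averages groups of four scan lines):

```python
import numpy as np

def top_edge_distances(mask: np.ndarray) -> np.ndarray:
    """For each column, distance from the top edge of the bounding box
    to the first silhouette pixel -- one of the four perpendicular scan
    directions described in step 213 (simplified sketch)."""
    hits = mask.any(axis=0)
    first = mask.argmax(axis=0).astype(float)  # row of first True per column
    first[~hits] = mask.shape[0]               # no contour hit: full height
    return first

mask = np.zeros((4, 3), dtype=bool)
mask[2, 0] = True   # column 0: contour first met at row 2
mask[0, 1] = True   # column 1: contour at row 0
# column 2 stays empty, so its scan line runs the full height
print(top_edge_distances(mask))  # [2. 0. 4.]
```

Repeating this from the bottom, left and right edges, and averaging every four distances, yields the compact contour feature vector fed to the classifier.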
Step 214: input the feature statistic values into the pre-trained Support Vector Machine classifier.
The Support Vector Machine classifier stores the moving-target types corresponding to different feature statistic values. In this embodiment of the present application, the extracted contour feature statistic values are normalized so that each feature statistic value is scaled into the range 0-1. Similar to the prior art, the feature statistic values are input into a pre-trained SVM (Support Vector Machine) classifier for classification, and the type of the target is determined from the classifier's output.
Step 215: obtain the type of the moving target from the comparison result output by the Support Vector Machine classifier, and end the current flow.
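A toy illustration of steps 214-215 using scikit-learn's SVC as a stand-in classifier. The synthetic "contour profiles", feature dimensionality and labels below are invented for demonstration and are not the patent's real training data:

```python
import numpy as np
from sklearn.svm import SVC

# Contour-distance features scaled to [0, 1], one row per training target.
# Class 0 and class 1 are two invented, well-separated feature profiles.
rng = np.random.default_rng(0)
person_like = rng.uniform(0.0, 0.3, size=(20, 8))   # label 0
vehicle_like = rng.uniform(0.7, 1.0, size=(20, 8))  # label 1
X = np.vstack([person_like, vehicle_like])
y = np.array([0] * 20 + [1] * 20)

# "Pre-trained" classifier (step 214), then classify a new target (step 215).
clf = SVC(kernel="linear").fit(X, y)
pred = clf.predict([[0.85] * 8])   # clearly in the class-1 region
print(pred[0])  # 1
```

In a deployment the training rows would be contour feature vectors extracted from labeled video targets (e.g. person vs. vehicle), scaled to 0-1 as step 214 describes.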
Corresponding to the embodiments of the target classification method for video images of the present application, the present application also provides embodiments of a target classification device for video images.
Referring to Fig. 3, a block diagram of the first embodiment of the target classification device for video images of the present application, the device comprises: a receiving unit 310, a filter unit 320, a tracking unit 330, a normalization unit 340, a scanning unit 350 and a determining unit 360.
The receiving unit 310 is used to receive a video image;
the filter unit 320 is used to filter the foreground blobs obtained from the video image and take the foreground blobs meeting preset filter conditions as moving targets;
the tracking unit 330 is used to track the moving target by a mean shift algorithm and extract the moving target at the tracking result position;
the normalization unit 340 is used to normalize the extracted moving target;
the scanning unit 350 is used to scan the contour of the normalized moving target to obtain feature statistic values;
the determining unit 360 is used to determine the type of the moving target according to the feature statistic values.
Referring to Fig. 4A, a block diagram of the second embodiment of the target classification device for video images of the present application, the device comprises: a receiving unit 310, a comparing unit 370, an extraction unit 380, a filter unit 320, a tracking unit 330, a normalization unit 340, a scanning unit 350 and a determining unit 360.
The receiving unit 310 is used to receive a video image;
the comparing unit 370 is used to compare the pixels in the video image received by the receiving unit 310 with the background pixels in a preset background image model to obtain foreground pixels;
the extraction unit 380 is used to extract sets of pixels with closed contours from the foreground pixels as foreground blobs;
the filter unit 320 is used to filter the foreground blobs obtained from the video image and take the foreground blobs meeting preset filter conditions as moving targets;
the tracking unit 330 is used to track the moving target by a mean shift algorithm and extract the moving target at the tracking result position;
the normalization unit 340 is used to normalize the extracted moving target;
the scanning unit 350 is used to scan the contour of the normalized moving target to obtain feature statistic values;
the determining unit 360 is used to determine the type of the moving target according to the feature statistic values.
Specifically, referring to Fig. 4B, a block diagram of an embodiment of the tracking unit 330, which comprises:
an initialization subunit 331, used to initialize the moving target, including updating the Kalman filter corresponding to the moving target and taking the color histogram of the moving target as the target color histogram;
a position prediction subunit 332, used to predict the movement position of the moving target through the Kalman filter to obtain a predicted position;
an iterative computation subunit 333, used to take the predicted position as the current iteration position, compute the color histogram at the current iteration position, and calculate the gradient value between this color histogram and the target color histogram, the color histogram being a compressed color histogram computed by joint probability distribution;
an iteration judgment subunit 334, used to judge whether the gradient value satisfies the preset iteration condition;
a result execution subunit 335, used to output the tracking position of the moving target when the judgment result of the iteration judgment subunit 334 is yes, and, when the judgment result of the iteration judgment subunit 334 is no, shift to the next iteration position according to the gradient value and return to the iterative computation subunit 333.
Specifically, referring to Fig. 4C, a block diagram of an embodiment of the normalization unit 340, which comprises:
a boundary value obtaining subunit 341, used to obtain the width value and height value of the preset target template, and the width value and height value of the moving target;
a zoom factor computation subunit 342, used to calculate a width zoom factor from the width value of the target template and the width value of the moving target, and calculate a height zoom factor from the height value of the target template and the height value of the moving target;
a target scaling subunit 343, used to scale the moving target by the width zoom factor and height zoom factor.
Specifically, referring to Fig. 4D, a block diagram of an embodiment of the scanning unit 350, which comprises:
a contour setting subunit 351, used to set the bounding rectangle of the moving target according to the target's contour;
a scan execution subunit 352, used to take the points on the four edges of the bounding rectangle as starting points in the counterclockwise direction and draw contour scan lines perpendicular to the four edges;
a statistic value recording subunit 353, used to record the scan-line distances from the starting points to the contour of the moving target and take these distances as the target's features to obtain the feature statistic values of the moving target.
Specifically, referring to Fig. 4E, a block diagram of an embodiment of the determining unit 360, which comprises:
a feature value input subunit 361, used to input the feature statistic values into a pre-trained Support Vector Machine classifier, the Support Vector Machine classifier storing the moving-target types corresponding to different feature statistic values;
a target type obtaining subunit 362, used to obtain the type of the moving target according to the comparison result output by the Support Vector Machine classifier.
As can be seen from the above description of the embodiments, in the embodiments of the present application, after a video image is received, the foreground blobs obtained from the video image are filtered and those meeting preset filter conditions are taken as moving targets; the moving target is tracked by a mean shift algorithm and extracted at the tracking result position; the extracted moving target is normalized, the contour of the normalized moving target is scanned to obtain feature statistic values, and the type of the moving target is determined from the feature statistic values. The embodiments of the present application classify targets by their contour features, which improves classification accuracy; the extracted contour features are highly discriminative, allow accurate classification, and are computationally cheap, so they run fast in various real-time algorithm systems. Size-normalizing the moving target with zoom factors preserves the target's aspect-ratio feature, overcoming the inaccurate aspect-ratio features caused by existing normalization methods. In addition, computing the color histogram by joint probability distribution reduces the amount of histogram data and correspondingly increases computation speed.
As can be seen from the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, can be embodied in the form of a software product. This computer software product can be stored in a storage medium, such as ROM/RAM, magnetic disk or optical disk, and includes instructions for making a computer device (which may be a personal computer, a server, a network device, etc.) execute the methods described in the embodiments of the present application or certain parts thereof.
The embodiments in this specification are described progressively; identical or similar parts of the embodiments can be understood by cross-reference, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components and data structures that perform particular tasks or implement particular abstract data types. The present application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network; in such environments, program modules may be located in both local and remote computer storage media, including memory devices.
Although the present application has been described by way of embodiments, those of ordinary skill in the art will appreciate that the present application admits many variations and modifications without departing from the spirit of the application, and it is intended that the appended claims cover such variations and modifications.

Claims (13)

1. A target classification method for a video image, characterized by comprising:
after receiving a video image, filtering the foreground blobs obtained from said video image, and taking the foreground blobs meeting preset filter conditions as moving targets;
tracking said moving target by a mean shift algorithm, and extracting the moving target at the result position of said tracking;
after normalizing the extracted moving target, scanning the contour of the normalized moving target to obtain feature statistic values;
determining the type of said moving target according to said feature statistic values.
2. The method according to claim 1, characterized in that, after receiving the video image, the method further comprises:
comparing the pixels in said video image with the background pixels in a preset background image model to obtain foreground pixels;
extracting sets of pixels with closed contours from said foreground pixels as foreground blobs.
3. The method according to claim 1, characterized in that said preset filter conditions comprise at least one of the following:
a preset time threshold for which the motion trajectory of said foreground blob must persist over time;
a preset motion characteristic which the motion trajectory of said foreground blob must meet;
a preset movement speed threshold of said foreground blob.
4. The method according to claim 1, characterized in that said tracking said moving target by a mean shift algorithm comprises:
initializing said moving target, including updating the Kalman filter corresponding to said moving target and taking the color histogram of said moving target as the target color histogram;
predicting the movement position of said moving target through said Kalman filter to obtain a predicted position;
taking said predicted position as the current iteration position, computing the color histogram at said current iteration position, and calculating the gradient value between the color histogram at said current iteration position and said target color histogram, said color histogram being a compressed color histogram computed by joint probability distribution;
judging whether said gradient value satisfies a preset iteration condition; if so, outputting the tracking position of said moving target; otherwise, shifting to the next iteration position according to said gradient value and returning to the step of computing the color histogram at the current iteration position.
5. The method according to claim 1, characterized in that normalizing the extracted moving target comprises:
obtaining the width value and height value of a preset target template, and the width value and height value of said moving target;
calculating a width zoom factor from the width value of said target template and the width value of said moving target, and calculating a height zoom factor from the height value of said target template and the height value of said moving target;
scaling said moving target by said width zoom factor and height zoom factor.
6. The method according to claim 1, characterized in that scanning the contour of the normalized moving target to obtain feature statistic values comprises:
setting a bounding rectangle of said moving target according to the contour of said moving target;
taking the points on the four edges of the bounding rectangle as starting points in the counterclockwise direction, drawing contour scan lines perpendicular to said four edges;
recording the scan-line distances from said starting points to the contour of the moving target, and taking said distances as features of said moving target to obtain the feature statistic values of said moving target.
7. The method according to claim 1, characterized in that determining the type of said moving target according to the feature statistic values comprises:
inputting said feature statistic values into a pre-trained Support Vector Machine classifier, said Support Vector Machine classifier storing the moving-target types corresponding to different feature statistic values;
obtaining the type of the moving target according to the comparison result output by said Support Vector Machine classifier.
8. A target classification device for a video image, characterized by comprising:
a receiving unit, used to receive a video image;
a filter unit, used to filter the foreground blobs obtained from said video image and take the foreground blobs meeting preset filter conditions as moving targets;
a tracking unit, used to track said moving target by a mean shift algorithm and extract the moving target at the result position of said tracking;
a normalization unit, used to normalize said extracted moving target;
a scanning unit, used to scan the contour of the normalized moving target to obtain feature statistic values;
a determining unit, used to determine the type of said moving target according to said feature statistic values.
9. The device according to claim 8, characterized by further comprising:
a comparing unit, used to compare the pixels in the video image received by said receiving unit with the background pixels in a preset background image model to obtain foreground pixels;
an extraction unit, used to extract sets of pixels with closed contours from said foreground pixels as foreground blobs.
10. The device according to claim 8, characterized in that said tracking unit comprises:
an initialization subunit, used to initialize said moving target, including updating the Kalman filter corresponding to said moving target and taking the color histogram of said moving target as the target color histogram;
a position prediction subunit, used to predict the movement position of said moving target through said Kalman filter to obtain a predicted position;
an iterative computation subunit, used to take said predicted position as the current iteration position, compute the color histogram at said current iteration position, and calculate the gradient value between the color histogram at said current iteration position and said target color histogram, said color histogram being a compressed color histogram computed by joint probability distribution;
an iteration judgment subunit, used to judge whether said gradient value satisfies a preset iteration condition;
a result execution subunit, used to output the tracking position of said moving target when the judgment result of said iteration judgment subunit is yes, and, when the judgment result of said iteration judgment subunit is no, shift to the next iteration position according to said gradient value and return to said iterative computation subunit.
11. The device according to claim 8, characterized in that said normalization unit comprises:
a boundary value obtaining subunit, used to obtain the width value and height value of a preset target template, and the width value and height value of said moving target;
a zoom factor computation subunit, used to calculate a width zoom factor from the width value of said target template and the width value of said moving target, and calculate a height zoom factor from the height value of said target template and the height value of said moving target;
a target scaling subunit, used to scale said moving target by said width zoom factor and height zoom factor.
12. The device according to claim 8, characterized in that said scanning unit comprises:
a contour setting subunit, used to set a bounding rectangle of said moving target according to the contour of said moving target;
a scan execution subunit, used to take the points on the four edges of the bounding rectangle as starting points in the counterclockwise direction and draw contour scan lines perpendicular to said four edges;
a statistic value recording subunit, used to record the scan-line distances from said starting points to the contour of the moving target and take said distances as features of said moving target to obtain the feature statistic values of said moving target.
13. The device according to claim 8, characterized in that said determining unit comprises:
a feature value input subunit, used to input said feature statistic values into a pre-trained Support Vector Machine classifier, said Support Vector Machine classifier storing the moving-target types corresponding to different feature statistic values;
a target type obtaining subunit, used to obtain the type of the moving target according to the comparison result output by said Support Vector Machine classifier.
CN2010101243178A 2010-02-26 2010-02-26 Target classification method of video image and device Active CN101882217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101243178A CN101882217B (en) 2010-02-26 2010-02-26 Target classification method of video image and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101243178A CN101882217B (en) 2010-02-26 2010-02-26 Target classification method of video image and device

Publications (2)

Publication Number Publication Date
CN101882217A true CN101882217A (en) 2010-11-10
CN101882217B CN101882217B (en) 2012-06-27

Family

ID=43054229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101243178A Active CN101882217B (en) 2010-02-26 2010-02-26 Target classification method of video image and device

Country Status (1)

Country Link
CN (1) CN101882217B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013075295A1 (en) * 2011-11-23 2013-05-30 浙江晨鹰科技有限公司 Clothing identification method and system for low-resolution video
CN103729639A (en) * 2013-12-18 2014-04-16 吉林大学 Palm image normalization method based on distance sensor
CN105051754A (en) * 2012-11-21 2015-11-11 派尔高公司 Method and apparatus for detecting people by a surveillance system
CN106295532A (en) * 2016-08-01 2017-01-04 河海大学 A kind of human motion recognition method in video image
CN106781521A (en) * 2016-12-30 2017-05-31 东软集团股份有限公司 The recognition methods of traffic lights and device
CN106856577A (en) * 2015-12-07 2017-06-16 北京航天长峰科技工业集团有限公司 The video abstraction generating method of multiple target collision and occlusion issue can be solved
CN107886098A (en) * 2017-10-25 2018-04-06 昆明理工大学 A kind of method of the identification sunspot based on deep learning
CN107923742A (en) * 2015-08-19 2018-04-17 索尼公司 Information processor, information processing method and program
CN109815784A (en) * 2018-11-29 2019-05-28 广州紫川物联网科技有限公司 A kind of intelligent method for classifying based on thermal infrared imager, system and storage medium
CN110291538A (en) * 2017-02-16 2019-09-27 国际商业机器公司 Filter the image recognition of image classification output distribution
CN110753239A (en) * 2018-07-23 2020-02-04 深圳地平线机器人科技有限公司 Video prediction method, video prediction device, electronic equipment and vehicle
CN111783777A (en) * 2020-07-07 2020-10-16 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN113779672A (en) * 2021-08-31 2021-12-10 北京铁科英迈技术有限公司 Steel rail profile wear calculation method, device, equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103735269B (en) * 2013-11-14 2015-10-28 大连民族学院 A kind of height measurement method followed the tracks of based on video multi-target

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1489112A (en) * 2002-10-10 2004-04-14 北京中星微电子有限公司 Sports image detecting method
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method
EP1916538A2 (en) * 2006-10-27 2008-04-30 Matsushita Electric Works, Ltd. Target moving object tracking device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1489112A (en) * 2002-10-10 2004-04-14 北京中星微电子有限公司 Sports image detecting method
EP1916538A2 (en) * 2006-10-27 2008-04-30 Matsushita Electric Works, Ltd. Target moving object tracking device
CN101159855A (en) * 2007-11-14 2008-04-09 南京优科漫科技有限公司 Characteristic point analysis based multi-target separation predicting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Du Yuren, "A moving target recognition method based on contour features", Journal of Jiangsu University (Natural Science Edition), Vol. 30, No. 5, September 2009, pp. 514-517 (relevant to claims 2, 9) *
Yu Jizheng et al., "Contour tracking algorithm based on mean shift and edge detection", Computer Simulation, Vol. 25, No. 6, June 2008, pp. 224-227 (relevant to claims 1-3, 5, 7-9, 11, 13) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013075295A1 (en) * 2011-11-23 2013-05-30 Zhejiang Chenying Technology Co., Ltd. Clothing identification method and system for low-resolution video
CN105051754A (en) * 2012-11-21 2015-11-11 Pelco, Inc. Method and apparatus for detecting people by a surveillance system
CN103729639A (en) * 2013-12-18 2014-04-16 Jilin University Palm image normalization method based on distance sensor
CN103729639B (en) * 2013-12-18 2017-04-12 Jilin University Palm image normalization method based on distance sensor
CN107923742A (en) * 2015-08-19 2018-04-17 Sony Corporation Information processing apparatus, information processing method, and program
CN106856577A (en) * 2015-12-07 2017-06-16 Beijing Aerospace Changfeng Science Technology Industry Group Co., Ltd. Video summary generation method capable of handling multi-target collision and occlusion
CN106856577B (en) * 2015-12-07 2020-12-11 Beijing Aerospace Changfeng Science Technology Industry Group Co., Ltd. Video summary generation method capable of handling multi-target collision and occlusion
CN106295532B (en) * 2016-08-01 2019-09-24 Hohai University Human motion recognition method in video images
CN106295532A (en) * 2016-08-01 2017-01-04 Hohai University Human motion recognition method in video images
CN106781521A (en) * 2016-12-30 2017-05-31 Neusoft Corporation Traffic light recognition method and device
CN110291538A (en) * 2017-02-16 2019-09-27 International Business Machines Corporation Image recognition with filtering of image classification output distributions
CN110291538B (en) * 2017-02-16 2023-05-16 International Business Machines Corporation Method and system for managing image recognition
CN107886098A (en) * 2017-10-25 2018-04-06 Kunming University of Science and Technology Sunspot identification method based on deep learning
CN110753239A (en) * 2018-07-23 2020-02-04 Shenzhen Horizon Robotics Technology Co., Ltd. Video prediction method, video prediction device, electronic equipment and vehicle
CN110753239B (en) * 2018-07-23 2022-03-08 Shenzhen Horizon Robotics Technology Co., Ltd. Video prediction method, video prediction device, electronic equipment and vehicle
CN109815784A (en) * 2018-11-29 2019-05-28 Guangzhou Zichuan IoT Technology Co., Ltd. Intelligent classification method, system and storage medium based on thermal infrared imager
CN111783777A (en) * 2020-07-07 2020-10-16 Beijing ByteDance Network Technology Co., Ltd. Image processing method, image processing device, electronic equipment and computer readable medium
CN111783777B (en) * 2020-07-07 2023-11-24 Douyin Vision Co., Ltd. Image processing method, image processing device, electronic equipment and computer readable medium
CN113779672A (en) * 2021-08-31 2021-12-10 Beijing Tieke Yingmai Technology Co., Ltd. Steel rail profile wear calculation method, device, equipment and storage medium
CN113779672B (en) * 2021-08-31 2024-04-12 Beijing Tieke Yingmai Technology Co., Ltd. Steel rail profile wear calculation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN101882217B (en) 2012-06-27

Similar Documents

Publication Publication Date Title
CN101882217B (en) Target classification method of video image and device
CN100545867C (en) Rapid vehicle detection method for aerial traffic video
CN101120382B (en) Method for tracking moving object in video acquired of scene with camera
CN109977782B (en) Cross-store operation behavior detection method based on target position information reasoning
Unzueta et al. Adaptive multicue background subtraction for robust vehicle counting and classification
CN101739551B (en) Method and system for identifying moving objects
CN102609682B (en) Feedback pedestrian detection method for region of interest
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
CN104134222A (en) Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion
CN104183127A (en) Traffic surveillance video detection method and device
CN104318263A (en) Real-time high-precision people stream counting method
Yaghoobi Ershadi et al. Robust vehicle detection in different weather conditions: Using MIPM
Aminuddin et al. A new approach to highway lane detection by using Hough transform technique
Johansson et al. Combining shadow detection and simulation for estimation of vehicle size and position
Rodríguez et al. An adaptive, real-time, traffic monitoring system
CN107315994B (en) Clustering method based on Spectral Clustering space trajectory
Buch et al. Vehicle localisation and classification in urban CCTV streams
Qing et al. A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation
Xia et al. Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach
CN102314591A (en) Method and equipment for detecting static foreground object
CN101877135B (en) Moving target detecting method based on background reconstruction
Chen et al. Vision-based traffic surveys in urban environments
Ashraf et al. HVD-net: a hybrid vehicle detection network for vision-based vehicle tracking and speed estimation
Du et al. Particle filter based object tracking of 3D sparse point clouds for autopilot
Muniruzzaman et al. Deterministic algorithm for traffic detection in free-flow and congestion using video sensor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: HANGZHOU HAIKANG WEISHI SOFTWARE CO., LTD.

Effective date: 20121025

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 310012 HANGZHOU, ZHEJIANG PROVINCE TO: 310051 HANGZHOU, ZHEJIANG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20121025

Address after: Building No. 1, Haikang Science Park, No. 700 East Road, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Patentee after: Hangzhou Hikvision Digital Technology Co., Ltd.

Address before: No. 36 Ma Cheng Road, Hangzhou City, Zhejiang Province, 310012

Patentee before: Hangzhou Haikang Weishi Software Co., Ltd.