GB2484133A - Recognising features in a video sequence using histograms of optic flow - Google Patents

Recognising features in a video sequence using histograms of optic flow

Info

Publication number
GB2484133A
GB2484133A
Authority
GB
United Kingdom
Prior art keywords
optic flow
feature
target region
histograms
video sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1016496.0A
Other versions
GB201016496D0 (en)
GB2484133B (en)
Inventor
Atsuto Maki
Frank Perbet
Bjorn Stenger
Oliver Woodford
Roberto Cipolla
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1016496.0A priority Critical patent/GB2484133B/en
Publication of GB201016496D0 publication Critical patent/GB201016496D0/en
Priority to JP2011204623A priority patent/JP5259798B2/en
Priority to US13/239,602 priority patent/US8750614B2/en
Publication of GB2484133A publication Critical patent/GB2484133A/en
Application granted granted Critical
Publication of GB2484133B publication Critical patent/GB2484133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06K9/00369
    • G06K9/4642
    • G06K9/6267
    • G06T7/2066
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Abstract

A method of classifying a feature in a video sequence includes: selecting a target region (202) of a frame of the video sequence; dividing the target region into a plurality of cells (204); calculating histograms of optic flow within the cells (302); comparing the histograms of optic flow for pairs of cells (208); and assigning the feature to a class based on the result of the comparison (210). The method may further include image analysis, such as by using histograms of oriented gradient (HOG) (628). The method could also include a means of compensating for camera motion. The method may find application in identifying pedestrians, either to avoid accidents involving vehicles or to record them entering or leaving buildings.

Description

A Video Analysis Method and System

Embodiments described herein relate generally to recognising features in a video sequence. When a feature is recognised in a video sequence, the feature is assigned to a class based on attributes of the video sequence.
There are many applications of feature recognition in video sequences. One such application is pedestrian recognition. In a pedestrian recognition method, features in a video sequence are classified as either pedestrians or not pedestrians.
Pedestrians can have varying motion and appearance in video sequences. Pedestrians may be static or moving in different directions; the appearance of a pedestrian can vary with walking phase; and pedestrians may be occluded by objects in a video or connected to objects, such as luggage, which can vary the observed shapes of pedestrians in a video sequence.
In order to accurately classify features in a video sequence, for example as pedestrians, video analysis methods must be able to account for some or all of the factors discussed above.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, embodiments of the invention will be described with reference to the drawings, in which:

FIG. 1 is a block diagram of a data processing system for classifying a feature in a video sequence;
FIG. 2 is a flow diagram of a method for classifying a feature in a frame of a video sequence;
FIG. 3 is a schematic diagram showing a method for extracting features from a video sequence;
FIG. 4 is a graph showing a comparison of the results for pedestrian detection in video sequences;
FIG. 5 shows examples of pedestrian detection in frames of a video sequence; and
FIG. 6 shows a block diagram of a pedestrian detection system.
DETAILED DESCRIPTION
According to one embodiment, a method of classifying a feature in a video sequence includes: selecting a target region of a frame of the video sequence, where the target region contains the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within the cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
In this embodiment, features in the video sequence are recognised by assigning them to classes. The assignment to classes is based on the comparison of histograms of optic flow. An optic flow field for a frame of the video sequence indicates the motion of parts of the frame between frames. The optic flow field may indicate the movement of each pixel of a frame between frames. Alternatively, the optic flow field may indicate the movement of blocks of pixels between frames. Histograms of optic flow are calculated for cells of the target region. By comparing the histograms of optic flow, correlations in the motion of different parts of the target region can be identified. These correlations may be indicative of the presence of a particular feature, such as a pedestrian, in the frame of the video sequence. As a result of the comparison, the feature in the target region is assigned to a class.
According to one embodiment, the method also includes performing image analysis on the frame of the video sequence. The result of the image analysis and the result of the comparison of the histograms of optic flow are used to classify the feature.
In this embodiment, features in the video are classified based both on motion and on the result of the image analysis. This means that characteristics of the static image, such as shape, and characteristics of the motion are both used to identify features in the video sequence.
According to one embodiment, the image analysis includes calculating histograms of oriented gradient for a static image corresponding to the target region. Histograms of oriented gradient are a method of recognising shapes in images.
According to one embodiment, a random decision forest is used to assign the feature to a class. The random decision forest is trained on training data prior to carrying out the method. The training data correspond to a set of video sequences for which the classification is already known.
According to one embodiment, the method is used to identify pedestrians in the video sequence. The feature is assigned to a class by determining whether the feature corresponds to a pedestrian.
According to one embodiment, the optic flow field in each cell includes vectors indicating the magnitude and direction of apparent movement of parts of the frame of the video sequence between frames. Calculating histograms of optic flow includes assigning each vector of optic flow to a bin based on the direction of the vector.
According to one embodiment, the optic flow field for the frame is stored in channels. Each channel corresponds to a range of orientations for the vector of optic flow.
According to one embodiment, the optic flow field is stored as an integral image.
According to one embodiment, the method includes compensating for camera motion in the video sequence by subtracting a globally observed optic flow for the target region from the optic flow field in each cell.
According to a further embodiment, a computer readable medium for causing a computer to execute a method according to the embodiments described above is provided.
According to a further embodiment, a video analysis system for carrying out methods according to embodiments described above is provided.
FIG. 1 shows a data processing system 100 for classifying a feature in a video sequence. The data processing system 100 comprises a processor 102, a memory 104, an input module 108, and an output module 110. The memory 104 stores a program 106 for classifying a feature in a video sequence. The program 106 can be executed on the processor 102. The input module 108 can receive input of a video sequence for analysis to classify a feature in the video sequence and the output module 110 can output the results of the classification. The input module may receive optic flow data indicating movement between frames of the video sequence.
Alternatively, the processor may be operable to calculate the optic flow data from an input video sequence. The input module 108 may be a data connection capable of receiving video data from a video camera or video recorder. The input module 108 may be a network connection capable of receiving video data over a network such as the internet. The data processing system 100 may be a conventional computer. The methods followed by the program 106 are described below.
FIG. 2 shows a flow diagram of a method for classifying a feature in a frame of a video sequence. The method uses optic flow data to classify a feature in a video based on correlations in the motion of different parts of the feature. The optic flow data is a field that shows movement between the frames of a video sequence. The optic flow data may indicate the movement of each pixel in a frame with respect to the previous or the next frame in the video sequence. The optic flow data may indicate the movement of a subset of the pixels in frames of the video sequence. For example, the optic flow data could indicate the motion of every other pixel in a frame. The optic flow field includes a magnitude and direction of motion for a set of points in a frame.
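For illustration, a dense optic flow field of this kind can be obtained with an off-the-shelf method. The sketch below uses OpenCV's Farneback algorithm; the detailed embodiment later cites a regularised Huber-L1 flow instead, and the frame filenames and parameter values here are hypothetical.

```python
import cv2

# Two consecutive greyscale frames (hypothetical filenames).
prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optic flow: an (H, W, 2) array giving a (dx, dy) displacement,
# i.e. a magnitude and direction, for every pixel of the frame.
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

magnitude, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])
```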
Step 202 of the method is selecting a target region of the frame. The following steps of the method are then applied to the target region. The target region could be identified by performing image analysis on the frame and identifying candidate features.
Alternatively, all regions having a particular size in a frame could be selected in sequence.
The target region is then divided into a number of cells in step 204. For each cell in the target region, a histogram of optic flow is calculated in step 206. The histograms of optic flow are calculated by assigning the optic flow for each pixel, or each unit of the optic flow field, within the cell to a bin according to the direction of the optic flow.
In step 208, histograms of pairs of cells are compared. Using the result of the comparison, the feature in the target region is classified in step 210. The classification is thus based upon correlations of motion of different parts of the target region.
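A minimal sketch of steps 204 and 206 follows, dividing the target region's flow field into a grid of cells and binning each flow vector by direction. The 4 × 4 grid, the 8 direction bins and the magnitude weighting are illustrative assumptions rather than values prescribed by the method.

```python
import numpy as np

def cell_flow_histograms(flow, grid=4, bins=8):
    """Steps 204-206: divide the target region's flow field into a grid of
    cells and build one histogram of optic flow per cell. `flow` is an
    (H, W, 2) array of per-pixel (dx, dy) flow vectors for the region."""
    h, w = flow.shape[:2]
    mag = np.hypot(flow[..., 0], flow[..., 1])
    ang = np.arctan2(flow[..., 1], flow[..., 0]) % (2 * np.pi)
    bin_idx = (ang / (2 * np.pi) * bins).astype(int) % bins

    ch, cw = h // grid, w // grid
    hists = np.zeros((grid * grid, bins))
    for m in range(grid):
        for n in range(grid):
            rows = slice(m * ch, (m + 1) * ch)
            cols = slice(n * cw, (n + 1) * cw)
            # each vector votes, weighted by its magnitude, into the bin
            # given by its direction (step 206)
            hists[m * grid + n] = np.bincount(bin_idx[rows, cols].ravel(),
                                              weights=mag[rows, cols].ravel(),
                                              minlength=bins)
    return hists
```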
In the above description, the classification is used to indicate the result of the method. The classification may be to one of two classes, for example 'pedestrian' and 'not pedestrian'. Such a classification system would be used to identify pedestrians in a video sequence.
In the following, the method described above, in which optic flow at different points in a frame is compared, is called co-occurrence flow (CoF). The CoF method can be used in conjunction with the histograms of oriented gradient (HOG) method to identify pedestrians in a video sequence. The HOG method for the detection of pedestrians is described in N. Dalal and B. Triggs, 'Histograms of oriented gradients for human detection', CVPR (1), pages 886-893, 2005.
In the combined method, HOG features are extracted from an image by first computing oriented edge-energy responses, obtained by convolving the input image with oriented odd Gabor filters in d (= 8) different orientations. This filtering is performed on the whole input image and the results are stored as integral images. The use of integral images, which are also known as summed area tables, allows computations to be made efficiently for windows of various sizes. The output of Gabor filtering in the j-th direction is denoted G(j) for j = 1, ..., d.
For identifying pedestrians, a rectangular window, or target region, with an aspect ratio of 1:2 is used. For this window, features based on the HOG method and the CoF method are calculated. These features are used as inputs to a statistical model, such as a random forest classifier, which uses the features to classify the object in the target region of the frame as either a pedestrian or not a pedestrian.
The process for computing features for a target region using HOG is as follows. For a candidate rectangular target region R, cells, which are sub-regions of the target region, are defined. The cells are generated grid-wise in a multi-level fashion, as in MIP mapping, so that there are $2^l \times 2^{l+1}$ cells at each level $l = 0, \ldots, l_{\max}$. $l_{\max} = 3$ is chosen as a reasonable number for the finest level. To give a rough idea, this indicates that each cell at the finest level consists of $8 \times 8$ pixels for an R of size $64 \times 128$ pixels.
A set of feature elements $f_l(m, n) \in \mathbb{R}^d$ is computed from the sum of the outputs of Gabor filtering $\{G(j)\}$ at each orientation channel within the cell. A cell at level $l$ is referred to as $w_l(m, n)$ for $m = 1, \ldots, 2^l$, $n = 1, \ldots, 2^{l+1}$. The set of feature elements is given by the following formula:

$$f_l(m,n) = \{e_l(m,n;j)\}_{j=1}^{d}, \qquad e_l(m,n;j) = \int_{w_l(m,n)} G(u,v;j)\,du\,dv$$

where $(u, v)$ are local coordinates in R. $f_l(m,n)$ is normalised using the filter outputs over all directions. The normalised feature is given by the following formula:

$$\tilde{f}_l(m,n) = \{\tilde{e}_l(m,n;j)\}_{j=1}^{d}, \qquad \tilde{e}_l(m,n;j) = e_l(m,n;j) \Big/ \sum_{j'=1}^{d} e_l(m,n;j')$$

To form a multidimensional HOG descriptor, outputs from coarser scales are incorporated to form the $N_G$-dimensional HOG descriptor $v_G$ by concatenating the features of different levels such that:

$$v_G = [\tilde{f}_3\ \tilde{f}_2\ \tilde{f}_1\ \tilde{f}_0]$$

where $N_G = d \sum_{l=0}^{l_{\max}} 2^l \cdot 2^{l+1}$. The features $v_G$ calculated as above are used in addition to features calculated using the co-occurrence flow (CoF) method to classify moving objects in a video as either pedestrians or not pedestrians.
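The following sketch illustrates how the multi-level descriptor $v_G$ above might be computed. The Gabor kernel size, sigma and wavelength are assumed values; the embodiment stores the filter responses as integral images for efficiency, which this simplified version replaces with direct per-cell sums.

```python
import cv2
import numpy as np

def hog_descriptor(image, d=8, l_max=3):
    """Sketch of the multi-level HOG descriptor v_G described above, for a
    greyscale target region R (e.g. 128 x 64 pixels, aspect ratio 1:2)."""
    img = image.astype(np.float32)

    # Oriented edge-energy responses: odd Gabor filters (psi = pi/2) in d
    # orientations. Kernel size / sigma / lambda are assumed values.
    responses = []
    for j in range(d):
        kern = cv2.getGaborKernel((9, 9), sigma=2.0, theta=j * np.pi / d,
                                  lambd=8.0, gamma=1.0, psi=np.pi / 2)
        responses.append(np.abs(cv2.filter2D(img, -1, kern)))

    h, w = img.shape
    features = []
    for level in range(l_max, -1, -1):          # concatenation order [f3 f2 f1 f0]
        n_rows, n_cols = 2 ** (level + 1), 2 ** level
        ch, cw = h // n_rows, w // n_cols
        for m in range(n_rows):
            for n in range(n_cols):
                # e_l(m, n; j): sum of each orientation channel over the cell
                e = np.array([r[m*ch:(m+1)*ch, n*cw:(n+1)*cw].sum()
                              for r in responses])
                features.append(e / (e.sum() + 1e-9))  # normalise over directions
    return np.concatenate(features)
```

For a 64 × 128 target region with d = 8 this yields a descriptor of length 8 × (2 + 8 + 32 + 128) = 1360, consistent with the formula for $N_G$ above.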
To identify pedestrians, the CoF method can be implemented to extract features as follows.
FIG. 3 shows a schematic of the method for extracting features.
A regularised flow field for the whole image 300 is calculated, for example using the technique described in M. Werlberger, W. Trobin, T. Pock, A. Wedel, D. Cremers, and H. Bischof, 'Anisotropic Huber-L1 optical flow', BMVC, 2009.
The flow field 302 includes a direction and magnitude for the optic flow at a number of points on the image 300.
The optic flow field is stored as separate channels F(i) according to the discrete orientations i = 1, ..., b that will later be used for the bins in the histograms of optic flow. Thus, the channels F(i) represent the flow magnitudes that correspond to a particular range of directions. The channel outputs are stored as integral images 304 (also known as summed area tables), as this allows the histograms to be calculated efficiently for cells of target regions of the image.
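A sketch of this channel decomposition and integral-image storage follows. The bins here split the full 360 degrees; as discussed below, the method also admits a 180-degree scheme with signed magnitudes.

```python
import cv2
import numpy as np

def flow_channel_integrals(flow, bins=8):
    """Split a dense flow field (H x W x 2 array of (dx, dy) vectors) into
    orientation channels F(i), i = 1..b, and store each channel as an
    integral image 304, so that any rectangular cell sum costs four lookups."""
    mag = np.hypot(flow[..., 0], flow[..., 1]).astype(np.float32)
    ang = np.arctan2(flow[..., 1], flow[..., 0]) % (2 * np.pi)
    idx = (ang / (2 * np.pi) * bins).astype(int) % bins

    integrals = []
    for i in range(bins):
        channel = np.where(idx == i, mag, 0.0).astype(np.float32)
        integrals.append(cv2.integral(channel))  # (H+1, W+1) summed area table
    return integrals

def cell_sum(ii, top, left, bottom, right):
    """Sum of one channel over the rectangle [top:bottom, left:right)."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
```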
Histograms of optic flow are calculated for cells of the sub-region 306. The histograms 308 are constructed by calculating the sum, over the area of the cell of interest $w_k(m,n)$, of each channel of $\{F(i)\}$. Thus, the i-th element of the histogram is computed as:

$$h(m,n;i) = \int_{w_k(m,n)} F(u,v;i)\,du\,dv, \qquad i = 1, \ldots, b$$

If the video sequence is obtained from a moving camera, the computed flow field will be influenced by the camera motion. This motion can be compensated for by subtracting the dominant image flow before generating the histograms. The dominant background flow is calculated by averaging the flow globally observed in the target region sub-window R. The corrected value of the i-th element of the histogram is given by:

$$\tilde{h}(m,n;i) = h(m,n;i) - \frac{|w_k(m,n)|}{|R|} \int_{R} F(u,v;i)\,du\,dv$$

The range of bins for the histograms, and also the discrete orientations in which the directions of the optic flow field are stored, can cover either 180 degrees or 360 degrees. Where the range of the bins is 180 degrees, the flow field is included in the histograms with both positive and negative values. So, for example, a field with a direction of 30 degrees would be included in the same bin as a field at 210 degrees, with an opposite sign. In the case where the bins span 360 degrees, the angular coverage of each bin is doubled, and only the magnitude is used in the calculation of the histogram.
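A sketch of this camera-motion compensation, reusing the integral-image channels from the previous sketch; the scaling of the region average by the cell area follows the reconstructed formula above and is an assumption about the original notation.

```python
import numpy as np

def compensate_histogram(h_cell, integrals, region, cell_area):
    """Subtract the dominant (globally averaged) flow of target region R
    from one cell's histogram. `integrals` come from flow_channel_integrals();
    `region` = (top, left, bottom, right) bounds of R in image coordinates."""
    top, left, bottom, right = region
    area_R = (bottom - top) * (right - left)
    corrected = np.empty(len(h_cell))
    for i, ii in enumerate(integrals):
        # integral of channel F(i) over all of R, via four corner lookups
        total = (ii[bottom, right] - ii[top, right]
                 - ii[bottom, left] + ii[top, left])
        corrected[i] = h_cell[i] - cell_area * (total / area_R)
    return corrected
```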
Comparisons of the histograms 308 for pairs of cells within R are then made. Using the L1 norm 310, each pair of cells outputs a scalar value. The scalar values are combined to produce a CoF feature vector 312.
In the case where the bins cover 360 degrees, the comparison of the histograms is carried out using histogram intersection, which gives a scalar value by summing, over all bins, the minimum of each pair of corresponding bin values.
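The pairwise comparison that produces the CoF feature vector 312 might look like the following sketch, with the L1 norm for 180-degree bins and histogram intersection for 360-degree bins:

```python
import itertools
import numpy as np

def cof_feature_vector(hists, use_intersection=False):
    """Compare the histograms of every pair of cells; each pair yields one
    scalar, and the scalars are concatenated into the CoF feature vector.
    `hists` is an (n_cells, bins) array of per-cell flow histograms."""
    scalars = []
    for a, b in itertools.combinations(range(len(hists)), 2):
        if use_intersection:        # 360-degree bins: histogram intersection
            scalars.append(np.minimum(hists[a], hists[b]).sum())
        else:                       # 180-degree bins: L1 norm of the difference
            scalars.append(np.abs(hists[a] - hists[b]).sum())
    return np.asarray(scalars)
```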
The CoF feature vector and the HOG feature vector for a given region are input into a classifier based on a random decision forest to classify the feature within the target region.
Two independent random forests are trained using the CoF and the HOG features respectively. The class probability distributions are combined at the leaf nodes of the random forests across all the trees in the two forests.
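A sketch of this two-forest combination using scikit-learn. The feature matrices are random placeholders standing in for real CoF and HOG vectors, and the 50/50 averaging of predict_proba outputs (which are themselves averages of the leaf-node class distributions over each forest's trees) is one simple way to pool the two distributions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_cof = rng.random((200, 120))     # placeholder CoF vectors (C(16,2) pairs for a 4x4 grid)
X_hog = rng.random((200, 1360))    # placeholder HOG vectors (N_G = 1360 as above)
y = rng.integers(0, 2, 200)        # 1 = pedestrian, 0 = not pedestrian

forest_cof = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_cof, y)
forest_hog = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hog, y)

# Pool the class probability distributions of the two forests.
p = 0.5 * (forest_cof.predict_proba(X_cof) + forest_hog.predict_proba(X_hog))
is_pedestrian = p[:, 1] > 0.5
```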
FIG. 4 shows a comparison of the results for pedestrian detection in video sequences using the co-occurrence flow method described above alone (CoF only), the histograms of oriented gradient method alone (HOG only), and a method in which pedestrians are detected using the CoF method and the HOG method in combination as described with reference to FIG. 3 (HOG and CoF). As can be seen from FIG. 4, the combined method performs better than either of the methods alone.
FIG. 5 shows two examples of pedestrian detection using (a) HOG only, (b) CoF only and (d) HOG and CoF in combination. FIG. 5 also shows the optic flow (c) used in the CoF method. The optic flow is colour coded according to magnitude. In FIG. 5 (a), (b) and (d), the purple (black in monochrome images) boxes show features detected as pedestrians and the green (white in monochrome images) boxes show features that should have been detected as pedestrians.
In the example shown in the upper images of FIG. 5, the woman in the road with a suitcase is missed by HOG but detected by CoF. It is noted here that CoF is capable of detecting pedestrians in motion, for example when crossing a road, which is an important application of pedestrian detection. However, the CoF method does not detect stationary pedestrians, which are detected by HOG.
In the lower images of FIG. 5, the CoF method detects the man walking towards the right-hand side of the frame. This man is missed by HOG, probably due to the dark background in combination with the dark coloured trousers of the pedestrian.
Thus, the use of the two methods in combination can detect pedestrians in a range of situations, and the methods described in the present application therefore complement known methods such as histograms of oriented gradient.
FIG. 6 shows a pedestrian detection system. The pedestrian detection system 600 has a video camera 610 and a video analysis module 620. The video analysis module has an optic flow field calculator 622, a target region selector 624, a co-occurrence of flow vector calculator 626, a histograms of oriented gradient vector calculator 628 and a classifier 630. The classifier stores two random decision forests: a co-occurrence of flow decision forest 632 and a histograms of oriented gradient decision forest 634.
The analysis module is attached to an output module 640. The output module may be a video screen which can show the video sequence captured by the video camera 610 with pedestrians highlighted. The output module may output a signal that indicates the proximity of pedestrians, or a signal that can be used to gather statistics on the number of pedestrians passing a particular point.
The pedestrian detection system 600 may be integrated into an automotive vehicle. In this application, the video camera 610 is directed in the direction of travel of the automotive vehicle. A number of cameras may be located on the automotive vehicle and the output from each of the cameras may be constantly monitored by the analysis module 620. Alternatively, the analysis module may be configured to switch to one of the video cameras depending on the direction of motion of the automotive vehicle.
In use, the video camera 610 captures the video sequence of the field of view in the direction of motion of the automotive vehicle. The optic flow field calculator 622 calculates an optic flow field for the frames of the video sequence. The target region selector 624 selects target regions of frames of the video sequence. The target regions are selected by carrying out an analysis of the frames of the video sequence using the histograms of oriented gradient method discussed above. For a target region, the co-occurrence of flow vector calculator 626 calculates a vector of co-occurrence of optic flow. The co-occurrence of flow vector calculator 626 compensates for the motion of the vehicle in the video sequence by subtracting the dominant flow over the target region when calculating the co-occurrence of flow vector. The vector is calculated by comparing histograms of optic flow for cells of the target region.
The histograms of oriented gradient vector calculator 628 also calculates a vector for the target region. The vector calculated by the co-occurrence of flow vector calculator 626 and the vector calculated by the histograms of oriented gradient vector calculator 628 are input into classifier 630. The classifier 630 uses the vectors to classify the feature in the target region as a pedestrian or not a pedestrian. The classifier 630 uses stored random decision forests 632 and 634 to classify the vectors as either relating to a pedestrian or not a pedestrian. The output module 640 outputs a signal for each frame of the video sequence indicating the presence or absence of pedestrians.
The output signal may be a video sequence with pedestrians highlighted in it. This video sequence may be displayed on a video screen visible to the driver of the automotive vehicle. The output signal may operate an alarm to alert the driver to the presence of pedestrians in the path of the automotive vehicle. The output signal may be directly connected to the controls of the car and cause the car to slow down or divert its path due to the close proximity of a pedestrian.
The pedestrian detection system 600 may be integrated into a pedestrian monitoring system for use in a public building such as a shopping centre or station. The video cameras may be fixed with a view of pedestrians entering or exiting the building. The output signal may indicate the number of pedestrians entering and exiting the building at any one time. Thus, the pedestrian detection system can generate statistics on the number of visitors to a building and may calculate the number of people inside the building at any one time.
In place of random decision forests, other statistical classifiers could be used to classify features in a video sequence based on the result of the comparison of histograms. For example, a linear classifier such as a support vector machine (SVM) could be used. A classifier trained using adaptive boosting (AdaBoost) could also be used.
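For instance, assuming the same placeholder feature matrix as in the earlier sketch, swapping in either alternative classifier is a one-line change with scikit-learn:

```python
from sklearn.svm import LinearSVC
from sklearn.ensemble import AdaBoostClassifier

# Alternatives to the random decision forest, trained on the same features.
svm_classifier = LinearSVC().fit(X_cof, y)
boosted_classifier = AdaBoostClassifier(n_estimators=200).fit(X_cof, y)
```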
The different types of classifiers could also be used to classify features in the analysis of static features of the frames. For example, during the analysis using HOG, a linear classifier or a boosted classifier could be used.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (19)

1. A method of classifying a feature in a video sequence, the method comprising: selecting a target region of a frame of the video sequence, the target region containing the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within cells of the plurality of cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
2. The method of claim 1, further comprising performing image analysis on the target region, and wherein assigning the feature to a class is based on the image analysis and the result of the comparison.
3. The method of claim 2, wherein performing image analysis comprises calculating histograms of oriented gradient for a static image corresponding to the target region.
4. The method of any preceding claim, wherein assigning the feature to a class comprises using a random decision forest classifier.
5. The method of any preceding claim, wherein assigning the feature to a class comprises determining if the feature corresponds to a pedestrian in the video sequence.
6. The method of any preceding claim, wherein an optic flow field in each cell comprises a plurality of vectors indicating a magnitude and a direction, and calculating histograms of optic flow comprises assigning each vector of optic flow within a cell to a bin based on the direction of the vector.
7. The method of claim 6, further comprising storing the optic flow field for the frame as a plurality of channels, wherein each channel corresponds to a range of orientations for the vector of optic flow.
8. The method of claim 7, wherein the optic flow field is stored as an integral image.
9. The method of any one of claims 6 to 8, further comprising compensating for camera motion in the video sequence by subtracting a globally observed optic flow for the target region from the optic flow field in each cell.
10. A computer readable medium carrying computer executable instructions which, when executed on a computer, cause the computer to carry out a method according to any one of the preceding claims.
11. A video analysis system comprising: an input module operable to receive a frame of a video sequence; a storage module; a processor operable to: select a target region of the frame, the target region containing a feature; divide the target region into a plurality of cells; calculate histograms of optic flow within cells of the plurality of cells; compare the histograms of optic flow for pairs of cells; and assign the feature to a class based at least in part on the result of the comparison; and an output module operable to output information indicating the class assigned to the feature.
12. The system of claim 11, the processor being further operable to perform image analysis on the target region, and wherein assigning the feature to a class is based on the image analysis and the result of the comparison.
13. The system of claim 12, wherein the processor is operable to perform image analysis by calculating histograms of oriented gradient for a static image corresponding to the target region.
14. The system of any one of claims 11 to 13, wherein the processor is operable to assign the feature to a class using a random decision forest classifier.
15. The system of any one of claims 11 to 14, wherein the processor is operable to assign the feature to a class by determining if the feature corresponds to a pedestrian in the video sequence.
16. The system of any one of claims 11 to 15, wherein an optic flow field in each cell comprises a plurality of vectors indicating a magnitude and a direction, and the processor is operable to calculate histograms of optic flow by assigning each vector of optic flow within a cell to a bin based on the direction of the vector.
17. The system of claim 16, further comprising storage for storing the optic flow field for the frame as a plurality of channels, wherein each channel corresponds to a range of orientations for the vector of optic flow.
18. The system of claim 17, wherein the optic flow field is stored as an integral image.
19. The system of any one of claims 16 to 18, the processor being further operable to compensate for camera motion in the video sequence by subtracting a globally observed optic flow for the target region from the optic flow field in each cell.
GB1016496.0A 2010-09-30 2010-09-30 A video analysis method and system Active GB2484133B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1016496.0A GB2484133B (en) 2010-09-30 2010-09-30 A video analysis method and system
JP2011204623A JP5259798B2 (en) 2010-09-30 2011-09-20 Video analysis method and system
US13/239,602 US8750614B2 (en) 2010-09-30 2011-09-22 Method and system for classifying features in a video sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1016496.0A GB2484133B (en) 2010-09-30 2010-09-30 A video analysis method and system

Publications (3)

Publication Number Publication Date
GB201016496D0 GB201016496D0 (en) 2010-11-17
GB2484133A true GB2484133A (en) 2012-04-04
GB2484133B GB2484133B (en) 2013-08-14

Family

ID=43243321

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1016496.0A Active GB2484133B (en) 2010-09-30 2010-09-30 A video analysis method and system

Country Status (3)

Country Link
US (1) US8750614B2 (en)
JP (1) JP5259798B2 (en)
GB (1) GB2484133B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2484133B (en) * 2010-09-30 2013-08-14 Toshiba Res Europ Ltd A video analysis method and system
US9141196B2 (en) 2012-04-16 2015-09-22 Qualcomm Incorporated Robust and efficient learning object tracker
KR101407952B1 (en) 2012-11-09 2014-06-17 전창현 Elevator crime prvent system and method of controlling the same
US9251121B2 (en) * 2012-11-21 2016-02-02 Honeywell International Inc. Determining pushback direction
KR101467360B1 (en) * 2013-07-04 2014-12-03 성균관대학교산학협력단 Method and apparatus for counting pedestrians by moving directions
CN103336965B (en) * 2013-07-18 2016-08-31 国家电网公司 Based on profile difference and the histogrammic prospect of block principal direction and feature extracting method
US9514364B2 (en) 2014-05-29 2016-12-06 Qualcomm Incorporated Efficient forest sensing based eye tracking
KR101529620B1 (en) * 2014-07-28 2015-06-22 성균관대학교산학협력단 Method and apparatus for counting pedestrians by moving directions
JP6372282B2 (en) * 2014-09-26 2018-08-15 富士通株式会社 Image processing apparatus, image processing method, and program
MA41117A (en) 2014-12-05 2017-10-10 Myfiziq Ltd IMAGING OF A BODY
US10713501B2 (en) * 2015-08-13 2020-07-14 Ford Global Technologies, Llc Focus system to enhance vehicle vision performance
ITUB20153491A1 (en) * 2015-09-08 2017-03-08 Pitom S N C METHOD AND SYSTEM TO DETECT THE PRESENCE OF AN INDIVIDUAL IN PROXIMITY OF AN INDUSTRIAL VEHICLE
CN105208339A (en) * 2015-09-24 2015-12-30 深圳市哈工大交通电子技术有限公司 Accident detection method for recognizing vehicle collision through monitoring videos
US10242581B2 (en) * 2016-10-11 2019-03-26 Insitu, Inc. Method and apparatus for target relative guidance
DE102017113794A1 (en) * 2017-06-22 2018-12-27 Connaught Electronics Ltd. Classification of static and dynamic image segments in a driver assistance device of a motor vehicle
CN108205888B (en) * 2017-12-22 2021-04-02 北京奇虎科技有限公司 Method and device for judging passenger entering and exiting station
JP7156120B2 (en) * 2019-03-22 2022-10-19 株式会社デンソー Object recognition device
CN112926385B (en) * 2021-01-21 2023-01-13 中广(绍兴柯桥)有线信息网络有限公司 Video processing method of monitoring equipment and related product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647131B1 (en) * 1999-08-27 2003-11-11 Intel Corporation Motion detection using normal optical flow
EP1617376A2 (en) * 2004-07-13 2006-01-18 Nissan Motor Co., Ltd. Moving obstacle detecting device
EP1677251A2 (en) * 2004-12-21 2006-07-05 Samsung Electronics Co., Ltd. Apparatus and method for distinguishing between camera movement and object movement and extracting object in a video surveillance system
US7778466B1 (en) * 2003-12-02 2010-08-17 Hrl Laboratories, Llc System and method for processing imagery using optical flow histograms
WO2010119410A1 (en) * 2009-04-14 2010-10-21 Koninklijke Philips Electronics N.V. Key frames extraction for video content analysis

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3377743B2 (en) * 1998-01-20 2003-02-17 三菱重工業株式会社 Mobile object identification device
KR100729660B1 (en) * 2005-12-09 2007-06-18 한국전자통신연구원 Real-time digital video identification system and method using scene change length
JP4956273B2 (en) * 2007-05-17 2012-06-20 日本放送協会 Throwing ball type discriminating device, discriminator generating device, throwing ball type discriminating program and discriminator generating program
US8422741B2 (en) * 2007-08-22 2013-04-16 Honda Research Institute Europe Gmbh Estimating objects proper motion using optical flow, kinematics and depth information
WO2009054119A1 (en) * 2007-10-26 2009-04-30 Panasonic Corporation Situation judging device, situation judging method, situation judging program, abnormality judging device, abnormality judging method, abnormality judging program, and congestion estimating device
US8451384B2 (en) * 2010-07-08 2013-05-28 Spinella Ip Holdings, Inc. System and method for shot change detection in a video sequence
GB2484133B (en) * 2010-09-30 2013-08-14 Toshiba Res Europ Ltd A video analysis method and system
US8774499B2 (en) * 2011-02-28 2014-07-08 Seiko Epson Corporation Embedded optical flow features
US8842883B2 (en) * 2011-11-21 2014-09-23 Seiko Epson Corporation Global classifier with local adaption for objection detection

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647131B1 (en) * 1999-08-27 2003-11-11 Intel Corporation Motion detection using normal optical flow
US7778466B1 (en) * 2003-12-02 2010-08-17 Hrl Laboratories, Llc System and method for processing imagery using optical flow histograms
EP1617376A2 (en) * 2004-07-13 2006-01-18 Nissan Motor Co., Ltd. Moving obstacle detecting device
EP1677251A2 (en) * 2004-12-21 2006-07-05 Samsung Electronics Co., Ltd. Apparatus and method for distinguishing between camera movement and object movement and extracting object in a video surveillance system
WO2010119410A1 (en) * 2009-04-14 2010-10-21 Koninklijke Philips Electronics N.V. Key frames extraction for video content analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chaudhry et al. Computer Vision and Pattern Recognition (CVPR 2009), IEEE Conference. ISSN 1063-6919 *
Dalal, Triggs, and Schmid. In: A. Leonardis, H. Bischof, and A. Pinz (Eds.): ECCV 2006, Part II, LNCS 3952, pp. 428-441. Springer-Verlag Berlin Heidelberg, 2006 *

Also Published As

Publication number Publication date
GB201016496D0 (en) 2010-11-17
JP2012084140A (en) 2012-04-26
US20120082381A1 (en) 2012-04-05
GB2484133B (en) 2013-08-14
JP5259798B2 (en) 2013-08-07
US8750614B2 (en) 2014-06-10

Similar Documents

Publication Publication Date Title
US8750614B2 (en) Method and system for classifying features in a video sequence
US11948350B2 (en) Method and system for tracking an object
Zaklouta et al. Real-time traffic sign recognition in three stages
US9008365B2 (en) Systems and methods for pedestrian detection in images
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
Hong et al. Fast multi-feature pedestrian detection algorithm based on histogram of oriented gradient using discrete wavelet transform
Momin et al. Vehicle detection and attribute based search of vehicles in video surveillance system
Saran et al. Traffic video surveillance: Vehicle detection and classification
CN111783665A (en) Action recognition method and device, storage medium and electronic equipment
Negri et al. Detecting pedestrians on a movement feature space
Kataoka et al. Fine-grained walking activity recognition via driving recorder dataset
Chetouane et al. A comparative study of vehicle detection methods in a video sequence
Fleyeh et al. Benchmark evaluation of HOG descriptors as features for classification of traffic signs
CN111027482B (en) Behavior analysis method and device based on motion vector segmentation analysis
Dong et al. Nighttime pedestrian detection with near infrared using cascaded classifiers
Razzok et al. A new pedestrian recognition system based on edge detection and different census transform features under weather conditions
Promlainak et al. Thai traffic sign detection and recognition for driver assistance
Kurnianggoro et al. Visual perception of traffic sign for autonomous vehicle using k-nearest cluster neighbor classifier
Yang et al. Categorization-based two-stage pedestrian detection system for naturalistic driving data
Misman et al. Camera-based vehicle recognition methods and techniques: Systematic literature review
Ramazankhani et al. Iranian license plate detection using cascade classifier
Tetik et al. Pedestrian detection from still images
Yamauchi et al. Feature co-occurrence representation based on boosting for object detection
Nunn et al. An improved adaboost learning scheme using LDA features for object recognition
Yun et al. Human detection in far-infrared images based on histograms of maximal oriented energy map