EP1772816A1 - Apparatus and method for detecting a person - Google Patents

Apparatus and method for detecting a person

Info

Publication number
EP1772816A1
EP1772816A1 (application EP06255183A)
Authority
EP
European Patent Office
Prior art keywords
image
subject
node
detecting
tree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06255183A
Other languages
German (de)
English (en)
Inventor
Haizhou Ai (c/o Tsinghua University, Tsinghuayuan)
Chang Huang (c/o Tsinghua University, Tsinghuayuan)
Yuan Li (c/o Tsinghua University, Tsinghuayuan)
Shihong Lao (c/o OMRON Corp., 801 Minamifudodo-cho)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Omron Corp
Original Assignee
Tsinghua University
Omron Corp
Omron Tateisi Electronics Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University, Omron Corp and Omron Tateisi Electronics Co
Publication of EP1772816A1
Current legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/7747 Organisation of the process, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Definitions

  • the present invention relates to an effective technique applied in an apparatus and a method for detecting from a picked-up image a particular subject (such as a human, an animal, an object and the like) or a portion thereof contained in the image.
  • Among techniques for detecting from a picked-up image a particular subject (such as a human, an animal, an object and the like) or a portion thereof contained in the image, there is one that detects human faces, i.e. the face detection technique.
  • Face detection means, for a given image, searching it by computer processing to determine whether a face is contained therein.
  • The difficulties of face detection lie in two aspects: one is the intrinsic variation of faces, such as differences in face shape; the other is the extrinsic variation, such as rotation in plane.
  • Some early works on face detection include, for instance, Rowley's ANN method and Schneiderman's method based on the Bayesian decision rule.
  • Schneiderman's method partitions a face into three views, namely the left profile, the frontal view and the right profile, and trains three view-based detectors using the Bayesian method and wavelet transformation; the final result is obtained by combining the results from the three detectors.
  • Schneiderman's method has contributed greatly to the solution of multi-view face detection.
  • A first embodiment according to the invention includes an apparatus for detecting a particular subject from an image, including: an image input unit; and a tree-structured detector for classifying an image inputted from the image input unit; wherein the tree-structured detector has a root node that covers the divided subject subspaces of all views of the subject; a child node branched from the root node covers the subject subspaces corresponding to at least one view of the subject; and each of the root node and the child nodes contains a plurality of weak classifiers and collects the output of each weak classifier for each of the divided subject subspaces so as to determine to which child node in the adjacent lower layer the image should be shifted.
  • Each node includes one strong classifier, which may be determined from the output values of a plurality of weak classifiers.
  • The values of the plural weak classifiers can be collected for each of the plurality of subject subspaces to be decided by the strong classifier at a certain node; accuracy is therefore improved and the calculation is carried out efficiently.
  • A second embodiment according to the invention provides a method for detecting a particular subject from an image, wherein an information processing device executes the steps of: inputting the image into the root node of a tree-structured detector; and determining, at the root node and at each branched child node of the tree-structured detector, to which child node in the adjacent lower layer the image should be shifted, by inputting the image into a plurality of weak classifiers and collecting the output of each weak classifier for each divided subject subspace.
  • A third embodiment according to the invention provides a method for configuring a tree-structured detector for detecting a particular subject from an image, wherein an information processing device executes the steps of: configuring nodes for classifying the image into a tree structure; configuring the root node of the tree structure so as to cover the subject subspaces of all divided views of the subject and to have a plurality of branches, wherein each branch is connected to a child node corresponding to at least one view of the subject; configuring each child node covering two or more subject subspaces so as to have a plurality of branches, wherein each branch is connected to a child node of the adjacent lower layer covering at least one subject subspace; configuring each child node covering exactly one subject subspace as a leaf node of the tree structure; and configuring each of the root node and the child nodes in such a manner that an image is inputted into a plurality of weak classifiers and the outputs of the weak classifiers are collected for each divided subject subspace.
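  • As a purely illustrative sketch of the tree structure just described (all class and field names below are assumptions, not the patent's implementation), the root covers all divided views, inner child nodes cover subsets of views, and a node covering exactly one view is a leaf:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    # subject subspaces (views) covered by this node; the root covers all of them
    views: List[str]
    # weak classifiers whose vector outputs are collected per divided subspace
    weak_classifiers: List[Callable] = field(default_factory=list)
    # child nodes in the adjacent lower layer, one per branch
    children: List["Node"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        # a child node covering exactly one subject subspace is a leaf
        return len(self.views) == 1
```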
  • A tree-structured detector (refer to Figs. 8a and 8b) for detecting human faces from human images will be described as a specific example of the particular subject detection apparatus.
  • There are two main tasks for multi-view face detection (MVFD): one is to distinguish between faces and non-faces; the other is to identify the pose of a face.
  • the first task needs to reject non-faces as quickly as possible, so it is inclined to find the similarities of faces of different poses so as to separate them from non-faces, while the latter task focuses on the diversities among different poses.
  • The conflict between the two tasks leads to a dilemma: either treat all faces as a single class (as in the pyramid method) or treat them as individually separated classes (as in the decision-tree method), both of which are unsatisfactory for MVFD.
  • The difficulty of solving this problem lies in the fact that variations of face pose (including rotation in plane, RIP, and rotation out of plane, ROP) generally lead to notable variations of structure and texture across views, thereby increasing the complexity of the classification.
  • To address this, the pyramid model and the decision-tree model have been proposed.
  • The former adopts a coarse-to-fine strategy to divide the multi-view face space into single-view face element spaces according to the degree of pose variation, and a classifier design employing a pyramid structure is used to separate multi-view faces from non-faces step by step.
  • The pyramid structure treats faces of different poses as one class and puts emphasis on solving the classification problem between them and non-faces; as the face space is divided more and more finely, reasonable pose estimation is achieved with the pyramid method. The decision-tree method, by contrast, first solves the pose estimation problem and separates faces of different poses by multi-class classification, then returns to the conventional cascade model to solve the classification problem between faces and non-faces for a given pose.
  • A so-called human image is an image containing at least a part of, or the whole of, a human face. Therefore, a human image may be an image containing a whole human, or an image containing a human face or another part of the body. Additionally, a human image may contain a plurality of humans, and may contain any graphics in the background other than humans, such as scenery (including objects of interest as subjects), patterns, etc.
  • The tree-structured detector described below is only an example, and its configuration is not limited to the following description.
  • An embodiment of the invention provides a multi-view face detection method, including the steps of dividing a face space into face subspaces for multiple views: for instance, rotating a face within ±90° ROP gives face subspaces of five views (frontal, left half profile, left full profile, right half profile and right full profile); then rotating the face subspaces of these five views within ±45° RIP, so that the face subspace of each view derives two further face subspaces of RIP (such as ±30° RIP), divides the face space into face subspaces of 15 views, as enumerated below.
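  • The 15-view division can be enumerated directly; the snippet below is only an illustration, with the RIP centres of -30°, 0° and +30° assumed from the text:

```python
# Five ROP views, each split into three RIP categories covering +/-45° RIP
ROP_VIEWS = ["left full profile", "left half profile", "frontal",
             "right half profile", "right full profile"]
RIP_ANGLES = [-30, 0, 30]   # assumed centres of the three RIP categories

FACE_SUBSPACES = [(rop, rip) for rop in ROP_VIEWS for rip in RIP_ANGLES]
assert len(FACE_SUBSPACES) == 15   # the 15 views covered by the root node
```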
  • The rotation out of plane is the rotation around the Z-axis; however, an out-of-plane rotation may also be around the X-axis.
  • The rotation around the Y-axis is equivalent to the rotation in plane.
  • The rotation in plane can therefore be defined as the rotation around the Y-axis.
  • With this definition, the rotation out of plane around the Z-axis turns the subject to the right and left.
  • The rotation around the X-axis is also a rotation out of plane, and turns the subject up and down.
  • The tree structure has a root node that covers the divided face subspaces of all views (i.e. the face subspaces of 15 views in the above face division example) and has a plurality of branches, each branch corresponding to a child node that covers at least the face subspace of one view, wherein a child node covering the face subspaces of more than one view itself has a plurality of branches.
  • the child node may be non-branching. In this case, non-branching implies that the child node is shifted to a lower layer without (further) dividing the face subspace.
  • Each branch corresponds to a child node in the adjacent lower layer that covers at least the face subspace of one view, and a child node covering the face subspace of only one view is a leaf node of the tree structure.
  • A Vector Boosting algorithm is used to train each node to compute a determinative vector deciding to which child nodes in the adjacent lower layer the face images in the corresponding node should be sent; a width-first search is adopted when browsing all active nodes in the tree structure.
  • Rejection of non-faces and the obtaining of faces of the corresponding views are done by the single-branch cascade classification of the leaf nodes.
  • Width-First-Search (WFS, also known as Breadth-First Search, BFS) Tree-Structured Detector
  • the proposed detector adopts a coarse-to-fine strategy to divide the entire face space into smaller and smaller subspaces as shown in Fig. 2.
  • the root node that covers the largest space is partitioned into left profile, frontal and right profile views in the second layer to describe the ROP more accurately, and full-profile and half-profile views are defined in the next layer below; finally in the fourth layer, each view is split into three categories according to their different RIP.
  • each node computes a determinative vector G(x) to determine which child nodes the image should be sent to.
  • For example, G(x) = (1, 1, 0) indicates that the image should be sent to the first and second child nodes, while G(x) = (0, 0, 0) indicates that the image is rejected as a non-face.
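  • A minimal sketch of how such a determinative vector could be computed and used for routing, reusing the Node layout sketched earlier (an assumed formulation, not the patent's code):

```python
import numpy as np

def determinative_vector(node, window, thresholds):
    # collect (sum) the vector outputs of the node's weak classifiers:
    # one confidence per child branch, then threshold to a Boolean vector
    confidence = sum(h(window) for h in node.weak_classifiers)
    return (confidence > thresholds).astype(int)

def route(node, window, thresholds):
    g = determinative_vector(node, window, thresholds)
    if not g.any():
        return []   # G(x) = (0, 0, ..., 0): the window is rejected
    # e.g. G(x) = (1, 1, 0) forwards the window to the first two child nodes
    return [child for child, bit in zip(node.children, g) if bit]
```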
  • The Width-First-Search strategy in the tree-structured detector is shown in Fig. 3.
  • the WFS tree method does not try to give quick pose estimation like those previously proposed, which amounts to loss in accuracy, nor does it simply merge different poses without consideration of their in-class differences like in Li, et al., which amounts to loss in speed.
  • the WFS tree can outperform them by means of paying attention to both diversities and similarities among various poses, which guarantees both high accuracy and faster speed.
  • Vector Boosting is proposed as an extended version of Real AdaBoost in which both the weak classifiers and the final output are vectors rather than scalars.
  • the original inspiration of Vector Boosting is drawn from the multi-class multi-label (MCML) version of the Real AdaBoost, which assigns a set of labels for each sample and decomposes the original problem into k orthogonal binary ones.
  • Vector Boosting deals with them in a unified framework by means of a shared output space of multi-component vectors.
  • Each binary problem has its own "interested" direction in this output space, denoted as its projection vector.
  • different binary problems are not necessarily independent (with orthogonal projection vectors); they could also be correlated (with non-orthogonal projection vectors) in general.
  • the generalized Vector Boosting algorithm is configured to handle a complicated problem, which has been decomposed into n binary ones, in a k-dimensional output space.
  • the weak classifier is called repeatedly under the updated distribution to form a highly accurate classifier.
  • In the kernel update rule (equation 1), the margin of a sample x_i with its label y_i and projection vector v_i is defined as y_i⟨v_i, h(x_i)⟩ due to the vectorization of the output.
  • the orthogonal component of a weak classifier's output makes no contribution to the updating of the sample's weight.
  • Vector Boosting increases the weights of samples that have been wrongly classified according to the projection vector (in its "interested" direction) and decreases those of samples correctly predicted.
  • the final output is the linear combination of all trained weak classifiers (equation 2).
  • An n × k matrix A is employed to transform the k-dimensional output space into an n-dimensional confidence space (equation 3), so that all n projection vectors in the set are constructed.
  • Each dimension of the confidence space corresponds to a certain binary problem.
  • the strong classifier with Boolean outputs is achieved with the threshold vector B (equation 4).
  • H(x, 1) is the dimension for the left profile view and H(x, 3) is for the right profile view (the frontal view and its corresponding projection vector are omitted here for clarity). Both left profile and right profile faces can be well separated from the non-faces in this 2-D space with their own projection vectors v_1 and v_3.
  • With properly chosen projection vectors, Vector Boosting is exactly the same as Real AdaBoost. In fact, due to the consistency of the updating rule (equation 1), Vector Boosting keeps the same training error bound as Real AdaBoost, that is, the training error satisfies P_error ≤ ∏_t Z_t.
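  • The update rule and output construction referred to as equations 1 to 4 can be sketched as follows; this is a reconstruction from the surrounding text, and the array shapes are assumptions:

```python
import numpy as np

def vector_boosting_round(D, X, V, y, h_t):
    # equation (1), reconstructed: the margin of sample i is y_i * <v_i, h_t(x_i)>;
    # samples wrongly classified along their projection vector gain weight,
    # correctly predicted ones lose weight
    margins = y * np.einsum("ik,ik->i", V, h_t(X))
    D_next = D * np.exp(-margins)
    return D_next / D_next.sum()   # the normalizer is Z_t, so P_error <= prod(Z_t)

def strong_output(weak_outputs, A, B):
    # equation (2): F(x) is the linear combination of all weak classifier outputs
    F = np.sum(weak_outputs, axis=0)
    # equation (3): the n x k matrix A maps the k-dimensional output space
    # to an n-dimensional confidence space, one dimension per binary problem
    confidences = A @ F
    # equation (4): the threshold vector B yields the Boolean strong classifier
    return (confidences > B).astype(int)
```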
  • Fig. 4a shows the convergence of the algorithm (e.g. in the lower layer of the cascade when the face and non-face patterns are very similar).
  • Fig. 4b shows the divergences on a Haar feature selected in the fifth round.
  • the coarse granularity of the threshold-type weak classifier greatly impedes the improvement of the speed and accuracy of the detector.
  • A more efficient design for weak classifiers divides the feature space into many bins of equal width and outputs a constant value for each bin.
  • the piece-wise function is able to approximate various distributions more accurately without the constraint of Boolean output, which is essentially a symmetrical equidistant sampling process. It also meets the very requirements of the weak classifier in Real AdaBoost since it is really a natural way of providing a domain partition.
  • A piece-wise function is configured from two parts: one is the division of the feature space; the other is the output constant for each division (i.e. bin).
  • The first part is fixed empirically for each feature, while the latter, the output constant for each bin, can be optimized as follows.
  • The samples S = {(x_1, v_1, y_1), …, (x_m, v_m, y_m)} are under the distribution D_t(i).
  • This loss function is convex in each independent variable c_j.
  • For x_i ∈ bin_j, 1 ≤ j ≤ p, each c_j can therefore be easily optimized with a proper optimization algorithm such as the Newton-step method.
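  • A hedged sketch of fitting such a piece-wise (LUT) weak classifier: the closed-form bin constant below is the optimum for the scalar Real AdaBoost exponential loss and stands in for the patent's vector-valued optimization (e.g. by the Newton-step method):

```python
import numpy as np

def fit_lut_weak_classifier(features, labels, weights, p=8):
    # equal-width division of the feature range into p bins (fixed empirically),
    # then one output constant c_j per bin
    lo, hi = features.min(), features.max()
    bins = np.clip(((features - lo) / (hi - lo) * p).astype(int), 0, p - 1)

    c, eps = np.zeros(p), 1e-9
    for j in range(p):
        in_bin = bins == j
        w_pos = weights[in_bin & (labels > 0)].sum()
        w_neg = weights[in_bin & (labels < 0)].sum()
        # per-bin optimum of the scalar exponential loss (assumed stand-in)
        c[j] = 0.5 * np.log((w_pos + eps) / (w_neg + eps))

    return lambda x: c[int(np.clip((x - lo) / (hi - lo) * p, 0, p - 1))]
```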
  • The novel contributions of the MVFD of this and several other embodiments according to the invention include the WFS tree, the Vector Boosting algorithm and the weak classifiers based on piece-wise functions.
  • A strong classifier is contained in each node and is obtained through the Vector Boosting algorithm from a plurality of weak classifiers; each weak classifier performs feature extraction based on integral-image Haar features and performs weak classification using a piece-wise function implemented with a Look Up Table (LUT).
  • Each leaf node has only one branch that corresponds to a plurality of single-branch connected child leaf nodes; the leaf nodes and their child leaf nodes constitute a cascade classifier to reject non-faces and obtain the faces of the corresponding views.
  • Figs. 8a and 8b are functional block diagrams of a face detection apparatus according to a second embodiment of the invention, implemented by a CPU that plays the roles of an input unit, an output unit and a tree-structured detector.
  • The face image input unit serves as an interface for inputting the data of the original image (hereafter referred to as the original image data) of a human image into the face detection apparatus.
  • the original image data may be still image data or dynamic image data.
  • The face image input unit may adopt any structure capable of inputting the original image data into the face detection apparatus.
  • The original image data may be input into the face detection apparatus via a network, such as a Local Area Network or the World Wide Web.
  • In this case, the input unit may adopt the structure of a network interface.
  • The original image data may also be input into the face detection apparatus from a digital camera, a scanner, a personal computer, a storage device (such as a hard disk drive) and the like.
  • In this case, the input unit may adopt a structure based on standards for connecting the digital camera, personal computer, storage device and the like to the face detection apparatus for data communication, for instance wired-connection standards such as Universal Serial Bus (USB) and Small Computer System Interface (SCSI), and wireless-connection standards such as Bluetooth.
  • The original image data stored in a storage medium, such as various flash memories, a floppy disk (registered trademark), a compact disc (CD) or a digital versatile disc (DVD), may also be input into the face detection apparatus.
  • In this case, the input unit may adopt the structure of an apparatus for reading data out of the storage medium, such as a flash memory reader, a floppy disk drive, a CD drive or a DVD drive.
  • The face detection apparatus may also be contained in an image pick-up apparatus such as a digital camera, or in a device provided with a digital camera, such as a Personal Digital Assistant (PDA).
  • the picked-up human images are input into the face detection apparatus as the original image data.
  • In this case, the input unit may adopt a structure using a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, or the structure of an interface for inputting the original image data picked up by CCD or CMOS sensors into the face detection apparatus.
  • A human image input into an image output device may be input, as output data of that image output device, into the face detection apparatus as the original image data.
  • In this case, the input unit may adopt the structure of an apparatus that transforms the original image data input into these image output devices into data that can be handled by the face detection apparatus.
  • The input unit may also adopt a structure suitable for more than one of the above cases.
  • A function that cuts the image data of a sub-window out of the current image data while moving the sub-window, and sends it to the face detector, may be included in the face image input unit. With this function, a face can be detected from an image that includes a background.
  • The output unit serves as an interface for outputting, to the outside of the face detection apparatus, data representing whether the tree-structured detector has detected human faces and/or data representing the positions and sizes of the detected faces.
  • The output unit may adopt any prior structure for outputting the data related to the face detection results from the face detection apparatus.
  • The data related to the detection results may be output from the face detection apparatus via a network.
  • In this case, the output unit may adopt the structure of a network interface.
  • The data related to the detection results may be output to other information processing devices, such as a personal computer, or to storage devices.
  • In this case, the output unit may adopt a structure based on standards for connecting such information processing devices or storage devices to the face detection apparatus for data communication.
  • the data related to detection results may be output (written) to a storage medium.
  • In this case, the output unit may adopt the structure of an apparatus for writing the data into these storage devices or storage media, such as a flash memory recorder, a floppy disk drive, a CD-R drive or a DVD-R drive.
  • the data output from the output unit may be used in order to output graphics representing face regions detected by the face detection apparatus to a display device such as a monitor.
  • In this case, the output unit may adopt the structure of an interface able to communicate data with a display device such as a monitor, or of an interface that connects to such a display device or submits the data to a built-in information processing device.
  • Fig. 6 shows an example displayed on the monitor.
  • In the case that the face detection apparatus is contained in a digital camera or in various devices having digital cameras, the output unit may adopt a structure wherein the digital camera performs controls related to photographing, such as focus control and exposure compensation, with the data output from the output unit as a reference.
  • In this case, the output unit may adopt the structure of an interface able to communicate data with the information processing device inside the digital camera.
  • In the case that, for instance, the face detection apparatus is contained in, or connected to, an information processing device performing image compensation processing, the output unit may adopt a structure wherein that device determines the processing regions and processing contents of the image compensation with the data output from the output unit as a reference.
  • The output unit may, for instance, adopt the structure of an interface able to communicate data with such an information processing device and the devices therein.
  • The output unit may also adopt a structure suitable for more than one of the above cases.
  • Fig. 11 shows the configuration of the tree-structured multi-view face classifier, wherein each node contains a layer classifier (that is, a strong classifier), and each layer classifier is obtained from many LUT-type weak classifiers based on Haar features through the continuous AdaBoost algorithm.
  • the Haar feature is a kind of simple rectangular feature.
  • Each Haar feature is generally defined as the difference of the pixel gray-scale sums of two regions in an image sub-window, each region being configured by a plurality of rectangles (basic blocks).
  • Although the Haar feature's ability to describe a pattern is weaker than that of some other, more complicated features, it is the ideal feature for selection by the weak classifier because it can be calculated quickly through an integral image.
  • the properties of a Haar feature include length and width of a basic block, position of the feature relative to sub-window and its class (shape).
  • The feature scales with the sub-window during the detection procedure, keeping its relative position and relative size with respect to the sub-window constant. To accelerate feature calculation, some redundant information is pre-calculated for each Haar feature as the sub-window varies.
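  • For illustration, an integral image and one assumed two-rectangle Haar feature can be computed as follows; any rectangle sum costs only four table lookups, regardless of the rectangle's size:

```python
import numpy as np

def integral_image(img):
    # summed-area table, zero-padded so that ii[y, x] is the sum of all
    # pixels above and to the left of (y, x)
    ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    # sum of the pixels in the rectangle at (x, y) with width w and height h
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, x, y, w, h):
    # one assumed feature class: gray-scale difference between the left and
    # right half-rectangles of a region inside the sub-window
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```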
  • a LUT-type weak classifier may be trained according to each Haar feature.
  • The weak classifier divides the value range of the Haar feature into n equal parts and gives a confidence of binary classification (face or not) for each equally divided region, where n is the length of the LUT.
  • the weak classifier of this embodiment according to the invention contains multi-LUTs that give confidence information of faces with respect to different views based on the same Haar feature.
  • the classifiers of different views share the same Haar feature (shared feature or mutual feature).
  • The multi-LUT-type weak classifier is capable of giving the classification information for faces of each view synchronously, thereby achieving a better classification; compared with methods that individually train classifiers for each view, the multi-LUT-type weak classifier improves the utilization efficiency of each Haar feature, so that fewer Haar features are needed for the same correct rate and the detection speed is enhanced accordingly.
  • Fig. 9 shows an example of a multi-LUT-type weak classifier, wherein three LUTs are provided to output confidences for the cases of 30°, 0° and -30° RIP, respectively.
  • The subscripts (indexes) of the three LUTs are calculated from the same Haar feature.
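  • A minimal sketch of such a multi-LUT weak classifier, and of a layer (strong) classifier that linearly combines several of them; the helper names and shapes are assumptions:

```python
import numpy as np

def multi_lut_weak_classifier(luts, bin_of):
    # one shared Haar feature indexes several LUTs at once, e.g. three LUTs
    # giving confidences for +30°, 0° and -30° RIP
    def h(window):
        j = bin_of(window)              # the same bin index for every LUT
        return np.array([lut[j] for lut in luts])
    return h

def layer_classifier(weak_classifiers, thresholds, window):
    # the layer classifier is the linear combination (here a plain sum) of
    # its weak classifiers: one confidence per covered view, one threshold
    # per branch
    confidences = sum(h(window) for h in weak_classifiers)
    return confidences, confidences > thresholds
```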
  • The continuous AdaBoost algorithm, as one of the weak learning methods, is capable of linearly combining a plurality of weak classifiers into a strong classifier.
  • The weak learning procedure of the continuous AdaBoost algorithm is based on a large amount of calibrated sample data: it adjusts the weights of the samples, continually selecting new weak classifiers to combine linearly with the existing ones so as to reduce the error rate on the training set until convergence.
  • Related theorems have shown that the algorithm's generalization ability on test sets is rather good.
  • A series of strong classifiers may be obtained by applying the continuous AdaBoost algorithm to LUT-type weak classifiers and using different training parameters and different classes of samples.
  • each strong classifier is called a "layer classifier" (the linear combination of a set of weak classifiers is regarded as one layer).
  • A layer classifier may give the confidences of the sub-window for faces of various different views (the number of different views equals the number of LUTs of the weak classifiers in this layer classifier).
  • Fig. 10 shows an example of a layer classifier obtained by the linear combination of the weak classifiers shown in Fig. 9 through the continuous AdaBoost algorithm, the layer classifier being capable of outputting face/non-face confidences for faces with 30°, 0° and -30° RIP.
  • Fig. 11 shows the configuration of the present tree-structured detector, wherein each node is configured by a layer classifier and the branch number of each node equals the number of classes of face views whose confidences can be output by this layer classifier.
  • The root node can output five confidences for the five views of left full profile, left half profile, frontal, right half profile and right full profile respectively, so the root node has five child nodes. If the confidence output for a face view of an image sub-window exceeds a certain threshold in the detection at the root node, the sub-window image is input into the corresponding child node for further detection.
  • The tree-structured classifier rejects non-faces layer by layer, and detects the sub-windows eventually arriving at the leaf nodes as faces. Meanwhile, it determines the views of the faces according to the particular leaf nodes at which the sub-windows arrive.
  • The face views that can be covered by a tree-structured detector include 180° ROP (i.e. frontal and profile variation of left-right rotation) and 90° RIP.
  • In the sub-window search, for a gray-scale image sub-window input into the tree-structured classifier, and for each of the various poses covered by the classifier, the classifier outputs the confidence that the sub-window is a face of that pose if the sub-window passes the detection.
  • For face detection in pictures, it is conventionally necessary to enumerate every sub-window in a picture and detect each through the tree-structured classifier to obtain the detection result for the whole picture.
  • For a picture of 320 × 240 pixels there are in total 5,257,476 rectangular sub-windows with sizes from 24 × 24 to 240 × 240. The time complexity of a full search is undesirable.
  • A pixel-by-pixel increment of the window size may be changed into an increment by ratio, i.e. the size is multiplied by a scale ratio each time;
  • point-by-point scanning may be changed into variable-resolution scanning, i.e. first scanning with coarse-resolution grids and, where there is a high possibility that faces are present, then scanning the surroundings with fine-resolution grids, as sketched below.
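  • The two reductions can be sketched as follows; the 1.25 scale ratio and the stride proportional to the window size are illustrative values, not the patent's:

```python
def enumerate_subwindows(width=320, height=240, min_size=24,
                         scale=1.25, stride_ratio=0.1):
    # window size grows by a fixed ratio instead of pixel by pixel, and the
    # scan step grows with the window instead of staying point by point
    size = min_size
    while size <= min(width, height):
        step = max(1, int(size * stride_ratio))
        for y in range(0, height - size + 1, step):
            for x in range(0, width - size + 1, step):
                yield x, y, size
        size = int(size * scale)
```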
  • Fig. 12 shows a possible search procedure of the tree-structured detector.
  • a bold arrow represents a possible search path.
  • Grey nodes represent the nodes remaining in a search queue after the coarse search ends in such case.
  • The detection of each sub-window comprises two steps: a coarse search and a fine search. The two are compared in Table 1.

    Table 1
    Number of layers searched: coarse search, 4-6 layers; fine search, the remaining layers.
    Search manner: coarse search, width-first search; fine search, simple search from top down.
  • For each node (layer classifier) at which the search arrives, the corresponding Haar features are calculated from the sub-window to be detected, thereby obtaining one or more confidences (their number determined by the branch number of the node). For each branch whose confidence exceeds its threshold, the node to which the branch leads is added to the search queue. The WFS continues until the layer number of the nodes in the queue reaches the largest layer number defined for the coarse search.
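  • A hedged sketch of this coarse width-first search over the tree, reusing the Node and route() sketches given earlier; the queue and the layer limit follow the description above:

```python
from collections import deque

def coarse_search(root, window, thresholds_for, max_layer=6):
    # WFS: branches whose confidence exceeds the node's threshold are queued
    # until the coarse-search layer limit (4-6 layers) is reached
    queue, survivors = deque([(root, 1)]), []
    while queue:
        node, layer = queue.popleft()
        if node.is_leaf() or layer >= max_layer:
            survivors.append(node)      # handed over to the fine search
            continue
        for child in route(node, window, thresholds_for(node)):
            queue.append((child, layer + 1))
    return survivors
```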
  • each leaf node corresponds to a face detection result, which is recorded and output.
  • For training, the false report rate f and the detection rate d are set for each node, together with the expected overall false sample rate F for all views; the sample group of the subject (faces) is set at P and the sample group of the non-subject (non-faces) is set at N.
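  • A hedged sketch of cascade training with these parameters, in the common bootstrap style; train_node is an assumed helper (returning a node classifier and its measured false report rate), not part of the patent:

```python
def train_cascade(P, N, f, d, F_target, train_node):
    # add nodes, each trained to reach detection rate d at false report
    # rate f, until the accumulated false report rate drops below F_target
    nodes, current_F = [], 1.0
    while current_F > F_target and N:
        clf, measured_f = train_node(P, N, target_d=d, target_f=f)
        nodes.append(clf)
        current_F *= measured_f
        # bootstrap: keep only the negatives the new node misclassifies
        N = [x for x in N if clf(x)]
    return nodes
```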
  • The MVFD of this embodiment takes only about 40 ms for the detection of a 320 × 240 image. Compared with the individual cascade method described by Wu et al., which reported 80 ms, the consumed time is reduced by half. Since the MVFD covers ±45° RIP, it is simply rotated by 90°, 180° and 270° to construct three further detectors in order to fully cover 360° RIP, and these detectors work together to deal with the rotation-invariant problem.
  • The Vector Boosting algorithm used in several aspects of the present invention can be regarded as a structural expansion of the AdaBoost algorithm: with properly predefined projection vectors, it works exactly as the classical Real AdaBoost algorithm.
  • In this sense, the Vector Boosting algorithm covers the classical AdaBoost algorithm.
  • the main contribution of the Vector Boosting is to deal with both the simple binary classification version and the complicated multi-class multi-label version of the classical Adaboost algorithms in a unified framework.
  • Classical simple binary classification limits the output of the classifier to a scalar space for optimization, and a multi-class problem is decomposed into a plurality of independent binary classification problems dealt with separately. Although such a method is clear and direct, it is not tenable for complicated multi-class multi-label problems, and it is difficult to build links among the decomposed binary classification problems.
  • The version of the Vector Boosting algorithm used in several aspects of the present invention can deal with complicated problems comprising a plurality of decomposed binary classification problems in the same vector output space, which unifies the prior AdaBoost algorithms; moreover, Vector Boosting takes into account the correlation among the different classification problems, that is to say, it is an extended version of the AdaBoost algorithm. Although the Vector Boosting algorithm was developed for the MVFD problem, it can be applied to other complicated classification problems as well.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
EP06255183A 2005-10-09 2006-10-06 Apparatus and method for detecting a person Withdrawn EP1772816A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200510108159 2005-10-09

Publications (1)

Publication Number Publication Date
EP1772816A1 true EP1772816A1 (fr) 2007-04-11

Family

ID=37591883

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06255183A Withdrawn EP1772816A1 (fr) 2005-10-09 2006-10-06 Appareil en procédé de détection d'une personne

Country Status (1)

Country Link
EP (1) EP1772816A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102868934A (zh) * 2012-08-01 2013-01-09 青岛海信传媒网络技术有限公司 Method and device for retrieving video object information based on a smart TV
US8396263B2 (en) 2008-12-30 2013-03-12 Nokia Corporation Method, apparatus and computer program product for providing face pose estimation
CN109190512A (zh) * 2018-08-13 2019-01-11 成都盯盯科技有限公司 Face detection method, apparatus, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213810A1 (en) * 2004-03-29 2005-09-29 Kohtaro Sabe Information processing apparatus and method, recording medium, and program
US20050220336A1 (en) * 2004-03-26 2005-10-06 Kohtaro Sabe Information processing apparatus and method, recording medium, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050220336A1 (en) * 2004-03-26 2005-10-06 Kohtaro Sabe Information processing apparatus and method, recording medium, and program
US20050213810A1 (en) * 2004-03-29 2005-09-29 Kohtaro Sabe Information processing apparatus and method, recording medium, and program

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
B. FRÖBA AND A. ERNST: "Fast frontal-view face detection using a multi-path decision tree", LECTURE NOTES IN COMPUTER SCIENCE, vol. 2688, 2003, pages 921 - 928, XP002416172 *
CHANG HUANG ET AL: "Omni-directional face detection based on real adaboost", IMAGE PROCESSING, 2004. ICIP '04. 2004 INTERNATIONAL CONFERENCE ON SINGAPORE 24-27 OCT. 2004, PISCATAWAY, NJ, USA,IEEE, vol. 1, 24 October 2004 (2004-10-24), pages 593 - 596, XP010784887, ISBN: 0-7803-8554-3 *
M. JONES AND P. VIOLA: "Fast multi-view face detection", TECHNICAL REPORT, MITSUBISHI RESEARCH LABORATORIES, no. TR2003-96, June 2003 (2003-06-01), pages 1 - 8, XP002416302 *
Y.-Y. LIN AND T.-L. LIU: "Robust face detection with multi-class boosting", PROC. IEEE CONF. ON COMPUTER VISION AND PATTERN RECOGNITION, vol. 1, 20 June 2005 (2005-06-20), pages 680 - 687, XP002416171 *
Z. YANG ET AL.: "Face pose estimation and its application in video shot selection", PROC. INTL. CONF. ON PATTERN RECOGNITION, vol. 1, 23 August 2004 (2004-08-23), pages 322 - 325, XP002416173 *


Similar Documents

Publication Publication Date Title
US7876965B2 (en) Apparatus and method for detecting a particular subject
Huang et al. Vector boosting for rotation invariant multi-view face detection
US10796145B2 (en) Method and apparatus for separating text and figures in document images
Pandey et al. Hybrid deep neural network with adaptive galactic swarm optimization for text extraction from scene images
JP4724125B2 (ja) Face recognition system
Tu Probabilistic boosting-tree: Learning discriminative models for classification, recognition, and clustering
US20100021066A1 (en) Information processing apparatus and method, program, and recording medium
EP3203417B1 (fr) Method for detecting texts included in an image and apparatus using the same
Zhou et al. Learning to integrate occlusion-specific detectors for heavily occluded pedestrian detection
Said et al. Human detection based on integral Histograms of Oriented Gradients and SVM
US20100111375A1 (en) Method for Determining Atributes of Faces in Images
Benkaddour et al. Human age and gender classification using convolutional neural network
CN110892409A (zh) Method and apparatus for analyzing an image
Niinuma et al. Unmasking the devil in the details: What works for deep facial action coding?
EP1772816A1 (fr) Apparatus and method for detecting a person
Imani et al. Semi-supervised Persian font recognition
Almeida et al. Automatic age detection based on facial images
Pourghassem A hierarchical logo detection and recognition algorithm using two-stage segmentation and multiple classifiers
Wang et al. Histogram feature-based Fisher linear discriminant for face detection
Smiatacz et al. Local texture pattern selection for efficient face recognition and tracking
Kim et al. Hidden conditional ordinal random fields for sequence classification
Wilkowski et al. City Bus Monitoring Supported by Computer Vision and Machine Learning Algorithms
Das et al. Design of pedestrian detectors using combinations of scale spaces and classifiers
Garcia-Ortiz et al. A Fast-RCNN implementation for human silhouette detection in video sequences
E Irhebhude et al. Recognition of mangoes and oranges colour and texture features and locality preserving projection

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20061026

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20080131

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20120131