CN104346601B - Object identifying method and equipment
Object identifying method and equipment
- Publication number: CN104346601B
- Application number: CN201310320936.8A
- Authority: CN (China)
- Prior art keywords: object attribute, pair, object region, face, region
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an object recognition method and apparatus. The apparatus includes an extraction unit configured to, for each object-attribute pair in a set of predefined object attributes, extract from an object region the feature corresponding to that attribute pair based on the dissimilarity of the attribute pair; and a recognition unit configured to recognize the object attribute of the object region based on the extracted feature of the object region.
Description
Technical field
The present invention relates to methods and apparatus for object recognition in images. More particularly, the invention relates to methods and apparatus for recognizing the object attributes of an object region in an image.
Background technology
In recent years, object detection/recognition in images has been widely used in the fields of image processing, computer vision and pattern recognition, and plays an important role therein; the object can be any of a face, a hand, a body, etc.

A common kind of object detection/recognition is the detection and recognition of faces in an image. In face recognition, one usually performs recognition of an attribute (such as an expression) of each face in an image including at least one face image, and there are a variety of techniques for realizing such face recognition.

Hereinafter, taking facial expression recognition of a face included in an image as an example, current prior-art techniques for facial attribute recognition in images will be explained. The basic principle of a facial expression recognition method follows the framework shown in Fig. 1. More particularly, for an input face image, the facial expression recognition method first obtains the face region included in the image (face detection), and then, according to facial feature points extracted from the face region, aligns faces that may be in different poses to the corresponding face regions (face registration). Next, the method extracts features of the aligned face image (feature extraction), and finally determines the expression corresponding to the face region according to the extracted features.
For feature extraction, some methods focus on salient regions in the face image. As shown in Fig. 2, a salient region refers to a region of the face image that is generally regarded as representing a characteristic part of the face (such as the eye regions, the nose region, the mouth region, etc.).

In such a case, the features of four salient regions are extracted respectively (that is, the left-eye region feature f_lefteye, the right-eye region feature f_righteye, the nose region feature f_nose and the mouth region feature f_mouth), and the feature of the face (f_total) is represented by concatenating these four salient-region features, so that

f_total = f_lefteye + f_righteye + f_nose + f_mouth

where "+" denotes concatenation. The feature f_total is then used to predict the expression of the face corresponding to the face image.
In general, such salient-region-based methods extract the features of the salient regions rather than the feature of the whole face image, and then predict the expression of the face according to the extracted features, as shown in the left part of Fig. 3, which is a flowchart of prior-art facial expression recognition based on salient regions in a face image. The right part of Fig. 3 schematically shows an example of such a salient-region-based facial expression recognition method, in which, after several facial feature points are detected in the face image, four salient regions (that is, the left-eye region, the right-eye region, the nose region and the mouth region) are located correspondingly.
U.S. patent application US2012/0169895A1 in the name of Industrial Technology Research Institute (TW) discloses a method for capturing facial expressions based on salient regions in a face image. The method captures the salient-region features of the face in an image from four salient regions respectively to generate a target feature vector, and then compares the target feature vector with a plurality of previously stored feature vectors to generate a parameter value. When the parameter value is higher than a threshold, the method selects one of the images as a target image. Based on the target image, facial expression recognition and classification can be further performed; for example, the target image is recognized to obtain a facial expression state, and the image is classified according to the facial expression state.
As an alternative to salient regions, other types of representative regions of a face image can be used for facial attribute recognition.

U.S. patent application US2010/0111375A1 in the name of Mitsubishi Electric Research Laboratories, Inc. discloses a method of recognizing facial attributes in an image based on a set of patches included in the face image. More specifically, the method divides the face image into a set of patches, compares each patch with prototype patches one by one to determine the matching prototype patch, and determines a set of attributes of the face according to the attribute sets associated with the matching prototype patches. Here, the set of patches extracted in the method can be equivalent to portions of the salient regions.
U.S. patent application US2012/0076418A1 in the name of Renesas Electronics Corporation discloses a facial attribute estimation method and apparatus. The method extracts a specific region from the face region, and sets small regions within the specific region. Then, using a similarity calculation method, the method computes the similarity between each small region and each of the stored facial components one by one, so as to determine the facial attribute. Here, apart from the number of specific regions, the specific regions used in the method can be equivalent to salient regions.
The above prior-art methods usually extract features from salient regions or their equivalents in the face image (such as a plurality of patches or a small number of specific regions), and compare the extracted feature with each of a set of predefined features corresponding to a plurality of known facial attributes (that is, one-to-one comparison), so as to perform facial attribute recognition.

Moreover, the located salient regions or equivalent regions in the face image to be recognized do not change during recognition; therefore, for all the comparisons performed during recognition, there exists only one constant feature vector derived from the face image. That is, only one feature vector from the face image is compared with each of the plurality of previously stored feature vectors corresponding to the plurality of known facial attributes.

However, using one constant feature of the face region to be recognized for all the one-to-one comparisons during recognition may not be efficient enough, so that the face region cannot be recognized accurately.
It is noted that some salient regions may not be discriminative for certain types of expressions. For example, for a sad expression and a neutral expression, there is no large difference in the nose region; therefore, the nose region is not discriminative for distinguishing a sad expression from a neutral expression. Another problem is that some parts of a salient region are not discriminative. For example, for a sad expression and a neutral expression, the eyebrow part of the eye region is not discriminative.

That is, if the located salient regions, and thus the features extracted from those regions for comparison against the set of predefined facial attributes, are constant, then some regions and some parts of regions may be redundant for the recognition of certain types of expressions in certain expression pairs.

As described above, there is still a need for a method that can accurately recognize the attribute of a face region based on more discriminative features of the face region in an image.
Summary of the invention

The present invention was developed in view of object recognition in images, and aims to solve the problems described above.

According to one aspect of the invention, there is provided a method for recognizing an object region in an image, the method including: an extraction step of, for each object-attribute pair in a set of predefined object attributes, extracting from the object region the feature corresponding to that attribute pair based on the dissimilarity of the attribute pair; and a recognition step of recognizing the object attribute of the object region based on the extracted feature of the object region.

According to another aspect of the invention, there is provided an apparatus for recognizing an object region in an image, including: an extraction unit configured to, for each object-attribute pair in the set of predefined object attributes, extract from the object region the feature corresponding to that attribute pair based on the dissimilarity of the attribute pair; and a recognition unit configured to recognize the object attribute of the object region based on the extracted feature of the object region.

With the method and apparatus according to the invention, for each object-attribute pair in the set of predefined object attributes, the feature of the object region corresponding to that attribute pair is extracted based on the dissimilarity of the attribute pair, and this feature is used for object recognition. Therefore, recognition efficiency and accuracy can be improved.

Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Description of the drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. In the drawings, similar reference numerals indicate similar items.

Fig. 1 shows a typical process of facial expression recognition in the prior art.
Fig. 2 shows typical salient regions in a face.
Fig. 3 is a flowchart showing a facial expression recognition method in the prior art.
Fig. 4 is a block diagram showing an exemplary hardware configuration of a computer system that can implement embodiments of the present invention.
Fig. 5 is a flowchart showing the object attribute recognition method according to the present invention.
Fig. 6 is a block diagram showing the object attribute recognition apparatus according to the present invention.
Fig. 7 is a diagram for explaining the face region in a face image.
Fig. 8 schematically shows the feature points in a face region.
Fig. 9 is a flowchart showing the process in the extraction step.
Fig. 10 schematically shows the localization of the organ regions in a face region.
Fig. 11 schematically shows examples of facial expression pairs.
Fig. 12 is a flowchart schematically showing the determination of the template of a facial expression pair.
Fig. 13 shows several exemplary average images.
Fig. 14 shows the correspondingly divided images of each expression of a facial expression pair.
Fig. 15 shows the template of a facial expression pair obtained from the divided images of each expression of the pair.
Fig. 16 shows the localization of dissimilar pixel blocks in a face region for a facial expression pair, depending on the template of the pair.
Fig. 17 is a flowchart showing the process in the feature extraction step.
Fig. 18 is a flowchart showing the process in one implementation of the identification step.
Fig. 19 is a flowchart showing the process in another implementation of the identification step.
Detailed description of the embodiments

Embodiments of the present invention will be described in detail below with reference to the drawings.

It should be noted that similar reference numerals and letters indicate similar items in the drawings, so that once an item is defined in one drawing, it need not be described again for subsequent drawings.
The meanings of certain terms used in the context of the present disclosure will be explained first.

In the context of the present disclosure, an image may refer to any of a plurality of types of images, such as a color image, a gray-scale image, etc. Since the processing of the present invention is performed mainly on gray-scale images, unless otherwise stated, an image in the present disclosure will refer to a gray-scale image including a plurality of pixels.

It is noted that the solution of the present invention can also be applied to other types of images (such as color images), as long as such an image can be converted into a gray-scale image and the processing of the present invention can be performed on the converted gray-scale image.

An image can usually include at least one object image, and an object image generally comprises an object region; therefore, in the context of the present disclosure, object image and object region are equivalent and are used interchangeably. A common object in an image is a face in the image.

A feature of an object region in an image generally characterizes a representative property of that object region, and can usually be a color feature, a texture feature, a shape feature, etc. A common feature is a color feature, which is a global feature of an image and can usually be obtained from a color histogram based on color bins. The feature of an image is usually obtained in the form of a vector, each component of which corresponds to one color bin.
An object attribute refers to an apparent state of the object that can correspond to different conditions, and object attributes can belong to different categories. Taking a face as an example, the category of a facial attribute may be selected from the group comprising facial expression, gender of the person corresponding to the face (when the face is a human face), and age of that person; the category of a facial attribute is not limited thereto, and can also be other categories. When a facial attribute corresponds to facial expression, the facial attribute can be one kind of expression (for example, sad, smiling, laughing, etc.).

Of course, object attributes are not limited thereto; for example, the object can be a human body, and the object attributes can correspond to different physical states, such as when the person is running, standing, kneeling or lying down.

An object-attribute pair is a pair constituted by any predefined number of object attributes contained in the set of predefined object attributes; all the object attributes in the set can be distinguished within a certain category, and the set can be prepared in advance. The set of predefined object attributes can form at least one object-attribute pair, each object-attribute pair having the same number of object attributes.

The object attributes included in an object-attribute pair can be selected arbitrarily from the set of predefined object attributes, and in such a case the set of predefined object attributes can yield C_n^t object-attribute pairs, where n is the number of object attributes in the set and t is the number of object attributes included in an object-attribute pair.

Preferably, the number of object attributes included in an object-attribute pair can be 2.

Preferably, the object attributes of an object-attribute pair can be attributes between which the difference is large or even opposite. For example, taking a face as an example, an object-attribute pair can particularly be constituted by a laughing expression and a crying expression, so that the parts extracted for such an object-attribute pair are more discriminative.

In the present disclosure, the terms "first", "second", etc. are used only to distinguish elements or steps, and do not indicate temporal order, preference or importance.
Fig. 4 is a block diagram showing the hardware configuration of a computer system 1000 that can implement embodiments of the present invention.

As shown in Fig. 4, the computer system comprises a computer 1110. The computer 1110 comprises a processing unit 1120, a system memory 1130, a non-removable non-volatile memory interface 1140, a removable non-volatile memory interface 1150, a user input interface 1160, a network interface 1170, a video interface 1190 and an output peripheral interface 1195, which are connected via a system bus 1121.

The system memory 1130 comprises a ROM (read-only memory) 1131 and a RAM (random access memory) 1132. A BIOS (basic input output system) 1133 resides in the ROM 1131. An operating system 1134, application programs 1135, other program modules 1136 and some program data 1137 reside in the RAM 1132.

A non-removable non-volatile memory 1141, such as a hard disk, is connected to the non-removable non-volatile memory interface 1140. The non-removable non-volatile memory 1141 can store, for example, an operating system 1144, application programs 1145, other program modules 1146 and some program data 1147.

Removable non-volatile memories, such as a floppy drive 1151 and a CD-ROM drive 1155, are connected to the removable non-volatile memory interface 1150. For example, a floppy disk 1152 can be inserted into the floppy drive 1151, and a CD (compact disc) 1156 can be inserted into the CD-ROM drive 1155.

Input devices such as a mouse 1161 and a keyboard 1162 are connected to the user input interface 1160.

The computer 1110 can be connected to a remote computer 1180 through the network interface 1170. For example, the network interface 1170 can be connected to the remote computer 1180 via a local area network 1171. Alternatively, the network interface 1170 can be connected to a modem (modulator-demodulator) 1172, and the modem 1172 is connected to the remote computer 1180 via a wide area network 1173.

The remote computer 1180 may comprise a memory 1181, such as a hard disk, which stores remote application programs 1185.

The video interface 1190 is connected to a monitor 1191.

The output peripheral interface 1195 is connected to a printer 1196 and speakers 1197.

The computer system shown in Fig. 4 is merely illustrative, and is in no way intended to limit the invention, its application, or uses.

The computer system shown in Fig. 4 may be implemented, for any embodiment, as a stand-alone computer or as a processing system in an apparatus, in which one or more unnecessary components can be removed or one or more additional components can be added.
An object recognition method according to a basic embodiment of the present invention is described below with reference to Fig. 5, which shows the process of the method according to this basic embodiment.

In step S100 (hereinafter referred to as the extraction step), for each object-attribute pair in the set of predefined object attributes, the feature of the object region corresponding to that attribute pair is extracted based on the dissimilarity of the attribute pair.

As described above, all the object attributes of the set of predefined object attributes belong to the same category, and an object-attribute pair can be constituted by any predetermined number (such as two) of the object attributes included in the set of predefined object attributes.

Alternatively, an object-attribute pair can be constituted by a predetermined number (such as two) of object attributes that satisfy a predetermined relationship between them.

In one implementation, the object region can be an aligned object region, and the alignment of the object region can be realized in many ways (such as based on feature points detected in the object region). It is noted that whether or not the object region is aligned is not essential for the realization of the extraction operation.

In step S200 (hereinafter referred to as the identification step), the object attribute of the object region is recognized based on the extracted feature of the object region.
In one implementation, the process of the extraction step may include a process of locating, in the object region, at least one block corresponding to the template of the object-attribute pair, where the template characterizes the dissimilarity between the object attributes of the pair (a localization step); and a process of extracting the feature of the object region corresponding to the attribute pair based on the at least one located block (a feature extraction step).

Here, the template can be regarded as a dissimilarity template characterizing the dissimilarity between the object attributes of the pair, and usually consists of at least one dissimilar pixel block formed between the images of the object attributes of the pair. In fact, each dissimilar pixel block can correspond to corresponding pixel blocks in the images of the predetermined number of object attributes included in the pair, the corresponding pixel blocks being located at corresponding positions of the respective images and having corresponding sizes; the positions and sizes of the dissimilar pixel blocks in the respective images can be mapped to one another according to a predetermined rule (for example, when the images have different sizes, according to the ratio between the sizes of the images of the respective object attributes).

Preferably, the image of the object region and the images of the object attributes included in the pair can be preprocessed (for example, aligned) so as to have the same size. In that case, each dissimilar pixel block of the template can correspond to corresponding pixel blocks in the images of the predetermined number of object attributes included in the pair, the corresponding pixel blocks being located at the same position in each image and having the same size.

Therefore, the at least one block located in the object region for an object-attribute pair can be the pixel blocks located according to such positions and sizes of the dissimilar pixel blocks, as long as the pixel blocks can be mapped to one another according to the predetermined rule; preferably, the pixel blocks have the same positions and sizes.

The size of each pixel block can be set freely without affecting the realization of the solution of the present invention.
In one implementation, the template of an object-attribute pair can be obtained in the following way: dividing the two average object-region images, corresponding respectively to the two object attributes included in the pair, into a plurality of blocks that correspond to each other; extracting a feature from each of the blocks of each divided average object-region image corresponding to each object attribute; determining the similarity between the features of corresponding blocks in the two divided average object-region images; and selecting those blocks of the two divided average object-region images whose similarity is below a predefined threshold to form the template.

Here, dividing correspondingly means that the images of the object attributes of the pair can be divided by corresponding patterns, so that each divided block of one attribute image can be mapped, according to a predefined rule, to a divided block of the other attribute image. Preferably, the images of the object attributes of the pair have the same size, so that the division pattern is the same for each image and has the same scale, whereby a divided block of one attribute image and the corresponding divided block of the other attribute image have the same position and size. The division pattern can be any pattern, such as a grid.

The template of an object-attribute pair can be prepared and stored in advance, or can be generated during the extraction operation. The operation of obtaining the template of an object-attribute pair may or may not be included in the extraction step.

The average object-region image corresponding to an object attribute can be prepared in advance in many ways; in a general realization, it can be generated by averaging a plurality of similar object-region images of the same size corresponding to the same object attribute.
Preferably, the localization process can be performed based on auxiliary regions included in the object region, so as to further improve operation efficiency. The auxiliary regions can be located in many ways (for example, depending on the positions of feature points identified in the object region). In such a case, the localization process can locate, within the auxiliary regions, at least one block corresponding to the template characterizing the dissimilarity of the object-attribute pair, and the template characterizing the dissimilarity of the pair may likewise be determined based on such auxiliary regions in the images of the object attributes of the pair, rather than based on the entire images of the object attributes of the pair.

In one implementation, the feature extraction process may include extracting a feature from each of the at least one block located in the object region, and concatenating the extracted features of the blocks as the feature of the object region. Therefore, the finally extracted feature is generally in the form of a vector, each component of which corresponds to one block.
In the identification step, the recognition of the object attribute can be implemented in a number of ways.

In one implementation, recognition can be realized in a so-called "one against one" manner. In this manner, for the set of predefined object attributes, the object attributes are voted on in C_n^t rounds, where n is the number of object attributes included in the set, and t is the number of object attributes included in an object-attribute pair, preferably 2. The object attribute with the highest score will be determined as the object attribute of the object region.

More specifically, the identification step may include: an identifying step of, for each object-attribute pair in the set of predefined object attributes, identifying, based on the feature of the object region corresponding to that pair, one object attribute among the two object attributes included in the pair as corresponding to the object region, and increasing the score of the object attribute corresponding to the object region by a predetermined value, wherein all the object attributes included in the set of predefined object attributes have the same initial score; and an attribute determination step of determining that the object attribute with the highest score in the set of predefined object attributes is the object attribute of the object region.
In another implementation, recognition can be realized in a so-called "one beating one" manner. In this manner, where the number of predefined object attributes included in the set of predefined object attributes is n, the object attribute can be determined in n-1 rounds, wherein only the attribute that wins in one round, for one object-attribute pair, advances to the next round, and the attribute that finally wins will be determined as the object attribute.

More specifically, the identification step may include: an identifying step of, for one object-attribute pair in the set of predefined object attributes, identifying, based on the feature of the object region corresponding to that pair, one object attribute among the two object attributes included in the pair as corresponding to the object region; and an attribute determination step of determining the object attribute of the object region based on the object attribute corresponding to the object region and the remaining object attributes in the set of predefined object attributes other than those of the pair. If the number of remaining object attributes is equal to 0, the object attribute corresponding to the object region is determined as the object attribute of the object region; otherwise, the object attribute corresponding to the object region and the remaining object attributes in the set of predefined object attributes other than those of the pair are regrouped into a new object-attribute set, and the identifying step and the attribute determination step are performed in turn on the new object-attribute set.

It is noted that the above method is performed each time for one object region of an image that may include at least one object region, and is repeated a number of times equal to the number of object regions, one object region including only one object to be recognized.
Fig. 6 is a block diagram showing the object recognition apparatus according to the present invention.

The apparatus 600 for recognition of an object region in an image may include an extraction unit 601 configured to, for each object-attribute pair in the set of predefined object attributes, extract from the object region the feature corresponding to that attribute pair based on the dissimilarity of the pair; and a recognition unit 602 configured to recognize the object attribute of the object region based on the extracted feature of the object region.

Preferably, the extraction unit 601 may include a localization unit 601-1 configured to locate, in the object region, at least one block corresponding to the template of the object-attribute pair, the template characterizing the dissimilarity between the object attributes of the pair; and a feature extraction unit 601-2 configured to extract the feature of the object region corresponding to the attribute pair based on the at least one located block.

Preferably, the localization unit 601-1 may include a unit configured to locate the auxiliary regions in the object region depending on the positions of feature points identified in the object region; and a unit configured to locate, within the auxiliary regions, at least one block corresponding to the template characterizing the dissimilarity between the object attributes of the pair.

Preferably, the feature extraction unit 601-2 may include a unit configured to extract a feature from each of the at least one block in the object region, and a unit configured to concatenate the extracted features of the blocks as the feature of the object region.

Preferably, the recognition unit 602 may include an identifying unit 602-1 configured to, for each object-attribute pair in the set of predefined object attributes, identify, based on the feature of the object region corresponding to that pair, one object attribute among the two object attributes included in the pair as corresponding to the object region, and to increase the score of the object attribute corresponding to the object region by a predetermined value, wherein all the object attributes included in the set of predefined object attributes have the same initial score; and an attribute determination unit 602-2 configured to determine that the object attribute with the highest score in the set of predefined object attributes is the object attribute of the object region.

Additionally or alternatively, the recognition unit 602 may include an identifying unit 602-3 configured to, for one object-attribute pair in the set of predefined object attributes, identify, based on the feature of the object region corresponding to that pair, one object attribute among the two object attributes included in the pair as corresponding to the object region; and an attribute determination unit 602-4 configured to determine the object attribute of the object region based on the object attribute corresponding to the object region and the remaining object attributes in the set of predefined object attributes other than those of the pair. If the number of remaining object attributes is equal to 0, the object attribute corresponding to the object region is determined as the object attribute of the object region; otherwise, the object attribute corresponding to the object region and the remaining object attributes in the set of predefined object attributes other than those of the pair are regrouped into a new object-attribute set, and the identifying operation and the attribute determination operation are performed in turn on the new object-attribute set.

The templates characterizing the dissimilarity between the object attributes of the pairs can be formed and stored in advance, separately from the apparatus 600, as described above. Additionally or alternatively, the apparatus 600 may include a unit configured to form, in the manner described above, the template of an object-attribute pair characterizing the dissimilarity between the object attributes of the pair.
[Advantageous technical effects]

In summary, the present invention provides a new idea for the recognition of the object attributes of an object region in an image, in which the concept of an object-attribute pair is introduced to improve the feature extraction and recognition of the object region.

More specifically, the dissimilarity between the object attributes included in an object-attribute pair is used to extract, for that pair, the dissimilar pixel blocks of the object region, and the extracted feature of the object region is used to determine which object attribute of the pair the object region corresponds to. Therefore, the feature extraction and the recognition of the object region are performed in a pairwise manner, whereby recognition efficiency and accuracy can be improved.

It is noted that such dissimilar pixel blocks of the object region are determined and extracted for each object-attribute pair in the set of predefined object attributes serving as the basis of comparison in each round, and can reflect the dissimilarity between the object attributes included in the pair. In addition, the extracted parts can change adaptively during recognition; that is, the dissimilar pixel blocks of the object region can vary depending on the comparison in each round, rather than remaining constant.

Therefore, those parts of the object region that are common rather than discriminative for an object-attribute pair may not be extracted, and the extracted parts can more accurately reflect the dissimilarity between the object attributes included in the pair and help to accurately determine which of the object attributes of the pair the object region corresponds to, so that the object attribute of the object region can be determined more accurately.
Hereinafter, in order to help thoroughly understand the realization of the present invention, exemplary implementations of the solution of the invention will be explained using a face as the example of the object to be recognized. It is noted that the solution of the present invention is also applicable to other types of objects.

For a face region in an image to be recognized, the attribute can belong to a plurality of categories. For example, the category of the facial attribute can be one selected from the group comprising facial expression, gender of the person corresponding to the face (when the face is a human face), and age of that person. Of course, the category of the facial attribute is not limited thereto, and can be other categories besides the above.

[Example 1]
Hereinafter, the process of recognizing a facial attribute (such as a facial expression) of a face region in an image according to the present invention will be described.

Generally, for a face region of an input image whose expression is to be recognized, for each facial-expression pair in the set of predefined facial expressions, the feature of the face region corresponding to that expression pair is extracted based on the dissimilarity between the facial expressions included in the pair; the facial expression of the face region is then recognized based on the extracted feature of the face region. When there are a plurality of faces in the input image, this process is repeated a number of times equal to the number of faces.

The details of this process are described below.
Initially, for an input image that may include at least one face, the face regions in the input image are detected; usually one face region corresponds to one face in the image. Fig. 7 shows a rectangular face region detected from an input image.

Preferably, before the detected face regions are used for feature extraction, the face regions are usually aligned respectively, and the alignment can be performed in many ways.
In one implementation, the face region is aligned based on a predetermined number of feature points extracted from the face image, where the number of feature points can be set based on the experience of the operator and is not limited to any specific number. The feature point extraction method can be, for example, the explicit shape regression disclosed in Xudong Cao, Yichen Wei, Fang Wen, Jian Sun, "Face alignment by explicit shape regression", CVPR 2012, or the ASM disclosed in D. Cristinacce and T. F. Cootes, "Boosted regression active shape models", BMVC 2007. It is noted that the feature point extraction method is not limited thereto, and can be any other method known in the art.

Fig. 8 schematically shows 7 feature points extracted from a face region; as shown in Fig. 8, these 7 feature points are: the two corners of each of the two eyes, the nose tip, and the two corners of the mouth.
The alignment can be performed as follows. It is noted that the following alignment process, known in the art, is merely illustrative, and the alignment can also be performed by other processes.

In the alignment, the mean positions of the 7 extracted feature points are computed from a predetermined number of manually labeled samples. Assuming there are n labeled samples, the mean position of each of the seven points P_i(x_i, y_i) (i = 1, ..., 7) is computed as:

mean(P_i) = ( (1/n)·Σ_{j=1}^{n} x_i^(j) , (1/n)·Σ_{j=1}^{n} y_i^(j) )

Here, x and y represent the horizontal-axis position and the vertical-axis position.

The mean positions of these seven points P_i(x_i, y_i) (i = 1, ..., 7) are defined as the objective face, and an affine mapping process is used to align the input face with the objective face.

The size of an aligned face can be 200*200 pixels. It is noted that the size of the aligned face region is not limited thereto, and can be any other size.
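As an illustration of this step, the alignment can be sketched as follows. This is a minimal sketch assuming OpenCV and NumPy; the function and variable names are hypothetical and not part of the patent.

```python
import cv2
import numpy as np

def align_face(gray_img, landmarks, mean_landmarks, out_size=200):
    """Align a detected face with the objective face via an affine mapping.

    gray_img:       gray-scale image containing the detected face
    landmarks:      (7, 2) array of the extracted feature points (two
                    corners of each eye, the nose tip, two mouth corners)
    mean_landmarks: (7, 2) array of the mean feature-point positions
                    computed from the labeled samples (the objective face)
    """
    src = np.asarray(landmarks, dtype=np.float32)
    dst = np.asarray(mean_landmarks, dtype=np.float32)
    # Least-squares similarity transform mapping the input points onto
    # the objective-face points.
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    return cv2.warpAffine(gray_img, M, (out_size, out_size))
```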
Next, the face region of the input image, possibly aligned, is subjected to feature extraction. Fig. 9 is a flowchart showing the process of feature extraction, in which step S101 is drawn with dashed lines, meaning that the step is optional.

In this feature extraction process, at least one dissimilar pixel block corresponding to the template of a facial-expression pair from the set of predefined facial expressions is located in the face region, the template characterizing the dissimilarity between the facial expressions of the pair (S102); then, based on the at least one located dissimilar pixel block, the feature of the face region corresponding to that expression pair is extracted (S103).
The template of a facial-expression pair can be constituted by at least one block corresponding between the images of the facial expressions of the pair, which blocks can reflect the dissimilarity between the facial-expression images of the pair; the details of the template will be described later.

In one implementation, the process for locating the dissimilar pixel blocks can, for each facial-expression pair, locate at least one dissimilar pixel block in the face image directly according to the template of the pair, depending on a predetermined correspondence relationship (such as the same position and the same block size). It is noted that the correspondence relationship between the at least one dissimilar pixel block and the blocks of the template is not limited thereto, and can satisfy other rules.

In another implementation, a process for locating auxiliary regions (such as organ regions) in the face image can be performed first (S101), so that the localization of the dissimilar pixel blocks in the face image can be performed only within the auxiliary regions of the face image (such as the organ regions of the face image), rather than over the entire face image.

An auxiliary region can be of any shape, such as rectangular or square, and can be of any size according to the experiments of the operator.
As shown in Fig. 10, four organ regions can be located, including two eye regions, one nose region and one mouth region. In the recognition process, for each aligned face, the sizes of these four regions are fixed. For example, in a face of 200*200 pixels, the size of an eye rectangle is 80*60, the size of the nose rectangle is 140*40, and the size of the mouth rectangle is 140*80.

Preferably, the localization of the organ regions can be determined by the feature points in the face image, with the origin at the upper-left corner of the image. When locating the left-eye region, the center of the rectangular region can coincide with the midpoint of line AB in Fig. 10. Similarly, the center of the rectangle of the right-eye region can coincide with the midpoint of line CD in Fig. 10. For the nose region, if the position coordinates of its upper-left corner are (n1, n2), the position coordinates of its lower-right corner are (n3, n4), and the position coordinates of the nose tip E are (e1, e2), then the coordinates of these three points satisfy the following equations:

e1 = α·(n1+n3), e2 = n2 + β·(n4-n2),

where 0.3 ≤ α ≤ 0.7 and 0.5 ≤ β ≤ 0.8.

For the mouth region, let H(h1, h2) be the midpoint of line FG, the upper-left corner of the mouth region be (m1, m2), and its lower-right corner be (m3, m4). The coordinates satisfy the following equations:

h1 = γ·(m1+m3), h2 = m2 + δ·(m4-m2),

where 0.3 ≤ γ ≤ 0.7 and 0.3 ≤ δ ≤ 0.6.
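A sketch of this localization is given below. It assumes an aligned 200*200 face and illustrative parameter values inside the stated ranges; the names and the dictionary layout are hypothetical.

```python
import numpy as np

def locate_organ_regions(pts, alpha=0.5, beta=0.6, gamma=0.5, delta=0.45):
    """Locate the four organ regions of an aligned 200*200 face from its
    7 feature points, following the equations above. Each region is
    returned as (left, top, width, height); pts maps the point labels of
    Fig. 10 (A..G) to (x, y) coordinates."""
    def centered(cx, cy, w, h):
        return (cx - w / 2.0, cy - h / 2.0, w, h)

    # Eye rectangles (80*60) centered on the midpoints of AB and CD.
    lx, ly = np.add(pts['A'], pts['B']) / 2.0
    rx, ry = np.add(pts['C'], pts['D']) / 2.0

    # Nose rectangle (140*40): e1 = alpha*(n1+n3) with n3 = n1 + 140,
    # and e2 = n2 + beta*(n4-n2) with n4 - n2 = 40.
    e1, e2 = pts['E']
    nose = (e1 / (2.0 * alpha) - 70.0, e2 - beta * 40.0, 140, 40)

    # Mouth rectangle (140*80): h1 = gamma*(m1+m3) with m3 = m1 + 140,
    # and h2 = m2 + delta*(m4-m2) with m4 - m2 = 80.
    h1, h2 = np.add(pts['F'], pts['G']) / 2.0
    mouth = (h1 / (2.0 * gamma) - 70.0, h2 - delta * 80.0, 140, 80)

    return {'left_eye': centered(lx, ly, 80, 60),
            'right_eye': centered(rx, ry, 80, 60),
            'nose': nose, 'mouth': mouth}
```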
Thus, four auxiliary regions in the face image can be located, and the dissimilar pixel blocks of a face image for a facial-expression pair can be located only within these auxiliary regions, with reference to the template of the pair. In such a case, preferably, the template of the facial-expression pair can itself be constituted only by dissimilar blocks within the auxiliary regions of each facial expression of the pair, the auxiliary regions in the expression images corresponding in a predefined manner to the auxiliary regions located in the face region (for example, being at the same positions and having the same sizes as the auxiliary regions in the face region).
Hereinafter, the template of a facial-expression pair will be described in detail.

In the present invention, a template is determined for each facial-expression pair in the set of predefined facial expressions. A facial-expression pair may include a predetermined number of facial expressions, preferably two, and any two facial expressions of the set can form a facial-expression pair.

For example, as shown in Fig. 11, if the set of predefined facial expressions includes three facial expressions (sad, neutral and angry), then there can be C_3^2 facial-expression pairs. As shown in Fig. 11, one pair can be constituted by the sad expression and the neutral expression, one pair by the neutral expression and the angry expression, and one pair by the sad expression and the angry expression.

In another implementation, the facial expressions included in a pair can be expressions between which the difference is very large or even opposite. For example, a pair can specifically be constituted by a laughing expression and a crying expression, so that the blocks extracted for such an expression pair are more discriminative.

Fig. 12 shows a flowchart of the process for constructing the template of a facial-expression pair. Such a process can be performed in advance, before the process of the present invention is performed, so that the templates of all the facial-expression pairs included in the set of predefined facial expressions can be preconfigured and stored. Alternatively, such a process can be performed on the fly along with the process of the present invention.
First, the two average face images corresponding respectively to the two facial expressions included in the pair are divided correspondingly into a plurality of blocks.

The average face image of each expression can usually be constructed by averaging the aligned faces with the same expression. Taking the average face of the laughing expression as an example, assuming there are N aligned laughing samples I_i, the average face image I of laughing is obtained by the following formula:

I = (1/N) Σ_{i=1}^{N} I_i

This formula means that the gray values of corresponding pixels of the respective aligned face images are added together with weight 1/N, so as to obtain the average face image of laughing.

Fig. 13 exemplarily shows the average face images of laughing, neutral, sad and smiling; such average face images are usually generated in advance based on a facial expression database and stored.
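As a small illustration, the averaging can be written as follows; this is a sketch assuming NumPy arrays of aligned, same-size gray-scale faces.

```python
import numpy as np

def average_face(aligned_faces):
    """Average face image of one expression: the gray values of the N
    aligned sample images are summed pixel-wise with weight 1/N."""
    stack = np.stack([f.astype(np.float64) for f in aligned_faces])
    return stack.mean(axis=0).astype(np.uint8)
```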
Fig. 14 schematically shows the corresponding division of two average face images. The two average face images are divided using the same pattern (such as a grid), and the size of the divided blocks is not limited. For example, in an average face image of 200*200 pixels, the size of a block can be 10*10 pixels.

It is noted that the division pattern of each average face image is not limited thereto, and the patterns can correspond to each other in other ways; for example, when the average face images have different sizes, the blocks of the division patterns can correspond to each other according to the ratio of the sizes of the average face images.
Next, a feature is extracted from each of the blocks of each divided average face image. There are many extraction methods, and any of them can be applied to this process.

As known in the art, the feature extraction method can be, for example, the local binary patterns (LBP) disclosed in Timo Ojala, Matti Pietikainen, and Topi Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, or the local phase quantization (LPQ) disclosed in Ville Ojansivu and Janne Heikkila, "Blur insensitive texture classification using local phase quantization", ICISP 2008.

In the case of LBP, the block over which the histogram is computed has the same size as a dissimilar pixel block, and the number of bins is, for example, 59. Therefore, the LBP feature of each block has 59 dimensions. The feature computation process is summarized as follows:

1) For each pixel of the input image, compute LBP_{8,1}:
a) take the value of the center pixel as the current pixel value;
b) extract the pixel values of the eight neighbors;
c) compute g_p (p = 0, 1, ..., 7) by bilinear interpolation;
d) compute the standard LBP code, LBP_{8,1} = Σ_{p=0}^{7} s(g_p - g_c)·2^p, where s(x) = 1 if x ≥ 0 and 0 otherwise.
Here, g_p is the gray value of one of the neighboring pixels, and g_c is the gray value of the center pixel.

2) Using the LBP value mapping table disclosed in Ville Ojansivu and Janne Heikkila, "Blur insensitive texture classification using local phase quantization", ICISP 2008, build the 59-dimensional LBP histogram of a block by accumulating the mapped LBP values of the pixels in the block.
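The per-block computation can be sketched as follows. This is a simplified NumPy illustration: it uses the 59-bin uniform-pattern mapping, and for brevity it reads the diagonal neighbors directly instead of bilinearly interpolating at radius 1; all names are hypothetical.

```python
import numpy as np

def _uniform_lbp_table():
    """Map each 8-bit LBP code to one of 59 bins: 58 uniform codes
    (at most two 0/1 transitions) plus one bin for all other codes."""
    table = np.full(256, 58, dtype=np.int32)
    idx = 0
    for code in range(256):
        bits = [(code >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if transitions <= 2:            # uniform pattern
            table[code] = idx
            idx += 1
    return table

_TABLE = _uniform_lbp_table()

def lbp_histogram(block):
    """59-dimensional LBP_{8,1} histogram of a gray-scale block."""
    b = block.astype(np.int32)
    c = b[1:-1, 1:-1]                   # center pixels g_c
    # Eight neighbors g_p, ordered around the center pixel.
    neighbors = [b[0:-2, 1:-1], b[0:-2, 2:], b[1:-1, 2:], b[2:, 2:],
                 b[2:, 1:-1], b[2:, 0:-2], b[1:-1, 0:-2], b[0:-2, 0:-2]]
    code = np.zeros_like(c)
    for p, g in enumerate(neighbors):
        code += ((g - c) >= 0).astype(np.int32) << p
    hist = np.bincount(_TABLE[code].ravel(), minlength=59)
    return hist.astype(np.float64)
```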
Next, the similarity between the features of corresponding blocks of the two divided average face images is determined.

For example, the similarity of corresponding blocks can be determined using the Euclidean distance (see Fig. 14). Assuming two feature vectors f1 = <a1, a2, ..., an> and f2 = <b1, b2, ..., bn>, the similarity of f1 and f2 is:

d(f1, f2) = sqrt(Σ_{i=1}^{n} (a_i - b_i)^2)

Therefore, the similarity between the two divided average face images can be determined block by block as described above. It is noted that the determination of similarity is not limited thereto, and can be realized in other ways known in the art.

Finally, those blocks of the two divided average face images whose mutual similarity is below a predetermined threshold are selected to form the template.

More specifically, the similarities of the corresponding blocks of the two divided average face images are sorted in ascending order, a predetermined number of block pairs (the least similar ones) are selected, and the indices of this predetermined number of block pairs are saved as the template of the expression pair. The predetermined number (which also corresponds to the predefined threshold; the threshold can be the similarity of the last of the predetermined number of pairs) can be optimized by experiments. An example of the template of an expression pair is shown in Fig. 15.
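Putting these steps together, the template construction for one expression pair can be sketched as below, reusing lbp_histogram from the earlier sketch; the block size and the number of kept blocks are illustrative values, with the latter left to be tuned by experiment as the text states.

```python
import numpy as np

def build_pair_template(avg_face_a, avg_face_b, block=10, n_blocks=40):
    """Template of an expression pair: divide the two average faces by the
    same grid, compare the features of corresponding blocks by Euclidean
    distance, and keep the indices of the most dissimilar block pairs."""
    h, w = avg_face_a.shape
    scored = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            fa = lbp_histogram(avg_face_a[y:y + block, x:x + block])
            fb = lbp_histogram(avg_face_b[y:y + block, x:x + block])
            scored.append((np.linalg.norm(fa - fb), (y, x)))
    # Largest distance = least similar; keep the top n_blocks indices.
    scored.sort(key=lambda s: s[0], reverse=True)
    return [pos for _, pos in scored[:n_blocks]]
```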
Therefore, for each expression pair, a group of dissimilar pixel blocks can be located in the face region according to the template of the pair formed as described above. Fig. 16 shows this process.

First, the input face image is divided into blocks according to the template, and the block division can be the same as the template division (for example, with the same pattern and the same block size). Then, since the template of the expression pair holds the indices of the dissimilar pixel blocks of a face image, the dissimilar pixel blocks are located in the aligned face image according to the indices in the template. In practice, the size of a dissimilar pixel block can be 10*10 pixels.

As described above, when the auxiliary regions in the face image have been located in advance, the above process can be performed only within the auxiliary regions.
Based on the dissimilar pixel blocks thus located in the face region to be recognized, the feature of the face region can be extracted. Fig. 17 shows a flowchart of such feature extraction. In particular, a feature can be extracted from each of the at least one block of the face region (S1031), and the extracted features of the blocks are concatenated as the feature of the face region (S1032).

The feature extraction method can be any method known in the art, and can, for example, be the same as the method used in the feature extraction process described above (for example, LBP).

Then, the features of all the dissimilar pixel blocks are concatenated to represent the feature for the facial expression pair. The dimension of the final vector is 59*n, where n is the total number of dissimilar pixel blocks, and 59 is the number of bins used for feature extraction (which can also be any other number). When only the dissimilar pixel blocks within the auxiliary regions are used, within each organ region the features of the dissimilar pixel blocks are concatenated in a fixed order, and then the features of the four organ regions thus obtained are concatenated.
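A sketch of this extraction, again reusing lbp_histogram from the earlier sketch, could be:

```python
import numpy as np

def pair_feature(aligned_face, template, block=10):
    """Feature of a face region for one expression pair: the LBP histogram
    of each pixel block indexed by the pair's template, concatenated into
    one 59*n vector (n = number of blocks in the template)."""
    feats = [lbp_histogram(aligned_face[y:y + block, x:x + block])
             for (y, x) in template]
    return np.concatenate(feats)
```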
The recognition of the facial expression of the face image is described below.

Fig. 18 is a flowchart showing one implementation of the identification process. In this implementation, recognition is realized in the so-called "one against one" manner: for the set of predefined facial expressions, the facial expressions are voted on C_n^t times, where n is the number of facial expressions included in the set and t is the number of facial expressions included in an expression pair; that is, a single vote is cast for each expression pair, and the facial expression with the highest score is determined as the final facial expression.

The expressions are denoted B_1, ..., B_n, and the score of each expression can be initially set to 0, denoted f(B_1) = ... = f(B_n) = 0. For each facial-expression pair, when the face region is determined to correspond to one of the expressions of the pair, B_i, the score of that expression is increased by a constant value, for example f(B_i) = f(B_i) + 1.

Finally, the facial expression with the maximum score, f(B) = max{f(B_1), ..., f(B_n)}, is determined as the facial expression of the face region.
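A minimal sketch of this voting scheme is given below; the templates and classifiers dictionaries are assumed to have been prepared in advance as described in the text, and their layout is hypothetical.

```python
from itertools import combinations

def recognize_one_against_one(face, expressions, templates, classifiers):
    """'One against one' voting over all C(n,2) expression pairs.
    templates[pair] is the pair's template; classifiers[pair] is a binary
    decision function returning the winning expression of the pair."""
    scores = {e: 0 for e in expressions}
    for pair in combinations(expressions, 2):
        winner = classifiers[pair](pair_feature(face, templates[pair]))
        scores[winner] += 1                # single vote per pair
    return max(scores, key=scores.get)     # expression with highest score
```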
Fig. 19 is a flowchart showing another implementation of the identification process. In this implementation, recognition can be realized in the so-called "one beating one" manner: when the number of predefined facial expressions included in the set of predefined facial expressions is n, the facial expression of the face region is determined in n-1 rounds, where only the expression that wins for the expression pair in one round advances to the next round, and the expression that finally wins is determined as the final facial expression.

Let the expressions be denoted B_1, ..., B_n, with each expression pair including two facial expressions. At the beginning, an expression pair (B_i, B_j) is selected arbitrarily from the set of predefined facial expressions, and for that pair it is determined which expression the face region corresponds to; for example, it is determined that the facial expression of the face region is B_i.

Then, the losing expression B_j is excluded from the initial set of expressions, and the remaining expressions are regrouped into a new expression set. For the new set, the above process is performed again.

Therefore, such a process will be performed for n-1 rounds, and the finally remaining expression is recognized as the expression of the face image.

In general, the "one beating one" manner is more efficient than the "one against one" manner, with little difference in accuracy.
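The tournament variant can be sketched in the same setting as the previous snippet:

```python
def recognize_one_beating_one(face, expressions, templates, classifiers):
    """'One beating one' tournament: n-1 rounds; the loser of each round's
    pair is excluded and the winner advances against the next candidate."""
    remaining = list(expressions)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[1]
        pair = (a, b) if (a, b) in templates else (b, a)
        winner = classifiers[pair](pair_feature(face, templates[pair]))
        remaining.remove(b if winner == a else a)  # drop the loser
    return remaining[0]
```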
The expression determination for one facial-expression pair can be realized in any manner known in the art (for example, by a classifier). In the case of a classifier, for an expression pair, the feature vector is classified by a binary classifier. A linear SVM such as the one disclosed in Chih-Chung Chang and Chih-Jen Lin, "LIBSVM: a library for support vector machines", 2011, can be used as the classifier. The decision function is sgn(w^T f + b), where w and b are stored in a dictionary, w being the trained SVM weight vector, b the bias term, and f the feature vector of the face region.
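As a final sketch, such a decision function can be wrapped as follows, with w and b assumed trained offline (for example with LIBSVM) and loaded from the stored dictionary; factory functions like this could populate the classifiers dictionary used by the two recognition sketches above.

```python
import numpy as np

def make_pair_classifier(w, b, expr_pos, expr_neg):
    """Binary decision function sgn(w^T f + b) for one expression pair."""
    def decide(f):
        return expr_pos if np.dot(w, f) + b >= 0 else expr_neg
    return decide
```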
Experimental results

Table 1 shows the data set used in the experiments. The images in this data set are frontal images with natural expressions downloaded from the Internet.

Table 1

|              | Laughing | Neutral | Sad  | Smiling |
|--------------|----------|---------|------|---------|
| Training set | 2042     | 1976    | 1746 | 2140    |
| Test set     | 717      | 487     | 1302 | 518     |
| Total        | 2759     | 2463    | 3048 | 2658    |
Table 2 compares the performance (for example, facial expression recognition accuracy) of the present solution with that of U.S. Patent Application US2012/0169895.
Table 2
The confusion matrix of the present application is shown in Table 3, and the confusion matrix of the prior art is shown in Table 4.
Table 3
Table 4
[Example 2]
The process for identifying a face attribute (such as the age of a face) in an image according to the present invention is described below.
Suppose there is a set of pre-defined age levels including child, teenager, adult, elderly, and so on. Age level pairs are then configured in the set of pre-defined age levels, and the age of the face is determined pair by pair. The process can be implemented similarly to the implementation in Example 1.
More specifically, at the beginning, the process detects a face and locates the face area in the input face image.
Next, for each age level pair, the process locates the different pixel blocks in the located face area according to the template trained for that age level pair.
Next, based on the located different pixel blocks, the process obtains the feature of the face area for each age level pair. The feature for one age level pair can be represented by concatenating the features of the different pixel blocks.
Next, the process determines the age level of the face for each age level pair, and integrates the classification results to determine the age level of the face.
Preferably, the face can be aligned before the different pixel blocks are located.
Preferably, before or during the locating of the different pixel blocks, an auxiliary area can be located in the face area, so that only the auxiliary area needs to be processed during the locating of the different pixel blocks and the subsequent operations.
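Reusing the illustrative helpers sketched above, the same pairwise machinery carries over to age levels unchanged; only the attribute set differs (the age level names and `svm_dictionary` are assumptions):

```python
AGE_LEVELS = ["child", "teenager", "adult", "elderly"]

def identify_age_level(feature_vector, svm_dictionary):
    """Apply the same one-versus-one voting to age levels; `svm_dictionary`
    holds the trained (w, b) for each age-level pair."""
    decide = make_pairwise_winner(svm_dictionary)
    return identify_one_vs_one(feature_vector, AGE_LEVELS, decide)
```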
It should be noted that the above examples are merely illustrative rather than restrictive. The solution of the present invention is not limited thereby, and can be applied to other types of object attribute identification.
[Industrial applicability]
The present invention can be used in a variety of applications. For example, the present invention can be applied to detecting and tracking the state of an object in an image, such as smile detection in a camera, an audience response system, or automatic picture annotation.
More specifically, in one implementation, an object can be detected, and the attribute of the object can then be identified using the method of the present invention.
In the case of a camera application, an image is captured by the camera. The system selects a face image from the captured image by a face detection technique, and the face image is input to a facial expression recognition module, where certain pre-defined expressions (for example, happiness, sadness, neutral, and so on) are identified. The recognition results are then input to an evaluation module, which assesses the effect of a meeting according to the expressions of the audience. The system finally outputs the assessment result of the meeting effect.
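A minimal end-to-end sketch of such a system (the callables `detect_faces` and `recognize_expression`, and the happy-face scoring rule, are all assumptions for illustration):

```python
def assess_meeting(frame, detect_faces, recognize_expression):
    """Audience-response pipeline: detect faces in a captured frame,
    recognize each face's pre-defined expression, and score the meeting
    effect as the fraction of happy faces."""
    faces = detect_faces(frame)  # face detection on the captured image
    if not faces:
        return 0.0
    expressions = [recognize_expression(face) for face in faces]
    return expressions.count("happiness") / len(expressions)
```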
The method and system of the present invention can be carried out in various ways, for example by software, hardware, firmware, or any combination thereof. The order of the steps of the method described above is merely illustrative, and unless specifically stated otherwise, the steps of the method of the present invention are not limited to the order described above. In addition, in some embodiments, the present invention can also be embodied as a program recorded in a recording medium, including machine-readable instructions for implementing the method according to the present invention. Therefore, the present invention also covers a recording medium storing a program for implementing the method according to the present invention.
Although the present invention has been described with reference to example embodiments, it should be understood by those skilled in the art that the above examples are merely illustrative and are not intended to limit the scope of the invention. Those skilled in the art will understand that the above embodiments can be changed without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the appended claims, which shall be given the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (10)
1. An object identification device, comprising:
a determination unit configured to determine a plurality of object attribute pairs for a plurality of subsets included in a set of pre-defined object attributes, wherein the two object attributes of each object attribute pair are respectively from different subsets, and the attributes in the subsets are different expressions of a face;
an extraction unit configured to extract, based on the difference of an object attribute pair, the feature of an object area corresponding to the object attribute pair; and
a recognition unit configured to recognize the object attribute of the object area based on the extracted feature of the object area.
2. The device according to claim 1, wherein the object area is a face area, the object attributes are face attributes, and
wherein the face attributes are facial expressions.
3. The device according to claim 1, wherein the object area is an object area that has been aligned based on feature points identified in the object area.
4. The device according to claim 1, wherein the extraction unit comprises:
a locating unit configured to locate, in the object area, at least one block of a template corresponding to the object attribute pair, the template characterizing the difference between the object attribute pair; and
a feature extraction unit configured to extract, from the located at least one block, the feature of the object area corresponding to the object attribute pair.
5. The device according to claim 4, wherein the locating unit comprises:
a unit configured to locate an auxiliary area in the object area depending on the positions of the feature points identified in the object area; and
a unit configured to locate, in the auxiliary area, at least one block of the template corresponding to the object attribute pair and characterizing the difference between the object attribute pair.
6. The device according to claim 4 or 5, wherein the template characterizing the difference between an object attribute pair is formed in the following way:
dividing two average object area images, respectively corresponding to the two object attributes included in the object attribute pair, into a plurality of blocks corresponding to each other;
extracting the feature of each of the plurality of blocks of each divided average object area image corresponding to each object attribute;
determining the similarity between the features of the corresponding blocks in the two divided average object area images; and
selecting, to form the template, those blocks in the two divided average object area images whose inter-block similarity is less than a pre-defined threshold.
7. The device according to claim 4, wherein the feature extraction unit comprises:
a unit configured to extract a feature from each of the at least one block in the object area; and
a unit configured to concatenate the features extracted from each block as the feature of the object area.
8. The device according to claim 1, wherein the recognition unit comprises:
an identification unit configured to, for each object attribute pair in the set of pre-defined object attributes, identify, based on the feature of the object area corresponding to the object attribute pair, which of the two object attributes included in the object attribute pair the object area corresponds to, and to increase the score of the object attribute corresponding to the object area by a predetermined value, wherein all object attributes included in the set of pre-defined object attributes have the same initial score; and
an attribute determination unit configured to determine the object attribute having the top score in the set of pre-defined object attributes as the object attribute of the object area.
9. The device according to claim 1, wherein the recognition unit comprises:
an identification unit configured to, for one object attribute pair in the set of pre-defined object attributes, identify, based on the feature of the object area corresponding to the object attribute pair, which of the two object attributes included in the object attribute pair the object area corresponds to; and
an attribute determination unit configured to determine the object attribute of the object area from the object attribute corresponding to the object area and the remaining object attributes, other than the object attribute pair, in the set of pre-defined object attributes,
wherein, if the number of the remaining object attributes is equal to 0, the object attribute corresponding to the object area is determined as the object attribute of the object area,
and otherwise, the object attribute corresponding to the object area and the remaining object attributes, other than the object attribute pair, in the set of pre-defined object attributes are regrouped into a new object attribute set, and the identification operation and the attribute determination operation are performed in turn for the new object attribute set.
10. An object identification method, comprising:
determining a plurality of object attribute pairs for a plurality of subsets included in a set of pre-defined object attributes, wherein the two object attributes of each object attribute pair are respectively from different subsets, and the attributes in the subsets are different expressions of a face;
extracting, based on the difference of an object attribute pair, the feature of an object area corresponding to the object attribute pair; and
recognizing the object attribute of the object area based on the extracted feature of the object area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310320936.8A CN104346601B (en) | 2013-07-26 | 2013-07-26 | Object identifying method and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104346601A CN104346601A (en) | 2015-02-11 |
CN104346601B true CN104346601B (en) | 2018-09-18 |
Family
ID=52502177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310320936.8A Active CN104346601B (en) | 2013-07-26 | 2013-07-26 | Object identifying method and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104346601B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426812B (en) * | 2015-10-27 | 2018-11-02 | 浪潮电子信息产业股份有限公司 | A kind of expression recognition method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101667248A (en) * | 2008-09-04 | 2010-03-10 | 索尼株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
CN102663413A (en) * | 2012-03-09 | 2012-09-12 | 中盾信安科技(江苏)有限公司 | Multi-gesture and cross-age oriented face image authentication method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4457980B2 (en) * | 2005-06-21 | 2010-04-28 | ソニー株式会社 | Imaging apparatus, processing method of the apparatus, and program for causing computer to execute the method |
JP4197019B2 (en) * | 2006-08-02 | 2008-12-17 | ソニー株式会社 | Imaging apparatus and facial expression evaluation apparatus |
JP2012243179A (en) * | 2011-05-23 | 2012-12-10 | Sony Corp | Information processor, information processing method and program |
CN102314687B (en) * | 2011-09-05 | 2013-01-23 | 华中科技大学 | Method for detecting small targets in infrared sequence images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||