CN103310219A - Method and equipment for precision evaluation of object shape registration, and method and equipment for registration - Google Patents



Publication number
CN103310219A
Authority
CN
China
Prior art keywords
mark
model
shape
registration
calculate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100587986A
Other languages
Chinese (zh)
Other versions
CN103310219B (en)
Inventor
朱福国
陈曾
胥立丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN201210058798.6A priority Critical patent/CN103310219B/en
Priority claimed from CN201210058798.6A external-priority patent/CN103310219B/en
Publication of CN103310219A publication Critical patent/CN103310219A/en
Application granted granted Critical
Publication of CN103310219B publication Critical patent/CN103310219B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application provides a method and apparatus for evaluating the precision of object shape registration, and a method and apparatus for registration. The method for evaluating the precision of a registered object shape represented by a plurality of calibration points in an image comprises: a first score calculation step of calculating a first score using a first model according to the object shape; a second score calculation step of calculating a second score using a second model according to each of the calibration points; and an evaluation step of calculating the registration precision according to the first score and the second score.

Description

Method and apparatus for evaluating the precision of object shape registration, and method and apparatus for registration
Technical field
The present invention relates to image processing, computer vision and pattern recognition. More specifically, it relates to a method and apparatus for evaluating the precision of a registered object shape represented by a plurality of calibration points in an image, and to a method and apparatus for registering an object shape in an image.
Background technology
In many computer vision fields, such as face recognition, expression analysis, 3D face modeling and facial animation, automatically and accurately registering the shape of an object (for example, a face) described by a set of calibration points is an important task.
For face registration, many different types of methods have been proposed. Among them, the Active Shape Model (ASM) and the Active Appearance Model (AAM) proposed by Cootes et al. have proved to be effective. Compared with AAM, ASM performs better in speed, precision and generalization. Therefore, many improvements and variations of ASM have been proposed in recent years.
The Active Shape Model (ASM) is a statistical model of object shape developed by Tim Cootes and Chris Taylor in 1995, which is iteratively deformed to fit an example of the object in a new image. The shape of the object is represented by a set of points (controlled by the shape model). The ASM algorithm aims to match this model to a new image. It works by alternately performing the following steps:
Search the image around each point for a better position for that point;
Update the model parameters to best match these newly found positions.
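The two alternating steps can be sketched as follows. This is an illustrative simplification with NumPy, not the patent's implementation: the `local_search` callable and the PCA shape basis stand in for the trained local texture models and shape model of a real ASM.

```python
import numpy as np

def fit_asm(image, mean_shape, basis, local_search, n_iters=20):
    """Minimal ASM-style loop: alternate local search and model projection.

    mean_shape : (2n,) mean of the training shapes
    basis      : (2n, K) orthonormal principal components of shape variation
    local_search(image, shape) -> (2n,) suggested better point positions
    """
    shape = mean_shape.copy()
    for _ in range(n_iters):
        # Step 1: look around each current point for a better position.
        target = local_search(image, shape)
        # Step 2: update the model parameters to best match the found points,
        # which constrains the result to the learned shape subspace.
        params = basis.T @ (target - mean_shape)
        new_shape = mean_shape + basis @ params
        if np.allclose(new_shape, shape, atol=1e-6):
            break  # converged
        shape = new_shape
    return shape
```

Projecting onto the basis in step 2 is what keeps the newly found points from producing an implausible shape.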
Based on ASM, many face registration algorithms are now in practical use. Recently, face registration has been extended to multi-view registration, and much work has been done to extend these algorithms to handle multi-view faces.
In general, the state-of-the-art conventional multi-view face registration methods are view-based: the view range is divided into several sub-views such as frontal, half profile and full profile. Fig. 2 shows an example of multi-view face registration in which the view range is divided into seven sub-views, and a set of shape models and local texture models is trained for each sub-view. Such methods handle the nonlinear shape deformation of multi-view faces by selecting the correct view-based model.
The general approach to multi-view registration is to estimate the view of the face. A shape model can be stored for each view. As shown in Figure 1A, for an input image the face view is first estimated by a view estimation method, and face registration is then performed with the model corresponding to the estimated view. Figure 1A shows the processing sequence of this general method of multi-view face registration using ASM.
However, the view estimation of this general method is usually not satisfactory. To obtain better view estimation, several recent multi-view registration methods have attracted attention.
Method 1: a method based on shape and view parameter estimation
In L. Zhang, H. Ai, "Multi-View Active Shape Model with Robust Parameter Estimation", ICPR 2006, a parameter estimation method for multi-view active shape fitting is proposed. First, a set of shape models and local texture models is trained for each view. Second, the initial view of the given image is estimated, and a local search using the local texture models is performed. Then parameter estimation is carried out by a nonlinear optimization method in which each feature point is dynamically weighted, so that only feature points consistent with the shape receive large weights and the influence of outliers is eliminated. Finally, a new shape is obtained, and the process is repeated until the shape converges. Figure 1B shows the flowchart of this multi-view face registration method using parameter estimation.
Method 2: a method based on view-based ASM combined with a 3D face model
In Yanchao Su, Haizhou Ai, Shihong Lao, "Multi-View Face Alignment Using 3D Shape Model for View Estimation", The 3rd IAPR International Conference on Biometrics, 2009, view-based ASMs are combined with a simple 3D face shape model built from 500 3D face scans to construct a fully automatic multi-view face registration system. Initialized by a multi-view face detector, the method first uses view-based local texture models to search locally around the initial shape feature points, and then reconstructs a 3D face shape from these points using the 3D face shape model. From the reconstructed 3D shape, the face pose information can be obtained, the self-occluded points can be indicated, and the observed non-occluded shape is then refined by nonlinear parameter estimation using the 2D shape model of that view. The overall registration processes of Method 1 and Method 2 are basically the same; the main difference is that Method 2 performs parameter estimation with a 3D face shape model. Fig. 1C shows the main flowchart of both Method 1 and Method 2 (Method 2 includes the dashed blocks; Method 1 does not). The whole registration process of Method 2 is shown in Fig. 1D, an illustration of the registration process. In (a), the algorithm is initialized with the mean shape of the current view. In (b), the observed shape is obtained by a local search using local texture information. In (c), a 3D shape is reconstructed from the observed shape using the 3D shape model, and the pose is estimated. In (d), a 2D shape is reconstructed from the observed shape. In (e), the final shape is obtained when the iteration converges.
Problems of Method 1 and Method 2:
A common problem of the above methods is that they need to estimate the sub-view of the object in each iteration. However, view estimation is itself very difficult and error-prone. Although these view estimation methods do not depend entirely on the initial view obtained by the face detector, inaccurate view estimation leads to inaccurate final registration.
Method 3: a method based on a Bayesian mixture model
In Y. Zhou, W. Zhang, X. Tang, and H. Shum, "A Bayesian mixture model for multi-view face alignment", CVPR 2005, a multi-modal Bayesian framework for multi-view face registration is proposed. First, the view is initialized by a multi-view face detector. Second, according to the initial view, the shape distribution and point visibility are described with a mixture model, and the posterior probability of the model parameters given the observations of the unknown valid feature points is derived. In particular, the problem with multiple modes and variable feature points is formulated as a unified Bayesian framework. Finally, an EM algorithm is given to estimate the model parameters, the regularized shape, and the visibility of the points of this shape. Fig. 1E shows this local update process.
Problems of Method 3:
On the one hand, in Method 3 the texture model used in the local search of each marker point depends on the initial view, which is obtained by a face detector. However, the method is usually very sensitive to the estimation of the hidden view parameter. When the initial view is not predicted correctly, the results of the local search become unreliable, and if potential outliers are not handled in the shape parameter estimation, the method will fail. On the other hand, in the local update step, the local update is performed with two models having different weights. One problem is that a linear mixture model cannot fully describe the 3D shape of the face, which may make the final result inaccurate.
These methods handle the nonlinear shape deformation of multi-view faces by selecting the correct view-based model. Because the texture used in the local matching of each calibration point of a given shape model depends on its view class, these methods are very sensitive to the estimation of the initial view class. However, accurate estimation of the face view is itself a problem without a good solution and is still under development. Although overlapping definitions of the view ranges can alleviate, to some extent, the errors caused by improper view initialization, if the initial view is not predicted correctly the results of local matching become unreliable.
Thus, how to effectively select the correct view-based model is a key step in multi-view face registration. From the above, selecting the best view before model fitting is very difficult. Therefore, performing model selection after fitting can be an effective solution to this problem. Selecting the best model according to the fitting results is, in essence, the problem of how to evaluate the precision of the fitting results from different models.
There have been several methods for evaluating the precision of face registration. In one of them, L. Liang, R. Xiao, F. Wen, J. Sun, "Face Alignment via Component-based Discriminative Search", ECCV 2008, a Boosted Appearance Model (BAM) is used to calculate a score f(S_k) for each candidate shape {S_0, ..., S_K}, and the shape with the maximum score is chosen as the final output. The posterior probability P(S|I) used for shape parameter optimization can also be regarded as an evaluation of how good the shape registration is.
Given a face dataset with ground-truth calibration, a boosting-based classifier is trained to learn the decision boundary between warped images based on the ground-truth calibration (the positive class) and warped images based on randomly perturbed calibration points (the negative class) (Fig. 3A, from Xiaoming Liu, "Generic face alignment using boosted appearance model", CVPR 2007). A set of trained weak classifiers based on Haar-like rectangular features constitutes the boosted appearance model.
The classification confidence score from the final strong classifier is regarded as a measure of how good the registered shape is.
Fig. 3A shows the warped images with perturbation used for BAM training. The perturbation of the negative samples in training has a large influence on the performance of BAM. Fig. 3B (from Xiaoming Liu, "Generic face alignment using boosted appearance model", CVPR 2007) shows the boosted appearance model for face registration.
In BAM, two appearance models are trained with correct and incorrect registrations. The face is considered as a whole, and the positive and negative samples are warped to 30 × 30 pixels. This works well, but throws away potentially useful information. In addition, the appearance model in BAM cannot be applied to the evaluation of multiple models.
The method treats face registration as a classification problem. The negative shapes used for BAM training are synthesized by random perturbation of the shape parameters. The choice of the negative shapes depends greatly on the user's experience. If the negative shapes are not suitably perturbed, the performance will be very poor. If the negative shapes are perturbed with large perturbations, good classification performance results, but the margin between the two classes (i.e., the positive class and the negative class) will be large. It can be seen that the perturbation of the negative samples in training has a strong impact on the performance of BAM, and there are ambiguous margin regions in which samples cannot be clearly classified.
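The negative-shape synthesis described above can be sketched in a few lines. This is an illustrative stand-in, not the BAM training code; the perturbation scale is exactly the user-chosen quantity the text says the method's performance is sensitive to.

```python
import numpy as np

def perturb_shape_params(params, scale, rng=None):
    """Synthesize a 'negative' shape for BAM-style training by adding
    zero-mean Gaussian noise to the shape parameters.

    params : (K,) shape parameters of a ground-truth calibration
    scale  : standard deviation of the perturbation (user-chosen; too
             small or too large degrades the learned decision boundary)
    """
    rng = np.random.default_rng() if rng is None else rng
    return params + rng.normal(0.0, scale, size=len(params))
```

A training set would pair the unperturbed parameters (positive class) with many such perturbed copies (negative class).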
Summary of the invention
Motivated by the problems of the above methods, the present invention focuses on selecting among the fitting results of the models for the different view classes.
In one aspect of the invention, a method is provided for evaluating the precision of a registered object shape represented by a plurality of calibration points in an image. The method comprises: a first score calculation step of calculating a first score using a first model according to the object shape; a second score calculation step of calculating a second score using a second model according to each of the plurality of calibration points; and an evaluation step of calculating the registration precision according to the first score and the second score.
In another aspect of the invention, a method is provided for registering an object shape in an image. The method comprises: a shape registration step of fitting the object shape according to a plurality of shape models for the object, each of the plurality of shape models corresponding to an image of the object from one view; and a selection step of calculating a score for the fitting result of each shape model, based on the fitting result, by the evaluation method according to an aspect of the present invention, and selecting the fitting result corresponding to the highest score as the registration result of the object shape.
In still another aspect of the invention, an apparatus is provided for evaluating the precision of a registered object shape represented by a plurality of calibration points in an image. The apparatus comprises: first score calculation means for calculating a first score using a first model according to the object shape; second score calculation means for calculating a second score using a second model according to each of the plurality of calibration points; and evaluation means for calculating the registration precision according to the first score and the second score.
In still another aspect of the invention, an apparatus is provided for registering an object shape in an image. The apparatus comprises: shape registration means for fitting the object shape according to a plurality of shape models for the object, each of the plurality of shape models corresponding to an image of the object from one view; and selection means for calculating a score for the fitting result of each shape model, based on the fitting result, by the apparatus according to an aspect of the present invention, and selecting the fitting result corresponding to the highest score as the registration result of the object shape.
According to the present invention, the best matching result is found after the registration of all the shape models has been performed. The sub-view need not be estimated iteratively, and the fitting does not depend on the initial view. The invention is particularly useful for selecting the shape of the correct view and discarding the shapes of incorrect views.
With the method and apparatus for evaluating the precision of a registered object shape represented by a plurality of calibration points in an image, and the method and apparatus for registering an object shape in an image, the precision of object registration in an image can be improved, and the best-fitting shape can be selected at low computational cost.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the accompanying drawings.
Description of drawings
Figure 1A shows the processing sequence of the general method of multi-view face registration using the ASM method.
Figures 1B to 1E respectively show flowcharts of conventional multi-view face registration methods using parameter estimation.
Fig. 2 shows an example of multi-view face registration.
Fig. 3A shows the warped images with perturbation used for BAM training.
Fig. 3B shows the boosted appearance model for face registration.
Fig. 4 is a block diagram showing the hardware configuration of a computer system that can implement the present invention.
Fig. 5 is a block diagram of an apparatus for multi-view object registration according to the present invention.
Fig. 6 is a flowchart of object registration according to the present invention.
Fig. 7 is a diagram of the selection of candidate shape models.
Fig. 8 shows an evaluation apparatus for evaluating the precision of a registered object shape in an image.
Fig. 9 is a flowchart of a method of evaluating the precision of registration of an object shape represented by a set of calibration points.
Embodiment
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings. Note that the relative arrangement of the components and the shapes of the devices in the embodiments are described only as examples, and are not intended to limit the scope of the invention to those examples. In addition, like reference numerals and letters refer to like items in the figures; thus, once an item is defined in one figure, it need not be discussed again for subsequent figures.
Fig. 4 is a block diagram showing the hardware configuration of a computer system 1000 that can implement embodiments of the present invention.
As shown in Fig. 4, the computer system comprises a computer 1110. The computer 1110 comprises a processing unit 1120, a system memory 1130, a non-removable non-volatile memory interface 1140, a removable non-volatile memory interface 1150, a user input interface 1160, a network interface 1170, a video interface 1190 and an output peripheral interface 1195, connected via a system bus 1121.
The system memory 1130 comprises a ROM (read-only memory) 1131 and a RAM (random access memory) 1132. A BIOS (basic input/output system) 1133 resides in the ROM 1131. An operating system 1134, application programs 1135, other program modules 1136 and some program data 1137 reside in the RAM 1132.
A non-removable non-volatile memory 1141, such as a hard disk, is connected to the non-removable non-volatile memory interface 1140. The non-removable non-volatile memory 1141 can store, for example, an operating system 1144, application programs 1145, other program modules 1146 and some program data 1147.
Removable non-volatile memories, such as a floppy drive 1151 and a CD-ROM drive 1155, are connected to the removable non-volatile memory interface 1150. For example, a floppy disk 1152 can be inserted into the floppy drive 1151, and a CD (compact disc) 1156 can be inserted into the CD-ROM drive 1155.
Input devices, such as a mouse 1161 and a keyboard 1162, are connected to the user input interface 1160.
The computer 1110 can be connected to a remote computer 1180 through the network interface 1170. For example, the network interface 1170 can be connected to the remote computer 1180 via a local area network 1171. Alternatively, the network interface 1170 can be connected to a modem 1172, and the modem 1172 is connected to the remote computer 1180 via a wide area network 1173.
The remote computer 1180 may comprise a memory 1181, such as a hard disk, which stores remote application programs 1185.
The video interface 1190 is connected to a monitor 1191.
The output peripheral interface 1195 is connected to a printer 1196 and speakers 1197.
The computer system shown in Fig. 4 is merely illustrative and is in no way intended to limit the invention, its application, or uses.
The computer system shown in Fig. 4 may be implemented in any embodiment, either as a stand-alone computer or as a processing system in a device; one or more unnecessary components may be removed from it, or one or more additional components may be added to it.
An apparatus and method for object registration according to the present invention will now be described.
Fig. 5 is a block diagram of an apparatus for multi-view object registration according to the present invention.
As shown in Fig. 5, the apparatus for object registration comprises: a model storage 501 for storing a plurality of shape models for the object to be registered, each shape model corresponding to a sub-view of the object; a shape registration unit 502 for fitting the object shape individually according to each of the plurality of shape models, and outputting the fitting result of each model; and a registration result selection unit 503 for calculating a score for the fitting result of each shape model, based on the fitting result, by the evaluation apparatus according to the present invention to be described later, and selecting the best one from all the results as the fitting result of the object shape.
With this apparatus for object registration, once a face image is input, a corresponding mean shape for each sub-view is placed on the face image, the sub-views including the frontal sub-view, left half-profile sub-view, left full-profile sub-view, right half-profile sub-view, right full-profile sub-view, and so on. The mean shapes are obtained from the corresponding shape models stored in the model storage. For example, the mean shape for the frontal sub-view is obtained from the shape model for the frontal sub-view. Of course, the number n of sub-views is variable according to the requirements of the image processing; correspondingly, the number n of shape models is also variable.
For each sub-view, shape fitting is performed in the shape registration unit 502. More specifically, the initial mean shape of the corresponding model for each sub-view is placed on the image, and a set of shape parameters is selected. The region of the image around each point of the mean shape is examined to find the best nearby match for that point. The shape parameters are updated with the newly found best-fitting points. That is, the shape is deformed from the corresponding shape model to obtain a shape fitted to the object image. Afterwards, the fitting results, i.e., the fitted shapes, are input to the fitting result selection unit 503 to select the best fitting result, as will be described later. The best fitting result is output as the final registration of the object image.
The process of the method performed by the apparatus for object registration will now be discussed with reference to Fig. 6. Fig. 6 is a flowchart of object registration according to the present invention.
In step 610, for an input image, a face detector can be used to detect a face or object. The initial shape and size can then be estimated from the bounding box of the face. According to the present invention, a multi-view face detector based on a boosted nested cascade detector is adopted. The multi-view face detector may provide an initial face view for the face registration; alternatively, it may not provide an initial face view.
In step 620, according to the initial view and shape obtained by the face detector, a set of appropriate shape models is selected from the trained models. The selected models correspond to the initial view itself and its adjacent views. For example, the face view range is divided into five view classes, namely frontal (F), left half profile (LHP), left profile (LP), right half profile (RHP) and right profile (RP). Each view class is regarded as a channel, and the trained models comprise five models: the LP, LHP, F, RHP and RP shape models.
Fig. 7 shows the main process of an example of selecting the shape models. Fig. 7 is a diagram for selecting the candidate shape models. As shown in Fig. 7, the set of shape models corresponding to the views adjacent to the initial view is selected as the candidate models. Specifically, for channel F, the selected shape models may comprise the F, LHP and RHP models; for channel LP, the LP and LHP models; for channel RP, the RP and RHP models; for channel LHP, the F, LHP and LP models; and for channel RHP, the F, RHP and RP models.
As shown in Fig. 7, a plurality of shape models, rather than a single model, is selected; this reduces the dependence on the initial view and achieves better registration performance than a single model. The selection can also be made in any other manner as required. For example, the four models adjacent to the model of the initial view may be selected.
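The adjacent-view candidate sets described for Fig. 7 could be encoded in a simple lookup table. This is an illustrative sketch only; the channel names follow Fig. 7, and falling back to every trained model when no initial view is available is an assumption.

```python
# Hypothetical mapping from the detector's initial view class to the
# candidate shape models to fit (channel names as in Fig. 7).
CANDIDATES = {
    "F":   ["F", "LHP", "RHP"],
    "LP":  ["LP", "LHP"],
    "RP":  ["RP", "RHP"],
    "LHP": ["F", "LHP", "LP"],
    "RHP": ["F", "RHP", "RP"],
}

ALL_VIEWS = ["LP", "LHP", "F", "RHP", "RP"]

def select_candidate_models(initial_view=None):
    """Return the view classes whose shape models should be fitted."""
    if initial_view is None:
        return list(ALL_VIEWS)  # no initial view: try every trained model
    return list(CANDIDATES[initial_view])
```

Because several neighboring models are always fitted, a detector that misjudges the view by one class still has the correct model among the candidates.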
Alternatively, if an initial view is not provided in step 610, all of the trained shape models can be selected.
In step S630, once the set of shape models has been determined, these models can be used to register the face with respect to the initial shape. In this step, the standard ASM scheme is adopted for the fitting. The scheme mainly consists of the following steps: 1) obtaining the initial view, position and size of the detected face; 2) performing a local search to find the best matching position of each feature point; and 3) updating the shape parameters through the shape model. In the model fitting step, many face registration methods can be used, such as face registration using statistical models and wavelet features, face registration based on local texture classifiers, and face registration via component-based discriminative search.
In step 640, after the fitting has been performed with each of the selected shape models for the face, an evaluation method is applied to find the best fitting result. The details of the evaluation method will be described below.
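Steps 610 to 640 amount to fitting every candidate model and keeping the highest-scoring result. A hedged sketch follows; the `fit` and `score` callables are stand-ins for the ASM fitting of step S630 and the evaluation method of step 640, which are not reproduced here.

```python
def register_best(image, models, fit, score):
    """Fit each candidate shape model and return the best-scoring result.

    models : iterable of candidate shape models (one per sub-view)
    fit(image, model)   -> fitted shape for that model
    score(image, shape) -> registration-precision score (higher is better)
    """
    # Fit every candidate model independently of the initial view guess.
    results = [(model, fit(image, model)) for model in models]
    # Select the fitting result with the highest evaluation score.
    best_model, best_shape = max(results, key=lambda mr: score(image, mr[1]))
    return best_model, best_shape
```

This is the structural difference from the prior art: selection happens after fitting, so no per-iteration view estimate is needed.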
The following table shows the differences between the present invention and the prior art.
Table 1. Comparison of the present invention and the prior-art methods
[Table 1 appears as an image in the original publication; it compares the present invention with the prior-art methods in terms of view estimation, view update, mixture shape models, 3D shape models, and the number of fitting results produced.]
In Table 1, √ means yes and × means no. As can be seen from Table 1, compared with the conventional methods, the object registration apparatus and method according to the present invention need no view estimation or update, no mixture shape model and no 3D shape model. In addition, the present invention can produce a plurality of fitting results, whereas the conventional methods produce only one fitting result.
The object registration method according to the present invention is now compared with the scheme that uses the single model corresponding to one sub-view.
When only the single sub-view obtained by the face detector is adopted for face registration, as in the general method, only the shape model corresponding to that sub-view is selected for the fitting. If the initial view is not predicted correctly, the registration result becomes unreliable. In the object registration apparatus and method according to the present invention, a set of shape models is selected from the trained shape models; each of the selected models is used for a local search of the feature points around the corresponding mean shape; and the best fitting result is selected. This not only avoids the influence of the initial view on the whole registration process, but also achieves high precision and robustness of the face registration.
The method and apparatus for evaluating the precision of a registered shape will now be described. The problem of how to evaluate the precision of a registered shape is divided into calculating two scores corresponding to different kinds of information: the first score evaluates how good the registered face shape is according to a spatial prior, which represents the possible deformations of the object shape; the second score evaluates how good the registered shape is using the image evidence provided by the object image. These two kinds of information are modeled by two off-line trained models, namely a spatial prior model and a standard likelihood model.
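A minimal sketch of the evaluation: combine an off-line trained shape-prior score with an image-evidence score. Both scoring callables here are stand-ins for the trained models, and combining them by multiplication mirrors the factorization p(V)p(I|V) used in the Bayesian formulation of the description; it is an assumption that no other weighting is applied.

```python
def evaluate_registration(shape, image, prior_score, likelihood_score):
    """Combine the first score (spatial prior) and the second score
    (image evidence) into one registration-precision score.

    prior_score(shape)             -> first score, from the spatial prior model
    likelihood_score(shape, image) -> second score, from the likelihood model
    """
    s1 = prior_score(shape)               # how plausible the shape itself is
    s2 = likelihood_score(shape, image)   # how well it matches the image
    return s1 * s2                        # proportional to p(V) * p(I|V)
```

In practice the scores would be probabilities (or log-probabilities, summed instead of multiplied) produced by the two off-line trained models.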
The shape registration precision evaluation problem on which the present invention concentrates is also formulated as a Bayesian framework and represented as a posterior probability p(V|I). p(V|I) denotes the probability of V under the condition of the image texture I. V = (x1, x2, ..., xn, y1, y2, ..., yn) describes the face shape after Generalized Procrustes Analysis (GPA, a kind of statistical shape analysis), which removes the similarity-transform pose by zero-centroid and size (unit-norm) normalization, so as to remove offset and scale variation. n denotes the number of calibration points, and (xi, yi) denotes the coordinates of the i-th calibration point.
According to Bayes' rule, the above posterior probability p(V|I) is expressed by the following formula (1):

p(V|I) = p(V) p(I|V) / p(I) ∝ p(V) p(I|V)    (1)
p(V) is the prior probability of the face shape V. p(I|V) is the likelihood probability of the facial appearance at this position in the image given the shape V. Here the image content is arbitrary, i.e., the appearance probability p(I) of any image is equal; thus p(I) is treated as a constant value. Therefore, p(V|I) is directly proportional to p(V) p(I|V).
Now, how to obtain the prior probability p(V) of the face shape V will be described. By performing Principal Component Analysis (PCA) on the hand-labeled shape samples, the overall shape deformation is decomposed into many independent components, each of which is encoded by a corresponding principal component. PCA is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables, which reduces the dimensionality of the transformed data.
The shape deformation encoded by each principal component is modeled as a Gaussian distribution with zero mean and with the corresponding eigenvalue of the principal component model as its variance. These distributions can be computed from the hand-labeled face shapes.
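The PCA training of the spatial prior can be sketched as follows. This is an illustrative NumPy sketch under our own naming (the patent does not give an implementation): the eigenvectors of the sample covariance are the principal components and the eigenvalues are the per-component variances λ_k.

```python
import numpy as np

def train_shape_prior(shapes, K):
    """Fit a PCA shape prior from GPA-aligned training shapes.

    shapes: (N_samples, 2n) array of normalized landmark vectors.
    Returns the mean shape, the top-K principal axes, and their
    variances (the lambda_k used in the prior score).
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Covariance of the training shapes; its eigenvectors are the
    # principal components V_k, its eigenvalues the variances lambda_k.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
    order = np.argsort(eigvals)[::-1][:K]       # keep the K largest
    return mean_shape, eigvecs[:, order], eigvals[order]
```

In practice K is chosen so that the retained components explain most of the training-set variance.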
The prior probability is computed by projecting the current face shape V onto the principal component axes, as follows. Given a face shape V, it is expressed as a linear combination of K principal components: V = V_0 + p_1 V_1 + ... + p_k V_k + ... + p_K V_K, where V_k is the k-th principal component, a vector derived from the statistics of the hand-labeled samples, and p_k is the corresponding projection coefficient, a scalar. V is uniquely determined by p_1, ..., p_k, ..., p_K. Therefore, p(V) can be represented as the joint probability distribution of the projection coefficients:

p(V) = p(p_1, ..., p_k, ..., p_K)    (2)
By construction of PCA, the principal components V_k are orthogonal, i.e., the coefficients are independent of one another, and the data represented by each principal component follow a Gaussian distribution. Thus, the prior probability of the face shape V is computed by the following formula (3):

p(V) = p(p_1, ..., p_k, ..., p_K)
     = ∏_{k=1}^{K} p(p_k)
     = ∏_{k=1}^{K} [ 1/√(2πλ_k) · exp(−p_k² / (2λ_k)) ]    (3)
     ∝ exp(−Σ_{k=1}^{K} p_k² / (2λ_k)) · ∏_{k=1}^{K} 1/√λ_k

where λ_k is the statistical variance of the PCA coefficient p_k.
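The projection step and the variance-normalized sum behind the prior term can be sketched as below; a minimal sketch assuming orthonormal principal axes, with function names of our own choosing.

```python
import numpy as np

def prior_score(shape, mean_shape, axes, variances):
    """First score s1 = sum_k p_k^2 / lambda_k: project the registered
    shape onto the principal axes and sum the variance-normalized
    squared projection coefficients."""
    p = axes.T @ (shape - mean_shape)        # mapping coefficients p_k
    return float(np.sum(p ** 2 / variances))
```

A shape equal to the mean scores 0; larger, less likely deformations score higher.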
Now, how to obtain the likelihood probability p(I|V) will be described. p(I|V) is the likelihood probability of the facial appearance at the positions in the image I given the shape V. The image evidence for the i-th calibration point, seen at position v_i, is assumed to be independent of the image evidence of the other calibration points given their spatial positions. Each local likelihood probability p(I|v_n) is therefore modeled as a Gaussian distribution.

In summary, the overall likelihood probability can be computed with the following formula (4):

p(I|V) = p(I | v_1, v_2, ..., v_N)
       ∝ ∏_{n=1}^{N} p(I|v_n)
       ∝ ∏_{n=1}^{N} 1/√(2πσ_n²) · exp(−(v_{n,current} − v_{n,true})² / (2σ_n²))    (4)
       ∝ exp(−Σ_{n=1}^{N} ||Δv_n||² / (2σ_n²)) · ∏_{n=1}^{N} 1/σ_n

where σ_n² is the statistical variance of the local likelihood probability of the image I at the given position v_n, and Δv_n denotes the displacement between the current position v_{n,current} of the n-th calibration point and its ground-truth position v_{n,true}. The ground-truth position v_{n,true} is unknown when performing shape registration. However, the appearance likelihood model is designed to estimate the displacement Δv_n directly, with an incremental regression model based on gradient-boosted trees, as discussed below.
With the above two approximations, the posterior probability can be rewritten as:

p(V|I) ∝ p(V) p(I|V)
       ∝ exp(−Σ_{k=1}^{K} p_k² / (2λ_k)) ∏_{k=1}^{K} 1/√λ_k · exp(−Σ_{n=1}^{N} ||Δv_n||² / (2σ_n²)) ∏_{n=1}^{N} 1/σ_n
       ∝ c_1 c_2 · exp(α·s_1 + β·s_2)    (5)
       ∝ exp(α·s_1 + β·s_2)

s_1 = Σ_{k=1}^{K} p_k² / λ_k    (6)

s_2 = Σ_{n=1}^{N} ||Δv_n||² / σ_n²    (7)
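The combined score of formula (5) with the component scores (6) and (7) can be sketched as below; an illustrative NumPy sketch (names ours), taking the PCA coefficients, per-landmark displacements, and model variances as given.

```python
import numpy as np

def registration_score(p, lam, dv, sigma2, alpha, beta):
    """Combined score s = alpha*s1 + beta*s2.

    p:      (K,) PCA projection coefficients of the registered shape
    lam:    (K,) variances lambda_k of the PCA coefficients
    dv:     (N, 2) predicted displacements dv_n per calibration point
    sigma2: (N,) variances sigma_n^2 of the local likelihood models
    """
    s1 = np.sum(p ** 2 / lam)                       # prior term (6)
    s2 = np.sum(np.sum(dv ** 2, axis=1) / sigma2)   # likelihood term (7)
    return float(alpha * s1 + beta * s2)
```

With negative weights (as suggested by the negative scores in Table 2), better registrations score closer to zero, so the highest score wins.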
where s_1 is the first score, representing the prior probability, and s_2 is the second score, representing the likelihood probability. The parameters α and β are the weights of s_1 and s_2, respectively. These two parameters are introduced to make the ranges of s_1 and s_2 roughly the same, so as to balance the contributions of s_1 and s_2 to the posterior probability; their values can be adjusted according to demand. The factor c_1 c_2, where c_1 = ∏_{k=1}^{K} 1/√λ_k and c_2 = ∏_{n=1}^{N} 1/σ_n, depends only on the pre-trained models and is independent of the current image appearance. In experiments, the inventors found that replacing c_1 c_2 with a constant has a negligible effect on the model selection of the present invention. Then, c_1 c_2 · exp(α·s_1 + β·s_2) is further approximated as exp(α·s_1 + β·s_2) to simplify the computation.
Given a registered shape and the object image, s = α·s_1 + β·s_2 can easily be computed at a small computational cost. This score is used to evaluate how good the shape registration is.
Then, the scores of the registered shapes corresponding to the multiple shape models are compared with each other, and the registered shape with the highest score is selected as the final result.
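The selection step can be sketched as a small loop over candidate models; the model names and score values below are hypothetical, chosen only to mirror the shape of the Table 2 experiments.

```python
def select_best_fit(fit_results, score_fn):
    """Model selection for multi-view registration: score the fitting
    result of every candidate shape model and keep the one with the
    highest score.

    fit_results: dict mapping model name -> fitted shape (any object
    accepted by score_fn).
    """
    scored = {name: score_fn(shape) for name, shape in fit_results.items()}
    best = max(scored, key=scored.get)
    return best, scored
```

For example, with (hypothetical) scores {'LP': -0.47, 'F': -0.18, 'RP': -0.35}, the frontal model 'F' would be selected.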
Table 2 shows the experimental results of the multi-view facial registration according to the present invention.

[Table 2 image not reproduced]

In Table 2, the first column gives the experiment number and the first row the shape models used; the cells under each shape model give the score of that model in the corresponding experiment. For example, the value in row 2, column 2 means that the score of the model "LP" in experiment 1 is -0.472332.

In experiment 1 of Table 2, the shape with the highest score, -0.183356, is selected as the final result. In experiment 2, the shape with the highest score, -0.164278, is selected; in experiment 3, the shape with the highest score, -0.187144; and in experiment 4, the shape with the highest score, -0.193906. The experiments show that the score computed by the present invention distinguishes the goodness of registered shapes with high robustness.
In any case, the above expression s = α·s_1 + β·s_2 is only one way of computing the final score. Alternatively, s can be computed by multiplication from s_3 = exp(α·s_1) and s_4 = exp(β·s_2), i.e., s = s_3 · s_4. In fact, s can be computed based on any step of the above derivation.
In summary, the principle of the present invention is as follows. For each registered face shape, two scores are computed independently from different sources of information, and the linear combination of the two scores produces the final score used to evaluate the goodness of the registered face shape. The first score is computed from the deformation of the current face shape, and the second score from the facial appearance information provided by the object image.
The method and apparatus for realizing the above evaluation of the accuracy of facial registration (object registration) will now be described.
Fig. 8 illustrates an evaluation apparatus according to the present invention for evaluating the accuracy of a registered object shape represented by a plurality of calibration points in an image.
The evaluation apparatus comprises: a storage device 1201 for storing in advance a spatial prior model and an object appearance model learned from hand-labeled samples; a first score computing device 1202 for computing the first score from the deformation of the current shape with respect to the spatial prior model; a second score computing device 1203 for computing the second score using the object appearance model, from the image evidence at the positions in the object image determined by all registered calibration points; and an evaluation device 1204 for computing the final score of the registered object shape as the weighted sum of the two scores.
In the storage device 1201, the spatial prior model is constructed by aligning the training shape vectors with Generalized Procrustes Analysis (GPA), which removes the similarity-transform pose and normalizes the shapes by zero centroid and unit norm to remove offset and size variation, and by applying Principal Component Analysis (PCA) to the aligned shape vectors to decompose the overall shape deformation into many independent components encoded by the principal components. The object appearance model is constructed by dividing the overall object appearance into several independent local appearances according to the calibration points, learning for each calibration point a displacement predictor that estimates the displacement between the current position and the ground-truth position, and modeling each local appearance likelihood as a Gaussian distribution centered at the ground-truth position of the calibration point, with zero mean and with the residual of the pre-learned displacement predictor as its variance.
In the first score computing device 1202, the first score is computed by mapping the registered shape onto the principal component axes to obtain the shape deformation parameters, computing a score for each parameter using the Gaussian distribution model corresponding to each component axis, and summing all the scores to form the first score.
In the second score computing device 1203, the second score is computed by estimating, for each calibration point, the displacement between the current position and the ground-truth position, computing a score for each calibration point from the displacement using the Gaussian local appearance model, and summing all the scores to form the second score.
Now, the method for evaluating the accuracy of a registered face shape represented by a set of calibration points will be described. Fig. 9 shows a flowchart of the accuracy evaluation method for an object shape represented by a set of calibration points.

In step 1310, a registered shape represented by a set of calibration points is input. The shape is represented by the vector V = (x_1, x_2, ..., x_n, y_1, y_2, ..., y_n), which contains the coordinates of the calibration points in the image; v_i = (x_i, y_i) denotes the coordinates of the i-th calibration point in the image.
In step 1320, the method creates an offline spatial prior model to represent the possible deformations of the object shape. To learn the distribution of the possible shape deformations, N hand-labeled shapes (e.g., 300 or 1000) are selected as training samples. Preferably, N is as large as possible to create a more accurate model; however, the number of training samples needed depends on the complexity of the shape and on how deformable it is.
After all training samples are selected, the shape vectors are aligned to a reference shape vector with Generalized Procrustes Analysis, to remove the similarity-transform pose and to normalize them by zero centroid and unit norm to remove offset and size variation.
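The normalization and alignment step can be sketched as a minimal GPA loop. This is an illustrative sketch, not the patent's implementation: it centers and unit-norms each shape, then repeatedly aligns every shape to the current mean via the orthogonal Procrustes rotation (SVD of the cross-covariance), ignoring reflection handling for brevity.

```python
import numpy as np

def normalize_shape(points):
    """Zero-centroid and unit-norm normalization of an (n, 2) shape."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    norm = np.linalg.norm(centered)
    return centered / norm if norm > 0 else centered

def generalized_procrustes(shapes, iters=5):
    """Minimal GPA: align every shape to the mean, re-estimate the mean."""
    shapes = [normalize_shape(s) for s in shapes]
    mean = shapes[0]
    for _ in range(iters):
        aligned = []
        for s in shapes:
            # Orthogonal Procrustes: rotation minimizing ||s R - mean||
            u, _, vt = np.linalg.svd(s.T @ mean)
            aligned.append(s @ (u @ vt))
        shapes = aligned
        mean = normalize_shape(np.mean(shapes, axis=0))
    return shapes, mean
```

After alignment, a rotated copy of a shape coincides with the original, so only genuine deformation remains for the PCA step.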
The following process models the shape deformation. The spatial shape prior can be modeled by the class of graphical models called k-fans, by a Gaussian Markov random field mixture model that lets the shape deform according to a multivariate Gaussian distribution, or by independent Gaussian distributions. For the particular objects to be modeled, such as faces, hands and human bodies, the degree of shape deformation differs, and models of different complexity can be used. Which model to use depends on the complexity of the shape and the degree of possible shape deformation.
For face shape registration, experiments prove that modeling the shape deformation encoded by each principal component as a Gaussian distribution is sufficient.
In step 1330, the first score of a shape V = (x_1, x_2, ..., x_n, y_1, y_2, ..., y_n) is computed according to the spatial prior model. The shape deformation corresponding to each principal component of the PCA is modeled with a Gaussian distribution, and the score is computed as follows:

s_1 = Σ_{k=1}^{K} p_k² / λ_k    (6)

where p_k is the projection coefficient of the shape onto the k-th PCA principal component, λ_k is the statistical variance of the PCA coefficient p_k, and K is the number of retained principal components.
Alternatively, the first score can be computed with k-fans or a Gaussian Markov random field mixture model, or, more simply, using the distance between the current shape and the mean shape of the object.
In step 1340, the method creates an appearance likelihood model of the shape, representing the image evidence given the current shape. The object appearance is divided into several local parts corresponding to the calibration points. The image evidence for the i-th calibration point, seen at position v_i, is assumed to be independent of the image evidence of the other points given their spatial positions. For each local part, the appearance likelihood is modeled as a Gaussian distribution centered at the ground-truth position of the calibration point with a particular variance:

p(I|v_n) ∝ exp(−(v_{n,current} − v_{n,true})² / (2σ_n²))    (4a)

To compute the local likelihood probability, a model is built that directly predicts the displacement Δv_n = v_{n,current} − v_{n,true} with an incremental regression model based on gradient-boosted trees, and the training residual is used as the variance σ_n².
Many other methods can also be used here. For example, the image evidence can be learned with the GentleBoost logistic regression method combined with Haar-like features: the Haar-like features, as the basis of the weak regression functions, map the local texture to scalar feature values h(I), and the displacement is computed from the feature values h(I) by the weak regression functions.
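The incremental (boosted) regression idea can be sketched with a toy additive regressor built from one-split regression stumps. This is only an illustrative stand-in for the patent's gradient-boosted-tree displacement predictor: it fits a scalar feature h(I) to a scalar displacement; a real implementation would use tree ensembles over image features.

```python
import numpy as np

class StumpBoostRegressor:
    """Tiny additive regressor: each round fits a regression stump to
    the current residuals and adds a damped copy to the prediction."""

    def __init__(self, n_rounds=30, lr=0.5):
        self.n_rounds, self.lr = n_rounds, lr
        self.stumps = []            # (threshold, left_value, right_value)

    def fit(self, x, y):
        pred = np.zeros_like(y, dtype=float)
        for _ in range(self.n_rounds):
            resid = y - pred
            best = None
            for t in np.unique(x):  # try every split threshold
                left, right = resid[x <= t], resid[x > t]
                lv = left.mean() if left.size else 0.0
                rv = right.mean() if right.size else 0.0
                err = np.sum((resid - np.where(x <= t, lv, rv)) ** 2)
                if best is None or err < best[0]:
                    best = (err, t, lv, rv)
            _, t, lv, rv = best
            self.stumps.append((t, lv, rv))
            pred += self.lr * np.where(x <= t, lv, rv)
        return self

    def predict(self, x):
        x = np.asarray(x)
        pred = np.zeros(len(x))
        for t, lv, rv in self.stumps:
            pred += self.lr * np.where(x <= t, lv, rv)
        return pred
```

Each round reduces the residual, which is the "incremental" character the patent relies on; the final training residual would then serve as the variance σ_n².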
In step 1350, the second score is computed according to the local appearance likelihood models:

s_2 = Σ_{n=1}^{N} ||Δv_n||² / σ_n²    (7)

where Δv_n is the displacement, predicted by the n-th local appearance likelihood model, between the current position of the n-th calibration point and its ground-truth position in the object image, and N is the number of calibration points.
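The second score, and the training-residual variance that normalizes it, can be sketched as below; an illustrative NumPy sketch with names of our own choosing.

```python
import numpy as np

def residual_variance(errors):
    """sigma_n^2 as the mean squared norm of the displacement
    predictor's residuals on the training set, errors: (M, 2)."""
    e = np.asarray(errors, dtype=float)
    return float(np.mean(np.sum(e ** 2, axis=1)))

def likelihood_score(displacements, variances):
    """Second score s2 = sum_n ||dv_n||^2 / sigma_n^2 over the
    predicted per-landmark displacements, displacements: (N, 2)."""
    d = np.asarray(displacements, dtype=float)
    return float(np.sum(np.sum(d ** 2, axis=1) / np.asarray(variances)))
```

A landmark whose predicted displacement is large relative to its predictor's typical error contributes heavily to s_2.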
In step 1360, the accuracy measure of the current shape is computed as the weighted sum of the two computed scores:

Score = α · Σ_{k=1}^{K} p_k² / λ_k + β · Σ_{i=1}^{N} ||Δv_i||² / σ_i²    (5a)
      = α · s_1 + β · s_2

The parameters α and β are the weights of the respective terms; they lie in the range (0, 1] and need to be selected experimentally. Here, s_1 = Σ_{k=1}^{K} p_k² / λ_k and s_2 = Σ_{i=1}^{N} ||Δv_i||² / σ_i².
In any case, the above expression Score = α·s_1 + β·s_2 is only one way of computing the final score. Alternatively, Score can be computed by multiplication from s_3 = exp(α·s_1) and s_4 = exp(β·s_2), i.e., Score = s_3 · s_4. In fact, Score can be computed based on any step of the above derivation.
Compared with the method of L. Liang et al., in the present invention the face is divided into several local regions according to the face points, and an appearance model is learned independently for each face point from a local image patch of 30 × 30 pixels. Compared with an appearance model learned from the whole face image at 30 × 30 pixels, these local appearance models can represent more information, which makes the method more robust than that of L. Liang et al. In addition, in the present invention the appearance models are constructed with an incremental regression method: training samples are created by randomly offsetting from the ground-truth position and are labeled with the displacement, so the regression training can use all available training data. This is more accurate than BAM.
In addition, in the evaluation method of the present invention, Δv_n = v_{n,current} − v_{n,true} is the displacement between the current point position and the ground-truth position. The appearance likelihood model is designed to directly estimate the displacement Δv_n for the n-th calibration point; thus, the likelihood probability can be computed at a small computational cost.
The above evaluation method according to the present invention can be used in many applications. One example is the model selection in multi-view face shape registration shown in Figs. 5 and 6.
The method and system of the present invention can be carried out in various ways. For example, the method and apparatus of the present invention can be implemented in software, hardware, firmware or any combination thereof. The above order of the steps of the method is intended to be illustrative only, and the steps of the method of the present invention are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments the present invention can also be embodied as a program recorded in a recording medium, comprising machine-readable instructions for realizing the method according to the present invention. Thus, the present invention also covers a recording medium storing a program for realizing the method according to the present invention.
Although some specific embodiments of the present invention have been illustrated by way of examples, those skilled in the art should understand that the above examples are intended to be illustrative only and not to limit the scope of the present invention. Those skilled in the art should understand that the above embodiments can be modified without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the appended claims.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications and equivalent structures and functions.

Claims (20)

1. A method for evaluating the accuracy of a registered object shape represented by a plurality of calibration points in an image, the method comprising:
a first score computing step of computing a first score according to the object shape using a first model;
a second score computing step of computing a second score according to each of the plurality of calibration points using a second model; and
an evaluating step of computing the accuracy of the registration according to the first score and the second score.
2. The method according to claim 1, wherein, in the evaluating step, the accuracy of the registration is computed as a weighted sum of the first score and the second score.
3. The method according to claim 1, wherein the first model is a spatial prior model, and, in the first score computing step, the first score is computed according to the deformation of the object shape with respect to the spatial prior model.
4. The method according to claim 3, wherein, in the first score computing step:
the object shape is mapped onto the principal component axes to obtain shape deformation parameters;
the score of each shape deformation parameter is computed using the Gaussian distribution model corresponding to each principal component; and
the scores corresponding to the principal components are summed to obtain the first score.
5. The method according to claim 4, wherein, in the first score computing step, the first score is computed by the following formula:

s_1 = Σ_{k=1}^{K} p_k² / λ_k,

where s_1 denotes the first score, K denotes the number of principal components, k denotes the k-th principal component, p_k denotes the shape parameter of the k-th principal component, and λ_k denotes the statistical variance of the shape parameter p_k.
6. The method according to claim 1, wherein the second model is an object appearance model.
7. The method according to claim 6, wherein, in the second score computing step:
for each calibration point, the displacement between the current position and the ground-truth position is estimated;
the score of each calibration point is computed from the displacement by a Gaussian distribution model; and
the computed scores corresponding to the calibration points are summed to obtain the second score.
8. The method according to claim 7, wherein, in the second score computing step, the second score is computed by the following formula:

s_2 = Σ_{i=1}^{N} ||Δv_i||² / σ_i²,

where s_2 denotes the second score, N denotes the number of calibration points, Δv_i denotes the estimated displacement between the current position and the ground-truth position corresponding to the i-th calibration point, and σ_i² denotes the statistical variance of the estimated displacement Δv_i.
9. The method according to claim 7 or 8, wherein the estimation of the displacement between the current position and the ground-truth position of each calibration point is performed by first-order regression.
10. A method for registering an object shape in an image, comprising:
a shape registration step of fitting the object shape according to a plurality of shape models for the object, each of the plurality of shape models corresponding to an image of the object from one view angle; and
a selecting step of computing, based on the fitting results, a score of the fitting result of each shape model by the evaluation method according to any one of claims 1-9, and selecting the fitting result corresponding to the highest score as the registration result of the object shape.
11. An apparatus for evaluating the accuracy of a registered object shape represented by a plurality of calibration points in an image, the apparatus comprising:
a first score computing device for computing a first score according to the object shape using a first model;
a second score computing device for computing a second score according to each of the plurality of calibration points using a second model; and
an evaluating device for computing the accuracy of the registration according to the first score and the second score.
12. The apparatus according to claim 11, wherein the evaluating device computes the accuracy of the registration as a weighted sum of the first score and the second score.
13. The apparatus according to claim 12, wherein the first model is a spatial prior model, and the first score computing device computes the first score according to the deformation of the object shape with respect to the spatial prior model.
14. The apparatus according to claim 13, wherein the first score computing device maps the object shape onto the principal component axes to obtain shape deformation parameters; computes the score of each shape deformation parameter using the Gaussian distribution model corresponding to each principal component; and sums the scores corresponding to the principal components to obtain the first score.
15. The apparatus according to claim 14, wherein the first score computing device computes the first score by the following formula:

s_1 = Σ_{k=1}^{K} p_k² / λ_k,

where s_1 denotes the first score, K denotes the number of principal components, k denotes the k-th principal component, p_k denotes the shape parameter of the k-th principal component, and λ_k denotes the statistical variance of the shape parameter p_k.
16. The apparatus according to claim 11, wherein the second model is an object appearance model.
17. The apparatus according to claim 16, wherein the second score computing device estimates, for each calibration point, the displacement between the current position and the ground-truth position; computes the score of each calibration point from the displacement by a Gaussian distribution model; and sums the computed scores corresponding to the calibration points to obtain the second score.
18. The apparatus according to claim 17, wherein the second score computing device computes the second score by the following formula:

s_2 = Σ_{i=1}^{N} ||Δv_i||² / σ_i²,

where s_2 denotes the second score, N denotes the number of calibration points, Δv_i denotes the estimated displacement between the current position and the ground-truth position corresponding to the i-th calibration point, and σ_i² denotes the statistical variance of the estimated displacement Δv_i.
19. The apparatus according to claim 17 or 18, wherein the estimation of the displacement between the current position and the ground-truth position of each calibration point is performed by first-order regression.
20. An apparatus for registering an object shape in an image, the apparatus comprising:
a shape registration device for fitting the object shape according to a plurality of shape models for the object, each of the plurality of shape models corresponding to an image of the object from one view angle; and
a selecting device for computing, based on the fitting results, a score of the fitting result of each shape model by the apparatus according to any one of claims 11-19, and selecting the fitting result corresponding to the highest score as the registration result of the object shape.
CN201210058798.6A 2012-03-08 The precision assessment method of registration with objects shape and equipment, the method and apparatus of registration Active CN103310219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210058798.6A CN103310219B (en) 2012-03-08 The precision assessment method of registration with objects shape and equipment, the method and apparatus of registration


Publications (2)

Publication Number Publication Date
CN103310219A true CN103310219A (en) 2013-09-18
CN103310219B CN103310219B (en) 2016-11-30


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426929A (en) * 2014-09-19 2016-03-23 佳能株式会社 Object shape alignment device, object processing device and methods thereof
CN106875376A (en) * 2016-12-29 2017-06-20 中国科学院自动化研究所 The construction method and lumbar vertebrae method for registering of lumbar vertebrae registration prior model
CN107240127A (en) * 2017-04-19 2017-10-10 中国航空无线电电子研究所 The image registration appraisal procedure of distinguished point based mapping
CN107766867A (en) * 2016-08-15 2018-03-06 佳能株式会社 Object shapes detection means and method, image processing apparatus and system, monitoring system
CN110023990A (en) * 2016-11-28 2019-07-16 德国史密斯海曼简化股份公司 Illegal article is detected using registration
CN111489425A (en) * 2020-03-21 2020-08-04 复旦大学 Brain tissue surface deformation estimation method based on local key geometric information
CN112465881A (en) * 2020-11-11 2021-03-09 常州码库数据科技有限公司 Improved robust point registration method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101819628A (en) * 2010-04-02 2010-09-01 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic
US7907768B2 (en) * 2006-12-19 2011-03-15 Fujifilm Corporation Method and apparatus for probabilistic atlas based on shape modeling technique
CN102122359A (en) * 2011-03-03 2011-07-13 北京航空航天大学 Image registration method and device
CN102254169A (en) * 2011-08-23 2011-11-23 东北大学秦皇岛分校 Multi-camera-based face recognition method and multi-camera-based face recognition system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Xiaoyan, "Research on Face Localization Based on ASM" (基于ASM的人脸定位研究), Information Science and Technology Series (信息科技辑) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426929A (en) * 2014-09-19 2016-03-23 佳能株式会社 Object shape alignment device, object processing device and methods thereof
CN105426929B (en) * 2014-09-19 2018-11-27 佳能株式会社 Object shapes alignment device, object handles devices and methods therefor
CN107766867A (en) * 2016-08-15 2018-03-06 佳能株式会社 Object shapes detection means and method, image processing apparatus and system, monitoring system
CN110023990A (en) * 2016-11-28 2019-07-16 德国史密斯海曼简化股份公司 Illegal article is detected using registration
CN106875376A (en) * 2016-12-29 2017-06-20 中国科学院自动化研究所 The construction method and lumbar vertebrae method for registering of lumbar vertebrae registration prior model
CN106875376B (en) * 2016-12-29 2019-10-22 中国科学院自动化研究所 The construction method and lumbar vertebrae method for registering of lumbar vertebrae registration prior model
CN107240127A (en) * 2017-04-19 2017-10-10 中国航空无线电电子研究所 The image registration appraisal procedure of distinguished point based mapping
CN111489425A (en) * 2020-03-21 2020-08-04 复旦大学 Brain tissue surface deformation estimation method based on local key geometric information
CN111489425B (en) * 2020-03-21 2022-07-22 复旦大学 Brain tissue surface deformation estimation method based on local key geometric information
CN112465881A (en) * 2020-11-11 2021-03-09 常州码库数据科技有限公司 Improved robust point registration method and system

Similar Documents

Publication Publication Date Title
Liu et al. Dense face alignment
CN1954342B (en) Parameter estimation method, parameter estimation device, and collation method
US8588519B2 (en) Method and system for training a landmark detector using multiple instance learning
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
US20050169536A1 (en) System and method for applying active appearance models to image analysis
CN103324938A (en) Method for training attitude classifier and object classifier and method and device for detecting objects
US9129152B2 (en) Exemplar-based feature weighting
CN108198172B (en) Image significance detection method and device
Detry et al. Continuous surface-point distributions for 3D object pose estimation and recognition
CN106599856A (en) Combined face detection, positioning and identification method
Akai et al. Misalignment recognition using Markov random fields with fully connected latent variables for detecting localization failures
CN109558814A (en) A kind of three-dimensional correction and weighting similarity measurement study without constraint face verification method
CN108985161B (en) Low-rank sparse representation image feature learning method based on Laplace regularization
Jorstad et al. Refining mitochondria segmentation in electron microscopy imagery with active surfaces
Qiu et al. Outdoor semantic segmentation for UGVs based on CNN and fully connected CRFs
Salinas-Gutiérrez et al. Using gaussian copulas in supervised probabilistic classification
Wang et al. Joint head pose and facial landmark regression from depth images
CN106971176A (en) Tracking infrared human body target method based on rarefaction representation
Jain et al. Object detection using coco dataset
Cootes et al. Locating objects of varying shape using statistical feature detectors
CN103310219A (en) Method and equipment for precision evaluation of object shape registration, and method and equipment for registration
CN103310219B (en) The precision assessment method of registration with objects shape and equipment, the method and apparatus of registration
CN103377382A (en) Optimum gradient pursuit for image alignment
Ibragimov et al. Landmark-based statistical shape representations
Costen et al. Compensating for ensemble-specific effects when building facial models

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant