CN101635028A - Image detecting method and image detecting device

Info

Publication number
CN101635028A
CN101635028A (application CN200910085796A)
Authority
CN
China
Prior art keywords
mouth
feature point
frame image
current frame
mouth feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN200910085796A
Other languages
Chinese (zh)
Inventor
谢东海
黄英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vimicro Corp
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp filed Critical Vimicro Corp
Priority to CN200910085796A
Publication of CN101635028A
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image detection method and an image detection device that reduce the complexity of detecting a mouth in an image and increase the accuracy and speed of that detection. The image detection method comprises the following steps: preliminarily determining the mouth feature points in the current frame by performing mouth detection on the current frame; performing a local search for mouth feature points within the region containing those feature points, to re-determine the mouth feature points in the current frame; applying a shape constraint, using a pre-built mouth model, to the region containing the re-determined mouth feature points; and finally determining the mouth feature points of the current frame.

Description

Image detection method and device
Technical field
The present invention relates to the field of image processing, and in particular to an image detection method and device.
Background technology
The positions of facial feature points are among the most important cues in applications such as face recognition and human-computer interaction, and for video processing, tracking facial feature points quickly and accurately is crucial.
Traditional facial feature point tracking methods generally assume that the tracked target is highly similar between frames: similarity is measured with criteria such as least mean-square error or histogram distance, and the candidate with the highest similarity is taken as the tracking result.
The mouth is the most deformable organ of the face, and tracking it is a difficult problem. The main current approaches to mouth tracking are real-time localization and tracking methods based on the Active Shape Model (ASM) and the Active Appearance Model (AAM).
However, existing ASM and AAM algorithms are complex, often require a large amount of computation, run slowly, and yield insufficiently accurate results, so they cannot meet real-time processing requirements. Mouth detection in the prior art is therefore unsatisfactory.
Summary of the invention
Embodiments of the invention provide an image detection method and device that reduce the complexity of detecting a mouth in an image and improve the accuracy and speed of that detection.
An image detection method provided by an embodiment of the invention comprises:
preliminarily determining the mouth feature points in the current frame by performing mouth detection on the current frame;
performing a local search for mouth feature points within the region containing those feature points, to re-determine the mouth feature points in the current frame;
applying a shape constraint, using a pre-built mouth model, to the region containing the re-determined mouth feature points, and finally determining the mouth feature points in the current frame.
An image detection device provided by an embodiment of the invention comprises:
a preliminary determination unit, configured to preliminarily determine the mouth feature points in the current frame by performing mouth detection on the current frame;
a re-determination unit, configured to perform a local search for mouth feature points within the region containing those feature points, re-determining the mouth feature points in the current frame;
a final determination unit, configured to apply a shape constraint, using a pre-built mouth model, to the region containing the re-determined mouth feature points, and finally determine the mouth feature points in the current frame.
In embodiments of the invention, the mouth feature points in the current frame are first determined preliminarily by mouth detection; a local search within the region containing those points then re-determines them; finally, a pre-built mouth model imposes a shape constraint on the region containing the re-determined points, yielding the final mouth feature points. This reduces the complexity of mouth detection in images and improves its accuracy and speed.
Description of drawings
Fig. 1 is a schematic structural diagram of an image detection device provided by an embodiment of the invention;
Fig. 2 is a schematic overall flowchart of an image detection method provided by an embodiment of the invention;
Fig. 3 is a schematic detailed flowchart of an image detection method provided by an embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention provide an image detection method and device that reduce the complexity of detecting a mouth in an image and improve the accuracy and speed of that detection.
The technical solution provided by the embodiments is aimed primarily at locating and tracking the mouth, but it can equally be applied to locating and tracking other target objects.
Overall, the embodiments track the mouth feature points in two steps. The first step tracks the feature points with the KLT (Kanade-Lucas-Tomasi) algorithm; the second step uses a mouth feature point model trained with the Scale Invariant Feature Transform (SIFT) algorithm to perform a fine local search. After each tracking pass, an ASM model applies a shape constraint to all tracked mouth feature points; that is, the shape and position of the mouth are determined from the searched feature points.
KLT tracks the feature points of a target object from image intensity and gradient information. It tracks points whose intensity changes gently quite accurately, but cannot track points that move violently or whose intensity changes sharply. The SIFT feature is robust to intensity changes, image deformation, and scale changes, so it is suited to describing strongly deforming feature points such as those of the mouth. ASM constrains the tracked points globally, so that taken together they conform to the shape of a mouth.
The technical solution provided by the embodiments of the invention is described below with reference to the accompanying drawings.
Referring to Fig. 1, an image detection device provided by an embodiment of the invention comprises:
a preliminary determination unit 11, configured to preliminarily determine the mouth feature points in the current frame by performing mouth detection on the current frame;
a shape constraint unit 12, configured to apply a shape constraint, using a pre-built mouth model, to the region containing the mouth feature points preliminarily determined by the preliminary determination unit 11;
a re-determination unit 13, configured to perform a local search for mouth feature points within the region containing the feature points preliminarily determined by unit 11, re-determining the mouth feature points in the current frame; alternatively, the local search is performed within the region output by the shape constraint unit 12, re-determining the mouth feature points in the current frame.
In other words, the shape constraint unit 12 is optional; if it is present, the detection results can be better.
a final determination unit 14, configured to apply a shape constraint, using the pre-built mouth model, to the region containing the feature points re-determined by unit 13, and finally determine the mouth feature points in the current frame.
Preferably, the preliminary determination unit 11 comprises:
a locating unit 21, configured to locate mouth feature points in the first captured frame, preliminarily determining the mouth feature points of that frame; and, when the tracking unit 23 fails to track the mouth feature points in the current frame, to preliminarily determine them by locating mouth feature points in the current frame;
a storage unit 22, configured to store the region containing the mouth feature points determined by the locating unit 21 or the tracking unit 23;
a tracking unit 23, configured to track the mouth feature points in the current frame using the stored region containing the mouth feature points of the previous frame, preliminarily determining the mouth feature points in the current frame.
Preferably, the final determination unit 14 comprises:
a shape constraint unit 31, configured to apply a shape constraint, using the pre-built mouth model, to the region containing the feature points re-determined by unit 13;
a verification unit 32, configured to verify the mouth feature points within the shape-constrained region against pre-trained mouth feature points, finally determining the mouth feature points in the current frame.
Each unit is described in detail below.
Locating unit 21:
There are many methods for locating feature points; commonly used ones include mouth feature locating based on ASM, locating based on AAM, and methods based on feature point statistics.
The ASM method learns the deformation pattern of a face shape model statistically: principal component analysis (PCA) is applied to samples of face contours, and the resulting principal components describe the deformation of the contour. PCA extracts from a large number of samples the principal components that capture how the samples vary; each component controls how a different part of the face changes.
When locating with ASM, the initial position of the face must first be found; starting from it, the organ points are searched along edge features, and after each search pass the ASM model applies a shape constraint to the result.
When locating the mouth with ASM, a large number of mouth shapes can be collected in advance and a mouth ASM model trained from them. The advantage of this method is that the search result is constrained by the ASM model, so the final shape is controlled globally.
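The ASM shape constraint described above can be sketched in a few lines: project the searched shape onto the trained PCA subspace, clamp each mode coefficient, and reconstruct. This is a minimal illustration that assumes a shape model (mean shape, component matrix, per-mode limits, commonly three standard deviations of each mode) has already been trained; the function name is illustrative and pose alignment (translation/scale/rotation) is omitted for brevity.

```python
import numpy as np

def asm_constrain(shape, mean_shape, P, limits):
    """Project a searched shape onto the ASM subspace and clamp each mode,
    so the constrained shape stays a plausible mouth.

    P      -- PCA principal components, one per column
    limits -- allowed amplitude per mode (e.g. 3*sqrt(eigenvalue))
    """
    x = np.asarray(shape, float).ravel()
    b = P.T @ (x - mean_shape)       # mode coefficients of the deviation
    b = np.clip(b, -limits, limits)  # constrain each mode to its legal range
    return mean_shape + P @ b        # back to point coordinates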
AAM is similar to ASM, but it applies the PCA transform to the gray levels of the face, so the search uses facial intensity information.
The other class of locating methods is based on feature statistics: the features of specific positions are modeled statistically, and each feature is searched for individually. Since every feature is trained separately, these methods locate individual points more accurately.
Tracking unit 23:
The tracking unit 23 tracks the mouth feature points with the KLT tracking algorithm, which converts the tracking problem into an optimization problem. Suppose the current frame is G and the previous frame is F, and let (x, y) denote the position of a feature point obtained by locating. The purpose of tracking is to find the displacement of this feature point between F and G.
The tracking model can be as simple as pure translation, with the mathematical model:
Σ_{i=0}^{N} ( F(x_i + Δx, y_i + Δy) − G(x_i, y_i) )² → min    …formula (1)
where G(x, y) and F(x, y) are the gray values of pixel (x, y) in G and F respectively, and Δx, Δy are the displacement of the pixel. Formula (1) seeks the displacement (Δx, Δy) that minimizes the sum of squared differences between G and the correspondingly shifted pixels of F; it can be solved with a linear optimization method such as least squares.
The unknowns are Δx and Δy. To solve for them by least squares, formula (1) is first linearized with a first-order Taylor expansion:
F(x + Δx, y + Δy) = F(x, y) + F_x·Δx + F_y·Δy    …formula (2)
The error term for each feature point is then:
[F_x  F_y]·[Δx  Δy]ᵀ − (G(x, y) − F(x, y)) = v    …formula (3)
Formula (3) is the minimum mean-square error formula: every feature point contributes one instance of it, and accumulating these instances and decomposing the resulting matrix product yields the final solution for Δx and Δy.
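One iteration of the translation-only solution of formulas (1)–(3) can be sketched as a least-squares solve over a patch. This is an illustrative sketch, not the patent's implementation: the function name is made up, gradients are taken densely over the whole patch with finite differences, and a practical tracker would iterate this step in a window around each feature point.

```python
import numpy as np

def klt_translation_step(F, G):
    """One Lucas-Kanade iteration: estimate the translation (dx, dy) that
    best maps patch F (previous frame) onto patch G (current frame), by
    least-squares on the linearized error of formula (3)."""
    F = F.astype(float)
    G = G.astype(float)
    Fy, Fx = np.gradient(F)                        # image gradients of F
    t = (G - F).ravel()                            # right-hand side G - F
    A = np.stack([Fx.ravel(), Fy.ravel()], axis=1) # N x 2 design matrix
    # Solve A @ [dx, dy] = t in the least-squares sense (accumulated
    # normal equations over all pixels of the patch).
    d, *_ = np.linalg.lstsq(A, t, rcond=None)
    return d  # (dx, dy)
```

On a smooth synthetic patch shifted by a known amount, the recovered (dx, dy) is close to the true shift; for large motions the estimate is refined iteratively and on an image pyramid, as the text notes next.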
To track more violent motion, the KLT tracking algorithm is generally run on an image pyramid.
The formulas above recover only a translation, and they can be improved: in practice the motion of objects in video is often more complicated, and a pure translation model cannot accurately describe the target's motion and inter-frame deformation. A more complex model is needed; for example, the motion can be modeled as an affine transform:
x′ = a·x + b·y + c
y′ = d·x + e·y + f    …formula (4)
where (x, y) is the coordinate before deformation, corresponding to the coordinate in the current frame G of formula (1), and (x′, y′) is the coordinate after deformation, corresponding to the previous frame F of formula (1). The affine transform is also linear, so its six unknown parameters can likewise be solved by least squares, enabling KLT to track feature points that have undergone affine change.
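Because formula (4) is linear in its six parameters, they can be estimated from matched point pairs with an ordinary least-squares solve. The sketch below (function name assumed, three or more non-collinear pairs required) stacks two equations per pair and solves for (a, b, c, d, e, f):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares estimate of the six affine parameters of formula (4),
    x' = a*x + b*y + c,  y' = d*x + e*y + f,
    from matched point pairs src[i] -> dst[i]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = src[:, 1]; A[0::2, 2] = 1.0  # x' rows
    A[1::2, 3] = src[:, 0]; A[1::2, 4] = src[:, 1]; A[1::2, 5] = 1.0  # y' rows
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # (a, b, c, d, e, f)
```

With exact correspondences the true parameters are recovered exactly; with noisy tracked points the solve returns the best affine fit in the least-squares sense.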
Re-determination unit 13:
The re-determination unit 13 uses the SIFT algorithm to perform a local search within the region containing the mouth feature points, to re-determine the mouth feature points in the current frame. SIFT stands for Scale Invariant Feature Transform. The SIFT algorithm is built on scale-space theory and remains invariant to image scaling and rotation, and even to affine transforms. Because of this robustness, SIFT became a research focus as soon as it was proposed, and it has important uses in target recognition, feature point extraction and tracking, image search, and other fields.
The SIFT algorithm first performs feature detection in scale space, determining the position of keypoints and the scale at which each resides; it then takes the principal direction of the gradients in the keypoint's neighborhood as the direction of the point, making the operator independent of scale and direction. The SIFT feature vectors extracted by the algorithm have the following properties:
a) SIFT features are local image features, invariant to rotation, scale, and brightness changes, and stable to some degree under viewpoint change, affine transforms, and noise.
b) They are highly distinctive (Distinctiveness) and information-rich, suitable for fast, accurate matching in massive feature databases.
c) They are plentiful: even a few objects produce a large number of SIFT feature vectors.
d) They are fast: an optimized SIFT matching algorithm can even meet real-time requirements.
e) They are extensible: they can easily be combined with other kinds of feature vectors.
The SIFT operator detects local extrema simultaneously in the two-dimensional image plane and in the Difference-of-Gaussian (DoG) scale space, which gives the features good distinctiveness and stability. The DoG operator is defined as the difference of Gaussian kernels at two different scales; it is simple to compute and approximates the normalized Laplacian-of-Gaussian (LoG) operator. The DoG operator is:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
           = L(x, y, kσ) − L(x, y, σ)
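As a one-dimensional illustration of the operator D = L(kσ) − L(σ), the sketch below smooths a signal with two Gaussians and subtracts; a blob-like structure of matching size produces an extremum of the response. The function names and the default k = 1.6 are illustrative choices, and the 2-D image case is analogous (one convolution per axis).

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Sampled 1-D Gaussian kernel on [-radius, radius], normalized to sum 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def dog_response(signal, sigma, k=1.6):
    """D = L(k*sigma) - L(sigma): difference of two Gaussian smoothings of
    the same signal, the 1-D analogue of the DoG operator above."""
    r = int(3 * k * sigma + 0.5)  # shared support so both kernels align
    g_wide = np.convolve(signal, gaussian_kernel(k * sigma, r), mode="same")
    g_narrow = np.convolve(signal, gaussian_kernel(sigma, r), mode="same")
    return g_wide - g_narrow
```

For an isolated impulse the response has its extremum exactly at the impulse, and it integrates to (approximately) zero because both kernels are normalized, which is what makes DoG a band-pass, blob-selective operator.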
The SIFT feature matching algorithm has two stages. The first stage generates the SIFT features, extracting from the images to be matched feature vectors that are invariant to scale, rotation, and brightness change; the second stage matches the SIFT feature vectors. Generating the SIFT feature vectors of an image involves the following steps:
1. Scale-space extremum detection, to preliminarily determine the positions and scales of the keypoints.
2. Fitting a three-dimensional quadratic function to determine the position and scale of each keypoint precisely, while removing low-contrast keypoints and unstable edge response points (the DoG operator produces strong edge responses), which strengthens matching stability and noise resistance.
3. Assigning each keypoint a direction parameter from the gradient direction distribution of its neighborhood pixels, giving the operator rotation invariance.
4. Generating the SIFT feature vectors.
Because SIFT features are highly robust, the embodiments use the SIFT algorithm to compute the features of the mouth feature points. To make these features generalize well, the embodiments train the mouth feature points on many samples: for each mouth sample the feature points are extracted and their SIFT features computed, and finally the features computed at the same position across samples are averaged to obtain the final training result.
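The per-landmark training described above (averaging the SIFT features computed at the same position across samples) reduces, in code, to a mean over descriptors. The re-normalization of the mean is an assumption added here so that matching scores stay comparable; the function name is illustrative.

```python
import numpy as np

def train_landmark_template(descriptors):
    """Average the descriptors extracted at the same mouth landmark over
    many training samples into a single template vector.

    Descriptors are assumed to be SIFT-style (e.g. 128-D, L2-normalized);
    the mean is re-normalized to unit length."""
    D = np.asarray(descriptors, float)
    mean = D.mean(axis=0)                 # per-dimension average over samples
    n = np.linalg.norm(mean)
    return mean / n if n > 0 else mean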
Verification unit 32:
The role of the verification unit 32 is to judge whether the tracked feature points are real mouth feature points. Because the mouth deforms strongly, tracking is often lost, so the tracked points must be verified. Verification is based on the trained SIFT features: SIFT features are extracted near each tracked point and compared with the training result, and if the similarity is low the point is considered a tracking failure. If most of the points on the mouth fail verification, tracking is considered lost and the mouth must be relocated.
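The verification rule can be sketched as a similarity test against the trained templates. The cosine-similarity measure and both thresholds below are illustrative assumptions, not values from the patent; the patent only specifies that low similarity marks a point as failed, and that failure of most points triggers relocation.

```python
import numpy as np

def verify_tracking(tracked, templates, sim_thresh=0.7, min_ratio=0.5):
    """Compare the descriptor extracted at each tracked point with its
    trained template; report whether enough points pass to keep the track.

    tracked   -- descriptors extracted near the tracked points
    templates -- trained per-landmark template descriptors (same order)
    """
    ok = 0
    for d, t in zip(tracked, templates):
        d = np.asarray(d, float)
        t = np.asarray(t, float)
        sim = d @ t / (np.linalg.norm(d) * np.linalg.norm(t) + 1e-12)
        ok += sim >= sim_thresh          # count points that pass
    # Track survives only if at least min_ratio of the points verified.
    return bool(ok >= min_ratio * len(templates))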
Referring to Fig. 2, an image detection method provided by an embodiment of the invention comprises the following overall steps:
S101: preliminarily determine the mouth feature points in the current frame by performing mouth detection on the current frame.
S102: perform a local search for mouth feature points within the region containing those feature points, re-determining the mouth feature points in the current frame.
S103: apply a shape constraint, using a pre-built mouth model, to the region containing the re-determined mouth feature points, and finally determine the mouth feature points in the current frame.
Preferably, step S101 comprises:
tracking the mouth feature points in the current frame using the region containing the mouth feature points of the previous frame, preliminarily determining the mouth feature points in the current frame.
Preferably, if tracking the mouth feature points in the current frame fails, the mouth feature points are preliminarily determined by locating mouth feature points in the current frame.
Preferably, between steps S101 and S102 the method further comprises:
applying a shape constraint, using the pre-built mouth model, to the region containing the preliminarily determined mouth feature points.
Preferably, step S103 comprises:
applying a shape constraint, using the pre-built mouth model, to the region containing the re-determined mouth feature points; then verifying the mouth feature points within the shape-constrained region against pre-trained mouth feature points, finally determining the mouth feature points in the current frame.
Referring to Fig. 3, the specific implementation of the mouth feature point detection method provided by an embodiment of the invention comprises:
Step 1: locate the feature points on the mouth in the current frame.
Step 2: obtain the next frame and track the located mouth feature points with the KLT algorithm.
Step 3: apply a shape constraint to the tracking result of step 2 with the trained ASM model.
Step 4: perform a local search on the result of step 3 with the SIFT algorithm.
Step 5: apply a shape constraint to the result of step 4 with the trained ASM model.
Step 6: verify the result of step 5; if verification passes, jump to step 2; otherwise jump to step 1 and relocate the feature points on the mouth.
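The six steps above can be summarized as a control loop. All five stages are injected here as hypothetical callables (none of these names come from the patent), and as a simplification a failed verification triggers relocation on the next frame rather than an immediate retry of the same frame:

```python
def track_sequence(frames, locate, klt_track, asm_constrain, sift_refine, verify):
    """Skeleton of the six-step loop of Fig. 3.

    locate(frame)            -- step 1: locate mouth feature points
    klt_track(frame, points) -- step 2: KLT tracking from previous points
    asm_constrain(points)    -- steps 3 and 5: ASM shape constraint
    sift_refine(frame, pts)  -- step 4: SIFT local search
    verify(frame, points)    -- step 6: accept or reject the track
    """
    points = None
    results = []
    for frame in frames:
        if points is None:
            points = locate(frame)               # step 1 (or relocation)
        else:
            points = klt_track(frame, points)    # step 2
        points = asm_constrain(points)           # step 3
        points = sift_refine(frame, points)      # step 4
        points = asm_constrain(points)           # step 5
        if verify(frame, points):                # step 6
            results.append(points)
        else:
            results.append(None)                 # track lost
            points = None                        # relocate on the next frame
    return results
```

With stub stages the loop behaves as expected: it tracks frame to frame, drops the track when verification fails, and relocates afterwards.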
In summary, embodiments of the invention preliminarily determine the mouth feature points in the current frame by performing mouth detection on it; perform a local search within the region containing those points to re-determine them; and apply a shape constraint, using a pre-built mouth model, to the region containing the re-determined points to finally determine the mouth feature points in the current frame. This reduces the complexity of mouth detection in images and improves its accuracy and speed.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to cover them as well.

Claims (10)

1. An image detection method, characterized in that the method comprises:
preliminarily determining the mouth feature points in the current frame by performing mouth detection on the current frame;
performing a local search for mouth feature points within the region containing those feature points, to re-determine the mouth feature points in the current frame;
applying a shape constraint, using a pre-built mouth model, to the region containing the re-determined mouth feature points, and finally determining the mouth feature points in the current frame.
2. The method according to claim 1, characterized in that the step of preliminarily determining the mouth feature points in the current frame by performing mouth detection on the current frame comprises:
tracking the mouth feature points in the current frame using the region containing the mouth feature points of the previous frame, preliminarily determining the mouth feature points in the current frame.
3. The method according to claim 2, characterized in that, if tracking the mouth feature points in the current frame fails, the mouth feature points in the current frame are preliminarily determined by locating mouth feature points in the current frame.
4. The method according to claim 1, 2, or 3, characterized in that, after the mouth feature points in the current frame have been preliminarily determined and before the local search for mouth feature points is performed, the method further comprises:
applying a shape constraint, using the pre-built mouth model, to the region containing the preliminarily determined mouth feature points.
5. The method according to claim 1, characterized in that the step of applying a shape constraint, using the pre-built mouth model, to the region containing the re-determined mouth feature points and finally determining the mouth feature points in the current frame comprises:
applying a shape constraint, using the pre-built mouth model, to the region containing the re-determined mouth feature points;
verifying the mouth feature points within the shape-constrained region against pre-trained mouth feature points, finally determining the mouth feature points in the current frame.
6. An image detection device, characterized in that the device comprises:
a preliminary determination unit, configured to preliminarily determine the mouth feature points in the current frame by performing mouth detection on the current frame;
a re-determination unit, configured to perform a local search for mouth feature points within the region containing those feature points, re-determining the mouth feature points in the current frame;
a final determination unit, configured to apply a shape constraint, using a pre-built mouth model, to the region containing the re-determined mouth feature points, and finally determine the mouth feature points in the current frame.
7. The device according to claim 6, characterized in that the preliminary determination unit comprises:
a storage unit, configured to store the region containing the mouth feature points in an image;
a tracking unit, configured to track the mouth feature points in the current frame using the region containing the mouth feature points of the previous frame, preliminarily determining the mouth feature points in the current frame.
8. The device according to claim 7, characterized in that the preliminary determination unit further comprises:
a locating unit, configured to locate mouth feature points in the first captured frame, preliminarily determining the mouth feature points of the first frame; and, when the tracking unit fails to track the mouth feature points in the current frame, to preliminarily determine them by locating mouth feature points in the current frame.
9. The device according to claim 6, 7, or 8, characterized in that, between the preliminary determination unit and the re-determination unit, the device further comprises:
a shape constraint unit, configured to apply a shape constraint, using the pre-built mouth model, to the region containing the mouth feature points preliminarily determined by the preliminary determination unit.
10. The device according to claim 6, characterized in that the final determination unit comprises:
a shape constraint unit, configured to apply a shape constraint, using the pre-built mouth model, to the region containing the mouth feature points re-determined by the re-determination unit;
a verification unit, configured to verify the mouth feature points within the shape-constrained region against pre-trained mouth feature points, finally determining the mouth feature points in the current frame.
CN200910085796A 2009-06-01 2009-06-01 Image detecting method and image detecting device Pending CN101635028A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910085796A CN101635028A (en) 2009-06-01 2009-06-01 Image detecting method and image detecting device


Publications (1)

Publication Number Publication Date
CN101635028A 2010-01-27

Family

ID=41594213


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224921A * 2015-09-17 2016-01-06 桂林远望智能通信科技有限公司 Facial image selection system and processing method
CN105787416A * 2014-12-23 2016-07-20 Tcl集团股份有限公司 Mobile terminal-based face recognition method and system
CN106650682A * 2016-12-29 2017-05-10 Tcl集团股份有限公司 Method and device for face tracking
CN107169397A * 2016-03-07 2017-09-15 佳能株式会社 Feature point detection method and device, image processing system, and monitoring system
CN110415276A * 2019-07-30 2019-11-05 北京字节跳动网络技术有限公司 Motion information calculation method and device, and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1687957A (en) * 2005-06-02 2005-10-26 上海交通大学 Human face feature point localization method combining local search and active appearance model
CN1866272A (en) * 2006-06-22 2006-11-22 上海交通大学 Feature point positioning method combined with active shape model and quick active appearance model
US20070189584A1 (en) * 2006-02-10 2007-08-16 Fujifilm Corporation Specific expression face detection method, and imaging control method, apparatus and program
CN101169827A (en) * 2007-12-03 2008-04-30 北京中星微电子有限公司 Method and device for tracking characteristic point of image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘俊承, 王淼鑫: "A matching and tracking method for natural landmarks in robot navigation", Computer Engineering and Applications *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787416A (en) * 2014-12-23 2016-07-20 Tcl集团股份有限公司 Mobile terminal-based face recognition method and system
CN105224921A (en) * 2015-09-17 2016-01-06 桂林远望智能通信科技有限公司 A facial image selection system and processing method
CN105224921B (en) * 2015-09-17 2018-08-07 桂林远望智能通信科技有限公司 A facial image selection system and processing method
CN107169397A (en) * 2016-03-07 2017-09-15 佳能株式会社 Feature point detecting method and device, image processing system and monitoring system
CN107169397B (en) * 2016-03-07 2022-03-01 佳能株式会社 Feature point detection method and device, image processing system and monitoring system
CN106650682A (en) * 2016-12-29 2017-05-10 Tcl集团股份有限公司 Method and device for face tracking
CN110415276A (en) * 2019-07-30 2019-11-05 北京字节跳动网络技术有限公司 Motion information calculation method, device and electronic equipment
CN110415276B (en) * 2019-07-30 2022-04-05 北京字节跳动网络技术有限公司 Motion information calculation method and device and electronic equipment

Similar Documents

Publication Publication Date Title
Wang et al. Automatic laser profile recognition and fast tracking for structured light measurement using deep learning and template matching
CN110147743B (en) Real-time online pedestrian analysis and counting system and method under complex scene
CN104574445B A target tracking method
CN104115192B Improvements in or relating to three-dimensional close interaction
CN104200495B A multi-object tracking method for video surveillance
CN103488972B (en) Fingertip Detection based on depth information
CN103514432A (en) Method, device and computer program product for extracting facial features
CN105809693A (en) SAR image registration method based on deep neural networks
CN103514441A (en) Facial feature point locating tracking method based on mobile platform
CN101853388B (en) Unchanged view angle behavior identification method based on geometric invariable
CN101924871A (en) Mean shift-based video target tracking method
CN102903109B An integrated segmentation and registration method for optical and SAR images
CN107798691B A vision-based real-time landmark detection and tracking method for autonomous UAV landing
CN106407958A (en) Double-layer-cascade-based facial feature detection method
Lin et al. Hand-raising gesture detection in real classroom
CN103886619A (en) Multi-scale superpixel-fused target tracking method
CN103886325A Block-based circulant matrix video tracking method
CN103761747B (en) Target tracking method based on weighted distribution field
CN103617636A (en) Automatic video-target detecting and tracking method based on motion information and sparse projection
CN101635028A (en) Image detecting method and image detecting device
Li et al. Adaptive and compressive target tracking based on feature point matching
Alcantarilla et al. Learning visibility of landmarks for vision-based localization
Chao-jian et al. Image target identification of UAV based on SIFT
CN103996207A (en) Object tracking method
Zhang et al. A LiDAR-intensity SLAM and loop closure detection method using an intensity cylindrical-projection shape context descriptor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20100127