WO2014112346A1 - 特徴点位置検出装置、特徴点位置検出方法および特徴点位置検出プログラム - Google Patents
Feature point position detection device, feature point position detection method, and feature point position detection program
- Publication number
- WO2014112346A1 (PCT/JP2014/000102)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature point
- point position
- target image
- feature
- initial information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- The present invention relates to a feature point position detection technique for detecting the positions of feature points such as the eyes and nose from an image such as a face.
- Feature point position detection is the detection of the positions of feature points of organs such as the eyes, nose, and mouth from an image such as a face, and is an important technology for high-accuracy face recognition and facial expression recognition.
- In the AAM (Active Appearance Model) of Non-Patent Document 1, a model of the texture and shape of a face is constructed by statistical methods from a plurality of face images and feature point position information input in advance for those images, and this model is fitted to an image containing the face to be detected. The feature point positions are then detected by repeatedly updating the model parameters so that the face image calculated from the model approaches the face image to be detected.
- Various enhancements have been proposed since the AAM was introduced, including methods that combine a plurality of models to handle profile faces, and improvements for higher speed and higher accuracy.
- As described in Non-Patent Document 2, it is known that the AAM is strongly influenced by the initial values (initial parameters) used when fitting the model.
- In Patent Document 2, the performance of feature point detection is improved by estimating the AAM parameters using a cylindrical head model.
- In Patent Document 1, in order to cope with changes in face orientation, an identification method that is robust to changes in face orientation is provided by rotating the face image.
- To obtain initial values, the head model of Non-Patent Document 2 or the face rotation of Patent Document 1 can be used. However, merely rotating the image provides only a small amount of additional information, so problems such as the model fitting falling into a local solution occur, and it is difficult to detect face feature point positions with high accuracy.
- The present invention has been made to solve the above problems. Its purpose is to enable high-accuracy feature point detection that prevents model fitting from falling into a local solution under various changes in facial images, such as expression variations, individual variations, and posture variations.
- A feature point position detection apparatus according to the present invention includes: feature point position initial information input means for inputting feature point position initial information from the outside according to a target image; feature point estimated position estimation means for estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; model parameter calculation means for obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and feature point position search means for searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- In a feature point position detection method according to the present invention, feature point position initial information is input from the outside according to a target image; feature point estimated positions of a desired number of points in the target image are estimated from the feature point position initial information and feature point position estimation dictionary information; a search parameter for a feature point position search of the target image is obtained from the feature point estimated positions; and the feature point positions of the target image are searched for and detected by performing parameter fitting of a model of the target image based on the search parameter.
- A feature point position detection program according to the present invention causes a feature point position detection device to execute: a process of inputting feature point position initial information from the outside according to a target image; a process of estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; a process of obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and a process of searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- According to the present invention, high-accuracy feature point detection that prevents model fitting from falling into a local solution is possible even under various changes such as expression variations, individual variations, and posture variations of a target image such as a face.
- FIG. 1 is a block diagram illustrating a configuration of a feature point position detection apparatus 1 that detects the position of a feature point of an image such as a face according to an embodiment of the present invention.
- the feature point position detection apparatus 1 of the present embodiment includes a data processing device 100 and a storage device 200.
- The data processing device 100 includes feature point position initial information input means 110 for inputting feature point position initial information of an image such as a face, feature point estimated position estimation means 120 for estimating feature point estimated positions, model parameter calculation means 130, and feature point position search means 140 for searching for feature point positions.
- the storage device 200 includes a feature point position estimation dictionary storage unit 210 that stores a feature point position estimation dictionary of an image such as a face.
- Feature point position initial information input means 110 inputs initial information of feature point positions from the outside according to the image 300 such as a face.
- the initial feature point position information is, for example, information on feature point positions such as eyes, nose, and mouth obtained by any external image feature point detection device.
- The feature point estimated position estimation means 120 estimates feature point estimated positions of a desired number of points in the target image 300 from the feature point position initial information input by the feature point position initial information input means 110, using the feature point position estimation dictionary stored in the feature point position estimation dictionary storage means 210.
- The model parameter calculation means 130 obtains a search parameter for the feature point position search based on the feature point estimated positions estimated by the feature point estimated position estimation means 120.
- the search parameter will be described in detail in the description of a more specific embodiment described later.
- The feature point position search means 140 performs parameter fitting of a model of, for example, the eyes, nose, and mouth of the image 300, using the search parameter obtained by the model parameter calculation means 130 as the initial value, and thereby searches for and detects the feature point position 310.
- FIG. 2 is a flowchart showing the operation of the feature point position detection apparatus 1 shown in FIG.
- the feature point position initial information input means 110 inputs the initial information of the feature point position from the outside according to the image 300 such as a face image (step S111).
- Next, the feature point estimated position estimation means 120 estimates feature point estimated positions of a desired number of points in the target image 300 from the initial information input in S111, using the feature point position estimation dictionary stored in the feature point position estimation dictionary storage means 210 (step S112).
- the model parameter calculation means 130 obtains a search parameter in the search for the feature point position based on the feature point estimated position estimated in S112 (step S113).
- Finally, the feature point position search means 140 searches for the feature point positions by performing model parameter fitting using the search parameter obtained in S113 as the initial value, and detects the feature point position 310 (step S114).
- In this way, the feature point position search can be started from an appropriate initial value of the model parameters in the feature point position search means 140, that is, from model parameters closer to the correct answer. As a result, it is possible to prevent the search from falling into a local solution and to detect feature point positions with high accuracy.
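The four steps S111 to S114 can be sketched as a minimal data-flow pipeline. This is an illustrative sketch only: the function names, the linear estimator, and the toy stand-ins below are assumptions made for demonstration, not the interface of the apparatus described here.

```python
import numpy as np

def detect_feature_points(initial_points, estimate, to_params, search):
    # S111: initial_points is the externally supplied initial information
    # (here 14 points, each an (x, y) pair).
    x = initial_points.reshape(-1)      # 14 points -> 28-dim vector
    y = estimate(x)                     # S112: dictionary-based estimate (150-dim)
    p0 = to_params(y)                   # S113: initial search parameter from the estimate
    return search(p0)                   # S114: model fitting starting from that value

# Toy stand-ins showing only the data flow (not real models):
W = np.ones((150, 28)) / 28.0
points = detect_feature_points(
    np.ones((14, 2)),
    estimate=lambda x: W @ x,
    to_params=lambda y: y[:10],
    search=lambda p: np.tile(p, 15).reshape(75, 2),
)
print(points.shape)  # (75, 2)
```

The key design point is that the search in S114 never sees the raw 14 initial points; it starts from the denser estimate produced in S112.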
- the storage device 200 is realized by, for example, a semiconductor memory or a hard disk.
- The feature point position initial information input means 110, feature point estimated position estimation means 120, model parameter calculation means 130, and feature point position search means 140 are realized by, for example, a CPU (Central Processing Unit) that executes processing under program control.
- the feature point position estimation dictionary storage unit 210 is realized by, for example, a semiconductor memory or a hard disk.
- Feature point position initial information input means 110 inputs initial feature point position information from the outside to feature point estimated position estimation means 120 according to image 300.
- The image 300 can be, for example, an image of a person, such as a face image, specified in advance.
- the initial information of the feature point position is, for example, the position (coordinates) of the feature points such as eyes, nose, and mouth, which can be obtained in advance from an external feature point detection device or the like.
- The coordinates of a feature point position represent the position of the feature point on the image to be processed by the feature point position detection apparatus 1, as a pair of two numbers, an x coordinate value and a y coordinate value, for each feature point.
- FIG. 3 is a diagram illustrating a face image 301 which is an example of an image 300 to be processed by the feature point position detection apparatus 1.
- FIG. 4 is a diagram in which face feature point position initial information 302 input by the feature point position initial information input unit 110 is superimposed on the face image 301.
- The face feature point position initial information 302 input by the feature point position initial information input means 110 is indicated by crosses.
- Crosses are attached to 14 points: both ends of the left and right eyebrows, the centers and both ends of the left and right eyes, the bottom of the nose, and both ends and the center of the mouth.
- The feature point estimated position estimation means 120 estimates face feature point estimated positions of a desired number of points for the target face image 301, using the feature point position estimation dictionary stored in the feature point position estimation dictionary storage means 210, from the face feature point position initial information 302 (here, the coordinate values of the face feature point positions) input from the feature point position initial information input means 110. In FIG. 5, the face feature point estimated positions 303 are shown superimposed on the face image 301 as crosses. The estimation of the desired number of face feature point estimated positions 303 can be performed by, for example, canonical correlation analysis. The desired number of points can be specified individually.
- Here, the case is shown in which the feature point estimated position estimation means 120 estimates the coordinate values of 75 face feature point estimated positions 303 by canonical correlation analysis from the coordinate values of the 14 points of face feature point position initial information 302 input from the feature point position initial information input means 110.
- Canonical correlation analysis is a technique for analyzing the correlation between two groups of multivariate data.
- Let x be the 28-dimensional vector in which the two-dimensional coordinate values of the 14 points of face feature point position initial information 302 are vertically arranged, and let y be the 150-dimensional vector in which the two-dimensional coordinate values of the 75 face feature point estimated positions 303 are vertically arranged; y is then obtained from x by (Equation 1).
- T in Equation 1 represents the transpose of a vector or matrix.
- U, V, and ⁇ are matrices obtained by canonical correlation analysis.
- U is the matrix for obtaining the canonical variables of the vector x; its size is 28 × r.
- V is the matrix for obtaining the canonical variables of the vector y; its size is 150 × r.
- Λ is the matrix having the squares of the canonical correlations as its diagonal components; its size is r × r.
- r is a positive integer no greater than the dimensions of x and y, and is here an arbitrary integer from 1 to 28.
- x0 is the 28-dimensional vector in which the mean values of the two-dimensional coordinate values of the 14 points of face feature point position initial information 302 are vertically arranged, and y0 is the 150-dimensional vector in which the mean values of the two-dimensional coordinate values of the 75 face feature point estimated positions 303 are vertically arranged.
- ⁇ , U, V, x 0 , y 0 are stored in the feature point position estimation dictionary storage unit 210.
- Next, the model parameter calculation means 130 obtains a search parameter for the face feature point position search based on the face feature point estimated positions 303 estimated by the feature point estimated position estimation means 120.
- the feature point estimated position estimation means 120 estimates the coordinate values of 75 face feature point estimated positions 303 from the coordinate values of 14 face feature point position initial information 302.
- Let y be the 150-dimensional vector in which the two-dimensional coordinate values of the 75 face feature point estimated positions 303 are vertically arranged, let S be the model of the facial shape used by the feature point position search means 140, let T be the model of the facial texture, and let A be the integrated model of the shape model S and the texture model T. The search parameter p is then calculated by (Equation 2).
- S(y) and T(y) in Equation 2 are functions that take y as an input and return the search parameters of the respective predefined models S and T, and A is a function that takes S(y) and T(y) as inputs and returns the search parameter according to the predefined model A.
- In an AAM (Active Appearance Model), the models S, T, and A are usually modeled as linear subspaces. Therefore, the matrices whose columns are the vectors spanning each subspace are also denoted S, T, and A.
- The matrix sizes of S, T, and A are 150 × r_s, (dimension of g(y)) × r_t, and (r_s + r_t) × r_a, respectively, where r_s, r_t, and r_a denote the rank of each model.
- The sizes of p_s, p_t, and p_a are r_s × 1, r_t × 1, and r_a × 1, respectively.
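Since the body of Equation 2 is also missing from this extraction, the following sketch adopts one plausible reading consistent with the stated matrix sizes: project y and g(y) onto the shape and texture subspaces, then project the concatenated coefficients onto the combined model A. The 400-dimensional texture and all matrices are random stand-ins, not the patent's learned models.

```python
import numpy as np

rng = np.random.default_rng(1)
rs, rt, ra = 10, 30, 15
S = rng.standard_normal((150, rs))       # shape-subspace basis, 150 x r_s
T = rng.standard_normal((400, rt))       # texture-subspace basis; g(y) is 400-dim here
A = rng.standard_normal((rs + rt, ra))   # combined model basis, (r_s + r_t) x r_a
y0 = rng.standard_normal(150)            # mean shape vector
g0 = rng.standard_normal(400)            # mean normalized texture vector

def search_parameter(y, g_of_y):
    # Assumed reading of Equation 2: project each measurement onto its
    # subspace, then onto the combined model A.
    p_s = S.T @ (y - y0)                 # shape parameters, r_s-dim
    p_t = T.T @ (g_of_y - g0)            # texture parameters, r_t-dim
    return A.T @ np.concatenate([p_s, p_t])   # combined parameter, r_a-dim

p = search_parameter(rng.standard_normal(150), rng.standard_normal(400))
print(p.shape)  # (15,)
```

With this reading, the mean shape and mean texture map to the zero parameter vector, which matches the usual convention for linear appearance models.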
- g(y) is a function that, from the 150-dimensional vector y in which the two-dimensional coordinate values of the 75 face feature point estimated positions 303 are vertically arranged, extracts a face image in which the position, size, rotation angle, and shape of the face on the two-dimensional image are normalized.
- The output of g is a vector in which the pixel values of the normalized face image are vertically arranged. For example, when the normalized face image is 100 pixels × 100 pixels, the output of g is a 10000-dimensional vector.
- The function g is known as an image warp.
- g0 is the average vector of g(y), obtained in advance from a plurality of face images and their feature point position information y.
- the feature point position search means 140 searches for a feature point position by performing model parameter fitting using the search parameter p obtained by the model parameter calculation means 130 as an initial value, and detects a feature point position.
- For the model parameter fitting, in the case of the AAM, for example, the method of Non-Patent Document 1 can be used.
- Specifically, using the models S, T, and A learned in advance by the AAM, the shape parameter p_s and the texture parameter p_t of the face are first obtained from the search parameter p and the model A.
- Next, the face image g_m estimated from the search parameter p is obtained using the texture parameter p_t and the model T.
- The model parameters are then updated using a matrix R learned in advance by the AAM.
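The fitting loop just described (obtain p_t from p and A, synthesize g_m from p_t and T, then update via the precomputed matrix R) can be sketched as below. The update rule p ← p − R·(g_sampled − g_m) follows the standard AAM regression formulation; `sample_texture` is a toy stand-in for warping the input image, and all matrices are random placeholders rather than learned models.

```python
import numpy as np

rng = np.random.default_rng(7)
rt, ra, d = 8, 5, 100                    # texture rank, combined rank, texture dim
A_t = rng.standard_normal((rt, ra))      # assumed block of A mapping p -> p_t
T = rng.standard_normal((d, rt))         # texture-subspace basis
g0 = rng.standard_normal(d)              # mean texture
R = rng.standard_normal((ra, d)) * 0.01  # update matrix, learned offline in AAM

def sample_texture(p):
    # Toy stand-in for g(y): "warp" the image by the shape implied by p.
    return g0 + 0.1 * np.tanh(T @ (A_t @ p))

def fit(p, iters=10):
    # Sketch of the iterative AAM search loop described above.
    for _ in range(iters):
        p_t = A_t @ p                    # texture parameters from p and the model A
        g_m = g0 + T @ p_t               # model texture from p_t and the model T
        residual = sample_texture(p) - g_m
        p = p - R @ residual             # parameter update via the precomputed R
    return p

p = fit(np.zeros(ra))
print(p.shape)  # (5,)
```

Because R is precomputed offline, each iteration costs only a few matrix-vector products, which is the efficiency argument usually made for this style of AAM fitting.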
- In this way, the feature point position search in the feature point position search means 140 can be performed from an appropriate initial value of the model parameters, that is, from model parameters closer to the correct answer. As a result, it is possible to detect feature point positions with high accuracy even under expression variations, individual variations, posture variations, and the like.
- The present invention enables high-precision specification of feature point positions not only in face images but also in other kinds of images.
- For example, instead of face feature point positions, initial information on feature point positions of a hand, such as the thumb, index finger, or nails, is detected in advance by other means and input by the feature point position initial information input means 110; the contours of the fingers, nails, and the like can then be detected.
- Using this, the type of an animal or plant can be specified, and the type of an artifact such as an automobile, a ship, an airplane, an electronic device, a building, or a painting can also be specified.
- For example, when initial information on the headlights as feature point positions of a predetermined type of automobile is detected in advance by other means and input by the feature point position initial information input means 110, the headlights of that automobile can be detected. Thereby, the type of automobile can be specified.
- The same applies to animals, plants, and other artifacts.
- (Appendix 1) A feature point position detection device comprising: feature point position initial information input means for inputting feature point position initial information from the outside according to a target image; feature point estimated position estimation means for estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; model parameter calculation means for obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and feature point position search means for searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- (Appendix 2) The feature point position detection device according to Appendix 1, wherein the feature point estimated positions are more numerous than the feature point position initial information.
- (Appendix 3) The feature point position detection device according to Appendix 1 or 2, comprising feature point position estimation dictionary storage means for storing the feature point position estimation dictionary information.
- (Appendix 4) The feature point position detection device according to any one of Appendices 1 to 3, wherein the target image contains a human body.
- (Appendix 5) The feature point position detection device according to Appendix 4, wherein the feature point positions of the human body include information on the eyes, nose, or mouth of a face.
- (Appendix 6) A feature point position detection method comprising: inputting feature point position initial information from the outside according to a target image; estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- (Appendix 7) The feature point position detection method according to Appendix 6, wherein the feature point estimated positions are more numerous than the feature point position initial information.
- (Appendix 8) The feature point position detection method according to Appendix 6 or 7, wherein a stored feature point position estimation dictionary is used as the feature point position estimation dictionary information.
- (Appendix 9) The feature point position detection method according to any one of Appendices 6 to 8, wherein the target image contains a human body.
- (Appendix 10) The feature point position detection method according to Appendix 9, wherein the feature point positions of the human body include information on the eyes, nose, or mouth of a face.
- (Appendix 11) A feature point position detection program causing a feature point position detection device to execute: a process of inputting feature point position initial information from the outside according to a target image; a process of estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; a process of obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and a process of searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- (Appendix 12) The feature point position detection program according to Appendix 11, wherein the feature point estimated positions are more numerous than the feature point position initial information.
- (Appendix 13) The feature point position detection program according to Appendix 11 or 12, wherein a stored feature point position estimation dictionary is used as the feature point position estimation dictionary information.
- (Appendix 14) The feature point position detection program according to any one of Appendices 11 to 13, wherein the target image contains a human body.
- (Appendix 15) The feature point position detection program according to Appendix 14, wherein the feature point information of the human body includes information on the eyes, nose, or mouth of a face.
- (Appendix 16) A feature point position detection device comprising: feature point position initial information input means for inputting feature point position initial information from the outside according to a target image; feature point estimated position estimation means for estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; and feature point position search means for starting a search for feature point positions from the feature point estimated positions of the desired number of points.
- (Appendix 17) A feature point position detection method comprising: inputting feature point position initial information from the outside according to a target image; estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; and starting a search for feature point positions from the feature point estimated positions of the desired number of points.
- (Appendix 18) A feature point position detection program causing a feature point position detection device to execute: a process of inputting feature point position initial information from the outside according to a target image; a process of estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; and a process of starting a search for feature point positions from the feature point estimated positions of the desired number of points.
- The present invention relates to a feature point position detection technique for detecting the positions of feature points such as the eyes and nose from an image such as a face, and can be used for face authentication and facial expression recognition.
100 Data processing device
110 Feature point position initial information input means
120 Feature point estimated position estimation means
130 Model parameter calculation means
140 Feature point position search means
200 Storage device
210 Feature point position estimation dictionary storage means
300 Image
301 Face image
302 Face feature point position initial information
303 Face feature point estimated position
310 Face feature point position
Claims (13)
- A feature point position detection device comprising: feature point position initial information input means for inputting feature point position initial information from the outside according to a target image; feature point estimated position estimation means for estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; model parameter calculation means for obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and feature point position search means for searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- The feature point position detection device according to claim 1, wherein the feature point estimated positions are more numerous than the feature point position initial information.
- The feature point position detection device according to claim 1 or 2, comprising feature point position estimation dictionary storage means for storing the feature point position estimation dictionary information.
- The feature point position detection device according to any one of claims 1 to 3, wherein the target image contains a human body.
- The feature point position detection device according to claim 4, wherein the feature point positions of the human body include information on the eyes, nose, or mouth of a face.
- A feature point position detection method comprising: inputting feature point position initial information from the outside according to a target image; estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- The feature point position detection method according to claim 6, wherein the feature point estimated positions are more numerous than the feature point position initial information.
- The feature point position detection method according to claim 6 or 7, wherein a stored feature point position estimation dictionary is used as the feature point position estimation dictionary information.
- A feature point position detection program causing a feature point position detection device to execute: a process of inputting feature point position initial information from the outside according to a target image; a process of estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; a process of obtaining a search parameter for a feature point position search of the target image from the feature point estimated positions; and a process of searching for and detecting the feature point positions of the target image by performing parameter fitting of a model of the target image based on the search parameter.
- The feature point position detection program according to claim 9, wherein the feature point estimated positions are more numerous than the feature point position initial information.
- A feature point position detection device comprising: feature point position initial information input means for inputting feature point position initial information from the outside according to a target image; feature point estimated position estimation means for estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; and feature point position search means for starting a search for feature point positions from the feature point estimated positions of the desired number of points.
- A feature point position detection method comprising: inputting feature point position initial information from the outside according to a target image; estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; and starting a search for feature point positions from the feature point estimated positions of the desired number of points.
- A feature point position detection program causing a feature point position detection device to execute: a process of inputting feature point position initial information from the outside according to a target image; a process of estimating feature point estimated positions of a desired number of points in the target image from the feature point position initial information and feature point position estimation dictionary information; and a process of starting a search for feature point positions from the feature point estimated positions of the desired number of points.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480004808.5A CN104919492A (zh) | 2013-01-15 | 2014-01-14 | Feature point position detection device, feature point position detection method, and feature point position detection program |
JP2014557390A JP6387831B2 (ja) | 2013-01-15 | 2014-01-14 | Feature point position detection device, feature point position detection method, and feature point position detection program |
US14/759,155 US20150356346A1 (en) | 2013-01-15 | 2014-01-14 | Feature point position detecting appararus, feature point position detecting method and feature point position detecting program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-004228 | 2013-01-15 | ||
JP2013004228 | 2013-01-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014112346A1 true WO2014112346A1 (ja) | 2014-07-24 |
Family
ID=51209443
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/000102 WO2014112346A1 (ja) | 2014-01-14 | Feature point position detection device, feature point position detection method, and feature point position detection program |
Country Status (4)
Country | Link |
---|---|
US (1) | US20150356346A1 (ja) |
JP (1) | JP6387831B2 (ja) |
CN (1) | CN104919492A (ja) |
WO (1) | WO2014112346A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021039403A1 (ja) * | 2019-08-30 | 2021-03-04 | Omron Corporation | Face orientation estimation device and method |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016030305A1 (en) * | 2014-08-29 | 2016-03-03 | Thomson Licensing | Method and device for registering an image to a model |
WO2018033137A1 (zh) * | 2016-08-19 | 2018-02-22 | 北京市商汤科技开发有限公司 | Method, apparatus, and electronic device for displaying a business object in a video image |
CN107194980A (zh) * | 2017-05-18 | 2017-09-22 | 成都通甲优博科技有限责任公司 | Face model construction method, apparatus, and electronic device |
US11521460B2 (en) | 2018-07-25 | 2022-12-06 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
AU2019208182B2 (en) | 2018-07-25 | 2021-04-08 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
CN114627147B (zh) * | 2022-05-16 | 2022-08-12 | 青岛大学附属医院 | Automatic craniofacial landmark recognition method based on multi-threshold image segmentation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006051607A1 (ja) * | 2004-11-12 | 2006-05-18 | Omron Corporation | Face feature point detection device and feature point detection device |
JP2010231354A (ja) * | 2009-03-26 | 2010-10-14 | KDDI Corp | Face recognition device and method for identifying feature points of facial organs |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4745207B2 (ja) * | 2006-12-08 | 2011-08-10 | Toshiba Corporation | Facial feature point detection device and method |
JP4946730B2 (ja) * | 2007-08-27 | 2012-06-06 | Sony Corporation | Face image processing device, face image processing method, and computer program |
JP4951498B2 (ja) * | 2007-12-27 | 2012-06-13 | Nippon Telegraph and Telephone Corporation | Face image recognition device, face image recognition method, face image recognition program, and recording medium storing the program |
-
2014
- 2014-01-14 US US14/759,155 patent/US20150356346A1/en not_active Abandoned
- 2014-01-14 CN CN201480004808.5A patent/CN104919492A/zh active Pending
- 2014-01-14 WO PCT/JP2014/000102 patent/WO2014112346A1/ja active Application Filing
- 2014-01-14 JP JP2014557390A patent/JP6387831B2/ja active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006051607A1 (ja) * | 2004-11-12 | 2006-05-18 | Omron Corporation | Face feature point detection device and feature point detection device |
JP2010231354A (ja) * | 2009-03-26 | 2010-10-14 | KDDI Corp | Face recognition device and method for identifying feature points of facial organs |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021039403A1 (ja) * | 2019-08-30 | 2021-03-04 | Omron Corporation | Face orientation estimation device and method |
JP2021039420A (ja) * | 2019-08-30 | 2021-03-11 | Omron Corporation | Face orientation estimation device and method |
JP7259648B2 (ja) | 2019-08-30 | 2023-04-18 | Omron Corporation | Face orientation estimation device and method |
Also Published As
Publication number | Publication date |
---|---|
US20150356346A1 (en) | 2015-12-10 |
JP6387831B2 (ja) | 2018-09-12 |
CN104919492A (zh) | 2015-09-16 |
JPWO2014112346A1 (ja) | 2017-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6387831B2 (ja) | Feature point position detection device, feature point position detection method, and feature point position detection program | |
Cristinacce et al. | Boosted regression active shape models. | |
US20180365515A1 (en) | Edge-based recognition, systems and methods | |
US11017210B2 (en) | Image processing apparatus and method | |
US9275273B2 (en) | Method and system for localizing parts of an object in an image for computer vision applications | |
JP5772821B2 (ja) | Face feature point position correction device, face feature point position correction method, and face feature point position correction program | |
US8971572B1 (en) | Hand pointing estimation for human computer interaction | |
US7995805B2 (en) | Image matching apparatus, image matching method, computer program and computer-readable storage medium | |
WO2017088432A1 (zh) | Image recognition method and device | |
US9443325B2 (en) | Image processing apparatus, image processing method, and computer program | |
JP4951498B2 (ja) | Face image recognition device, face image recognition method, face image recognition program, and recording medium storing the program | |
JP2017506379A5 (ja) | ||
JP2007004767A (ja) | Image recognition device, method, and program | |
JP6071002B2 (ja) | Reliability acquisition device, reliability acquisition method, and reliability acquisition program | |
JP2016099982A (ja) | Action recognition device, action learning device, method, and program | |
US8971613B2 (en) | Image processing learning device, image processing learning method, and image processing learning program | |
KR20150127381A (ko) | 얼굴 특징점 추출 방법 및 이를 수행하는 장치 | |
JP2012221061A (ja) | Image recognition device, image recognition method, and program | |
Yang et al. | Face sketch landmarks localization in the wild | |
Bhuyan et al. | Trajectory guided recognition of hand gestures having only global motions | |
Haase et al. | Instance-weighted transfer learning of active appearance models | |
Quan et al. | Statistical shape modelling for expression-invariant face analysis and recognition | |
Cong et al. | Improved explicit shape regression face alignment algorithm | |
Lee et al. | Style adaptive contour tracking of human gait using explicit manifold models | |
Fan et al. | 3D hand skeleton model estimation from a depth image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14740739 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014557390 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14759155 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14740739 Country of ref document: EP Kind code of ref document: A1 |