CN103562964A - Image processing device, information generation device, image processing method, information generation method, control program, and recording medium - Google Patents


Info

Publication number
CN103562964A
CN103562964A (application CN201280025429.5A)
Authority
CN
China
Prior art keywords
point
positional information
image
reference point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201280025429.5A
Other languages
Chinese (zh)
Other versions
CN103562964B (en)
Inventor
入江淳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Corp
Publication of CN103562964A
Application granted
Publication of CN103562964B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/174: Facial expression recognition
    • G06V40/175: Static expression

Abstract

A feature extraction unit (26) extracts, at each of a plurality of sampling points corresponding to a reference point for a position point in an image, a feature value from the pixel or pixel group of the sampling point, thereby extracting a feature value group corresponding to the reference point. A positional information identification unit (29) refers to an LRF function, which represents the correspondence between the feature value group corresponding to the reference point and positional information representing the relative position of the position point with respect to the reference point, and identifies the positional information corresponding to the feature value group extracted by the feature extraction unit (26). A position point identification unit (30) takes the position represented by the positional information identified by the positional information identification unit (29) as a position point of the object.

Description

Image processing device, information generation device, image processing method, information generation method, control program, and recording medium
Technical field
The present invention relates to an image processing device, an information generation device, an image processing method, an information generation method, a control program, and a recording medium for detecting position points of an object, such as the contour points and feature points of an eye, a mouth, or the like.
Background art
Techniques for detecting the contours of eyes and mouths from face images can be applied as front-end processing for face authentication, expression estimation, portrait generation, and similar applications, and have therefore long been studied intensively.
For example, Patent Document 1 describes a technique in which a search range for an eye, a mouth, or the like is set around a user-specified center point of the eye, mouth, or the like, the search range is scanned, and the eye region, mouth region, and so on are extracted based on color components and the like. Patent Document 1 also describes a technique in which the left and right end points of the extracted eye region, mouth region, and so on are determined, search ranges for the upper and lower end points of the eye region, mouth region, and so on are set based on the left and right end points, and the upper and lower end points are extracted.
Patent Document 2 describes a technique in which, when extracting the contour of an eye, the left and right end points of the eye are taken as reference points, a dynamic contour model is fitted based on the reference points, and the eye contour is extracted by an energy minimization method.
Further, as methods for detecting the contours of eyes and mouths from a face image, there are fitting methods based on shape and texture models. Specifically, there are fitting methods such as ASM (Active Shape Model), AAM (Active Appearance Model), and ASAM (Active Structure Appearance Model), described in Non-Patent Documents 1 and 2 and Patent Documents 3 and 4.
The shape models of ASM, AAM, and ASAM represent the shape and texture of a face with few parameters. These models apply principal component analysis (PCA) to facial feature point coordinates and texture information and represent the facial feature point coordinates using only those of the resulting basis vectors that have large eigenvalues. This not only allows the face shape to be represented with little data but also guarantees that the constraint conditions on the face shape are maintained. ASM and AAM fit the model to a face image by energy minimization, while ASAM fits the model to a face image by error calculation, and the facial feature point coordinates are thereby detected.
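For illustration only, the following is a minimal sketch of the PCA-based shape representation shared by these models, under our own assumptions about data layout (landmark coordinates stacked into vectors); it is not code from the cited documents.

```python
import numpy as np

def fit_pca_shape_model(shapes, variance_to_keep=0.95):
    """shapes: (num_samples, 2*num_landmarks) array of stacked (x, y) landmark coords."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Principal component analysis via SVD of the centered data matrix.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = singular_values ** 2 / (len(shapes) - 1)
    ratio = np.cumsum(eigenvalues) / eigenvalues.sum()
    k = int(np.searchsorted(ratio, variance_to_keep)) + 1
    basis = vt[:k]                      # keep only basis vectors with large eigenvalues
    return mean_shape, basis

def reconstruct(mean_shape, basis, params):
    """A shape is the mean plus a small number of parameters times the basis."""
    return mean_shape + params @ basis
```

Keeping only the leading basis vectors is what both compresses the representation and enforces the face-shape constraint mentioned above.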
Prior art documents
Patent documents
Patent Document 1: Japanese Unexamined Patent Publication No. H9-6964 (published January 10, 1997)
Patent Document 2: Japanese Unexamined Patent Publication No. 2005-339288 (published December 8, 2005)
Patent Document 3: Japanese Patent No. 4093273 (issued June 4, 2008)
Patent Document 4: Japanese Patent No. 4501937 (issued July 14, 2010)
Non-patent documents
Non-Patent Document 1: T.F. Cootes et al., "Active Shape Models - Their Training and Application", CVIU, Vol. 61, No. 1, pp. 38-59, 1995
Non-Patent Document 2: T.F. Cootes et al., "Active Appearance Models", ECCV '98, Vol. II, Freiburg, Germany, 1998
Summary of the invention
Problems to be solved by the invention
The expression of a face changes in various ways with the shape of the mouth, the shape of the eyes, and combinations of these shapes, and there are many possible deformation states. It is therefore difficult to fully predict the shape states into which objects such as eyes and mouths, which deform into various shapes, may change. Consequently, the above prior art has difficulty accurately detecting the contour points of objects whose shapes change greatly, such as the contours of eyes and mouths.
Specifically, with the technique described in Patent Document 1, if the shape of the eye, mouth, or the like changes unexpectedly and the contour of the eye, mouth, or the like falls outside the search range, the contour cannot be detected accurately. On the other hand, if a wide search range is set to cover the various shapes of mouths and eyes, the technique of Patent Document 1 must detect by scanning within that search range, so the processing load becomes very large. Setting a wide search range is therefore impractical with the technique of Patent Document 1, and the technique has difficulty accurately detecting the contour points of objects whose shapes change greatly.
Similarly, with the technique described in Patent Document 2, if the shape of the object deviates from the dynamic contour model used, extracting the object's contour takes a very long time, or the contour cannot be extracted accurately. On the other hand, if various models are prepared to cover the various shapes of mouths and eyes, the extraction accuracy improves, but the amount of data the device must store in advance becomes large, or the processing load becomes large. Preparing various models is therefore impractical with the technique of Patent Document 2, and the technique also has difficulty accurately detecting the contour points of objects whose shapes change greatly.
ASM and AAM also have the drawback that the search processing takes considerable computation time. Furthermore, AAM must prepare a shape model for each individual person and therefore has the problem of low accuracy when fitting other people's faces.
Compared with ASM and AAM, ASAM achieves high speed and high accuracy. Because ASAM uses the face shape as a constraint condition, it can obtain highly accurate detection results for faces with little expression change. However, ASAM cannot accurately detect expressions with very large changes in shape state, such as the open and closed states of the mouth and eyes. This is because the face shape model used by ASAM is a global model representing the shape of the face as a whole, and it cannot accurately capture changes at individual parts such as the eyes and mouth, for example opening and closing or other shape changes.
The present invention was made in view of the above problems, and its object is to realize an image processing device, an information generation device, an image processing method, an information generation method, a control program, and a recording medium that can accurately detect the contour of an object in an image even when the object deforms into various shapes.
Means for solving the problems
To solve the above problems, an image processing device of the present invention detects a position point of an object from an image, and includes: a reference point determination unit that determines, on the image, a reference point corresponding to the position point; a feature extraction unit that, for each of a plurality of sampling points corresponding to the reference point, extracts a feature value from the pixel of that sampling point or from a pixel group containing that pixel, thereby extracting a feature value group that corresponds to the reference point and consists of a plurality of feature values respectively corresponding to the extracted sampling points; a positional information identification unit that refers to correspondence information and identifies the positional information corresponding to the feature value group extracted by the feature extraction unit, the correspondence information representing the correspondence between a feature value group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampling points, and positional information representing the relative position of the position point with respect to the reference point; and a detection-side position point identification unit that takes the position represented by the positional information identified by the positional information identification unit as the position point of the object.
To solve the above problems, an image processing method of the present invention detects a position point of an object from an image, and includes: a reference point determination step of determining, on the image, a reference point corresponding to the position point; a feature extraction step of extracting, for each of a plurality of sampling points corresponding to the reference point, a feature value from the pixel of that sampling point or from a pixel group containing that pixel, thereby extracting a feature value group that corresponds to the reference point and consists of a plurality of feature values respectively corresponding to the extracted sampling points; a positional information identification step of referring to correspondence information and identifying the positional information corresponding to the feature value group extracted in the feature extraction step, the correspondence information representing the correspondence between a feature value group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampling points, and positional information representing the relative position of the position point with respect to the reference point; and a position point identification step of taking the position represented by the positional information identified in the positional information identification step as the position point of the object.
According to the above configuration, the positional information identification unit refers to the correspondence information, which represents the correspondence between feature value groups and positional information, and identifies the positional information corresponding to the feature value group extracted by the feature extraction unit; here, the feature value group is the feature value group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampling points, and the positional information represents the relative position of the position point with respect to the reference point. The detection-side position point identification unit then takes the position represented by the identified positional information as the position point of the object.
The present inventors discovered that, in an image, there is a correlation between the feature value group extracted from a region containing an organ such as an eye or a mouth and the relative positions, with respect to a reference point on the image, of the organ's contour points and feature points. Based on this finding, by referring to correspondence information representing the correspondence between the feature value group and the positional information, the position points of an object in an image can be detected accurately even for an object whose shape changes. That is, the image processing device and image processing method described above have the effect of being able to detect the position points of an object accurately even when the object's shape changes.
Effects of the invention
As described above, an image processing device of the present invention includes: a reference point determination unit that determines, on the image, a reference point corresponding to the position point; a feature extraction unit that, for each of a plurality of sampling points corresponding to the reference point, extracts a feature value from the pixel of that sampling point or from a pixel group containing that pixel, thereby extracting a feature value group that corresponds to the reference point and consists of a plurality of feature values respectively corresponding to the extracted sampling points; a positional information identification unit that refers to correspondence information and identifies the positional information corresponding to the feature value group extracted by the feature extraction unit, the correspondence information representing the correspondence between a feature value group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampling points, and positional information representing the relative position of the position point with respect to the reference point; and a detection-side position point identification unit that takes the position represented by the positional information identified by the positional information identification unit as the position point of the object.
Likewise, an image processing method of the present invention includes: a reference point determination step of determining, on the image, a reference point corresponding to the position point; a feature extraction step of extracting, for each of a plurality of sampling points corresponding to the reference point, a feature value from the pixel of that sampling point or from a pixel group containing that pixel, thereby extracting a feature value group that corresponds to the reference point and consists of a plurality of feature values respectively corresponding to the extracted sampling points; a positional information identification step of referring to correspondence information and identifying the positional information corresponding to the feature value group extracted in the feature extraction step, the correspondence information representing the correspondence between a feature value group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampling points, and positional information representing the relative position of the position point with respect to the reference point; and a position point identification step of taking the position represented by the positional information identified in the positional information identification step as the position point of the object.
Therefore, the image processing device and the image processing method described above have the effect of being able to detect the position points of an object accurately even when the object's shape changes.
Brief description of the drawings
Fig. 1 is a block diagram showing the main configuration of a position point detection device according to an embodiment of the present invention.
Fig. 2 is a schematic diagram showing staged fitting.
Fig. 3 is a block diagram showing the main configuration of an LRF learning device according to an embodiment of the present invention.
Fig. 4 is a schematic diagram showing a reference point determination method and a positional information generation method.
Fig. 5 is a schematic diagram showing a sampling position determination method and a feature extraction method.
Fig. 6 is a schematic diagram showing the LRF function representing the correlation between positional information and feature value groups.
Fig. 7 is a diagram showing an example of the LRF information, containing LRF functions, stored in the storage unit of the LRF learning device.
Fig. 8 is a diagram showing the correspondence between a feature value group as input data of the LRF function and positional information as output data.
Fig. 9 is a diagram showing an example of the LRF learning method executed by the LRF learning device.
Fig. 10 is a transition diagram schematically showing, with images, the states of the respective processes included in the LRF learning method.
Fig. 11 is a diagram showing an example of the position point detection method executed by the position point detection device.
Fig. 12 is a transition diagram schematically showing, with images, the states of the respective processes included in the position point detection method.
Embodiment
(summary of the present invention)
The present inventors discovered that, in an image, there is a correlation between the feature value group extracted from a region containing an organ such as an eye or a mouth and the positions, with a reference point on the image as the origin, of the organ's contour points and feature points. Based on this finding, they generated, by regression analysis, a model representing the correspondence between the feature value group and the positions, and invented a detection method that uses this model.
By using this detection method, not only expressions assumed in advance but also faces and individual organs under various conditions, such as expressions with the eyes or mouth opened or closed to an extreme degree, can be detected accurately. Hereinafter, the detection method invented by the present inventors is called the LRF (Local Regression Fitting) detection method, and the learning method for generating the above model is called the LRF learning method.
The present inventors further propose a method in which the LRF detection method is combined with an existing global fitting method that captures the overall shape of the face, as the best approach for accurately detecting the face and each organ. Specifically, they propose staged fitting, which combines global fitting and local fitting (the LRF detection method): global fitting is fitting that captures the overall face shape using a global model based on a learning method such as ASAM, and local fitting is fitting that captures the detailed shape of each facial organ using partial models, based on the LRF learning method, for the individual organs of the face.
In more detail, as shown in Fig. 2, in staged fitting the temples (two places), the inner and outer ends of the left and right eyebrows, the inner and outer corners of both eyes, the nostrils (two places), the mouth corners (two places), the chin, and so on are first detected by global fitting. Then, the contour points of the facial outline, eyebrows, eyes, nose, and mouth are detected by the LRF detection method. Based on the points detected by global fitting and by the LRF detection method, the contours of the face and each organ are detected.
Thus, the contours of the face can be detected accurately even for expressions that the global model cannot represent. Moreover, by adopting this staged structure, large false detections caused by global fitting can be reduced, and even for face images with expression changes, the facial contour feature points can be detected accurately by local fitting.
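As a rough illustration of this staged structure (not an implementation from the patent), the flow can be sketched as follows, assuming a global fitter and per-organ LRF detectors exist under the hypothetical names used here:

```python
def staged_fitting(image, global_model, lrf_detectors):
    # Stage 1: global fitting captures the rough overall face shape
    # (temples, brow ends, eye corners, nostrils, mouth corners, chin, ...).
    anchor_points = global_model.fit(image)

    # Stage 2: local fitting (LRF detection) refines each organ separately,
    # using the globally detected points to place each reference point.
    contours = {}
    for organ, detector in lrf_detectors.items():   # e.g. "right_eye", "mouth"
        contours[organ] = detector.detect(image, anchor_points)
    return anchor_points, contours
```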
Hereinafter, based on Figs. 1 to 16, a position point detection device (image processing device) and an LRF learning device (information generation device) according to an embodiment of the present invention are described. The position point detection device detects position points, such as contour points and feature points, of an object in an image using the LRF detection method, and the LRF learning device generates the model using the LRF learning method. In the following, the position point detection device and the LRF learning device are described as separate devices, but the two may also be integrated into one device.
(structure of LRF learning device)
First, the LRF learning device is described based on Fig. 3. The LRF learning device is a device that generates LRF functions (correspondence information). An LRF function represents, for an image obtained from another device or captured by a camera mounted on this device, the correspondence between the relative position of a position point of an object with respect to a reference point on the image and the feature value group extracted from the image at prescribed positions based on that position point.
The LRF learning device may be, for example, a PC (personal computer), a digital camera, a mobile phone, a PDA (Personal Digital Assistant), a game machine, a device that takes and prints photographs, a device that edits images, or the like.
In the present embodiment, the objects whose position points are subject to learning of the above correspondence are human eyes, mouths, and the like, but the objects are not limited to these. For example, they may be the faces or organs of animals such as dogs and cats, or mobile phones, television sets, buildings, clouds, and so on.
A position point of an object is a point within the region of the object in an image. Specifically, when the object is an eye, for example, the position points are the contour points of the eye, the pupil, and so on. Here, a position point for which the LRF learning device learns the above correspondence is called a learning target point, and an object having a learning target point is called a learning target object.
Fig. 3 is a block diagram showing an example of the main configuration of the LRF learning device 2. As shown in Fig. 3, the LRF learning device 2 includes a control unit 16, a storage unit 17, an image input unit 13, an operation unit (input unit) 14, and a display unit 15. The LRF learning device 2 may further include a communication unit for communicating with other devices, a voice input unit, a voice output unit, and other such components, but these are not shown because they are unrelated to the characteristic points of the invention.
The image input unit 13 receives images from an external image providing device (not shown). The image providing device may be any device that provides stored or acquired images to other devices; for example, a digital camera, a PC, a mobile phone, a PDA, a game machine, a digital television, or a storage device such as a USB (Universal Serial Bus) memory. The LRF learning device 2 may also be equipped with a camera in place of the image input unit 13.
The operation unit 14 is for the user to input instruction signals to the LRF learning device 2 and thereby operate it. The operation unit 14 may be composed of input devices such as a keyboard, a mouse, a keypad, and operation buttons. The operation unit 14 and the display unit 15 may also be formed integrally as a touch screen. The operation unit 14 may further be a remote control device, such as a remote controller, separate from the LRF learning device 2.
The display unit 15 displays images in accordance with instructions from the control unit 16. Any display that shows images according to the instructions of the control unit 16 may be used; for example, an LCD (liquid crystal display), an OLED display, or a plasma display may be applied.
The control unit 16 performs various computations by executing programs read from the storage unit 17 into a temporary storage unit (not shown), and centrally controls the units of the LRF learning device 2.
In the present embodiment, the control unit 16 includes, as functional blocks: an image acquisition unit 21, a region clipping unit 22, a reference point determination unit 23, a position point determination unit (learning-side position point determination unit) 24, a sampling position determination unit 25, a feature extraction unit 26, a positional information generation unit 27, and an LRF function calculation unit (correspondence information generation unit) 28. These functional blocks (21 to 28) of the control unit 16 are realized by a CPU (central processing unit) reading programs stored in a storage device realized as a ROM (read-only memory) or the like into a temporary storage unit realized as a RAM (random access memory) or the like and executing them.
The image acquisition unit 21 acquires images input via the image input unit 13 and outputs the acquired images to the region clipping unit 22. When images are stored in the storage unit 17, the image acquisition unit 21 may also read the images from the storage unit 17.
Based on a prescribed learning target region image extraction method, the region clipping unit 22 extracts from the acquired image a learning target region image, which is an image of the region containing the learning target point. The region clipping unit 22 also normalizes the extracted learning target region image based on a prescribed normalization method to generate a normalized image, and outputs the generated normalized image to the reference point determination unit 23, the position point determination unit 24, and the sampling position determination unit 25.
Specifically, when the learning target object is an "eye" or a "mouth", for example, the region clipping unit 22 extracts a face image from the acquired image, deforms the extracted face image into an image of, for example, 100 × 100 pixels, and thereby generates the normalized image.
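As an illustrative sketch only (the extraction and normalization methods are left arbitrary by the text), the clipping and resizing step could look like the following, assuming the face region is already available as a bounding box and using OpenCV:

```python
import cv2

def make_normalized_image(image, face_box, size=(100, 100)):
    """face_box: (x, y, w, h) of the learning target region (here, the face)."""
    x, y, w, h = face_box
    region = image[y:y + h, x:x + w]          # clip the learning target region image
    return cv2.resize(region, size)           # normalize to a fixed size, e.g. 100 x 100
```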
Here, the learning target region image extraction method and the normalization method need only be prescribed in advance for each position point (learning target point) of an object; the specific methods may be arbitrary. In the following, the image from which a normalized image originates, that is, the image acquired by the image acquisition unit 21, is called the original image.
The reference point determination unit 23 acquires the normalized image from the region clipping unit 22 and, based on a prescribed reference point determination method, determines a prescribed point in the acquired normalized image as the reference point. The reference point determination unit 23 outputs to the positional information generation unit 27 the reference coordinates, which are the coordinates of the determined reference point in the normalized image.
Specifically, as shown in Fig. 4, when the learning target object is an "eye", for example, the reference point determination unit 23 determines the center point of the eye in the normalized image as the reference point. In this case, the reference point determination unit 23 may display the normalized image on the display unit 15, instruct the user to specify the center point of the eye, and take the point specified by the user as the reference point. Alternatively, the reference point determination unit 23 may take as the reference point the midpoint of the inner eye corner point and the outer eye corner point determined by the region clipping unit 22 when the face image was extracted. The reference point determination unit 23 may also refer to metadata (reference point position information) associated with the original image and determine the reference point, using an affine transformation or the like, from the position of the eye's center point indicated by the metadata. In this case, before the LRF learning device 2 performs learning, the position of the center point of the eye is confirmed in advance on each original image, and metadata containing information indicating the position of the confirmed center point is associated with the original image. The metadata may also contain, instead of information indicating the position of the eye's center point, information for determining that position (for example, the inner eye corner point and the outer eye corner point).
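A small sketch of two of the reference point determination options just described, assuming points are (x, y) pairs and that the metadata point is mapped into the normalized image by a 2 × 3 affine matrix (our convention):

```python
import numpy as np

def reference_from_corners(inner_corner, outer_corner):
    """Midpoint of the inner and outer eye corner points, as an (x, y) pair."""
    return ((inner_corner[0] + outer_corner[0]) / 2.0,
            (inner_corner[1] + outer_corner[1]) / 2.0)

def reference_from_metadata(point_in_original, affine_2x3):
    """Map a point given in original-image metadata into normalized-image coords."""
    x, y = point_in_original
    return tuple(affine_2x3 @ np.array([x, y, 1.0]))
```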
The reference point may be any point in the normalized image. That is, when the learning target object is an "eye", for example, the inner eye corner point or the outer eye corner point may be used as the reference point, the center point of the face (the center point of the normalized image) may be used, or the upper-left end point of the normalized image or the like may be used.
The reference point determination method need only be prescribed in advance for each position point (learning target point) of an object; the specific method may be arbitrary.
The position point determination unit 24 acquires the normalized image from the region clipping unit 22 and determines the learning target point in the acquired normalized image based on the user's instructions input from the operation unit 14. The position point determination unit 24 outputs to the positional information generation unit 27 the position point coordinates, which are the coordinates of the determined learning target point in the normalized image.
Specifically, when the learning target point is, for example, the "upper eyelid point", which is a contour point of the eye, the position point determination unit 24 displays the normalized image on the display unit 15, instructs the user to specify the upper eyelid point of the eye, and determines the point specified by the user as the learning target point. The position point determination unit 24 may also refer to metadata (position point position information) associated with the original image and determine the learning target point, using an affine transformation or the like, from the position of the upper eyelid point indicated by the metadata. In this case, before the LRF learning device 2 performs learning, the position of the upper eyelid point of the eye is determined in advance on each original image, and metadata containing information indicating the position of the determined upper eyelid point is associated with the original image.
In the example shown in Fig. 4, in addition to the upper eyelid point, the lower eyelid point, the inner eye corner point, and the outer eye corner point may also be determined as learning target points. The upper eyelid point is the apex of the upper arc formed by the contour of the eye, and the lower eyelid point is the apex of the lower arc formed by the contour of the eye.
The sampling position determination unit 25 acquires the normalized image from the region clipping unit 22 and, based on a prescribed sampling position determination method, determines a plurality of sampling points corresponding to the reference point (position point) within a prescribed range in the normalized image. This prescribed range is here called the sampling range.
The sampling position determination method need only be prescribed in advance for each position point (learning target point) of an object; it may be any method.
Specifically, the sampling points determined by the sampling position determination unit 25 may be any points within the sampling range. For example, the sampling position determination unit 25 may take all pixels within the sampling range as the sampling points. It may also select pixels within the sampling range regularly or irregularly and take the selected pixels as the sampling points. It may further divide the sampling range into a plurality of blocks and take the center points of the blocks as the sampling points.
The sampling range may be any range in the normalized image that contains the region where the learning target point can be expected to be. For example, a range of n × m pixels containing the region where the learning target point can be expected to be may be used as the sampling range. The region where the learning target point can be expected to be may be a region of prescribed size at a prescribed position in the normalized image. For example, when the upper eyelid point is the learning target point, the center point of the eye can be determined from the inner eye corner point and the outer eye corner point, and a prescribed range above the eye's center point can be taken as the region where the learning target point can be expected to be.
Alternatively, a range containing the region where the learning target object can be expected to be may be used as the sampling range. Specifically, as shown in Fig. 5, when the learning target object is an eye, a range covering the region in the normalized image where the eye can be expected to be may be used as the sampling range; for example, as described above, the center point of the eye is determined from the inner eye corner point and the outer eye corner point, and a range of i × j pixels centered on the eye's center point may be used as the sampling range.
The shape of the sampling range is not limited to a rectangle of i × j pixels. The shape may be arbitrary; for example, it may be another polygon or a circle. In the example shown in Fig. 5, since a range covering the region where the eye can be expected to be is used as the sampling range, the sampling range has a shape in which the four corners of a rectangle are trimmed off.
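The following sketch shows one concrete sampling position determination method consistent with the above: a regular grid inside an i × j range centered on the reference point. The range size and step are assumed values, not prescribed by the text:

```python
import numpy as np

def grid_sampling_points(reference, range_size=(40, 24), step=2):
    """Regularly spaced sampling points in a range centered on the reference point.

    reference: (x, y) center of the sampling range (e.g. the eye center point).
    range_size: (i, j) width and height of the sampling range in pixels.
    """
    cx, cy = int(reference[0]), int(reference[1])
    i, j = range_size
    xs = np.arange(cx - i // 2, cx + i // 2 + 1, step)
    ys = np.arange(cy - j // 2, cy + j // 2 + 1, step)
    # All grid points are used here; pixels could instead be chosen irregularly
    # or as block centers, as described above.
    return [(int(x), int(y)) for y in ys for x in xs]
```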
Based on a prescribed feature extraction method, the feature extraction unit 26 extracts, for each sampling point determined by the sampling position determination unit 25, a feature value from the pixel of the sampling point or from a pixel group containing the pixel of the sampling point. The feature extraction unit 26 generates, for each position point, a feature value group consisting of a plurality of feature values respectively corresponding to the sampling points.
In other words, for each of the plurality of sampling points corresponding to the reference point that corresponds to a position point, the feature extraction unit 26 extracts a feature value from the pixel of that sampling point or from a pixel group containing that pixel, thereby extracting a feature value group that corresponds to the reference point and consists of a plurality of feature values respectively corresponding to the extracted sampling points.
Here, the feature extraction method need only be prescribed in advance for each position point (learning target point) of an object; it may be any method.
Specifically, the feature values extracted by the feature extraction unit 26 may be arbitrary. For example, they may be luminance values, edge information, frequency characteristics (Gabor, Haar features, etc.), luminance gradient feature values (SIFT, HOG, etc.), or combinations of these.
When extracting a feature value from a pixel group containing the pixel of a sampling point, the feature extraction unit 26 may extract the feature value based on the mean or median of the values of all pixels contained in the pixel group. In this case, the feature extraction unit 26 may also extract the feature value based on one or more of the pixels contained in the pixel group. For example, when extracting a feature value from a pixel group of nine pixels, i.e. 3 × 3 pixels centered on the sampling point, the feature extraction unit 26 may extract the feature value based on the mean or median of the values of the nine pixels, or based on one or more of the nine pixels.
The feature extraction unit 26 may also extract feature values of a plurality of kinds from one sampling point. For example, the feature extraction unit 26 may extract both a luminance value and a Haar-like feature value from the pixel or pixel group of one sampling point and use both as feature values. It may also extract a luminance value from the pixel group of 3 × 3 pixels centered on a sampling point as one feature value and a luminance value from the pixel group of 4 × 4 pixels centered on the same sampling point as another, thereby extracting feature values of two kinds.
In the example shown in Fig. 5, the feature extraction unit 26 extracts a Haar-like feature value from each pixel of the sampling points as the feature value and generates the feature value group. The sampling position determination unit 25 sets, for example, several hundred sampling points within the sampling range; that is, the feature extraction unit 26 generates a feature value group consisting of, for example, several hundred feature values.
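As a sketch of the feature extraction step, the snippet below uses the mean luminance of a 3 × 3 pixel group around each sampling point as the feature value, which is one of the options named above; a Haar-like or gradient feature could be substituted without changing the structure. It assumes all sampling points lie at least one pixel inside the image:

```python
import numpy as np

def extract_feature_group(gray_image, sampling_points, half=1):
    """One feature value per sampling point; here, mean luminance of a 3x3 block."""
    features = []
    for x, y in sampling_points:
        block = gray_image[y - half:y + half + 1, x - half:x + half + 1]
        features.append(block.mean())
    return np.array(features)                 # the feature value group X
```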
The positional information generation unit 27 acquires the reference coordinates from the reference point determination unit 23 and the position point coordinates from the position point determination unit 24. Based on a prescribed positional information generation method, the positional information generation unit 27 then generates positional information indicating the position of the learning target point with the reference point as the origin; in other words, it generates positional information indicating the relative position of the learning target point with respect to the reference point. The positional information generation unit 27 outputs the generated positional information to the LRF function calculation unit 28.
The positional information is a coordinate in an xy coordinate system or a polar coordinate system. The reference coordinates and the position point coordinates may likewise be in either an xy coordinate system or a polar coordinate system.
Here, the positional information generation method need only be prescribed in advance for each position point (learning target point) of an object; it may be any method.
In the example shown in Fig. 4, the positional information generation unit 27 expresses the reference coordinates, the position point coordinates, and the positional information in an xy coordinate system, calculates, for each set of position point coordinates, the difference between the position point coordinates and the reference coordinates, and thereby generates the positional information of each learning target point. That is, if the position point coordinates are (a, b) and the reference coordinates are (c, d), the positional information (X, Y) is calculated as (a - c, b - d).
In Fig. 4, "LeftX" and "LeftY" denote the x and y coordinates of the positional information of the outer eye corner point, "RightX" and "RightY" those of the inner eye corner point, "UpX" and "UpY" those of the upper eyelid point, and "DownX" and "DownY" those of the lower eyelid point.
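Following the naming of Fig. 4, a sketch that assembles the positional information of the four eye contour points relative to the reference point; the stacking order here is our assumption:

```python
import numpy as np

def positional_information(reference, points):
    """points: dict of learning target points, e.g. outer/inner corner, upper/lower eyelid."""
    c, d = reference
    info = []
    for name in ("outer_corner", "inner_corner", "upper_eyelid", "lower_eyelid"):
        a, b = points[name]
        info.extend([a - c, b - d])   # (LeftX, LeftY, RightX, RightY, UpX, UpY, DownX, DownY)
    return np.array(info)             # the positional information Y
```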
For one image, the LRF function calculation unit 28 acquires, for each learning target point, the feature value group corresponding to that learning target point from the feature extraction unit 26 and the positional information corresponding to that learning target point from the positional information generation unit 27. For each learning target point, based on the feature value groups and positional information generated from a plurality of images, the LRF function calculation unit 28 generates correspondence information that corresponds to that learning target point and represents the correspondence between positional information and feature value groups. When generating the correspondence information, the LRF function calculation unit 28 uses, for the same learning target point, positional information and feature value groups generated by the same methods (the same learning target region image extraction method, normalization method, reference point determination method, sampling position determination method, feature extraction method, and positional information generation method).
Specifically, as shown in Fig. 6, the LRF function calculation unit 28 plots the feature value groups and positional information generated from the respective images and calculates, by regression analysis, the LRF function (correspondence information) representing the correlation between positional information and feature value groups. In Fig. 6, for convenience of explanation, the LRF function is represented by a plane; in reality, since the correspondence between feature value groups and positional information is of high order (high dimension), the correspondence is represented by a regression hyperplane, which is the LRF function.
In the example shown in Fig. 6, the sampling range is set so as to cover all the regions where the learning target points (the outer eye corner point, inner eye corner point, upper eyelid point, and lower eyelid point) can be expected to be, the same feature value group is generated for every learning target point, and the positional information of each learning target point is generated using the same reference point; therefore, the positional information of all the learning target points is associated with one feature value group. However, this is not limiting: positional information and a feature value group may be generated separately for each learning target point, and an LRF function may be obtained for each learning target point.
In the example shown in Fig. 6, if the feature value group is denoted X and the positional information Y, then Y is expressed as Y = AX + B. Here, for example, the feature value group X consists of m feature values extracted from k sampling points (m = k × (the number of kinds of feature values extracted from one sampling point)), and the positional information Y consists of the x and y coordinates of n learning target points; X and Y are then expressed as X = (x1, x2, ..., xm)^T and Y = (y1, y2, ..., y2n)^T. In this case, the coefficient A is expressed as a 2n × m matrix and the coefficient B as a 2n × 1 matrix.
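A minimal sketch of learning the linear LRF function Y = AX + B by ordinary least squares over the training set (multiple regression being one of the analyses the text allows); the data layout is assumed:

```python
import numpy as np

def learn_lrf_function(X_rows, Y_rows):
    """X_rows: (num_images, m) feature value groups; Y_rows: (num_images, 2n) positional info.

    Solves Y = AX + B in the least-squares sense by appending a constant 1
    to each feature value group, so the last row of the solution is B.
    """
    X_aug = np.hstack([X_rows, np.ones((len(X_rows), 1))])   # (num_images, m + 1)
    W, *_ = np.linalg.lstsq(X_aug, Y_rows, rcond=None)       # (m + 1, 2n)
    A = W[:-1].T                                             # (2n, m)
    B = W[-1]                                                # (2n,)
    return A, B

def apply_lrf_function(A, B, x):
    """Positional information predicted for one feature value group x."""
    return A @ x + B
```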
The regression analysis used by the LRF function calculation unit 28 may be any regression analysis, such as multiple regression or CCA. The LRF function obtained by the LRF function calculation unit 28 may be a linear function as shown in Fig. 6, or it may be a nonlinear function.
The LRF function calculation unit 28 may also generate, based on the correspondence between positional information and feature value groups, a lookup table defining the correspondence between the two.
The LRF function calculation unit 28 stores LRF information in the storage unit 17, the LRF information being information in which the generated LRF function is associated with the position point whose correspondence the LRF function represents and with the above-mentioned methods.
The storage unit 17 stores programs, data, and the like referred to by the control unit 16; for example, it stores the above-mentioned LRF information 41.
The LRF information 41 stored in the storage unit 17 is described based on Fig. 7. Fig. 7 is a diagram showing an example of the LRF information 41 stored in the storage unit 17.
As shown in Fig. 7, the LRF information 41 is information in which each position point is associated with the LRF function relating to that position point. The LRF information 41 is also information in which each position point is associated with the methods used to generate the feature value groups and positional information (the learning target region image extraction method, normalization method, reference point determination method, sampling position determination method, feature extraction method, and positional information generation method).
In the example shown in Fig. 7, the same learning target region image extraction method and normalization method are associated with every learning target point, and for the other methods the same method is associated with every learning target point of each object; however, this is not limiting. A different method may be adopted for each learning target point.
Also, in the example shown in Fig. 7, the LRF function is not limited to being associated with each position point; an LRF function may instead be associated with each object. For example, in the example shown in Fig. 7, for each object the learning target region image extraction method, normalization method, reference point determination method, sampling position determination method, and feature extraction method are all the same. That is, for the same object, the feature value group X extracted from a given image is the same regardless of the position point. In this case, for the right eye, for example, by setting y1 to y10 of the positional information Y = (y1, y2, ..., y10)^T to the x and y coordinates of the positional information of the upper eyelid point, the lower eyelid point, the inner eye corner point, the outer eye corner point, and the pupil, respectively, the LRF function of the right eye can be expressed as Y = AX + B, where A = (A1, A2, ..., A5)^T and B = (B1, B2, ..., B5)^T.
Further, in the example shown in Fig. 7, the methods in the LRF information 41 need not necessarily be associated with the LRF functions. The example shown in Fig. 7 illustrates the case where the LRF learning device appropriately selects each method at learning time to generate the LRF functions; when a method prescribed in advance for each position point is used at learning or detection time, the position point detection device 1 and the LRF learning device 2 need only store the prescribed method for each position point (for example, the method may simply be incorporated into the learning program and the detection program). In this case, the methods need not be associated with the LRF functions in the LRF information 41, and the LRF information 41 need only contain information in which position points are associated with LRF functions.
(structure of position point detection device)
Next, the position point detection device is described based on Fig. 1. Based on the LRF information generated by the LRF learning device, the position point detection device detects position points, such as contour points and feature points of an object, from an image obtained by another device or captured by a camera mounted on this device.
The position point detection device may be, for example, a digital camera, a PC, a mobile phone, a PDA (Personal Digital Assistant), a game machine, a device that takes and prints photographs, a device that edits images, or the like.
In the present embodiment, the objects whose position points are to be detected are human eyes, mouths, and the like, but the objects are not limited to these. For example, they may be the faces or organs of animals such as dogs and cats, or mobile phones, television sets, buildings, clouds, and so on. Here, a position point to be detected is called a detection target point, and an object having a detection target point is called a detection target object.
Fig. 1 is a block diagram showing an example of the main configuration of the position point detection device 1. As shown in Fig. 1, the position point detection device 1 includes a control unit 11, a storage unit 12, an image input unit 13, an operation unit (input unit) 14, and a display unit 15. The position point detection device 1 may further include a communication unit for communicating with other devices, a voice input unit, a voice output unit, and other such components, but these are not shown because they are unrelated to the characteristic points of the invention.
For convenience of explanation, components having the same functions as those included in the LRF learning device 2 are given the same reference numerals, and part of their explanation is omitted.
The storage unit 12 stores programs, data, and the like referred to by the control unit 11; for example, it stores the LRF information 41 generated by the LRF learning device. The LRF information 41 stored in the storage unit 12 may be, for example, the data shown in Fig. 7.
The control unit 11 performs various computations by executing programs read from the storage unit 12 into a temporary storage unit (not shown), and centrally controls the units of the position point detection device 1.
In the present embodiment, the control unit 11 includes, as functional blocks: an image acquisition unit 21, a region clipping unit 22, a reference point determination unit 23, a sampling position determination unit 25, a feature extraction unit 26, a positional information identification unit 29, and a position point identification unit (detection-side position point identification unit) 30. These functional blocks (21 to 23, 25, 26, 29, 30) of the control unit 11 are realized by a CPU reading programs stored in a storage device realized as a ROM or the like into a temporary storage unit such as a RAM and executing them.
Image acquiring unit 21 is obtained the image via 13 inputs of image input part.The image that image acquiring unit 21 is obtained to 22 outputs of region intercepting portion.
Region intercepting portion 22 reads LRF information 41 from storage part 12, for set up the learning object area image extracting method of corresponding relation with detected object point in LRF information 41, from obtained image, extract detected object area image, this detected object area image is the image in the region of inclusion test object-point.
In addition, region intercepting portion 22, based on set up the standardized method of corresponding relation with detected object point in LRF information 41, by extracted detected object area image standardization, generates standardized images.Region intercepting portion 22 exports to reference point specifying unit 23 and sampling location determination portion 25 standardized images generating.
Reference point specifying unit 23 reads LRF information 41 from storage part 12, from region intercepting portion 22, obtains standardized images.The reference point of reference point specifying unit 23 based on set up corresponding relation with detected object point in LRF information 41 determined method, and the point of the regulation in obtained standardized images is defined as to reference point.Reference point specifying unit 23 is to position point determination portion 30 output reference coordinates, and this reference coordinate is the coordinate of determined reference point in standardized images.
The sampling location determination portion 25 reads the LRF information 41 from the storage part 12, obtains the standardized image from the region intercepting portion 22, and, based on the sampling location determination method associated with the detected object point in the LRF information 41, determines a plurality of sampled points corresponding to the reference point (position point) within a prescribed range in the standardized image.
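One plausible way to realize a sampling location determination method is a fixed grid of offsets centered on the reference point. The sketch below assumes exactly that; the radius and spacing are illustrative, since the concrete methods ("J001" and the like) are left open here:

```python
import numpy as np

def sampling_points(reference: np.ndarray, radius: int = 20,
                    step: int = 5) -> np.ndarray:
    """Return a regular grid of sampled points centered on the reference point.

    The radius and spacing are illustrative stand-ins for a sampling
    location determination method; the range is chosen so that the grid
    covers the organ around the reference point.
    """
    offsets = np.arange(-radius, radius + 1, step)
    grid = np.array([(dx, dy) for dy in offsets for dx in offsets])
    return reference + grid  # absolute (x, y) coordinates in the standardized image
```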
The Characteristic Extraction portion 26 reads the LRF information 41 from the storage part 12 and, based on the Characteristic Extraction method associated with the detected object point in the LRF information 41, extracts for each sampled point corresponding to the reference point a characteristic quantity from the pixel at the sampled point or from a pixel group including that pixel. The Characteristic Extraction portion 26 then generates a characteristic quantity group consisting of the plurality of characteristic quantities respectively corresponding to the sampled points.
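A minimal sketch of this feature group generation, assuming the characteristic quantity of each sampled point is simply the gray value of its pixel in a grayscale standardized image (a feature computed from a pixel group around each point would fit the same interface):

```python
import numpy as np

def extract_feature_group(norm_image: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Collect one characteristic quantity per sampled point into a group.

    Assumes a grayscale standardized image and takes the pixel's gray
    value as the characteristic quantity.
    """
    h, w = norm_image.shape[:2]
    xs = np.clip(points[:, 0], 0, w - 1).astype(int)
    ys = np.clip(points[:, 1], 0, h - 1).astype(int)
    return norm_image[ys, xs].astype(np.float64)
```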
The positional information determination portion 29 reads the LRF information 41 from the storage part 12 and, based on the LRF function associated with the detected object point in the LRF information 41, determines the positional information corresponding to the characteristic quantity group generated by the Characteristic Extraction portion 26. The positional information determination portion 29 outputs the determined positional information to the position point determination portion 30.
Specifically, as shown in Fig. 8, the positional information determination portion 29 supplies the characteristic quantity group generated by the Characteristic Extraction portion 26 to the LRF function as an input value, and takes its output result as the positional information.
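If the LRF function were, for instance, a linear regression from the characteristic quantity group to the offset, evaluating it would reduce to a matrix-vector product. The linear form and the function name are assumptions of this sketch:

```python
import numpy as np

def apply_lrf(weights: np.ndarray, bias: np.ndarray,
              feature_group: np.ndarray) -> np.ndarray:
    """Evaluate a linear LRF function: feature group in, positional information out.

    weights has shape (2, n_features) and bias has shape (2,), so the
    result is the offset (dx, dy) of the detected object point relative
    to the reference point.
    """
    return weights @ feature_group + bias
```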
The position point determination portion 30 obtains the reference coordinate from the reference point specifying unit 23 and the positional information from the positional information determination portion 29. In the standardized image, the position point determination portion 30 determines as the detected object point the position indicated by the positional information, taking the point shown by the reference coordinate as the origin.
(LRF learning method)
Next, the LRF learning method performed by the LRF learning device 2 is described based on Fig. 9 and Fig. 10. Fig. 9 shows an example of the LRF learning method performed by the LRF learning device 2. Fig. 10 is a transition diagram schematically showing, using images, the states of the respective processes included in the LRF learning method.
In the example shown in Fig. 9 and Fig. 10, LRF functions corresponding to the points of the eyes and mouth of a human face are generated. Specifically, the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the right eye and left eye, the right and left mouth corner points, and the upper and lower midpoints of the upper lip and lower lip are taken as learning object points. Here, the upper midpoint of the upper lip (lower lip) is the point on the upper side of the center of the upper lip (lower lip), and the lower midpoint of the upper lip (lower lip) is the point on the lower side of the center of the upper lip (lower lip).
Further, in the example shown in Fig. 9 and Fig. 10, the center points of the right eye, left eye, and mouth are taken as the reference points of the right eye, left eye, and mouth, respectively. The sample ranges are set so as to cover the right eye, left eye, and mouth, respectively; specifically, each sample range is a prescribed range centered on the center point (reference point) of the right eye, left eye, or mouth.
As shown in Fig. 9, first, the image acquiring unit 21 acquires an image input via the image input part 13 (S1). The state at this time is shown as state 1 of Fig. 10.
Next, the region intercepting portion 22 detects a face image from the image acquired by the image acquiring unit 21, based on the learning object area image extracting method "G001" (for example, an existing face area detection method or face organ point detection method) (S2). The state at this time is shown as state 2 of Fig. 10. In state 2, the detected face image is enclosed by a quadrilateral frame, and the detected face organ points are indicated by white dots.
The region intercepting portion 22 cuts out the detected face image based on the standardization method "H001" and normalizes the cut-out face image to generate a standardized image (S3). The state at this time is shown as state 3 of Fig. 10.
Next, the reference point specifying unit 23 determines the reference points of the right eye, left eye, and mouth in the standardized image, based on the reference point determination methods "I001", "I002", and "I003", respectively (S4). The state at this time is shown as state 4 of Fig. 10. As described above, in state 4, a reference point is set at each of the centers of the left and right eyes and the center of the mouth.
Next, the position point determination portion 24 determines, in the standardized image, the outer eye corner points, inner eye corner points, upper eyelid points, lower eyelid points, and pupils of the right eye and left eye, the right and left mouth corner points, and the upper and lower midpoints of the upper lip and lower lip as learning object points (S5). The state at this time is shown as state 5 of Fig. 10.
Next, the sampling location determination portion 25 determines a plurality of sampled points in each sample range in the standardized image, based on the sampling location determination methods "J001", "J002", and "J003", respectively (S6). Then, the Characteristic Extraction portion 26 extracts characteristic quantity groups from the pixels or pixel groups of the sampled points of the left and right eyes and the mouth, based on the Characteristic Extraction methods "K001", "K002", and "K003", respectively (S7). The state at this time is shown as state 6 of Fig. 10. As described above, in state 6, sampled points are set at prescribed positions centered on the center point of each organ so as to cover the left and right eyes and the mouth, respectively. That is, the following three characteristic quantity groups are generated here: the characteristic quantity group corresponding to the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the right eye; the characteristic quantity group corresponding to the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the left eye; and the characteristic quantity group corresponding to the right and left mouth corner points and the upper and lower midpoints of the upper lip and lower lip. In other words, one characteristic quantity group is generated for each of the reference points (center points) of the right eye, left eye, and mouth, making three groups in total.
Next, the positional information generating unit 27 generates, for each learning object point, positional information representing the position of the learning object point with the reference point as the origin, based on the positional information generation methods "L001", "L002", and "L003", respectively (S8). The state at this time is shown as state 7 of Fig. 10.
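Concretely, positional information of this kind can be the coordinate difference between the learning object point and the reference point; a minimal sketch under that assumption:

```python
import numpy as np

def positional_information(object_point: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Express a learning object point relative to its reference point.

    This realizes a positional information generation method of the kind
    labeled "L001": the offset (dx, dy) with the reference point taken
    as the origin.
    """
    return object_point - reference
```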
The above processing is performed on a plurality of images, and a characteristic quantity group and positional information are generated for each learning object point in each image.
The LRF function calculating part 28 uses regression analysis to generate an LRF function corresponding to each learning object point from the many sets of positional information and characteristic quantity groups (S9). Then, the LRF function calculating part 28 associates the LRF function generated for each learning object point with each method used (the learning object area image extracting method, standardization method, reference point determination method, sampling location determination method, Characteristic Extraction method, and positional information generation method), generates the LRF information 41, and stores it in the storage part 12.
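Step S9 can be sketched as follows, assuming an ordinary linear least-squares regression (the text requires only some form of regression analysis); one such function would be fitted per learning object point:

```python
import numpy as np

def fit_lrf(feature_groups: np.ndarray, offsets: np.ndarray):
    """Fit one linear LRF function from many (feature group, offset) pairs.

    feature_groups: (n_images, n_features) matrix, one row per training image.
    offsets: (n_images, 2) matrix of relative positions (dx, dy).
    Returns weights of shape (2, n_features) and bias of shape (2,).
    """
    # Append a constant column so the bias is estimated jointly.
    X = np.hstack([feature_groups, np.ones((feature_groups.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, offsets, rcond=None)  # shape (n_features + 1, 2)
    return coef[:-1].T, coef[-1]
```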
(Position point detecting method)
Next, the position point detecting method performed by the position point detection device 1 is described based on Fig. 11 and Fig. 12. Fig. 11 shows an example of the position point detecting method performed by the position point detection device 1. Fig. 12 is a transition diagram schematically showing, using images, the states of the respective processes included in the position point detecting method.
In the example shown in Fig. 11 and Fig. 12, the LRF information 41 shown in Fig. 7 is stored in the storage part 12 of the position point detection device 1. Here, this LRF information 41 is for detecting the outer eye corner points, inner eye corner points, upper eyelid points, lower eyelid points, and pupils of the right eye and left eye of a human face, the right and left mouth corner points, and the upper and lower midpoints of the upper lip and lower lip.
As shown in Fig. 11, first, the image acquiring unit 21 acquires an image input via the image input part 13 (S11). The state at this time is shown as state 11 of Fig. 12.
Next, the region intercepting portion 22 reads the LRF information 41 from the storage part 12. Here, in the LRF information 41, every detected object point is associated with the same learning object area image extracting method "G001" and standardization method "H001". Therefore, the region intercepting portion 22 cuts out an image from the image acquired by the image acquiring unit 21, based on the learning object area image extracting method "G001" (S12). The state at this time is shown as state 12 of Fig. 12. In state 12, a face image and face organ points are detected; the detected face image is enclosed by a quadrilateral frame, and the detected face organ points are indicated by white dots.
The region intercepting portion 22 cuts out the detected face image and normalizes it based on the standardization method "H001" to generate a standardized image (S13). The state at this time is shown as state 13 of Fig. 12.
Next, the reference point specifying unit 23 reads the LRF information 41 from the storage part 12. Here, in the LRF information 41, the detected object points are associated, in units of the right eye, left eye, and mouth, with the same reference point determination methods "I001", "I002", and "I003", respectively. Therefore, in the standardized image, the reference point specifying unit 23 determines the reference point of the detected object points of the right eye based on the reference point determination method "I001", the reference point of the detected object points of the left eye based on "I002", and the reference point of the detected object points of the mouth based on "I003" (S14). The state at this time is shown as state 14 of Fig. 12. As shown in the figure, in state 14, the respective center points of the right eye, left eye, and mouth are determined as the reference points.
Next, the sampling location determination portion 25 reads the LRF information 41 from the storage part 12. Here, in the LRF information 41, the detected object points are associated, in units of the right eye, left eye, and mouth, with the same sampling location determination methods "J001", "J002", and "J003", respectively. Therefore, in the standardized image, the sampling location determination portion 25 determines the sampled points of the detected object points of the right eye based on the sampling location determination method "J001", those of the left eye based on "J002", and those of the mouth based on "J003" (S15).
The state at this time is shown as state 15 of Fig. 12. As shown in the figure, in state 15, sampled points are set within prescribed ranges centered on the reference points of the respective organs so as to cover the left and right eyes and the mouth, respectively.
The Characteristic Extraction portion 26 reads the LRF information 41 from the storage part 12. Here, in the LRF information 41, the detected object points are associated, in units of the right eye, left eye, and mouth, with the same Characteristic Extraction methods "K001", "K002", and "K003", respectively. Therefore, the Characteristic Extraction portion 26 extracts the characteristic quantity group of the detected object points of the right eye from the pixels or pixel groups of their sampled points based on the Characteristic Extraction method "K001", that of the left eye based on "K002", and that of the mouth based on "K003" (S16).
That is, the following three characteristic quantity groups are generated here: the characteristic quantity group corresponding to the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the right eye; the characteristic quantity group corresponding to the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the left eye; and the characteristic quantity group corresponding to the right and left mouth corner points and the upper and lower midpoints of the upper lip and lower lip. In other words, one characteristic quantity group is generated for each of the reference points (center points) of the right eye, left eye, and mouth, making three groups in total.
Next, the positional information determination portion 29 reads the LRF information 41 from the storage part 12. The positional information determination portion 29 then inputs, to the LRF functions respectively associated with the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the right eye, the characteristic quantity group corresponding to these detected object points, and determines the positional information of each of these points of the right eye. Similarly, it inputs, to the LRF functions respectively associated with the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the left eye, the characteristic quantity group corresponding to those detected object points, and determines their positional information. Further, it inputs, to the LRF functions respectively associated with the right and left mouth corner points and the upper and lower midpoints of the upper lip and lower lip, the characteristic quantity group corresponding to these detected object points, and determines the positional information of each of these points (S17).
Finally, the position point determination portion 30 reads the LRF information 41 from the storage part 12. Here, in the LRF information 41, the detected object points are associated, in units of the right eye, left eye, and mouth, with the same positional information generation methods "L001", "L002", and "L003", respectively. Therefore, based on the positional information generation method "L001", the position point determination portion 30 determines, from the positional information of the outer eye corner point, inner eye corner point, upper eyelid point, lower eyelid point, and pupil of the right eye, the coordinates of these detected object points in the standardized image; based on "L002", it does the same for the corresponding points of the left eye; and based on "L003", it determines, from the positional information of the right and left mouth corner points and the upper and lower midpoints of the upper lip and lower lip, the coordinates of these detected object points in the standardized image (S18).
For example, for the outer eye corner point of the right eye, the difference values of the X coordinate and Y coordinate indicated by the positional information of the outer eye corner point are added to the X coordinate value and Y coordinate value of the reference point (center point) of the right eye, respectively. The X and Y coordinate values obtained by this addition are the coordinate values of the outer eye corner point in the standardized image. The same processing is performed for the other position points of the right eye, the position points of the left eye, and the position points of the mouth, so that the coordinates of the position points of the right eye, left eye, and mouth in the standardized image are determined.
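A worked sketch of this addition in code, with illustrative coordinate values rather than values from the embodiment:

```python
import numpy as np

reference = np.array([30.0, 40.0])  # right-eye center (reference point) in the standardized image
offset = np.array([12.0, -3.0])     # positional information output by the LRF function
corner = reference + offset         # outer eye corner point: [42.0, 37.0]
```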
The state at this time is shown as state 16 of Fig. 12. As shown in the figure, in state 16, the positions (coordinates) in the standardized image of the outer eye corner points, inner eye corner points, upper eyelid points, lower eyelid points, and pupils of the right eye and left eye, the right and left mouth corner points, and the upper and lower midpoints of the upper lip and lower lip are determined.
Then, from the coordinate values of the respective position points in the standardized image, the coordinate values of the respective position points in the original image are calculated using, for example, an affine transformation, and the coordinates of the position points in the original image are thereby determined.
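This back-mapping can be sketched as applying the inverse of the 2x3 affine matrix assumed to have taken original coordinates to standardized ones; the matrix itself is an input assumed by the example:

```python
import numpy as np

def to_original_coords(points: np.ndarray, affine: np.ndarray) -> np.ndarray:
    """Map points from the standardized image back to the original image.

    affine is the 2x3 matrix assumed to have mapped original coordinates
    to standardized ones; it is promoted to 3x3, inverted, and applied
    to each (x, y) point.
    """
    full = np.vstack([affine, [0.0, 0.0, 1.0]])
    inv = np.linalg.inv(full)
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ inv.T)[:, :2]
```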
(Means for solving the problems)
An image processing apparatus of the present invention detects a position point of an object from an image, and is characterized by having: a reference point determining unit that determines, on the image, a reference point corresponding to the position point; a Characteristic Extraction unit that, for each sampled point of a plurality of sampled points corresponding to the reference point, extracts a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points; a positional information determining unit that refers to correspondence relationship information and determines the positional information corresponding to the characteristic quantity group extracted by the Characteristic Extraction unit, the correspondence relationship information representing the correspondence between a characteristic quantity group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampled points, and positional information representing the relative position of the position point with respect to the reference point; and a detection-side position point determining unit that takes the position represented by the positional information determined by the positional information determining unit as the position point of the object.
An image processing method of the present invention detects a position point of an object from an image, and is characterized by comprising: a reference point determining step of determining, on the image, a reference point corresponding to the position point; a Characteristic Extraction step of extracting, for each sampled point of a plurality of sampled points corresponding to the reference point, a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points; a positional information determining step of referring to correspondence relationship information and determining the positional information corresponding to the characteristic quantity group extracted in the Characteristic Extraction step, the correspondence relationship information representing the correspondence between a characteristic quantity group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampled points, and positional information representing the relative position of the position point with respect to the reference point; and a position point determining step of taking the position represented by the positional information determined in the positional information determining step as the position point of the object.
According to the above structure, the positional information determining unit refers to the correspondence relationship information representing the correspondence between the characteristic quantity group and the positional information, and determines the positional information corresponding to the characteristic quantity group extracted by the Characteristic Extraction unit; here, the characteristic quantity group is the group of characteristic quantities corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampled points, and the positional information represents the relative position of the position point with respect to the reference point. The detection-side position point determining unit then takes the position represented by the positional information determined by the positional information determining unit as the position point of the object.
The inventor has found that there is a correlation between the characteristic quantity group extracted from a region of an image containing an organ such as an eye or a mouth and the relative positions, with respect to the reference point, of the points (feature points) of that organ in the image. Based on this finding, by referring to the correspondence relationship information representing the correspondence between the characteristic quantity group and the positional information, a position point of an object in an image can be detected with high accuracy even if the object changes in shape. That is, the image processing apparatus and the image processing method described above achieve the effect of accurately detecting a position point of an object even when the shape of the object changes.
Preferably, the image processing apparatus of the present invention further has a sampling location determining unit that determines the positions of the sampled points on the image, within a range including the region where the position point is considered to be located.
An information generation device of the present invention generates the correspondence relationship information referred to by the image processing apparatus described above, and is characterized by having: an image acquisition unit that acquires an image in which a position point of an object is captured; a reference point determining unit that determines, on the image, the reference point corresponding to the position point; a Characteristic Extraction unit that, for each sampled point of a plurality of sampled points corresponding to the reference point, extracts a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points; a positional information generation unit that generates the positional information representing the relative position of the position point with respect to the reference point determined by the reference point determining unit; and a correspondence relationship information generation unit that generates the correspondence relationship information representing the correspondence between the characteristic quantity group extracted by the Characteristic Extraction unit and the positional information generated by the positional information generation unit.
An information generating method of the present invention generates the correspondence relationship information referred to in the image processing method described above, and is characterized by comprising: an image acquisition step of acquiring an image in which a position point of an object is captured; a reference point determining step of determining, on the image, the reference point corresponding to the position point; a Characteristic Extraction step of extracting, for each sampled point of a plurality of sampled points corresponding to the reference point, a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points; a positional information generating step of generating the positional information representing the relative position of the position point with respect to the reference point determined in the reference point determining step; and a correspondence relationship information generating step of generating the correspondence relationship information representing the correspondence between the characteristic quantity group extracted in the Characteristic Extraction step and the positional information generated in the positional information generating step.
According to the above structure, the image acquisition unit acquires an image in which a position point of the object is captured, the reference point determining unit determines, on the image, the reference point corresponding to the position point, the Characteristic Extraction unit extracts, for each sampled point of the plurality of sampled points corresponding to the reference point, a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting the characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points, the positional information generation unit generates the positional information representing the relative position of the position point with respect to the reference point determined by the reference point determining unit, and the correspondence relationship information generation unit generates the correspondence relationship information representing the correspondence between the characteristic quantity group extracted by the Characteristic Extraction unit and the positional information generated by the positional information generation unit.
Therefore, the effect of generating the correspondence relationship information for reference by the image processing apparatus can be achieved. As described above, since there is a correspondence between the characteristic quantity group and the positional information, the position point of an object can be detected with high accuracy by using the generated correspondence relationship information.
Preferably, in the information generation device of the present invention, the correspondence relationship information generation unit generates the correspondence relationship information using regression analysis.
Preferably, the information generation device of the present invention further has: an input unit that receives an operation instruction from a user; and a learning-side position point determining unit that determines the position point of the object on the image based on the operation instruction input to the input unit.
Preferably, in the information generation device of the present invention, the image acquisition unit acquires the image together with position point position information that is associated with the image and represents the position of the position point, and the information generation device further has a learning-side position point determining unit that determines the position point of the object on the image based on the position represented by the position point position information.
The image processing apparatus and the information generation device of the present invention may be realized by a computer. In that case, a control program that realizes the image processing apparatus and the information generation device on a computer by operating the computer as each unit of the image processing apparatus and the information generation device, and a computer-readable recording medium on which this program is recorded, also fall within the scope of the present invention.
(Supplement)
The present invention is not limited to the embodiment described above, and various modifications are possible within the scope of the claims. That is, embodiments obtained by appropriately combining modified technical means within the scope of the claims are also included in the technical scope of the present invention.
Finally, each block of the position point detection device 1 and the LRF learning device 2, in particular the control part 11 and the control part 16, may be constituted by hardware logic, or may be realized by software using a CPU as follows.
That is, the position point detection device 1 and the LRF learning device 2 have a CPU (central processing unit) that executes the instructions of the control program realizing each function, a ROM (read-only memory) that stores the program, a RAM (random access memory) into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data. The object of the present invention can also be achieved by supplying to the position point detection device 1 and the LRF learning device 2 a computer-readable recording medium on which the program code (an executable format program, an intermediate code program, or a source program) of the control program for the position point detection device 1 and the LRF learning device 2, which is software realizing the functions described above, is recorded, and having the computer (or a CPU or MPU (microprocessor)) read and execute the program code recorded on the recording medium.
As the recording medium, for example, the following can be used: tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM (compact disc read-only memory), MO (magneto-optical disc), MD (MiniDisc), DVD (digital versatile disc), and CD-R (CD recordable); cards such as IC cards (including memory cards) and optical cards; and semiconductor memories such as mask ROM, EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), and flash ROM.
The position point detection device 1 and the LRF learning device 2 may also be configured to be connectable to a communication network, and the program code may be supplied via the communication network. The communication network is not particularly limited; for example, the Internet, an intranet, an extranet, a LAN, an ISDN (integrated services digital network), a VAN (value-added network), a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, or a satellite communication network can be used. The transmission medium constituting the communication network is also not particularly limited; for example, wired media such as IEEE 1394, USB, power line transmission, cable television lines, telephone lines, and ADSL (asymmetric digital subscriber line) lines can be used, as can wireless media such as infrared communication of the IrDA (Infrared Data Association) or remote-control type, Bluetooth (registered trademark), IEEE 802.11 wireless networks, HDR, mobile telephone networks, satellite circuits, and digital terrestrial networks. The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
Industrial applicability
The present invention can be used in an image processing apparatus that detects a prescribed position point of an object in an image, and more preferably in an image processing apparatus that detects, from an image, a prescribed position point of an object whose shape varies in diverse ways.
Description of reference numerals
1 position point detection device (image processing apparatus),
2 LRF learning device (information generation device),
14 operating portion (input unit),
21 image acquiring unit (image acquisition unit),
23 reference point specifying unit (reference point determining unit),
24 position point determination portion (learning-side position point determining unit),
25 sampling location determination portion (sampling location determining unit),
26 Characteristic Extraction portion (Characteristic Extraction unit),
27 positional information generating unit (positional information generation unit),
28 LRF function calculating part (correspondence relationship information generation unit),
29 positional information determination portion (positional information determining unit),
30 position point determination portion (detection-side position point determining unit).

Claims (11)

1. An image processing apparatus that detects a position point of an object from an image, characterized by having:
a reference point determining unit that determines, on the image, a reference point corresponding to the position point;
a Characteristic Extraction unit that, for each sampled point of a plurality of sampled points corresponding to the reference point, extracts a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points;
a positional information determining unit that refers to correspondence relationship information and determines the positional information corresponding to the characteristic quantity group extracted by the Characteristic Extraction unit, the correspondence relationship information representing the correspondence between a characteristic quantity group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampled points, and positional information representing the relative position of the position point with respect to the reference point; and
a detection-side position point determining unit that takes the position represented by the positional information determined by the positional information determining unit as the position point of the object.
2. The image processing apparatus as claimed in claim 1, characterized by further having a sampling location determining unit that determines the positions of the sampled points on the image, within a range including the region where the position point is considered to be located.
3. An information generation device that generates the correspondence relationship information referred to by the image processing apparatus as claimed in claim 1 or 2, characterized by having:
an image acquisition unit that acquires an image in which a position point of an object is captured;
a reference point determining unit that determines, on the image, the reference point corresponding to the position point;
a Characteristic Extraction unit that, for each sampled point of a plurality of sampled points corresponding to the reference point, extracts a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points;
a positional information generation unit that generates the positional information representing the relative position of the position point with respect to the reference point determined by the reference point determining unit; and
a correspondence relationship information generation unit that generates the correspondence relationship information representing the correspondence between the characteristic quantity group extracted by the Characteristic Extraction unit and the positional information generated by the positional information generation unit.
4. The information generation device as claimed in claim 3, characterized in that the correspondence relationship information generation unit generates the correspondence relationship information using regression analysis.
5. The information generation device as claimed in claim 3 or 4, characterized by further having:
an input unit that receives an operation instruction from a user; and
a learning-side position point determining unit that determines the position point of the object on the image based on the operation instruction input to the input unit.
6. The information generation device as claimed in claim 3 or 4, characterized in that the image acquisition unit acquires the image together with position point position information that is associated with the image and represents the position of the position point, and
the information generation device further has a learning-side position point determining unit that determines the position point of the object on the image based on the position represented by the position point position information.
7. An image processing method for detecting a position point of an object from an image, characterized by comprising:
a reference point determining step of determining, on the image, a reference point corresponding to the position point;
a Characteristic Extraction step of extracting, for each sampled point of a plurality of sampled points corresponding to the reference point, a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points;
a positional information determining step of referring to correspondence relationship information and determining the positional information corresponding to the characteristic quantity group extracted in the Characteristic Extraction step, the correspondence relationship information representing the correspondence between a characteristic quantity group corresponding to the reference point, extracted from the pixels or pixel groups of the plurality of sampled points, and positional information representing the relative position of the position point with respect to the reference point; and
a position point determining step of taking the position represented by the positional information determined in the positional information determining step as the position point of the object.
8. An information generating method for generating the correspondence relationship information referred to in the image processing method as claimed in claim 7, characterized by comprising:
an image acquisition step of acquiring an image in which a position point of an object is captured;
a reference point determining step of determining, on the image, the reference point corresponding to the position point;
a Characteristic Extraction step of extracting, for each sampled point of a plurality of sampled points corresponding to the reference point, a characteristic quantity from the pixel at that sampled point or from a pixel group including that pixel, thereby extracting a characteristic quantity group that corresponds to the reference point and consists of the plurality of characteristic quantities respectively corresponding to the sampled points;
a positional information generating step of generating the positional information representing the relative position of the position point with respect to the reference point determined in the reference point determining step; and
a correspondence relationship information generating step of generating the correspondence relationship information representing the correspondence between the characteristic quantity group extracted in the Characteristic Extraction step and the positional information generated in the positional information generating step.
9. A control program for operating the image processing apparatus as claimed in claim 1 or 2, characterized by causing a computer to function as each of the above units.
10. A control program for operating the information generation device as claimed in any one of claims 3 to 6, characterized by causing a computer to function as each of the above units.
11. A recording medium, characterized by being readable by a computer and having recorded thereon the control program as claimed in claim 9 or 10.
CN201280025429.5A 2011-06-07 2012-03-14 Image processing device, information generation device, image processing method, information generation method, control program, and recording medium Active CN103562964B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2011127755A JP4998637B1 (en) 2011-06-07 2011-06-07 Image processing apparatus, information generation apparatus, image processing method, information generation method, control program, and recording medium
JP2011-127755 2011-06-07
PCT/JP2012/056516 WO2012169251A1 (en) 2011-06-07 2012-03-14 Image processing device, information generation device, image processing method, information generation method, control program, and recording medium

Publications (2)

Publication Number Publication Date
CN103562964A true CN103562964A (en) 2014-02-05
CN103562964B CN103562964B (en) 2017-02-15

Family

ID=46793925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280025429.5A Active CN103562964B (en) 2011-06-07 2012-03-14 Image processing device, information generation device, image processing method, information generation method, control program, and recording medium

Country Status (6)

Country Link
US (1) US9607209B2 (en)
EP (1) EP2720194A4 (en)
JP (1) JP4998637B1 (en)
KR (1) KR101525133B1 (en)
CN (1) CN103562964B (en)
WO (1) WO2012169251A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6225460B2 (en) * 2013-04-08 2017-11-08 オムロン株式会社 Image processing apparatus, image processing method, control program, and recording medium
JP2015133085A (en) * 2014-01-15 2015-07-23 キヤノン株式会社 Information processing device and method thereof
US9444999B2 (en) * 2014-08-05 2016-09-13 Omnivision Technologies, Inc. Feature detection in image capture
JP6652263B2 (en) * 2015-03-31 2020-02-19 国立大学法人静岡大学 Mouth region detection device and mouth region detection method
US9830528B2 (en) 2015-12-09 2017-11-28 Axis Ab Rotation invariant object feature recognition
JP6872742B2 (en) * 2016-06-30 2021-05-19 学校法人明治大学 Face image processing system, face image processing method and face image processing program
JP7009864B2 (en) * 2017-09-20 2022-01-26 カシオ計算機株式会社 Contour detection device and contour detection method
CN110059522B (en) 2018-01-19 2021-06-25 北京市商汤科技开发有限公司 Human body contour key point detection method, image processing method, device and equipment
CN109871845B (en) * 2019-01-10 2023-10-31 平安科技(深圳)有限公司 Certificate image extraction method and terminal equipment
US11375968B2 (en) 2020-04-06 2022-07-05 GE Precision Healthcare LLC Methods and systems for user and/or patient experience improvement in mammography
CN111553286B (en) * 2020-04-29 2024-01-26 北京攸乐科技有限公司 Method and electronic device for capturing ear animation features
CN111738166B (en) * 2020-06-24 2024-03-01 平安科技(深圳)有限公司 Target contour defining method, device, computer system and readable storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL8902372A (en) 1989-09-21 1991-04-16 Imec Inter Uni Micro Electr METHOD FOR MANUFACTURING A FIELD EFFECT TRANSISTOR AND SEMICONDUCTOR ELEMENT.
JPH0493273A (en) 1990-08-10 1992-03-26 Mitsubishi Electric Corp Paper clamping device
JP2806037B2 (en) * 1990-11-29 1998-09-30 富士通株式会社 Fingerprint collation device
JP3735893B2 (en) 1995-06-22 2006-01-18 セイコーエプソン株式会社 Face image processing method and face image processing apparatus
JP3454726B2 (en) * 1998-09-24 2003-10-06 三洋電機株式会社 Face orientation detection method and apparatus
JP3695990B2 (en) * 1999-05-25 2005-09-14 三菱電機株式会社 Face image processing device
JP3851050B2 (en) * 2000-02-15 2006-11-29 ナイルス株式会社 Eye state detection device
GB2384639B (en) * 2002-01-24 2005-04-13 Pixology Ltd Image processing to remove red-eye features
JP4011426B2 (en) * 2002-07-17 2007-11-21 グローリー株式会社 Face detection device, face detection method, and face detection program
JP2005339288A (en) 2004-05-27 2005-12-08 Toshiba Corp Image processor and its method
JP4217664B2 (en) * 2004-06-28 2009-02-04 キヤノン株式会社 Image processing method and image processing apparatus
KR100791372B1 (en) * 2005-10-14 2008-01-07 삼성전자주식회사 Apparatus and method for facial image compensating
JP4991317B2 (en) * 2006-02-06 2012-08-01 株式会社東芝 Facial feature point detection apparatus and method
JP4093273B2 (en) 2006-03-13 2008-06-04 オムロン株式会社 Feature point detection apparatus, feature point detection method, and feature point detection program
JP2008117333A (en) * 2006-11-08 2008-05-22 Sony Corp Information processor, information processing method, individual identification device, dictionary data generating and updating method in individual identification device and dictionary data generating and updating program
WO2009131209A1 (en) * 2008-04-24 2009-10-29 日本電気株式会社 Image matching device, image matching method, and image matching program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1811456A1 (en) * 2004-11-12 2007-07-25 Omron Corporation Face feature point detector and feature point detector
EP1669933A2 (en) * 2004-12-08 2006-06-14 Sony Corporation Generating a three dimensional model of a face from a single two-dimensional image
US20080080746A1 (en) * 2006-10-02 2008-04-03 Gregory Payonk Method and Apparatus for Identifying Facial Regions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KUNKA, B.等: "Non-intrusive infrared-free eye tracking method", 《SIGNAL PROCESSING ALGORITHMS, ARCHITECTURES, ARRANGEMENTS, AND APPLICATIONS CONFERENCE PROCEEDINGS (SPA), 2009》, 26 September 2009 (2009-09-26), pages 105 - 109, XP031955854 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036296A (en) * 2014-06-20 2014-09-10 深圳先进技术研究院 Method and device for representing and processing image
CN104036296B (en) * 2014-06-20 2017-10-13 深圳先进技术研究院 A kind of expression of image and processing method and processing device
CN106687989A (en) * 2014-10-23 2017-05-17 英特尔公司 Method and system of facial expression recognition using linear relationships within landmark subsets
CN106687989B (en) * 2014-10-23 2021-06-29 英特尔公司 Method, system, readable medium and apparatus for facial expression recognition
CN108062742A (en) * 2017-12-31 2018-05-22 广州二元科技有限公司 A kind of eyebrow replacing options using Digital Image Processing and deformation
CN108062742B (en) * 2017-12-31 2021-05-04 广州二元科技有限公司 Eyebrow replacing method by digital image processing and deformation

Also Published As

Publication number Publication date
JP2012256131A (en) 2012-12-27
WO2012169251A1 (en) 2012-12-13
CN103562964B (en) 2017-02-15
US9607209B2 (en) 2017-03-28
EP2720194A4 (en) 2015-03-18
EP2720194A1 (en) 2014-04-16
KR101525133B1 (en) 2015-06-10
KR20140004230A (en) 2014-01-10
JP4998637B1 (en) 2012-08-15
US20140105487A1 (en) 2014-04-17

Similar Documents

Publication Publication Date Title
CN103562964A (en) Image processing device, information generation device, image processing method, information generation method, control program, and recording medium
CN107609459B (en) A kind of face identification method and device based on deep learning
CN108492343B (en) Image synthesis method for training data for expanding target recognition
US8861800B2 (en) Rapid 3D face reconstruction from a 2D image and methods using such rapid 3D face reconstruction
CA2789887C (en) Face feature vector construction
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN109859305B (en) Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
US11704357B2 (en) Shape-based graphics search
CN105184249A (en) Method and device for processing face image
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
US11615516B2 (en) Image-to-image translation using unpaired data for supervised learning
CN109271930B (en) Micro-expression recognition method, device and storage medium
WO2017045404A1 (en) Facial expression recognition using relations determined by class-to-class comparisons
WO2021127916A1 (en) Facial emotion recognition method, smart device and computer-readabel storage medium
CN115668263A (en) Identification of physical products for augmented reality experience in messaging systems
CN111626130A (en) Skin color identification method and device, electronic equipment and medium
WO2023003642A1 (en) Adaptive bounding for three-dimensional morphable models
CN113837236A (en) Method and device for identifying target object in image, terminal equipment and storage medium
Ghayoumi et al. Improved human emotion recognition using symmetry of facial key points with dihedral group
CN116012218A (en) Virtual anchor expression control method, device, equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant