GB2343945A - Photographing or recognising a face - Google Patents

Photographing or recognising a face

Info

Publication number
GB2343945A
GB2343945A (application GB9926654A)
Authority
GB
United Kingdom
Prior art keywords
face
image
check area
contour
photographing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9926654A
Other versions
GB2343945B (en)
GB9926654D0 (en)
Inventor
Woon Yong Kim
Jun Hyun Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SINTEC Co Ltd
Original Assignee
SINTEC Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1019990021715A external-priority patent/KR100326203B1/en
Priority claimed from KR1019990045407A external-priority patent/KR100347058B1/en
Application filed by SINTEC Co Ltd filed Critical SINTEC Co Ltd
Publication of GB9926654D0 publication Critical patent/GB9926654D0/en
Publication of GB2343945A publication Critical patent/GB2343945A/en
Application granted granted Critical
Publication of GB2343945B publication Critical patent/GB2343945B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782Systems for determining direction or deviation from predetermined direction
    • G01S3/785Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864T.V. type tracking systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

For intelligently photographing or recognising a face, the position of the face is identified in an image from a camera (10), a check area is selected in order to track the face, and the pan/tilt or zoom of the camera is controlled (20) to track the identified face. The presence of a face, and whether the face is abnormal (e.g. masked), is determined by analysing the contour of the face. The eyes, nose and mouth may be checked, and an alarm may be generated (60) if a dangerous person is detected. The image of the face may be stored (50) and used for identification of a person, or transmitted to another location. The image may be analysed for areas of skin colour or movement.

Description

METHOD AND APPARATUS FOR PHOTOGRAPHING/RECOGNIZING A FACE
BACKGROUND OF THE INVENTION

1. Technical Field

The present invention relates to a method and apparatus for photographing/recognizing a face which can intelligently photograph an object captured by a camera. In particular, the present invention relates to a method and apparatus for photographing/recognizing a face which can distinguish a specific person by photographing and recognizing the face portion of the person accurately, adjusting the camera according to the motion of the object, and which can generate an alarm for a dangerous character, so that it may be applied to a security system, an image recognizing system, a picture telephone, an internet on-line game, a virtual reality system, etc.

2. Description of the Prior Art
In order to photograph a face, a conventional unmanned monitoring or face recognizing system employs at least one monitoring camera installed in a desired position such as a ceiling or wall. Using the image signals from the camera, the conventional system can remotely monitor a specific field or record the whole scene of the field without an operator. Such systems are usually used for judging the condition of a specific field.
However, such an unmanned monitoring or face recognizing system must capture image signals of the field from a designated position (at a fixed height or distance). Therefore, the image signals are irregular owing to variation in the height of an object or the distance between the object and the camera. Furthermore, an object or face which is hidden, or which moves fast to get away from the camera, can hardly be recognized. Recognizable image signals can only be obtained by photographing the object from the front or by artificially adjusting the camera using a remote controller or a direction adjusting key.
Therefore, the conventional system has trouble recognizing a face in moving pictures from such unrecognizable image data. Because it is very difficult to find an intact face image by retrieving stored data, the image data obtained as above cannot provide sufficient evidence when a security problem occurs. In addition, because the stored data is in the form of moving pictures, a high-capacity storing unit is needlessly required to store it.
In order to overcome such problems, a technique is needed for photographing a face image from the front more accurately by moving the camera according to the motion of the object.
SUMMARY OF THE INVENTION
The present invention is designed to solve such problems. An object of the present invention is to provide a method and apparatus for photographing a face which can photograph a face image as a still picture by detecting the skin color and motion of an object captured by a camera, automatically selecting the most appropriate object, and making the camera track the object automatically without any artificial manipulation, and a method and apparatus for recognizing a face which can accurately recognize a specific person by analyzing the face image of the captured object.
Another object of the present invention is to provide a method and apparatus for recognizing a face which can cope with any possible emergency in advance by identifying a face having risk factors (such as a strange mask, dark sunglasses or a pressed-down cap) by analyzing the face image signals of the selected object, and generating an alarm for a dangerous person.
In order to attain the above objects, one embodiment of the present invention provides a method of photographing a face comprising: an initial analyzing step of identifying the position of the face by sampling face data from image signals inputted from a camera installed in a predetermined position, and then selecting a check area in order to track the identified face; a pan/tilt or zoom control step of moving the camera in the right/left, up/down or forward/backward direction in order to track the identified face in the selected check area; a determining step of determining whether a face exists or is abnormal by sampling and analyzing the contour of the face identified in the check area; and a storing and transmitting step of storing an image of the determined face or transmitting the image to another recognizing system.
In this embodiment, the method of photographing a face may further include the step of generating an alarm about an abnormal face after analyzing and determining the contour of the face sampled in the check area.
In order to accomplish the above objects, another embodiment of the present invention provides a method for recognizing a face comprising: an initial analyzing step of identifying the position of the face by sampling face data from image signals inputted from a camera installed in a predetermined position, and then selecting a check area in order to track the identified face; a pan/tilt or zoom control step of moving the camera in the right/left, up/down or forward/backward direction in order to track the identified face in the selected check area; a determining step of determining whether a face exists or is abnormal by sampling and analyzing the contour of the face identified in the check area; and composing a personal mask by generating 3-dimensional information for a face image in the check area in case a face exists, and then recognizing a specific person by comparing the personal mask with a previously stored personal mask.
To attain the above objects, still another embodiment of the present invention provides a method for recognizing a face comprising the steps of: selecting a plurality of check areas in image signals captured by a camera, selecting a specific check area by determining whether a face exists in the corresponding check area, and adjusting the direction of the camera according to the motion of an object in the selected check area; rearranging images by detecting characteristics of the image signal where a face exists in the check area after determining whether a face exists, generating 3-dimensional information according to the displacement between the rearranged images, and composing a face mask of the object using the 3-dimensional information; and determining whether an image of a specific person is registered or not by searching for the composed face mask among personal registration masks in a database.
To accomplish the above objects, another embodiment of the present invention provides a method for recognizing a face comprising the steps of: selecting a plurality of check areas by respectively sampling skin color data and motion data from image signals inputted through a camera, selecting a specific check area by inspecting face components in the image signals of the corresponding check area, rearranging an image by retrieving characteristics of the selected check area, and adjusting the direction of the camera while determining whether a face exists using the rearranged image; rearranging images of the image signals where a face exists in the check area, generating 3-dimensional information according to the displacement between the rearranged images, and composing a personal face mask by comparing the 3-dimensional information with a reference mask; and determining whether an image of a specific person is registered or not by searching for the composed face mask among personal registration masks in a database.
To accomplish the above objects, the present invention also provides an apparatus for photographing a face comprising: image input means for capturing an image of an object photographed by a camera installed in a predetermined position, and receiving serial image signals; pan/tilt or zoom driving means for moving the camera of the image input means in the right/left, up/down or forward/backward direction; main control means for controlling the pan/tilt or zoom driving means such that the camera tracks the motion of the object, selecting a check area by detecting the skin color and motion of the object from the image signals inputted from the image input means, retrieving face portions from the image signals in the selected check area, and storing and displaying the face portions; storing or retrieving means for storing the image signals from the main control means or retrieving a face portion from the image signals; and display means for outputting the image signal corresponding to the face portion outputted from the main control means.
In this embodiment, the apparatus for photographing a face may further include an alarm generating unit for generating an alarm about an abnormal face after analyzing and determining the contour of the face sampled in the check area.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings, in which like components are referred to by like reference numerals. In the drawings:
FIG. 1 is a schematic block diagram showing the configuration of an apparatus for photographing/recognizing a face according to one embodiment of the present invention;
FIG. 2 is a functional block diagram showing the main controller shown in FIG. 1;
FIG. 3 shows the signal flow of an image input process;
FIG. 4 shows the signal flow of an initial analyzing process;
FIG. 5 shows the signal flow of a determining process;
FIG. 6 is a reference showing a screen and a screen movable position for illustrating a pan/tilt or zoom control process;
FIG. 7 is a reference for illustrating the process of photographing an object in the center of the screen at an appropriate distance when an object is captured in a photographing region;
FIG. 8 is a functional block diagram for illustrating an image storing and transmitting process;
FIG. 9 shows a face template (a) and an input image (b) used for retrieving a face portion;
FIG. 10 is a flow chart outlining the step of processing image signals in order to explain the method for photographing/recognizing a face according to one embodiment of the present invention;
FIG. 11 is a flow chart for illustrating a face data normalizing process;
FIG. 12 shows contour sampling arrays of the face data;
FIG. 13 is a flow chart for illustrating the step of adjusting the strength of the contour of the face data according to differences in brightness;
FIG. 14 shows a template around the eyes;
FIG. 15 shows a template (a) and contour data (b) around the mouth;
FIG. 16 is a block diagram for illustrating a method for photographing/recognizing a face according to another embodiment of the present invention;
FIG. 17 is a flow chart for illustrating the method for photographing/recognizing a face implemented in FIG. 16; and
FIG. 18 is a flow chart for illustrating another method for photographing/recognizing a face implemented in FIG. 16.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 to FIG. 14 are block diagrams of circuit configurations and flow charts of signal processes for illustrating the face photographing/recognizing method and apparatus according to one embodiment of the present invention. FIG. 16 to FIG. 18 are a block diagram and flow charts for illustrating the face photographing/recognizing method and apparatus according to another embodiment of the present invention.
At first, one embodiment of the present invention is described in detail referring to FIG. 1 to FIG. 15.
FIG. 1 is a schematic block diagram showing the circuit configuration for implementing the apparatus for photographing/recognizing a face.
As shown in the figure, the apparatus of the present invention includes an image input unit 10 for receiving serial image signals by capturing an image of an object photographed by a camera installed in a predetermined position. The apparatus also includes a pan/tilt or zoom driving unit 20 for moving the camera of the image input unit 10 in the right/left, up/down or forward/backward direction, and a main controller 30 for controlling the pan/tilt or zoom driving unit 20 such that the camera tracks the motion of the object. The main controller 30 also selects a check area by detecting the skin color and motion of the object from the image signals inputted from the image input unit 10, retrieves face portions from the image signals in the selected check area, and then stores and transmits the face portion of the image signals to a display discussed below. The apparatus is also provided with a storing or retrieving unit 50 for storing the image signals from the main controller 30 or retrieving a face portion from the image signals. The display 40 is also included in the apparatus, for outputting the image signal corresponding to the face portion outputted from the main controller 30.
In addition, the apparatus of the present invention may include an alarm generating unit 60 for generating an alarm by using a visual display or an audio device according to the results of face retrieval from the main controller 30.
The image input unit 10 is installed so as to be movable in the up/down, right/left or forward/backward direction by the pan/tilt or zoom driving unit 20 mounted outside. In addition, the image input unit 10 includes a camera (e.g. a digital camera in the present invention) for converting the image captured by the lens into an image signal defined in a format for digital processing (e.g. RGB or YUV). The image input unit 10 may also include an image processor for processing the image signal at a regular frame interval and then providing serial digital image data (e.g. 15-30 frames per second).
FIG. 2 is a block diagram showing the functional inner configuration of the main controller 30 of FIG. 1. As shown in the figure, the main controller 30 includes an initial analyzing unit 31 for selecting a check area by sampling portions which show skin color or motion from the input images from the image input unit 10. The main controller 30 also includes a determining unit 32 for sampling the contour of the face from the check area, comparing the contour with a predetermined face template, determining whether a face exists or is abnormal according to the comparison, and then selecting an object for tracking. The main controller 30 includes an object managing unit 33 for determining whether the object for tracking selected by the determining unit 32 is positioned in the center of the photographing region, and then calculating and outputting an adjustment value of the camera for tracking the motion of the corresponding object. The main controller 30 also includes a pan/tilt or zoom control unit 34 for generating a pan/tilt or zoom control signal in order to move the image input unit 10 in the right/left, up/down or forward/backward direction according to the adjustment value of the object managing unit 33. In addition, the main controller 30 includes an image storing and transmitting unit 35 for storing the face image captured by the image input unit 10 and transmitting the image to the display 40 and the storing or retrieving unit 50, respectively. The main controller 30 also includes a face recognizing and retrieving unit 36 for recognizing and retrieving the stored face image, and then transmitting the results to the alarm generating unit 60.
The initial analyzing unit 31, as shown in FIG. 3, detects a moving portion S316 by comparing current image data S314 sampled from an input image signal S313 with previously stored image data S315. The initial analyzing unit 31 then detects a portion, whose proportions of color elements (e.g. red R, green G, blue B, or YUV) are in the range of skin color, in the image signal from the image input unit 10, S317. The initial analyzing unit 31 then selects a portion where both skin color and motion are detected as a check area, S318.
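The check-area selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the RGB skin-colour range and the motion threshold are assumed values, since the patent does not specify them.

```python
# Hypothetical sketch of the initial analyzing step (S313-S318): a pixel
# joins the check area when it both changed since the previous frame
# (motion) and falls inside an assumed RGB skin-colour range.

def is_skin(r, g, b):
    # Crude RGB skin-colour heuristic (an assumption, not from the patent).
    return (r > 95 and g > 40 and b > 20 and
            r > g and r > b and (r - min(g, b)) > 15)

def select_check_area(prev_frame, cur_frame, motion_threshold=30):
    """Return the set of (x, y) pixels where both motion and skin colour
    are detected. Frames are 2-D lists of (R, G, B) tuples."""
    check_area = set()
    for y, row in enumerate(cur_frame):
        for x, (r, g, b) in enumerate(row):
            pr, pg, pb = prev_frame[y][x]
            moved = abs(r - pr) + abs(g - pg) + abs(b - pb) > motion_threshold
            if moved and is_skin(r, g, b):
                check_area.add((x, y))
    return check_area
```

In a real system the resulting pixel set would be grouped into one or more rectangular check areas before being handed to the determining step.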
The determining unit 32, as shown in FIG. 4, samples a contour of the face in the check area D selected from the camera photographing region S by the initial analyzing unit 31, S411. After that, the determining unit 32 converts the size of a previously stored face template to that of the check area D, and then compares the template with the contour. Then, if a contour approximating the face template is detected in the check area D, the determining unit 32 recognizes that a face exists in the check area D, S412.
In addition, as shown in FIG. 5, after determining that a face exists by comparing the template with the contour S511, the determining unit 32 determines whether there is a previous object for tracking, S512. After the determination, the determining unit 32 selects an object for tracking from the results of the comparison between the currently selected image data and the previously stored image data, S513, S514. At this time, in case two or more objects for tracking are detected, the determining unit 32 selects the image closest to the previously stored object for tracking as the new object for tracking, S514. If there is no previous object for tracking, the determining unit 32 selects the image nearest to the center of the photographing region as the new object for tracking, S513.
The object managing unit 33, as shown in FIG. 6, reads the location (SX, SY) of the screen in the whole region which can be photographed by the camera and then determines whether the object for tracking selected by the determining unit 32 is positioned in the center (x, y) of the camera photographing region. At the same time, the object managing unit 33 calculates the location (MX, MY) to which the screen will move, within the whole area (Pmax, Tmax) which the camera can cover through pan/tilt control, and then outputs the camera adjustment value to the pan/tilt or zoom control unit 34.
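The adjustment-value calculation can be illustrated with a small sketch. The degrees-per-pixel scale and the [0, max] mechanical range are assumptions for illustration; the patent only states that the target location is computed within the reachable area (Pmax, Tmax).

```python
def pan_tilt_adjustment(obj_x, obj_y, screen_w, screen_h,
                        cur_pan, cur_tilt, pan_max, tilt_max,
                        degrees_per_pixel=0.1):
    """Compute a new (pan, tilt) that moves the tracked object toward the
    screen centre, clamped to the camera's mechanical range [0, max].
    The degrees-per-pixel scale is an assumed calibration constant."""
    dx = obj_x - screen_w / 2          # positive: object right of centre
    dy = obj_y - screen_h / 2          # positive: object below centre
    new_pan = min(max(cur_pan + dx * degrees_per_pixel, 0), pan_max)
    new_tilt = min(max(cur_tilt + dy * degrees_per_pixel, 0), tilt_max)
    return new_pan, new_tilt
```

For example, an object 100 pixels right of centre on a 640x480 screen nudges the pan angle by 10 degrees; requests beyond (Pmax, Tmax) are clamped to the limit.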
As shown in FIG. 7, the pan/tilt or zoom control unit 34 generates a pan/tilt or zoom signal in order to move the camera of the image input unit 10 in the right/left, up/down or forward/backward (toward/away from the paper in the figure) direction according to the camera adjustment value from the object managing unit 33. If an object (d) is detected by the camera as shown in FIG. 7 (B), the pan/tilt or zoom control unit 34 positions the object (d) at the center of the screen as shown in FIG. 7 (C).
As shown in the signal flow of FIG. 8, the image storing and transmitting unit 35 converts the face image S611 captured by the image input unit 10 to a regular size S612, and then stores the face image as a bitmap picture (BMP) S613. The image storing and transmitting unit 35 then stores the face image in a small-format type (e.g. JPEG) for a predetermined long time. In addition, the image storing and transmitting unit 35 transmits the image of the object for tracking to the display 40 for displaying the image on the screen S615, or to the face recognizing and retrieving unit 36 for recognizing the face S616.
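The conversion to a regular size (S612) can be sketched with a nearest-neighbour resize; the patent does not specify the resampling method, so this interpolation choice is an assumption.

```python
def resize_nearest(pixels, out_w, out_h):
    """Nearest-neighbour resize of a 2-D pixel grid to a regular size,
    a stand-in for the normalisation done before storing (S612)."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [[pixels[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

The resized grid would then be written out as BMP for short-term storage and re-encoded as JPEG for long-term storage, as the flow above describes.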
The face recognizing and retrieving unit 36 retrieves the characteristics of a person in order to find the face of a particular person in the input face data. The face recognizing and retrieving unit 36 then outputs face image data corresponding to the characteristics (the same characteristics as the face template of FIG. 9 (a) and the face image of FIG. 9 (b)).
FIG. 10 is a flow chart outlining the image signal process, for explaining one embodiment of the face photographing/recognizing method according to the present invention.
As shown in the figure, the face photographing/recognizing method according to one embodiment of the present invention includes initial analyzing steps S101-S109, determining steps S111-S119, and an image storing and transmitting step S121. The initial analyzing steps S101-S109 select a check area by sampling portions having skin color and motion from the input image photographed through the image input unit 10 installed at a predetermined position. The determining steps S111-S119 select an object for tracking by sampling a face contour from the selected check area, comparing the contour with a predetermined face template, and then determining whether a face exists or is abnormal according to the comparison. The image storing and transmitting step S121 recognizes and retrieves the face image captured by the image input unit 10, and then stores the inputted face image and the contour of the object for tracking sampled from the face image.
Though not shown in the figure, another embodiment of the present invention may include an object managing step and a pan/tilt or zoom control step. The object managing step determines whether the object for tracking selected in the determining step is positioned in the center of the camera photographing region, and then calculates a camera adjustment value for tracking the motion of the corresponding object. The pan/tilt or zoom control step controls the pan/tilt or zoom function in order to move the image input unit 10 in the right/left, up/down or forward/backward direction according to the camera adjustment value calculated in the object managing step.
Still another embodiment of the present invention may further include an alarm generating step S120 for generating an alarm when the face is abnormal, after analyzing and retrieving the face contour in the sampled check area.
In addition, a further embodiment of the present invention may include the steps of generating 3-dimensional information for a face image in case a face exists in the check area, composing a personal mask using the 3-dimensional information, and then recognizing a specific person by comparing the composed personal mask with personal masks previously stored in, for example, a database, such that it can be applied to face image recognition.
Furthermore, the present invention may store an instantaneous face image (a face image as a still picture), accurately photographed in the check area through the initial analyzing step and the pan/tilt or zoom control step, in a separate storing unit so that the instantaneous face image can be applied to an image recognizing device for an investigation purpose, a security device designed for regulating machines by recognizing a face, and so on.
FIG. 11 is a flow chart for illustrating a face data normalizing process using the arrangement of the eyes. FIG. 12 shows contour sampling arrays of the face data. FIG. 13 is a flow chart for illustrating the step of adjusting the strength of the contour of the face data according to differences in brightness. FIG. 14 shows a template around the eyes and FIG. 15 shows a template (a) and contour data (b) around the mouth.
Operation and effect of the present invention will be explained referring to the figures.
Referring to FIG. 11, the inclination is examined by using the arrangement of the eyes retrieved from the face data, S711. The face data is then normalized S712, so that face data having a regular size and shape can be sampled S713.
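The normalization by eye arrangement can be sketched as a rotation that makes the eye line horizontal. The landmark-based formulation below is an assumed reading of the flow chart; the patent itself only names the steps S711-S713.

```python
import math

def eye_inclination(left_eye, right_eye):
    """Angle (radians) of the line through the two eye centres (S711)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)

def normalize_point(p, centre, angle, scale=1.0):
    """Rotate a landmark about the eye midpoint by -angle (and optionally
    rescale) so that the eyes end up horizontal (S712)."""
    x, y = p[0] - centre[0], p[1] - centre[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (centre[0] + scale * (x * c - y * s),
            centre[1] + scale * (x * s + y * c))
```

Applying `normalize_point` to every pixel coordinate (with `scale` chosen from the inter-eye distance) yields face data of regular size and orientation (S713).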
Referring to FIG. 13, at first, the characteristics of the contour of the eyes, nose, mouth and eyebrows of the input face data are sampled S811. The arrangements of the sampled contour are substituted for each pixel S812. In addition, in order to determine the strength relative to brightness, an average brightness is calculated from the input face data S813. After that, the average brightness is compared with the brightness of the pixels S814 so as to generate horizontal contour sampling data S815.
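A brightness-relative horizontal contour pass in this spirit might look like the sketch below. The vertical-difference operator and the scale factor tying the threshold to the average brightness (S813-S814) are assumptions; the patent only says the average is compared with pixel brightness.

```python
def horizontal_contour(gray, threshold_scale=1.0):
    """Mark pixels where the vertical brightness change is strong
    relative to the image's average brightness (S813-S815).
    `gray` is a 2-D list of grey levels."""
    h, w = len(gray), len(gray[0])
    avg = sum(sum(row) for row in gray) / (h * w)
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(w):
            diff = abs(gray[y + 1][x] - gray[y - 1][x])
            # Darker images get a lower effective threshold, so the
            # contour strength adapts to the overall brightness.
            if diff > avg * threshold_scale * 0.5:
                out[y][x] = 1
    return out
```

Tying the threshold to the average brightness keeps contour strength comparable between bright and dim captures, which appears to be the point of steps S813-S814.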
At first, a digital image inputted from the camera of the image input unit 10 is defined in a format for digital processing (e.g. RGB or YUV) and provides the resolution (e.g. 640x480 pixels) required by the image processor. At this time, the current input image is compared with the previous input image, and then stored as the updated reference image (the previous input image for the next input image). In addition, a check area is selected by detecting the skin color and motion of an object for tracking through comparison between the current input data and the previous input data.
After that, the initial analyzing unit 31 of the main controller 30 selects a range for retrieval by detecting a portion having a skin color consisting of RGB proportions, and motion, through image sampling of the image data from the image input unit 10. The initial analyzing unit 31 stores the data after detecting the portion and motion, and then uses the data for detecting motion in the next image. In addition, the initial analyzing unit 31 selects a check area by using the motion and transmits the check area to the determining unit 32.
The determining unit 32 then determines whether a face exists by retrieving the check area analyzed by the initial analyzing unit 31. For that purpose, the determining unit 32 first detects a contour where the center portion is dark and the upper and lower portions are bright from the image signal of the check area, as shown in FIG. 12. As a result of the contour detection, a contour S815 in which the eyebrows, eyes, lower part of the nose and mouth stand out can be obtained. The determining unit 32 then determines that a face exists in the check area in case the contour in the check area yields a higher value than a reference value through comparison with a previously stored face template.
Because the check area does not have a regular size, the size of the previously stored template should be changed before the comparison.
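The template comparison can be sketched as an agreement score over binary contour grids, assuming the template has already been resized to the check area. The 0.8 reference value is an assumption; the patent only requires the score to exceed "a reference value".

```python
def template_score(contour, template):
    """Fraction of positions where a binary contour and a binary face
    template agree. Both are 2-D 0/1 lists of the same size."""
    h, w = len(contour), len(contour[0])
    hits = sum(1 for y in range(h) for x in range(w)
               if contour[y][x] == template[y][x])
    return hits / (h * w)

def face_exists(contour, template, reference=0.8):
    """Declare a face present when the match score exceeds the
    reference value (the 0.8 threshold is illustrative only)."""
    return template_score(contour, template) > reference
```

A production system would likely use a normalized cross-correlation rather than simple agreement counting, but the pass/fail structure against a reference value is the same.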
After that, the object managing unit 33 selects an object for tracking by comparing the currently sampled face data with the previous data. If one face is detected in a new screen, that face becomes the object for tracking; if two or more faces are detected, the one closest to the previous object is selected as the object for tracking. If there is no previous object for tracking, the face nearest to the center of the screen is selected as the object for tracking.
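The selection rule just described reduces to a nearest-point choice and can be sketched directly. Representing faces by their centre coordinates is an assumption made for brevity.

```python
def select_tracking_object(faces, previous, screen_centre):
    """Pick the object for tracking among detected face positions:
    nearest to the previous object if one exists, otherwise nearest
    to the screen centre. `faces` is a list of (x, y) face centres."""
    if not faces:
        return None
    anchor = previous if previous is not None else screen_centre
    return min(faces,
               key=lambda f: (f[0] - anchor[0]) ** 2 + (f[1] - anchor[1]) ** 2)
```

When exactly one face is present the rule degenerates to returning that face, matching the first case in the paragraph above.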
Next, the pan/tilt or zoom control unit 34 operates the camera to track the object in case the object for tracking is not positioned at the center of the camera. The pan/tilt or zoom driving unit 20 is attached to the camera of the image input unit 10 such that the camera can be operated by commands from the pan/tilt or zoom control unit 34 of the main controller. The pan/tilt or zoom driving unit 20 is connected to the pan/tilt or zoom control unit 34 through a serial, parallel, universal serial bus (USB), or wireless communication link.
After retrieving the input face images, the image storing and transmitting unit 35 stores an input face image in a separate storing unit as a picture, and transmits the image together with the sampled characteristics to a recognizing system.
The face image retrieving unit 36 detects the characteristics of a specific person in order to find a face in the input face data. The face image retrieving unit 36 then outputs face image data corresponding to the characteristics.
At this time, the processed data (input data) is rectangular image data in which only the face portion is sampled. The size of the data may differ according to the size of the face, and the inclination of the data is not aligned.
Therefore, the process for checking a face first retrieves the eyes, using characteristics of the face components, from the face data as shown in FIG. 10, S101, S103. The process then normalizes the face data using the arrangement of the eyes in order to make the size and shape of the face uniform, as shown in FIG. 11, S105.
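The eye-based normalization can be sketched as computing a roll angle and a scale factor from the two detected eye centers; rotating by the negative of the angle and multiplying by the scale would bring every face to a uniform size and upright pose. The target inter-eye distance of 60 pixels is an assumed constant, not a value from the patent.

```python
import math

def normalisation_params(left_eye, right_eye, target_dist=60.0):
    """Return (roll angle in degrees, scale factor) from two eye centres."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))   # rotate by -angle to level the eyes
    scale = target_dist / math.hypot(dx, dy)   # rescale to the uniform eye spacing
    return angle, scale
```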
After that, the process samples a contour of each component of the face having uniform size and shape, S107, S109, and then retrieves the eye portions with use of an eye template, S111, so as to determine error by examining the retrieved eyes in detail, S113.
Sequentially, the process retrieves a mouth portion with use of mouth and mouth-surrounding templates, S115, and then determines error through detailed examination of the retrieved mouth, S117.
In addition, with use of nose and nose-surrounding templates and eyebrow and eyebrow-surrounding templates (not shown in the figures), the process may retrieve a nose portion and eyebrow portions in order to determine error through detailed examination of the retrieved nose and eyebrows.
Then, the process synthetically determines error with use of the results from the examination of the eyes and the mouth (and possibly the nose and eyebrows), S119.
If there is no error as a result of the eye, nose, mouth, eyebrow or synthetic determination, the process records and stores a face picture of the person and transmits the retrieved data to a door managing system or a passenger identifying system using image recognition, S121.
Therefore, if any problem occurs, a problematic person can easily be found with use of a retrieving program.
On the other hand, if there is error as a result of the eye, nose, mouth, eyebrow or synthetic determination, the process determines that the person has a risk factor, for example wearing a mask, dark sunglasses, or a pressed-down cap. In this case, the process generates an alarm such that a supervisor or a serviceman may cope with the emergency situation, S120.
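The synthetic determination (S119) and its two outcomes (S121, S120) can be sketched as combining the individual component checks, with any failure flagging a possible risk factor. The boolean inputs stand for the results of the earlier template examinations; the returned action labels are illustrative names, not patent terminology.

```python
def synthetic_determination(eyes_ok, mouth_ok, nose_ok=True, brows_ok=True):
    """Combine per-component checks into the S119 synthetic decision."""
    if eyes_ok and mouth_ok and nose_ok and brows_ok:
        return "STORE_AND_TRANSMIT"   # S121: normal face, record and forward it
    return "ALARM"                    # S120: possible mask/sunglasses/cap
```

The nose and eyebrow checks default to passing, mirroring the text's note that those templates are optional extensions.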
The present invention as described above can accurately photograph a face at high speed with use of a camera installed in an entrance door, which makes it possible to record the face of a passenger. In addition, the capacity of the storing unit can be remarkably reduced because the present invention stores only a still picture instead of the moving pictures employed in the prior art.
Moreover, in case that an abnormal person who enters with a mask, dark sunglasses or a cap pressed down, that is, a person having a risk factor, appears, the present invention generates an alarm to a supervisor or a serviceman in advance in order to cope with an emergency situation.
In addition, because a camera which tracks the motion of a face is installed in a door, the present invention can deter criminals, which can prevent crimes more effectively than other supervising systems.
Furthermore, the present invention may overcome the inconvenience of the prior art, which requires showing a face to a fixed camera, because the camera intelligently moves in an up/down or right/left direction in order to capture the face accurately from the front, for example when used in a picture phone.
In addition, the present invention provides a more convenient function because the camera can adapt to a tall or short person, which is impossible for the prior art.
The present invention also has further advantages in that it may identify a passenger in connection with a door system, or can be used for animation production or an on-line real-time simulation game using automatic image tracking.
Hereinafter, another embodiment of the face recognizing method according to the present invention will be described with reference to FIG. 16 to FIG. 18 in detail.
FIG. 16 is a block diagram illustrating the configuration of a face photographing/recognizing system for implementing the face recognizing method according to another embodiment of the present invention.
As shown in the figure, an image input unit 100 photographs an object through a plurality of cameras oriented in predetermined directions.
In general, the image input unit 100 employs at least two CCD cameras 110.
A preprocessing unit 200 receives an image signal from the image input unit 100. The preprocessing unit 200 eliminates noise in the image signal and then samples a contour about the image signal.
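The noise-elimination step can be sketched with a simple 3-tap median filter over a scan line of grey values before the contour is sampled. The patent does not name a specific filter, so the median is an illustrative, commonly used choice for removing impulse noise while preserving edges.

```python
import statistics

def denoise_line(pixels):
    """Median-of-three smoothing over one scan line of grey values."""
    if len(pixels) < 3:
        return list(pixels)
    out = [pixels[0]]                       # keep the border samples as-is
    for i in range(1, len(pixels) - 1):
        out.append(statistics.median([pixels[i - 1], pixels[i], pixels[i + 1]]))
    out.append(pixels[-1])
    return out
```

An isolated noise spike is replaced by its neighbours' value, while a genuine step edge survives, which is why median filtering is preferred over averaging ahead of contour extraction.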
A characteristic sampling unit 300 samples characteristics of the image signal on the basis of the contour of the image signal inputted from the preprocessing unit 200.
A 3-dimensional information sampling unit 400 rearranges the image by rotating, zooming or moving the image signal from the preprocessing unit 200 with use of the characteristics sampled by the characteristic sampling unit 300, and then samples 3-dimensional information with use of displacement information between the rearranged images.
A mask generating unit 500 compares the 3-dimensional information sampled from the 3-dimensional information sampling unit 400 with a predetermined reference mask, and then generates a new mask peculiar to each person.
A database 600 includes a plurality of image data, obtained from image signals of proper users, as masking data.
A shape recognizing unit 700 is generally a central processing unit (CPU). The shape recognizing unit 700 compares the new mask generated in the mask generating unit 500 with the masking data in the database 600, and then determines whether the new mask is the same as the masking data. At this time, if there is a mask in the database 600 that is the same as the new mask, the current image photographed through the camera is recognized as a previously registered mask and the user is classified as a proper user. Otherwise, the image is recognized as an unregistered mask and the user is classified as an improper user.

A storing media 800 includes an operating program for the overall system and a plurality of data required for recognizing faces. The storing media 800 operates according to a control signal from the shape recognizing unit 700 so as to output the previously stored data to the shape recognizing unit 700. The storing media 800 can be a card having semiconductor memory, a hard disk, an optical disk, etc.
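The shape-recognition lookup can be sketched as comparing the freshly generated mask against every registered mask and classifying the user as proper only when some stored mask is sufficiently similar. Representing masks as flat lists of numbers, using an L1 distance, and the threshold of 10.0 are all illustrative assumptions; the patent only states that same/not-same is decided.

```python
def classify_user(new_mask, database, threshold=10.0):
    """new_mask: list of numbers; database: {name: stored mask}.

    Returns ("proper", name) on the first sufficiently close match,
    otherwise ("improper", None) for an unregistered mask.
    """
    for name, stored in database.items():
        dist = sum(abs(a - b) for a, b in zip(new_mask, stored))
        if dist <= threshold:
            return ("proper", name)
    return ("improper", None)
```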
The image input unit 100 may include the CCD camera 110 for photographing an object, an image capturing unit 120 for capturing the signal from the camera in order to display the signal in a designated type, a camera driving control unit 130 for outputting a camera driving control signal by calculating a current rotating range of the CCD camera with use of a check area selected from the characteristic sampling unit 300, and a camera driving unit 140 for rotating the camera in a specific direction according to the camera driving control signal of the camera driving control unit 130.
FIG. 17 is a flow chart illustrating the face photographing/recognizing method according to another embodiment of the present invention. As shown in the figure, the method includes steps S200-S250 of selecting a check area for an image signal captured through the camera, detecting characteristics from the corresponding check area, rearranging the image, detecting a portion where a face exists through retrieval of face components, sampling and examining characteristics of the portion having a face, and then adjusting the direction of the camera while determining whether a face exists. The method further includes a step S260 of rearranging again the rearranged image about the image signal having a face in the check area by the characteristics, generating 3-dimensional information according to displacement information between the rearranged images, and then composing a face mask of the object with use of the 3-dimensional information. The method also further includes a step S270 of determining whether the image for a specific person is registered or not by searching the composed face mask from personal registration masks in the database.
FIG. 18 is a flow chart illustrating the face photographing/recognizing method according to another embodiment of the present invention. The method includes steps S300-S340 of selecting at least one check area by respectively sampling skin color data and motion data from image signals inputted through cameras, matching the template with the face components with use of brightness difference in the corresponding area, and adjusting the direction of the cameras after determining whether a face exists. The method also includes steps S350-S370 of rearranging images about image signals where a face exists in the check area, generating 3-dimensional information according to displacement between the rearranged images, and composing a personal face mask by comparing the 3-dimensional information with a reference mask. The method also further includes steps S380-S400 of determining whether the image for a specific person is registered or not by searching the composed face mask from personal registration masks in a database.
Each embodiment of the present invention as constructed above is operated as follows.
At first, operation of the embodiment in FIG. 17 is explained with reference to the block diagram in FIG. 16.
The image input unit 100 receives an image signal from the CCD camera 110 and the image capturing unit 120. The preprocessing unit 200 eliminates noise, and then executes contour correction and filtering.
The image signal processed in such a manner is then inputted to the characteristic detecting unit 300. The characteristic detecting unit 300 generates motion data by detecting motion of an object through comparison between a previous image and a current image of the image signal. The characteristic detecting unit 300 generates skin color data by detecting a portion in which the proportions of color elements (e.g. R(red), G(green), B(blue) or YUV) are in the range of skin color.
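The skin-color test can be sketched with normalized red/green chromaticity: a pixel counts as skin when the proportions of its color elements fall inside a skin window. The patent only says the proportions lie "in range of skin color", so the numeric thresholds and the r > g > b ordering below are illustrative assumptions.

```python
def is_skin(r, g, b):
    """Rough skin test on normalized RGB proportions (assumed thresholds)."""
    total = r + g + b
    if total == 0:
        return False
    rn, gn = r / total, g / total
    # assumed skin chromaticity window; real systems tune these empirically
    return 0.35 <= rn <= 0.55 and 0.25 <= gn <= 0.37 and r > g > b

def skin_mask(pixels):
    """pixels: list of (r, g, b) tuples; returns a boolean mask."""
    return [is_skin(*p) for p in pixels]
```

Combined with the motion data, portions where both tests fire become check-area candidates, as the next paragraph describes.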
The characteristic detecting unit 300 selects at least one check area from an image signal of a field inputted from the image input unit 100 and the preprocessing unit 200 with use of the motion data and the color data generated as above, S210. The characteristic detecting unit 300 selects a region where a face exists by identifying the face components in the selected check area, S210-1, and then rearranges the input image from the CCD camera 110 and the preprocessing unit 200 with use of the characteristics, S220.
The motion and skin color data of the generated check area, the detected characteristics, and the rearranged image data of the check area are inputted to the 3-dimensional information sampling unit 400 and the camera driving control unit 130, respectively.
The camera driving control unit 130 compares the motion and skin color data of the generated check area, the detected characteristics, and the rearranged image data with previously inputted and processed check area data.
The camera driving control unit 130 also calculates the moving distance of the check area by comparing the data of the two check areas as described above, S230. In addition, a rotating distance and angle are determined so that the CCD camera 110 can be rotated as much as the moving distance in order to photograph the object again.
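The driving-control computation can be sketched as converting the check area's pixel displacement into a rotation angle using the camera's horizontal field of view: a shift of the full frame width corresponds to the full field of view. The 320-pixel frame width and 60-degree field of view are assumed values for illustration.

```python
def rotation_angle(pixel_shift, frame_width=320, fov_deg=60.0):
    """Degrees the camera must pan so the displaced check area recentres.

    Positive shifts (object moved right in the frame) give positive angles.
    """
    return pixel_shift * fov_deg / frame_width
```

The resulting angle is what would be handed to the camera driving unit 140 as the rotating distance in the driving control signal.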
The determined rotating distance and angle of the CCD camera 110 are then inputted as a camera driving control signal to the camera driving unit 140. The camera driving unit 140 receiving the camera driving control signal then rotates the CCD camera 110 as much as the predetermined rotating angle to a certain direction, S240. Such camera control operation is repeated whenever a new object is detected.
On the other hand, the 3-dimensional information sampling unit 400, receiving the motion and skin color data of the check area, the detected characteristics, and the rearranged image data from the characteristic detecting unit 300, rearranges the image one more time with use of the detected characteristics, and then determines whether a face exists in the check area with use of the rearranged characteristics of the image, S250.
As a result of the determination, 3-dimensional information for an image signal having a face in the check area is generated with use of the displacement information between a plurality of the rearranged images. In addition, by comparing the generated 3-dimensional information with a reference mask in the mask generating unit 500, a new personal mask is generated, S260.
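Recovering depth from displacement between the rearranged images can be sketched as classic stereo triangulation: depth is inversely proportional to the per-point disparity between two views. The focal length (in pixels) and camera baseline below are assumed values; the patent does not give the specific geometry.

```python
def depth_from_disparity(disparity_px, focal_px=500.0, baseline_mm=60.0):
    """Triangulated depth in millimetres; None when no displacement exists."""
    if disparity_px <= 0:
        return None          # zero disparity carries no depth information
    return focal_px * baseline_mm / disparity_px

def depth_map(disparities):
    """Convert a list of per-point disparities into 3-D depth samples."""
    return [depth_from_disparity(d) for d in disparities]
```

A depth sample per facial feature point is the kind of 3-dimensional information the mask generating unit 500 could then compare against the reference mask.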
Then, the newly generated mask is compared with the mask data in the database 600 in order to determine whether the same mask exists in the database. If the same mask data exists in the database 600, the new mask is recognized as a registered mask and the object currently captured through the camera is classified as a proper user. On the other hand, if there is no same mask in the database 600, the new mask is recognized as an unregistered mask and the object is classified as an improper user, S270.
Operation of the embodiment in FIG. 18 is explained in conjunction with the block diagram in FIG. 16.
When an image signal is inputted through the CCD camera 110 and the image capturing unit 120 of the image input unit 100 and the preprocessing unit 200, the characteristic detecting unit 300 examines the input signal and then compares a current screen with a previous screen of the image signal. According to the comparison, motion of an object is detected to generate motion data, and a portion in which the proportions of color elements (e.g. R, G, B or YUV) are in the range of skin color is detected to generate color data, S310.
In addition, the characteristic detecting unit 300 selects at least one check area from the image signal of one field with use of the color data and the motion data generated in the above process, S320.
When the check areas are selected, the method of this embodiment detects a portion where a face exists through examination of the face components, S320-1, and then picks out the object for tracking in the corresponding area. The image is rearranged by the characteristics detected by the characteristic detecting unit 300, S330, and then the method identifies the presence of the object for tracking in the corresponding area from the rearranged image, S340. If there is a face of the object for tracking in the corresponding area, the image is rearranged after finely adjusting the camera such that the image signal having a face in the corresponding area is positioned in the center of the screen, S350. Then, 3-dimensional information is generated with use of displacement information among a plurality of the rearranged images, S360. The 3-dimensional information is compared with a reference mask in the mask generating unit 500 so as to generate new masks, each of which is personally different, S370.
If there is no accurate face, the camera is adjusted to return to a normal position (ordinary photographing position) by calculating a camera adjustment value indicating the amount the camera must move, S341. Then, the method of this embodiment executes the step S310 of generating skin data and motion data upon receiving the image signal.
The new mask generated as above is compared with the mask data in the database 600 to determine whether the new mask exists in the database, S380. As a result of the determination, if the same mask exists in the database 600, the new mask is recognized as a registered mask, S390, and the object currently captured by the camera is classified as a proper user. If there is no same mask in the database 600, the new mask is recognized as an unregistered mask, and the object is classified as an improper user.
Therefore, the present invention described above, which identifies a face of a person by sampling skin data and motion data from an image signal, may repeatedly adjust the position of the camera by rotating the camera in a certain direction until the face is fully recognized in case that the face is not recognized accurately. This gives the advantage that the present invention may intelligently photograph an accurate face without requiring the user of the recognizing system to move to a designated position.
The face photographing/ recognizing method and apparatus according to the present invention has been described in detail. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.

Claims (29)

What is claimed is:
1. Method of photographing a face comprising: initial analyzing step for identifying position of the face by sampling face data from image signals inputted from a camera installed in a predetermined position, and then selecting a check area in order to track the identified face; pan/tilt or zoom control step for moving the camera to a right/left, up/down or forward/backward direction in order to track the identified face in the selected check area; determining step for determining whether the face is existing or abnormal by sampling and analyzing contour of the face identified in the check area; and storing and transmitting step for storing image of the determined face or transmitting the image to another recognizing system.
2. Method of photographing a face as claimed in claim 1, further comprising the step of generating alarm about abnormal face after analyzing and determining the contour of the face sampled in the check area.
3. Method of photographing a face as claimed in claim 2, wherein the initial analyzing step comprises the steps of: detecting a moving portion by comparing a currently sampled image data with a previously stored image data; detecting a portion, in which proportions of color elements are in range of skin color, from the currently sampled image data; and selecting a portion where both skin color and motion are detected as a check area.
4. Method for photographing a face as claimed in claim 3, wherein the initial analyzing step selects the image nearer to a previously selected object for tracking as a new object for tracking in case that at least two faces are identified in the check area.
5. Method for photographing a face as claimed in claim 4, wherein the initial analyzing step selects an image near the center of the photographing region as a new object for tracking in case that there is no previously selected object for tracking.
6. Method for photographing a face as claimed in claim 2, wherein the determining step adjusts strength of the contour according to characteristics of the person and/or surroundings by sampling the contour of the face on the basis of average brightness of the image data in the check area.
7. Method for photographing a face as claimed in claim 6, wherein the determining step compares the contour of the image data sampled in the check area with a predetermined face template, and determines that the face exists in the check area when a contour approximates to the face template.
8. Method for photographing a face as claimed in claim 7, wherein the determining step compares the predetermined face template with the contour in the check area by converting size of the predetermined face template to size of the check area.
9. Method for photographing a face as claimed in claim 7, wherein the determining step comprises the steps of: determining whether eyes are normal by partially retrieving the eyes after substituting the sampled face contour for a predetermined template around eyes; determining whether nose and mouth are normal by partially retrieving the nose and the mouth after substituting the sampled face contour for a predetermined template around nose and mouth, after completing the partial retrieving for the eyes; and determining whether the face is wholly normal by substituting numerical data of the sampled face contour for predetermined numerical data of the eyes, the nose and the mouth, after completing the partial retrieving for the nose and the mouth.
10. Method for photographing a face as claimed in claim 2, wherein the face image storing and transmitting step retrieves and stores the face image separately from among the inputted data.
11. Method for recognizing a face comprising:
initial analyzing step for identifying position of the face by sampling face data from image signals inputted from a camera installed in a predetermined position, and then selecting a check area in order to track the identified face; pan/tilt or zoom control step for moving the camera to a right/left, up/down or forward/backward direction in order to track the identified face in the selected check area; determining step for determining whether the face is existing or abnormal by sampling and analyzing contour of the face identified in the check area; and composing a personal mask by generating 3-dimensional information for a face image in the check area in case that the face is existing, and then recognizing a specific person by comparing the personal mask with a previously stored personal mask.
12. Method for recognizing a face as claimed in claim 11, further comprising the step of generating alarm about abnormal face after analyzing and determining the contour of the face sampled in the check area.
13. Method for recognizing a face as claimed in claim 12, wherein the initial analyzing step comprises the steps of:
detecting a moving portion by comparing a currently sampled image data with a previously stored image data; detecting a portion, in which proportions of color elements are in range of skin color, from the currently sampled image data; and selecting a portion where both skin color and motion are detected as a check area.
14. Method for recognizing a face as claimed in claim 13, wherein the initial analyzing step selects the image nearer to a previously selected object for tracking as a new object for tracking in case that at least two faces are identified in the check area.
15. Method for recognizing a face as claimed in claim 14, wherein the initial analyzing step selects an image near the center of the photographing region as a new object for tracking in case that there is no previously selected object for tracking.
16. Method for recognizing a face as claimed in claim 12, wherein the determining step adjusts strength of the contour according to characteristics of the person and/or surroundings by sampling the contour of the face on the basis of average brightness of the image data in the check area.
17. Method for recognizing a face as claimed in claim 16, wherein the determining step compares the contour of the image data sampled in the check area with a predetermined face template, and determines that the face exists in the check area when a contour approximates to the face template.
18. Method for recognizing a face as claimed in claim 17, wherein the determining step compares the predetermined face template with the contour in the check area by converting size of the predetermined face template to size of the check area.
19. Method for recognizing a face as claimed in claim 17, wherein the determining step comprises the steps of: determining whether eyes are normal by partially retrieving the eyes after substituting the sampled face contour for a predetermined template around eyes;
determining whether nose and mouth are normal by partially retrieving the nose and the mouth after substituting the sampled face contour for a predetermined template around nose and mouth, after completing the partial retrieving for the eyes; and determining whether the face is wholly normal by substituting numerical data of the sampled face contour for predetermined numerical data of the eyes, the nose and the mouth, after completing the partial retrieving for the nose and the mouth.
20. Method for recognizing a face comprising the steps of:
selecting a plurality of check areas about image signals captured by a camera, selecting a specific check area by determining whether a face exists in the corresponding check area, and adjusting direction of the camera according to motion of an object in the selected check area; rearranging images by detecting characteristics about image signal where a face is existing in the check area after determining whether a face exists, generating 3-dimensional information according to displacement between the rearranged images, and composing a face mask of the object with use of the 3-dimensional information; and determining whether image for a specific person is registered or not by searching the composed face mask from personal registration masks in a database.
21. Method for recognizing a face comprising the steps of:
selecting a plurality of check areas by respectively sampling skin color data and motion data from image signals inputted through a camera, selecting a specific check area by retrieving a specific check area by inspecting face components about image signals in the corresponding check area, rearranging image by retrieving characteristics of the selected check area, and adjusting direction of the camera with determining whether a face exists or not with use of the rearranged image; rearranging images about image signals where a face is existing in the check area, generating 3-dimensional information according to displacement between the rearranged images, and composing a personal face mask by comparing the 3-dimensional information with a reference mask; and determining whether image for a specific person is registered or not by searching the composed face mask from personal registration masks in a database.
22. Apparatus for photographing a face comprising: image input means for capturing image of an object photographed by a camera installed in a predetermined position, and receiving serial image signals; pan/tilt or zoom driving means for moving the camera of the image input means to right/left, up/down or forward/ backward direction; main control means for controlling the pan/tilt or zoom driving means such that the camera tracks motion of the object, selecting a check area by detecting skin color and motion of the object from the image signals inputted from the image input means, retrieving face portions from the image signals in the selected check area, and storing and displaying the face portions; storing or retrieving means for storing the image signals from the main control means or retrieving face portion from the image signals; and display means for outputting the image signal corresponding to the face portion outputted from the main control means.
23. Apparatus for photographing a face as claimed in claim 22, further comprising alarm generating means for generating alarm according to results of face retrieval from the main control means.
24. Apparatus for photographing a face as claimed in claim 23, wherein the main control means comprises:
initial analyzing unit for selecting a check area by sampling portions which show skin color or motion from input images from the image input means; determining unit for sampling contour of the face from the check area, comparing the contour with a predetermined face template, determining whether the face is existing or abnormal according to the comparison, and then selecting an object for tracking; object managing unit for determining whether the object for tracking selected by the determining unit is positioned in center of the camera photographing region, and calculating and outputting a camera adjustment value of the camera for tracking motion of the corresponding object; pan/tilt or zoom control unit for generating a pan/tilt or zoom control signal in order to move the image input means to a right/left, up/down or forward/backward direction according to the camera adjustment value from the object managing unit; image storing and transmitting unit for storing the face image captured by the image input means and transmitting the image to the display means; and face recognizing and retrieving unit for recognizing and retrieving the stored face image.
25. Apparatus for photographing a face as claimed in claim 24, wherein the initial analyzing unit detects a moving portion by comparing a currently sampled image data with a previously stored image data, detects a portion, in which proportions of color elements are in range of skin color, and selects a portion where both skin color and motion are detected as a check area.
26. Apparatus for photographing a face as claimed in claim 24, wherein the determining unit determines that a face exists in the check area when detecting a contour approximate to the face template after comparing the detected contour with the previously stored face template.
27. Apparatus for photographing a face as claimed in claim 26, wherein the determining unit compares the predetermined face template with the contour in the check area by converting size of the previously stored face template to the size of the check area.
28. Apparatus for photographing a face as claimed in claim 27, wherein, when selecting an object for tracking by comparing a currently sampled image data with the previously stored image data, in case that two or more objects for tracking are detected, the determining unit selects an image most approximate to a previously selected object for tracking as a new object for tracking.
29. Apparatus for photographing a face as claimed in claim 27, wherein, when selecting an object for tracking by comparing a currently sampled image data with the previously stored image data, in case that two or more objects for tracking are detected and there is no previously selected object for tracking, the determining unit selects an image nearest to center of photographing region as a new object for tracking.
GB9926654A 1998-11-18 1999-11-10 Method and apparatus for photographing/recognizing a face Expired - Fee Related GB2343945B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR19980049468 1998-11-18
KR1019990021715A KR100326203B1 (en) 1999-06-11 1999-06-11 Method and apparatus for face photographing and recognizing by automatic trading a skin color and motion
KR1019990045407A KR100347058B1 (en) 1998-11-18 1999-10-19 Method for photographing and recognizing a face

Publications (3)

Publication Number Publication Date
GB9926654D0 GB9926654D0 (en) 2000-01-12
GB2343945A true GB2343945A (en) 2000-05-24
GB2343945B GB2343945B (en) 2001-02-28

Family

ID=27349845

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9926654A Expired - Fee Related GB2343945B (en) 1998-11-18 1999-11-10 Method and apparatus for photographing/recognizing a face

Country Status (3)

Country Link
JP (1) JP2000163600A (en)
DE (1) DE19955714A1 (en)
GB (1) GB2343945B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4597391B2 (en) * 2001-01-22 2010-12-15 本田技研工業株式会社 Facial region detection apparatus and method, and computer-readable recording medium
WO2003077552A1 (en) 2002-02-13 2003-09-18 Reify Corporation Method and apparatus for acquisition, compression, and characterization of spatiotemporal signals
JP3999561B2 (en) 2002-05-07 2007-10-31 松下電器産業株式会社 Surveillance system and surveillance camera
JP4317465B2 (en) * 2004-02-13 2009-08-19 本田技研工業株式会社 Face identification device, face identification method, and face identification program
DE102004015806A1 (en) * 2004-03-29 2005-10-27 Smiths Heimann Biometrics Gmbh Method and device for recording areas of interest of moving objects
KR100587430B1 (en) 2004-09-23 2006-06-09 전자부품연구원 System and method for photographing after correcting face pose
JP4862447B2 (en) 2006-03-23 2012-01-25 沖電気工業株式会社 Face recognition system
JP5099488B2 (en) 2007-08-31 2012-12-19 カシオ計算機株式会社 Imaging apparatus, face recognition method and program thereof
KR101335346B1 (en) * 2008-02-27 2013-12-05 소니 컴퓨터 엔터테인먼트 유럽 리미티드 Methods for capturing depth data of a scene and applying computer actions
JP2010080993A (en) * 2008-09-23 2010-04-08 Brother Ind Ltd Intercom system
US20130054377A1 (en) * 2011-08-30 2013-02-28 Nils Oliver Krahnstoever Person tracking and interactive advertising
CN102324024B (en) * 2011-09-06 2014-09-17 苏州科雷芯电子科技有限公司 Airport passenger recognition and positioning method and system based on target tracking technique
JP6565061B2 (en) * 2015-08-24 2019-08-28 株式会社テララコード研究所 Viewing system
CN112172299A (en) * 2016-06-15 2021-01-05 浙江天振科技股份有限公司 Long decorative material with emboss and pattern coincident, and rolling method and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0578508A2 (en) * 1992-07-10 1994-01-12 Sony Corporation Video camera with colour-based target tracking system
EP0598355A1 (en) * 1992-11-17 1994-05-25 Alcatel SEL Aktiengesellschaft Camera control for a videophone
JPH09107534A (en) * 1995-10-11 1997-04-22 Canon Inc Video conference equipment and video conference system
US5631697A (en) * 1991-11-27 1997-05-20 Hitachi, Ltd. Video camera capable of automatic target tracking
JPH09149391A (en) * 1995-11-17 1997-06-06 Kyocera Corp Television telephone device

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6411209B1 (en) 2000-12-06 2002-06-25 Koninklijke Philips Electronics N.V. Method and apparatus to select the best video frame to transmit to a remote station for CCTV based residential security monitoring
US6441734B1 (en) 2000-12-12 2002-08-27 Koninklijke Philips Electronics N.V. Intruder detection through trajectory analysis in monitoring and surveillance systems
US6593852B2 (en) 2000-12-12 2003-07-15 Koninklijke Philips Electronics N.V. Intruder detection through trajectory analysis in monitoring and surveillance systems
US6690414B2 (en) 2000-12-12 2004-02-10 Koninklijke Philips Electronics N.V. Method and apparatus to reduce false alarms in exit/entrance situations for residential security monitoring
US6744462B2 (en) 2000-12-12 2004-06-01 Koninklijke Philips Electronics N.V. Apparatus and methods for resolution of entry/exit conflicts for security monitoring systems
US7206029B2 (en) 2000-12-15 2007-04-17 Koninklijke Philips Electronics N.V. Picture-in-picture repositioning and/or resizing based on video content analysis
US6525663B2 (en) 2001-03-15 2003-02-25 Koninklijke Philips Electronics N.V. Automatic system for monitoring persons entering and leaving changing room
US7787025B2 (en) 2001-09-18 2010-08-31 Ricoh Company, Limited Image pickup device that cuts out a face image from subject image data
US7903163B2 (en) 2001-09-18 2011-03-08 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program
US7920187B2 (en) * 2001-09-18 2011-04-05 Ricoh Company, Limited Image pickup device that identifies portions of a face
US7973853B2 (en) 2001-09-18 2011-07-05 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method calculating an exposure based on a detected face
US7978261B2 (en) 2001-09-18 2011-07-12 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program
US8421899B2 (en) 2001-09-18 2013-04-16 Ricoh Company, Limited Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program
US8228377B2 (en) 2003-09-12 2012-07-24 Logitech Europe S.A. Pan and tilt camera
US7995843B2 (en) 2003-10-21 2011-08-09 Panasonic Corporation Monitoring device which monitors moving objects
US7917935B2 (en) 2004-10-01 2011-03-29 Logitech Europe S.A. Mechanical pan, tilt and zoom in a webcam
US20110181738A1 (en) * 2004-10-01 2011-07-28 Logitech Europe S.A. Mechanical pan, tilt and zoom in a webcam
US7868917B2 (en) 2005-11-18 2011-01-11 Fujifilm Corporation Imaging device with moving object prediction notification
US8599267B2 (en) 2006-03-15 2013-12-03 Omron Corporation Tracking device, tracking method, tracking device control program, and computer-readable recording medium
EP2053853A1 (en) 2007-10-26 2009-04-29 Vestel Elektronik Sanayi ve Ticaret A.S. Detecting the location and identifying a user of an electronic device
US9076035B2 (en) 2012-03-14 2015-07-07 Omron Corporation Image processor, image processing method, control program, and recording medium
WO2013147756A1 (en) * 2012-03-28 2013-10-03 Intel Corporation Content aware selective adjusting of motion estimation
US9019340B2 (en) 2012-03-28 2015-04-28 Intel Corporation Content aware selective adjusting of motion estimation
US20160092724A1 (en) * 2013-05-22 2016-03-31 Fivegt Co., Ltd. Method and system for automatically tracking face position and recognizing face
US10248840B2 (en) * 2013-05-22 2019-04-02 Fivegt Co., Ltd Method and system for automatically tracking face position and recognizing face
AU2014240213A1 (en) * 2014-09-30 2016-04-14 Canon Kabushiki Kaisha System and Method for object re-identification
AU2014240213B2 (en) * 2014-09-30 2016-12-08 Canon Kabushiki Kaisha System and Method for object re-identification
US9852340B2 (en) 2014-09-30 2017-12-26 Canon Kabushiki Kaisha System and method for object re-identification
US11975526B2 (en) 2019-09-25 2024-05-07 Unilin, Bv On-line synchronous registering co-extrusion SPC floor and production process therefor

Also Published As

Publication number Publication date
GB2343945B (en) 2001-02-28
GB9926654D0 (en) 2000-01-12
DE19955714A1 (en) 2000-05-31
JP2000163600A (en) 2000-06-16

Similar Documents

Publication Publication Date Title
GB2343945A (en) Photographing or recognising a face
EP0989517B1 (en) Determining the position of eyes through detection of flashlight reflection and correcting defects in a captured frame
KR101337060B1 (en) Imaging processing device and imaging processing method
US7945938B2 (en) Network camera system and control method therefore
US8605955B2 (en) Methods and apparatuses for half-face detection
US8977056B2 (en) Face detection using division-generated Haar-like features for illumination invariance
US8319851B2 (en) Image capturing apparatus, face area detecting method and program recording medium
US20120281874A1 (en) Method, material, and apparatus to improve acquisition of human frontal face images using image template
JP2007264860A (en) Face area extraction device
KR20010002097A (en) Method and apparatus for face photographing and recognizing by automatic tracking a skin color and motion
KR20050085583A (en) Expression invariant face recognition
CN103353933A (en) Image recognition apparatus and its control method
KR20060119968A (en) Apparatus and method for feature recognition
US20050041111A1 (en) Frame adjustment device and image-taking device and printing device
US11438501B2 (en) Image processing apparatus, and control method, and storage medium thereof
JP2000278584A (en) Image input device provided with image processing function and recording medium recording its image processing program
KR20090023218A (en) Image pickup apparatus, and image pickup method
KR20170015639A (en) Personal Identification System And Method By Face Recognition In Digital Image
US8208035B2 (en) Image sensing apparatus, image capturing method, and program related to face detection
CN111327829B (en) Composition guiding method, composition guiding device, electronic equipment and storage medium
JP2014064083A (en) Monitoring device and method
CN113239774A (en) Video face recognition system and method
JP3774495B2 (en) Image information extracting apparatus and method
WO2002035452A1 (en) Eye image obtaining method, iris recognizing method, and system using the same
TW466452B (en) Method and apparatus for photographing/recognizing a face

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20041110