WO2006054598A1 - Facial feature matching device, facial feature matching method, and program - Google Patents
Facial feature matching device, facial feature matching method, and program
- Publication number
- WO2006054598A1 (PCT/JP2005/021036)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- imaging
- imaging means
- person
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Definitions
- Facial feature matching device, facial feature matching method, and program
- The present invention relates to a facial feature matching device that can collate a captured face with faces registered in advance when, for example, a suspicious person is monitored using a plurality of surveillance cameras.
- More specifically, it relates to a facial feature matching device capable of synthesizing facial images for collation when performing facial matching to identify a person, to a facial feature matching method used therefor, and to a program that causes a computer to execute the facial feature matching.
- Conventionally, a member of the surveillance staff selected one of the installed cameras using a controller or the like and monitored the desired point.
- However, the surveillance camera designated by the observer cannot always capture the most appropriate image.
- As a method for improving this point, for example, the method described in Patent Document 1 is known.
- In this configuration, cameras are installed at a plurality of positions, and from among the plurality of cameras that can be turned toward the position designated by the observer, the camera closest to that position is automatically selected and its image displayed.
- Patent Document 1: JP 2002-77889 A
- Here, the optimal video means a video showing what is necessary for identifying (collating) the target person.
- The present invention has been made in view of the above circumstances, and its object is to provide a facial feature matching device, a facial feature matching method, and a program capable of synthesizing the appropriate images necessary for identifying (matching) the person to be monitored whom the observer wants to see.
- The facial feature matching device of the present invention includes a plurality of imaging means, an imaging control means for controlling the plurality of imaging means, a display means for displaying the images taken by the plurality of imaging means, and a collating means that collates the captured images with pre-registered image data.
- The collating means includes a person detection unit that detects a person from the captured images, a face detection unit that determines a face region from the captured image of the detected person and extracts a face image, a facial feature extraction unit that extracts facial features from the extracted face image, a facial feature synthesis unit that synthesizes the extracted facial features, and a collation unit that collates the synthesized facial features with the face images pre-registered in a face image database.
- In addition, the imaging control means includes an external I/F unit that is an interface with an external device used for control from outside, a coordinate conversion unit that converts the image coordinates input from the external device into world coordinates, and an imaging means control unit that calculates a turning angle for operating each imaging means.
- Furthermore, the collating means includes an imaging means selection unit that selects, from among the plurality of imaging means, an imaging means that detects the position of the detected person and an imaging means that tracks the detected person.
- The facial feature matching method of the present invention uses a plurality of imaging means, an imaging control means for controlling the plurality of imaging means, a display means for displaying the images taken by the plurality of imaging means, and a collating means that collates the captured images with pre-registered image data.
- The method detects a person from the captured images, determines a face area from the captured image of the detected person, and extracts a face image; facial features are then extracted from the extracted face images, the extracted facial features are synthesized, and the synthesized facial features are collated with the face images in a pre-registered face image database.
- The program of the present invention, for an apparatus including a plurality of imaging means, an imaging control means that controls the plurality of imaging means, a display means that displays the images captured by the plurality of imaging means, and a collating means that collates the captured images with pre-registered image data, causes a computer to execute a facial feature matching operation comprising: a step of detecting a person from the captured images; a step of determining a face area from the captured image of the detected person and extracting the face image; a step of extracting facial features from the extracted face image; a step of synthesizing the extracted facial features; and a step of collating the synthesized facial features with the face images in a face image database registered in advance.
- A recording medium storing a program that causes a computer to realize the facial image synthesis method of synthesizing a plurality of facial features can be incorporated into any device.
- According to the present invention, an imaging control means can be provided that controls all the imaging means capable of imaging the person to be monitored selected on the display means, and the face images taken by the plurality of imaging means can be synthesized.
- FIG. 1 is a block diagram showing the configuration of a face feature matching device according to a first embodiment of the present invention.
- FIG. 2 (A) and (B) are explanatory diagrams showing an endpoint extraction method for facial feature synthesis according to the first embodiment of the present invention.
- FIG. 3 (A), (B), and (C) are explanatory diagrams showing facial feature synthesis according to the first embodiment of the present invention, respectively.
- FIG. 4 is a flowchart explaining the operation of the imaging control means according to the first embodiment of the present invention.
- FIG. 5 is a flowchart for explaining the operation of the collating means according to the first embodiment of the present invention.
- FIG. 6 is a configuration block diagram of a face feature matching system according to a second embodiment of the present invention.
- FIG. 7 is a flowchart explaining the operation of the imaging control means according to the second embodiment of the present invention.
- FIG. 8 is a flowchart for explaining the operation of the matching means according to the second embodiment of the present invention.
- FIG. 1 shows a system configuration of a facial feature matching apparatus according to the present invention.
- The apparatus includes a display means 3 for displaying the captured person, and a collating means 4 that detects a person from the captured images, determines the person's face area, extracts the face image, extracts facial features from the extracted face images, synthesizes these facial features, and collates them with the facial features registered in the face image database 5.
- The imaging means 1 is configured with general surveillance cameras (hereinafter referred to as cameras) 1A to 1E, such as fixed cameras or rotating cameras, each of which has a preset imaging area and images that area.
- The imaging control means 2 includes an external I/F unit 21 that is an interface with the external device 6 such as a controller, a coordinate conversion unit 22 that converts the image coordinates input from the external device 6 into the world coordinates recognized by the entire facial feature matching system, and an imaging means control unit 23 that calculates the turning angles for operating the imaging means.
- The collating means 4 includes a video input unit 41, described later in detail, a person detection unit 42 that detects a person from the captured image, a face detection unit 43 that determines the person's face area and extracts the face image, a facial feature extraction unit 44 that extracts facial features from the extracted face images, a face feature holding unit 45 that holds extracted facial features, a face feature synthesis unit 46 that synthesizes a plurality of extracted facial features, and a collation unit 47 that collates the facial features with pre-registered facial features.
- When a person is selected, the image coordinates are output to the external I/F unit 21, then output to the coordinate conversion unit 22, where they are converted from image coordinates into world coordinates.
- The world coordinates are, for example, coordinates expressing the space with the position of a camera taken as the origin (0, 0, 0).
- The world coordinates (X, Y, Z) are uniquely determined from the coordinates (x, y, z) by the following equation (1):

  X = R11·x + R12·y + R13·z + tx
  Y = R21·x + R22·y + R23·z + ty    ... (1)
  Z = R31·x + R32·y + R33·z + tz

- The coefficients R11, R12, R13, R21, R22, R23, R31, R32, R33, tx, ty, and tz appearing in equation (1) may be obtained in advance and used for the conversion into world coordinates.
- These coefficients can be obtained at installation time by placing marks at the surveillance camera's installation site, preparing multiple sets of image coordinates and world coordinates for those marks, and solving equation (1), for example as simultaneous equations.
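- The following sketch shows how such coefficients could be estimated in practice. It is a minimal illustration, not the patent's implementation: it assumes the camera-side marker points are available as 3D coordinates and solves the stacked linear system of equation (1) by least squares with NumPy.

```python
import numpy as np

def fit_world_transform(cam_pts, world_pts):
    """Estimate the coefficients R11..R33, tx, ty, tz of equation (1)
    from matched marker points: cam_pts and world_pts are (N, 3) arrays
    of the same markers in camera-side and world coordinates. Each
    marker yields three linear equations, so N >= 4 determines the
    system."""
    cam = np.asarray(cam_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    n = cam.shape[0]
    A = np.zeros((3 * n, 12))
    b = world.reshape(-1)
    for i, p in enumerate(cam):
        for axis in range(3):                    # one equation per world axis
            row = 3 * i + axis
            A[row, 4 * axis:4 * axis + 3] = p    # R row for this axis
            A[row, 4 * axis + 3] = 1.0           # translation component
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    M = coeffs.reshape(3, 4)
    return M[:, :3], M[:, 3]                     # R (3x3), t = (tx, ty, tz)
```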
- Next, for each camera 1A to 1E, it is determined whether or not it can image the region containing the world coordinates where the monitoring target exists. This can be determined by holding in advance the area that each imaging means 1 can shoot, recorded at installation time. If shooting is possible, the pan angle and tilt angle of the imaging means 1 are calculated from the positional relationship between the preset position, which is the shooting angle of the imaging means 1 in its initial state, and the target world coordinates, and the imaging means is moved accordingly. In this way, the person to be monitored is photographed by every imaging means 1 that can photograph the person selected by the observer.
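- Given a camera's world position and the target world coordinates, the pan and tilt angles can be computed with basic trigonometry. The sketch below assumes the preset (initial) orientation looks along the +X axis with zero tilt; the patent does not specify this convention.

```python
import math

def pan_tilt_to_target(camera_pos, target_pos):
    """Pan/tilt angles (degrees) turning a camera from an assumed
    +X-facing, zero-tilt preset toward a target world coordinate."""
    dx, dy, dz = (t - c for t, c in zip(target_pos, camera_pos))
    pan = math.degrees(math.atan2(dy, dx))                   # horizontal turn
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # vertical turn
    return pan, tilt
```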
- An image captured by the imaging means 1 is input to the video input unit 41.
- the input video of each imaging means is output to the person detection unit 42, and a person is detected from the video of each imaging means.
- Person detection is performed, for example, by capturing the motion change of the image for each frame, the smallest unit captured by the imaging means 1, and judging an object to be a person when an elliptical object sits above the moving, changing region.
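- A minimal sketch of this style of detector is shown below, using OpenCV frame differencing; the thresholds and the "head at the top of the moving region" heuristic are illustrative assumptions, not values from the patent.

```python
import cv2

def detect_people(prev_frame, frame, min_area=500):
    """Frame-difference person detection: find regions that moved
    between consecutive frames and report each region with the point
    where an elliptical (head-like) object would sit, at the top of
    the blob."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    people = []
    for c in contours:
        if cv2.contourArea(c) < min_area:        # drop noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        people.append({"bbox": (x, y, w, h),
                       "head": (x + w // 2, y + h // 8)})
    return people
```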
- Next, the face detection unit 43 detects a face area from the detected person.
- Face detection is performed, for example, by registering and learning a plurality of face images in advance to create an average face, and checking whether there is a region similar to the grayscale image of the average face (that is, whether the correlation is high).
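- The average-face search can be sketched as normalized template matching; the 0.6 acceptance threshold below is an assumption for illustration, not a value from the patent.

```python
import cv2
import numpy as np

def build_average_face(face_crops):
    """Average pre-registered grayscale face crops (all the same size)
    into the 'average face' template described in the text."""
    return np.mean([c.astype(np.float32) for c in face_crops],
                   axis=0).astype(np.uint8)

def find_face(gray_frame, average_face, threshold=0.6):
    """Search for a region similar to the average face by normalized
    cross-correlation; returns ((x, y, w, h), score) or None."""
    scores = cv2.matchTemplate(gray_frame, average_face,
                               cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    if best < threshold:
        return None
    h, w = average_face.shape
    return (loc[0], loc[1], w, h), best
```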
- Next, the facial feature extraction unit 44 extracts facial parts such as the eyes, nose, and mouth that serve as facial features. This facial feature extraction is performed, for example, by registering and learning multiple facial parts such as the right eye, left eye, nose, and mouth to create an average feature, and checking whether there is a region similar to the grayscale image of the average feature (that is, whether the correlation is high).
- Face orientation determination is performed, for example, by registering and learning multiple front faces, faces inclined 45 degrees to the right, 45 degrees to the left, 45 degrees upward, 45 degrees downward, and so on; an average face at each angle is created, and the orientation is determined according to which average face's grayscale image the detected face resembles (that is, where the correlation is high). If the face is determined to be facing the front, it is output to the collation unit 47; if it is determined not to be facing the front, it is output to the face feature holding unit 45 and the face feature synthesis unit 46.
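- Orientation can then be decided by correlating the detected face against each angle's average face and taking the best match. The pose labels below follow the text; equal template and crop sizes are an assumption of this sketch.

```python
import cv2

def estimate_orientation(face_gray, average_faces):
    """average_faces maps pose labels ('front', 'right45', 'left45',
    'up45', 'down45') to templates the same size as face_gray; the
    pose with the highest normalized correlation wins."""
    best_label, best_score = None, -1.0
    for label, template in average_faces.items():
        score = float(cv2.matchTemplate(face_gray, template,
                                        cv2.TM_CCOEFF_NORMED)[0, 0])
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```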
- The matching is performed, for example, using features such as the positional relationships of facial parts such as the eyes, nose, mouth, eyebrows, and face contour, information such as their thickness and length, and facial shading information.
- A registered image with a high degree of matching, that is, a matching degree at or above a certain threshold, is output to the external I/F unit 21, and the result is then displayed on the display means 3.
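- The patent names the feature types but not the similarity metric, so the following score is purely illustrative: a geometric distance over part positions and sizes combined with a correlation over shading, with assumed equal weights.

```python
import numpy as np

def match_score(feat_a, feat_b, w_geom=0.5, w_shade=0.5):
    """feat_* are dicts with 'geometry' (vector of part positions,
    thicknesses, lengths) and 'shading' (grayscale patches of equal
    size). Returns a combined similarity; higher is more similar."""
    ga = np.asarray(feat_a["geometry"], dtype=float)
    gb = np.asarray(feat_b["geometry"], dtype=float)
    geom = float(np.exp(-np.linalg.norm(ga - gb)))   # 1.0 when identical
    sa = feat_a["shading"].astype(np.float32).ravel()
    sb = feat_b["shading"].astype(np.float32).ravel()
    shade = float(np.corrcoef(sa, sb)[0, 1])         # shading correlation
    return w_geom * geom + w_shade * shade
```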
- Next, the end points of the facial parts extracted by the facial feature extraction described above are detected (marked by X in FIG. 2(B)).
- This is done, for example, by extracting the contour line of each extracted facial part and taking its two ends as the end points.
- An inquiry is then made to the face feature holding unit 45 as to whether an image tilted in the opposite direction, for example to the right, is held. If features in the opposite direction have already been extracted, the two images are combined.
- First, the two face images facing in opposite directions are enlarged or reduced so that they have the same magnification.
- Next, rotation and translation are performed so that the corresponding feature end points, such as the right eye, left eye, and the right and left ends of the nose and mouth, overlap in the two images, and the images are combined.
- This rotation and translation are performed by, for example, the well-known affine transformation.
- An affine transformation is a transformation that preserves geometric properties: points lying on a straight line in the original figure remain on a straight line after the transformation, and parallel lines remain parallel.
- Specifically, the coefficients a, b, c, d, e, and f in equations (2) and (3) are obtained, and the coordinates (x, y) corresponding to each coordinate (u, v) are calculated:

  x = a·u + b·v + c    ... (2)
  y = d·u + e·v + f    ... (3)

- Using these, the coordinates of the image in FIG. 3(B) may be transformed.
- Although an example using a linear transformation such as the affine transformation has been shown, the method is not limited to this; other linear transformations or non-linear transformations may be used.
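- The end-point alignment and combination step described above can be sketched as follows; cv2.estimateAffine2D solves for the six coefficients of equations (2) and (3) by least squares over the matched end points, and the 50/50 blend is an assumption (the patent only says the images are combined).

```python
import cv2
import numpy as np

def align_and_combine(img_a, img_b, pts_a, pts_b):
    """Warp img_b onto img_a using matched feature end points
    (eyes, nose ends, mouth corners; at least 3 pairs), then blend."""
    pts_a = np.asarray(pts_a, dtype=np.float32)
    pts_b = np.asarray(pts_b, dtype=np.float32)
    M, _ = cv2.estimateAffine2D(pts_b, pts_a)   # a..f of eqs. (2)-(3)
    h, w = img_a.shape[:2]
    warped = cv2.warpAffine(img_b, M, (w, h))   # rotate/translate/scale
    return cv2.addWeighted(img_a, 0.5, warped, 0.5, 0)
```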
- Next, the composite image is output to the collation unit 47 and collated with the face images registered in advance in the face image database 5.
- The matching is performed using features such as positional information of facial parts such as the eyes, nose, mouth, eyebrows, and face contour, information such as their thickness and length, and facial shading information.
- A registered image with a high degree of coincidence, that is, a matching degree at or above a certain threshold, is output to the external I/F unit 21, and the result is then displayed on the display means 3.
- FIG. 4 is a flowchart for explaining the operation of the imaging control means 2.
- First, when a person displayed on the display means 3 is selected using the external device 6, image coordinates are input, and these image coordinates are converted into world coordinates common to all the imaging means 1 (step S11).
- This is done by obtaining the coefficients R11, R12, R13, R21, R22, R23, R31, R32, R33, tx, ty, and tz shown in equation (1) and converting into world coordinates.
- Next, processing is performed for each of the cameras 1A to 1E constituting the imaging means 1 (step S12).
- For each imaging means 1, it is determined whether or not it can capture the specified world coordinates (step S13). This can be determined by presetting the range that each camera 1A to 1E can shoot. If a camera 1A to 1E is determined to be able to shoot, its turning angle is calculated so that it photographs the specified world coordinates (step S14). If a camera 1A to 1E is determined to be unable to shoot, the process returns to step S12 to process the next imaging means 1.
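- The loop of steps S12 to S14 can be written compactly as below; Camera is a hypothetical stand-in for one imaging means, and pan_tilt_to_target is the helper sketched earlier.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Hypothetical model of one imaging means (1A-1E)."""
    position: tuple   # camera location in world coordinates
    coverage: tuple   # ((xmin, xmax), (ymin, ymax), (zmin, zmax))

    def covers(self, p):
        return all(lo <= v <= hi for v, (lo, hi) in zip(p, self.coverage))

def control_cameras(cameras, world_pt):
    """Steps S12-S14 of FIG. 4: turn every camera whose preset
    coverage contains the target world coordinate toward it."""
    commands = []
    for cam in cameras:                          # S12: next imaging means
        if not cam.covers(world_pt):             # S13: can it shoot?
            continue
        commands.append((cam, pan_tilt_to_target(cam.position, world_pt)))
    return commands                              # S14: turning angles
```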
- FIG. 5 is a flowchart explaining the operation of the collating means 4.
- First, a process of detecting a person is performed on the input video (step S21).
- Person detection is performed, for example, by capturing the motion change of each frame, the smallest unit photographed by the imaging means, and judging an elliptical object above the moving region to be a person.
- a face area is detected from the detected person (step S22).
- Face detection is performed by registering a plurality of face images in advance to create an average face and checking whether there is a region similar to the grayscale image of the average face (that is, whether the correlation is high).
- Next, facial parts such as the eyes, nose, and mouth, which serve as facial features, are extracted from the detected face region (step S23).
- This facial feature extraction is performed, for example, by registering and learning multiple facial parts such as the right eye, left eye, nose, and mouth to create an average feature, and checking whether there is a region similar to the grayscale image of the average feature (that is, whether the correlation is high).
- Next, a process of determining whether the face is oriented to the front is performed (step S24).
- The face orientation is determined, for example, by registering and learning multiple front faces, faces inclined 45 degrees to the right, 45 degrees to the left, 45 degrees upward, 45 degrees downward, and so on.
- An average face at each angle is created, and the orientation is determined according to which average face's grayscale image the detected face resembles (that is, where the correlation is high).
- If the face is determined to be facing front, face data is read out from the pre-registered face database (step S28).
- Next, the facial features extracted in step S23 are collated with the facial features read from the face database (step S29).
- This matching is performed, for example, using features such as the positional relationships, thickness, and length of facial parts such as the eyes, nose, mouth, eyebrows, and face contour, together with facial shading information. If, as a result of the front-face determination, the face is judged not to be facing front, the facial features extracted in step S23 are stored (step S25).
- Next, the stored facial features are searched for features facing in the direction opposite to those extracted in step S23 (step S26). If no facial features in the opposite direction exist, the collation process ends. If facial features in the opposite direction are present among the stored features, the features extracted in step S23 and the retrieved features are synthesized (step S27). This synthesis is carried out by the method described above: first, the two face images facing in opposite directions are enlarged or reduced to the same magnification.
- Next, the images are rotated and translated so that the corresponding feature end points, such as the right eye, left eye, and the right and left ends of the nose and mouth, overlap in the two images.
- This rotation and translation are performed by, for example, the well-known affine transformation described above.
- Then, face data is read out from the pre-registered face database (step S28).
- Next, the synthesized facial features are collated with the facial features read from the face database (step S29).
- This matching is performed, for example, using features such as information on the positional relationships, thickness, and length of facial parts such as the eyes, nose, mouth, eyebrows, and face contour, together with facial shading information.
- Finally, the collation result is output to the display means 3 (step S30).
- As described above, according to the facial feature matching device of the first embodiment of the present invention, the person to be monitored is photographed by a plurality of imaging means 1, so when a person the observer wants to watch is selected, images from various angles can be seen from all the imaging means 1 able to shoot that person. Furthermore, if an image captured by one imaging means 1 is determined not to be suitable for collation, images are synthesized into an image that is suitable, so even if collation with one imaging means 1 fails, collation using another imaging means 1 remains possible, and the accuracy of collation can be improved.
- FIG. 6 illustrates a system configuration of a face feature matching apparatus according to the second embodiment of the present invention.
- The difference between the facial feature matching device of this embodiment and that of the first embodiment shown in FIG. 1 is that the collating means 4 includes an imaging means selection unit 48 between the person detection unit 42 and the coordinate conversion unit 22.
- The input of the imaging means selection unit 48 is connected to the output of the person detection unit 42.
- The imaging means selection unit 48 is in turn connected to the coordinate conversion unit 22.
- An image taken by the imaging means 1 (cameras 1A to 1E) is input to the video input unit 41.
- The person detection unit 42 detects a person from the input video. As described above, this detection is performed, for example, by capturing the motion change of the image for each frame, the smallest unit photographed by the imaging means 1, and judging an elliptical object above the moving region to be a person.
- the moving direction of the person and the image coordinates where the person exists are also detected.
- The person detection result is output to the imaging means selection unit 48.
- The imaging means selection unit 48 selects, from the image coordinates given as the person detection result, those imaging means 1 that can photograph the person.
- The imaging means 1 that detected the person is assigned only the position detection function, and the other selected imaging means 1 capable of photographing the person are set to automatic tracking.
- An instruction is then output to the coordinate conversion unit 22 to convert the image coordinates into world coordinates for the imaging means 1 that perform automatic tracking, and the world coordinates are output to the imaging means control unit 23.
- The turning angle is then calculated so that each of these imaging means 1 captures the world coordinates, and each imaging means 1 is operated.
- Next, when a person is selected with the external device 6, the image coordinates are output to the external I/F unit 21.
- The image coordinates are then output to the coordinate conversion unit 22 and converted from image coordinates into the world coordinates shared by the entire facial feature matching system. This conversion is performed using equation (1) described above.
- The imaging means selection unit 48 selects the imaging means 1 capable of shooting the coordinates indicating the position of the selected person; one of them is set as the imaging means with the position detection function, and the other imaging means 1 are set as imaging means with the automatic tracking function.
- An instruction is then output to the coordinate conversion unit 22, the image coordinates are converted into world coordinates for the imaging means 1 that perform automatic tracking, and the world coordinates are output to the imaging means control unit 23. Thereafter, the turning angle is calculated so that each imaging means 1 captures the world coordinates, and each imaging means 1 is operated.
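- The role assignment of the second embodiment can be sketched as below, reusing the hypothetical Camera model from the earlier sketch: the detecting camera keeps the position detection role, and every other camera that can cover the person is set to automatic tracking.

```python
def assign_roles(cameras, person_world_pt, detecting_camera):
    """Split the imaging means into one position detector and a set
    of automatic trackers, per the second embodiment's description."""
    capable = [c for c in cameras if c.covers(person_world_pt)]
    trackers = [c for c in capable if c is not detecting_camera]
    return detecting_camera, trackers
```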
- FIG. 7 is a flowchart explaining the operation of the imaging control means 2.
- The operation flow in this embodiment differs from that of the imaging control means 2 in the first embodiment, shown in FIG. 4, in that an automatic tracking setting request is sent to the imaging means selection unit 48 (step S1A).
- First, when a person displayed on the display means 3 is selected with a controller or the like serving as the external device 6, image coordinates are input. The input image coordinates are output to the imaging means selection unit 48 of FIG. 6 (step S1A), after which the imaging means 1 that will perform automatic tracking are selected and returned; this operation is described below in the operation flow of the collating means 4. The input image coordinates are then converted into world coordinates common to the imaging means 1 that perform automatic tracking (step S11).
- FIG. 8 shows the operation flow of the collating means 4 in this embodiment; it differs from the flow shown in FIG. 5 in that it includes selection of the imaging means 1 (step S2A) and a coordinate conversion request (step S2B).
- First, an image is input to the video input unit 41 from each imaging means 1.
- Next, the person detection unit 42 performs a process of detecting a person on the video input from the video input unit 41 (step S21).
- As before, this person detection is performed, for example, by capturing the motion change of the image for each frame, the smallest unit photographed by the imaging means 1, and judging an elliptical object above the moving region to be a person.
- the moving direction of the person and the image coordinates where the person exists are also detected.
- Next, the imaging means 1 are selected (step S2A).
- Imaging means capable of photographing the person are selected from the plurality of imaging means 1 based on the image coordinates obtained as the person detection result.
- The imaging means 1 that detected the person serves only the position detection function, and the other selected imaging means 1 capable of photographing the person perform automatic tracking.
- Next, the person detection unit 42 outputs a coordinate conversion request signal to the coordinate conversion unit 22 via the imaging means selection unit 48 (step S2B).
- The coordinate conversion unit 22 performs coordinate conversion in accordance with equation (1) shown above.
- For the imaging means 1 that perform automatic tracking, it converts the image coordinates into world coordinates and outputs the world coordinates to the imaging means control unit 23.
- Thereafter, the imaging means control unit 23 calculates the turning angle so that each imaging means 1 captures the world coordinates and operates each imaging means 1. The subsequent face detection processing (step S22) and later steps are the same as in FIG. 5.
- As described above, according to the facial feature matching device of this embodiment, the person to be monitored is continuously photographed by the plurality of imaging means 1, so when the person the observer wants to see is selected, images from various angles can be viewed continuously from the imaging means 1 able to shoot. Moreover, if an image captured by one imaging means 1 is determined not to be suitable for collation, the captured images are combined into an image suitable for collation, so a facial feature matching system can be provided that improves the accuracy of matching even when collation with a single image fails.
- The present invention is not limited to the above-described embodiments and can be carried out in various forms without departing from the gist thereof.
- The facial feature matching device of the present invention includes imaging control means capable of controlling all imaging means that can image the person to be monitored selected on the display means, and further includes synthesis means that synthesizes the face images taken by the plurality of imaging means. Since collation remains possible even when collation with the image from a single imaging means fails, accuracy can be improved, and a camera monitoring system realizing this can be provided.
Landscapes
- Engineering & Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/718,738 US8073206B2 (en) | 2004-11-16 | 2005-11-16 | Face feature collator, face feature collating method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-331894 | 2004-11-16 | ||
JP2004331894A JP4459788B2 (ja) | 2004-11-16 | 2004-11-16 | 顔特徴照合装置、顔特徴照合方法、及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006054598A1 true WO2006054598A1 (ja) | 2006-05-26 |
Family
ID=36407139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/021036 WO2006054598A1 (ja) | 2004-11-16 | 2005-11-16 | 顔特徴照合装置、顔特徴照合方法、及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US8073206B2 (ja) |
JP (1) | JP4459788B2 (ja) |
CN (1) | CN100583150C (ja) |
WO (1) | WO2006054598A1 (ja) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4459788B2 (ja) * | 2004-11-16 | 2010-04-28 | パナソニック株式会社 | 顔特徴照合装置、顔特徴照合方法、及びプログラム |
US7995802B2 (en) * | 2007-01-22 | 2011-08-09 | International Business Machines Corporation | Apparatus and methods for verifying identity using biometric information collected during a pre-enrollment phase |
JP2009027393A (ja) * | 2007-07-19 | 2009-02-05 | Hitachi Ltd | 映像検索システムおよび人物検索方法 |
JP2009055443A (ja) * | 2007-08-28 | 2009-03-12 | Toshiba Corp | 画像検索システム |
JP4937043B2 (ja) * | 2007-08-28 | 2012-05-23 | 株式会社東芝 | 映像検索システム及び映像検索装置 |
JP4569670B2 (ja) * | 2008-06-11 | 2010-10-27 | ソニー株式会社 | 画像処理装置、画像処理方法およびプログラム |
JP5642410B2 (ja) | 2010-03-30 | 2014-12-17 | パナソニック株式会社 | 顔認識装置及び顔認識方法 |
US20140093142A1 (en) * | 2011-05-24 | 2014-04-03 | Nec Corporation | Information processing apparatus, information processing method, and information processing program |
US9195883B2 (en) | 2012-04-09 | 2015-11-24 | Avigilon Fortress Corporation | Object tracking and best shot detection system |
US20130314523A1 (en) * | 2012-05-24 | 2013-11-28 | Joe Russ | Inverted interactive communication system |
CN103248867A (zh) * | 2012-08-20 | 2013-08-14 | 苏州大学 | 基于多摄像头数据融合的智能视频监控系统的监控方法 |
CN102946514B (zh) * | 2012-11-08 | 2014-12-17 | 广东欧珀移动通信有限公司 | 移动终端的自拍方法和装置 |
JP6128468B2 (ja) * | 2015-01-08 | 2017-05-17 | パナソニックIpマネジメント株式会社 | 人物追尾システム及び人物追尾方法 |
JP6592940B2 (ja) * | 2015-04-07 | 2019-10-23 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
TWI574047B (zh) * | 2015-06-16 | 2017-03-11 | 緯創資通股份有限公司 | 立體影像顯示裝置、方法以及系統 |
US10497014B2 (en) * | 2016-04-22 | 2019-12-03 | Inreality Limited | Retail store digital shelf for recommending products utilizing facial recognition in a peer to peer network |
JP6872742B2 (ja) * | 2016-06-30 | 2021-05-19 | 学校法人明治大学 | 顔画像処理システム、顔画像処理方法及び顔画像処理プログラム |
CN106780662B (zh) | 2016-11-16 | 2020-09-18 | 北京旷视科技有限公司 | 人脸图像生成方法、装置及设备 |
CN106780658B (zh) * | 2016-11-16 | 2021-03-09 | 北京旷视科技有限公司 | 人脸特征添加方法、装置及设备 |
CN106846564A (zh) * | 2016-12-29 | 2017-06-13 | 湖南拓视觉信息技术有限公司 | 一种智能门禁系统及控制方法 |
KR102380426B1 (ko) * | 2017-03-23 | 2022-03-31 | 삼성전자주식회사 | 얼굴 인증 방법 및 장치 |
US11010595B2 (en) | 2017-03-23 | 2021-05-18 | Samsung Electronics Co., Ltd. | Facial verification method and apparatus |
Family Cites Families (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
JPH07264452A (ja) * | 1994-02-03 | 1995-10-13 | Samsung Electron Co Ltd | カメラ一体型の磁気記録再生装置およびその方法 |
JP2002504785A (ja) * | 1998-02-18 | 2002-02-12 | ゲーエムデー−フォルシュングスチェントルム インフォマーションズテクニク ゲーエムベーハー | 仮想テレビ・ビデオスタジオのためのカメラ追跡システム |
WO2001045004A1 (en) * | 1999-12-17 | 2001-06-21 | Promo Vu | Interactive promotional information communicating system |
JP2002024905A (ja) * | 2000-07-06 | 2002-01-25 | Nec Kofu Ltd | 電子ジャーナル作成方式及び電子ジャーナル作成方法 |
US6963659B2 (en) * | 2000-09-15 | 2005-11-08 | Facekey Corp. | Fingerprint verification system utilizing a facial image-based heuristic search method |
JP4675492B2 (ja) * | 2001-03-22 | 2011-04-20 | 本田技研工業株式会社 | 顔画像を使用した個人認証装置 |
US6498970B2 (en) * | 2001-04-17 | 2002-12-24 | Koninklijke Phillips Electronics N.V. | Automatic access to an automobile via biometrics |
US20020175997A1 (en) * | 2001-05-22 | 2002-11-28 | Matsushita Electric Industrial Co., Ltd. | Surveillance recording device and method |
JP4778158B2 (ja) * | 2001-05-31 | 2011-09-21 | オリンパス株式会社 | 画像選出支援装置 |
US6793128B2 (en) * | 2001-06-18 | 2004-09-21 | Hewlett-Packard Development Company, L.P. | Face photo storage system |
TWI299471B (en) * | 2001-08-24 | 2008-08-01 | Toshiba Kk | Person recognition apparatus |
US6879709B2 (en) * | 2002-01-17 | 2005-04-12 | International Business Machines Corporation | System and method for automatically detecting neutral expressionless faces in digital images |
US6725383B2 (en) * | 2002-05-15 | 2004-04-20 | Biocom, Llc | Data and image capture, compression and verification system |
JP3996015B2 (ja) * | 2002-08-09 | 2007-10-24 | 本田技研工業株式会社 | 姿勢認識装置及び自律ロボット |
KR100455294B1 (ko) * | 2002-12-06 | 2004-11-06 | 삼성전자주식회사 | 감시 시스템에서의 사용자 검출 방법, 움직임 검출 방법및 사용자 검출 장치 |
KR100547992B1 (ko) * | 2003-01-16 | 2006-02-01 | 삼성테크윈 주식회사 | 디지털 카메라와 그의 제어 방법 |
US7359529B2 (en) * | 2003-03-06 | 2008-04-15 | Samsung Electronics Co., Ltd. | Image-detectable monitoring system and method for using the same |
US7184602B2 (en) * | 2003-05-02 | 2007-02-27 | Microsoft Corp. | System and method for low bandwidth video streaming for face-to-face teleconferencing |
US7421097B2 (en) * | 2003-05-27 | 2008-09-02 | Honeywell International Inc. | Face identification verification using 3 dimensional modeling |
US7218760B2 (en) * | 2003-06-30 | 2007-05-15 | Microsoft Corporation | Stereo-coupled face shape registration |
DE602004025940D1 (de) * | 2003-07-11 | 2010-04-22 | Toyota Motor Co Ltd | Bildverarbeitungseinrichtung, bildverarbeitungsverfahren, bildverarbeitungsprogramm und aufzeichnungsmedium |
WO2005034025A1 (en) * | 2003-10-08 | 2005-04-14 | Xid Technologies Pte Ltd | Individual identity authentication systems |
EP1566788A3 (en) * | 2004-01-23 | 2017-11-22 | Sony United Kingdom Limited | Display |
JP2005242567A (ja) * | 2004-02-25 | 2005-09-08 | Canon Inc | 動作評価装置及び方法 |
JP4388905B2 (ja) * | 2004-02-27 | 2009-12-24 | 富士フイルム株式会社 | カード発行システム、カード発行方法、及びカード発行プログラム |
JP2005242775A (ja) * | 2004-02-27 | 2005-09-08 | Omron Corp | ゲートシステム |
EP1748390A4 (en) * | 2004-04-15 | 2014-01-01 | Panasonic Corp | DEVICE AND METHOD FOR CREATING IMAGES OF THE FACE |
JP4650669B2 (ja) * | 2004-11-04 | 2011-03-16 | 富士ゼロックス株式会社 | 動体認識装置 |
JP4459788B2 (ja) * | 2004-11-16 | 2010-04-28 | パナソニック株式会社 | 顔特徴照合装置、顔特徴照合方法、及びプログラム |
JP4685465B2 (ja) * | 2005-02-01 | 2011-05-18 | パナソニック株式会社 | 監視記録装置 |
JP2006259930A (ja) * | 2005-03-15 | 2006-09-28 | Omron Corp | 表示装置およびその制御方法、表示装置を備えた電子機器、表示装置制御プログラム、ならびに該プログラムを記録した記録媒体 |
GB2430736A (en) * | 2005-09-30 | 2007-04-04 | Sony Uk Ltd | Image processing |
US20070127787A1 (en) * | 2005-10-24 | 2007-06-07 | Castleman Kenneth R | Face recognition system and method |
JP2007148872A (ja) * | 2005-11-29 | 2007-06-14 | Mitsubishi Electric Corp | 画像認証装置 |
JP5239126B2 (ja) * | 2006-04-11 | 2013-07-17 | 株式会社ニコン | 電子カメラ |
JP2008243093A (ja) * | 2007-03-29 | 2008-10-09 | Toshiba Corp | 辞書データの登録装置及び辞書データの登録方法 |
JP2009087232A (ja) * | 2007-10-02 | 2009-04-23 | Toshiba Corp | 人物認証装置および人物認証方法 |
JP5187139B2 (ja) * | 2008-10-30 | 2013-04-24 | セイコーエプソン株式会社 | 画像処理装置およびプログラム |
KR101108835B1 (ko) * | 2009-04-28 | 2012-02-06 | 삼성전기주식회사 | 얼굴 인증 시스템 및 그 인증 방법 |
-
2004
- 2004-11-16 JP JP2004331894A patent/JP4459788B2/ja not_active Expired - Fee Related
-
2005
- 2005-11-16 CN CN200580039026A patent/CN100583150C/zh not_active Expired - Fee Related
- 2005-11-16 US US11/718,738 patent/US8073206B2/en not_active Expired - Fee Related
- 2005-11-16 WO PCT/JP2005/021036 patent/WO2006054598A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11175730A (ja) * | 1997-12-05 | 1999-07-02 | Omron Corp | 人間検出追跡システム |
JP2000331207A (ja) * | 1999-05-21 | 2000-11-30 | Omron Corp | ゲート装置 |
JP2002077889A (ja) * | 2000-08-29 | 2002-03-15 | Toshiba Corp | 監視カメラ制御システム及び監視カメラ制御装置 |
JP2004072628A (ja) * | 2002-08-08 | 2004-03-04 | Univ Waseda | 複数カメラを用いた移動体追跡システム及びその方法 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10049333B2 (en) * | 2014-01-29 | 2018-08-14 | Panasonic Intellectual Property Management Co., Ltd. | Sales clerk operation management apparatus, sales clerk operation management system, and sales clerk operation management method |
JP2020141427A (ja) * | 2020-06-15 | 2020-09-03 | 日本電気株式会社 | 情報処理システム、情報処理方法及びプログラム |
JP7036153B2 (ja) | 2020-06-15 | 2022-03-15 | 日本電気株式会社 | 情報処理システム、情報処理方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP2006146323A (ja) | 2006-06-08 |
US8073206B2 (en) | 2011-12-06 |
US20090052747A1 (en) | 2009-02-26 |
CN101057256A (zh) | 2007-10-17 |
CN100583150C (zh) | 2010-01-20 |
JP4459788B2 (ja) | 2010-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006054598A1 (ja) | 顔特徴照合装置、顔特徴照合方法、及びプログラム | |
US6243103B1 (en) | Panoramic image generation in digital photography | |
US8791983B2 (en) | Image pickup apparatus and associated methodology for generating panoramic images based on location and orientation information | |
US7643066B2 (en) | Method and apparatus for producing frame accurate position data in a PTZ dome camera with open loop control | |
CN101796474B (zh) | 图像投影设备及其控制方法 | |
WO2017049816A1 (zh) | 一种控制无人机随脸转动的方法和装置 | |
JP4579980B2 (ja) | 一続きの画像の撮像 | |
US11210796B2 (en) | Imaging method and imaging control apparatus | |
US20080180550A1 (en) | Methods For Capturing a Sequence of Images and Related Devices | |
US20120275648A1 (en) | Imaging device and imaging method and program | |
US20050185058A1 (en) | Image stabilization system and method for a video camera | |
JP2008204384A (ja) | 撮像装置、物体検出方法及び姿勢パラメータの算出方法 | |
JP2004318823A (ja) | 情報表示システム、情報処理装置、ポインティング装置および情報表示システムにおけるポインタマーク表示方法 | |
JP5959923B2 (ja) | 検出装置、その制御方法、および制御プログラム、並びに撮像装置および表示装置 | |
TWI509466B (zh) | 物件辨識方法與裝置 | |
US6563528B2 (en) | Video conference system | |
KR101718081B1 (ko) | 손 제스처 인식용 초광각 카메라 시스템 및 그가 적용된 TVI(Transport Video Interface) 장치 | |
KR101715781B1 (ko) | 물체 인식 시스템 및 그 물체 인식 방법 | |
JP5376403B2 (ja) | 映像表示装置及びプログラム | |
JP2001057652A (ja) | 画像入力装置および画像入力方法 | |
JP2008211534A (ja) | 顔検知装置 | |
JP4812099B2 (ja) | カメラ位置検出方法 | |
JP2006238362A (ja) | 画像天地判定装置および方法並びにプログラム | |
JP2003288595A (ja) | 物体認識装置及び方法、並びにコンピュータ読み取り可能な記録媒体 | |
JP2005309992A (ja) | 画像処理装置および画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 11718738 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580039026.6 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05807039 Country of ref document: EP Kind code of ref document: A1 |