WO2013038877A1 - Person recognition apparatus and method of controlling operation thereof - Google Patents


Info

Publication number
WO2013038877A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
similarity
image
face
parallax
Prior art date
Application number
PCT/JP2012/071056
Other languages
French (fr)
Japanese (ja)
Inventor
矢作 宏一
Original Assignee
富士フイルム株式会社 (FUJIFILM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2013038877A1 publication Critical patent/WO2013038877A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Definitions

  • the present invention relates to a person recognition device and an operation control method thereof.
  • For example, there is a technique that switches between face recognition in a two-dimensional mode and face recognition in a three-dimensional mode (Patent Document 1), a technique that performs face recognition using face data registered by generating a three-dimensional model (Patent Document 2), and a technique that generates unevenness data from the shadow areas of a face and performs face recognition with reference to registered unevenness data (Patent Document 3).
  • However, the performance of existing person recognition functions is not yet sufficient; a person may be misrecognized or not recognized at all. Moreover, approaches based on three-dimensional models require a large amount of computation to generate the model, so the person cannot be recognized quickly.
  • An object of the present invention is to recognize a person relatively quickly and relatively accurately.
  • A person recognition apparatus according to the present invention includes: parallax image generating means for generating a parallax image from a plurality of target images; person determination means for determining whether a human face is included in a target image; first similarity calculating means for calculating, when the person determination means determines that a human face is included, the similarity between that person and a person to be recognized specified by person-specifying data stored in advance; first determination means for determining whether the similarity calculated by the first similarity calculating means is equal to or higher than a predetermined level; first person determining means for, in response to the first determination means determining that the similarity is equal to or higher than the predetermined level, determining the person so determined as the person of the face included in the target image; second similarity calculating means for calculating, when the first determination means determines that the similarity is less than the predetermined level, the similarity between the portion of the parallax image corresponding to the face and a face parallax image stored in advance for the person to be recognized; and second person determining means for specifying the person of the face image on the basis of the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
  • The present invention also provides an operation control method suited to the person recognition apparatus. In this method, parallax image generating means generates a parallax image from a plurality of target images; person determination means determines whether a target image includes a human face; when the person determination means determines that a human face is included, first similarity calculating means calculates the similarity between that person and a person to be recognized specified by person-specifying data stored in advance; first determination means determines whether the similarity calculated by the first similarity calculating means is equal to or higher than a predetermined level; when it is, first person determining means determines the person so determined as the person of the face included in the target image; when the similarity is determined to be less than the predetermined level, second similarity calculating means calculates the similarity between the portion of the parallax image generated by the parallax image generating means that corresponds to the face and a face parallax image stored in advance for the person to be recognized; and second person determining means specifies the person of the face image on the basis of the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
  • According to the present invention, a parallax image is generated from a plurality of target images. It is determined whether a target image contains a person's face; if so, the similarity between that person and the person to be recognized specified by person-specifying data stored in advance is calculated. If the calculated similarity is equal to or higher than a predetermined level, the result is considered reliable, and the person determined to be at or above that level is determined to be the person of the face included in the target image. If the calculated similarity is less than the predetermined level, the result is considered unreliable.
  • In that case, the similarity between the portion of the generated parallax image corresponding to the face and a face parallax image stored in advance for the person to be recognized is calculated by the second similarity calculating means, and the person of the face image is specified on the basis of the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
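The two-stage decision just summarized can be sketched in code. This is only an illustrative reading of the claim, not the patent's implementation: the names (`recognize`, `FIRST_LEVEL`) and the way the fallback combines the two similarities (keeping the larger score per person, as a later passage suggests) are assumptions.

```python
FIRST_LEVEL = 80.0  # the "predetermined level" for the first similarity (80%)

def recognize(first_sims, second_sim_fn):
    """first_sims maps person name -> first (2-D template) similarity in %.
    second_sim_fn(name) lazily computes the parallax-image (second)
    similarity, mirroring how the apparatus skips that step entirely
    when the 2-D result is already reliable."""
    best_name, best_first = max(first_sims.items(), key=lambda kv: kv[1])
    if best_first >= FIRST_LEVEL:        # reliable: decide on the 2-D match alone
        return best_name
    # fall back to the parallax (depth) similarity and keep the better score
    combined = {name: max(sim, second_sim_fn(name)) for name, sim in first_sims.items()}
    return max(combined.items(), key=lambda kv: kv[1])[0]
```

Because `second_sim_fn` is called only on the fallback path, the expensive parallax comparison is avoided whenever the two-dimensional match is already decisive.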
  • The parallax image also represents three-dimensional information about the person. Since the person is recognized using such a parallax image, the person can be recognized relatively accurately. And since generating a parallax image is relatively easy compared with generating a three-dimensional model, the person can be recognized quickly.
  • The apparatus may further include parallax level difference determining means for determining whether the difference between the maximum and minimum pixel values in the face-corresponding portion of the parallax image generated by the parallax image generating means is equal to or greater than a predetermined level.
  • In that case, the second similarity calculating means calculates the similarity to the face parallax image stored in advance for the person to be recognized when the first determination means has determined the similarity to be less than its predetermined level and the parallax level difference determining means has determined the maximum-minimum difference to be equal to or greater than its predetermined level.
  • The second person determining means specifies the person of the face image when, for example, the similarity calculated by the second similarity calculating means is equal to or higher than a predetermined level.
  • The second person determining means may also determine, as the person of the face image, the person having the larger of the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
  • An imaging device that captures a subject a plurality of times while changing the focus amount to obtain a plurality of target images may be further provided.
  • the parallax image generation means generates a parallax image from a plurality of target images obtained by the imaging device, for example.
  • An imaging apparatus that obtains a plurality of target images by performing imaging a plurality of times so that the subject moves relatively in the horizontal direction may be further provided.
  • the parallax image generation means generates a parallax image from a plurality of target images obtained by the imaging device, for example.
  • The parallax image generating means may generate two parallax images: one referenced to a first target image captured from a first viewpoint and one referenced to a second target image captured from a second viewpoint.
  • the second similarity calculation means calculates the similarity for a portion corresponding to a face included in each of the two parallax images, for example.
  • The second similarity calculating means may exclude, from the left or right half of the face-corresponding portion of the parallax image generated by the parallax image generating means, the blind-spot region (the portion included in the first target image but not in the second target image, or included in the second target image but not in the first), and calculate the similarity of the remaining portion with the face parallax image stored in advance for the person to be recognized.
  • An example of the left-viewpoint parallax image and an example of the right-viewpoint parallax image are shown.
  • A parallax image portion corresponding to the face and a parallax image portion corresponding to the face with the blind-spot region excluded are shown.
  • FIG. 1 shows an embodiment of the present invention and is a block diagram showing an electrical configuration of a digital camera.
  • a person can be imaged and the captured person can be recognized.
  • the overall operation of the digital camera 1 is controlled by the CPU 10.
  • the digital still camera 1 includes a stereoscopic imaging mode for generating a stereoscopic image, an imaging mode for performing normal two-dimensional imaging, a two-dimensional reproduction mode for performing two-dimensional reproduction, a stereoscopic reproduction mode for displaying a stereoscopic image, a setting mode, and the like.
  • The digital still camera includes an operating device (not shown) with various buttons, such as a mode setting button for setting the mode and a two-stage (half-press/full-press) shutter-release button. An operation signal output from the operating device is input to the CPU 10.
  • The digital still camera includes a left-viewpoint imaging device 1, which captures the left-viewpoint image (target image) that a viewer of the stereoscopic image sees with the left eye, and a right-viewpoint imaging device 11, which captures the right-viewpoint image (target image) that the viewer sees with the right eye.
  • the left viewpoint imaging device 1 includes a CCD 4 that images a subject and outputs a left viewpoint image signal representing a left viewpoint image.
  • In front of the CCD 4 are a zoom lens 2 and a focus lens 3, whose zoom and focus amounts are controlled by motor drivers 5 and 6, respectively.
  • During pre-imaging (through-image capture), the left-viewpoint image signal is output from the CCD 4 at a constant cycle.
  • the left viewpoint image signal output from the CCD 4 is converted into left viewpoint image data in the analog / digital conversion circuit 7.
  • the left viewpoint image data is input to the digital signal processing device 21 by the image input controller 8.
  • the right viewpoint imaging device 11 includes a CCD 14 that images a subject and outputs a right viewpoint image signal representing the right viewpoint image.
  • In front of the CCD 14 are a zoom lens 12 and a focus lens 13, whose zoom and focus amounts are controlled by motor drivers 15 and 16, respectively.
  • the right viewpoint image signal is output from the CCD 14 during pre-imaging.
  • the right viewpoint image signal output from the CCD 14 is converted into right viewpoint image data in the analog / digital conversion circuit 17.
  • the right viewpoint image data is input to the digital signal processing device 21 by the image input controller 18.
  • predetermined digital signal processing is performed on the left viewpoint image data and the right viewpoint image data.
  • the left viewpoint image data and the right viewpoint image data output from the digital signal processing device 21 are given to the display device 27 via the display control device 26.
  • An image obtained by imaging is displayed as a stereoscopic moving image on the display screen of the display device 27 (through-image display). When both the left-viewpoint image data and the right-viewpoint image data are given to the display device 27, the subject image is displayed stereoscopically; when only one of them is given, the subject image may be displayed two-dimensionally instead.
  • the left viewpoint image data and right viewpoint image data output from the digital signal processing device 21 are also provided to the parallax image generating device 28.
  • In the parallax image generating device 28, a parallax image 81 (see FIG. 7), described later, is generated.
  • As described later, the person is identified using the parallax image, and the person's name is displayed corresponding to that person.
  • the subject is imaged by half-pressing the shutter release button, and at least one of the left viewpoint image data and the right viewpoint image data obtained by the imaging is input to the integrating device 23.
  • In the integrating device 23, high-frequency components and luminance data are integrated.
  • The focus amounts of the focus lenses 3 and 13 are determined on the basis of the integrated value of the high-frequency components.
  • The shutter speed of the so-called electronic shutter is determined on the basis of the integrated luminance data.
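As a rough illustration of how integrated high-frequency values can drive focusing (contrast autofocus), the focus position whose integral is largest would be selected as in focus. This is a generic sketch, not the camera's actual control law; the function name is hypothetical.

```python
def best_focus(highfreq_integrals):
    """Given the integrated high-frequency energy measured at each
    focus-lens position in a sweep, return the index of the position
    with the largest integral (the sharpest image)."""
    return max(range(len(highfreq_integrals)), key=lambda i: highfreq_integrals[i])
```

For integrals `[120, 480, 310]` measured at three lens positions, position 1 would be selected.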
  • the left viewpoint image data and the right viewpoint image data are read from the main memory 20 and input to the compression / decompression processor 22.
  • the left viewpoint image data and the right viewpoint image data are compressed by the compression / decompression processor 22, and the compressed left viewpoint image data and right viewpoint image data are recorded on the memory card 25 by the memory controller 24.
  • the left viewpoint image data and the right viewpoint image data recorded in the memory card 25 are read.
  • the read left viewpoint image data and right viewpoint image data are decompressed by the compression / decompression processor 22.
  • The decompressed left-viewpoint image data and right-viewpoint image data are given to the display device 27 via the display control device 26, whereby a stereoscopic image is displayed.
  • FIG. 2 is an example of parallax images 41-43 represented by parallax image data stored in the main memory 20 of the digital camera.
  • The parallax images 41-43 represent the amount of horizontal displacement between corresponding pixels of at least two images captured from different viewpoints, converted into luminance values. Since depth is determined by the amount of parallax, the parallax images 41-43 can also be said to represent the depth of the subject.
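The patent does not specify how corresponding pixels are matched. A minimal block-matching sketch that produces such a parallax (disparity) map from two viewpoint images might look like the following, with the per-pixel horizontal shift stored directly as the pixel value; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def parallax_image(left, right, max_shift=8, block=3):
    """Toy horizontal block matching: for each pixel of `left`, find the
    horizontal shift into `right` with the smallest sum of absolute
    differences over a small block, and store that shift as the pixel
    value (a larger shift means a nearer subject)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.uint8)
    pad = block // 2
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            patch = left[y-pad:y+pad+1, x-pad:x+pad+1].astype(int)
            best, best_err = 0, None
            for d in range(0, min(max_shift, x - pad) + 1):
                cand = right[y-pad:y+pad+1, x-d-pad:x-d+pad+1].astype(int)
                err = np.abs(patch - cand).sum()
                if best_err is None or err < best_err:
                    best, best_err = d, err
            disp[y, x] = best
    return disp
```

Real stereo matchers add sub-pixel refinement, smoothness constraints, and occlusion handling, but the output has the same form: a single-channel image whose intensities encode depth.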
  • FIG. 2 shows a face parallax image 41 for the person named "Taro Tokkyo", a face parallax image 42 for the person named "Shinko Jitsuyo", and a face parallax image 43 for the person named "Ichiro Isho".
  • In this embodiment, a parallax image is generated from the subject image obtained by imaging, and the stored face parallax image most similar to the face-corresponding portion of the generated parallax image is found; the person corresponding to that stored parallax image is taken as the person name of the subject.
  • FIGS. 3 and 4 are flowcharts showing the person recognition processing procedure of the digital camera. Needless to say, a digital camera is not always necessary as long as person recognition processing can be performed; for example, the processing can be applied to an image captured in advance.
  • FIG. 5 shows an example of the left viewpoint image 71 and an example of the right viewpoint image 72. Since the viewpoints of the left viewpoint image 71 and the right viewpoint image 72 are different, parallax occurs.
  • Face detection processing is performed on the viewpoint images 71 and 72 (step 52 in FIG. 3). The face detection processing need not be performed on both the left-viewpoint image 71 and the right-viewpoint image 72; it may be performed on just one of them. If no face is detected (YES in step 53 in FIG. 3), the person recognition processing ends on the ground that the person cannot be recognized (step 57 in FIG. 3).
  • a first similarity calculation process is performed on the left viewpoint image 71 or the right viewpoint image 72 in which the face is detected (step 54 in FIG. 3).
  • The first similarity calculation processing does not use a parallax image; it uses the left-viewpoint image 71 or the right-viewpoint image 72 obtained by imaging.
  • A template image (person-specifying data) is stored in advance for each person name: a face image representing the features of the person, or face-part images (eye image, nose image, mouth image, eyebrow image, etc.).
  • The stored template image is compared with the left-viewpoint image 71 or right-viewpoint image 72 obtained by imaging, and the similarity is calculated according to the degree of coincidence.
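The text leaves the matching function unspecified. As one stand-in, a normalized absolute-difference score maps the degree of coincidence onto the 0-100% scale used throughout the flowcharts; the function name and formula are assumptions, not the patent's matcher.

```python
import numpy as np

def template_similarity(face, template):
    """Degree of coincidence between a detected face region and a stored
    template image of the same size, as a percentage (100% = identical).
    The mean absolute pixel difference is normalized by the 8-bit range."""
    face = face.astype(float)
    template = template.astype(float)
    diff = np.abs(face - template).mean()
    return 100.0 * (1.0 - diff / 255.0)
```

A production matcher would first align and scale the detected face to the template, and typically use a correlation- or feature-based score rather than raw differences.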
  • FIG. 6 is an example of a similarity table.
  • In the similarity table, a similarity is stored for each name of a person for whom parallax image data is stored.
  • the face parallax images 41-43 shown in FIG. 2 are also shown for easy understanding. These parallax images 41-43 may or may not be stored in the similarity table.
  • In this example, the similarity between the imaged person (the person whose face was detected) and "Taro Tokkyo" is calculated as 65%, the similarity with "Shinko Jitsuyo" as 50%, and the similarity with "Ichiro Isho" as 70%.
  • If a similarity obtained from the first similarity calculation processing (a first similarity) is 80% (the predetermined level) or more, the person corresponding to the template image that produced that similarity is considered a highly credible match for the detected face. For this reason, the similarity calculation processing using the parallax image (the second similarity calculation processing), described later, is not performed.
  • The person whose first similarity is determined to be 80% or more is determined to be the person of the detected face (step 56 in FIG. 3). For example, if the first similarity for "Taro Tokkyo" is 80% or more, the detected face is determined to be that of "Taro Tokkyo".
  • a parallax image is generated from the left viewpoint image 71 and the right viewpoint image 72.
  • FIG. 7 is an example of the generated parallax image 81.
  • the parallax image 81 represents the parallax between corresponding pixels between the left viewpoint image 71 and the right viewpoint image 72.
  • the parallax is expressed by density.
  • The parallax image 81 includes a portion 82 corresponding to the face, and this face-corresponding portion 82 is compared with the parallax images stored in advance for each person.
  • Next, it is determined whether the density difference (difference in parallax amount) between the maximum density (maximum parallax amount) and the minimum density (minimum parallax amount) of the face-corresponding portion 82 in the parallax image 81 is at least 10% (step 59). If the density difference is 10% or more, the imaged face is sufficiently three-dimensional, and the result of person recognition processing performed with such a parallax image 81 is highly credible. If the density difference is less than 10%, the credibility is low.
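The 10% spread test of step 59 can be written as a small predicate. The normalization by the full 8-bit pixel range is an interpretation (the text does not say what the percentage is relative to), so treat this as a hedged sketch.

```python
def parallax_is_reliable(face_region, threshold=0.10):
    """Treat the face as sufficiently three-dimensional when the spread
    between the largest and smallest parallax value in the face-corresponding
    region is at least 10% of the 8-bit pixel range (assumed normalization)."""
    spread = (max(face_region) - min(face_region)) / 255.0
    return spread >= threshold
```

A nearly flat face region (e.g. a photograph of a face held in front of the camera) would fail this test, which is presumably why the second similarity calculation is skipped in that case.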
  • In the latter case, the person recognition processing using the parallax image 81 is not performed. If the first similarity is not at least 50% (NO in step 66 in FIG. 4), the credibility of the first similarity is low, so the person cannot be recognized (step 67 in FIG. 4). If the first similarity is 50% or more (YES in step 66 in FIG. 4), its credibility is considered relatively high, and the person determined on the basis of the first similarity is taken to be the person of the detected face (step 56 in FIG. 3).
  • If the density difference is 10% or more, the second similarity calculation processing is performed using the parallax image 81 (step 60 in FIG. 4).
  • In the second similarity calculation processing, the degree of coincidence between the face-corresponding portion 82 in the parallax image 81 and each face parallax image stored in advance is calculated as a similarity (a second similarity). The calculated similarities are stored in the similarity table shown in FIG. 6.
  • In this example, the similarity between the face-corresponding portion 82 in the parallax image 81 and the face parallax image 41 of "Taro Tokkyo" is calculated as 40%, the similarity with the parallax image 42 of "Shinko Jitsuyo" as 35%, and the similarity with the face parallax image 43 of "Ichiro Isho" as 50%.
  • If the second similarity is not 50% or more (NO in step 61 in FIG. 4), the credibility of a person determined on the basis of the second similarity is low. If the first similarity is 20% or less (YES in step 62 in FIG. 4), the credibility of a person determined on the basis of the first similarity is low. In either case the person cannot be recognized (step 67 in FIG. 4).
  • If the second similarity is 50% or more (YES in step 61 in FIG. 4) and the first similarity is greater than 20% (NO in step 62 in FIG. 4), a person is determined using both the first similarity and the second similarity.
  • The first similarity and the second similarity are compared (step 63 in FIG. 4). If the second similarity is larger than the first similarity (YES in step 64 in FIG. 4), the second similarity is the more credible, so the person giving the highest second similarity is determined to be the person of the detected face (step 65 in FIG. 4). Conversely, if the first similarity is larger than the second similarity (NO in step 64 in FIG. 4), the first similarity is the more credible, and the person giving the highest first similarity is determined to be the person of the detected face (step 56 in FIG. 3).
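Steps 61-65 can be condensed into one decision function. The names and the exact floor semantics (strict vs. inclusive comparisons follow the text: "not 50% or more" fails, "20% or less" fails) are an interpretation of the flowchart, not code from the patent.

```python
def decide_person(first, second, first_floor=20.0, second_floor=50.0):
    """first/second map person name -> similarity (%). Recognition fails
    when the best second similarity is under 50% or the best first
    similarity is 20% or under; otherwise the person with the single
    highest similarity (from either table) is chosen."""
    best2_name, best2 = max(second.items(), key=lambda kv: kv[1])
    best1_name, best1 = max(first.items(), key=lambda kv: kv[1])
    if best2 < second_floor or best1 <= first_floor:
        return None                       # person cannot be recognized
    return best2_name if best2 > best1 else best1_name
```

With the worked figures above (first: 65/50/70, second: 40/35/50), the first similarity of 70% for "Ichiro Isho" wins.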
  • FIG. 8 is an example of the left viewpoint image 91 and the right viewpoint image 93 on which the person names recognized as described above are displayed.
  • the recognized person names 92 and 94 are displayed in the vicinity of the face image.
  • When the shutter-release button is fully pressed, an image file representing the captured image is generated, and data representing the recognized person name is stored in the header of the generated image file.
  • In the embodiment above, the first similarity and the second similarity are simply compared, but the two similarities may instead be combined as a weighted average with predetermined weighting coefficients, and the person recognized on the basis of the resulting value.
  • For example, the weighted average obtained by averaging the first similarity and the second similarity at a one-to-one ratio may be stored in the similarity table, and the person with the largest weighted average recognized as the imaged person.
  • The weighting coefficients need not put the first and second similarities at a one-to-one ratio; any other coefficients can be used.
  • FIG. 9 is a flowchart showing a part of the person recognition processing procedure
  • FIG. 10 is an example of a subject image obtained by imaging.
  • In this modification, the left-viewpoint imaging device 1 or the right-viewpoint imaging device 11 is used; both devices 1 and 11 may also be used.
  • the parallax image generation process is performed as described above.
  • In this modification, the subject is imaged multiple times while the focus amount is changed (by changing the position of the focus lens 3 or 13), and a parallax image is generated from the results (step 101 in FIG. 9).
  • the focus amount is adjusted so that the nose of a person included in the subject is in focus (focus amount: Near), and the subject is imaged.
  • a subject image 111 in which the person's nose is in focus is obtained.
  • the focus amount is adjusted so that the outline of the face of the person included in the subject is in focus (focus amount: middle), and the subject is imaged.
  • a subject image 112 in which the outline of the person's face is in focus is obtained.
  • the focus amount is adjusted so that the tree in the background of the person in the subject is in focus (focus amount: Far), and the subject is imaged.
  • a subject image 113 in which the background tree of the person is in focus is obtained.
  • The relative distance of each image portion is calculated from the spatial frequencies of the plurality of subject images 111-113 obtained while the focus amount was changed in this way, and a parallax image like the one described above is obtained. That is, imaging the subject at different focus amounts makes it possible to determine the relative distance to each part of the subject, and a parallax image can be created from those relative distances.
  • the person recognition process is performed using the obtained parallax image as described above.
  • In this example the focus amount is changed three times, but the number of changes is not limited to three; the focus amount may be changed any other number of times.
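The depth-from-focus idea described above (relative distance from spatial frequency across a focus stack) can be sketched as follows. The sharpness measure here is a simple gradient magnitude rather than a true spatial-frequency analysis, and all names are assumptions.

```python
import numpy as np

def depth_from_focus(stack, distances):
    """stack: same-size grayscale frames taken at different focus settings
    (Near .. Far); distances: the focus distance of each frame.  For every
    pixel, the frame with the largest local high-frequency energy (here a
    crude absolute-gradient measure) is taken as in focus, and that frame's
    focus distance becomes the pixel's relative depth."""
    sharp = []
    for img in stack:
        g = img.astype(float)
        gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))  # horizontal detail
        gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))  # vertical detail
        sharp.append(gx + gy)
    idx = np.argmax(np.stack(sharp), axis=0)   # index of sharpest frame per pixel
    return np.take(np.asarray(distances, dtype=float), idx)
```

The resulting per-pixel depth map plays the same role as the disparity map from two-viewpoint imaging: once relative distances are known, a parallax image can be derived from them.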
  • FIG. 11 is an example of subject images 121-124 obtained by through-image capture. Either the left-viewpoint imaging device 1 or the right-viewpoint imaging device 11 may be used, or both may be used.
  • A parallax image is generated from an image in which a face image is detected (for example, the image 122) and an image in which a face image facing in a direction different from that of the detected face is detected (for example, the image 123).
  • This modification generates both parallax images of the left viewpoint parallax image and the right viewpoint parallax image, and performs person recognition processing using both parallax images.
  • FIG. 12 is a flowchart showing a part of the person recognition processing procedure, and corresponds to the processing procedure of FIG.
  • FIG. 13 shows an example of the left-viewpoint parallax image 141 and an example of the right-viewpoint parallax image 142.
  • The left-viewpoint parallax image 141 represents the parallax between the left-viewpoint image 71 and the right-viewpoint image 72 (see FIG. 5) with the left-viewpoint image 71 as the reference.
  • The right-viewpoint parallax image 142 represents the parallax between the left-viewpoint image 71 and the right-viewpoint image 72 with the right-viewpoint image 72 as the reference.
  • left parallax image 141 and right parallax image 142 are generated from left viewpoint image 71 and right viewpoint image 72 (step 131).
  • The generated left parallax image 141 is subjected to the second similarity calculation processing, which calculates its similarity with the parallax images 41, 42, and 43 stored in advance (step 132).
  • the generated right parallax image 142 is also subjected to the second similarity calculation process for calculating the similarity with the parallax images 41, 42, and 43 stored in advance (step 133).
  • The first similarities obtained from the first similarity calculation processing already performed and the second similarities obtained from the second similarity calculation processing performed on the left parallax image 141 are compared (step 134). If a second similarity from the left parallax image is the highest (YES in step 135), the person corresponding to the stored parallax image giving that highest second similarity is determined to be the person of the detected face (step 136).
  • If the highest second similarity is the one obtained from the second similarity calculation processing performed on the right parallax image 142 (NO in step 135, YES in step 137), the person corresponding to the stored parallax image giving that highest second similarity is determined to be the person of the detected face (step 138).
  • Otherwise, the person giving the highest first similarity is determined to be the person of the detected face (step 56 in FIG. 3).
  • As in the embodiment above, it may be confirmed that the highest second similarity is at least a predetermined level, such as 50% (step 61 in FIG. 4), before the person corresponding to the stored parallax image giving that similarity is determined to be the person of the detected face. Likewise, the person giving the highest first similarity may be determined to be the person of the detected face only when the highest first similarity is at least a predetermined level, such as 20% (step 63 in FIG. 4).
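The two-view decision of steps 134-138 amounts to picking the overall best score across the first similarities and both sets of second similarities. The function below is an illustrative condensation with assumed names (the optional 50%/20% floors described just above are omitted for brevity).

```python
def decide_with_two_views(first, second_left, second_right):
    """first, second_left, second_right map person name -> similarity (%).
    The person giving the single highest score across the first similarities
    and the second similarities of both parallax images is chosen."""
    candidates = []
    for name in first:
        candidates.append((first[name], name))
        candidates.append((second_left[name], name))
        candidates.append((second_right[name], name))
    return max(candidates)[1]
```

Computing a second similarity per viewpoint helps when one viewpoint's parallax image is degraded (for example by the blind-spot regions discussed in the next modification).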
  • 14 to 16 show other modified examples.
  • the blind spot area is excluded from the image portion used for similarity calculation.
  • FIG. 14 shows an example of the left-viewpoint parallax image 161 and an example of the right-viewpoint parallax image 163.
  • When images are captured from different viewpoints, such as the left-viewpoint image 71 captured from the left viewpoint and the right-viewpoint image 72 captured from the right viewpoint, there are subject portions that are not visible from the left viewpoint but are visible from the right viewpoint, and subject portions that are not visible from the right viewpoint but are visible from the left viewpoint. As a result, some portions exist in the right-viewpoint image 72 but not in the left-viewpoint image 71, and some portions exist in the left-viewpoint image 71 but not in the right-viewpoint image 72. A portion that exists in one image but not in the other is referred to as a blind-spot region. Just as in the captured images, blind-spot regions arise in the left-viewpoint parallax image 161 and the right-viewpoint parallax image 163 described above.
  • In the left-viewpoint parallax image 161, a blind-spot region 162 arises on the left side of the face-corresponding portion.
  • In the right-viewpoint parallax image 163, a blind-spot region 164 arises on the right side of the face-corresponding portion.
  • FIG. 15 shows the face image portion 166 of the left-viewpoint parallax image 161.
  • as described above, the blind spot region 162 arises in the face image portion 166 of the left viewpoint parallax image 161. Because of the blind spot region 162, even if this face image portion 166 is compared with the parallax images 41, 42, and 43 stored in advance in the second similarity calculation process, the calculated second similarity becomes small. For this reason, in this modified example, as shown on the right side of FIG. 15, the half (left half or right half) of the face image portion 166 in which the blind spot region 162 exists is excluded, and only the image portion free of the blind spot region is compared with the parallax images 41, 42, and 43 stored in advance in the second similarity calculation process.
  • even when blind spot regions arise in the left viewpoint parallax image 161 and the right viewpoint parallax image 163, the second similarity calculation process can thus be performed using the portion where no blind spot region exists.
  • FIG. 16 is part of a flowchart of a person authentication processing procedure that uses the image portion, in the left half or the right half of the face image described above, on the side where no blind spot region exists.
  • FIG. 16 corresponds to FIG.
  • as described above, a parallax image is generated from the left viewpoint image 71 and the right viewpoint image 72 (step 58). It is then determined whether the area of the blind spot region within the face-equivalent portion of the generated parallax image is 5% or more of the area of the face-equivalent portion (step 151).
  • if the area of the blind spot region is less than 5% (NO in step 151), the influence of the blind spot region on the similarity calculated between the face image portion and the parallax images 41, 42, and 43 stored in advance is considered small. For this reason, the second similarity calculation process, which calculates the similarity between the face image portion of the generated parallax image and the parallax images 41, 42, and 43 stored in advance, is performed without excluding the blind spot region (step 156).
  • if the area of the blind spot region is 5% or more (YES in step 151), the influence of the blind spot region on the similarity calculated between the face image portion and the parallax images 41, 42, and 43 stored in advance cannot be ignored.
  • in that case, the face-equivalent portion is rotated so as to stand upright, and is divided into a left half region and a right half region (step 152).
  • the area of the blind spot region included in the left half region is compared with that included in the right half region. If the blind spot area in the left half region is larger (YES in step 153), the second similarity calculation process calculates the similarity between the right half region of the face-equivalent portion, where the blind spot area is smaller, and the parallax images 41, 42, and 43 stored in advance (step 154).
  • conversely, if the blind spot area in the right half region is larger than that in the left half region (NO in step 153), the second similarity calculation process calculates the similarity between the left half region of the face-equivalent portion, where the blind spot area is smaller, and the parallax images 41, 42, and 43 stored in advance (step 155).
  • in this way, a relatively high second similarity can be obtained even when a blind spot region exists.
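As a minimal Python sketch of the half-selection logic in steps 151–155 (the function name, the boolean blind-spot mask representation, and the list-of-lists disparity format are illustrative assumptions, not from the patent):

```python
def choose_face_half(face_disp, blind):
    """face_disp: 2-D list of disparity values for the upright face
    region; blind: same-shape 2-D list of booleans marking blind-spot
    pixels. If blind-spot pixels cover less than 5% of the face region
    (step 151), use the whole region; otherwise return the left or
    right half containing the smaller blind-spot area (steps 152-155)."""
    h, w = len(face_disp), len(face_disp[0])
    total_blind = sum(sum(row) for row in blind)
    if total_blind < 0.05 * h * w:
        return face_disp                        # whole face usable (step 156 path)
    mid = w // 2
    left_blind = sum(sum(row[:mid]) for row in blind)
    right_blind = total_blind - left_blind
    if left_blind > right_blind:                # YES in step 153
        return [row[mid:] for row in face_disp]  # use right half (step 154)
    return [row[:mid] for row in face_disp]      # use left half (step 155)
```

The returned sub-region would then be compared with the stored parallax images 41, 42, and 43 in the second similarity calculation.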
  • the right eye viewpoint may be the first viewpoint and the left eye viewpoint may be the second viewpoint. Conversely, the left eye viewpoint may be the first viewpoint and the right eye viewpoint may be the second viewpoint.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention improves the recognition rate of a person. A first similarity calculation is executed wherein the similarity between a person and prestored person-identifying data is calculated on the basis of a left viewpoint image and a right viewpoint image obtained by capturing an image of a person. A plurality of disparity images is stored in association with a plurality of persons. A disparity image is generated from a left viewpoint image and a right viewpoint image (step 58). A second similarity calculation is executed wherein the similarity between the prestored plurality of disparity images and the generated disparity image is calculated (step 60). The name of the person whose image was captured is determined on the basis of a first similarity obtained by the first similarity calculation and a second similarity obtained by the second similarity calculation (step 65).

Description

Person recognition apparatus and method of controlling operation thereof
 The present invention relates to a person recognition apparatus and an operation control method thereof.
 Some digital cameras have a function of recognizing a specific person among the persons included in a subject and focusing on that person. However, the performance of such a function is still not sufficient: the person may be recognized erroneously or not recognized at all.
 For example, there are techniques that switch between face recognition in a two-dimensional mode and face recognition in a three-dimensional mode (Patent Document 1), that perform face recognition using face data registered by generating a three-dimensional model (Patent Document 2), and that generate unevenness data from the shadow area of a face and perform face recognition with reference to registered unevenness data (Patent Document 3).
JP 2009-60379 A; JP 2009-37540 A; JP 2008-305192 A
 However, the performance of the person recognition function is still not sufficient, and a person may be recognized erroneously or not recognized at all. Furthermore, generating a three-dimensional model requires an enormous amount of computation, so person recognition cannot be performed quickly.
 An object of the present invention is to recognize a person relatively quickly and relatively accurately.
 A person recognition apparatus according to the present invention comprises: parallax image generating means for generating a parallax image from a plurality of target images; person determining means for determining whether a person's face is included in the target images; first similarity calculating means for calculating, when the person determining means determines that a person's face is included, the similarity between that person and a recognition-target person specified by person specifying data stored in advance; first determining means for determining whether the similarity calculated by the first similarity calculating means is equal to or higher than a predetermined level; first person determining means for determining, in response to the first determining means determining that the similarity is equal to or higher than the predetermined level, the person so determined as the person of the face included in the target images; second similarity calculating means for calculating, in response to the first determining means determining that the similarity is less than the predetermined level, the similarity between the portion corresponding to the face included in the parallax image generated by the parallax image generating means and a parallax image of a face stored in advance in correspondence with the recognition-target person; and second person determining means for specifying the person of the face image based on the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
 The present invention also provides an operation control method suited to the above person recognition apparatus. In this method, parallax image generating means generates a parallax image from a plurality of target images; person determining means determines whether a person's face is included in the target images; when the person determining means determines that a person's face is included, first similarity calculating means calculates the similarity between that person and a recognition-target person specified by person specifying data stored in advance; first determining means determines whether the calculated similarity is equal to or higher than a predetermined level; in response to the similarity being determined to be equal to or higher than the predetermined level, first person determining means determines the person so determined as the person of the face included in the target images; in response to the similarity being determined to be less than the predetermined level, second similarity calculating means calculates the similarity between the portion corresponding to the face included in the generated parallax image and a parallax image of a face stored in advance in correspondence with the recognition-target person; and second person determining means specifies the person of the face image based on the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
 According to the present invention, a parallax image is generated from a plurality of target images. Whether a person's face is included in the target images is determined, and if so, the similarity between that person and a recognition-target person specified by person specifying data stored in advance is calculated. If the calculated similarity is equal to or higher than a predetermined level, the result is highly reliable, so the person determined to reach the predetermined level is decided to be the person of the face included in the target images. If the calculated similarity is less than the predetermined level, the reliability is considered low. In that case, the second similarity calculating means calculates the similarity between the portion corresponding to the face included in the generated parallax image and a parallax image of a face stored in advance in correspondence with the recognition-target person, and the person of the face image is specified based on the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means. Since a parallax image also represents three-dimensional information about the person, recognizing a person using such a parallax image allows relatively accurate recognition. Moreover, generating a parallax image is comparatively simple compared with generating a three-dimensional model, so the person can be recognized quickly.
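The two-stage decision described above can be outlined in Python as follows. This is an illustrative sketch, not the patented implementation: the function name, the dictionary inputs, the pluggable scoring functions, and the policy of combining the two similarities by taking the larger (one modification described later in this document) are assumptions.

```python
def recognize(face_img, templates, face_disp, stored_disp,
              first_sim, second_sim, level1=80, level2=80):
    """Two-stage person recognition sketch.
    templates: person name -> stored 2-D template (person specifying data).
    stored_disp: person name -> stored face parallax image.
    first_sim / second_sim: scoring functions returning a percentage.
    If the best first similarity reaches level1, accept immediately;
    otherwise fall back to the parallax-based second similarity and
    take, per person, the larger of the two scores."""
    best = max(templates, key=lambda p: first_sim(face_img, templates[p]))
    if first_sim(face_img, templates[best]) >= level1:
        return best                       # 2-D match is trusted as-is
    scores = {p: max(first_sim(face_img, templates[p]),
                     second_sim(face_disp, stored_disp[p]))
              for p in stored_disp}
    name = max(scores, key=scores.get)
    return name if scores[name] >= level2 else None
```

Returning `None` stands for "person could not be recognized".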
 The apparatus may further comprise parallax level difference determining means for determining whether the difference between the maximum and minimum pixel values in the portion corresponding to the face included in the parallax image generated by the parallax image generating means is equal to or higher than a predetermined level. In this case, the second similarity calculating means calculates the similarity between the face-equivalent portion of the generated parallax image and the parallax image of the face stored in advance in correspondence with the recognition-target person, for example, when the first determining means determines that the similarity is less than the predetermined level and the parallax level difference determining means determines that the difference between the maximum and minimum pixel values is equal to or higher than the predetermined level.
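A minimal sketch of such a parallax level difference check; the function name and the concrete threshold value are assumptions for illustration (the patent only specifies "a predetermined level"):

```python
def parallax_range_sufficient(face_disp, min_range=10):
    """True when the spread (max - min) of disparity pixel values in the
    face-equivalent region is at least min_range, i.e. the parallax
    image carries enough depth relief for the second similarity
    calculation to be meaningful."""
    values = [v for row in face_disp for v in row]
    return (max(values) - min(values)) >= min_range
```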
 The second person determining means specifies the person of the face image, for example, when the similarity calculated by the second similarity calculating means is equal to or higher than a predetermined level.
 The second person determining means may determine, as the person of the face image, the person who gives the larger of the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
 The apparatus may further comprise an imaging device that images a subject a plurality of times while changing the focus amount to obtain a plurality of target images. In this case, the parallax image generating means generates a parallax image from the plurality of target images obtained by the imaging device, for example.
 The apparatus may further comprise an imaging device that obtains a plurality of target images by imaging a plurality of times while the subject moves relatively in the horizontal direction. In this case, the parallax image generating means generates a parallax image from the plurality of target images obtained by the imaging device, for example.
 The parallax image generating means may generate, for example, two parallax images, one for a first target image captured from a first viewpoint and one for a second target image captured from a second viewpoint. In this case, the second similarity calculating means calculates, for example, the similarity for the portion corresponding to the face included in each of the two parallax images.
 The second similarity calculating means may calculate the similarity between the parallax image of the face stored in advance in correspondence with the recognition-target person and whichever of the left part and the right part of the face-equivalent portion of the generated parallax image contains less blind spot region, a blind spot region being a region that is included in the first target image but not in the second target image, or that is included in the second target image but not in the first target image.
FIG. 1 is a block diagram showing the electrical configuration of a digital camera. FIG. 2 is an example of parallax images. FIG. 3 is a flowchart showing a person recognition processing procedure. FIG. 4 is a flowchart showing a person recognition processing procedure. FIG. 5 shows an example of a left viewpoint image and a right viewpoint image. FIG. 6 is an example of a similarity table. FIG. 7 is an example of a parallax image. FIG. 8 shows an example of a left viewpoint image and a right viewpoint image. FIG. 9 is a flowchart showing part of a person recognition processing procedure. FIG. 10 is an example of subject images captured with different focus amounts. FIG. 11 is an example of subject images obtained by through-image capturing. FIG. 12 is a flowchart showing part of a person recognition processing procedure. FIG. 13 shows an example of a left viewpoint parallax image and a right viewpoint parallax image. FIG. 14 shows a left viewpoint parallax image and a right viewpoint parallax image. FIG. 15 shows a parallax image portion corresponding to a face and a parallax image portion corresponding to a face from which a blind spot region has been excluded. FIG. 16 is a flowchart showing a person recognition processing procedure.
 FIG. 1 shows an embodiment of the present invention and is a block diagram showing the electrical configuration of a digital camera.
 The digital camera of this embodiment can image a person and recognize the imaged person.
 The overall operation of the digital camera 1 is controlled by a CPU 10. The digital still camera 1 is provided with an operating device (not shown) including various buttons, such as a mode setting button for selecting among a stereoscopic imaging mode for generating stereoscopic images, an imaging mode for ordinary two-dimensional imaging, a two-dimensional playback mode, a stereoscopic playback mode for displaying stereoscopic images, a setting mode and the like, and a two-stroke shutter release button. Operation signals output from the operating device are input to the CPU 10.
 The digital still camera includes a left viewpoint imaging device 1, which captures a left viewpoint image (target image) viewed with the left eye by a viewer of the stereoscopic image, and a right viewpoint imaging device 11, which captures a right viewpoint image (target image) viewed with the right eye.
 The left viewpoint imaging device 1 includes a CCD 4, which images the subject and outputs a left viewpoint image signal representing the left viewpoint image. In front of the CCD 4 are provided a zoom lens 2 and a focus lens 3, whose zoom amount and focus amount are controlled by motor drivers 5 and 6, respectively.
 When the stereoscopic imaging mode is set, pre-imaging (through-image capturing) is performed before the shutter release button is pressed, and a left viewpoint image signal is output from the CCD 4 at a fixed period. The left viewpoint image signal output from the CCD 4 is converted into left viewpoint image data by an analog/digital conversion circuit 7. The left viewpoint image data is input to a digital signal processing device 21 by an image input controller 8.
 The right viewpoint imaging device 11 includes a CCD 14, which images the subject and outputs a right viewpoint image signal representing the right viewpoint image. In front of the CCD 14 are provided a zoom lens 12 and a focus lens 13, whose zoom amount and focus amount are controlled by motor drivers 15 and 16, respectively.
 In the stereoscopic imaging mode, a right viewpoint image signal is output from the CCD 14 during pre-imaging. The right viewpoint image signal output from the CCD 14 is converted into right viewpoint image data by an analog/digital conversion circuit 17. The right viewpoint image data is input to the digital signal processing device 21 by an image input controller 18.
 Predetermined digital signal processing is applied to the left viewpoint image data and the right viewpoint image data in the digital signal processing device 21. The left and right viewpoint image data output from the digital signal processing device 21 are supplied to a display device 27 via a display control device 26, and the captured image is displayed on the display screen of the display device 27 as a stereoscopic moving image (through-image display). Instead of supplying both the left and right viewpoint image data to display the subject stereoscopically, either the left viewpoint image data or the right viewpoint image data alone may be supplied to display the subject two-dimensionally.
 The left and right viewpoint image data output from the digital signal processing device 21 are also supplied to a parallax image generating device 28, in which a parallax image 82 (see FIG. 7), described later, is generated. When a person is included in the subject, the person is identified using the generated parallax image 82 and so on, and the person's name is displayed in correspondence with the person.
 When the shutter release button is pressed halfway, the subject is imaged, and at least one of the resulting left and right viewpoint image data is input to an integrating device 23, where high-frequency components and luminance data are integrated. The focus amounts of the focus lenses 3 and 13 are determined based on the integrated value of the high-frequency components, and the shutter speed of the so-called electronic shutter is determined based on the integrated value of the luminance data. When the shutter release button is pressed fully, the subject is imaged, and the left viewpoint image data obtained by the left viewpoint imaging device 1 and the right viewpoint image data obtained by the right viewpoint imaging device 11 are supplied to a main memory 20 via a memory control device 19 and temporarily stored there. The left and right viewpoint image data are then read from the main memory 20 and input to a compression/decompression processing device 22, where they are compressed; the compressed left and right viewpoint image data are recorded on a memory card 25 by a memory control device 24.
 When the stereoscopic playback mode is set, the left and right viewpoint image data recorded on the memory card 25 are read out and decompressed by the compression/decompression processing device 22. The decompressed left and right viewpoint image data are supplied to the display device 27, whereby a stereoscopic image is displayed.
 FIG. 2 shows an example of parallax images 41-43 represented by parallax image data stored in the main memory 20 of the digital camera.
 The parallax images 41-43 represent, as luminance values, the amount of horizontal displacement between corresponding pixels in at least two images captured from different viewpoints. Since depth is determined by the amount of parallax, the parallax images 41-43 can also be said to represent information about the depth of the subject.
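As an illustration of how such a displacement-to-value conversion might work, the following sketch performs a naive per-pixel horizontal search (the function name, the 2-D list format, and the single-pixel matching cost are assumptions for illustration; the patent does not specify a matching method):

```python
def disparity_map(left, right, max_disp=4):
    """Per-pixel disparity between two grayscale images (equal-size 2-D
    lists). For each left-image pixel, find the horizontal shift d
    (0..max_disp) minimizing the absolute difference with the
    right-image pixel at x - d; the shift itself becomes the pixel
    value of the parallax image, so nearer subjects (larger parallax)
    come out brighter."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_disp, x) + 1):
                cost = abs(left[y][x] - right[y][x - d])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

Practical stereo matchers compare blocks rather than single pixels, but the output format is the same: a per-pixel map of horizontal shifts.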
 FIG. 2 shows a parallax image 41 of the face of a person named "Tokkyo Taro", a parallax image 42 of the face of a person named "Jitsuyo Shinko", and a parallax image 43 of the face of a person named "Isho Ichiro". As described later, a parallax image is generated from a subject image obtained by imaging a subject, and the person corresponding to the stored face parallax image most similar to the face-equivalent portion of the generated parallax image is determined to be the name of the person in the subject. Although three parallax images 41-43 are illustrated in FIG. 2, data representing parallax images of more faces may be stored in the memory. Also, instead of storing image data representing a single face parallax image per person, image data representing many face parallax images may be stored per person.
 FIGS. 3 and 4 are flowcharts showing a person recognition processing procedure using a digital camera. Needless to say, a digital camera need not necessarily be used as long as person recognition processing can be performed; for example, person recognition processing can also be applied to an image captured in advance.
 The subject is imaged, and a right viewpoint image and a left viewpoint image are obtained (step 51 in FIG. 3). FIG. 5 shows an example of a left viewpoint image 71 and an example of a right viewpoint image 72. Since the viewpoints of the left viewpoint image 71 and the right viewpoint image 72 differ, parallax arises between them.
 When the left viewpoint image 71 and the right viewpoint image 72 are obtained, face detection processing is performed on each of the viewpoint images 71 and 72 (step 52 in FIG. 3). Rather than performing face detection on both the left viewpoint image 71 and the right viewpoint image 72, it may be performed on only one of the images. If no face is detected (YES in step 53 in FIG. 3), person recognition is impossible and the person recognition processing ends (step 57 in FIG. 3).
 When a face is detected (NO in step 53 in FIG. 3), a first similarity calculation process is performed on the left viewpoint image 71 or the right viewpoint image 72 in which the face was detected (step 54 in FIG. 3). The first similarity calculation process does not use a parallax image; it uses the captured left viewpoint image 71 or right viewpoint image 72. For example, template images (person specifying data) such as face images representing the features of registered persons and images of face parts (eyes, nose, mouth, eyebrows, and so on) are stored in correspondence with person names; a stored template image is compared with the captured left viewpoint image 71 or right viewpoint image 72, and a similarity is calculated according to the degree of match.
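A toy version of such template-based scoring might look like this; the mean-absolute-difference metric and the function name are assumptions for illustration (the patent does not specify the matching metric):

```python
def first_similarity(face, template):
    """Rough first-similarity score between a detected face patch and a
    stored template image (equal-size 2-D lists of 0-255 grayscale
    values): 100% minus the mean absolute pixel difference expressed as
    a percentage of the full 255 range."""
    diffs = [abs(a - b) for rf, rt in zip(face, template)
                        for a, b in zip(rf, rt)]
    return 100.0 * (1.0 - sum(diffs) / (255.0 * len(diffs)))
```

A real implementation would first align and normalize the face patch and would typically combine scores from several part templates (eyes, nose, mouth) per person.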
 FIG. 6 shows an example of the similarity table.
 The similarity table stores similarities in association with the names of the persons for whom image data representing parallax images is stored. For ease of understanding, FIG. 6 also shows the face parallax images 41-43 of FIG. 2. These parallax images 41-43 may or may not actually be stored in the similarity table.
 For example, the first similarity calculation described above yields a similarity of 65% between the imaged person (the person whose face was detected) and "Patent Taro", 50% with "Practical Shinko", and 70% with "Design Ichiro". These calculated first similarities are stored in the similarity table in association with the person names.
 If the similarity calculated by the first similarity calculation (the first similarity) is 80% (a predetermined level) or more, it is highly credible that the person whose face was detected is the person corresponding to the template image that produced that similarity. In this case, the similarity calculation using a parallax image (the second similarity calculation, described later) is not performed. The person for whom the first similarity was determined to be 80% or more is decided to be the person of the detected face (step 56 in FIG. 3). For example, if the first similarity for "Patent Taro" is 80% or more, the person of the detected face is decided to be "Patent Taro".
 If the similarity calculated by the first similarity calculation is less than 80% (NO in step 55 in FIG. 3), a parallax image is generated from the left viewpoint image 71 and the right viewpoint image 72.
 FIG. 7 shows an example of the generated parallax image 81.
 As described above, the parallax image 81 represents the parallax between corresponding pixels of the left viewpoint image 71 and the right viewpoint image 72. The parallax is expressed as density.
 The parallax image 81 includes a portion 82 corresponding to a face, and this face-corresponding portion 82 is compared with the parallax images stored in advance for each person.
 Referring to FIG. 4, it is determined whether the difference (parallax difference) between the maximum density (maximum parallax) and the minimum density (minimum parallax) of the face-corresponding portion 82 included in the parallax image 81 is 10% or more (step 59). If the density difference between the maximum and minimum densities of the face-corresponding portion 82 is 10% or more, the imaged face is comparatively three-dimensional, and the result of person recognition performed using such a parallax image 81 is highly credible. Conversely, if the density difference is less than 10%, the credibility is low. Therefore, if the density difference between the maximum and minimum densities of the face-corresponding portion 82 is less than 10% (NO in step 59 in FIG. 4), person recognition using the parallax image 81 is not performed. If the first similarity is not 50% or more (NO in step 66 in FIG. 4), the credibility of the first similarity is also low, so the person cannot be recognized (step 67 in FIG. 4). If the first similarity is 50% or more (YES in step 66 in FIG. 4), the credibility of the first similarity is considered comparatively high, so the person determined on the basis of the first similarity is decided to be the person of the detected face (step 56 in FIG. 3).
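The parallax-spread gate of step 59 can be sketched as follows. The 0-255 density scale and the function name are assumptions of this sketch; the description only requires comparing the maximum-minimum spread of the face-corresponding portion against a 10% level.

```python
def parallax_range_ok(face_parallax, threshold_pct=10.0):
    """Step 59 gate: use the parallax image for recognition only if the
    spread between maximum and minimum parallax density is at least 10%.
    Densities are assumed to lie on a 0-255 scale (an assumption)."""
    flat = [v for row in face_parallax for v in row]
    spread = (max(flat) - min(flat)) / 255.0 * 100.0
    return spread >= threshold_pct

flat_face = [[120, 122], [121, 123]]   # nearly flat: low credibility
real_face = [[80, 200], [120, 160]]    # clearly three-dimensional
print(parallax_range_ok(flat_face))    # False
print(parallax_range_ok(real_face))    # True
```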
 If the density difference between the maximum and minimum densities of the face-corresponding portion 82 is 10% or more (YES in step 59 in FIG. 4), second similarity calculation processing is performed using the parallax image 81 (step 60 in FIG. 4). The second similarity calculation calculates, as a similarity (the second similarity), the degree of coincidence between the face-corresponding portion 82 included in the parallax image 81 and the face parallax images stored in advance (see FIG. 2). The calculated similarities are stored in the similarity table shown in FIG. 6.
 For example, the second similarity calculation described above yields a similarity of 40% between the face-corresponding portion 82 included in the parallax image 81 and the face parallax image 41 of "Patent Taro", 35% with the face parallax image 42 of "Practical Shinko", and 50% with the face parallax image 43 of "Design Ichiro". These calculated second similarities are stored in the similarity table shown in FIG. 6 in association with the person names.
 If the calculated second similarity is not 50% or more (NO in step 61 in FIG. 4), the credibility of a person determined on the basis of the second similarity is low. Likewise, if the first similarity is 20% or less (YES in step 62 in FIG. 4), the credibility of a person determined on the basis of the first similarity is low. Therefore, if the second similarity is not 50% or more (NO in step 61 in FIG. 4), or if the first similarity is 20% or less (YES in step 62 in FIG. 4), it is decided that the person cannot be recognized (step 67 in FIG. 4).
 If the second similarity is 50% or more (YES in step 61 in FIG. 4) and the first similarity is greater than 20% (NO in step 62 in FIG. 4), the person is determined using both the first similarity and the second similarity. The first similarity and the second similarity are compared (step 63 in FIG. 4). If the second similarity is greater than the first similarity (YES in step 64 in FIG. 4), the second similarity is more credible than the first, so the person judged to have the highest of the second similarities is decided to be the person of the detected face (step 65 in FIG. 4). Conversely, if the first similarity is greater than the second similarity (NO in step 64 in FIG. 4), the first similarity is more credible, so the person judged to have the highest first similarity is decided to be the person of the detected face (step 56 in FIG. 3).
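Taken together, steps 55 and 61-67 amount to the decision flow sketched below. The 80%, 50% and 20% thresholds come from the description; the function name and the dictionary layout are illustrative assumptions.

```python
def recognize(first_sims, second_sims):
    """Sketch of the decision flow: first_sims and second_sims map person
    names to similarity percentages; second_sims is empty when the
    parallax-based calculation was not performed."""
    name1, s1 = max(first_sims.items(), key=lambda kv: kv[1])
    if s1 >= 80:                       # step 55: first similarity decisive
        return name1
    if not second_sims:                # parallax image unusable (step 59 NO)
        return name1 if s1 >= 50 else None   # steps 66 / 67
    name2, s2 = max(second_sims.items(), key=lambda kv: kv[1])
    if s2 < 50 or s1 <= 20:            # steps 61 / 62: neither is credible
        return None                    # step 67: recognition impossible
    return name2 if s2 > s1 else name1  # steps 63-65 / step 56

first = {"Patent Taro": 65, "Practical Shinko": 50, "Design Ichiro": 70}
second = {"Patent Taro": 40, "Practical Shinko": 35, "Design Ichiro": 50}
print(recognize(first, second))  # "Design Ichiro": highest first similarity wins
```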
 FIG. 8 shows an example of a left viewpoint image 91 and a right viewpoint image 93 on which the names of persons recognized as described above are displayed.
 When a person's name has been recognized as described above, the recognized person names 92 and 94 are displayed near the face images. When the shutter-release button is fully pressed, an image file representing the captured image is generated, and data representing the recognized person name is stored in the header of the generated image file.
 When a plurality of faces are detected in the captured image, the above-described processing is repeated for each detected face.
 In the embodiment described above, the first similarity and the second similarity are simply compared. Alternatively, the first similarity and the second similarity may be combined by a weighted average using predetermined weighting coefficients, and the person may be recognized on the basis of the resulting value.
 Referring to FIG. 6, weighted-average values obtained by averaging the first similarity and the second similarity at a one-to-one ratio are stored in the similarity table. A person may be recognized on the basis of the weighted-average values obtained in this manner; for example, the person with the largest weighted-average value is recognized as the imaged person. The weighting is not limited to coefficients that weight the first and second similarities one to one; any other coefficients can be used.
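A minimal sketch of this weighted-average variant, with the one-to-one coefficients of FIG. 6 as defaults (the function name and data layout are illustrative):

```python
def weighted_scores(first_sims, second_sims, w1=0.5, w2=0.5):
    """Weighted average of the two similarities per person. w1 = w2 = 0.5
    reproduces the one-to-one ratio of FIG. 6; any other coefficients
    may be used."""
    return {name: w1 * first_sims[name] + w2 * second_sims[name]
            for name in first_sims}

first = {"Patent Taro": 65, "Practical Shinko": 50, "Design Ichiro": 70}
second = {"Patent Taro": 40, "Practical Shinko": 35, "Design Ichiro": 50}
scores = weighted_scores(first, second)
print(max(scores, key=scores.get))  # "Design Ichiro" (60.0 vs 52.5 and 42.5)
```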
 FIGS. 9 and 10 show a modification.
 FIG. 9 is a flowchart showing part of the person recognition processing procedure, and FIG. 10 shows examples of subject images obtained by imaging. In this modification, the left viewpoint imaging device 1 or the right viewpoint imaging device 11 is used; of course, both devices 1 and 11 may be used.
 As described above, if the calculated first similarity is not 80% or more (NO in step 55 in FIG. 3), parallax image generation processing is performed. In this modification, the focus amount is changed (the position of the focus lens 2 or 12 is changed) and the subject is imaged a plurality of times in order to generate the parallax image (step 101 in FIG. 9). For example, the focus amount is adjusted so that the nose of a person in the subject is in focus (focus amount: near), and the subject is imaged. As shown in the upper part of FIG. 10, this yields a subject image 111 in which the person's nose is in focus. Next, the focus amount is adjusted so that the contour of the person's face is in focus (focus amount: middle), and the subject is imaged. As shown in the middle of FIG. 10, this yields a subject image 112 in which the contour of the person's face is in focus. Then the focus amount is adjusted so that a tree in the background behind the person is in focus (focus amount: far), and the subject is imaged. As shown in the lower part of FIG. 10, this yields a subject image 113 in which the tree behind the person is in focus.
 The relative distance of each portion of the face image is calculated from the spatial frequencies of the plurality of subject images 111-113 obtained by imaging the subject while changing the focus amount in this manner, and a parallax image similar to that described above is obtained. By imaging the subject with different focus amounts, the relative distance to each part of the subject can be determined, and a parallax image can be created from the relative distances. Person recognition processing is performed using the obtained parallax image, as described above.
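The depth-from-focus idea can be sketched as follows: for each image region, the focus setting at which the region exhibits the most high-frequency energy indicates its relative distance. The sharpness measure below is a crude stand-in for the spatial-frequency analysis, and the depth labels are purely illustrative.

```python
def sharpness(patch):
    """Crude high-frequency energy: sum of squared differences between
    horizontally adjacent pixels (a stand-in for spatial-frequency
    analysis; an assumption of this sketch)."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in patch for i in range(len(row) - 1))

def relative_depth(patches_per_focus, focus_depths):
    """For one image region, pick the focus setting at which the region
    is sharpest; the corresponding depth serves as its relative distance."""
    best = max(range(len(patches_per_focus)),
               key=lambda k: sharpness(patches_per_focus[k]))
    return focus_depths[best]

# The nose region is sharpest in the "near" exposure, so it is assigned
# the "near" depth.
nose = [
    [[0, 9, 0, 9]],   # near focus: high contrast (sharp)
    [[4, 5, 4, 5]],   # middle focus: blurred
    [[5, 5, 5, 5]],   # far focus: flat
]
print(relative_depth(nose, ["near", "middle", "far"]))  # near
```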
 As described above, a person can be recognized as long as a parallax image is obtained, even if the left viewpoint image 71 and the right viewpoint image 72 are not obtained. Person recognition processing using a parallax image thus becomes possible even for images captured by a digital camera that cannot capture stereoscopic images. In the embodiment described above, the focus amount is changed three times, but the number of changes is not limited to three; the focus amount may be changed any other number of times.
 Instead of changing the focus amount to obtain the plurality of subject images 111-113 and generating a parallax image from them, a parallax image can also be generated using so-called through-image capture, in which the subject is imaged a plurality of times at a fixed period. In this case as well, person recognition processing using a parallax image becomes possible for images captured by a digital camera that cannot capture stereoscopic images.
 FIG. 11 shows an example of subject images 121-124 obtained by through-image capture. Either the left viewpoint imaging device 1 or the right viewpoint imaging device 11, or both, may be used.
 A parallax image is generated from, among the subject images 121-124 obtained by through-image capture, an image in which a face image is detected (for example, image 122) and an image in which a face image of an orientation different from that of the detected face image is detected (for example, image 123).
 FIGS. 12 and 13 show another modification.
 In this modification, both a left viewpoint parallax image and a right viewpoint parallax image are generated, and person recognition processing is performed using both parallax images.
 FIG. 12 is a flowchart showing part of the person recognition processing procedure and corresponds to the processing procedure of FIG. 4. FIG. 13 shows an example of a left viewpoint parallax image 141 and an example of a right viewpoint parallax image 142.
 As shown in FIG. 13, the left viewpoint parallax image 141 represents the parallax between the left viewpoint image 71 (see FIG. 5) and the right viewpoint image 72 (see FIG. 5) with the left viewpoint image 71 as the reference. In contrast, the right viewpoint parallax image 142 represents the parallax between the left viewpoint image 71 and the right viewpoint image 72 with the right viewpoint image 72 as the reference.
 Referring to FIG. 12, the left parallax image 141 and the right parallax image 142 are generated from the left viewpoint image 71 and the right viewpoint image 72 as described above (step 131). For the generated left parallax image 141, the second similarity calculation processing described above, which calculates the similarity to the parallax images 41, 42 and 43 stored in advance, is performed (step 132). Second similarity calculation processing that calculates the similarity to the stored parallax images 41, 42 and 43 is likewise performed for the generated right parallax image 142 (step 133).
 The first similarity obtained by the first similarity calculation already performed is compared with the second similarity obtained by the second similarity calculation performed on the left parallax image 141, and with the second similarity obtained by the second similarity calculation performed on the right parallax image 142 (step 134).
 If the second similarity obtained from the second similarity calculation performed on the left parallax image 141 is the highest (YES in step 135), the person corresponding to the stored parallax image that gives that highest second similarity is decided to be the person of the detected face (step 136).
 If the second similarity obtained from the second similarity calculation performed on the right parallax image is the highest (NO in step 135, YES in step 137), the person corresponding to the stored parallax image that gives that highest second similarity is decided to be the person of the detected face (step 138).
 If the first similarity is the highest (NO in both step 135 and step 137), the person giving that highest first similarity is decided to be the person of the detected face (step 56 in FIG. 3).
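The three-way decision of steps 134-138 can be sketched as follows. The dictionary layout, function name and tie-breaking order are assumptions of this sketch; the description only requires that whichever of the three similarity sets contains the overall highest score determines the person.

```python
def decide(first_sims, left_second_sims, right_second_sims):
    """Steps 134-138: compare the best template-based similarity with the
    best left- and right-parallax-based second similarities, and return
    the person who gives the overall highest score."""
    candidates = [max(s.items(), key=lambda kv: kv[1])
                  for s in (left_second_sims, right_second_sims, first_sims)]
    # Python's max keeps the first maximal item, so ties favor the left
    # parallax image, then the right one, then the first similarity --
    # mirroring the order of checks in steps 135 and 137.
    return max(candidates, key=lambda kv: kv[1])[0]

first = {"Patent Taro": 65, "Design Ichiro": 70}
left = {"Patent Taro": 75, "Design Ichiro": 40}
right = {"Patent Taro": 60, "Design Ichiro": 55}
print(decide(first, left, right))  # "Patent Taro" (75, from the left parallax image)
```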
 Also in the embodiment described above, the person corresponding to the stored parallax image giving the highest second similarity may be decided to be the person of the detected face on condition that the highest second similarity is at least a predetermined level, for example 50% or more (step 61 in FIG. 4). Likewise, the person giving the highest first similarity may be decided to be the person of the detected face on condition that the highest first similarity is at least a predetermined level, for example 20% or more (step 63 in FIG. 4).
 FIGS. 14 to 16 show another modification. In this modification, blind-spot regions are excluded from the image portions used for similarity calculation.
 FIG. 14 shows an example of a left viewpoint parallax image 161 and an example of a right viewpoint parallax image 163.
 When images are captured from different viewpoints, as with the left viewpoint image 71 captured from the left viewpoint and the right viewpoint image 72 captured from the right viewpoint, there are subject portions that are not visible from the left viewpoint but visible from the right viewpoint, and subject portions that are not visible from the right viewpoint but visible from the left viewpoint. As a result, there are portions that do not exist in the left viewpoint image 71 but exist in the right viewpoint image 72, and portions that do not exist in the right viewpoint image 72 but exist in the left viewpoint image 71. A portion that does not exist in one image but exists in the other is referred to here as a blind-spot region. Blind-spot regions arise not only in the images obtained by imaging but also, as described above, in the left viewpoint parallax image 161 and the right viewpoint parallax image 163.
 Referring to FIG. 14, in the left viewpoint parallax image 161 a blind-spot region 162 appears on the left side of the portion corresponding to the face, and in the right viewpoint parallax image 163 a blind-spot region 164 appears on the right side of the portion corresponding to the face.
 FIG. 15 shows the face image portion 166 of the left viewpoint parallax image 161.
 As described above and as shown on the left side of FIG. 15, the blind-spot region 162 appears in the face image portion 166 of the left viewpoint parallax image 161. Consequently, even if this face image portion 166 is compared with the stored parallax images 41, 42 and 43 in the second similarity calculation, the presence of the blind-spot region 162 makes the calculated second similarity small. In this modification, therefore, as shown on the right side of FIG. 15, of the left half and the right half of the face image portion 166, the region containing the blind-spot region 162 is excluded, and the image portion free of the blind-spot region 162 is compared with the stored parallax images 41, 42 and 43 in the second similarity calculation.
 Since the blind-spot region 162 lies on the left side of the face image portion 166, the right-half region 169, obtained by excluding the left-half region 168 (shown hatched) of the face image portion 166, is compared with the parallax images 41, 42 and 43 in the second similarity calculation. The second similarity obtained by this calculation is accordingly higher. Similarly, in the case of the right viewpoint parallax image 163, in which the blind-spot region lies on the right side of the face image portion, the right half of the face image portion is excluded, and the left-half image portion free of the blind-spot region 164 is compared with the parallax images 41, 42 and 43 in the second similarity calculation.
 In the modification described above, it is preferable that, after rotation processing has been performed so that the face image portion stands upright, the half of the face image portion (left or right) free of the blind-spot region be compared with the parallax images 41, 42 and 43.
 The modification described above uses the left viewpoint parallax image 161 and the right viewpoint parallax image 163; however, even when only a single parallax image is generated rather than both, the second similarity calculation can likewise be performed using the portion free of blind-spot regions.
 FIG. 16 is part of a flowchart of a person recognition processing procedure that uses, of the left half or the right half of the face image, the image portion on the side free of the blind-spot region. FIG. 16 corresponds to FIG. 4.
 As described above, a parallax image is generated from the left viewpoint image 71 and the right viewpoint image 72 (step 58). It is determined whether the area of the blind-spot region in the portion of the generated parallax image corresponding to the face is 5% or more of the area of the face-corresponding portion (step 151).
 If the area of the blind-spot region is less than 5% of the area of the face-corresponding portion (NO in step 151), the blind-spot region is considered to have little influence on the similarity calculation between the face image portion and the stored parallax images 41, 42 and 43. The second similarity calculation between the face image of the generated parallax image and the stored parallax images 41, 42 and 43 is therefore performed without the blind-spot exclusion processing described above (step 156).
 If the area of the blind-spot region is 5% or more of the area of the face-corresponding portion (YES in step 151), the influence of the blind-spot region on the similarity calculation between the face image portion and the stored parallax images 41, 42 and 43 cannot be ignored. In that case, as described above, the face-corresponding portion is first rotated so that it stands upright and is divided into a left-half region and a right-half region (step 152).
 The area of the blind-spot region contained in the left-half region of the face-corresponding portion is compared with the area of the blind-spot region contained in the right-half region. If the blind-spot area in the left-half region is larger than that in the right-half region (YES in step 153), second similarity calculation is performed between the right-half region of the face-corresponding portion, which contains the smaller blind-spot area, and the stored parallax images 41, 42 and 43 (step 154).
 Conversely, if the blind-spot area in the right-half region is larger than that in the left-half region (NO in step 153), second similarity calculation is performed between the left-half region of the face-corresponding portion, which contains the smaller blind-spot area, and the stored parallax images 41, 42 and 43 (step 155).
 A comparatively high second similarity can thus be obtained.
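The blind-spot handling of steps 151-155 can be sketched as follows, with the face-corresponding portion represented as a hypothetical boolean mask of blind-spot pixels (the mask representation and function name are assumptions of this sketch).

```python
def half_with_fewer_blind_pixels(face_mask):
    """face_mask is a 2-D array of 0/1 values marking blind-spot pixels in
    an upright face-corresponding portion. Returns which part to use for
    the second similarity calculation: "whole" when the blind-spot area is
    under 5% of the face area (step 151 / step 156), otherwise the half
    containing the smaller blind-spot area (steps 153-155)."""
    h = len(face_mask[0]) // 2
    total = sum(v for row in face_mask for v in row)
    area = len(face_mask) * len(face_mask[0])
    if total / area < 0.05:
        return "whole"                 # exclusion unnecessary (step 156)
    left = sum(v for row in face_mask for v in row[:h])
    right = sum(v for row in face_mask for v in row[h:])
    return "right half" if left > right else "left half"

# Blind-spot pixels concentrated on the left side -> use the right half,
# as in the left viewpoint parallax image 161 of FIG. 15.
mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
]
print(half_with_fewer_blind_pixels(mask))  # right half
```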
 The right-eye viewpoint may be regarded as the first viewpoint and the left-eye viewpoint as the second viewpoint; conversely, the left-eye viewpoint may be the first viewpoint and the right-eye viewpoint the second viewpoint.
 10 CPU
 28 Parallax image generation device

Claims (9)

  1.  A person recognition apparatus comprising:
     parallax image generating means for generating a parallax image from a plurality of target images;
     person determining means for determining whether a person's face is contained in the target images;
     first similarity calculating means for calculating, when the person determining means determines that a person's face is contained, a similarity between that person and a person to be recognized who is specified by person identification data stored in advance;
     first determining means for determining whether the similarity calculated by the first similarity calculating means is equal to or greater than a predetermined level;
     first person deciding means for deciding, in response to the first determining means determining that the similarity is equal to or greater than the predetermined level, that the person determined by the first determining means to be at or above the predetermined level is the person of the face contained in the target images;
     second similarity calculating means for calculating, in response to the first determining means determining that the similarity is less than the predetermined level, a similarity between a portion corresponding to a face contained in the parallax image generated by the parallax image generating means and a face parallax image stored in advance in correspondence with the person to be recognized; and
     second person deciding means for specifying the person of the face image on the basis of the similarity calculated by the first similarity calculating means and the similarity calculated by the second similarity calculating means.
  2.  The person recognition apparatus according to claim 1, further comprising parallax level difference determination means for determining whether or not the difference between the maximum and minimum pixel values in the portion corresponding to the face in the parallax image generated by the parallax image generating means is equal to or higher than a predetermined level,
     wherein the second similarity calculation means calculates the similarity between the portion corresponding to the face in the parallax image generated by the parallax image generating means and the face parallax image stored in advance for the recognition-target person in response to the first determination means determining that the similarity is less than the predetermined level and the parallax level difference determination means determining that the difference between the maximum and minimum pixel values is equal to or higher than the predetermined level.
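Claim 2 gates the parallax comparison on the face region actually containing depth variation, measured as the spread between the maximum and minimum parallax pixel values. A hedged sketch of that check, in which the threshold, the array shapes, and the sample data are invented for illustration:

```python
import numpy as np

# Illustrative gate from claim 2: run the parallax comparison only when
# the face region of the parallax image shows enough depth variation.
# min_range and the sample arrays are assumptions, not from the patent.

def has_depth_variation(face_parallax, min_range=10):
    """True when (max - min) of the parallax values in the face region
    reaches the predetermined level; int() avoids uint8 wrap-around."""
    return int(face_parallax.max()) - int(face_parallax.min()) >= min_range

flat = np.full((4, 4), 128, dtype=np.uint8)                 # no relief
bumpy = np.array([[120, 140], [125, 150]], dtype=np.uint8)  # real depth
```

A face region with essentially uniform parallax carries no useful 3-D shape, so skipping the second similarity calculation in that case avoids a comparison that could not discriminate anyway.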
  3.  The person recognition apparatus according to claim 1 or 2, wherein the second person determination means identifies the person of the face image when the similarity calculated by the second similarity calculation means is equal to or higher than a predetermined level.
  4.  The person recognition apparatus according to any one of claims 1 to 3, wherein the second person determination means determines, as the person of the face image, the person corresponding to the larger of the similarity calculated by the first similarity calculation means and the similarity calculated by the second similarity calculation means.
  5.  The person recognition apparatus according to any one of claims 1 to 4, further comprising an imaging device that obtains a plurality of target images by imaging a subject a plurality of times while changing the focus amount, wherein the parallax image generating means generates the parallax image from the plurality of target images obtained by the imaging device.
  6.  The person recognition apparatus according to any one of claims 1 to 4, further comprising an imaging device that obtains a plurality of target images by imaging a subject a plurality of times while the subject moves relatively in the horizontal direction, wherein the parallax image generating means generates the parallax image from the plurality of target images obtained by the imaging device.
  7.  The person recognition apparatus according to any one of claims 1 to 6, wherein the parallax image generating means generates two parallax images, one for a first target image captured from a first viewpoint and one for a second target image captured from a second viewpoint, and the second similarity calculation means calculates the similarity for the portion corresponding to the face in each of the two parallax images.
  8.  The person recognition apparatus according to claim 7, wherein the second similarity calculation means calculates the similarity between the face parallax image stored in advance for the recognition-target person and whichever of the left portion and the right portion of the face-corresponding part of the parallax image generated by the parallax image generating means has the smaller blind-spot region, a blind-spot region being an area that is included in the first target image but not in the second target image, or included in the second target image but not in the first target image.
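Claim 8 selects, between the left and right halves of the face region, the half with the smaller blind-spot (occluded) area before computing the parallax similarity. A sketch under the assumption that occlusion has already been reduced to a boolean mask over the face region; the mask construction, the even split, and all names are illustrative only.

```python
import numpy as np

# Hedged sketch of claim 8's side selection: prefer the half of the
# face region with the fewer blind-spot pixels, i.e. pixels visible
# from one viewpoint but not the other.

def pick_side(occlusion_mask):
    """occlusion_mask: boolean array over the face region, True where
    a pixel is a blind-spot pixel. Returns which half to compare."""
    h, w = occlusion_mask.shape
    left_blind = occlusion_mask[:, : w // 2].sum()
    right_blind = occlusion_mask[:, w // 2 :].sum()
    return "left" if left_blind <= right_blind else "right"

mask = np.zeros((4, 4), dtype=bool)
mask[:, 3] = True   # right edge of the face occluded in one viewpoint
mask2 = np.zeros((4, 4), dtype=bool)
mask2[:, 0] = True  # left edge occluded instead
```

Comparing only the less-occluded half keeps pixels that exist in both target images, where the parallax values are actually defined.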
  9.  A method of controlling the operation of a person recognition apparatus, the method comprising:
     a parallax image generating means generating a parallax image from a plurality of target images;
     person determination means determining whether or not the target images include a person's face;
     first similarity calculation means calculating, when the person determination means determines that a person's face is included, a similarity between that person and a recognition-target person specified by person-specifying data stored in advance;
     first determination means determining whether or not the similarity calculated by the first similarity calculation means is equal to or higher than a predetermined level;
     first person determination means determining, in response to the first determination means determining that the similarity is equal to or higher than the predetermined level, the person judged to be at or above the predetermined level to be the person whose face is included in the target images;
     second similarity calculation means calculating, in response to the first determination means determining that the similarity is less than the predetermined level, a similarity between the portion corresponding to the face in the parallax image generated by the parallax image generating means and a face parallax image stored in advance for the recognition-target person; and
     second person determination means identifying the person of the face image on the basis of the similarity calculated by the first similarity calculation means and the similarity calculated by the second similarity calculation means.
PCT/JP2012/071056 2011-09-13 2012-08-21 Person recognition apparatus and method of controlling operation thereof WO2013038877A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011199741 2011-09-13
JP2011-199741 2011-09-13

Publications (1)

Publication Number Publication Date
WO2013038877A1

Family

ID=47883111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/071056 WO2013038877A1 (en) 2011-09-13 2012-08-21 Person recognition apparatus and method of controlling operation thereof

Country Status (1)

Country Link
WO (1) WO2013038877A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02224185A (en) * 1989-02-27 1990-09-06 Osaka Gas Co Ltd Method and device for identifying person
WO2004072899A1 (en) * 2003-02-13 2004-08-26 Nec Corporation Unauthorized person detection device and unauthorized person detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Nobuhiko Masui et al., "A Preliminary Study for Recognition of Human Faces by 3-D Measurement", ITEJ Technical Report (Gazo Tsushin System / Gazo Oyo), vol. 14, no. 36, 29 June 1990, pp. 7-12 *

Similar Documents

Publication Publication Date Title
JP5140210B2 (en) Imaging apparatus and image processing method
JP5414947B2 (en) Stereo camera
JP5204350B2 (en) Imaging apparatus, playback apparatus, and image processing method
JP5204349B2 (en) Imaging apparatus, playback apparatus, and image processing method
JP5320524B1 (en) Stereo camera
JP5814692B2 (en) Imaging apparatus, control method therefor, and program
WO2014000663A1 (en) Method and device for implementing stereo imaging
JP2013533672A (en) 3D image processing
WO2012002157A1 (en) Image capture device for stereoscopic viewing-use and control method of same
JP5467993B2 (en) Image processing apparatus, compound-eye digital camera, and program
TWI399972B (en) Image generating apparatus and program
JP5874192B2 (en) Image processing apparatus, image processing method, and program
CN107820019B (en) Blurred image acquisition method, blurred image acquisition device and blurred image acquisition equipment
JP2013042301A (en) Image processor, image processing method, and program
JP2015201750A (en) Imaging apparatus and feature part detection method
JP2011048295A (en) Compound eye photographing device and method for detecting posture of the same
TW201205449A (en) Video camera and a controlling method thereof
US20130106850A1 (en) Representative image decision apparatus, image compression apparatus, and methods and programs for controlling operation of same
WO2013038877A1 (en) Person recognition apparatus and method of controlling operation thereof
JP2014120139A (en) Image process device and image process device control method, imaging device and display device
CN102907081A (en) 3D imaging device, face detection device, and operation control method therefor
JP2020046475A (en) Image processing device and control method therefor
JP2014134723A (en) Image processing system, image processing method and program
US10425594B2 (en) Video processing apparatus and method and computer program for executing the video processing method
JP2014022826A (en) Image processing apparatus, imaging apparatus, and image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12832518

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12832518

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP