JP4461747B2 - Object determination device - Google Patents

Object determination device

Info

Publication number
JP4461747B2
JP4461747B2 JP2003318701A
Authority
JP
Japan
Prior art keywords
face
determining
unit
center
faces
Prior art date
Legal status
Active
Application number
JP2003318701A
Other languages
Japanese (ja)
Other versions
JP2005086682A (en)
Inventor
善久 井尻
文一 今江
昌史 山元
卓也 露口
Original Assignee
オムロン株式会社
Priority date
Filing date
Publication date
Application filed by オムロン株式会社
Priority to JP2003318701A
Priority claimed from KR1020057024724A (KR100839772B1)
Publication of JP2005086682A
Application granted
Publication of JP4461747B2
Application status: Active

Description

  The present invention relates to a technique that can be effectively applied to devices (such as a still camera and a video camera) that capture still images and moving images.

  Imaging apparatuses are equipped with technologies such as autofocus, which brings a specific subject into focus, and automatic exposure control, which adjusts exposure according to a specific subject. Conventionally, autofocus and automatic exposure control have been performed on a subject located at a position defined in advance within the frame (for example, its center point). However, depending on the user's framing preference, the subject the user wants in focus, that is, the subject that should be the target of autofocus or automatic exposure control, is not always located at the center of the frame. In this case, the user must first aim the imaging device so that the desired subject sits at the specific position in the frame, execute autofocus and automatic exposure control there, and only then compose the frame as desired. Such a procedure is troublesome.

  As a technique for solving this problem, there is a technique in which imaging is performed in advance in order to specify the target of focusing or exposure control (hereinafter, such imaging is referred to as "preliminary imaging"). In this technique, the target of focusing or exposure control is specified using the image captured in the preliminary imaging.

  As a specific example of the technique using preliminary imaging, there is a technique for focusing on a person's face in the image regardless of the person's position or range within the scene (see Patent Document 1). Similarly, there is a technique for properly exposing a person in the image regardless of the person's position or range within the image (see Patent Document 2).

Patent Document 1: JP 2003-107335 A
Patent Document 2: JP 2003-107555 A

  When only one human face is detected in the image captured in preliminary imaging, it suffices to perform focusing and exposure control on that face unconditionally. However, when a plurality of human faces is detected, it has not been sufficiently considered which face should be the target of focusing or exposure control. For example, based on the policy that the face the user most wants as a subject is the one nearest the middle of the frame, there is a technology that performs focusing and exposure control on the face of the person closest to the center of the frame. However, such technology presumes that there is exactly one face in the frame that the user most wants as a subject, and it cannot properly cope with cases where multiple, equally desired faces exist in the frame.

  The present invention solves these problems, with the goal of providing an apparatus that determines which face should be the target of focusing or exposure control when a plurality of human faces is detected in preliminary imaging.

[First aspect]
In the following description, skin color means the skin color of any person; it is not limited to the skin color of a specific race.

  In order to solve the above problems, the present invention has the following configurations. A first aspect of the present invention is an object determination device that includes detection means and determination means. In the first aspect, a face to be the target of focusing and exposure control is determined according to the positions of the detected faces. Note that the face determined by the first aspect is not limited to being the target of focusing and exposure control; it may be the target of any processing, for example color correction (white balance correction) or contour enhancement.

  Specifically, the detection means detects human faces from an input image. Any existing face detection technique capable of detecting a plurality of human faces from an image may be applied to the detection means. For example, an area having skin color may be detected as a face area. Other examples include detection techniques using templates of the whole face or parts of the face, and techniques for detecting a face region based on differences in shading.

  When the detection means detects the faces of a plurality of persons, the determination means determines, based on the positions of those faces, a face to be focused on and/or a face to be subjected to exposure control when imaging is performed. The determination means does not judge the position of each face independently; it makes the determination based on the positions of the plurality of faces, that is, at least on their relative positional relationship. The determination means may treat the position of a face as the position of a point included in the face or as the position of the face region.

  Note that focusing or exposure control may be performed on a part (e.g., eyes, nose, forehead, mouth, ears) of the face determined by the determination means.

  According to the first aspect of the present invention, a face to be the target of focusing or exposure control is determined based on the detected positions of the plurality of faces. For this reason, it becomes possible to narrow the result down to a predetermined number of faces even when more faces than the number of faces that can be targeted for focusing and exposure control are detected in the preliminarily captured image.

  In addition, the detected face positions are not considered independently; the target face is determined based on the relative positions of the plurality of faces. The target of focusing or exposure control is therefore determined in accordance with the situation of the detected faces (for example, how densely the faces cluster and where they cluster), and focusing and exposure control can be performed according to that situation.

  Next, a more specific configuration of the determining means in the first aspect of the present invention will be described. The determining means in the first aspect of the present invention may be configured to include a center determining means and a face determining means.

  The center determining means determines the center of the positions of the plurality of human faces based on the positions detected by the detecting means. The center here refers to a conceptual center. For example, for three points there are several conceptual centers, such as the circumcenter, incenter, orthocenter, and centroid of the triangle they form. Thus, the center here refers to a middle point obtained according to some concept.

Specifically, the center determining means may be configured to obtain the center of a polygon circumscribing the positions of the plurality of human faces. The polygon may have a predetermined number of vertices; in this case, the generated polygon does not necessarily have every human face as a vertex.

  For example, when the circumscribed polygon is a triangle, the center determining means acquires any one of the circumcenter, incenter, orthocenter, and centroid of the triangle as the center.

  For example, when the circumscribed polygon is a quadrangle, the center determining means acquires the intersection of the diagonals of the quadrangle as the center. Alternatively, when the circumscribed polygon is a quadrangle, the center determining means may divide the quadrangle into two triangles, acquire any of the circumcenter, incenter, orthocenter, and centroid for each triangle, and acquire the center based on those two points.

  For example, when the circumscribed polygon has five or more vertices, the center determining means may be configured to divide the circumscribed polygon into a plurality of triangles, acquire for each triangle any of the circumcenter, incenter, orthocenter, and centroid, form a new polygon using the acquired points, and acquire the center by repeating this processing on the new polygon.
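
  The triangle-center computations and the iterative reduction above can be sketched as follows. This is a minimal Python illustration, assuming face positions are given as (x, y) tuples; the function names and the choice of the centroid as the per-triangle center are illustrative, not prescribed by the invention.

```python
# A minimal sketch, assuming face positions as (x, y) tuples and
# non-collinear triangle vertices. The centroid is used as the
# per-triangle center; circumcenter etc. could be used instead.
from typing import List, Tuple

Point = Tuple[float, float]

def centroid(pts: List[Point]) -> Point:
    """Centroid (center of gravity) of a set of points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def circumcenter(a: Point, b: Point, c: Point) -> Point:
    """Circumcenter of triangle abc: intersection of the perpendicular bisectors."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def polygon_center(vertices: List[Point]) -> Point:
    """Iteratively replace the polygon by the centers of its fan triangles
    until a single point (or a segment midpoint) remains."""
    while len(vertices) > 2:
        vertices = [centroid([vertices[0], vertices[i], vertices[i + 1]])
                    for i in range(1, len(vertices) - 1)]
    return centroid(vertices) if len(vertices) == 2 else vertices[0]
```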

  Alternatively, the center determining means may be configured to determine the centroid (center of gravity) of the positions of the plurality of human faces as the center.

  Next, the configuration of the face determination means will be described. The face determination means determines the target face based on the center position obtained by the center determination means, in accordance with the number of faces to be focused on and/or the number of faces to be subjected to exposure control when imaging is performed.

  Specifically, the face determination means may be configured to determine the face of the person closest to the center as the target face.

  Further, the face determination means may be configured to determine the target face according to a predetermined criterion from among the faces positioned within a predetermined distance from the center. The predetermined criterion may be, for example, based on the size of the face (e.g., a criterion selecting the largest face, a middle-sized face, or the smallest face); based on the position of the face in the image (e.g., a criterion selecting the face closest to the center of the image); based on the orientation of the face (e.g., a criterion selecting a face facing the front); based on face likelihood (e.g., a criterion selecting the face with the largest value indicating how face-like it is); based on the gender estimated from the face (e.g., a criterion selecting a face estimated to be male, or one estimated to be female); or based on the age estimated from the face (e.g., a criterion selecting a face of low estimated age, or one of middle estimated age); or any other such criterion. The predetermined criterion may also be an appropriate combination of several of the above criteria.

In the first aspect of the present invention configured as described above, a face to be the target of focusing or exposure control is determined based on the center of the positions of the plurality of faces, as a criterion reflecting their relative positions. For example, a face positioned near the approximate center of the plurality of faces is determined as the target of focusing and exposure control. In a situation where a group is photographed (a situation where a plurality of faces is densely packed), the user often (unconsciously) performs focusing and exposure control on a face near the center of the group. It is therefore generally desirable in autofocus and automatic exposure control to target a face located near the center of the plurality of faces in the group. According to the first aspect of the present invention configured as described above, such desirable control is performed automatically, saving the user this effort.

  Moreover, the determination means in the first aspect of the present invention may be configured to determine, from among the plurality of human faces, the face located lowest as the target face. Note that the lowest face need not be strictly the lowest; it may be, for example, the second or third face from the bottom. When a group photo is taken with human faces lined up vertically, the face located lower is generally closer to the imaging device. For example, this is the case when the people in the front row crouch down and the people in the rear row stand, or when a group photo is taken using a tiered platform. For this reason, setting the lowest face as the target of focusing and exposure control increases the likelihood that focusing and exposure control are performed on the face closest to the imaging device.

  The first aspect of the present invention may be configured to further include classifying means and set determining means. When the detecting means detects a plurality of human faces, the classifying means classifies the detected faces into a plurality of sets based on their respective positions. More specifically, using a general clustering technique, the classifying means classifies the detected faces into a plurality of sets such that faces close to each other in the image fall into the same set.

  The set determining means determines, from among the plurality of sets produced by the classifying means, the set from which the face to be focused on or subjected to exposure control is determined (that is, the set including the target face; hereinafter referred to as the "selected set"). The target face is then determined from among the faces included in the selected set finally determined by the set determining means.

  The set determining means may determine the selected set based on any criterion. For example, it may determine the selected set based on the number of faces included in each set, or based on the center of the positions of the faces included in each set. It may also determine the selected set based on a feature amount acquired from the faces included in each set (for example, an amount acquired from face size, face orientation, or face count; the frequency of a specific gender or a specific age estimated from the faces; or an amount indicating the degree to which each face is a human face).

  When the first aspect of the present invention includes the classifying means and the set determining means as described above, the determining means determines the target face from among the faces of the persons included in the selected set.

  According to the first aspect of the present invention configured as described above, the plurality of faces in an image is divided into a plurality of sets, a selected set for determining the face to be processed is chosen from among them, and the determining means finally determines the face to be processed from among the faces included in the selected set.

  For this reason, when there are a plurality of sets of faces in the image, the face to be processed is determined based only on the faces included in one set, rather than on all faces in all sets. Control can therefore be performed according to only the faces included in the selected set determined by the set determining means. For example, when several groups of persons stand apart from each other in an image, it is possible to perform focusing and/or exposure control specialized for one group instead of processing averaged over all of them. Likewise, when groups and isolated persons coexist in the image, it is possible to perform focusing and/or exposure control specialized for one of the groups, excluding the isolated persons.

The first aspect of the present invention may be configured to further include display means for displaying the face determined by the determining means in a manner distinguished from the other faces. For example, the display means displays a frame around the determined face to distinguish it from the other faces, or displays around the determined face a frame whose color, thickness, or shape differs from the frames displayed around the other faces. The display means may also distinguish the determined face by applying image processing to it that differs from that applied to the other faces.

[Second aspect]
A second aspect of the present invention is an object determination device that includes a detection unit and a determination unit. Among these, the detection means has the same configuration as the first aspect of the present invention.

  When the detecting means detects a plurality of human faces, the determining means according to the second aspect of the present invention determines the face of the person located in the middle, based on the number of detected faces, as the face to be focused on or subjected to exposure control. The determining means counts the detected faces and determines a face based on the median of the count. For example, the determining means sorts the faces by their x coordinates and determines the face at the median rank as the target face; it may instead use the y coordinate. The determining means may also select a face based on each of the x and y coordinates: when the same face is selected by both, that face is determined as the target face, and when different faces are selected, one of them is determined as the target face based on a predetermined criterion.
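
  A minimal sketch of this median-rank selection, assuming each detected face is represented by the (x, y) coordinates of its face point and sorting by the x coordinate; the names are illustrative, not from the patent:

```python
# A minimal sketch, assuming face points as (x, y) tuples, sorted by x.
def select_median_face(face_points):
    """Sort the faces by x coordinate and return the face at the median rank."""
    ranked = sorted(face_points, key=lambda p: p[0])
    return ranked[len(ranked) // 2]

faces = [(40, 120), (210, 90), (125, 110), (300, 95), (180, 130)]
print(select_median_face(faces))  # -> (180, 130): the middle face by x rank
```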

  According to the second aspect of the present invention, substantially the same effect as the first aspect, which determines the target face based on the center of the positions of the plurality of faces, is obtained in a pseudo manner. Moreover, the second aspect does not need to compute the center of the face positions: counting the faces and sorting their coordinates suffice, and no geometric calculation is required. For this reason, when a general-purpose information processing apparatus is used, processing can be executed faster than in the first aspect.

  Moreover, like the first aspect, the second aspect of the present invention may be configured to further include classifying means and set determining means. In this case, the determining means in the second aspect determines the target face based on the faces of the persons included in the selected set.

[Third aspect]
A third aspect of the present invention is an object determination device that includes a detection unit, a classification unit, a provisional determination unit, and a final determination unit. Among these, the detection means and the classification means have the same configuration as the first aspect of the present invention.

  The provisional determination means in the third aspect of the present invention provisionally determines, in each of the plurality of sets generated by the classification means, a face to be the target of focusing or exposure control. In other words, for each set, the provisional determination means determines, from among the human faces included in that set, a face to be focused on and/or a face to be subjected to exposure control when imaging is performed. The provisional determination means may determine the target face based on any criterion; for example, it may use the same processing as the determination means of the first or second aspect of the present invention, or a predetermined criterion of the face determination means of the first aspect.

The final determination means in the third aspect of the present invention finally determines the face to be focused on or subjected to exposure control from among the faces provisionally determined in each set. For example, the final determination means may make this final determination by the same processing as the determination means of the first or second aspect of the present invention, or based on a predetermined criterion of the face determination means of the first aspect.

  According to the third aspect of the present invention, the same effects are obtained as with the first and second aspects when they are configured to further include classification means and set determination means. In addition, the third aspect makes it possible to determine the target face based on various criteria while retaining the effect of including the classification means and the set determination means.

[Fourth aspect]
A fourth aspect of the present invention is an object determination device that includes a detection unit, a block determination unit, and a determination unit.

  The detection means in the fourth aspect of the present invention detects human faces in each of a plurality of blocks into which the input image is divided. The manner in which the input image is divided into blocks may be determined in advance or determined dynamically.

  Based on the detection result of the detection means, the block determining means determines the block that includes the face to be focused on and/or the face to be subjected to exposure control when imaging is performed (referred to as the "selected block"). For example, the block can be determined according to the same criteria as the set determining means in the first aspect of the present invention.
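
  As a rough illustration of this block-based approach, the following sketch divides the image into a fixed grid, groups detected face points by block, and selects the block containing the most faces. The grid size and the face-count criterion are assumptions, not values from the invention.

```python
# A minimal sketch, assuming face points as (x, y) tuples and a fixed
# cols x rows grid; "most faces" is one possible selection criterion.
def assign_to_blocks(face_points, img_w, img_h, cols=3, rows=3):
    """Group detected face points by the grid block that contains them."""
    blocks = {}
    for (x, y) in face_points:
        key = (min(int(x * cols / img_w), cols - 1),
               min(int(y * rows / img_h), rows - 1))
        blocks.setdefault(key, []).append((x, y))
    return blocks

def select_block(blocks):
    """Pick the block containing the most faces as the 'selected block'."""
    return max(blocks.values(), key=len)
```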

  The determining means in the fourth aspect of the present invention determines the target face from among the faces included in the selected block. The determining means may determine the target face based on any criterion; for example, it may use the same processing as the determination means of the first or second aspect of the present invention, or a predetermined criterion of the face determination means of the first aspect.

[Fifth aspect]
A fifth aspect of the present invention is an object determination device that includes detection means, judging means, selection means, and determining means. Of these, the detection means has the same configuration as in the first aspect of the present invention.

  When the detecting means detects a plurality of faces, the judging means judges which of the detected human faces is largest. The judging means may judge size based on, for example, the number of skin-color pixels in a detected face, or based on the size of the face rectangle used when the face was detected, or according to any other criterion. The largest face need not be strictly the largest; it may be, for example, the second or third largest face.

  The selection means selects, from among the detected faces, the largest face and any other face whose size is within a predetermined range of that face's size.
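
  A minimal sketch of this selection, assuming each face carries the width and height of its face rectangle; the 0.5 area ratio is an illustrative threshold, not a value from the invention:

```python
# A minimal sketch; each face is a dict holding its face-rectangle width
# and height. The ratio 0.5 is an illustrative assumption.
def select_by_size(faces, ratio=0.5):
    """Keep the largest face plus every face whose area lies within the
    predetermined range (here: at least `ratio` of the largest area)."""
    max_area = max(f["w"] * f["h"] for f in faces)
    return [f for f in faces if f["w"] * f["h"] >= ratio * max_area]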

The determining means in the fifth aspect of the present invention determines, from among the faces selected by the selection means, a face to be focused on and/or a face to be subjected to exposure control when imaging is performed. As described above, the determining means in the fifth aspect does not determine the target face based on the positions of all faces detected by the detecting means, but only from the largest detected face and the other faces whose sizes fall within a predetermined range of it. The determining means may determine the target face based on any criterion; for example, it may use the same processing as the determination means of the first or second aspect of the present invention, or a predetermined criterion of the face determination means of the first aspect.

  According to the fifth aspect of the present invention, the face to be focused on is selected based only on the relatively large faces in the focusing image. For this reason, the face of a person who merely appears in the background, that is, a person whom the user does not consciously intend as a subject, can be excluded from the processing targets. Therefore, when this determining means performs, for example, the same processing as the determining means of the first aspect, the accuracy of selecting the set including the target face, of acquiring the center based on the faces included in the set, and of selecting the face to be focused on is improved.

[Sixth aspect]
A sixth aspect of the present invention is an object determination device that includes a detection means, a classification means, a set determination means, and a determination means. Among these, the detection means, the classification means, and the set determination means have the same configuration as in the first aspect.

  The determining means in the sixth aspect of the present invention determines the target face from among the faces included in the set determined by the set determining means. The determining means may determine the target face based on any criterion; for example, it may use the same processing as the determination means of the first or second aspect of the present invention, or a predetermined criterion of the face determination means of the first aspect.

  According to the sixth aspect of the present invention, the same effect as in the third aspect of the present invention can be obtained.

[Seventh aspect]
A seventh aspect of the present invention is an object determination device that includes a detection unit, a feature amount acquisition unit, and a determination unit. Among these, the detection means has the same configuration as the first aspect of the present invention.

  The feature amount acquisition means acquires, for each of the plurality of human faces detected by the detection means, a feature amount indicating the degree to which it is a human face. The "human face degree" is given, for example, by the distance from the decision boundary that separates face images from non-face images. The human face degree may also be given by a value obtained during the face detection processing of the detection means.

  Based on the feature amounts, the determining means determines a face to be focused on and/or a face to be subjected to exposure control when imaging is performed.
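
  A minimal sketch of this aspect, assuming each detected face carries a per-face score from the detector (for example, a classifier margin serving as the "human face degree"); the field name "score" is an assumption:

```python
# A minimal sketch; "score" stands in for the feature amount indicating the
# human face degree (e.g., a classifier margin from the detection step).
def select_by_face_likelihood(faces):
    """Choose the face whose face-likelihood feature amount is largest."""
    return max(faces, key=lambda f: f["score"])

faces = [{"x": 40, "y": 60, "score": 1.8},
         {"x": 200, "y": 70, "score": 2.6}]
print(select_by_face_likelihood(faces))  # -> the face with score 2.6
```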

[Others]
The first to seventh aspects of the present invention may be realized by an information processing apparatus executing a program. That is, the present invention can be specified as a program that causes an information processing apparatus to execute the processing performed by each means of the first to seventh aspects, or as a recording medium on which that program is recorded.

According to the present invention described above, when a plurality of human faces is detected in preliminary imaging, the face on which focusing or exposure control should be performed is determined automatically based on the positions and sizes of the detected faces.

  Below, the imaging device 1 (1a, 1b, 1c, 1d, 1e) provided with the focusing target determining unit 5 (5a, 5b, 5c, 5d, 5e) will be described with reference to the drawings. In this description, a person image is an image including at least part or all of a person's face. Accordingly, a person image may be an image of an entire person, or an image of only a person's face or upper body, and it may include images of several persons. Furthermore, the background may include scenery, patterns, or any objects other than persons.

  Note that the following description of the focusing target determining unit 5 and the imaging device 1 is illustrative, and their configurations are not limited to what is described below.

[First embodiment]
〔System configuration〕
First, the imaging device 1a provided with the focusing target determining unit 5a will be described. The focusing target determining unit 5a is the first embodiment of the focusing target determining unit 5.

In terms of hardware, the imaging device 1a includes, connected via a bus, a CPU (central processing unit), a main storage device (RAM (Random Access Memory)), an auxiliary storage device, and the devices of a digital still camera or digital video camera (imaging lens, mechanical mechanism, CCD (Charge-Coupled Device), operation unit, motor, etc.). The auxiliary storage device is configured using a nonvolatile storage device: a so-called ROM (Read-Only Memory, including EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), and mask ROM), FRAM (Ferroelectric RAM), a hard disk, or the like.

  FIG. 1 is a diagram illustrating the functional blocks of the imaging device 1a. Various programs (OS, application programs, etc.) stored in the auxiliary storage device are loaded into the main storage device and executed by the CPU, whereby the imaging device 1a functions as an apparatus including the input unit 2, the imaging unit 3, the image storage unit 4, the focusing target determining unit 5a, the distance measuring unit 10, and so on. The focusing target determining unit 5a is realized by the CPU executing a focusing target determination program; it may also be realized as dedicated hardware.

  Hereinafter, the functional units included in the imaging device 1a will be described with reference to FIG. 1.

<Input unit>
The input unit 2 is configured using a shutter release button or the like. When the input unit 2 detects the input of a command by the user, it notifies the relevant units of the imaging device 1a according to the command. For example, when the input unit 2 detects the input of an autofocus command by the user, it notifies the imaging unit 3 of the autofocus command; likewise, when it detects the input of an imaging command, it notifies the imaging unit 3 of the imaging command.

  When the input unit 2 is configured using a shutter release button, an autofocus command is detected, for example, when the shutter release button is pressed halfway, and an imaging command is detected, for example, when the shutter release button is pressed fully.

<Imaging unit>
The imaging unit 3 is configured as a device having an autofocus function using an imaging lens, a mechanical mechanism, a CCD, a motor, and the like. The imaging lens includes, for example, a zoom lens for realizing a zoom function, a focus lens for focusing on an arbitrary subject, and the like. The mechanical mechanism includes a mechanical shutter, a diaphragm, a filter, and the like. The motor includes a zoom lens motor, a focus motor, a shutter motor, and the like.

  The imaging unit 3 includes a storage unit (not shown) that stores predetermined focus information. While the imaging device 1a is powered on and no focus information has been input from the distance measuring unit 10, the focus lens of the imaging unit 3 is controlled to the state based on this predetermined focus information. On the other hand, when focus information is input from the distance measuring unit 10, the focus lens of the imaging unit 3 is controlled to the state based on the input focus information.

  The imaging unit 3 performs imaging by converting an image of a subject formed through the imaging lens including the focus lens controlled as described above into an electrical signal by the CCD.

<Image storage unit>
The image storage unit 4 is configured using a readable / writable recording medium such as a so-called RAM. The image storage unit 4 may be configured using a recording medium that is detachable from the imaging device 1a. The image storage unit 4 stores image data captured by the imaging unit 3.

<Focusing target determining unit>
The focusing target determining unit 5a determines the subject to be focused on, that is, the face to be focused on, from among the subjects imaged by the imaging unit 3. For this processing, the focusing target determining unit 5a uses the image captured for focusing by the imaging unit 3, which the imaging unit 3 inputs to it.

  The focusing target determining unit 5a includes a face detection unit 6a and a determination unit 7a. Each functional unit constituting the focusing target determining unit 5a is described below.

<Face detection unit>
The face detection unit 6a obtains, from the focusing image, the coordinates of each area including a human face (hereinafter, "face area"). The position and size of a face area in the focusing image are specified by the coordinates of the face rectangle.

  The detection of the face area may be realized by applying any existing technique. For example, the coordinates of the face rectangle may be obtained by template matching using a reference template corresponding to the contour of the whole face, or by template matching based on facial components (eyes, nose, ears, etc.). Alternatively, the vertex of the hair may be detected by chroma-key processing and the face rectangle coordinates obtained based on that vertex, or the coordinates may be obtained based on the detection of a skin-color area.
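
  As one modern realization of this step (not the patent's own method), a Haar-cascade detector such as the one bundled with OpenCV returns face rectangles from which face points can be derived:

```python
# A sketch using OpenCV's bundled Haar-cascade face detector; this is one
# existing technique, not the method prescribed by the patent. Requires
# the opencv-python package.
import cv2

def detect_face_rectangles(image_path):
    """Return face rectangles (x, y, w, h) and their center face points."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    rects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    face_points = [(x + w / 2, y + h / 2) for (x, y, w, h) in rects]
    return rects, face_points
```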

<Determination unit>
The determination unit 7a determines the face to be focused on from among the faces detected by the face detection unit 6a, and passes the coordinates of the face rectangle of the determined face (e.g., the coordinates of the center of the face rectangle) to the distance measuring unit 10. The determination unit 7a includes a center determination unit 8a and a face determination unit 9a; each is described below.

<Center determination unit>
When the face detection unit 6a detects a plurality of faces, the center determination unit 8a obtains the center of the positions of the detected faces. Here, the position of a face is represented by the center point of its face rectangle (hereinafter, a "face point"). When two faces are detected, the center determination unit 8a obtains the midpoint of the two face points. When three or more faces are detected, it obtains the center by one of the following methods.

《Circumscribed polygon method》
In the circumscribed polygon method, the center determination unit 8a obtains the center of a polygon circumscribing the plurality of face points as the center of the positions of the plurality of faces. The polygon may have any number of vertices, determined in advance; here, the case of four vertices is described.

  FIG. 2 is a diagram for explaining the processing in which the center determination unit 8a acquires the center based on the circumscribed polygon method. The center determination unit 8a obtains the maximum and minimum values of the x coordinate and of the y coordinate over the plurality of face points. It then generates the rectangle bounded by the straight lines parallel to the y-axis through the face points taking the maximum and minimum x values and the straight lines parallel to the x-axis through the face points taking the maximum and minimum y values (that is, the axis-aligned rectangle whose sides pass through the extreme coordinate values of the face points). The center determination unit 8a then acquires the coordinates of the intersection of the diagonals of this rectangle as the center coordinates.
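
  A minimal sketch of this bounding-rectangle computation, assuming face points as (x, y) tuples:

```python
# A minimal sketch of the four-vertex circumscribed polygon method: the
# center is the intersection of the diagonals of the axis-aligned
# rectangle bounding the face points.
def bounding_rect_center(face_points):
    xs = [p[0] for p in face_points]
    ys = [p[1] for p in face_points]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
```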

《Center of gravity method》
FIG. 3 is a diagram for explaining the processing in which the center determination unit 8a acquires the center based on the center-of-gravity method. In the center-of-gravity method, the center determination unit 8a acquires the centroid of the plurality of face points. Specifically, it divides the sum of the position vectors of the face points by the number of face points to obtain the position vector of the centroid, and takes the resulting coordinates as the center coordinates. FIG. 3 shows, for six face points, the position vectors from a reference point (for example, the point given by the minimum value of each coordinate component over the face points) being used to obtain the barycentric coordinates (Xg, Yg).
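
  A minimal sketch of this computation, written as in FIG. 3 with position vectors taken from a reference point (here the minimum of each coordinate component, an illustrative choice):

```python
# A minimal sketch of the center-of-gravity method: sum the position
# vectors of the face points and divide by their number.
def centroid_center(face_points):
    n = len(face_points)
    ref = (min(p[0] for p in face_points), min(p[1] for p in face_points))
    sx = sum(p[0] - ref[0] for p in face_points)
    sy = sum(p[1] - ref[1] for p in face_points)
    return (ref[0] + sx / n, ref[1] + sy / n)  # (Xg, Yg) in FIG. 3
```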

<Face determination unit>
The face determination unit 9a determines which face should be focused on based on the center coordinates acquired by the center determination unit 8a. FIG. 4 is a diagram for explaining the processing by the face determination unit 9a. The face determination unit 9a selects a face according to a predetermined criterion from among the faces positioned within a predetermined distance from the center; it may use any criterion, such as face size, face position, or face orientation. For example, the face determination unit 9a selects, from among the faces located within a predetermined distance from the center acquired by the center determination unit 8a, the face closest to the center point of the image (frame) (indicated by the symbol "x" in FIG. 4; the selected face point is surrounded by a thick circle).

  In the example shown in FIG. 4, for the two face points F1 and F2 inside the circle C1 of radius R centered on the center FO of the face points, the distances L1 and L2 from the center point O of the image (frame) are measured. Since L1 < L2, the face point F1 is determined as the face point to be focused on (that is, the face to be focused on).
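
  A minimal sketch of this selection, assuming the radius R is a predetermined constant and the frame center O is the midpoint of the frame:

```python
# A minimal sketch of the selection in FIG. 4: among the face points inside
# the circle of radius R around the face-point center FO, pick the one
# closest to the frame center O. Requires Python 3.8+ for math.dist.
import math

def select_face(face_points, fo, radius, frame_w, frame_h):
    o = (frame_w / 2, frame_h / 2)                        # frame center O
    candidates = [p for p in face_points
                  if math.dist(p, fo) <= radius]          # inside circle C1
    return min(candidates, key=lambda p: math.dist(p, o))
```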

  The face determination unit 9a passes the coordinates of the selected face point to the distance measuring unit 10.

<Distance measuring unit>
The distance measuring unit 10 acquires focus information for focusing on the face determined by the face determination unit 9a of the focusing target determining unit 5a. At this time, the distance measuring unit 10 identifies the subject to be focused on based on the two-dimensional coordinates (face point coordinates) input from the face determination unit 9a. The distance measurement may be performed, for example, by emitting infrared rays toward the actual face (active method), or by a method other than such an active method (for example, a passive method).

  For example, the distance measuring unit 10 is configured to acquire focus information by measuring the distance to the subject by emitting infrared rays or the like to the subject. In this case, the distance measuring unit 10 determines the direction in which infrared rays and the like are emitted based on the two-dimensional coordinates input from the face determining unit 9a.

  When the distance measurement unit 10 acquires the focus information, the distance measurement unit 10 passes the acquired focus information to the imaging unit 3.

[Operation example]
FIG. 5 is a flowchart illustrating an operation example of the imaging device 1a. Hereinafter, an operation example of the imaging device 1a will be described with reference to FIG. 5.

  When the image pickup apparatus 1a is turned on, the image pickup unit 3 controls the focus lens so as to be in a state based on predetermined focus information.

  When the input unit 2 detects that the shutter release button is half-pressed by the user (S01), the input unit 2 notifies the imaging unit 3 that an autofocus command has been input.

  The imaging unit 3 determines whether focus information has been input from the distance measuring unit 10. When no focus information has been input (S02-NO), the imaging unit 3 captures an image for focusing (S03) and passes the captured focusing image to the focusing target determining unit 5a.

  The focusing target determining unit 5a determines the face to be focused on based on the input focusing image (S04). FIG. 6 is a flowchart illustrating an operation example of the focusing target determining unit 5a (an example of the process (S04) of determining the face to be focused on). The process of S04 will be described with reference to FIG. 6.

  First, the face detection unit 6a detects human faces from the input image (S10). The face detection unit 6a passes the detected face information to the center determination unit 8a of the determination unit 7a.

  The center determination unit 8a checks the number of faces detected by the face detection unit 6a. When the number of faces is less than 1 (S11: <1), that is, when it is 0, the center determination unit 8a notifies the face determination unit 9a that there is no face. The face determination unit 9a then passes the two-dimensional coordinates of the center of the image to the distance measuring unit 10 (S12).

  When the number of faces is greater than 1 (S11: >1), that is, when a plurality of faces is detected, the determination unit 7a executes a process of selecting one face from the plurality of faces (S13). FIG. 7 is a flowchart illustrating an example of the process (S13) of selecting one face from a plurality of faces. The process of S13 will be described with reference to FIG. 7.

  The center determination unit 8a acquires the coordinates of the center of the detected face points (S15) and passes them to the face determination unit 9a. Next, the face determination unit 9a selects one face based on the input center coordinates, for example by the method described with reference to FIG. 4 (S16).

Returning to FIG. 6: when one face is detected from the image (S11: = 1), or after the process of S13, the face determination unit 9a passes the two-dimensional coordinates of the face point of one face to the distance measuring unit 10 (S14). The one face here is the only detected face when the number of detected faces is one, or the face selected in the process of S13.

  Returning to FIG. 5: after the process of S04, the distance measuring unit 10 acquires focus information based on the two-dimensional coordinates passed from the face determination unit 9a of the focusing target determining unit 5a (S05). The distance measuring unit 10 passes the acquired focus information to the imaging unit 3. After this, the processes from S02 onward are executed again.

  In S02, when focus information is input from the distance measuring unit 10 to the imaging unit 3 (S02-YES), the imaging unit 3 controls the focus lens based on the input focus information (S06). That is, the imaging unit 3 controls the focus lens so that the face determined by the focus target determination unit 5a is in focus.

  After controlling the focus lens, the imaging unit 3 performs imaging (S07). This captures an image focused on the face determined by the focusing target determining unit 5a. The image storage unit 4 then stores the data of the image captured by the imaging unit 3 (S08).

[Action / Effect]
According to the imaging device 1a, which is the first embodiment of the present invention, human faces present in an image are detected, and one of the detected faces is automatically determined as the focus target. For this reason, the user does not need to position the face to be focused on at the center of the image (frame).

  Moreover, according to the imaging device 1a, when a plurality of faces is present in the image, the face to be focused on is determined based on the center of the positions of those faces. For this reason, one face to be focused on can be determined even when a plurality of faces is present in the image, and it becomes possible to focus on a face located near the center of the group of faces.

[Modification]
The face determination unit 9a may be configured to select a face according to a predetermined criterion from among the faces that are positioned within a predetermined distance from the center and that face the front. In that case, the face determination unit 9a may determine whether a face is facing the front by, for example, the technique described in the following publicly known document.

  H. Schneiderman, T. Kanade. "A Statistical Method for 3D Object Detection Applied to Faces and Cars." IEEE Conference on Computer Vision and Pattern Recognition, 2000.

  In the above description, the center determination unit 8a treats the position of a face as a point; however, it may also be configured to treat the position of a face as a region.

  Further, a frame may be displayed around the face determined by the face determination unit 9a via a display unit (not shown). With this configuration, the user can know which face has been determined as the focus target. If the user is not satisfied with focusing on the displayed face, the device may be made operable so that the user can manually focus on another face by inputting a command through the input unit 2.

  Further, the face determination unit 9a may be configured to select a face according to a predetermined criterion from among the faces that are positioned within a predetermined distance from the center and that have at least a predetermined size.

  The imaging device 1a may be configured to perform exposure control based on the face determined by the determination unit 7a. This also applies to the second to fifth embodiments described later.

  The determination unit 7a may also sort the faces detected by the face detection unit 6a by x coordinate or y coordinate and determine the face at the median rank of the detected faces as the target face.

  Moreover, the determination unit 7a may be configured to determine a face according to a predetermined criterion from among the faces detected by the face detection unit 6a. The predetermined criterion may be, for example, based on the size of the face (e.g., a criterion selecting the largest face, a middle-sized face, or the smallest face); based on the position of the face in the image (e.g., a criterion selecting the face closest to the center of the image); based on the orientation of the face (e.g., a criterion selecting a face facing the front); based on face likelihood (e.g., a criterion selecting the face with the largest value indicating how face-like it is); based on the gender estimated from the face (e.g., a criterion selecting a face estimated to be male, or one estimated to be female); or based on the age estimated from the face (e.g., a criterion selecting a face of low estimated age, or one of middle estimated age); or any other such criterion. The predetermined criterion may also be an appropriate combination of several of the above criteria.

  In the above description, the imaging unit 3 is assumed to focus on a single subject, so the focusing target determining unit 5a passes the coordinates of one face to the distance measuring unit 10. However, when the imaging unit 3 is configured to focus on a plurality of subjects, the focusing target determining unit 5a may be configured to pass the coordinates of a corresponding number of faces to the distance measuring unit 10. An example of such an imaging unit 3 is the apparatus described in JP-A-11-295826. In this case, the face determination unit 9a of the focusing target determining unit 5a prioritizes the faces according to a predetermined criterion and selects the predetermined number of faces based on that priority; for example, it can select the predetermined number of faces in order of proximity to the center determined by the center determination unit 8a. If the number of selected faces does not reach the predetermined number, the remaining coordinates passed to the distance measuring unit 10 may be acquired by any other method.

[Second Embodiment]
Next, the imaging device 1b provided with the focusing target determining unit 5b will be described. The focusing target determining unit 5b is the second embodiment of the focusing target determining unit 5. Hereinafter, the differences of the imaging device 1b and the focusing target determining unit 5b from the imaging device 1a and the focusing target determining unit 5a of the first embodiment will be described.

〔System configuration〕
FIG. 8 is a diagram illustrating the functional blocks of the imaging device 1b. The imaging device 1b differs from the imaging device 1a of the first embodiment in that it includes the focusing target determining unit 5b instead of the focusing target determining unit 5a. Descriptions of the input unit 2, imaging unit 3, image storage unit 4, and distance measuring unit 10 of the imaging device 1b are therefore omitted below.

<Focusing target determining unit>
The focusing target determining unit 5b includes a face detection unit 6a, a classification unit 11, a set determination unit 12, and a determination unit 7b. Each functional unit constituting the focusing target determining unit 5b is described below; however, the face detection unit 6a has the same configuration as in the first embodiment, and its description is omitted.

<Classification unit>
The classification unit 11 clusters the face points detected by the face detection unit 6a into a plurality of sets based on the position of each face point. Any clustering method, such as the nearest neighbor method, may be used. As a result of the clustering by the classification unit 11, face points that are close to each other in the image are classified into the same set.

  FIG. 9 is a diagram illustrating a processing example of the classification unit 11. When a plurality of groups of face points exists as shown in FIG. 9, each group is classified as one set.
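
  A minimal sketch of such clustering, using an incremental single-linkage ("nearest neighbor") scheme with a distance threshold; the threshold value is an illustrative assumption:

```python
# A minimal sketch; the 80-pixel threshold is an illustrative assumption.
import math

def cluster_face_points(face_points, threshold=80.0):
    """Group face points so that points within `threshold` of any member
    of a set end up in the same set (single-linkage clustering)."""
    sets = []
    for p in face_points:
        near = [s for s in sets
                if any(math.dist(p, q) < threshold for q in s)]
        merged = [p]
        for s in near:          # merge every set that p links together
            merged.extend(s)
            sets.remove(s)
        sets.append(merged)
    return sets
```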

<Set determination unit>
The set determination unit 12 determines, from among the plurality of sets classified by the classification unit 11, the set from which the face to be focused on is determined (that is, the set including the face to be focused on; referred to as the "selected set"). In other words, the coordinates of the face point passed to the distance measuring unit 10 are those of a face included in the selected set determined by the set determination unit 12.

  The set determination unit 12 may determine the selected set based on any criterion, such as the number of faces included in each set or a feature amount related to the faces included in each set. For example, the set determination unit 12 may determine the set containing the most faces as the selected set, or the set with the largest sum of feature amounts related to its faces. The feature amount related to a face can be acquired based on, for example, face size, face orientation, or face count, or it may be a value indicating the degree to which the image is a human face. The human face degree is given, for example, by the distance from the decision boundary that separates face images from non-face images, and may use a value acquired during the face detection processing of the face detection unit 6a.

  A processing example of the set determination unit 12 will be described with reference to FIG. 9. From the plurality of sets generated by the processing of the classification unit 11 (three sets in FIG. 9), the set determination unit 12 selects the upper-left set as the selected set according to a predetermined criterion (here, containing the largest number of face points).
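
  A minimal sketch of this criterion, assuming the sets come from the clustering sketch above:

```python
# A minimal sketch: the set with the most face points becomes the selected
# set, matching the criterion used in the FIG. 9 example.
def select_set(sets):
    return max(sets, key=len)

# e.g.: selected = select_set(cluster_face_points(face_points))
```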

<Decision part>
The determination unit 7b differs from the determination unit 7a in that it performs processing based on the faces included in the selected set determined by the set determination unit 12, rather than all faces in the image captured by the imaging unit 3 (that is, the selected set serves as the population for face determination). Except for this point, the determination unit 7b performs the same processing as the determination unit 7a.

  FIG. 10 is a diagram illustrating an example of processing performed by the determination unit 7b. The center determination unit 8b of the determination unit 7b acquires the center of the face points in the selected set determined by the set determination unit 12. Then, based on the acquired center, the face determination unit 9b selects one face point (the face point surrounded by a thick circle) according to a predetermined criterion (here, the face point closest to the acquired center).
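
  A minimal sketch of this determination step, assuming the center is taken as the centroid (the center-of-gravity variant of FIG. 3) of the face points in the selected set and the predetermined criterion is proximity to that center; the helper names are hypothetical.

    from math import hypot

    def center_of(points):
        # Centroid (center of gravity) of the face points.
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        return sum(xs) / len(xs), sum(ys) / len(ys)

    def nearest_to_center(points):
        # Face point closest to the acquired center.
        cx, cy = center_of(points)
        return min(points, key=lambda p: hypot(p[0] - cx, p[1] - cy))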

[Operation example]
Next, an operation example of the imaging device 1b in the second embodiment will be described. The operation of the imaging device 1b is the same as that of the imaging device 1a in the first embodiment except for the content of the processing in S13. Therefore, only the processing of S13 is described for the imaging device 1b.

  FIG. 11 is a flowchart showing a part of the processing of the focus target determination unit 5b in the second embodiment, that is, the content of the processing of S13. In the second embodiment, the processing shown in FIG. 11 is performed in S13 instead of the processing of FIG. 7 described in the first embodiment.

  When the process of S13 is started, the classification unit 11 clusters a plurality of face points detected by the face detection unit 6a into a plurality of sets (S17). Next, the set determination unit 12 selects one selected set from the plurality of sets generated by the classification unit 11 (S18).

  The center determination unit 8b acquires the center of the face points included in the selected set selected in S18 (S19). Then, the face determination unit 9b selects one face as the face to be focused based on the center acquired in S19 (S20).

[Action / Effect]
According to the imaging device 1b in the second embodiment, when a plurality of groups exist in an image, the face to be focused can be determined based only on the people included in one of those groups. When a user images a certain group, its members usually gather in one place. Even if people who do not belong to that group (a single person, several unrelated individuals, or another group) appear in the frame, it is desirable that the focus target be selected from among the people belonging to the group the user wants to capture. The imaging device 1b in the second embodiment realizes this desire: the group containing the largest number of people in the frame is selected as the group to be imaged (the selected set), and focusing is performed based on a person belonging to that group. As described later in the modification section, when the selected set chosen by the set determination unit 12 is not the set the user desires, the user may be allowed to manually designate any set as the selected set.

  FIG. 12 is a diagram illustrating an effect produced by the imaging device 1b. Even when a plurality of groups exist as shown in FIG. 12, a face close to the center of one group (the face enclosed by the bold rectangle) is selected based only on the positions of the faces included in that group, here the group located at the lower left of the image.

[Modification]
The set determination unit 12 may be configured to determine the selected set based on the center of the face points of the faces included in each set. In this case, the center determination unit 8b acquires the center of the face points in each set generated by the classification unit 11, and the set determination unit 12 determines the selected set based on these centers. For example, the set determination unit 12 determines, as the selected set, the set whose center of face points is closest to the center point of the image. The face determination unit 9b then determines the face to be focused based on the center of the selected set.
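
  This modification might be sketched as below, with the image given as a (width, height) pair; the helper names are hypothetical.

    from math import hypot

    def select_set_nearest_image_center(clusters, image_size):
        # Selected set = the set whose face-point centroid is closest to
        # the center point of the image (frame).
        icx, icy = image_size[0] / 2.0, image_size[1] / 2.0

        def centroid(points):
            return (sum(x for x, _ in points) / len(points),
                    sum(y for _, y in points) / len(points))

        return min(clusters,
                   key=lambda c: hypot(centroid(c)[0] - icx,
                                       centroid(c)[1] - icy))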

In addition, the set determination unit 12 may be configured to determine the selected set based on the face points of faces provisionally determined, in each set, as the face to be focused. In this case, the determination unit 7b tentatively determines, for each set generated by the classification unit 11, the face to be focused based only on the faces included in that set. The set determination unit 12 then determines the selected set based on the provisional face points acquired by the determination unit 7b; for example, it determines as the selected set the set whose provisionally determined face point is closest to the center point of the image. That is, in this configuration the set determination unit 12 finally determines the face to be focused, and it may be configured to pass the coordinates of the face point of the determined face to the distance measurement unit 10.
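
  As a sketch of this modification (with hypothetical helpers, and assuming the per-set provisional target is the face point nearest each set's centroid):

    from math import hypot

    def final_target_face(clusters, image_size):
        icx, icy = image_size[0] / 2.0, image_size[1] / 2.0

        def centroid(points):
            return (sum(x for x, _ in points) / len(points),
                    sum(y for _, y in points) / len(points))

        def provisional(points):
            # Provisional target per set: face point nearest the set's center.
            cx, cy = centroid(points)
            return min(points, key=lambda p: hypot(p[0] - cx, p[1] - cy))

        # Final target: the provisional face point closest to the image center.
        return min((provisional(c) for c in clusters),
                   key=lambda p: hypot(p[0] - icx, p[1] - icy))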

  Further, in such a configuration, when the set selected by the set determination unit 12 is not the set desired by the user (the selection result being presented to the user via a display unit, not shown), the user may select another set as the selected set using the input unit 2. In this case, each time the input unit 2 is operated, the user's selection may shift to the next set according to a priority order, the priority order being an order based on a predetermined criterion determined by the set determination unit 12.

  As described above, a focus target face (hereinafter sometimes referred to as the "target face") may be provisionally determined for each set, a final target face determined from among the provisional target faces and presented to the user, and, when it does not match the user's intention, the target face changed to one of the other provisionally determined target faces.

  Alternatively, before the determination unit 7b determines the final target face from the provisional target faces, the provisional target faces may be presented to the user, and the user may select the final target face from among them. The provisional target faces are presented to the user, for example, by displaying frames around the faces via a display unit (not shown).

  Alternatively, before the target face is determined, the classification result of the classification unit 11 may be presented to the user via a display unit (not shown); the user then designates the selected set, and the determination unit 7b selects the target face from the selected set designated by the user. In this case, the set determination unit 12 does not itself determine the selected set, but merely passes the user's designation to the determination unit 7b. In this way, the determination operation of the set determination unit 12 may be switched on and off by an optional operation.

[Third embodiment]
Next, the imaging device 1c including the focus target determination unit 5c will be described. The focus target determination unit 5c is a third embodiment of the focus target determination unit 5. Hereinafter, the differences of the imaging device 1c and the focus target determination unit 5c from the imaging device 1b and the focus target determination unit 5b in the second embodiment will be described.

[System configuration]
FIG. 13 is a diagram illustrating functional blocks of the imaging device 1c. The imaging device 1c is different from the imaging device 1b in the second embodiment in that it includes a focusing target determination unit 5c instead of the focusing target determination unit 5b. Therefore, description of the input unit 2, the imaging unit 3, the image storage unit 4, and the distance measuring unit 10 of the imaging device 1c will be omitted below.

<Focus target determination unit>
The focus target determination unit 5c differs from the focus target determination unit 5b in the second embodiment in that it includes a determination unit 7c instead of the determination unit 7b. Therefore, the description of the face detection unit 6a, the classification unit 11, and the set determination unit 12 is omitted.

<Determination unit>
The determination unit 7c performs processing based on the faces included in the set (selected set) determined by the set determination unit 12, and determines the face to be focused based on the size of each face. For example, the determination unit 7c determines the largest face among the faces included in the selected set as the face to be focused. Alternatively, it may determine, for example, a face of intermediate size among the faces included in the selected set as the face to be focused. In the following description, the determination unit 7c is assumed to determine the largest face as the face to be focused.
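
  Both variants reduce to a simple key lookup, as in the sketch below; the "size" field is an assumption standing in for the face rectangle or matched pattern size.

    def largest_face(faces):
        # Face to be focused = largest face in the selected set.
        return max(faces, key=lambda face: face["size"])

    def intermediate_face(faces):
        # Alternative: face of intermediate (median) size.
        ranked = sorted(faces, key=lambda face: face["size"])
        return ranked[len(ranked) // 2]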

[Operation example]
Next, an operation example of the imaging device 1c in the third embodiment will be described. The operation of the imaging device 1c is the same as that of the imaging device 1b in the second embodiment except for the content of the processing in S13. Therefore, only the processing of S13 is described for the imaging device 1c.

  FIG. 14 is a flowchart showing a part of the processing of the focus target determination unit 5c in the third embodiment, that is, the content of the processing of S13.

  When the processing of S13 is started, the classification unit 11 clusters the plurality of face points detected by the face detection unit 6a into a plurality of sets (S17). Next, the set determination unit 12 selects one selected set from the plurality of sets generated by the classification unit 11 (S18). Then, the determination unit 7c selects, from the faces included in the selected set selected in S18, one face as the face to be focused based on the size of each face (S21).

[Action / Effect]
According to the imaging device 1c in the third embodiment, as with the imaging device 1b in the second embodiment, when a plurality of groups exist in an image, the face to be focused can be determined based only on the people included in one of the groups.

  Moreover, according to the imaging device 1c in the third embodiment, the face to be focused is selected from the faces included in the one selected group based on face size. The face the user most wants to attend to is often the face closest to the imaging device, and the face closest to the imaging device is likely to appear largest in the captured image. Therefore, the imaging device 1c selects, based on face size, the face most likely to be of interest to the user from among the faces in the selected group. The user thus does not need to manually focus on the face of interest and, at the least, need not place that face at or near the center of the frame as in the prior art.

[Modification]
The determination unit 7c may be configured to perform processing based on a feature amount of each face included in the set determined by the set determination unit 12. The determination unit 7c acquires the feature amount of each face based on, for example, the size or orientation of the face, or acquires, as the feature amount, a value indicating the degree of face-likeness. This degree of face-likeness can be expressed using a value obtained during the face detection processing of the face detection unit 6a. Specifically, when the face detection unit 6a is configured to calculate a value related to the face-likeness of an image region in determining whether the region contains a face, that value is used as the feature amount.

[Fourth embodiment]
Next, the imaging device 1d including the focus target determination unit 5d will be described. The focus target determination unit 5d is a fourth embodiment of the focus target determination unit 5. Hereinafter, the differences of the imaging device 1d and the focus target determination unit 5d from the imaging device 1a and the focus target determination unit 5a in the first embodiment will be described.

[System configuration]
FIG. 15 is a diagram illustrating functional blocks of the imaging device 1d. The imaging device 1d is different from the imaging device 1a in the first embodiment in that it includes a focusing target determination unit 5d instead of the focusing target determination unit 5a. Therefore, hereinafter, description of the input unit 2, the imaging unit 3, the image storage unit 4, and the distance measuring unit 10 of the imaging device 1d is omitted.

<Focus target determination unit>
The focus target determination unit 5d includes a face detection unit 6d, a block determination unit 13, and a determination unit 7d. Hereinafter, each functional unit constituting the focusing target determining unit 5d will be described.

<Face detection unit>
The face detection unit 6d differs from the face detection unit 6a in the first embodiment in that, when detecting face regions from the focusing image, it detects them for each of a plurality of predetermined blocks. FIG. 16 is a diagram illustrating an example of a focusing image divided into a plurality of blocks. As shown in FIG. 16, the focusing image is divided into a plurality of blocks (in FIG. 16, the rectangular image (frame) is divided into four blocks by two line segments, each parallel to the vertical or horizontal axis and passing through the center point of the image, but it may be divided into any number of blocks). The face detection unit 6d stores, for example, the block layout into which the focusing image is divided.

  The face detection unit 6d detects a face for each of a plurality of blocks. Then, the face detection unit 6d passes the detection result to the block determination unit 13 for each block.
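
  For the four-block division of FIG. 16, the per-block bookkeeping might look like the following sketch; the block indexing scheme and data shapes are assumptions of this example.

    def block_of(point, image_size):
        # Quadrant blocks formed by the vertical and horizontal lines
        # through the image center: 0 upper-left, 1 upper-right,
        # 2 lower-left, 3 lower-right.
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        return (0 if point[0] < cx else 1) + (0 if point[1] < cy else 2)

    def detect_per_block(face_points, image_size):
        # Group detected face points by the block they fall in.
        blocks = {0: [], 1: [], 2: [], 3: []}
        for point in face_points:
            blocks[block_of(point, image_size)].append(point)
        return blocks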

<Block determination unit>
The block determination unit 13 selects, from among the plurality of predetermined blocks, the block used to determine the face to be focused (that is, the block including the face to be focused; hereinafter referred to as the "selected block"). That is, the coordinates passed to the distance measurement unit 10 are the coordinates of the face points of the faces detected in the selected block determined by the block determination unit 13.

  The block determination unit 13 may determine the selected block based on any criteria such as the number of faces detected in each block and the feature amount related to the face detected in the block. For example, the block determination unit 13 determines the block in which the most faces are detected in each block as the selected block. When the block determination unit 13 is configured in this way, in the situation shown in FIG. 16, the block on the upper left is determined as the selected block by the block determination unit 13.

  In addition, for example, the block determination unit 13 can determine, as the selected block, the block having the largest sum of the feature amounts of the faces it contains. The feature amounts can be calculated by techniques similar to those described in the above embodiments.

<Determination unit>
The determination unit 7d differs from the determination unit 7a in that it performs processing based on the faces included in the selected block determined by the block determination unit 13, rather than all faces in the image captured by the imaging unit 3. Except for this point, the determination unit 7d performs the same processing as the determination unit 7a.

  FIG. 17 is a diagram illustrating a processing example by the determination unit 7d. The center of the face points is acquired within the upper-left block (the selected block) chosen by the block determination unit 13. Then, based on the acquired center, one face point (the face point surrounded by a thick circle) is selected according to a predetermined criterion (here, the face point closest to the acquired center).

[Operation example]
Next, an operation example of the imaging device 1d in the fourth embodiment will be described. The operation of the imaging device 1d is the same as that of the imaging device 1a in the first embodiment except for the content of the processing in S04. Therefore, only the processing of S04 is described for the imaging device 1d.

  FIG. 18 is a flowchart showing a part of the processing of the focus target determination unit 5d in the fourth embodiment, that is, the content of the processing of S04. In the fourth embodiment, the processing shown in FIG. 18 is executed in S04 instead of the processing shown in FIG. 6 applied in the first embodiment.

  When the processing of S04 starts, the face detection unit 6d detects faces for each block (S22). Next, the block determination unit 13 checks the number of blocks in which a face has been detected. If the number of such blocks is less than 1 (S23: <1), that is, zero, the block determination unit 13 notifies the determination unit 7d that there is no face, and the determination unit 7d passes the two-dimensional coordinates of the center of the image (frame) to the distance measurement unit 10 (S24).

  When the number of blocks in which a face is detected is greater than 1 (S23: >1), the block determination unit 13 selects one selected block from the plurality of blocks in which a face has been detected (S25). When the number of such blocks is one (S23: =1), or after the processing of S25, the determination unit 7d checks the number of faces included in the selected block chosen by the block determination unit 13. When the number of faces is greater than 1 (S26: >1), that is, when a plurality of faces are detected in the selected block, the determination unit 7d selects one face from the plurality of faces (S27). The processing of S27 is the same as the flowchart processing described in the first embodiment, except that the processing target is the selected block rather than the entire image.

  Returning to the description of FIG. 18: when the number of detected faces is one (S26: =1), or after the processing of S27, the determination unit 7d passes the two-dimensional coordinates of the face point of one face to the distance measurement unit 10 (S28). The term "one face" here refers to the only detected face when a single face was detected, and to the selected face after the processing of S27. After this processing, the processing from S05 onward in FIG. 5 is performed.
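
  Putting the S22 to S28 branches together, the flow might be sketched as follows, assuming `blocks` is the block-to-face-points mapping built by `detect_per_block` in the earlier sketch and the selected block is chosen by face count:

    from math import hypot

    def s04_flow(blocks, image_size):
        occupied = [faces for faces in blocks.values() if faces]   # S22/S23
        if not occupied:                                           # S23: < 1
            return image_size[0] / 2.0, image_size[1] / 2.0        # S24: frame center
        selected = max(occupied, key=len)                          # S25: selected block
        if len(selected) > 1:                                      # S26: > 1
            cx = sum(x for x, _ in selected) / len(selected)
            cy = sum(y for _, y in selected) / len(selected)
            # S27: pick the face point nearest the block's face-point center.
            return min(selected, key=lambda p: hypot(p[0] - cx, p[1] - cy))
        return selected[0]                                         # S28: the only face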

[Action / Effect]
According to the imaging device 1d in the fourth embodiment, the focusing image is divided into a plurality of blocks, one selected block including the face to be focused is chosen from among them, and the face to be focused is then selected from within that block. In this way, the imaging device 1d of the fourth embodiment obtains, in a pseudo manner, the same effect as the second embodiment: when a plurality of groups exist in an image, the blocks roughly separate those groups, and the face to be focused is selected based on the group (or the part of a group) included in the one selected block chosen according to a predetermined criterion.

  Further, unlike the imaging device 1b in the second embodiment, the imaging device 1d in the fourth embodiment does not execute the clustering process. For this reason, the imaging device 1d according to the fourth embodiment can perform processing at a higher speed than the imaging device 1b according to the second embodiment.

[Modification]
The determination unit 7d may be configured similarly to the determination unit 7b in the second embodiment. In this case, the determination unit 7d differs from the determination unit 7b in that it performs processing based on the faces included in the selected block determined by the block determination unit 13, rather than the faces included in the selected set determined by the set determination unit 12. That is, the classification into sets and the determination of a selected set are performed on the selected block rather than on the entire image. Except for these points, the determination unit 7d performs the same processing as the determination unit 7b.

  With this configuration, clustering need not be performed for all faces in the screen; it suffices to cluster only the faces included in the selected block. Therefore, when the screen contains many groups, processing can be performed faster than with the imaging device 1b in the second embodiment. In addition, the set including the face to be focused, and the face to be focused itself, can be selected more accurately than with the basic imaging device 1d of the fourth embodiment.

[Fifth embodiment]
Next, the imaging device 1e including the focus target determination unit 5e will be described. The focus target determination unit 5e is a fifth embodiment of the focus target determination unit 5. Hereinafter, the differences of the imaging device 1e and the focus target determination unit 5e from the imaging device 1a and the focus target determination unit 5a in the first embodiment will be described.

[System configuration]
FIG. 19 is a diagram illustrating functional blocks of the imaging device 1e. The imaging device 1e is different from the imaging device 1a in the first embodiment in that it includes a focusing target determination unit 5e instead of the focusing target determination unit 5a. Therefore, description of the input unit 2, the imaging unit 3, the image storage unit 4, and the distance measuring unit 10 of the imaging device 1e is omitted below.

<Focus target determination unit>
The focus target determination unit 5e includes a face detection unit 6a, a maximum face selection unit 14, a candidate selection unit 15, and a determination unit 7e. Each functional unit constituting the focus target determination unit 5e is described below. However, since the face detection unit 6a included in the focus target determination unit 5e has the same configuration as the face detection unit 6a included in the focus target determination unit 5a in the first embodiment, its description is omitted.

<Maximum face selection unit>
The maximum face selection unit 14 selects the largest face (maximum face) from the plurality of faces detected by the face detection unit 6a. When a face rectangle is used in the face detection unit 6a, the maximum face selection unit 14 performs selection based on the size of the face rectangle. In addition, when the face detection unit 6a performs pattern matching, the maximum face selection unit 14 performs selection based on the size of the pattern. The maximum face selection unit 14 may be configured to select the maximum face by any other method. The maximum face selection unit 14 passes information on the selected maximum face (eg, coordinates of face points, face size) to the candidate selection unit 15.

  FIG. 20 is a diagram illustrating a processing example of the maximum face selection unit 14. In FIG. 20, the size of each face point indicates the size of the face corresponding to each face point. By the processing of the maximum face selection unit 14, a face point surrounded by a thick frame is selected as the maximum face.

<Candidate selection unit>
The candidate selection unit 15 selects a face candidate to be focused based on the size of the maximum face selected by the maximum face selection unit 14. In other words, the candidate selection unit 15 determines a face to be processed by the determination unit 7e.

The candidate selection unit 15 selects, as the target face candidates, the maximum face selected by the maximum face selection unit 14 and any other face whose size falls within a predetermined range relative to the size of the maximum face. A size within the predetermined range means, for example, a size smaller than the maximum face by several percent to several tens of percent, or a size of at least half or at least two thirds of the maximum face.
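
  A sketch of the maximum-face and candidate selection steps, using the two-thirds example threshold mentioned above; the dict keys are assumptions of this example.

    def target_candidates(faces, ratio=2.0 / 3.0):
        # `faces` is assumed to be a list of dicts with a "size" key
        # (face rectangle or pattern size); the candidates are the
        # maximum face plus every face at least `ratio` times its size.
        biggest = max(faces, key=lambda face: face["size"])
        return [face for face in faces
                if face["size"] >= ratio * biggest["size"]]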

  FIG. 21 is a diagram illustrating a processing example of the candidate selection unit 15. In FIG. 21, small white circles indicate face points not selected by the candidate selection unit 15, black circles indicate the face points of the other faces it selected, and the face point surrounded by a thick frame indicates the face point of the maximum face selected by the maximum face selection unit 14.

<Determination unit>
The determination unit 7e differs from the determination unit 7a in that it performs processing based on the faces selected as candidates by the candidate selection unit 15, rather than all faces in the image captured by the imaging unit 3. Except for this point, the determination unit 7e performs the same processing as the determination unit 7a.

  The processing of the determination unit 7e will be described using FIG. 21. The determination unit 7e selects the face to be focused based on the three face points selected by the candidate selection unit 15. First, the center determination unit 8e of the determination unit 7e acquires the center of the three face points. Then, based on this center, the face determination unit 9e of the determination unit 7e selects one face point (here, the face point of the maximum face, surrounded by a thick frame) according to a predetermined criterion (here, the face point closest to the acquired center).

[Operation example]
Next, an operation example of the imaging device 1e in the fifth embodiment will be described. The operation of the imaging device 1e is the same as that of the imaging device 1a in the first embodiment except for the content of the processing in S13. Therefore, only the processing of S13 is described for the imaging device 1e.

  FIG. 22 is a flowchart showing a part of the processing of the focus target determination unit 5e in the fifth embodiment, that is, the content of the processing of S13. The fifth embodiment differs from the first embodiment in that, in S13, the processing shown in FIG. 22 is executed instead of the processing shown in FIG. 7.

  When the processing of S13 is started, the maximum face selection unit 14 selects the maximum face among the plurality of faces detected by the face detection unit 6a (S29). Next, the candidate selection unit 15 selects a candidate face based on the size of the face (maximum face) selected by the maximum face selection unit 14 (S30).

  Next, the center determination unit 8e of the determination unit 7e acquires the center of the face points based on the candidate faces selected by the candidate selection unit 15, that is, the maximum face and any other faces whose sizes fall within the predetermined range relative to the maximum face (S31). Then, the face determination unit 9e of the determination unit 7e selects one face, that is, the face to be focused, based on the acquired center (S32). According to this processing, when no other qualifying face is detected, the maximum face is determined as the target face.

[Action / Effect]
According to the imaging device 1e in the fifth embodiment, the face to be focused is selected based only on faces of at least a certain size, the threshold being determined from the size of the maximum face.

This prevents the faces of people captured merely in the background, that is, people the photographer does not intend as subjects, from being included in the processing. Accordingly, the accuracy of selecting the set including the face to be focused, of acquiring the center based on the faces in the set, and of selecting the face to be focused is improved.

[Modification]
The maximum face selection unit 14 may be configured to perform selection based on the size of the detected skin color area of the face.

Brief description of the drawings

FIG. 1 is a diagram showing the functional blocks of the first embodiment.
FIG. 2 is a diagram for explaining the circumscribed polygon method.
FIG. 3 is a diagram for explaining the center-of-gravity method.
FIG. 4 is a diagram for explaining the processing of the face determination unit.
FIG. 5 is a flowchart showing an operation example of the first embodiment.
FIG. 6 is a flowchart showing a part of the operation example of the first embodiment.
FIG. 7 is a flowchart showing a part of the operation example of the first embodiment.
FIG. 8 is a diagram showing the functional blocks of the second embodiment.
FIG. 9 is a diagram showing a processing example of the classification unit.
FIG. 10 is a diagram showing a processing example of the determination unit in the second embodiment.
FIG. 11 is a flowchart showing a part of the operation example of the second embodiment.
FIG. 12 is a diagram for showing the effect of the second embodiment.
FIG. 13 is a diagram showing the functional blocks of the third embodiment.
FIG. 14 is a flowchart showing a part of the operation example of the third embodiment.
FIG. 15 is a diagram showing the functional blocks of the fourth embodiment.
FIG. 16 is a diagram showing an example of the focusing image divided into a plurality of blocks.
FIG. 17 is a diagram showing a processing example of the determination unit in the fourth embodiment.
FIG. 18 is a flowchart showing a part of the operation example of the fourth embodiment.
FIG. 19 is a diagram showing the functional blocks of the fifth embodiment.
FIG. 20 is a diagram showing a processing example of the maximum face selection unit.
FIG. 21 is a diagram showing a processing example of the candidate selection unit.
FIG. 22 is a flowchart showing a part of the operation example of the fifth embodiment.

Explanation of symbols

1a, 1b, 1c, 1d, 1e  Imaging device
2  Input unit
3  Imaging unit
4  Image storage unit
5a, 5b, 5c, 5d, 5e  Focus target determination unit
6a, 6d  Face detection unit
7a, 7b, 7c, 7d, 7e  Determination unit
8a, 8b, 8d, 8e  Center determination unit
9a, 9b, 9d, 9e  Face determination unit
10  Distance measurement unit
11  Classification unit
12  Set determination unit
13  Block determination unit
14  Maximum face selection unit
15  Candidate selection unit

Claims (9)

  1. An object determination device comprising:
    detection means for detecting human faces from an input image; and
    determination means for determining, when a plurality of human faces are detected by the detection means, from among the plurality of human faces and based on the positions of the plurality of human faces, a face to be focused on and/or a face to be subjected to exposure control when imaging is performed,
    wherein the determination means includes:
    center determination means for determining the center of the positions of the plurality of persons based on the positions of the faces of the plurality of persons; and
    face determination means for determining the target face based on the position of the center.
  2. The object determination device according to claim 1, wherein the center determination means determines, as the center, the center of a polygon circumscribing the positions of the faces of the plurality of persons.
  3. The object determination device according to claim 1, wherein the center determination means determines, as the center, the center of gravity of the positions of the faces of the plurality of persons.
  4. The object determination device according to claim 1, wherein the face determination means determines, as the target face, the face of the person located closest to the center.
  5. The object determination device according to claim 1, wherein the face determination means determines the target face, according to a predetermined criterion, from among the faces located within a predetermined distance of the center.
  6. The object determination device according to any one of claims 1 to 5, further comprising:
    classification means for classifying, when a plurality of human faces are detected by the detection means, the detected faces into a plurality of sets based on their respective positions; and
    set determination means for determining, from among the plurality of sets, a selected set used to determine the target face,
    wherein the determination means determines the target face based on the human faces included in the selected set.
  7. The object determination device according to any one of claims 1 to 6, further comprising display means for displaying the human face determined by the determination means in distinction from the other faces.
  8. An object determination method in which an information processing device executes:
    a step of detecting human faces from an input image; and
    a determining step of determining, when a plurality of human faces are detected, from among the plurality of human faces and based on their positions, a face to be focused on and/or a face to be subjected to exposure control when imaging is performed,
    wherein the determining step includes:
    a step of determining the center of the positions of the plurality of persons based on the positions of the faces of the plurality of persons; and
    a step of determining the target face based on the position of the center.
  9. A program for causing an information processing device to execute each step of the object determination method according to claim 8.





Legal Events

Date        Code  Description
2006-06-30  A621  Written request for application examination
2009-03-17  A131  Notification of reasons for refusal
2009-05-18  A521  Written amendment
2009-06-09  A131  Notification of reasons for refusal
2009-07-23  A521  Written amendment
2009-08-18  A02   Decision of refusal
2009-09-17  A521  Written amendment
2009-11-25  A911  Transfer of reconsideration by examiner before appeal (zenchi)
2010-01-26  A01   Written decision to grant a patent or to grant a registration (TRDD: decision of grant or rejection written)
2010-02-08  A61   First payment of annual fees (during grant procedure)
2013-02-26  FPAY  Renewal fee payment (payment until 2013-02-26; year of fee payment: 3)
            R150  Certificate of patent or registration of utility model (ref document number: 4461747; country: JP)
2014-02-26  FPAY  Renewal fee payment (payment until 2014-02-26; year of fee payment: 4)