JP4135100B2 - Imaging device - Google Patents

Imaging device

Info

Publication number
JP4135100B2
Authority
JP
Japan
Prior art keywords
face
image
means
person
position
Prior art date
Legal status
Active
Application number
JP2004083062A
Other languages
Japanese (ja)
Other versions
JP2005269562A (en)
Inventor
健一郎 綾木
Original Assignee
FUJIFILM Corporation (富士フイルム株式会社)
Priority date
Filing date
Publication date
Application filed by FUJIFILM Corporation (富士フイルム株式会社)
Priority to JP2004083062A
Publication of JP2005269562A
Application granted
Publication of JP4135100B2

Description

  The present invention relates to a photographing apparatus, and more particularly, to a photographing apparatus that digitally processes an image obtained from an image sensor and records the digital data on a recording medium.

  In recent years, cameras have become more automated, and even those unaccustomed to photography can take pictures with relatively few failures. In this situation, what separates novice photographers from experienced ones is considered to be largely a matter of composition, angle, and framing.

  Therefore, so that even a person not proficient in photography can easily achieve good composition and framing, Patent Document 1 proposes storing guide information on framing and shooting techniques in the photographing apparatus and displaying that guide information on a monitor at the time of shooting.

  Patent Document 2 proposes displaying a framing-guide grid superimposed on the through image during shooting.

  Further, Patent Documents 3 and 4 propose displaying on the monitor a human-shaped frame image that guides the subject to an optimal position according to the shooting mode.

Japanese Patent Application Laid-Open No. 2004-228561 proposes detecting the subject's eyes from the image obtained from the image sensor and automatically bringing them into focus.
JP 2002-281356 A
JP 2002-290780 A
JP 2002-094838 A
JP 2002-232753 A
JP 2001-215403 A

  However, the photographing apparatus of Patent Document 1 has the drawback of being cumbersome, since the user must determine the optimal composition by consulting the guide information.

  In addition, since the photographing apparatus of Patent Document 2 displays only a grid, a person unaccustomed to photography does not know how to use it and cannot use it effectively.

  In addition, the photographing apparatuses of Patent Documents 3 and 4 have the drawback that the monitor is hard to see, because the guiding human-shaped frame image is displayed superimposed on the through image. Moreover, since the guide merely places the subject at the optimum position, the background is not taken into account, and the result may be an unflattering portrait. For example, there is a risk of taking a "neck cut shot", in which a horizontal line in the background such as the horizon appears to cross the person's neck, or a "skewer shot", in which a background tree or utility pole appears to pierce the person's head.

  In addition, the photographing apparatus of Patent Document 5 merely focuses automatically on the subject's eyes, so the final composition must still be determined by the user.

  The present invention has been made in view of such circumstances, and an object of the present invention is to provide an imaging apparatus that can easily capture an image with a good composition.

According to a first aspect of the present invention, in order to achieve the above object, there is provided a photographing apparatus that records an image obtained from an imaging means on a storage medium in response to an image recording instruction, the apparatus comprising: face detection means for detecting the position of a person's face in the image obtained from the imaging means; means for detecting the ratio of the size of the person's face to the image obtained from the imaging means; and guide means for guiding the position of the face detected by the face detection means so that it is located at a predetermined position in the image when the ratio of the face size is equal to or less than a certain ratio.
According to a second aspect of the present invention, in order to achieve the above object, there is provided a photographing apparatus that records an image obtained from an imaging means on a storage medium in response to an image recording instruction, the apparatus comprising: face detection means for detecting the position of a person's face in the image obtained from the imaging means; means for detecting the ratio of the size of the person's face to the image obtained from the imaging means; means for setting a face placement area in the image in accordance with that ratio; and guide means for guiding the position of the face detected by the face detection means so that it is located within the face placement area.

  According to the present invention, when the photographing apparatus is pointed at a subject, the position of the person's face is detected by the face detection means from the image obtained from the imaging means, and the guide means guides that face position to a predetermined position in the image. Thus, for example, if the predetermined position is set at the golden-ratio points of the image, anyone can easily place the subject at an optimal position and take the picture.

According to a third aspect of the present invention, in order to achieve the above object, there is provided the photographing apparatus according to the first or second aspect, further comprising: line segment detection means for detecting a horizontal or vertical line segment from the image obtained from the imaging means; and warning means for issuing a warning when the face of the person detected by the face detection means overlaps a horizontal or vertical line segment detected by the line segment detection means.

  According to the present invention, a horizontal or vertical line segment is detected by the line segment detection means from the image obtained from the imaging means, and when the face detected by the face detection means overlaps that line segment, a warning is issued by the warning means. This makes it possible to avoid taking the so-called "neck cut shots" and "skewer shots".
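As an illustration only, the warning logic of this aspect can be sketched in Python. The patent describes circuits, not code, and leaves the detection method open; the row-difference test and all names below are hypothetical, with a simple intensity step standing in for a horizon-like line:

```python
def find_horizontal_lines(gray, threshold=0.5):
    """Return row indices where the mean absolute difference to the next
    row exceeds `threshold` -- a crude stand-in for the patent's
    horizontal line segment detection. `gray` is a 2-D list of pixel
    intensities."""
    rows = []
    for y in range(len(gray) - 1):
        diff = sum(abs(a - b) for a, b in zip(gray[y], gray[y + 1])) / len(gray[y])
        if diff > threshold:
            rows.append(y)
    return rows

def neck_cut_warning(face_top, face_bottom, line_rows):
    """True when a detected horizontal line crosses the detected face
    region (rows face_top..face_bottom) -- the condition that would
    trigger the warning means."""
    return any(face_top <= y <= face_bottom for y in line_rows)
```

For a frame whose top half is dark and bottom half bright, the single strong row boundary is reported, and the warning fires only when the face region spans it.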

According to a fourth aspect of the present invention, in order to achieve the above object, there is provided a photographing apparatus that records an image obtained from an imaging means on a storage medium in response to an image recording instruction, the apparatus comprising: face detection means for detecting the position of a person's face in the image obtained from the imaging means; means for detecting the ratio of the size of the person's face to the image obtained from the imaging means; means for setting a shooting execution area in the image in accordance with that ratio; and recording instruction means for outputting the recording instruction when the position of the face detected by the face detection means belongs to the shooting execution area.

  According to the present invention, a person's face is detected from an image obtained from the imaging means, and a recording instruction is output when the position of the face is located at a predetermined position in the image. As a result, an image having an optimal composition can be automatically taken.

  With the photographing apparatus according to the present invention, it is possible to easily photograph an image with a good composition.

  The best mode for carrying out the photographing apparatus according to the present invention will be described below with reference to the accompanying drawings.

  FIG. 1 is a front perspective view showing an embodiment of a photographing apparatus according to the present invention. This photographing apparatus is a digital camera that receives light passing through a lens by a solid-state imaging device, converts the light into a digital signal, and records it on a storage medium.

  The camera body 12 of the digital camera 10 is formed in the shape of a horizontally long rectangular box, and on its front are provided a lens 14, a strobe 16, a finder window 18, a self-timer lamp 20, an AF auxiliary light lamp 22, a strobe light control sensor 24, and the like. A shutter button 26, a power/mode switch 28, a mode dial 30, and the like are provided on the upper surface of the camera body 12.

  FIG. 2 is a rear view of the digital camera 10 shown in FIG. 1. As shown in FIG. 2, a monitor 32, a viewfinder eyepiece 34, a speaker 36, a zoom button 38, a cross button 40, a MENU/OK button 42, a DISP button 44, a BACK button 46, and the like are provided on the back of the camera body 12.

  Note that a battery insertion portion and a memory card slot are provided on the lower surface of the camera body 12 (not shown) via an openable / closable cover, and a battery and a memory card are loaded into the battery insertion portion and the memory card slot.

  The lens 14 is constituted by a retractable zoom lens, and is extended from the camera body 12 by setting the camera mode to the photographing mode by the power / mode switch 28.

  The shutter button 26 is a two-stage stroke type switch with so-called "half-press" and "full-press" positions. When the shutter button 26 is half-pressed, the digital camera 10 activates AE/AF, and when it is fully pressed, shooting is executed.

  The power/mode switch 28 serves both as a power switch for turning the digital camera 10 on and off and as a mode switch for setting the mode of the digital camera 10; it is slidably provided among an "OFF position", a "playback position", and a "shooting position". The digital camera 10 is turned on by sliding the power/mode switch 28 to the "playback position" or "shooting position", and turned off by setting it to the "OFF position". Sliding the switch to the "playback position" sets the camera to "playback mode", and sliding it to the "shooting position" sets it to "shooting mode".

  The mode dial 30 functions as shooting mode setting means for setting the shooting mode of the digital camera 10. Depending on the position of the mode dial, the shooting mode is set to "auto shooting mode", "movie shooting mode", "portrait shooting mode", "sports shooting mode", "landscape shooting mode", "night scene shooting mode", "program shooting mode", "aperture priority shooting mode", "shutter speed priority shooting mode", "manual shooting mode", and so on.

  The monitor 32 is a liquid crystal display capable of color display. It is used as an image display panel for displaying captured images in playback mode and as a user interface display panel for various setting operations. In shooting mode, a through image is displayed as necessary, and the monitor serves as an electronic viewfinder for checking the angle of view.

  The zoom button 38 functions as zoom instruction means for instructing zooming, and includes a zoom tele button 38T for instructing zooming to the telephoto side and a zoom wide button 38W for instructing zooming to the wide angle side. In the digital camera 10, when the zoom tele button 38T and the zoom wide button 38W are operated in the shooting mode, the focal length of the lens 14 changes. Further, when the zoom tele button 38T and the zoom wide button 38W are operated in the reproduction mode, the image being reproduced is enlarged or reduced.

  The cross button 40 functions as direction indicating means for inputting instructions in four directions, up, down, left, and right, and is used, for example, for selecting menu items on a menu screen.

  The MENU/OK button 42 functions as a button (MENU button) for instructing a transition from the normal screen to the menu screen in each mode, and also functions as a button (OK button) for instructing confirmation of a selection, execution of processing, and the like.

  The DISP button 44 functions as a button for switching the display of the monitor 32. When the DISP button 44 is pressed during shooting, the display of the monitor 32 cycles from ON to framing guide display to OFF. When it is pressed during playback, the display cycles from normal playback to playback without character display to multi playback.

  The BACK button 46 functions as a button for instructing to cancel the input operation or return to the previous operation state.

  FIG. 3 is a block diagram showing a schematic configuration inside the digital camera.

  The overall operation of the digital camera 10 is comprehensively controlled by a central processing unit (CPU) 110. Based on operation signals input from the operation unit 112 (shutter button 26, power/mode switch 28, mode dial 30, zoom button 38, cross button 40, MENU/OK button 42, DISP button 44, BACK button 46, etc.), the CPU 110 controls the entire digital camera 10 according to a predetermined control program.

  The ROM 116, connected to the CPU 110 via the bus 114, stores the control program executed by the CPU 110 and various data necessary for control, while the EEPROM 118 stores various setting information concerning the operation of the digital camera 10, such as user settings. The memory (SDRAM) 120 is used as a calculation work area for the CPU 110 and as a temporary storage area for image data, and the VRAM 122 is used as a temporary storage area dedicated to image data.

  As described above, the digital camera 10 is set to shooting mode by setting the power/mode switch 28 to the shooting position, which makes shooting possible. When shooting mode is set, the lens 14 is extended and the camera enters a shooting standby state.

  In this shooting mode, the subject light that has passed through the lens 14 forms an image on the light receiving surface of the solid-state imaging device 124 via the diaphragm 15. The solid-state imaging device 124 is a CCD, and on its light receiving surface a large number of photodiodes (light receiving elements) are arranged two-dimensionally through red (R), green (G), and blue (B) color filters laid out in a predetermined array structure (Bayer, G stripe, etc.). The subject light that has passed through the lens 14 is received by each photodiode and converted into an amount of signal charge corresponding to the amount of incident light.

  The signal charge accumulated in each photodiode is sequentially read out as a voltage signal (image signal) corresponding to the signal charge, based on drive pulses supplied from the timing generator (TG) 126, and applied to the analog processing unit (CDS/AMP) 128.

  The analog processing unit 128 samples and holds the input R, G, and B signals for each pixel, amplifies them, and outputs them to the A/D converter 130. The A/D converter 130 converts the analog R, G, and B signals into digital R, G, and B signals, which are taken into the memory 120 via the image input control unit 132.

  The image signal processing unit 134 processes the R, G, and B signals taken into the memory 120 in accordance with instructions from the CPU 110 and generates a luminance signal (Y signal) and color difference signals (Cr, Cb signals). That is, the image signal processing unit 134 functions as image processing means comprising a synchronization circuit (a processing circuit that interpolates the spatial shifts of the color signals caused by the color filter array of the single-chip CCD and converts them into simultaneous signals), a white balance correction circuit, a gamma correction circuit, a contour correction circuit, a luminance/color difference signal generation circuit, and the like. Following instructions from the CPU 110, it processes the input R, G, and B signals using the memory 120 to generate the luminance and color difference signals (luminance/color difference signal). The generated luminance/color difference signal is stored in the VRAM 122.
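The luminance/color difference generation can be illustrated numerically. The patent does not state the conversion coefficients; the sketch below uses the conventional ITU-R BT.601 weights, which are an assumption, not the patent's specification:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB sample to a luminance/color-difference triple
    (Y, Cb, Cr) using the standard ITU-R BT.601 weights. The exact
    coefficients used by the camera's circuit are not given in the
    patent; these are the conventional broadcast values."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance signal (Y)
    cb = 0.564 * (b - y)                    # blue color difference
    cr = 0.713 * (r - y)                    # red color difference
    return y, cb, cr
```

A pure white pixel (255, 255, 255) yields full luminance and zero color difference, matching the intent of a luminance/color difference representation.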

  When the captured image is output to the monitor 32, the generated luminance / color difference signal is sent from the VRAM 122 to the video encoder 136. The video encoder 136 converts the input luminance / color difference signal into a display signal format (for example, an NTSC color composite video signal), and outputs it to the monitor 32. As a result, an image captured by the CCD 124 is displayed on the monitor 32.

  Image signals are periodically captured from the CCD 124, the image data in the VRAM 122 is periodically rewritten with the luminance/color difference signals generated from them, and the result is output to the monitor 32, so that the image captured by the CCD is displayed in real time. The photographer can confirm the shooting angle of view by viewing this real-time image (through image) on the monitor 32.

  Note that the luminance / color difference signal applied from the VRAM 122 to the video encoder 136 is added to the character MIX unit 138 as necessary, and is combined with a predetermined character, graphic, etc., and then applied to the video encoder 136. As a result, required shooting information and the like are displayed superimposed on the through image.

  Shooting is performed by pressing the shutter button 26. When the shutter button 26 is half-pressed, an S1 ON signal is input to the CPU 110, and the CPU 110 performs AE / AF processing.

  First, an image signal captured from the CCD 124 via the image input control unit 132 is input to the AF detection unit 140 and the AE / AWB detection unit 142.

  The AE/AWB detection unit 142 includes a circuit that divides one screen into a plurality of areas (for example, 16 × 16) and integrates the R, G, and B signals for each divided area, and it provides the integrated values to the CPU 110. The CPU 110 detects the brightness of the subject (subject luminance) from the integrated values obtained from the AE/AWB detection unit 142 and calculates an exposure value (shooting EV value) suitable for shooting. An aperture value and a shutter speed are then determined from the obtained shooting EV value and a predetermined program diagram, and the electronic shutter of the CCD 124 and the aperture drive unit 144 are controlled accordingly to obtain an appropriate exposure.
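The block-integration step that feeds AE can be sketched in pure Python. This is illustrative only: the function names are mine, and the frame dimensions are assumed to divide evenly by the grid size:

```python
def block_integrals(channel, blocks=16):
    """Integrate one color channel over a blocks x blocks grid, mirroring
    the divided-area integration of the AE/AWB detection circuit.
    `channel` is a 2-D list whose dimensions are assumed divisible by
    `blocks` for simplicity."""
    h, w = len(channel), len(channel[0])
    bh, bw = h // blocks, w // blocks
    grid = [[0] * blocks for _ in range(blocks)]
    for by in range(blocks):
        for bx in range(blocks):
            grid[by][bx] = sum(
                channel[y][x]
                for y in range(by * bh, (by + 1) * bh)
                for x in range(bx * bw, (bx + 1) * bw)
            )
    return grid

def subject_brightness(grid):
    """Average of the block integrals -- a simple stand-in for the subject
    luminance the CPU derives before consulting the program diagram."""
    return sum(sum(row) for row in grid) / (len(grid) * len(grid[0]))
```

How the CPU weights the blocks (center-weighted, average, etc.) is not specified in this passage, so a plain average is used here.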

  During automatic white balance adjustment, the AE/AWB detection unit 142 calculates average integrated values of the R, G, and B signals for each divided area and provides the results to the CPU 110. From the obtained integrated values of R, B, and G, the CPU 110 computes the ratios R/G and B/G for each divided area and discriminates the light source type based on the distribution of these R/G and B/G values in color space. Then, according to a white balance adjustment value suitable for the discriminated light source type, the gain values (white balance correction values) of the white balance adjustment circuit for the R, G, and B signals are controlled so that each ratio becomes approximately 1 (that is, the RGB integration ratio over one screen becomes R:G:B ≈ 1:1:1), thereby correcting the signal of each color channel.
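The gain computation that drives the integration ratio toward R:G:B ≈ 1:1:1 can be sketched as follows. Treating G as the fixed reference channel is a common convention but an assumption here; the patent does not name a reference:

```python
def white_balance_gains(r_sum, g_sum, b_sum):
    """Return (gain_r, gain_g, gain_b) that drive the full-screen RGB
    integration ratio toward R:G:B = 1:1:1, as described for the white
    balance adjustment circuit. G is held at unity gain (a common
    convention; the reference channel is unspecified in the patent)."""
    return g_sum / r_sum, 1.0, g_sum / b_sum
```

For example, a reddish scene with integrals (200, 100, 50) gets R attenuated and B boosted so that the corrected integrals become equal.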

  The AF detection unit 140 includes a high-pass filter that passes only the high-frequency component of the G signal, an absolute value processing unit, an AF area extraction unit that cuts out the signal within a predetermined focus area (for example, the center of the screen), and an integration unit that integrates the absolute value data within the AF area; the integrated value data obtained by the AF detection unit 140 is reported to the CPU 110. While moving the focus lens group of the lens 14 by controlling the lens driving unit 146, the CPU 110 calculates focus evaluation values (AF evaluation values) at a plurality of AF detection points and determines the lens position at which the evaluation value is maximal as the in-focus position. The lens driving unit 146 is then controlled so that the focus lens group moves to that in-focus position.
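This contrast-AF search can be sketched in Python. A simple horizontal pixel difference stands in for the high-pass filter, and the names and area convention are illustrative, not the patent's:

```python
def af_evaluation(green, area):
    """AF evaluation value: sum of absolute horizontal differences of the
    G signal inside the focus area (top, bottom, left, right). A sharply
    focused image has stronger high-frequency content and scores higher."""
    top, bottom, left, right = area
    total = 0
    for y in range(top, bottom):
        row = green[y]
        for x in range(left, right - 1):
            total += abs(row[x + 1] - row[x])  # crude high-pass + abs + integrate
    return total

def best_focus_position(frames_by_position, area):
    """Pick the lens position whose frame maximizes the AF evaluation
    value -- the search the CPU performs while driving the focus lens."""
    return max(frames_by_position,
               key=lambda p: af_evaluation(frames_by_position[p], area))
```

Given frames captured at several lens positions, the position producing the highest-contrast frame within the AF area is chosen as the in-focus position.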

  As described above, the AE / AF process is performed by half-pressing the shutter button 26. The photographer operates the zoom button 38 as necessary to zoom the lens 14 and adjust the angle of view.

  Thereafter, when the shutter button 26 is fully pressed, an S2 ON signal is input to the CPU 110, and the CPU 110 starts photographing and recording processing. That is, the CCD 124 is exposed with the shutter speed and aperture value determined based on the photometric result.

  The image signal output from the CCD 124 is taken into the memory 120 via the analog processing unit 128, the A / D converter 130, and the image input control unit 132, and is converted into a luminance / color difference signal by the image signal processing unit 134. Stored in the memory 120.

  The image data stored in the memory 120 is applied to the compression/decompression processing unit 148, compressed according to a predetermined compression format (for example, JPEG), stored in the memory 120 again, and then recorded on the storage medium 152 via the media control unit 150 in a predetermined image recording format (for example, Exif).

  An image recorded on the storage medium 152 in this way can be played back on the monitor 32 by setting the power/mode switch 28 to the playback position to put the digital camera 10 into playback mode.

  When the power/mode switch 28 is set to the playback position and the digital camera 10 enters playback mode, the CPU 110 issues a command to the media control unit 150 to read out the image file recorded last on the storage medium 152.

  The compressed image data of the read image file is applied to the compression/decompression processing unit 148, decompressed into an uncompressed luminance/color difference signal, and then output to the monitor 32 via the video encoder 136. As a result, the image recorded on the storage medium 152 is played back on the monitor 32. As necessary, the luminance/color difference signal of the image being played back is applied to the character MIX unit 138, combined with predetermined characters, figures, and the like, and then applied to the video encoder 136, so that predetermined shooting information is displayed superimposed on the played-back image.

  The frame advance of the image is performed by operating the left and right keys of the cross button 40. When the right key of the cross button 40 is pressed, the next image file is read from the storage medium 152 and reproduced and displayed on the monitor 32. When the left key of the cross button 40 is pressed, the previous image file is read from the storage medium 152 and reproduced and displayed on the monitor 32.

  As described above, with the digital camera 10 of the present embodiment, a through image can be displayed on the monitor 32 at the time of shooting, and by viewing it the photographer can easily determine the composition of the image to be taken.

  However, it is difficult for those who are not accustomed to shooting to determine the optimal composition, and the composition tends to be uninteresting.

  Therefore, the digital camera 10 of the present embodiment is provided with a composition guide function as a function for assisting composition determination. Hereinafter, the composition guide function will be described.

  The composition guide functions provided in the digital camera 10 of the present embodiment are of two kinds: a composition guide function [1] that guides the face of the person serving as the subject to a position preferable for portrait photography, and a composition guide function [2] that prevents compositions unfavorable for portrait photography, such as the so-called "neck cut shot" and "skewer shot". Each composition guide function can be turned ON/OFF from the menu screen.

  First, the composition guide function [1], which guides the face of the person serving as the subject to a position preferable for portrait photography, will be described.

  When the composition guide function [1] is ON and the release button 26 is half-pressed, the CPU 110 executes the composition guide [1] processing.

  In this case, the luminance/color difference signal generated by the image signal processing unit 134 is applied to the face detection unit 154. The face detection unit 154 extracts any face in the image from the input luminance/color difference signal and detects its barycentric coordinate. The face is extracted, for example, by extracting skin-color data from the image.
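A minimal sketch of this skin-color extraction and barycenter detection follows. The RGB skin rule used as the default predicate is a textbook heuristic supplied here for illustration; the patent does not specify the actual extraction criterion:

```python
def detect_face_center(pixels,
                       is_skin=lambda r, g, b: r > 95 and g > 40 and b > 20 and r > g > b):
    """Return the barycentric (x, y) coordinate of skin-colored pixels,
    or None when none are found. `pixels` is a 2-D list of (r, g, b)
    tuples. The default `is_skin` rule is an illustrative RGB heuristic,
    not the patent's (unspecified) skin-color extraction."""
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            if is_skin(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    # barycenter (centroid) of the skin-colored region
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Returning None when no skin is found corresponds to the "no face in the image" branch of the flowchart in FIG. 6 (step S12).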

  The face position data (barycentric coordinates of the face area) detected by the face detection unit 154 is output to the CPU 110. The CPU 110 determines whether or not the detected face position P belongs to a preset face placement area and, if it does not, performs voice guidance.

  Here, the face placement areas in which the subject's face should be located are set as follows. As shown in FIG. 4, the screen is divided vertically and horizontally at the golden ratio, 1.618033989:1 and 1:1.618033989. Areas S1 to S4 of radius r centered on the intersections P1 to P4 of the dividing lines L1 to L4 are then set as the face placement areas. The data of the face placement areas S1 to S4 set in this way is recorded in the ROM 116; the CPU 110 reads it out and determines whether or not the face position P detected by the face detection unit 154 belongs to one of the preset face placement areas S1 to S4.
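The golden-section geometry of FIG. 4 can be computed directly. This is a sketch under the stated ratios; the function names and the circular-area membership test mirror the description above:

```python
PHI = 1.618033989  # golden ratio used by the dividing lines L1..L4

def golden_points(width, height):
    """The four intersections P1..P4 of the dividing lines that split the
    screen at PHI:1 and 1:PHI, vertically and horizontally (FIG. 4)."""
    xs = (width / (1 + PHI), width * PHI / (1 + PHI))
    ys = (height / (1 + PHI), height * PHI / (1 + PHI))
    return [(x, y) for y in ys for x in xs]

def in_face_area(face_pos, width, height, radius):
    """True when face_pos lies within radius r of some golden point,
    i.e. inside one of the face placement areas S1..S4."""
    fx, fy = face_pos
    return any((fx - px) ** 2 + (fy - py) ** 2 <= radius ** 2
               for px, py in golden_points(width, height))
```

On a 1000 × 1000 screen the points fall near (382, 382), (618, 382), (382, 618), and (618, 618), so a face at the screen center lies outside every placement area and would trigger guidance.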

  The voice guidance outputs from the speaker 36 the direction in which the camera (lens) should be turned so that the subject's face comes to lie in the face placement area closest to the current face position. For example, as shown in FIG. 5(a), when the current face position P is slightly to the upper right of the center of the screen, the direction in which to turn the camera (lens) so that the subject's face is positioned in the nearest face placement area S2 is output from the speaker 36.

  In this case, the left-right guidance is performed first, followed by the up-down guidance. In the example shown in FIG. 5(a), the subject's face, located slightly to the upper right of the center of the screen, must be positioned in the upper-right face placement area S2, so the camera needs to be turned to the left and then pointed downward.

  Therefore, first, a voice guide “Please turn the camera to the left” is output to correct the shift in the left-right direction. The photographer turns the camera to the left according to the voice guide.

  When the left-right position has been corrected, a voice guide "Please stop" is output to stop the direction correction. As a result, as shown in FIG. 5(b), the deviation in the left-right direction is corrected.

  Next, to correct the vertical deviation, a voice guide "Please point the camera downward" is output. The photographer points the camera downward according to the voice guide.

  As a result, when the vertical position is corrected, a voice guide “Please stop” is output to stop the direction correction.

  When voice guidance has been performed in this way and the subject's face position P comes to belong to the target face placement area S2, as shown in FIG. 5(c), the voice guidance for direction correction ends and a voice guide "Please shoot" is output. The photographer presses the release button 26 in accordance with this guide and takes the picture.

  Note that the guide audio is read from the required audio files recorded in advance in the ROM 116, applied to the audio signal processing unit 156 for the required signal processing, converted into an analog signal by the D/A converter 158, and output from the speaker 36 via the amplifier 160.

  FIG. 6 is a flowchart showing a processing procedure of the composition guide function [1] described above.

  First, it is determined whether or not the release button 26 is half-pressed (step S10). If it is determined that the release button 26 is half-pressed, the composition guide [1] processing is executed.

  First, based on the image obtained from the CCD 124, the face detection unit 154 detects the face position of the person (step S11).

  The CPU 110 determines whether there is a human face in the image to be shot based on the detection result of the face detection unit 154 (step S12). If it is determined that there is no person's face in the image to be photographed, the composition guide [1] process is terminated.

  On the other hand, when a person's face is detected in the image to be photographed, it is determined whether or not the position of the person's face belongs to a preset face arrangement area (step S13). If it is determined that the detected face position belongs to the face arrangement area, the composition guide [1] process is terminated.

  On the other hand, if it is determined that the detected face position does not belong to the face placement area, voice guidance processing is executed.

  FIG. 7 is a flowchart showing a procedure of voice guide processing in the composition guide [1].

  First, the face placement area closest to the detected face position is obtained, and the correction direction needed to place the face there is calculated (step S20). Whether or not left-right correction is necessary is then determined from this result (step S21).
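Step S20 can be sketched as follows. The coordinate convention (x rightward, y downward) and the function name are assumptions; the direction mapping follows the worked example of FIG. 5, where turning the camera left shifts the subject right in the frame and pointing it down shifts the subject up:

```python
def guidance_plan(face_pos, placement_points):
    """Choose the placement point nearest the detected face position and
    derive the camera corrections (step S20). Returns a pair
    (horizontal, vertical) of instructions, each "left"/"right",
    "up"/"down", or None when no correction is needed on that axis.
    Screen coordinates: x grows rightward, y grows downward."""
    fx, fy = face_pos
    tx, ty = min(placement_points,
                 key=lambda p: (p[0] - fx) ** 2 + (p[1] - fy) ** 2)
    # Face must move right on screen -> turn the camera left, and so on.
    horizontal = "left" if tx > fx else "right" if tx < fx else None
    # Face must move up on screen (smaller y) -> point the camera down.
    vertical = "down" if ty < fy else "up" if ty > fy else None
    return horizontal, vertical
```

For a face slightly to the upper right of the center of a 1000 × 1000 screen, the nearest point is the upper-right one and the plan is ("left", "down"), matching the voice guides in the example above.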

  If no left-right correction is required, the process proceeds to step S27. If a left-right correction is required, it is determined whether or not the correction direction is the left direction (step S22). If the correction direction is the left direction, a voice guide “Please turn the camera to the left” is output (step S23); if not, a voice guide “Please turn the camera to the right” is output (step S24). The photographer corrects the direction of the camera according to the voice guide.

  Since this correction changes the horizontal position of the subject's face in the image, the CPU 110 determines from the changed face position whether or not the left-right deviation has been corrected (step S25). When it has, a voice guide “Please stop” is output so that the user stops correcting the camera orientation (step S26). As a result, the horizontal deviation of the position of the subject's face is corrected.

  Next, it is determined whether or not a correction in the vertical direction is required (step S27). If no vertical correction is required, the process proceeds to step S33 and a voice guide “Please shoot” is output (step S33).

  On the other hand, if a vertical correction is required, it is determined whether or not the correction direction is upward (step S28). If the correction direction is upward, a voice guide “Please turn the camera upward” is output (step S29); if not, a voice guide “Please turn the camera downward” is output (step S30). The photographer corrects the direction of the camera according to the voice guide.

  Since this correction changes the vertical position of the subject's face in the image, the CPU 110 determines from the changed face position whether or not the vertical deviation has been corrected (step S31). When it has, a voice guide “Please stop” is output so that the user stops correcting the camera orientation (step S32). As a result, the vertical deviation of the position of the subject's face is corrected.

  As a result of the series of corrections described above, the position of the subject's face in the image now lies within a preset face placement area, so the CPU 110 outputs a voice guide “Please shoot” (step S33), and the voice guidance process ends.
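
The flow of steps S20 through S33 can be sketched as a function that returns the voice guides in the order they would be spoken. A Python sketch under assumed conventions (image x grows rightward, y grows downward, and panning the camera right or tilting it down shifts the subject left or up in the frame; the `tolerance` parameter stands in for the area radius r):

```python
def guide_messages(face_pos, targets, tolerance=8):
    """Sketch of steps S20-S33: find the nearest face placement area and emit
    the voice guides in flowchart order (left/right first, then up/down, then
    "Please shoot"). `targets` are the area centers P1..P4; `targets` and
    `tolerance` are assumed parameters, not the patent's interface.
    """
    fx, fy = face_pos
    # Step S20: nearest placement area and the required correction.
    tx, ty = min(targets, key=lambda p: (p[0] - fx) ** 2 + (p[1] - fy) ** 2)
    dx, dy = tx - fx, ty - fy
    messages = []
    if abs(dx) > tolerance:  # steps S21-S26: horizontal correction
        # Face must move left in the frame (dx < 0) -> pan the camera right.
        messages += ["Please turn the camera to the right" if dx < 0
                     else "Please turn the camera to the left", "Please stop"]
    if abs(dy) > tolerance:  # steps S27-S32: vertical correction
        # Face must move up in the frame (dy < 0) -> tilt the camera down.
        messages += ["Please turn the camera downward" if dy < 0
                     else "Please turn the camera upward", "Please stop"]
    messages.append("Please shoot")  # step S33
    return messages
```

A face already inside a placement area yields only “Please shoot”; otherwise the horizontal guide pair precedes the vertical one, mirroring the flowchart order.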

  Thus, by shooting with the composition guide function [1], the face of the person who is the subject can be placed at a golden-ratio position on the screen, so that anyone can easily obtain a composition preferable for portrait photography.

  In the above embodiment, the subject's face is guided into the areas S1 to S4 shown in FIG. 4 (areas of radius r centered on the intersections P1 to P4 of the dividing lines L1 to L4 obtained by dividing the screen at the golden ratio); however, the position at which the person's face is to be arranged is not limited to this.

  For example, the screen may be divided left and right at the golden ratio (1.618033989:1 and 1:1.618033989) and the subject's face guided onto one of the dividing lines (L1 and L2 in FIG. 4), or the screen may be divided up and down at the same ratio and the face guided onto one of the dividing lines (L3 and L4 in FIG. 4). The face may also be guided into only the face placement area S1 or S2 in FIG. 4 of the above embodiment. Further, the user may arbitrarily set the position at which the subject's face is arranged.
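
As a sketch of the arithmetic behind these dividing lines: splitting a length at PHI:1 and 1:PHI (using the golden-ratio identity PHI + 1 = PHI squared) puts the lines at about 61.8% and 38.2% of the length. The helper names and the tolerance below are assumptions for illustration:

```python
PHI = 1.618033989  # golden ratio

def golden_lines(length):
    """Positions splitting `length` at the ratios PHI:1 and 1:PHI.

    PHI:1 puts the line at length*PHI/(PHI+1) = length/PHI (~0.618*length),
    and 1:PHI puts it at length/(1+PHI) = length/PHI**2 (~0.382*length),
    since PHI + 1 = PHI**2.
    """
    return length / PHI**2, length / PHI

def on_dividing_line(face_x, width, tolerance=10):
    """Variant guide target: is the face on either vertical dividing line
    (L1 or L2 in FIG. 4)? `tolerance` is an assumed band width."""
    return any(abs(face_x - x) <= tolerance for x in golden_lines(width))
```

The same two helpers applied to the frame height give the horizontal dividing lines (L3 and L4 in FIG. 4).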

  Further, the ratio of the size of the subject's face in the image may be detected, and the composition guide processing executed only when the face occupies a certain ratio or less. In other words, since guiding the face to these positions in a close-up shot risks producing an unnatural composition, it is preferable to execute the composition guide processing only when the person's face is photographed at a certain ratio or less (for example, smaller than the face size in a bust-up shot).

  Further, the face placement area may be changed according to the ratio of the size of the subject's face in the image. For example, when the subject's face is photographed larger than the face size in a bust-up shot, a face placement area is set at the center of the screen, and when the face is photographed smaller than that size, the areas S1 to S4 in FIG. 4 are set as the face placement areas.
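
The size-dependent choice of face placement areas described above might be sketched as follows; the threshold value and the names are assumptions for illustration, not values from the patent:

```python
PHI = 1.618033989  # golden ratio

def placement_centers(face_ratio, width, height, bust_up_ratio=0.25):
    """Sketch of the size-dependent variant: a face larger than the bust-up
    threshold is guided to a single central area, a smaller face to the four
    golden-section areas S1-S4. `face_ratio` is face height over screen
    height; `bust_up_ratio` is an assumed threshold.
    """
    if face_ratio > bust_up_ratio:
        return [(width / 2, height / 2)]  # one area at the screen center
    # Otherwise: centers of S1-S4 at the golden-section intersections.
    xs = (width / PHI**2, width / PHI)
    ys = (height / PHI**2, height / PHI)
    return [(x, y) for x in xs for y in ys]
```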

  In the above embodiment, the composition correction direction is indicated by voice, but the method of indicating the correction direction is not limited to this. For example, when the monitor 32 is used as a viewfinder, a mark indicating the correction direction (here, a “▲” mark) may be superimposed on the through image, as shown in FIG. 8. The example shown in FIG. 8 indicates that the camera orientation should be corrected to the left and downward (diagonally downward).

  Next, the composition guide function [2] for preventing an unfavorable composition for human photographing will be described.

  When the composition guide function [2] is ON, when the release button 26 is half-pressed, the CPU 110 executes the process of the composition guide [2].

  In this case, the luminance/color-difference signal generated by the image signal processing unit 134 is supplied to both the face detection unit 154 and the line segment detection unit 162. The line segment detection unit 162 extracts horizontal and vertical line segments in the image from the input luminance/color-difference signal and detects their positions.

  As a method for extracting the vertical and horizontal line segments, for example, the input image is filtered to extract contours, and approximately horizontal and vertical line segments are picked out from the extracted contours.
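
As a toy illustration of this filter-based approach (not the patent's actual implementation), simple brightness-difference filters can stand in for the contour extraction, with rows or columns of strong total response reported as horizontal or vertical segments:

```python
def detect_lines(image, threshold=None):
    """Sketch of filter-based line detection: `image` is a 2D list of
    brightness values; `threshold` is an assumed tuning parameter
    (default: half the maximum response). Returns (rows with a horizontal
    line, columns with a vertical line).
    """
    h, w = len(image), len(image[0])
    # Vertical brightness differences respond to horizontal edges, per row.
    row_resp = [sum(abs(image[y + 1][x] - image[y][x]) for x in range(w))
                for y in range(h - 1)]
    # Horizontal brightness differences respond to vertical edges, per column.
    col_resp = [sum(abs(image[y][x + 1] - image[y][x]) for y in range(h))
                for x in range(w - 1)]

    def peaks(resp):
        limit = threshold if threshold is not None else max(resp) / 2
        return [i for i, v in enumerate(resp) if v > limit and v > 0]

    return peaks(row_resp), peaks(col_resp)
```

A production detector would typically use a proper edge filter plus a line finder (e.g. a Hough transform) instead of these raw row/column sums.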

  The position data of the face detected by the face detection unit 154 and the position data of the vertical and horizontal line segments detected by the line segment detection unit 162 are output to the CPU 110.

  Based on the input face position data and line segment position data, the CPU 110 determines whether or not a horizontal line segment crosses just under the face, that is, across the part corresponding to the neck. It also determines whether or not a vertical line segment passes through the face.

  That is, it is determined whether the image is a so-called “neck shot,” in which a horizontal line crosses the neck as shown in FIG. 9A, or a so-called “skewer shot,” in which a vertical line passes through the head as shown in FIG. 10A.

  As a result, if the image is determined to be a neck shot or a skewer shot, a voice guide “Please change the composition” is output.
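
The neck-shot and skewer-shot judgments can be sketched as interval tests between the detected face box and the detected segment positions. The half-face-height neck band below is an assumed margin, not a value given in the patent:

```python
def check_composition(face_box, h_lines, v_lines):
    """Sketch of the step-S43 judgment. `face_box` is (left, top, right,
    bottom) of the detected face; `h_lines` are y-positions of horizontal
    segments, `v_lines` x-positions of vertical segments (image y grows
    downward). All names and the neck margin are assumptions.
    """
    left, top, right, bottom = face_box
    neck_depth = (bottom - top) / 2  # assumed extent of the neck region
    # "Neck shot": a horizontal segment crosses the band just below the face.
    neck_cut = any(bottom <= y <= bottom + neck_depth for y in h_lines)
    # "Skewer shot": a vertical segment passes through the face horizontally.
    skewer = any(left <= x <= right for x in v_lines)
    if neck_cut or skewer:
        return "Please change the composition"  # voice guide to output
    return None
```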

  FIG. 11 is a flowchart showing a processing procedure of the composition guide function [2].

  First, it is determined whether or not the release button 26 is half-pressed (step S40). If it is determined that the release button 26 is half-pressed, the composition guide [2] processing is executed.

  First, the face detection unit 154 detects the position of a person's face from the image obtained from the CCD 124, and the line segment detection unit 162 detects horizontal and vertical line segments (step S41).

  The CPU 110 determines whether there is a human face in the image to be shot based on the detection result of the face detection unit 154 (step S42). If the result of this determination is that there is no person's face in the image to be shot, the composition guide [2] processing is terminated.

  On the other hand, when a person's face is detected in the image to be photographed, it is determined from the position of the detected face and the positions of the horizontal and vertical line segments whether or not the composition is a so-called “neck shot” or “skewer shot” (step S43). If it is determined that the current composition is neither a “neck shot” nor a “skewer shot”, the composition guide [2] processing is terminated.

  On the other hand, if it is determined that the current composition is a “neck shot” or a “skewer shot”, the voice guidance process is executed (step S44): to prompt a composition change, a voice guide “Please change the composition” is output from the speaker 36.

  The photographer changes the composition according to this voice guide and adjusts the composition so that it does not become a “neck cut shot” or a “skewer shot” (see FIGS. 9B and 10B).

  Thus, by taking a picture using the composition guide function [2], it is possible to take a picture while preventing an unfavorable composition as a person picture.

  In the above embodiment, the composition change is prompted by sound output from the speaker 36, but a message prompting the composition change may instead be displayed on the monitor 32, or a warning may be given by a beep sound or LED light emission.

  In addition, shooting may be disabled when the composition is a “neck shot” or a “skewer shot”.

  As described above, with the digital camera 10 according to the present embodiment, the composition guide function [1] makes it possible to shoot with the subject's face placed at a position preferable for portrait photography, and the composition guide function [2] makes it possible to shoot while avoiding compositions that are unfavorable for portraits.

  The composition guide function [1] and the composition guide function [2] can be executed simultaneously. In this case, even if the subject's face is arranged at a position suitable for portrait photography by the composition guide function [1], a warning is issued by the composition guide function [2] if the resulting composition is a so-called “neck shot” or “skewer shot”.

  In the above embodiment, the composition guide functions [1] and [2] are turned ON/OFF on the menu screen; however, they may instead be executed automatically when, for example, the person photographing mode is set, or switched ON/OFF by a dedicated switch.

  Next, other photographing functions provided in the digital camera 10 of the present embodiment will be described.

  The digital camera 10 according to the present embodiment has an automatic photographing function that automatically performs shooting when the face of the person serving as the subject is located at a predetermined position in the image. That is, assuming for example that the areas S1 to S4 shown in FIG. 4 are set as the shooting execution areas, when a person's face is located in any of the shooting execution areas S1 to S4, the shutter is automatically released and shooting is performed.

  This automatic shooting function is set to ON / OFF by a shooting menu, for example. Further, when performing automatic photographing, the digital camera 10 is attached to a tripod or the like and fixed at an arbitrary place.

  FIG. 12 is a flowchart showing the processing procedure of the automatic photographing function.

  First, based on the image obtained from the CCD 124, the face detection unit 154 detects the face position of the person (step S50).

  The CPU 110 determines whether there is a human face in the image to be shot based on the detection result of the face detection unit 154 (step S51). If it is determined that there is no person's face in the image to be photographed, the process proceeds to step S54 to determine whether or not there is an instruction to end the automatic photographing function.

  On the other hand, when a person's face is detected in the image to be photographed, it is determined whether or not the position of the person's face belongs to one of the preset shooting execution areas (step S52). If it is determined that the detected face position (centroid coordinate P) does not belong to any of the shooting execution areas S1 to S4, as shown in FIG. 13A, the process proceeds to step S54 to determine whether or not there is an instruction to end the automatic photographing function.

  On the other hand, if it is determined that the detected face position belongs to one of the shooting execution areas S1 to S4, as shown in FIG. 13B, shooting is performed and the shot image is recorded on the storage medium 152 (step S53).

  Thereafter, it is determined whether or not there is an instruction to end the automatic photographing function (step S54). If it is determined that there is no end instruction, the process returns to step S50 and the face position of the person is detected again. On the other hand, if it is determined that there is an end instruction, the automatic photographing function is ended.

  As described above, when the automatic photographing function is executed, it is possible to automatically photograph an image having a composition preferable for photographing a person.
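
The loop of FIG. 12 (steps S50 through S54) might be sketched as follows, with an assumed per-frame face-detector interface standing in for the face detection unit 154:

```python
import math

PHI = 1.618033989  # golden ratio

def auto_shoot(frames, width, height, r):
    """Sketch of the FIG. 12 loop: each entry of `frames` is either a face
    centroid (x, y) or None; a frame index is recorded when the centroid
    falls inside one of the shooting execution areas S1-S4. The `frames`
    sequence and all names are assumptions for illustration.
    """
    xs = (width / PHI**2, width / PHI)
    ys = (height / PHI**2, height / PHI)
    areas = [(x, y) for x in xs for y in ys]  # centers of S1-S4
    recorded = []
    for i, face in enumerate(frames):         # step S50: detect face position
        if face is None:                      # step S51: no face -> keep waiting
            continue
        fx, fy = face
        if any(math.hypot(fx - px, fy - py) <= r for px, py in areas):  # step S52
            recorded.append(i)                # step S53: shoot and record
    return recorded                           # loop ends on an end instruction (S54)
```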

  In the above embodiment, the areas S1 to S4 shown in FIG. 4 (areas of radius r centered on the intersections P1 to P4 of the dividing lines L1 to L4 obtained by dividing the screen at the golden ratio) are set as the shooting execution areas, and shooting is executed when the subject's face is located in one of these areas; however, the area that triggers shooting is not limited to this.

  For example, shooting may be executed when the screen is divided left and right at the golden ratio (1.618033989:1 and 1:1.618033989) and the subject's face is positioned on one of the dividing lines (L1 and L2 in FIG. 4), or when the screen is divided up and down at the same ratio and the subject's face is positioned on one of the dividing lines (L3 and L4 in FIG. 4). Shooting may also be executed when the subject's face is located in the area S1 or S2 in FIG. 4.

  Further, the ratio of the size of the face of the subject in the image may be detected, and the shooting execution area may be changed according to the ratio.

  In the series of embodiments described above, the case where the present invention is applied to still image shooting has been described as an example; however, the present invention can be applied to moving image shooting in the same way.

  In the series of embodiments described above, the case where the present invention is applied to a digital camera has been described as an example; however, the application of the present invention is not limited to digital cameras. The present invention can be applied to any electronic device (photographing apparatus) having a photographing function, such as a digital video camera, a camera-equipped mobile phone, a camera-equipped PDA, or a camera-equipped personal computer.

Front perspective view of a digital camera to which the present invention is applied
Rear view of a digital camera to which the present invention is applied
Block diagram showing the schematic internal configuration of the digital camera
Diagram showing an example of the setting of the face placement areas
Conceptual diagram of the composition guide function [1]
Flowchart showing the processing procedure of the composition guide function [1]
Flowchart showing the procedure of the voice guidance processing
Diagram showing another example of the guide method
Conceptual diagram of the composition guide function [2]
Conceptual diagram of the composition guide function [2]
Flowchart showing the processing procedure of the composition guide function [2]
Flowchart showing the processing procedure of the automatic shooting function
Conceptual diagram of the automatic shooting function

Explanation of symbols

  DESCRIPTION OF SYMBOLS 10 ... digital camera, 12 ... camera body, 14 ... lens, 16 ... strobe, 18 ... viewfinder window, 20 ... self-timer lamp, 22 ... AF auxiliary light lamp, 24 ... strobe light control sensor, 26 ... shutter button, 28 ... power/mode switch, 30 ... mode dial, 32 ... monitor, 34 ... finder eyepiece, 36 ... speaker, 38 ... zoom button, 40 ... cross button, 42 ... MENU/OK button, 44 ... DISP button, 46 ... BACK button, 110 ... CPU, 112 ... operation unit, 114 ... bus, 116 ... ROM, 118 ... EEPROM, 120 ... memory (SDRAM), 122 ... VRAM, 124 ... solid-state imaging device (CCD), 126 ... timing generator (TG), 128 ... analog processing unit (CDS/AMP), 130 ... A/D converter, 132 ... image input control unit, 134 ... image signal processing unit, 136 ... video encoder, 138 ... character MIX unit, 140 ... AF detection unit, 142 ... AE/AWB detection unit, 144 ... aperture drive unit, 146 ... lens drive unit, 148 ... compression/decompression processing unit, 150 ... media control unit, 152 ... storage medium, 154 ... face detection unit, 156 ... audio signal processing unit, 158 ... D/A converter, 160 ... amplifier, 162 ... line segment detection unit

Claims (4)

  1. In a photographing apparatus for recording an image obtained from an imaging unit in response to an image recording instruction on a storage medium,
    Face detection means for detecting the position of a person's face from the image obtained from the imaging means;
    Means for detecting a ratio of the size of a person's face in the image obtained from the imaging means;
    Guide means for guiding the position of the person's face detected by the face detection means so that it is located at a predetermined position in the image when the ratio of the size of the person's face in the image obtained from the imaging means is equal to or less than a certain ratio; and
    An imaging apparatus comprising:
  2. In a photographing apparatus for recording an image obtained from an imaging unit in response to an image recording instruction on a storage medium,
    Face detection means for detecting the position of a person's face from the image obtained from the imaging means;
    Means for detecting a ratio of the size of a person's face in the image obtained from the imaging means;
    Means for setting a face placement area in the image according to a ratio of the size of the face of the person in the image obtained from the imaging means;
    Guide means for guiding the face position of the person detected by the face detection means so as to be located within the face placement area;
    An imaging apparatus comprising:
  3. Line segment detection means for detecting a horizontal or vertical line segment from the image obtained from the imaging means;
    Warning means for warning that the face of the person detected by the face detection means overlaps a horizontal or vertical line segment detected by the line segment detection means;
    The imaging apparatus according to claim 1, further comprising:
  4. In a photographing apparatus for recording an image obtained from an imaging unit in response to an image recording instruction on a storage medium,
    Face detection means for detecting the position of a person's face from the image obtained from the imaging means;
    Means for detecting a ratio of the size of a person's face in the image obtained from the imaging means;
    Means for setting a shooting execution area in the image according to a ratio of the size of a person's face in the image obtained from the imaging means;
    Recording instruction means for outputting the recording instruction when the position of the face of the person detected by the face detection means belongs to the shooting execution area;
    An imaging apparatus comprising:
JP2004083062A 2004-03-22 2004-03-22 Imaging device Active JP4135100B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004083062A JP4135100B2 (en) 2004-03-22 2004-03-22 Imaging device


Publications (2)

Publication Number Publication Date
JP2005269562A JP2005269562A (en) 2005-09-29
JP4135100B2 true JP4135100B2 (en) 2008-08-20

Family

ID=35093572

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004083062A Active JP4135100B2 (en) 2004-03-22 2004-03-22 Imaging device

Country Status (1)

Country Link
JP (1) JP4135100B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905721A (en) * 2012-12-25 2014-07-02 卡西欧计算机株式会社 Imaging device, imaging control method and storage medium

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4572815B2 (en) * 2005-11-18 2010-11-04 富士フイルム株式会社 Imaging apparatus and imaging method
WO2007072663A1 (en) * 2005-12-22 2007-06-28 Olympus Corporation Photographing system and photographing method
JP4427515B2 (en) 2006-01-27 2010-03-10 富士フイルム株式会社 Target image detection display control apparatus and control method thereof
JP4864502B2 (en) * 2006-03-23 2012-02-01 富士フイルム株式会社 Imaging apparatus and imaging condition guide method
JP4665933B2 (en) * 2006-07-04 2011-04-06 セイコーエプソン株式会社 Document editing support apparatus, program, and storage medium
JP2008022333A (en) * 2006-07-13 2008-01-31 Fujifilm Corp Imaging device and imaging method
JP4984773B2 (en) * 2006-09-14 2012-07-25 セイコーエプソン株式会社 Document editing apparatus and program
JP2008160280A (en) * 2006-12-21 2008-07-10 Sanyo Electric Co Ltd Imaging apparatus and automatic imaging method
JP2008186332A (en) * 2007-01-31 2008-08-14 Seiko Epson Corp Layout evaluation device, program and storage medium
JP2008191746A (en) * 2007-02-01 2008-08-21 Seiko Epson Corp Animation creation device, program and storage medium
JP4902562B2 (en) * 2007-02-07 2012-03-21 パナソニック株式会社 Imaging apparatus, image processing apparatus, control method, and program
JP4442616B2 (en) 2007-02-14 2010-03-31 セイコーエプソン株式会社 Document editing apparatus, program, and storage medium
JP2008204179A (en) * 2007-02-20 2008-09-04 Seiko Epson Corp Document evaluation device, program and storage medium
US7995106B2 (en) 2007-03-05 2011-08-09 Fujifilm Corporation Imaging apparatus with human extraction and voice analysis and control method thereof
JP2008219451A (en) * 2007-03-05 2008-09-18 Fujifilm Corp Imaging device and control method thereof
KR101387490B1 (en) * 2007-07-27 2014-04-21 엘지전자 주식회사 Mobile terminal and Method for photographing an image in thereof
JP4869270B2 (en) * 2008-03-10 2012-02-08 三洋電機株式会社 Imaging apparatus and image reproduction apparatus
JP5040760B2 (en) * 2008-03-24 2012-10-03 ソニー株式会社 Image processing apparatus, imaging apparatus, display control method, and program
JP5157647B2 (en) * 2008-05-30 2013-03-06 株式会社ニコン camera
KR101539043B1 (en) * 2008-10-31 2015-07-24 삼성전자주식회사 Image photography apparatus and method for proposing composition based person
JP2010136071A (en) * 2008-12-04 2010-06-17 Olympus Corp Image processor and electronic apparatus
JP5434104B2 (en) * 2009-01-30 2014-03-05 株式会社ニコン Electronic camera
JP5525757B2 (en) * 2009-05-18 2014-06-18 オリンパス株式会社 Image processing apparatus, electronic device, and program
JP2010278624A (en) * 2009-05-27 2010-12-09 Sony Corp Photographing device, photographing method, and program
JP5397059B2 (en) 2009-07-17 2014-01-22 ソニー株式会社 Image processing apparatus and method, program, and recording medium
KR101615290B1 (en) * 2009-08-26 2016-04-26 삼성전자주식회사 Method And System For Photographing
US8803992B2 (en) * 2010-05-12 2014-08-12 Fuji Xerox Co., Ltd. Augmented reality navigation for repeat photography and difference extraction
US9536132B2 (en) * 2011-06-24 2017-01-03 Apple Inc. Facilitating image capture and image review by visually impaired users
KR101430924B1 (en) 2013-05-14 2014-08-18 주식회사 아빅스코리아 Method for obtaining image in mobile terminal and mobile terminal using the same
KR101431651B1 (en) 2013-05-14 2014-08-22 중앙대학교 산학협력단 Apparatus and method for mobile photo shooting for a blind person
CN104869299B (en) * 2014-02-26 2019-12-24 联想(北京)有限公司 Prompting method and device
CN105120144A (en) * 2015-07-31 2015-12-02 小米科技有限责任公司 Image shooting method and device


Also Published As

Publication number Publication date
JP2005269562A (en) 2005-09-29

Similar Documents

Publication Publication Date Title
US7973848B2 (en) Method and apparatus for providing composition information in digital image processing device
JP4510713B2 (en) Digital camera
JP4245699B2 (en) Imaging device
US8294813B2 (en) Imaging device with a scene discriminator
JP4904243B2 (en) Imaging apparatus and imaging control method
JP4340358B2 (en) Image shooting device
JP4521071B2 (en) Digital still camera with composition assist function and operation control method thereof
JP2005086499A (en) Imaging apparatus
JP2004207774A (en) Digital camera
JP2006101466A (en) Digital still camera
JP4518131B2 (en) Imaging method and apparatus
JP2004208028A (en) Imaging device
JP4406937B2 (en) Imaging device
US7532235B2 (en) Photographic apparatus
US8254771B2 (en) Image taking apparatus for group photographing
US7924340B2 (en) Image-taking apparatus and image display control method
US20020171747A1 (en) Image capturing apparatus, and method of display-control thereof
US7706674B2 (en) Device and method for controlling flash
JP2006033241A (en) Image pickup device and image acquiring means
JP2011045039A (en) Compound-eye imaging apparatus
JP4340806B2 (en) Image processing apparatus, method, and program
JP4152280B2 (en) Video signal output method, video signal output device, and digital camera
JP2006025238A (en) Imaging device
KR20090125699A (en) Camera, storage medium having stored therein camera control program, and camera control method
JPH11136568A (en) Touch panel operation-type camera

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060522

A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20061214

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080130

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080328

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20080508

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20080521

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110613

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120613

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130613

Year of fee payment: 5

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
