WO2020137602A1 - Imaging device, imaging method, and program - Google Patents

Imaging device, imaging method, and program Download PDF

Info

Publication number
WO2020137602A1
WO2020137602A1 (PCT/JP2019/048877)
Authority
WO
WIPO (PCT)
Prior art keywords
area
focus
pupil
unit
frame
Prior art date
Application number
PCT/JP2019/048877
Other languages
French (fr)
Japanese (ja)
Inventor
高弘 佐藤
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Priority to US17/416,890 priority Critical patent/US11539892B2/en
Priority to JP2020563075A priority patent/JPWO2020137602A1/en
Priority to EP19904199.7A priority patent/EP3904956A4/en
Publication of WO2020137602A1 publication Critical patent/WO2020137602A1/en
Priority to US18/087,119 priority patent/US20230276120A1/en
Priority to JP2023204458A priority patent/JP2024019284A/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/34Systems for automatic generation of focusing signals using different areas in a pupil plane
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635Region indicators; Field of view indicators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions

Definitions

  • The present technology relates to an image capturing apparatus, an image capturing method, and a program, and particularly to an image capturing apparatus, an image capturing method, and a program capable of easily performing image capturing focused on a subject or a specific portion of the subject.
  • Patent Document 1 proposes an imaging device that detects a pupil region, which is a region of a person's pupil, and focuses on the detected pupil region.
  • The imaging device described in Patent Document 1 detects only a person's pupil region, and it was difficult to detect an animal's pupil region.
  • the present technology has been made in view of such a situation, and makes it possible to easily perform imaging by focusing on a subject or a specific portion of the subject.
  • The image pickup device according to one aspect of the present technology includes a display control unit that displays a notice frame giving advance notice of a specific area to be focused, on an image acquired from an image pickup unit, according to the type of subject.
  • In one aspect of the present technology, a notice frame giving advance notice of a specific area to be focused is displayed on an image acquired from an imaging unit, according to the type of subject.
  • FIG. 10 is a flowchart illustrating the image capturing process in the human pupil detection mode.
  • FIG. 11 is a flowchart, continuing from FIG. 10, illustrating the image capturing process in the human pupil detection mode.
  • FIG. 12 is a flowchart illustrating the image capturing process in the animal pupil detection mode.
  • FIG. 13 is a flowchart, continuing from FIG. 12, illustrating the image capturing process in the animal pupil detection mode.
  • FIG. 14 is a flowchart illustrating the pupil region selection process of step S114 of FIG. 12.
  • An imaging device has a detection mode for each specific region that detects a region of a specific region of a subject and uses the detected region of the specific region for focusing.
  • Examples of the detection mode for each specific part include a human pupil detection mode and an animal pupil detection mode.
  • FIG. 1 is a diagram showing an example of processing for detecting the pupil of a person.
  • FIG. 1 shows the screen displayed on the display unit of the imaging device when the human pupil detection mode is set.
  • the screen on the left side of FIG. 1 is a screen displayed before the user performs an operation for instructing to start focusing.
  • the screen on the right is a screen that is displayed after the user performs an operation to instruct the start of focusing.
  • the face of the person is displayed on the screen.
  • When the human pupil detection mode is set, as shown on the left side of FIG. 1, a face area overlapping the focus frame F is detected from among the face areas, and a face notice frame PF giving advance notice of the detected face area is displayed.
  • the face notice frame PF is displayed so as to partially overlap the focus frame F.
  • the focus frame F is a focus setting frame for setting the focus.
  • When the user performs the operation for instructing the start of focusing in this state, as shown on the right side of FIG. 1, a pupil region is detected within the face region, and a pupil frame AE indicating the detected pupil region is displayed instead of the face notice frame PF.
  • focusing is performed on the pupil region surrounded by the pupil frame AE.
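  • As a rough illustration of this two-phase behavior (face notice frame PF before the focus-start operation, pupil frame AE after it), here is a minimal Python sketch. All names (detect_face_areas, detect_pupil_area, and the frame labels) are hypothetical; the patent does not specify an implementation.

```python
# Hypothetical sketch of the FIG. 1 flow in the human pupil detection mode.

def overlaps(a, b):
    """True if rectangles a and b, each given as (x, y, w, h), overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def human_pupil_mode_frames(image, focus_frame, focus_start_requested,
                            detect_face_areas, detect_pupil_area):
    # Before the focus-start operation: a face notice frame PF is shown on
    # the face area that overlaps the focus frame F.
    faces = [f for f in detect_face_areas(image) if overlaps(f, focus_frame)]
    if not focus_start_requested:
        return [("face_notice_frame_PF", f) for f in faces]
    # After the focus-start operation: the pupil is detected inside the face
    # area and a pupil frame AE replaces PF; focusing targets this region.
    frames = []
    for f in faces:
        pupil = detect_pupil_area(image, within=f)
        frames.append(("pupil_frame_AE", pupil) if pupil else ("face_frame", f))
    return frames
```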
  • FIG. 2 is a diagram showing an example of processing for detecting the pupil of an animal.
  • FIG. 2 shows a screen displayed on the display unit of the imaging device when the animal pupil detection mode is set.
  • the screen on the left side of FIG. 2 is a screen displayed before the user performs an operation for instructing to start focusing.
  • the screen on the right is a screen that is displayed after the user performs an operation to instruct the start of focusing.
  • the face of an animal (cat) is displayed on the screen.
  • When the animal pupil detection mode is set, as shown on the left side of FIG. 2, the pupil region is detected both inside and outside the focus frame F, and a pupil notice frame PE giving advance notice of the detected pupil region is displayed. At this time, the detection of the pupil region is performed giving priority to the inside of the focus frame F.
  • When the user performs the operation for instructing the start of focusing, a pupil frame AE indicating the pupil region is displayed in place of the pupil notice frame PE, as shown on the right side of FIG. 2.
  • focusing is performed on the pupil region surrounded by the pupil frame AE.
  • The pupil frame AE is displayed in a manner different from the pupil notice frame PE; for example, the pupil notice frame PE is displayed as a white frame and the pupil frame AE as a green frame. The same applies to the face notice frame PF and the pupil frame AE.
  • In the human pupil detection mode, the face area is detected before the user performs the operation for instructing the start of focusing, and the face notice frame PF is displayed on the detected face area.
  • In the animal pupil detection mode, the pupil area is detected before the user performs the operation for instructing the start of focusing, and the pupil notice frame PE is displayed on the detected pupil area.
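  • As a small illustration of the display distinction just described, the sketch below maps each frame type to a display style. The white/green colors follow the example in the text; the drawing call is a hypothetical display API, not one named in the patent.

```python
# Notice frames (PF, PE) and the in-focus pupil frame AE use different styles.
FRAME_STYLES = {
    "face_notice_frame_PF": "white",   # advance notice of the face area
    "pupil_notice_frame_PE": "white",  # advance notice of the pupil area
    "pupil_frame_AE": "green",         # shown once focusing is performed
}

def draw_frame(display, kind, rect):
    # display.draw_rect is a hypothetical drawing primitive.
    display.draw_rect(rect, color=FRAME_STYLES[kind])
```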
  • 3 and 4 are diagrams showing a focusing start instruction method when the animal pupil detection mode is set.
  • The start of focusing can be instructed by the user performing an operation such as pressing the pupil AF (autofocus) button P, pressing the AF-ON button Q, or half-pressing the shutter button R.
  • the pupil AF button P is configured as, for example, a central button located at the center of a cross button provided on the back surface of the imaging device.
  • the pupil AF button P is a dedicated button for instructing to start focusing on the pupil, which is a specific part of the subject.
  • On the display unit of the imaging device, the pupil notice frame is displayed in the pupil region detected in the image of the subject.
  • the focus frame F and the pupil frame AE are displayed on the screen in an overlapping manner with the image of the cat facing forward.
  • the focus frame F is displayed in the center of the screen near the back of the cat.
  • A range of up to about 50% of the screen, with the center of the focus frame F as a reference, is set as the pupil region detection range W1.
  • the detection range W1 is not actually displayed.
  • the focus frame F itself may be set in a wide range such as the detection range W1.
  • When the pupil AF button P is pressed, a pupil within the detection range W1 is detected, and the pupil frame AE is displayed in the detected pupil region.
  • the AF-ON button Q is provided, for example, on the upper part of the back surface of the imaging device, as shown on the left side of FIG.
  • the AF-ON button Q is a button for instructing to start focusing on the pupil in the focus frame F.
  • the shutter button R is provided, for example, on the upper surface of the imaging device, as shown on the left side of FIG.
  • When half-pressed by the user, the shutter button R, like the AF-ON button Q, instructs the start of focusing on the pupil in the focus frame F; when fully pressed by the user, it instructs the shutter release.
  • On the display unit of the imaging device, the pupil notice frame is displayed in the pupil region detected in the image of the subject.
  • a focus frame F and a pupil frame AE are displayed on the screen, overlaid on the image of the cat facing forward and the dog lying next to the cat.
  • the focus frame F is displayed in the left eye of the cat.
  • the range from the center of the focus frame F to the vicinity of the focus frame F is set as the pupil area detection range W2.
  • the detection range W2 is not actually displayed.
  • Before the AF-ON button Q is pressed, the pupil notice frame is displayed on the detected pupil area.
  • When the AF-ON button Q is pressed, the pupil frame AE is displayed instead of the pupil notice frame.
  • As described above, the imaging device displays the pupil notice frame indicating the pupil region before the user performs the operation for instructing the start of focusing. When the operation is performed, the pupil frame AE indicating the pupil region is displayed, and focusing is performed on that pupil region.
  • The positions at which the buttons are installed on the imaging device are not limited to those shown in FIGS. 3 and 4; other positions may be used.
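  • The detection ranges W1 and W2 described for FIGS. 3 and 4 could be derived from the focus frame as in the sketch below. The 50% figure comes from the text; the exact geometry and the vicinity margin for W2 are assumptions.

```python
# Hypothetical derivation of the pupil-detection range from the operation.

def pupil_detection_range(screen_w, screen_h, focus_frame, operation):
    fx, fy, fw, fh = focus_frame
    cx, cy = fx + fw / 2, fy + fh / 2          # center of focus frame F
    if operation == "pupil_af_button":         # range W1: ~50% of the screen
        w, h = screen_w * 0.5, screen_h * 0.5
    else:                                      # "af_on" / "shutter_half": W2
        margin = 1.5                           # vicinity of the focus frame
        w, h = fw * margin, fh * margin
    # Neither W1 nor W2 is actually displayed on the screen.
    return (cx - w / 2, cy - h / 2, w, h)
```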
  • FIG. 5 is a block diagram showing a main configuration example of the image pickup apparatus.
  • The image pickup apparatus 100 shown in FIG. 5 has a detection mode for each specific part of the subject, including the human pupil detection mode and the animal pupil detection mode. Note that the image pickup apparatus 100 may be provided with detection modes for other subjects and other specific parts, without being limited to human or animal pupils. The user can select and set a desired detection mode from among the detection modes for subjects and for specific parts of subjects.
  • the imaging device 100 is configured to include a lens 101, a diaphragm 102, an imaging element 103, an analog signal processing unit 104, an A/D conversion unit 105, and a digital signal processing unit 106.
  • the image pickup apparatus 100 is configured to include a lens driver 121, a TG (Timing Generator) 122, a gyro 123, and a system controller 131.
  • the image pickup apparatus 100 is also configured to include a display unit 141, a storage unit 142, an input unit 143, an output unit 144, a communication unit 145, an operation unit 146, and a drive 147.
  • the lens 101 adjusts the focus to the subject and collects the light from the in-focus position.
  • the diaphragm 102 adjusts the exposure.
  • the image sensor 103 captures an image of a subject to obtain a captured image. That is, the image sensor 103 photoelectrically converts light from the subject and outputs it as an image signal to the analog signal processing unit 104.
  • the image sensor 103 can capture a still image or a moving image by such photoelectric conversion.
  • the analog signal processing unit 104 performs analog signal processing on the image signal obtained by the image sensor 103.
  • the A/D converter 105 performs A/D conversion on the image signal subjected to the analog signal processing to obtain image data which is a digital signal.
  • the digital signal processing unit 106 performs digital signal processing on the image data obtained by the A/D conversion unit 105.
  • the digital signal processing unit 106 performs, as digital signal processing, at least processing of detecting a subject or a region of a specific portion of the subject from a moving image supplied as image data, and setting a focus region.
  • the specific part of the subject will be simply referred to as the specific part.
  • the digital signal processing unit 106 also performs processing such as controlling the display of a frame or the like indicating the region of the subject or the specific part based on the detection result of the region of the subject or the specific part. Details of these processes will be described later.
  • the digital signal processing unit 106 may perform color mixture correction, black level correction, white balance adjustment, demosaic processing, matrix processing, gamma correction, and YC conversion as digital signal processing. Further, the digital signal processing unit 106 may perform codec processing, which is processing relating to encoding and decoding of image data, as digital signal processing.
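  • The chain of operations listed above could be applied in the conventional raw-development order sketched below. The identity stubs are placeholders; the patent names the stages but does not specify their implementations.

```python
def _identity(image):
    # Placeholder for a stage whose implementation the text leaves open.
    return image

color_mixture_correction = black_level_correction = _identity
white_balance_adjustment = demosaic = matrix_processing = _identity
gamma_correction = yc_conversion = _identity

def digital_signal_processing(image):
    # Apply the stages in a typical order for raw development.
    for stage in (color_mixture_correction, black_level_correction,
                  white_balance_adjustment, demosaic, matrix_processing,
                  gamma_correction, yc_conversion):
        image = stage(image)
    return image
```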
  • the lens driver 121 drives the lens 101 and the diaphragm 102 to control the focal length or exposure.
  • the TG 122 drives the image sensor 103 by generating a synchronization signal and supplying it to the image sensor 103, and controls image capturing.
  • the gyro 123 is a sensor that detects the position and orientation of the imaging device 100. The gyro 123 outputs the detected sensor information to the A/D conversion unit 105.
  • The system controller 131 is composed of, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like, and controls each processing unit. Further, the system controller 131 receives an operation input by the user based on the signal supplied from the operation unit 146, and performs processing or control corresponding to the operation input.
  • the system controller 131 can control the focal length or the exposure based on the detection result of the subject or the region of the specific portion supplied from the digital signal processing unit 106.
  • the display unit 141 is configured as, for example, a liquid crystal display or the like, and displays an image corresponding to the image data stored in the memory of the digital signal processing unit 106.
  • the display unit 141 can display a captured image obtained by the image sensor 103, a stored captured image, and the like.
  • the storage unit 142 stores the image data stored in the memory of the digital signal processing unit 106. At that time, the storage unit 142 stores the encoded data encoded by the digital signal processing unit 106 in order to reduce the data amount.
  • the encoded data stored in the storage unit 142 is read by the digital signal processing unit 106, decoded, and displayed on the display unit 141, for example.
  • the input unit 143 has an external input interface such as an external input terminal, and various data (for example, image data or encoded data) supplied from outside the imaging device 100 via the external input interface is input to the digital signal processing unit 106. Output.
  • the output unit 144 has an external output interface such as an external output terminal, and outputs various data supplied via the digital signal processing unit 106 to the outside of the imaging device 100 via the external output interface.
  • the communication unit 145 performs predetermined communication, which is at least one of wired communication and wireless communication, with other devices, and exchanges data with other devices via the predetermined communication. For example, the communication unit 145 outputs various data (for example, image data and encoded data) supplied from the digital signal processing unit 106 to another device via predetermined communication. The communication unit 145 also acquires various data from another device via predetermined communication, and outputs the acquired data to the digital signal processing unit 106.
  • the operation unit 146 is configured by an arbitrary input device such as a key, a button, or a touch panel.
  • the operation part 146 includes the pupil AF button P, the AF-ON button Q, or the shutter button R described above with reference to FIG. 3 or 4.
  • the operation unit 146 receives an operation input from the user and outputs a signal corresponding to the operation input to the system controller 131.
  • the drive 147 reads out information (programs, data, etc.) stored in a removable recording medium 148 such as a semiconductor memory mounted on itself.
  • the drive 147 supplies the information read from the removable recording medium 148 to the system controller 131. Further, the drive 147 causes the removable recording medium 148 to store information (image data, encoded data, etc.) supplied via the system controller 131 when the writable removable recording medium 148 is attached to itself.
  • The lens 101, the diaphragm 102, and the lens driver 121 described above may be formed as an interchangeable lens 151 that is detachable (replaceable) from the image pickup apparatus 100 and housed in a case separate from the image pickup apparatus 100.
  • FIG. 6 is a block diagram showing a configuration example of the digital signal processing unit 106.
  • the digital signal processing unit 106 has a memory 211, a subject detection unit 212, a region setting unit 213, a display control unit 214, and a codec processing unit 215.
  • the memory 211 stores the image data supplied from the A/D conversion unit 105.
  • the image data is, for example, image data of each frame of a moving image or image data of a still image.
  • the subject detection unit 212 detects a subject or a region of a specific part from the image data stored in the memory 211 based on a signal corresponding to a user's operation input supplied from the system controller 131.
  • the subject detection unit 212 outputs the detection result of the region of the subject or the specific part to the region setting unit 213 and the display control unit 214.
  • the subject detection unit 212 includes a person detection unit 212-1, an animal detection unit 212-2, and an animal detection unit 212-3.
  • When the detection mode for each specific part of the subject is the human pupil detection mode, the person detection unit 212-1 detects a person's face area and outputs the detection result of the detected face area to the area setting unit 213 and the display control unit 214.
  • When the user performs the operation for instructing the start of focusing, the person detection unit 212-1 detects the person's pupil area based on the detection result of the face area, and outputs the detection result of the pupil area to the area setting unit 213 and the display control unit 214.
  • the animal detection units 212-2 and 212-3 differ in the types of animals to be detected.
  • When the detection mode for each specific part of the subject is the animal pupil detection mode, the animal detection units 212-2 and 212-3 detect the pupil region of the target animal and output the detection result of the detected pupil region to the area setting unit 213 and the display control unit 214.
  • When the user performs the operation for instructing the start of focusing, the animal detection units 212-2 and 212-3 detect the animal's pupil according to the focus frame and output the detection result of the detected pupil area to the area setting unit 213 and the display control unit 214.
  • the animal detection unit 212-2 detects the pupil area of animals such as dogs and cats.
  • The animal detection unit 212-3 detects the pupil region of animals such as lizards and frogs. In addition to the animal detection units 212-2 and 212-3, other animal detection units may be provided, for example, grouped by types of animals that share the same characteristics at the time of detection.
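  • The grouping of detection units by species could be expressed as a simple dispatch table, as in the sketch below. The species-to-unit mapping follows the examples in the text; registering further units for other animal groups is an assumption.

```python
# Hypothetical dispatch from subject type to the detection unit of FIG. 6.
DETECTION_UNITS = {
    "person": "person_detection_unit_212_1",
    "cat":    "animal_detection_unit_212_2",  # cats and dogs share a unit
    "dog":    "animal_detection_unit_212_2",
    "lizard": "animal_detection_unit_212_3",  # lizards and frogs share one
    "frog":   "animal_detection_unit_212_3",
}

def detection_unit_for(subject_type):
    # Animals with the same characteristics at detection time share a unit;
    # a KeyError means no unit is provided for this subject type.
    return DETECTION_UNITS[subject_type]
```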
  • The area setting unit 213 sets either the region of the specific part of the subject detected by the subject detection unit 212 or the region indicated by the focus frame as the focusing area, according to the detection mode for each specific part of the subject.
  • the area setting unit 213 supplies information on the set focusing area to the system controller 131.
  • the display control unit 214 generates a focus frame according to a signal corresponding to a user's operation input supplied from the system controller 131, and superimposes the focus frame on the image from the memory 211 to display the focus frame on the display unit 141.
  • Information on the focus frame is output to the subject detection unit 212.
  • Based on a signal corresponding to the user's operation input supplied from the system controller 131, the display control unit 214 generates a predetermined frame (face frame, notice frame, or pupil frame) corresponding to the face or pupil region detected by the subject detection unit 212.
  • the display control unit 214 superimposes the generated predetermined frame on the image from the memory 211 and causes the display unit 141 to display it. Information on the face frame, the notice frame, or the pupil frame is output to the subject detection unit 212 as necessary.
  • The display control unit 214 also generates an image of a GUI (Graphical User Interface), such as a menu, button, or cursor, and displays it together with the captured image and the like.
  • the codec processing unit 215 performs processing relating to encoding and decoding of image data of moving images and still images stored in the memory 211.
  • FIG. 7 is a diagram showing a selection method when a plurality of pupils are detected in the human pupil detection mode.
  • FIG. 7A shows a screen displayed before the user performs an operation to instruct the start of focusing.
  • FIG. 7B shows a screen that is displayed after the user performs an operation to instruct the start of focusing.
  • In FIG. 7A, a face area overlapping the focus frame F is detected from among the face areas, and a face notice frame PF indicating the detected face area is displayed.
  • In the human pupil detection mode, the pupil region located in front (nearer to the imaging device 100) is detected. Therefore, when the left pupil is in front, the pupil frame AE is displayed on the left pupil even if the user places the focus frame F on the right pupil, as shown on the left side of FIG. 7B. Conversely, when the right pupil is in front, the pupil frame AE is displayed on the right pupil regardless of whether the user places the focus frame F on it, as shown on the right side of FIG. 7B.
  • By selecting a face, the user can focus on the front pupil in the selected face area, regardless of the position of the focus frame.
  • FIG. 8 is a diagram showing a selection method when a plurality of pupils are detected in the animal pupil detection mode.
  • FIG. 8A shows a screen that is displayed before the user performs an operation for instructing to start focusing.
  • FIG. 8B shows a screen displayed after the user performs an operation to instruct the start of focusing.
  • In the animal pupil detection mode, as shown in FIG. 8A, the pupil region is detected both inside and outside the focus frame F, and a pupil notice frame PE giving advance notice of the detected pupil region is displayed. At this time, the detection of the pupil region is performed giving priority to the inside of the focus frame F.
  • The pupil notice frame PE is displayed in the pupil region that is located in front (nearer to the imaging device 100) and close to the center (center position) of the focus frame F. The details of the pupil region selection will be described later with reference to FIG. 14.
  • When the user performs the operation for instructing the start of focusing, as shown in FIG. 8B, the pupil frame AE is displayed in the selected pupil region, and focusing is performed on the pupil region indicated by the pupil frame AE.
  • FIG. 9 is a diagram showing the relationship between the orientation of the subject's face and the number of pupil regions.
  • FIG. 9 shows images P1 to P7 with the fox as a subject.
  • the foxes shown in images P1 to P7 have different face orientations (angles).
  • the solid line rectangle indicates the front pupil region detected in each image, and the broken line rectangle indicates the back pupil region detected in each image.
  • Image P1 shows a fox with its face turned to the left.
  • Image P2 shows a fox with its face turned diagonally to the front left.
  • Image P3 shows a fox with its face turned slightly diagonally to the front left.
  • Image P4 shows a fox with its face turned to the front.
  • Image P5 shows a fox with its face turned slightly diagonally to the front right.
  • Image P6 shows a fox with its face turned diagonally to the front right.
  • Image P7 shows a fox with its face turned to the right.
  • Images P1, P2, P6, and P7, in which the fox's face is turned diagonally forward or sideways (left or right), each show a case where only the pupil region in front (nearer to the imaging device 100) is detected.
  • In each of images P3 and P5, in which the fox's face is turned slightly diagonally forward, two pupil regions are detected, and it is easy to determine which pupil region is in front and which is behind.
  • the notice frame or the pupil frame is displayed in the pupil region located in front of the imaging device 100 and close to the center of the focus frame.
  • FIG. 10 is a flowchart illustrating an image capturing process in the human pupil detection mode of the image capturing apparatus 100.
  • The imaging process in the human pupil detection mode of FIG. 10 is started, for example, when the power is turned on by operating the power button.
  • the detection mode for each specific part of the subject is preset as a human pupil detection mode from a setting screen or the like.
  • step S11 of FIG. 10 the system controller 131 determines whether to end the process, for example, whether the power button has been operated.
  • step S11 If it is determined in step S11 to end the process, the imaging process ends.
  • step S11 If it is determined in step S11 that the process is not completed, the process proceeds to step S12.
  • step S12 the image sensor 103 acquires the electrical signal of each pixel of the image by photoelectrically converting the light from the subject condensed through the lens 101 and the diaphragm 102 on a pixel-by-pixel basis.
  • the image signal which is an electric signal of each pixel of the image, is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.
  • step S13 the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image.
  • step S14 the person detection unit 212-1 detects a face area from the image data stored in the memory 211.
  • the person detection unit 212-1 supplies the detected face area information to the area setting unit 213 and the display control unit 214.
  • The user gives an instruction to start focusing by pressing the pupil AF button, pressing the AF-ON button, or half-pressing the shutter button. Note that the instruction to start focusing is given for each imaging unit of the image.
  • the operation unit 146 receives an operation input from the user and outputs a signal corresponding to the operation input to the system controller 131.
  • In step S15, the system controller 131 determines whether or not the user has pressed the pupil AF button. If it is determined in step S15 that the pupil AF button has been pressed, the process proceeds to step S16.
  • step S16 the system controller 131 forcibly changes the focus frame to a wide range.
  • step S17 the person detection unit 212-1 detects the pupil area for the face area in the focus frame under the control of the system controller 131.
  • Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.
  • step S18 the area setting unit 213 determines whether a pupil area has been detected. When it is determined in step S18 that the pupil area has been detected, the process proceeds to step S19.
  • step S19 the area setting unit 213 sets the pupil area detected by the person detection unit 212-1 as the focus area. Information on the set focus area is supplied to the system controller 131.
  • step S18 If it is determined in step S18 that the pupil region has not been detected, the process proceeds to step S20.
  • step S20 the area setting unit 213 sets the face area detected by the person detecting unit 212-1 as the focus area. Information on the set focus area is output to the system controller 131.
  • On the other hand, if it is determined in step S15 that the pupil AF button has not been pressed, the process proceeds to step S21.
  • step S21 the system controller 131 determines whether the user has half-pressed the shutter button or whether the AF-ON button has been pressed.
  • step S21 If it is determined in step S21 that the shutter button has been half-pressed or the AF-ON button has been pressed, the process proceeds to step S22.
  • step S22 the system controller 131 does not change the focus frame.
  • step S23 the person detection unit 212-1 detects the pupil area for the face area in the focus frame under the control of the system controller 131.
  • Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.
  • step S24 the area setting unit 213 determines whether a pupil area has been detected. When it is determined in step S24 that the pupil area has been detected, the process proceeds to step S25.
  • step S25 the area setting unit 213 sets the pupil area detected by the person detecting unit 212-1 as the focus area. Information on the set focus area is supplied to the system controller 131.
  • step S24 If it is determined in step S24 that the pupil area has not been detected, the process proceeds to step S26.
  • In step S26, the area setting unit 213 sets the face area detected by the person detection unit 212-1, or the focus frame, as the focusing area. Information on the set focusing area is output to the system controller 131.
  • If it is determined in step S21 that the shutter button has not been half-pressed and the AF-ON button has not been pressed, the process proceeds to step S27.
  • step S27 the area setting unit 213 sets the face area detected by the person detecting unit 212-1 as the focus area. Information on the set focus area is output to the system controller 131.
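  • The branching of steps S15 to S27 can be summarized as in the sketch below: the focusing area falls back from pupil to face (or to the focus frame) depending on the operation and the detection result. The helper names and the widening factor are assumptions, not taken from the patent.

```python
# Hypothetical summary of the focus-area decision in steps S15-S27 (FIG. 10).

def resolve_focus_area(operation, focus_frame, detect_pupil_in, face_area):
    if operation == "pupil_af_button":             # S15 yes
        wide = widen(focus_frame)                  # S16: forced wide range
        pupil = detect_pupil_in(wide)              # S17
        return pupil or face_area                  # S19 / S20
    if operation in ("shutter_half", "af_on"):     # S21 yes
        pupil = detect_pupil_in(focus_frame)       # S22-S23: frame unchanged
        return pupil or face_area or focus_frame   # S25 / S26
    return face_area                               # S27: no focus instruction

def widen(frame):
    x, y, w, h = frame
    return (x - w / 2, y - h / 2, w * 2, h * 2)    # illustrative widening
```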
  • step S28 the display control unit 214 generates a face notice frame in the face area detected by the person detection unit 212-1 and superimposes the face notice frame on the live view image to display it on the display unit 141.
  • In step S29, the system controller 131 controls the lens driver 121 to drive the optical system, such as the lens 101 and the diaphragm 102, so that the focusing area comes into focus. Then, the process proceeds to step S30.
  • After step S27, the process proceeds to step S30 in FIG. 11.
  • step S30 the system controller 131 determines whether or not focus is achieved.
  • step S30 If it is determined in step S30 that the subject is in focus, the process proceeds to step S31.
  • In step S31, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image. Further, the display control unit 214 superimposes a frame surrounding the set focusing region (the pupil frame, face frame, or focus frame) on the live view image and displays it on the display unit 141.
  • step S30 If it is determined in step S30 that the subject is out of focus, step S31 is skipped and the process proceeds to step S32.
  • After step S29 in FIG. 10, the process proceeds to step S32.
  • step S32 the system controller 131 determines whether or not the shutter button has been fully pressed based on a signal corresponding to an operation input from the operation unit 146. If it is determined in step S32 that the shutter button has been fully pressed, the process proceeds to step S33.
  • step S33 the image sensor 103 photoelectrically converts the light from the subject condensed through the optical system such as the lens 101 and the diaphragm 102 on a pixel-by-pixel basis to acquire an electric signal of each pixel of the image.
  • the image signal which is an electric signal of each pixel of the image, is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.
  • step S34 the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a captured image.
  • step S35 the codec processing unit 215 encodes the image data stored in the memory 211.
  • the codec processing unit 215 supplies the encoded image data to the storage unit 142.
  • step S36 the codec processing unit 215 causes the storage unit 142 to record the encoded image data. Then, the process returns to step S11, and the subsequent processes are repeated.
  • step S32 when it is determined in step S32 that the shutter button has not been fully pressed, the process returns to step S11, and the subsequent processing is repeated.
  • FIG. 12 is a flowchart illustrating the imaging process in the animal pupil detection mode of the image pickup apparatus 100.
  • the imaging processing in the animal pupil detection mode of FIG. 12 is started when the power is turned on by operating the power button, for example.
  • It is assumed that the detection mode for each specific part of the subject is preset to the animal pupil detection mode from a setting screen or the like.
  • In FIG. 12, an example in which the animal detection unit 212-2, which detects the pupils of cats and dogs, performs the pupil detection will be described.
  • step S111 of FIG. 12 the system controller 131 determines whether to end the process, for example, whether the power button has been operated.
  • step S111 If it is determined in step S111 to end the process, the imaging process ends.
  • step S111 If it is determined in step S111 that the process is not completed, the process proceeds to step S112.
  • step S112 the image sensor 103 acquires an electrical signal of each pixel of the image by photoelectrically converting the light from the subject condensed through the lens 101 and the diaphragm 102 on a pixel-by-pixel basis.
  • the image signal which is an electric signal of each pixel of the image, is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.
  • step S113 the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image.
  • step S114 the animal detection unit 212-2 detects the pupil area from the image data stored in the memory 211.
  • the animal detection unit 212-2 supplies the detected pupil area information to the area setting unit 213 and the display control unit 214.
  • In step S115, the display control unit 214 performs a selection process on the pupil regions detected by the animal detection unit 212-2.
  • In the pupil region selection process, the pupil region on which the notice frame is to be displayed is selected from among the plurality of detected pupil regions.
  • In step S116, the system controller 131 determines whether or not the user has pressed the pupil AF button.
  • If it is determined in step S116 that the pupil AF button has been pressed, the process proceeds to step S117.
  • step S117 the system controller 131 forcibly changes the focus frame to a wide range.
  • In step S118, the animal detection unit 212-2 detects a pupil region within the focus frame that has been changed to the wide range, under the control of the system controller 131. Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.
  • step S119 the area setting unit 213 determines whether a pupil area has been detected. If it is determined in step S119 that the pupil region has been detected, the process proceeds to step S120.
  • step S120 the area setting unit 213 sets the pupil area detected by the animal detection unit 212-2 as the focus area. Information on the set focus area is supplied to the system controller 131.
  • If it is determined in step S116 that the pupil AF button has not been pressed, the process proceeds to step S121.
  • step S121 the system controller 131 determines whether the user has half-pressed the shutter button or whether the AF-ON button has been pressed.
  • step S121 If it is determined in step S121 that the shutter button has been half-pressed or the AF-ON button has been pressed, the process proceeds to step S122.
  • step S122 the system controller 131 does not change the focus frame.
  • step S123 the animal detection unit 212-2 detects the pupil area in the focus frame under the control of the system controller 131. Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.
  • step S124 the area setting unit 213 determines whether a pupil area has been detected. When it is determined in step S124 that the pupil region has been detected, the process proceeds to step S125.
  • step S125 the area setting unit 213 sets the pupil area detected by the animal detection unit 212-2 as the focus area. Information on the set focus area is supplied to the system controller 131.
  • If it is determined in step S124 that the pupil area has not been detected, the process proceeds to step S126.
  • In step S126, the area setting unit 213 sets the focus frame as the focusing area as an alternative condition. Information on the set focusing area is output to the system controller 131.
  • step S121 If it is determined in step S121 that the shutter button has not been half pressed and the AF-ON button has not been pressed, the process proceeds to step S127.
  • step S127 the area setting unit 213 determines whether a pupil area has been detected. When it is determined in step S127 that the pupil region has been detected, the process proceeds to step S128.
  • step S128, the area setting unit 213 sets the pupil area detected by the animal detection unit 212-2 as the focus area. Information on the set focus area is supplied to the system controller 131.
  • In step S129, the display control unit 214 generates a pupil notice frame in the pupil region detected by the animal detection unit 212-2, superimposes the pupil notice frame on the live view image, and causes the display unit 141 to display it.
  • In step S130, the system controller 131 controls the lens driver 121 to drive the optical system, such as the lens 101 and the diaphragm 102, so that the focusing area comes into focus. Then, the process proceeds to step S131.
  • After step S126 of FIG. 12, the process proceeds to step S131.
  • step S131 the system controller 131 determines whether or not focus is achieved.
  • step S131 If it is determined in step S131 that the subject is in focus, the process proceeds to step S132.
  • step S132 the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image.
  • Further, the display control unit 214 superimposes a frame surrounding the set focusing region (the pupil frame or the focus frame) on the live view image and causes the display unit 141 to display it.
  • If it is determined in step S131 that the subject is not in focus, step S132 is skipped and the process proceeds to step S133.
  • If it is determined in step S119 or S127 of FIG. 12 that the pupil region has not been detected, focusing is not performed, and the process proceeds to step S133 of FIG. 13. Likewise, after step S129, the process proceeds to step S133.
  • step S133 the system controller 131 determines whether or not the shutter button has been fully pressed based on a signal corresponding to an operation input from the operation unit 146. If it is determined in step S133 that the shutter button has been fully pressed, the process proceeds to step S134.
  • step S134 the image sensor 103 photoelectrically converts the light from the subject condensed through the optical system such as the lens 101 and the diaphragm 102 in pixel units to acquire the electric signal of each pixel of the image.
  • the image signal which is an electric signal of each pixel of the image, is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.
  • step S135 the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a captured image.
  • step S136 the codec processing unit 215 encodes the image data stored in the memory 211.
  • the codec processing unit 215 supplies the encoded image data to the storage unit 142.
  • step S137 the codec processing unit 215 causes the storage unit 142 to record the encoded image data. After that, the process returns to step S111, and the subsequent processes are repeated.
  • If it is determined in step S133 that the shutter button has not been fully pressed, the process returns to step S111, and the subsequent processing is repeated.
  • As described above, when the pupil region is not detected, the focusing process is either not executed or is executed under another condition, such as on the focus frame.
  • The operation performed when the pupil region is not detected may be switched according to a setting made by the user. In that case, the operation performed when no region is detected may be changed according to the target subject or the specific part that was not detected.
  • For example, by setting the focusing process so that it is not performed when no region is detected, it is possible to prevent the focusing accuracy from deteriorating further when the target subject or specific part moves little. Conversely, by setting the focusing process to be performed at a position designated by the user when no region is detected, focusing can be performed in advance when the movement of the target subject or specific part is large.
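  • A minimal sketch of this selectable behavior, assuming hypothetical setting names, could look as follows.

```python
# Hypothetical handling of the case where no pupil region is detected.

def on_region_not_detected(setting, designated_position=None):
    if setting == "skip_focusing":
        # Near-static subject: do nothing, preserving the current focus.
        return None
    if setting == "focus_designated_position":
        # Fast-moving subject: focus in advance at a user-designated position.
        return designated_position
    raise ValueError(f"unknown setting: {setting!r}")
```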
  • FIG. 14 is a flowchart illustrating the pupil region selection process of step S114 of FIG. 12.
  • In step S151, the display control unit 214 determines whether or not there are two or more pupil regions in the focus frame. For example, when two animals are positioned facing the imaging device 100, anywhere from one to four pupils may be detected.
  • If it is determined in step S151 that there are two or more pupil regions in the focus frame, the process proceeds to step S152.
  • step S152 the display control unit 214 determines whether or not there are two or more front pupil regions.
  • step S152 If it is determined in step S152 that there are two or more front pupil regions, the process proceeds to step S153.
  • step S153 the display control unit 214 calculates the distance between each pupil region and the center of the focus frame.
  • step S154 the display control unit 214 selects the pupil area having the shortest distance from the center of the focus frame.
  • On the other hand, if it is determined in step S151 that there are not two or more detected pupil regions in the focus frame, the process proceeds to step S155.
  • In step S155, the display control unit 214 selects the single detected pupil region.
  • If it is determined in step S152 that there are not two or more front pupil regions, the process proceeds to step S156.
  • step S156 the display control unit 214 selects the front pupil region.
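  • Steps S151 to S156 amount to the small selection routine sketched below. The representation of a detected pupil region (a center point plus a front/back flag) is an assumption; the distance criterion follows steps S153 and S154.

```python
import math

# Hypothetical sketch of the pupil region selection process of FIG. 14.

def select_pupil_region(pupils, focus_frame):
    """pupils: list of dicts like {"center": (x, y), "front": True}."""
    if len(pupils) < 2:                        # S151 no -> S155
        return pupils[0] if pupils else None   # select the single region
    front = [p for p in pupils if p["front"]]
    if len(front) < 2:                         # S152 no -> S156
        return front[0] if front else pupils[0]
    fx, fy, fw, fh = focus_frame               # S153: distance to the center
    center = (fx + fw / 2, fy + fh / 2)
    return min(front, key=lambda p: math.dist(p["center"], center))  # S154
```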
  • a notice frame for giving notice of a specific area to be focused is displayed on the image acquired from the imaging unit according to the type of subject.
  • In automatic focusing, the result may be unintended by the user, depending on the condition of the subject.
  • In the present technology, by displaying the notice frame, the user can know in advance whether an intended or unintended position has been detected, and can therefore choose not to perform automatic focusing.
  • Further, the user can easily focus on a specific part such as the pupil region, or on the animal itself, according to the animal that is the type of the subject.
  • In the above, the processing of detecting the pupils of animals such as dogs and cats has been described, but the present technology can be applied to a specific part of a subject, such as the eyes, face, part of the face, neck, head, or whole body, of any living thing such as a bird, fish, reptile, or amphibian, as well as to the subject itself. The present technology can also be applied to combinations of these subjects or their specific parts.
  • the present technology can be applied not only to living things, but also to specific parts of a subject such as a vehicle headlight, a front emblem, a windshield, or a driver's seat, or a motorcycle headlight or a helmet.
  • a detection mode for detecting a specific part of the subject is preset and used.
  • By presetting the detection mode, the user can tell the imaging device his or her intention, such as which of a plurality of detection results or detection methods, or which of a plurality of subjects, should be prioritized.
  • Depending on the subject, the hair over the eyes may be brought into focus instead of the eyes.
  • In such a case, the focus position may be adjusted backward, or a subject whose hair is likely to be brought into focus may be preset in the imaging device so that the imaging device performs control based on the setting; in this way, a suitable imaging result can be obtained.
  • the human eyes and the animal eyes are detected by different detection processes, but the human eyes may be detected by the same detection process as the animal eyes.
  • the series of processes described above can be executed by hardware or software.
  • a program forming the software is installed from a network or a recording medium.
  • This recording medium is, for example, a removable recording medium 148 on which the program is recorded and which is distributed separately from the apparatus main body in order to deliver the program to the user.
  • the removable recording medium 148 includes a magnetic disk (including a flexible disk) and an optical disk (including a CD-ROM and DVD). It also includes magneto-optical disks (including MD (Mini Disc)) and semiconductor memory.
  • the program can be installed in the storage unit 142 by mounting the removable recording medium 148 in the drive 147.
  • this program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be received via the communication unit 145 and installed in the storage unit 142.
  • this program can be installed in advance in a storage unit 142 or a ROM (Read Only Memory) in the system controller 131.
  • The program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at a necessary timing, such as when a call is made.
  • In this specification, a system means a set of a plurality of constituent elements (devices, modules (parts), etc.), regardless of whether all the constituent elements are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules in one housing, are both systems.
  • For example, the present technology can have a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
  • each step described in the above flow chart can be executed by one device or shared by a plurality of devices.
  • When one step includes a plurality of processes, the plurality of processes included in that step can be executed by one device or shared by a plurality of devices.
  • (1) An image pickup apparatus including a display control unit that displays a notice frame giving advance notice of a specific area to be focused, on an image acquired from an image pickup unit, according to the type of subject.
  • (2) The imaging device according to (1), wherein the subject is a person or an animal.
  • (3) The imaging device according to (1), wherein the subject can be set in advance.
  • (5) The imaging device according to (4), wherein the specific part is a pupil.
  • (7) The imaging device according to any one of (1) to (6), further including an area detection unit that detects the specific area, wherein the display control unit controls the display of the notice frame in accordance with a focus setting frame for setting a focusing region when a plurality of the specific areas are detected.
  • (8) The image pickup apparatus according to (7), wherein the display control unit controls the display of the notice frame according to the specific area closer to the center position of the focus setting frame.
  • (9) The image pickup apparatus according to (7) or (8), further including: a focusing instruction unit that instructs the start of focusing for each imaging unit of the image; and an area setting unit that sets the detected specific area as a focusing area of the image when the start of focusing is instructed.
  • (10) The imaging device according to (9), wherein the area setting unit sets the specific area detected within a predetermined range indicated by the focus setting frame as the focusing area.
  • (11) The image pickup apparatus according to any one of (6) to (10), further including: an image capturing instruction unit that instructs image capturing; and a focus control unit that, when image capturing is instructed, controls the image pickup unit so as to perform focusing in the focusing area set by the area setting unit and acquire the image.
  • (12) An imaging method in which an imaging device displays a notice frame giving advance notice of a specific area to be focused, on an image acquired by an imaging unit, according to the type of subject.
  • (13) A program that causes a computer to function as a display control unit that displays a notice frame giving advance notice of a specific area to be focused, on an image acquired from an imaging unit, according to the type of subject.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)
  • Focusing (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)

Abstract

This technology relates to an imaging device, an imaging method, and a program that make it possible to easily perform imaging focused on a subject or a specific portion of the subject. The imaging device displays a notice frame that gives advance notice of a specific region to be focused, on an image acquired by an imaging unit, according to the type of subject. This technology is applicable to an imaging device.

Description

Imaging device, imaging method, and program
The present technology relates to an image capturing apparatus, an image capturing method, and a program, and particularly to an image capturing apparatus, an image capturing method, and a program capable of easily performing image capturing focused on a subject or a specific portion of the subject.
Patent Document 1 proposes an imaging device that detects a pupil region, which is the region of a person's pupil, and focuses on the detected pupil region.
International Publication No. 2015/045911
The imaging device described in Patent Document 1 detects only a person's pupil region, and it was difficult to detect an animal's pupil region.
The present technology has been made in view of such a situation, and makes it possible to easily perform imaging focused on a subject or a specific portion of the subject.
An image pickup device according to one aspect of the present technology includes a display control unit that displays a notice frame giving advance notice of a specific area to be focused, on an image acquired from an image pickup unit, according to the type of subject.
In one aspect of the present technology, a notice frame giving advance notice of a specific area to be focused is displayed on an image acquired from an imaging unit, according to the type of subject.
FIG. 1 is a diagram showing an example of processing for detecting a person's pupil.
FIG. 2 is a diagram showing an example of processing for detecting an animal's pupil.
FIG. 3 is a diagram showing a method of instructing the start of focusing in the animal pupil detection mode.
FIG. 4 is a diagram showing a method of instructing the start of focusing in the animal pupil detection mode.
FIG. 5 is a block diagram showing a main configuration example of an imaging device to which the present technology is applied.
FIG. 6 is a block diagram showing a configuration example of the digital signal processing unit.
FIG. 7 is a diagram showing a selection method used when a plurality of pupils are detected in the human pupil detection mode.
FIG. 8 is a diagram showing a selection method used when a plurality of pupils are detected in the animal pupil detection mode.
FIG. 9 is a diagram showing the relationship between the orientation of the subject's face and the number of detected pupil regions.
FIG. 10 is a flowchart illustrating the imaging process in the human pupil detection mode.
FIG. 11 is a flowchart, continuing from FIG. 10, illustrating the imaging process in the human pupil detection mode.
FIG. 12 is a flowchart illustrating the imaging process in the animal pupil detection mode.
FIG. 13 is a flowchart, continuing from FIG. 12, illustrating the imaging process in the animal pupil detection mode.
FIG. 14 is a flowchart illustrating the pupil region selection process of step S114 of FIG. 12.
Hereinafter, modes for carrying out the present technology will be described, in the following order.
1. Overview of the present technology
2. Configuration example of the imaging device
3. Selection method when multiple pupils are detected
4. Operation example of the imaging device
<1. Overview of the present technology>

An imaging device according to an embodiment of the present technology has a detection mode for each specific part, in which the region of a specific part of a subject is detected and the detected region is used for focusing. Examples of such detection modes include a human pupil detection mode and an animal pupil detection mode.
FIG. 1 is a diagram showing an example of processing for detecting the pupil of a person.

FIG. 1 shows the screens displayed on the display unit of the imaging device when the human pupil detection mode is set. The screen on the left side of FIG. 1 is displayed before the user performs the operation that instructs the start of focusing; the screen on the right side is displayed after that operation. A person's face appears on the screen.

When the human pupil detection mode is set, as shown on the left side of FIG. 1, the face region that overlaps the focus frame F is detected from among the face regions in the image, and a face notice frame PF giving advance notice of the detected face region is displayed. In the example on the left side of FIG. 1, the face notice frame PF partially overlaps the focus frame F. The focus frame F is a frame for setting the focus position.

When the user performs the operation that instructs the start of focusing in this state, as shown on the right side of FIG. 1, a pupil region is detected within the face region, and a pupil frame AE indicating the detected pupil region is displayed in place of the face notice frame PF. Focusing is then performed on the pupil region surrounded by the pupil frame AE.
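As a rough illustration, not part of the disclosure itself, this two-stage behavior of the human pupil detection mode can be sketched as follows. All names are hypothetical, and the detectors are passed in as callables so that the sketch stays self-contained.

```python
from typing import Callable, Optional, Tuple

Region = Tuple[float, float, float, float]  # x, y, w, h

def human_mode_frame(
    detect_face: Callable[[], Optional[Region]],
    detect_pupil_in: Callable[[Region], Optional[Region]],
    focus_started: bool,
):
    """Choose the frame to draw in the human pupil detection mode.

    Before the focus-start operation, the face that overlaps the focus
    frame gets a face notice frame (PF); after it, the pupil detected
    inside that face gets a pupil frame (AE) in place of the notice frame.
    """
    face = detect_face()
    if face is None:
        return None
    if not focus_started:
        return ("face_notice_PF", face)
    pupil = detect_pupil_in(face)
    # When no pupil is found, the face region itself is used instead.
    return ("pupil_AE", pupil) if pupil else ("face_frame", face)
```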
FIG. 2 is a diagram showing an example of processing for detecting the pupil of an animal.

FIG. 2 shows the screens displayed on the display unit of the imaging device when the animal pupil detection mode is set. The screen on the left side of FIG. 2 is displayed before the user performs the operation that instructs the start of focusing; the screen on the right side is displayed after that operation. The face of an animal (a cat) appears on the screen.

When the animal pupil detection mode is set, as shown on the left side of FIG. 2, pupil regions are detected both inside and outside the focus frame F, and a pupil notice frame PE giving advance notice of the detected pupil region is displayed. At this time, pupil detection gives priority to the inside of the focus frame F.

When the user performs the operation that instructs the start of focusing in this state, as shown on the right side of FIG. 2, a pupil frame AE indicating the pupil region is displayed in place of the pupil notice frame PE, and focusing is performed on the pupil region surrounded by the pupil frame AE.
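The priority given to the inside of the focus frame can be modeled by a simple preference rule. A minimal sketch, assuming rectangles represented as (x, y, w, h) tuples:

```python
def pick_animal_pupil(pupils, focus_frame):
    """Prefer a pupil whose centre lies inside the focus frame; otherwise
    fall back to any detected pupil (detection also runs outside the frame)."""
    def centre_inside(r, f):
        cx, cy = r[0] + r[2] / 2, r[1] + r[3] / 2
        return f[0] <= cx <= f[0] + f[2] and f[1] <= cy <= f[1] + f[3]

    inside = [p for p in pupils if centre_inside(p, focus_frame)]
    candidates = inside or pupils
    return candidates[0] if candidates else None
```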
The pupil frame AE is displayed in a manner different from the pupil notice frame PE: for example, the pupil notice frame PE is displayed as a white frame and the pupil frame AE as a green frame. The same applies to the face notice frame PF and the pupil frame AE.

As described above, when the human pupil detection mode is set, the face region is detected before the user performs the operation that instructs the start of focusing, and the face notice frame PF is displayed on the detected face region.

On the other hand, when the animal pupil detection mode is set, the pupil region is detected before the user performs the operation that instructs the start of focusing, and the pupil notice frame PE is displayed on the detected pupil region.

Because the notice frame is displayed before the user instructs the start of focusing, the user can tell in advance whether the intended position, or an unintended one, has been detected, and can therefore choose not to perform autofocus. This makes it easy to capture images focused on a specific part of the subject.
FIGS. 3 and 4 are diagrams showing how the start of focusing is instructed when the animal pupil detection mode is set.

In the imaging device, the user can instruct the start of focusing by operations such as pressing the pupil AF (autofocus) button P, pressing the AF-ON button Q, or half-pressing the shutter button R.

As shown on the left side of FIG. 3, the pupil AF button P is configured, for example, as the center button of a cross-shaped button provided on the rear face of the imaging device. The pupil AF button P is a dedicated button for instructing the start of focusing on a pupil, which is a specific part of the subject.

As described above, before the user performs the operation that instructs the start of focusing, the pupil notice frame is displayed on the detected pupil region in the image of the subject on the display unit of the imaging device.

When the user presses the pupil AF button P in this state, the screen shown on the right side of FIG. 3 is displayed on the display unit.

As shown on the right side of FIG. 3, the focus frame F and the pupil frame AE are displayed superimposed on an image of a cat facing forward. On the right side of FIG. 3, the focus frame F is displayed at the center of the screen, around the cat's back.

When the user presses the pupil AF button P, a range extending to about 50% of the screen, centered on the center of the focus frame F, is set as the pupil detection range W1. The detection range W1 is not actually displayed. Alternatively, the focus frame F itself may be set to a wide range such as the detection range W1.

When the user presses the pupil AF button P and a pupil region has been detected within the detection range W1, the pupil frame AE is displayed on that pupil region in place of the pupil notice frame that was displayed before the button was pressed.
As shown on the left side of FIG. 4, the AF-ON button Q is provided, for example, on the upper part of the rear face of the imaging device. The AF-ON button Q is a button for instructing the start of focusing on a pupil within the focus frame F.

As shown on the left side of FIG. 4, the shutter button R is provided, for example, on the top face of the imaging device. When half-pressed by the user, the shutter button R, like the AF-ON button Q, serves as a button for instructing the start of focusing on a pupil within the focus frame F; when fully pressed, it serves as a button for releasing the shutter.

As described above, before the user performs the operation that instructs the start of focusing, the pupil notice frame is displayed on the detected pupil region in the image of the subject on the display unit of the imaging device.

When the user presses the AF-ON button Q or half-presses the shutter button R in this state, the screen shown on the right side of FIG. 4 is displayed on the display unit.

As shown on the right side of FIG. 4, the focus frame F and the pupil frame AE are displayed superimposed on an image in which a cat faces forward and a dog lies beside it. On the right side of FIG. 4, the focus frame F is displayed on the cat's left pupil.

When the user presses the AF-ON button Q, a range extending from the center of the focus frame F to the vicinity of the focus frame F is set as the pupil detection range W2. The detection range W2 is not actually displayed.

When the user presses the AF-ON button Q and a pupil region has been detected within the detection range W2, the pupil frame AE is displayed on that pupil region in place of the pupil notice frame that was displayed before the button was pressed.
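The two detection ranges can be summarized as a function of the triggering operation. A sketch under the proportions stated above; the margin around the focus frame used for W2 is an assumed value:

```python
def detection_range(trigger, focus_frame, screen_w, screen_h, margin=0.1):
    """Return the pupil search range (x, y, w, h) for a focus-start trigger.

    "eye_af"  -> W1: up to about 50% of the screen, centred on the centre
                 of the focus frame F (pupil AF button).
    otherwise -> W2: the focus frame plus a small margin (AF-ON button or
                 half-press of the shutter button).
    Neither range is actually displayed on the screen.
    """
    fx, fy, fw, fh = focus_frame
    cx, cy = fx + fw / 2, fy + fh / 2
    if trigger == "eye_af":
        w, h = screen_w * 0.5, screen_h * 0.5
    else:
        w, h = fw * (1 + 2 * margin), fh * (1 + 2 * margin)
    return (cx - w / 2, cy - h / 2, w, h)
```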
As described above, in the animal pupil detection mode, the imaging device displays the pupil notice frame giving advance notice of the pupil region before the user performs the operation that instructs the start of focusing. When the user performs that operation, the pupil frame AE indicating the pupil region is displayed and focusing is performed on the pupil region.

The user can therefore immediately grasp the position of the animal's pupil in the image, and can easily capture images focused on the animal's pupil.

Note that the positions at which the buttons are provided on the imaging device are not limited to those shown in FIGS. 3 and 4, and may be other positions.

In the above description, an example of detecting a pupil as a specific part of the subject has been described; however, when the subject is a small animal such as a bird, the subject itself may be detected.
<2. Configuration example of the imaging device>

FIG. 5 is a block diagram showing a main configuration example of the imaging device.
The imaging device 100 shown in FIG. 5 has a detection mode for each specific part of the subject, including the human pupil detection mode and the animal pupil detection mode. The imaging device 100 is not limited to human or animal pupils, and can be provided with a detection mode for any specific part. The user can select and set a desired detection mode from among the detection modes for subjects and for specific parts of subjects.

As shown in FIG. 5, the imaging device 100 includes a lens 101, a diaphragm 102, an image sensor 103, an analog signal processing unit 104, an A/D conversion unit 105, and a digital signal processing unit 106. The imaging device 100 also includes a lens driver 121, a TG (Timing Generator) 122, a gyro 123, and a system controller 131.

The imaging device 100 further includes a display unit 141, a storage unit 142, an input unit 143, an output unit 144, a communication unit 145, an operation unit 146, and a drive 147.

The lens 101 adjusts the focus on the subject and collects light from the in-focus position. The diaphragm 102 adjusts the exposure.

The image sensor 103 captures an image of the subject. That is, the image sensor 103 photoelectrically converts light from the subject and outputs the result to the analog signal processing unit 104 as an image signal. By such photoelectric conversion, the image sensor 103 can capture either still images or moving images.

The analog signal processing unit 104 performs analog signal processing on the image signal obtained by the image sensor 103. The A/D conversion unit 105 A/D-converts the processed image signal to obtain image data, which is a digital signal.

The digital signal processing unit 106 performs digital signal processing on the image data obtained by the A/D conversion unit 105. As the digital signal processing, the digital signal processing unit 106 at least detects the region of a subject, or of a specific part of a subject, from a moving image supplied as image data, and sets a focus region. Hereinafter, a specific part of a subject is simply referred to as a specific part.

The digital signal processing unit 106 also performs processing such as controlling, based on the detection result, the display of a frame indicating the region of the subject or the specific part. Details of these processes will be described later.

The content of the digital signal processing is arbitrary, and processing other than the above may also be performed. For example, the digital signal processing unit 106 may perform color mixture correction, black level correction, white balance adjustment, demosaic processing, matrix processing, gamma correction, YC conversion, and the like as digital signal processing. The digital signal processing unit 106 may also perform codec processing, that is, processing relating to the encoding and decoding of image data.
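For orientation, such an optional processing chain could be wired as a simple stage pipeline. The sketch below is a minimal model in which every stage is an identity stub; real stages are device-specific and are not described here:

```python
def _stub(image_data):
    # Identity placeholder; a real stage would transform the data.
    return image_data

color_mixture_correction = black_level_correction = _stub
white_balance_adjustment = demosaic_processing = _stub
matrix_processing = gamma_correction = yc_conversion = _stub

def develop(image_data):
    """Apply the stages in the order named in the text."""
    for stage in (color_mixture_correction, black_level_correction,
                  white_balance_adjustment, demosaic_processing,
                  matrix_processing, gamma_correction, yc_conversion):
        image_data = stage(image_data)
    return image_data
```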
The lens driver 121 drives the lens 101 and the diaphragm 102 to control the focal length, the exposure, and the like. The TG 122 drives the image sensor 103 by generating a synchronization signal and supplying it to the image sensor 103, thereby controlling image capture. The gyro 123 is a sensor that detects the position and orientation of the imaging device 100, and outputs the detected sensor information to the A/D conversion unit 105.

The system controller 131 includes, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory), and controls each processing unit of the imaging device 100 by executing programs and processing data. Based on signals supplied from the operation unit 146, the system controller 131 also receives operation inputs from the user and performs processing or control corresponding to those inputs.

For example, the system controller 131 can control the focal length, the exposure, and the like based on the detection result for the region of the subject or the specific part supplied from the digital signal processing unit 106.

The display unit 141 is configured, for example, as a liquid crystal display, and displays images corresponding to the image data stored in the memory of the digital signal processing unit 106. For example, the display unit 141 can display captured images obtained by the image sensor 103, stored captured images, and the like.

The storage unit 142 stores the image data held in the memory of the digital signal processing unit 106. To reduce the amount of data, the storage unit 142 stores data encoded by the digital signal processing unit 106. The encoded data stored in the storage unit 142 is read out and decoded by the digital signal processing unit 106 and, for example, displayed on the display unit 141.

The input unit 143 has an external input interface such as an external input terminal, and outputs various data (for example, image data and encoded data) supplied from outside the imaging device 100 via the external input interface to the digital signal processing unit 106.

The output unit 144 has an external output interface such as an external output terminal, and outputs various data supplied via the digital signal processing unit 106 to the outside of the imaging device 100 through the external output interface.

The communication unit 145 performs predetermined communication, which is at least one of wired and wireless communication, with other devices, and exchanges data with them via that communication. For example, the communication unit 145 outputs various data (for example, image data and encoded data) supplied from the digital signal processing unit 106 to other devices via the predetermined communication, and acquires various data from other devices and outputs the acquired data to the digital signal processing unit 106.

The operation unit 146 is configured by arbitrary input devices such as keys, buttons, or a touch panel, and includes the pupil AF button P, the AF-ON button Q, and the shutter button R described above with reference to FIGS. 3 and 4. The operation unit 146 receives operation inputs from the user and outputs signals corresponding to those inputs to the system controller 131.

The drive 147 reads out information (programs, data, and so on) stored on a removable recording medium 148, such as a semiconductor memory, mounted on it, and supplies the read information to the system controller 131. When a writable removable recording medium 148 is mounted, the drive 147 can also store information (image data, encoded data, and so on) supplied via the system controller 131 on the removable recording medium 148.

Note that the lens 101, the diaphragm 102, and the lens driver 121 described above may be formed as an interchangeable lens 151 in a housing separate from the imaging device 100 and attachable to and detachable from it.
FIG. 6 is a block diagram showing a configuration example of the digital signal processing unit 106.

The digital signal processing unit 106 includes a memory 211, a subject detection unit 212, a region setting unit 213, a display control unit 214, and a codec processing unit 215.

The memory 211 stores the image data supplied from the A/D conversion unit 105, for example, image data of each frame of a moving image or image data of a still image.

The subject detection unit 212 detects the region of a subject or a specific part from the image data stored in the memory 211, based on signals corresponding to the user's operation inputs supplied from the system controller 131, and outputs the detection result to the region setting unit 213 and the display control unit 214.

The subject detection unit 212 includes a person detection unit 212-1, an animal detection unit 212-2, and an animal detection unit 212-3.

When the detection mode for each specific part is the human pupil detection mode, the person detection unit 212-1 detects a person's face region and outputs the detection result to the region setting unit 213 and the display control unit 214. When the user performs the operation that instructs the start of focusing, the person detection unit 212-1 detects the person's pupil region based on the face region detection result, and outputs the pupil region detection result to the region setting unit 213 and the display control unit 214.

The animal detection units 212-2 and 212-3 differ in the types of animals they detect. When the detection mode for each specific part is the animal pupil detection mode, the animal detection units 212-2 and 212-3 detect the pupil region of the target animal and output the detection result to the region setting unit 213 and the display control unit 214.

A method such as deep learning, for example, is used to detect animal pupils. When the user performs the operation that instructs the start of focusing, the animal detection units 212-2 and 212-3 detect the animal's pupil according to the focus frame and output the detection result to the region setting unit 213 and the display control unit 214.

For example, the animal detection unit 212-2 detects the pupil regions of animals such as dogs and cats, while the animal detection unit 212-3 detects the pupil regions of animals such as lizards and frogs. The detection units are not limited to 212-2 and 212-3; other animal detection units may be provided, for example, one per group of animal types that share the same detection characteristics.
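One way to picture these per-species units is as a registry keyed by a group of animal types with shared detection characteristics. The names below are illustrative only, and stubs stand in for the real detectors:

```python
# Hypothetical registry mirroring the species-specific detection units.
ANIMAL_PUPIL_DETECTORS = {
    "dog_cat":     lambda image, focus_frame: [],  # cf. unit 212-2
    "lizard_frog": lambda image, focus_frame: [],  # cf. unit 212-3
}

def detect_animal_pupils(image, focus_frame, species_group):
    """Run the detector registered for the selected species group."""
    detector = ANIMAL_PUPIL_DETECTORS.get(species_group)
    return detector(image, focus_frame) if detector else []
```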
The region setting unit 213 sets, according to the detection mode for each specific part, either the region of the specific part detected by the subject detection unit 212 or the region indicated by the focus frame as the focus region, and supplies information on the set focus region to the system controller 131.

The display control unit 214 generates the focus frame according to signals corresponding to the user's operation inputs supplied from the system controller 131, superimposes it on the image from the memory 211, and displays it on the display unit 141. Information on the focus frame is output to the subject detection unit 212.

The display control unit 214 also generates, based on signals corresponding to the user's operation inputs supplied from the system controller 131, a predetermined frame (a face frame, a notice frame, or a pupil frame) corresponding to the face or pupil region detected by the subject detection unit 212, superimposes the generated frame on the image from the memory 211, and displays it on the display unit 141. Information on the face frame, the notice frame, or the pupil frame is output to the subject detection unit 212 as necessary.

The display control unit 214 also generates GUI (Graphical User Interface) images such as menus, buttons, and cursors, and displays them together with live view images, captured images, and the like.

The codec processing unit 215 performs processing relating to the encoding and decoding of the image data of moving images and still images stored in the memory 211.
<3. Selection method when multiple pupils are detected>

FIG. 7 is a diagram showing the selection method used when a plurality of pupils are detected in the human pupil detection mode.
A of FIG. 7 shows the screen displayed before the user performs the operation that instructs the start of focusing; B of FIG. 7 shows the screen displayed after that operation.

When the human pupil detection mode is set, as shown in A of FIG. 7, the face region that overlaps the focus frame F is detected from among the face regions, and a face notice frame PF indicating the detected face region is displayed.

When the user performs the operation that instructs the start of focusing in this state, as shown in B of FIG. 7, a pupil region is detected within the face region and a pupil frame AE indicating the detected pupil region is displayed. Focusing is performed on the pupil region surrounded by the pupil frame AE.

At this time, within the detected face region, the pupil region located nearer to the imaging device 100 is detected. Therefore, when the left pupil is nearer to the imaging device 100, the pupil frame AE is displayed on the left pupil even if the user places the focus frame F on the right pupil, as shown on the left side of B of FIG. 7. Conversely, when the right pupil is nearer to the imaging device 100, the pupil frame AE is displayed on the right pupil whether or not the user places the focus frame F on it, as shown on the right side of B of FIG. 7.

In this way, in the human pupil detection mode, regardless of the position of the focus frame, the user only has to select a face, and the nearer pupil within the selected face region is brought into focus.
FIG. 8 is a diagram showing the selection method used when a plurality of pupils are detected in the animal pupil detection mode.

A of FIG. 8 shows the screen displayed before the user performs the operation that instructs the start of focusing; B of FIG. 8 shows the screen displayed after that operation.

When the animal pupil detection mode is set, as shown in A of FIG. 8, pupil regions are detected both inside and outside the focus frame F, and a pupil notice frame PE giving advance notice of the detected pupil region is displayed. At this time, pupil detection gives priority to the inside of the focus frame F.

However, when a plurality of pupil regions are detected, the pupil notice frame PE is displayed on the pupil region that is nearer to the imaging device 100 and close to the center (center position) of the focus frame F. Details of this pupil region selection will be described later with reference to FIG. 9.

When the user performs the operation that instructs the start of focusing, as shown in B of FIG. 8, the pupil frame AE is displayed on the selected pupil region and focusing is performed on the pupil region indicated by the pupil frame AE.

Note that it is also possible to set in advance that the pupil notice frame PE is not displayed. In this case, the display of the pupil notice frame PE can be prevented from hiding the subject's expression.
FIG. 9 is a diagram showing the relationship between the orientation of the subject's face and the number of detected pupil regions.

FIG. 9 shows images P1 to P7 in which a fox is the subject, with the orientation (angle) of the fox's face differing among the images. Solid rectangles indicate the near-side pupil regions detected in each image, and broken rectangles indicate the far-side pupil regions.

In image P1 the fox faces left; in image P2, diagonally forward-left; in image P3, very slightly diagonally forward-left; in image P4, straight ahead; in image P5, very slightly diagonally forward-right; in image P6, diagonally forward-right; and in image P7, right.

Of these, images P1, P2, P6, and P7, in which the fox faces diagonally forward or sideways (left or right), show cases in which only the pupil region nearer to the imaging device 100 was detected.

In images P3 and P5, in which the fox faces very slightly diagonally forward, two pupil regions are detected, and it is easy to determine which pupil region is nearer to the imaging device 100 and which is farther.

In image P4, in which the fox faces straight ahead, it is difficult to distinguish which of the left and right pupil regions is nearer. In this case, as described above, the notice frame or the pupil frame is displayed on the pupil region that is nearer to the imaging device 100 and closest to the center of the focus frame.
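A sketch of this selection rule, assuming each candidate carries a nearness estimate (how such an estimate is obtained is not specified here, and the tolerance is an assumed value): the nearest pupil wins, and when nearness is too close to call, the pupil closest to the center of the focus frame is chosen.

```python
import math

def choose_pupil(candidates, focus_center, depth_tolerance=0.05):
    """candidates: list of ((x, y, w, h), depth); smaller depth = nearer."""
    if not candidates:
        return None
    nearest = min(depth for _, depth in candidates)
    # Keep only pupils whose nearness is indistinguishable from the nearest,
    # e.g. the two eyes of a face seen head-on.
    front = [c for c in candidates if c[1] - nearest <= depth_tolerance]

    def distance_to_focus(c):
        (x, y, w, h), _ = c
        return math.hypot(x + w / 2 - focus_center[0],
                          y + h / 2 - focus_center[1])

    return min(front, key=distance_to_focus)[0]
```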
<4. Operation example of the imaging device>

FIG. 10 is a flowchart illustrating the imaging processing of the imaging device 100 in the human pupil detection mode.
The imaging processing of FIG. 10 in the human pupil detection mode is started, for example, when the power is turned on by operating the power button. The detection mode for each specific part has been set in advance to the human pupil detection mode, for example, from a setting screen.

In step S11 of FIG. 10, the system controller 131 determines whether to end the processing, for example, whether the power button has been operated.

If it is determined in step S11 that the processing is to end, the imaging processing ends.

If it is determined in step S11 that the processing is not to end, the processing proceeds to step S12.

In step S12, the image sensor 103 photoelectrically converts, on a pixel-by-pixel basis, the light from the subject collected through the lens 101 and the diaphragm 102, thereby acquiring an electric signal for each pixel of the image. The image signal, which consists of the electric signals of the pixels, is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.

In step S13, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image.

In step S14, the person detection unit 212-1 detects a face region from the image data stored in the memory 211, and supplies information on the detected face region to the region setting unit 213 and the display control unit 214.

The user instructs the start of focusing by an operation such as pressing the pupil AF button, pressing the AF-ON button, or half-pressing the shutter button. Note that the instruction to start focusing is given for each unit of image capture. The operation unit 146 receives the user's operation input and outputs a signal corresponding to it to the system controller 131.
In step S15, the system controller 131 determines whether the user has pressed the pupil AF button. If it is determined in step S15 that the pupil AF button has been pressed, the processing proceeds to step S16.

In step S16, the system controller 131 forcibly changes the focus frame to a wide range.

In step S17, the person detection unit 212-1 detects a pupil region in the face region within the focus frame under the control of the system controller 131. Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.

In step S18, the region setting unit 213 determines whether a pupil region has been detected. If it is determined in step S18 that a pupil region has been detected, the processing proceeds to step S19.

In step S19, the region setting unit 213 sets the pupil region detected by the person detection unit 212-1 as the focus region, and supplies information on the set focus region to the system controller 131.

If it is determined in step S18 that no pupil region has been detected, the processing proceeds to step S20.

In step S20, the region setting unit 213 sets the face region detected by the person detection unit 212-1 as the focus region, and outputs information on the set focus region to the system controller 131.
On the other hand, if it is determined in step S15 that the pupil AF button has not been pressed, the processing proceeds to step S21.

In step S21, the system controller 131 determines whether the user has half-pressed the shutter button or pressed the AF-ON button.

If it is determined in step S21 that the shutter button has been half-pressed or the AF-ON button has been pressed, the processing proceeds to step S22.

In step S22, the system controller 131 does not change the focus frame.

In step S23, the person detection unit 212-1 detects a pupil region in the face region within the focus frame under the control of the system controller 131. Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.

In step S24, the region setting unit 213 determines whether a pupil region has been detected. If it is determined in step S24 that a pupil region has been detected, the processing proceeds to step S25.

In step S25, the region setting unit 213 sets the pupil region detected by the person detection unit 212-1 as the focus region, and supplies information on the set focus region to the system controller 131.

If it is determined in step S24 that no pupil region has been detected, the processing proceeds to step S26.

In step S26, the region setting unit 213 sets the face region detected by the person detection unit 212-1, or the focus frame, as the focus region, and outputs information on the set focus region to the system controller 131.
If it is determined in step S21 that neither the half-press of the shutter button nor the press of the AF-ON button has been performed, the processing proceeds to step S27.

In step S27, the region setting unit 213 sets the face region detected by the person detection unit 212-1 as the focus region, and outputs information on the set focus region to the system controller 131.

In step S28, the display control unit 214 generates a face notice frame for the face region detected by the person detection unit 212-1, superimposes it on the live view image, and causes the display unit 141 to display it.
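Condensing steps S15 to S28 above, the branch structure can be sketched as a single function. This is a paraphrase of the flowchart, not the implementation; the pupil detector is passed in as a callable:

```python
def human_mode_focus_area(trigger, face, detect_pupil, focus_frame):
    """Return the focus region chosen by the human pupil detection mode.

    trigger: "eye_af", "half_press_or_af_on", or None (no operation).
    """
    if trigger == "eye_af":
        pupil = detect_pupil("wide")           # S16-S17: frame forced wide
        return pupil if pupil else face        # S19 / S20
    if trigger == "half_press_or_af_on":
        pupil = detect_pupil(focus_frame)      # S22-S23: frame unchanged
        if pupil:
            return pupil                       # S25
        return face if face else focus_frame  # S26
    return face                                # S27 (notice frame shown at S28)
```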
After steps S19, S20, and S25, the processing proceeds to step S29 in FIG. 11.

In step S29, the system controller 131 controls the lens driver 121 to drive the optical system, such as the lens 101 and the diaphragm 102, so that the focus region comes into focus. The processing then proceeds to step S30.
After step S26 in FIG. 10, the processing also proceeds to step S30.
In step S30, the system controller 131 determines whether focus has been achieved.

If it is determined in step S30 that focus has been achieved, the processing proceeds to step S31.

In step S31, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image. The display control unit 214 also superimposes a focusing frame (a pupil frame, a face frame, or the focus frame), which is a frame surrounding the set focus region, on the live view image and displays it on the display unit 141.

If it is determined in step S30 that focus has not been achieved, step S31 is skipped and the processing proceeds to step S32.
After step S28 in FIG. 10, the processing also proceeds to step S32.
In step S32, the system controller 131 determines, based on the signal corresponding to the operation input from the operation unit 146, whether the shutter button has been fully pressed. If it is determined in step S32 that the shutter button has been fully pressed, the processing proceeds to step S33.

In step S33, the image sensor 103 photoelectrically converts, on a pixel-by-pixel basis, the light from the subject collected through the optical system, such as the lens 101 and the diaphragm 102, thereby acquiring an electric signal for each pixel of the image. The image signal is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.

In step S34, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as the captured image.

In step S35, the codec processing unit 215 encodes the image data stored in the memory 211 and supplies the encoded image data to the storage unit 142.

In step S36, the codec processing unit 215 causes the storage unit 142 to record the encoded image data. The processing then returns to step S11, and the subsequent processing is repeated.

If it is determined in step S32 that the shutter button has not been fully pressed, the processing also returns to step S11 and the subsequent processing is repeated.
FIG. 12 is a flowchart illustrating the imaging processing of the imaging device 100 in the animal pupil detection mode.

The imaging processing of FIG. 12 in the animal pupil detection mode is started, for example, when the power is turned on by operating the power button. The detection mode for each specific part has been set in advance to the animal pupil detection mode, for example, from a setting screen. FIG. 12 describes an example in which an animal's pupil is detected by the animal detection unit 212-2, which detects the pupils of cats and dogs.

In step S111 of FIG. 12, the system controller 131 determines whether to end the processing, for example, whether the power button has been operated.

If it is determined in step S111 that the processing is to end, the imaging processing ends.

If it is determined in step S111 that the processing is not to end, the processing proceeds to step S112.

In step S112, the image sensor 103 photoelectrically converts, on a pixel-by-pixel basis, the light from the subject collected through the lens 101 and the diaphragm 102, thereby acquiring an electric signal for each pixel of the image. The image signal is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.

In step S113, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image.

In step S114, the animal detection unit 212-2 detects pupil regions from the image data stored in the memory 211, and supplies information on the detected pupil regions to the region setting unit 213 and the display control unit 214.

In step S115, the display control unit 214 performs pupil region selection processing on the pupil regions detected by the animal detection unit 212-2. Through this selection processing, the pupil region on which the notice frame is to be displayed is selected from among the plurality of detected pupil regions.
In step S116, the system controller 131 determines whether the user has pressed the pupil AF button.

If it is determined in step S116 that the pupil AF button has been pressed, the processing proceeds to step S117.

In step S117, the system controller 131 forcibly changes the focus frame to a wide range.

In step S118, the animal detection unit 212-2 detects a pupil region within the widened focus frame under the control of the system controller 131. Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.

In step S119, the region setting unit 213 determines whether a pupil region has been detected. If it is determined in step S119 that a pupil region has been detected, the processing proceeds to step S120.

In step S120, the region setting unit 213 sets the pupil region detected by the animal detection unit 212-2 as the focus region, and supplies information on the set focus region to the system controller 131.
On the other hand, if it is determined in step S116 that the pupil AF button has not been pressed, the processing proceeds to step S121.

In step S121, the system controller 131 determines whether the user has half-pressed the shutter button or pressed the AF-ON button.

If it is determined in step S121 that the shutter button has been half-pressed or the AF-ON button has been pressed, the processing proceeds to step S122.

In step S122, the system controller 131 does not change the focus frame.

In step S123, the animal detection unit 212-2 detects a pupil region within the focus frame under the control of the system controller 131. Information on the detected pupil region is output to the region setting unit 213 and the display control unit 214.

In step S124, the region setting unit 213 determines whether a pupil region has been detected. If it is determined in step S124 that a pupil region has been detected, the processing proceeds to step S125.

In step S125, the region setting unit 213 sets the pupil region detected by the animal detection unit 212-2 as the focus region, and supplies information on the set focus region to the system controller 131.
If it is determined in step S124 that no pupil region has been detected, the processing proceeds to step S126.
In step S126, the region setting unit 213 sets the focus frame as the focus region as an alternative condition, and outputs information on the set focus region to the system controller 131.

If it is determined in step S121 that neither the half-press of the shutter button nor the press of the AF-ON button has been performed, the processing proceeds to step S127.

In step S127, the region setting unit 213 determines whether a pupil region has been detected. If it is determined in step S127 that a pupil region has been detected, the processing proceeds to step S128.

In step S128, the region setting unit 213 sets the pupil region detected by the animal detection unit 212-2 as the focus region, and supplies information on the set focus region to the system controller 131.

In step S129, the display control unit 214 generates a pupil notice frame for the pupil region detected by the animal detection unit 212-2, superimposes it on the live view image, and causes the display unit 141 to display it.
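The animal-mode branches S116 to S129 mirror the human mode but without the face fallback; a sketch in the same style as the earlier one, again a paraphrase of the flowchart rather than the implementation:

```python
def animal_mode_focus_area(trigger, detect_pupil, focus_frame):
    """Return the focus region chosen by the animal pupil detection mode,
    or None when no pupil is found and no focusing is to be performed."""
    if trigger == "eye_af":
        return detect_pupil("wide")             # S117-S118 -> S120 or none
    if trigger == "half_press_or_af_on":
        pupil = detect_pupil(focus_frame)       # S122-S123
        return pupil if pupil else focus_frame  # S125 / S126
    return detect_pupil(focus_frame)            # S127-S128 (notice frame, S129)
```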
 ステップS120およびS125の後、処理は、図13のステップS130に進む。 After steps S120 and S125, the process proceeds to step S130 in FIG.
 ステップS130において、システムコントローラ131は、レンズドライバ121を制御し、合焦領域に焦点が合うように、レンズ101および絞り102などの光学系を駆動する。その後、処理は、ステップS131に進む。 In step S130, the system controller 131 controls the lens driver 121 to drive the optical system such as the lens 101 and the diaphragm 102 so that the focus area is in focus. Then, a process progresses to step S131.
 図12のステップS126の後も処理は、ステップS131に進む。 After step S126 of FIG. 12, the process proceeds to step S131.
 ステップS131において、システムコントローラ131は、合焦したか否かを判定する。 In step S131, the system controller 131 determines whether or not focus is achieved.
 ステップS131において、合焦したと判定された場合、処理は、ステップS132に進む。 If it is determined in step S131 that the subject is in focus, the process proceeds to step S132.
 ステップS132において、表示制御部214は、メモリ211に記憶されている画像データに基づく画像をライブビュー画像として表示部141に表示させる。また、表示制御部214は、設定された合焦領域を囲む枠である合焦枠(瞳枠またはフォーカス枠)をライブビュー画像に重畳して表示部141に表示させる。 In step S132, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as a live view image. In addition, the display control unit 214 causes the display unit 141 to display a focus frame (pupil frame or focus frame), which is a frame surrounding the set focus region, on the live view image.
 ステップS131において、合焦していないと判定された場合、ステップS132をスキップし、処理は、ステップS133に進む。 If it is determined in step S131 that the subject is out of focus, step S132 is skipped and the process proceeds to step S133.
If it is determined in step S119 or S127 of FIG. 12 that no pupil area has been detected, focusing is not performed and the process proceeds to step S133 in FIG. 13. The process also proceeds to step S133 after step S129.
In step S133, the system controller 131 determines, based on a signal corresponding to an operation input from the operation unit 146, whether the shutter button has been fully pressed. If it is determined in step S133 that the shutter button has been fully pressed, the process proceeds to step S134.
In step S134, the image sensor 103 photoelectrically converts, pixel by pixel, the light from the subject collected through the optical system, such as the lens 101 and the diaphragm 102, to acquire an electric signal for each pixel of the image. The image signal, made up of the electric signals of the pixels, is output to the memory 211 of the digital signal processing unit 106 via the analog signal processing unit 104 and the A/D conversion unit 105.
In step S135, the display control unit 214 causes the display unit 141 to display an image based on the image data stored in the memory 211 as the captured image.
In step S136, the codec processing unit 215 encodes the image data stored in the memory 211 and supplies the encoded image data to the storage unit 142.
In step S137, the codec processing unit 215 causes the storage unit 142 to record the encoded image data. The process then returns to step S111, and the subsequent processing is repeated.
If it is determined in step S133 that the shutter button has not been fully pressed, the process likewise returns to step S111 and the subsequent processing is repeated.
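The capture path of steps S133 through S137 is a straight pipeline from full press to recording. A sketch under the same naming assumptions (the component wrappers are illustrative, not the actual API):

```python
def handle_shutter(ui, image_sensor, display, codec, storage, memory):
    """Steps S133-S137 (illustrative component wrappers)."""
    if not ui.shutter_fully_pressed:              # step S133
        return                                    # back to step S111
    raw = image_sensor.capture()                  # step S134: per-pixel photoelectric conversion
    memory.store(raw)                             # via analog processing and A/D conversion
    display.show_captured(memory.latest_image())  # step S135: show the captured image
    encoded = codec.encode(memory.latest_image()) # step S136: codec processing unit 215
    storage.record(encoded)                       # step S137, then back to step S111
```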
In the processing described above, when no pupil area is detected, the focusing processing is either not executed or is executed under another condition, such as the focus frame. Which behavior is used when no pupil area is detected may be made selectable, for example through a user setting. In that case, the behavior on a detection failure may also be changed according to which target subject or specific part failed to be detected.
For example, by setting the device not to perform the focusing processing when detection fails, further degradation of focusing accuracy can be prevented when the target subject or the specific part moves little. Conversely, by setting the device to perform the focusing processing at a position designated by the user when detection fails, preliminary focusing can be performed when the target subject or the specific part moves a lot.
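This selectable fallback could be modeled as a simple mode setting; the enum and helper below are hypothetical, not part of the specification:

```python
from enum import Enum, auto

class DetectionFallback(Enum):
    SKIP_FOCUSING = auto()        # safest when the subject barely moves
    FOCUS_AT_USER_POINT = auto()  # pre-focus for subjects that move a lot

def on_pupil_not_detected(mode, system_controller, user_point):
    """Apply the user-selected fallback (illustrative sketch)."""
    if mode is DetectionFallback.FOCUS_AT_USER_POINT and user_point is not None:
        system_controller.drive_lens_to(user_point)
    # DetectionFallback.SKIP_FOCUSING: intentionally do nothing
```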
FIG. 14 is a flowchart illustrating the pupil area selection processing in step S114 of FIG. 12.
In step S151, the display control unit 214 determines whether there are two or more pupil areas within the focus frame. For example, when two animals are positioned on the near side as seen from the imaging device 100, anywhere from one to four pupils may be detected.
If it is determined in step S151 that there are two or more pupil areas within the focus frame, the process proceeds to step S152.
In step S152, the display control unit 214 determines whether there are two or more pupil areas on the near side.
If it is determined in step S152 that there are two or more pupil areas on the near side, the process proceeds to step S153.
In step S153, the display control unit 214 calculates the distance between each pupil area and the center of the focus frame.
In step S154, the display control unit 214 selects the pupil area with the shortest distance from the center of the focus frame.
On the other hand, if it is determined in step S151 that there are not two or more detected pupil areas within the focus frame, the process proceeds to step S155.
In step S155, the display control unit 214 selects the single pupil area.
If it is determined in step S152 that there are not two or more pupil areas on the near side, the process proceeds to step S156.
In step S156, the display control unit 214 selects the pupil area on the near side.
After steps S154 to S156, the pupil area selection processing ends and the process returns to step S114 in FIG. 12.
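Putting steps S151 through S156 together, the selection rule is: with a single candidate, take it; among several, prefer the near side; among equally near candidates, take the one closest to the focus frame center. A sketch, assuming each pupil record carries a depth estimate and a center point (these field names are invented for illustration):

```python
import math

def select_pupil_area(pupils, frame_center, depth_tolerance=1e-3):
    """Pupil area selection per steps S151-S156 of FIG. 14 (illustrative fields)."""
    if not pupils:
        return None                    # nothing detected within the focus frame
    if len(pupils) == 1:
        return pupils[0]               # step S155: the single pupil area
    nearest = min(p.depth for p in pupils)
    front = [p for p in pupils if p.depth - nearest <= depth_tolerance]
    if len(front) == 1:
        return front[0]                # step S156: the near-side pupil area
    cx, cy = frame_center
    # steps S153 and S154: pick the pupil nearest to the focus frame center
    return min(front, key=lambda p: math.hypot(p.center[0] - cx, p.center[1] - cy))
```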
As described above, in the present technology, a notice frame giving advance notice of the specific area to be focused on is displayed, according to the type of subject, over the image acquired from the imaging unit.
For example, if the notice frame is not displayed and the user nevertheless performs an operation instructing the start of focusing, so that automatic focusing and imaging are carried out, the result may not be what the user intended, depending on the conditions of the subject.
According to the present technology, the displayed notice frame lets the user know in advance whether the detected position is an intended or an unintended one, so the user can choose not to perform automatic focusing.
As a result, the user can easily focus on a specific part such as a pupil area, or on the animal itself, according to the type of subject.
Although the above description covers processing for detecting the pupils of animals such as dogs and cats, the present technology can be applied to specific parts of a subject, such as the pupils, face, part of the face, neck, or head, or to the whole body (the subject), for any kind of living thing, including birds, fish, reptiles, and amphibians. The present technology can also be applied to combinations of such specific parts and subjects.
Furthermore, the present technology is not limited to living things; it can also be applied to specific parts of subjects such as a vehicle's headlights, front emblem, windshield, or driver's seat, or a motorcycle's headlight or the rider's helmet.
In these cases, a detection mode for detecting the specific part of the subject is set in advance and used. In this way, the user's intention, such as which of a plurality of detection results or detection methods should be prioritized, or which of a plurality of subjects should be prioritized, can be conveyed to the imaging device.
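Such a preset detection mode could be as simple as an enumerated setting that routes frames to the corresponding detector; the mode names below are illustrative examples drawn from the subjects mentioned above, not values defined by the specification:

```python
from enum import Enum

class DetectionMode(Enum):
    ANIMAL_PUPIL = "animal_pupil"            # dogs, cats, birds, fish, and so on
    VEHICLE_HEADLIGHT = "vehicle_headlight"
    MOTORCYCLE_HELMET = "motorcycle_helmet"

def detect_specific_area(mode, detectors, frame):
    """Dispatch to the detector registered for the preset mode (hypothetical registry)."""
    return detectors[mode].detect(frame)
```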
Note that, unlike the case of a person, with an animal that has long fur, an attempt to focus on the pupil may end up focusing not on the pupil but on the fur covering it. In this case, an imaging result closer to the user's intention can be obtained by adjusting the focus position backward, or by setting in the imaging device beforehand that the subject is one on whose fur focus tends to settle, with the imaging device then performing control based on that setting.
However, applying the same control to a subject with short fur would produce an out-of-focus result. Therefore, convenience can be further improved by adjusting in advance which correction is applied, depending on the detection target subject or specific part and on other detection results.
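One way to realize this per-subject correction is a configurable focus offset applied behind the detected pupil; the table and values below are purely illustrative assumptions, not figures from the disclosure:

```python
# Offset added to the measured pupil distance (millimeters); positive values
# push the focal plane backward, past fur that covers the eye.
FOCUS_OFFSET_MM = {
    "long_fur_animal": 2.0,   # compensate for focus settling on the fur
    "short_fur_animal": 0.0,  # no correction, to avoid defocusing the eye
    "person": 0.0,
}

def corrected_focus_distance_mm(measured_mm, subject_type):
    """Apply the preset per-subject adjustment (illustrative sketch)."""
    return measured_mm + FOCUS_OFFSET_MM.get(subject_type, 0.0)
```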
In the above description, human pupils and animal pupils are detected by different detection processing; however, human pupils may also be detected by the same detection processing as animal pupils.
The series of processing described above can be executed by hardware or by software. When the series of processing is executed by software, a program constituting the software is installed from a network or a recording medium.
As shown in FIG. 5, for example, this recording medium is a removable recording medium 148 on which the program is recorded and which is distributed separately from the main body of the apparatus in order to deliver the program to the user. The removable recording medium 148 includes magnetic disks (including flexible disks) and optical discs (including CD-ROMs and DVDs), as well as magneto-optical discs (including MDs (Mini Discs)) and semiconductor memories.
In this case, the program can be installed in the storage unit 142 by mounting the removable recording medium 148 in the drive 147.
The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting. In this case, the program can be received via the communication unit 145 and installed in the storage unit 142.
Alternatively, the program can be installed in advance in the storage unit 142 or in a ROM (Read Only Memory) in the system controller 131.
Note that the program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timings, such as when a call is made.
In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
Embodiments of the present technology are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present technology.
For example, the present technology can take the form of cloud computing, in which one function is shared and processed jointly by a plurality of devices via a network.
Each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
Furthermore, when one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared among a plurality of devices.
Note that the effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
<Combination example of configuration>
The present technology may also be configured as follows.
(1) An imaging device including a display control unit that displays a notice frame giving advance notice of a specific area to be focused on, superimposed on an image acquired from an imaging unit, according to the type of subject.
(2) The imaging device according to (1), in which the subject is a person or an animal.
(3) The imaging device according to (1), in which the subject can be set in advance.
(4) The imaging device according to any one of (1) to (3), in which the specific area is an area of a specific part of a subject.
(5) The imaging device according to (4), in which the specific part is a pupil.
(6) The imaging device according to (1), in which the specific area is an area of a subject.
(7) The imaging device according to any one of (1) to (6), further including an area detection unit that detects the specific area, in which the display control unit controls the display of the notice frame according to a focus setting frame for setting a focus area when a plurality of the specific areas are detected.
(8) The imaging device according to (7), in which the display control unit controls the display of the notice frame according to the specific area closer to the center position of the focus setting frame.
(9) The imaging device according to (7) or (8), further including a focusing instruction unit that instructs the start of focusing in units of capturing the image, and an area setting unit that sets the detected specific area as the focus area of the image when the start of focusing is instructed.
(10) The imaging device according to (9), in which the area setting unit sets the specific area detected within a predetermined range as the focus area.
(11) The imaging device according to (9), in which the area setting unit sets the specific area detected within a predetermined range indicated by the focus setting frame as the focus area.
(12) The imaging device according to any one of (9) to (11), in which, when the focus area is set by the area setting unit, the display control unit controls the display of a focus area frame indicating the focus area instead of the display of the notice frame.
(13) The imaging device according to (9), in which the display control unit controls the display of the focus area frame by a display method different from that of the notice frame.
(14) The imaging device according to (9), in which the area setting unit sets the specific area detected within a predetermined range indicated by the focus setting frame as the focus area.
(15) The imaging device according to any one of (6) to (10), further including an imaging instruction unit that instructs imaging, and a focusing control unit that controls the imaging unit so as to perform the focusing in the focus area set by the area setting unit and acquire the image when imaging is instructed.
(16) An imaging method in which an imaging device displays a notice frame giving advance notice of a specific area to be focused on, superimposed on an image acquired from an imaging unit, according to the type of subject.
(17) A program that causes a computer to function as a display control unit that displays a notice frame giving advance notice of a specific area to be focused on, superimposed on an image acquired from an imaging unit, according to the type of subject.
100 imaging device, 101 lens, 102 diaphragm, 103 image sensor, 104 analog signal processing unit, 105 A/D conversion unit, 106 digital signal processing unit, 121 lens driver, 131 system controller, 141 display unit, 142 storage unit, 146 operation unit, 211 memory, 212 subject detection unit, 212-1 person detection unit, 212-2 animal detection unit, 212-3 animal detection unit, 213 area setting unit, 214 display control unit, 215 codec processing unit

Claims (16)

1. An imaging device comprising a display control unit that displays a notice frame giving advance notice of a specific area to be focused on, superimposed on an image acquired from an imaging unit, according to the type of subject.
2. The imaging device according to claim 1, wherein the subject is a person or an animal.
3. The imaging device according to claim 1, wherein the subject can be set in advance.
4. The imaging device according to claim 1, wherein the specific area is an area of a specific part of a subject.
5. The imaging device according to claim 4, wherein the specific part is a pupil.
6. The imaging device according to claim 1, wherein the specific area is an area of a subject.
7. The imaging device according to claim 1, further comprising an area detection unit that detects the specific area, wherein the display control unit controls the display of the notice frame according to a focus setting frame for setting a focus area when a plurality of the specific areas are detected.
8. The imaging device according to claim 7, wherein the display control unit controls the display of the notice frame according to the specific area closer to the center position of the focus setting frame.
9. The imaging device according to claim 7, further comprising: a focusing instruction unit that instructs the start of focusing in units of capturing the image; and an area setting unit that sets the detected specific area as the focus area of the image when the start of focusing is instructed.
10. The imaging device according to claim 9, wherein the area setting unit sets the specific area detected within a predetermined range as the focus area.
11. The imaging device according to claim 9, wherein the area setting unit sets the specific area detected within a predetermined range indicated by the focus setting frame as the focus area.
12. The imaging device according to claim 9, wherein, when the focus area is set by the area setting unit, the display control unit controls the display of a focus area frame indicating the focus area instead of the display of the notice frame.
13. The imaging device according to claim 12, wherein the display control unit controls the display of the focus area frame by a display method different from that of the notice frame.
14. The imaging device according to claim 9, further comprising: an imaging instruction unit that instructs imaging; and a focusing control unit that controls the imaging unit so as to perform the focusing in the focus area set by the area setting unit and acquire the image when imaging is instructed.
15. An imaging method in which an imaging device displays a notice frame giving advance notice of a specific area to be focused on, superimposed on an image acquired from an imaging unit, according to the type of subject.
16. A program that causes a computer to function as a display control unit that displays a notice frame giving advance notice of a specific area to be focused on, superimposed on an image acquired from an imaging unit, according to the type of subject.
PCT/JP2019/048877 2018-12-28 2019-12-13 Imaging device, imaging method, and program WO2020137602A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US17/416,890 US11539892B2 (en) 2018-12-28 2019-12-13 Imaging device, imaging method, and program
JP2020563075A JPWO2020137602A1 (en) 2018-12-28 2019-12-13 Imaging device, imaging method, and program
EP19904199.7A EP3904956A4 (en) 2018-12-28 2019-12-13 Imaging device, imaging method, and program
US18/087,119 US20230276120A1 (en) 2018-12-28 2022-12-22 Imaging device, imaging method, and program
JP2023204458A JP2024019284A (en) 2018-12-28 2023-12-04 Imaging apparatus, imaging method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-246877 2018-12-28
JP2018246877 2018-12-28

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/416,890 A-371-Of-International US11539892B2 (en) 2018-12-28 2019-12-13 Imaging device, imaging method, and program
US18/087,119 Continuation US20230276120A1 (en) 2018-12-28 2022-12-22 Imaging device, imaging method, and program

Publications (1)

Publication Number Publication Date
WO2020137602A1 true WO2020137602A1 (en) 2020-07-02

Family

ID=71127972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/048877 WO2020137602A1 (en) 2018-12-28 2019-12-13 Imaging device, imaging method, and program

Country Status (4)

Country Link
US (2) US11539892B2 (en)
EP (1) EP3904956A4 (en)
JP (2) JPWO2020137602A1 (en)
WO (1) WO2020137602A1 (en)


Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4182117B2 (en) * 2006-05-10 2008-11-19 キヤノン株式会社 IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP4110178B2 (en) * 2006-07-03 2008-07-02 キヤノン株式会社 IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP2009010777A (en) * 2007-06-28 2009-01-15 Sony Corp Imaging device, photography control method, and program
WO2010073608A1 (en) * 2008-12-26 2010-07-01 パナソニック株式会社 Image pickup equipment
US9020298B2 (en) * 2009-04-15 2015-04-28 Microsoft Technology Licensing, Llc Automated image cropping to include particular subjects
US9041844B2 (en) * 2012-04-27 2015-05-26 Blackberry Limited Camera device with a dynamic touch screen shutter and dynamic focal control area
JP5814281B2 (en) * 2013-02-22 2015-11-17 京セラドキュメントソリューションズ株式会社 Sheet tray, and sheet feeding apparatus, image forming apparatus, and image reading apparatus having the same
EP3686754A1 (en) * 2013-07-30 2020-07-29 Kodak Alaris Inc. System and method for creating navigable views of ordered images
JP6351231B2 (en) * 2013-10-18 2018-07-04 キヤノン株式会社 IMAGING DEVICE, IMAGING SYSTEM, IMAGING DEVICE CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
WO2015064144A1 (en) * 2013-10-30 2015-05-07 オリンパスイメージング株式会社 Image capturing device, image capturing method, and program
JP6249825B2 (en) * 2014-03-05 2017-12-20 キヤノン株式会社 Imaging device, control method thereof, and control program
US9344673B1 (en) * 2014-03-14 2016-05-17 Brian K. Buchheit Enhancing a camera oriented user interface via an eye focus guide
US9824271B2 (en) * 2014-06-25 2017-11-21 Kodak Alaris Inc. Adaptable eye artifact identification and correction system
CN106060373B (en) * 2015-04-03 2019-12-20 佳能株式会社 Focus detection apparatus and control method thereof
KR102392789B1 (en) * 2015-10-21 2022-05-02 삼성전자주식회사 A method for setting focus and electronic device thereof
US10686979B2 (en) * 2016-02-01 2020-06-16 Sony Corporation Control apparatus and control method
CN109196855A (en) * 2016-03-31 2019-01-11 株式会社尼康 Photographic device, image processing apparatus and electronic equipment
US10499001B2 (en) * 2017-03-16 2019-12-03 Gvbb Holdings S.A.R.L. System and method for augmented video production workflow
DK179948B1 (en) * 2017-05-16 2019-10-22 Apple Inc. Recording and sending Emoji
US10958825B2 (en) * 2017-10-17 2021-03-23 Canon Kabushiki Kaisha Electronic apparatus and method for controlling the same

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010122358A (en) * 2008-11-18 2010-06-03 Nec Electronics Corp Autofocus device, autofocus method, and imaging apparatus
WO2015045911A1 (en) 2013-09-24 2015-04-02 ソニー株式会社 Imaging device, imaging method and program
JP2017103601A (en) * 2015-12-01 2017-06-08 株式会社ニコン Focus detector and camera
JP2016118799A (en) * 2016-02-08 2016-06-30 オリンパス株式会社 Imaging device
JP2017175606A (en) * 2016-03-22 2017-09-28 キヤノン株式会社 Electronic device, control method of the same, and imaging apparatus
JP2018207309A (en) * 2017-06-05 2018-12-27 オリンパス株式会社 Imaging apparatus, imaging method and program

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022070938A1 (en) * 2020-09-30 2022-04-07 ソニーグループ株式会社 Imaging device, imaging method, and program

Also Published As

Publication number Publication date
JP2024019284A (en) 2024-02-08
US11539892B2 (en) 2022-12-27
US20230276120A1 (en) 2023-08-31
EP3904956A4 (en) 2022-02-16
US20220060635A1 (en) 2022-02-24
JPWO2020137602A1 (en) 2021-11-18
EP3904956A1 (en) 2021-11-03

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19904199

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020563075

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019904199

Country of ref document: EP

Effective date: 20210728