WO2019082775A1 - Image processing device, image processing method, and imaging device - Google Patents

Image processing device, image processing method, and imaging device

Info

Publication number
WO2019082775A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection
unit
area
subject
imaging
Prior art date
Application number
PCT/JP2018/038742
Other languages
French (fr)
Japanese (ja)
Inventor
亮輔 荒木
Original Assignee
Sony Corporation
Priority date
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2019082775A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/34Systems for automatic generation of focusing signals using different areas in a pupil plane
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00Details of cameras or camera bodies; Accessories therefor
    • G03B17/02Bodies
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B7/091Digital circuits
    • G03B7/097Digital circuits for control of both exposure time and aperture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • The present technology relates to an image processing apparatus, an image processing method, and an imaging apparatus, and more particularly to an image processing apparatus and the like that can improve the accuracy of AF control and AE control for a moving subject.
  • Patent Document 1 describes an autofocus (AF) scheme in which a detection area is set at an AF area specified by the user or at a subject recognition position, and contrast is detected within that detection area.
  • If the subject moves when AF is started, the subject departs from the AF area that was set from the subject recognition result obtained immediately before the start of AF.
  • In that case, the contrast of the subject cannot be detected accurately, and AF accuracy deteriorates because of focusing on the background or false focusing.
  • Likewise, if subject recognition is performed first and the AF area is then set using its result, the subject position and the AF area are always shifted relative to each other, and the in-focus rate during continuous shooting drops.
  • The purpose of the present technology is to improve the accuracy of AF control and AE control for a moving subject.
  • According to the present technology, an image processing apparatus includes: a detection processing unit that detects image signals input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to all of the detection areas;
  • a subject detection unit that detects a subject based on the image signal;
  • and a control unit that sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from among the detection values corresponding to all the detection areas generated by the detection processing unit.
  • In the present technology, the detection processing unit detects the image signal input from the imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates a detection value corresponding to each of all the detection areas.
  • The subject detection unit detects the subject based on the image signal.
  • The detection areas may be spread over the entire imaging area.
  • The process of generating the detection values in the detection processing unit and the process of detecting the subject in the subject detection unit may be performed in parallel.
  • The control unit sets the detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from among the detection values corresponding to all the detection areas generated by the detection processing unit.
  • For example, in setting the detection area corresponding to the subject, an area in which the subject is detected may be set as the detection area, or an area excluding the area in which the subject is detected may be set as the detection area.
  • A detection value storage unit that stores the detection values generated by the detection processing unit may be further provided. In this case, the control unit may specify the detection value of the detection area set corresponding to the subject that the subject detection unit detected based on the image signal of a predetermined frame, from among the stored detection values corresponding to all the detection areas generated by detecting the image signal of that same predetermined frame.
  • The detection area may be a ranging area. In this case, the control unit may perform focus control based on the specified detection value.
  • A plurality of phase difference detection areas may further be disposed in the imaging area, and the number of ranging areas may be greater than the number of phase difference detection areas.
  • Alternatively, the detection area may be a photometric area. In this case, the control unit may perform exposure control based on the specified detection value.
  • According to the present technology, the detection area corresponding to the subject is set, based on the subject detection result, from among all of a plurality of detection areas arranged in the imaging area, and the detection value corresponding to the set detection area is specified from among the detection values generated by the detection processing unit for each detection area. Therefore, the detection value for a moving subject can be specified with high accuracy, and the accuracy of AF control or AE control for the moving subject can be improved by using the specified detection value.
  • An imaging apparatus according to the present technology includes: an imaging unit; a detection processing unit that detects image signals input from an imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to all of the detection areas; a subject detection unit that detects a subject based on the image signal; and a control unit that sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from among the detection values corresponding to all the detection areas generated by the detection processing unit.
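  • As an illustrative aid only (not part of the patent disclosure), the following Python sketch shows the data flow described above under simple assumptions: detection-value generation over all detection areas and subject detection run in parallel on the same frame, and the control step then selects the values that correspond to the detected subject; detect_values, detect_subject, and select_for_subject are hypothetical callables supplied by the caller.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame, areas, detect_values, detect_subject, select_for_subject):
    """Run detection-value generation (all areas) and subject detection in parallel
    on the same frame, then let the control step pick the values for the subject."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        values_future = pool.submit(detect_values, frame, areas)   # detection processing unit
        subject_future = pool.submit(detect_subject, frame)        # subject detection unit
        values = values_future.result()
        subject_box = subject_future.result()
    # control unit: specify the detection values of the areas corresponding to the subject
    return select_for_subject(values, areas, subject_box)
```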
  • FIG. 1 is a block diagram illustrating a configuration example of a digital still camera.
  • FIG. 2 is a flowchart showing an example of the procedure of contrast AF processing in the digital still camera of FIG. 1.
  • FIG. 3 is a block diagram showing a configuration example of a digital still camera as a first embodiment.
  • FIG. 4 is a diagram for describing arrangement examples of AF detection areas.
  • FIG. 5 is a diagram for describing an example of a captured image, the arrangement of AF detection areas corresponding thereto, and the AF detection areas (AF detection frames) specified by subject position information.
  • FIG. 6 is a flowchart showing an example of the procedure of contrast AF processing in the digital still camera of FIG. 3.
  • FIG. 7 is a diagram showing an example of a captured image (monitoring camera image) when working with a robot arm.
  • FIG. 8 is a block diagram showing a configuration example of a surveillance camera used when working with a robot arm, as a second embodiment.
  • FIG. 9 is a block diagram illustrating a configuration example of a digital still camera.
  • FIG. 10 is a flowchart showing an example of the procedure of AE processing in the digital still camera of FIG. 9.
  • FIG. 11 is a block diagram showing a configuration example of a digital still camera as a third embodiment.
  • FIG. 12 is a flowchart showing an example of the procedure of AE processing in the digital still camera of FIG. 11.
  • FIG. 13 is a block diagram showing a schematic configuration example of a vehicle control system.
  • FIG. 1 shows a configuration example of a digital still camera 100.
  • The digital still camera 100 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AF detection area setting unit 110, an AF detection processing unit 111, an AF control unit 112, an image recording processing unit 113, and a recording medium 114.
  • the control unit 101 controls the operation of each unit of the digital still camera 100 based on a control program.
  • the operation unit 102 is connected to the control unit 101, configures a user interface that receives various operations by the user, and includes keys, dials, buttons, a touch panel, a remote controller, and the like.
  • the imaging lens 103 includes a plurality of lenses and the like, and condenses light incident from a subject and guides the light to the imaging surface (imaging area) of the imaging element 104.
  • the imaging lens 103 has a focus lens, and by controlling the focus lens, focus control is enabled. Further, the imaging lens 103 has an aperture and a shutter, and exposure control can be performed by controlling the aperture position and the shutter speed.
  • the image sensor 104 is a complementary metal oxide semiconductor (CMOS) image sensor, a charge coupled device (CCD), or the like, which constitutes an imaging unit and has an imaging surface (imaging area) in which a plurality of pixels are arranged in a matrix. Light incident from a subject through the lens 103 is received by the imaging surface.
  • CMOS complementary metal oxide semiconductor
  • CCD charge coupled device
  • The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 further applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, and the image recording processing unit 113.
  • the display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
  • the display unit 107 includes an LCD (Liquid Crystal Display), an organic EL display (OELD: Organic Electroluminescence Display), or the like.
  • the subject recognition unit 108 recognizes a subject to be tracked from the image data, and supplies information such as the feature amount to the subject tracking unit 109 as tracking target information.
  • the designation of the tracking target object is performed by, for example, the touch panel operation of the user.
  • The subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and supplies the subject position information (including shape information) to the display control unit 106 and the AF detection area setting unit 110.
  • the display control unit 106 superimposes and displays a tracking frame on the captured image based on the subject position information.
  • The AF detection area setting unit 110 sets an AF detection area (AF detection frame) as a ranging area at the subject position in the imaging area based on the subject position information supplied from the subject tracking unit 109, and supplies the setting information to the AF detection processing unit 111.
  • the AF detection processing unit 111 acquires an AF detection value (contrast detection value) of the AF detection area from the image data based on setting information of the AF detection area, and supplies the AF detection value to the AF control unit 112.
  • The AF control unit 112 determines the focus lens position at which the highest contrast is shown based on the history of AF detection values obtained as the focus lens is moved, and notifies the imaging lens 103 of the result.
  • In the imaging lens 103, the focus lens is then driven to the focus lens position at which the highest contrast was shown.
  • The image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114.
  • the recording medium 114 is, for example, a recording medium such as a memory card, and can be easily attached to and detached from the digital still camera 100.
  • step ST1 the AF process is started by the user pressing the shutter button halfway.
  • step ST2 the focus lens position of the imaging lens 103 is driven to a position desired to be detected.
  • step ST3 an image signal is read from the imaging element 104, and in step ST4, the signal processing unit 105 generates image data.
  • step ST5 the subject recognition unit 108 recognizes, for example, the subject specified by the user's touch panel operation, and in step ST6, the subject tracking unit 109 performs tracking processing of the subject present in the captured image.
  • the subject position information (including the shape information) indicating the position on the captured image is obtained.
  • step ST7 the AF detection area setting unit 110 sets an AF detection area (AF detection frame) at the object position in the imaging area.
  • step ST8 the AF detection processing unit 111 acquires an AF detection value (contrast detection value) of the AF detection area from the image data.
  • step ST9 the AF control unit 112 determines the in-focus state from the history of AF detection values. Then, in step ST10, when the in-focus determination cannot be made, the process returns to step ST2, the focus lens position of the imaging lens 103 is moved, and the same processing as described above is repeated.
  • step ST11 when the in-focus determination is made, the AF control unit 112 instructs the imaging lens 103 to drive the focus lens to the focus lens position corresponding to the AF detection value for which the in-focus determination was made.
  • step ST12 the focusing lens is driven by the imaging lens 103 to a designated position.
  • step ST13 a series of AF processing is ended.
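  • For illustration only (not the patent's own implementation), the search loop of steps ST2 to ST12 can be sketched as follows in Python; lens_positions, get_af_detection_value, and drive_focus_lens are hypothetical stand-ins for the lens drive and the AF detection processing described above, and the in-focus determination is simplified to picking the position with the highest contrast.

```python
def contrast_af(lens_positions, get_af_detection_value, drive_focus_lens):
    """Sweep the focus lens, record the AF detection value (contrast) at each
    position, then drive the lens to the position that showed the highest contrast."""
    best_position, best_value = None, float("-inf")
    for position in lens_positions:          # step ST2: move the focus lens
        drive_focus_lens(position)
        value = get_af_detection_value()     # steps ST3-ST8: read image, detect contrast
        if value > best_value:               # steps ST9-ST10: update the in-focus candidate
            best_position, best_value = position, value
    drive_focus_lens(best_position)          # steps ST11-ST12: drive to the in-focus position
    return best_position
```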
  • FIG. 3 shows an example of the configuration of a digital still camera 200 according to the first embodiment.
  • The digital still camera 200 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AF control unit 112, an image recording processing unit 113, a recording medium 114, an AF detection processing unit 201, an AF detection storage unit 202, and an AF detection value generation unit 203.
  • the control unit 101 controls the operation of each unit of the digital still camera 200 based on a control program.
  • the operation unit 102 is connected to the control unit 101, configures a user interface that receives various operations by the user, and includes keys, dials, buttons, a touch panel, a remote controller, and the like.
  • the imaging lens 103 includes a plurality of lenses and the like, and condenses light incident from a subject and guides the light to the imaging surface (imaging area) of the imaging element 104.
  • the imaging lens 103 has a focus lens, and by controlling the focus lens, focus control is enabled. Further, the imaging lens 103 has an aperture and a shutter, and exposure control can be performed by controlling the aperture position and the shutter speed.
  • the imaging element 104 is a complementary metal oxide semiconductor (CMOS) image sensor, a charge coupled device (CCD), or the like having an imaging surface in which a plurality of pixels are arranged in a matrix, constituting an imaging unit. Light from the subject is received by the imaging surface.
  • CMOS complementary metal oxide semiconductor
  • CCD charge coupled device
  • The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 further applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, the AF detection processing unit 201, and the image recording processing unit 113.
  • the display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
  • the display unit 107 includes an LCD (Liquid Crystal Display), an organic EL display (OELD: Organic Electroluminescence Display), or the like.
  • the subject recognition unit 108 recognizes a subject to be tracked from the image data, and supplies information such as the feature amount to the subject tracking unit 109 as tracking target information.
  • the designation of the tracking target object is performed by, for example, the touch panel operation of the user.
  • the subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and obtains subject position information (including shape information).
  • the subject tracking unit 109 supplies subject position information to the display control unit 106.
  • the display control unit 106 superimposes and displays a tracking frame on the captured image based on the subject position information.
  • the subject tracking unit 109 also supplies subject position information to the AF detection value generation unit 203 together with an image data ID identifying a frame of image data used when the subject position is determined.
  • The AF detection processing unit 201 acquires, from the image data from the signal processing unit 105, the AF detection values (contrast detection values) of the AF detection areas, which are a plurality of ranging areas arranged in the imaging area.
  • the AF detection area is spread over the entire area of the imaging area.
  • the AF detection area may be spread over all other areas except the periphery of the imaging area.
  • FIGS. 4A and 4B show a state in which rectangular AF detection areas are spread over the entire area of the imaging area
  • FIG. 4A is an example in which the size of the AF detection area is large.
  • FIG. 4B shows an example in which the size of the AF detection area is small.
  • In FIG. 4A, nine (3 × 3) AF detection areas are disposed in the imaging area, and in FIG. 4B, 400 (20 × 20) AF detection areas are disposed in the imaging area.
  • the number of AF detection areas is not limited to this example.
  • FIGS. 4C and 4D show a state in which rectangular AF detection areas are spread over the area excluding the periphery of the imaging area, and FIG. 4C shows an example in which the size of the AF detection area is large.
  • FIG. 4D shows an example in which the size of the AF detection area is small.
  • In FIG. 4C, nine (3 × 3) AF detection areas are arranged, and in FIG. 4D, 400 (20 × 20) AF detection areas are arranged.
  • The number of AF detection areas is not limited to these examples.
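  • As a minimal sketch only (the concrete resolution and margin are arbitrary assumptions, not values from the disclosure), a grid of detection areas such as those of FIG. 4 could be produced as follows.

```python
def tile_detection_areas(width, height, cols, rows, margin=0):
    """Divide the imaging area (optionally excluding a peripheral margin) into
    cols x rows rectangular detection areas, returned as (x, y, w, h) tuples."""
    usable_w, usable_h = width - 2 * margin, height - 2 * margin
    w, h = usable_w // cols, usable_h // rows
    return [(margin + c * w, margin + r * h, w, h)
            for r in range(rows) for c in range(cols)]

# e.g. 3 x 3 = 9 areas over the whole imaging area (as in FIG. 4A),
# or 20 x 20 = 400 areas excluding the periphery (as in FIG. 4D)
areas_3x3 = tile_detection_areas(1920, 1080, 3, 3)
areas_20x20 = tile_detection_areas(1920, 1080, 20, 20, margin=64)
```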
  • The AF detection processing unit 201 supplies the AF detection values acquired in all of the AF detection areas arranged in the imaging area, that is, the AF detection value group, to the AF detection storage unit 202 together with an image data ID identifying the frame of image data used to obtain the detection values, and the values are temporarily stored.
  • Based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AF detection value generation unit 203 takes out from the AF detection storage unit 202, among the detection value group associated with the same image data ID, the AF detection values of a predetermined number of AF detection areas corresponding to the subject position specified by the subject position information, and integrates (averages) them to generate an AF detection value (integral AF detection value) at the subject position.
  • FIG. 5A shows an example of a captured image
  • the captured image includes the subject OB.
  • FIG. 5B shows an example of arrangement of the captured image and the AF detection area DE spread in the imaging area.
  • FIG. 5C indicates that, when the AF detection areas DE are disposed as shown in FIG. 5B, the AF detection areas DE within the range indicated by the arrow P are specified as corresponding to the subject based on the subject position information.
  • FIG. 5D also shows an arrangement example of the captured image and the AF detection area DE spread in the imaging area.
  • the size of the AF detection area DE is smaller than that of FIG. 5B, and the number of AF detection areas DE is increased accordingly.
  • FIG. 5E indicates that, when the AF detection areas DE are disposed as shown in FIG. 5D, the AF detection areas DE within the range indicated by the arrow P are specified as corresponding to the subject based on the subject position information.
  • The method of setting the predetermined number of AF detection areas corresponding to the subject position specified by the subject position information is not particularly limited.
  • For example, all of the AF detection areas in which the subject is detected based on the subject position information may be set as the predetermined number of AF detection areas to be specified.
  • Alternatively, among the AF detection areas in which the subject is detected, only areas in which the ratio occupied by the subject information is a certain value or more, for example 50% or more, may be set as the predetermined number of AF detection areas to be specified.
  • Further, areas that include information other than the subject (such as the background) may also be set as the predetermined number of AF detection areas to be specified.
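  • The following Python sketch is an illustrative interpretation (not the disclosed implementation) of how a predetermined number of AF detection areas corresponding to the subject could be selected and integrated; the overlap-ratio threshold mirrors the 50% option mentioned above, and all names are hypothetical.

```python
def integral_af_detection_value(values, areas, subject_box, min_overlap_ratio=0.5):
    """Average the AF detection values of the detection areas whose overlap with the
    subject box covers at least min_overlap_ratio of the area's own surface."""
    sx, sy, sw, sh = subject_box
    selected = []
    for (x, y, w, h), value in zip(areas, values):
        ix = max(0, min(x + w, sx + sw) - max(x, sx))   # horizontal overlap
        iy = max(0, min(y + h, sy + sh) - max(y, sy))   # vertical overlap
        if ix * iy > 0 and (ix * iy) / (w * h) >= min_overlap_ratio:
            selected.append(value)
    return sum(selected) / len(selected) if selected else None

# usage: areas from the detection grid, values from the AF detection processing unit,
# subject_box = (x, y, w, h) from the subject tracking result of the same frame
```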
  • the AF detection value generation unit 203 supplies the generated AF detection value to the AF control unit 112.
  • The AF control unit 112 determines the focus lens position at which the highest contrast is shown based on the history of AF detection values obtained as the focus lens is moved, and notifies the imaging lens 103 of the result.
  • In the imaging lens 103, the focus lens is then driven to the focus lens position at which the highest contrast was shown.
  • The image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114.
  • the recording medium 114 is, for example, a recording medium such as a memory card, and can be easily attached to and detached from the digital still camera 200.
  • The flowchart of FIG. 6 shows an example of the procedure of contrast AF processing in the digital still camera 200 of FIG. 3.
  • step ST21 the AF process is started by the user pressing the shutter button halfway.
  • step ST22 the focus lens position of the imaging lens 103 is driven to a position desired to be detected.
  • step ST23 an image signal is read from the imaging element 104, and in step ST24, the signal processing unit 105 generates image data.
  • Next, the processes of step ST25 and step ST26 and the processes of step ST27 and step ST28 are performed in parallel.
  • step ST25 the subject recognition unit 108 recognizes, for example, a subject specified by the user's touch panel operation, and in step ST26, the subject tracking unit 109 determines the subject position and obtains subject position information (including shape information).
  • step ST27 the AF detection processing unit 201 acquires the AF detection values (contrast detection values) of the plurality of AF detection areas arranged in the imaging area, and in step ST28, the detection values of all the detection areas, that is, the detection value group, are stored in the AF detection storage unit 202 together with an image data ID identifying the frame of image data used to obtain them.
  • step ST29 based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AF detection value generation unit 203 acquires from the AF detection storage unit 202, among the detection value group associated with the same image data ID, the AF detection values of a predetermined number of AF detection areas corresponding to the subject position specified by the subject position information, and integrates (averages) them to generate the AF detection value (integral AF detection value) at the subject position.
  • step ST30 the AF control unit 112 determines the in-focus state from the history of the AF detection values. Then, in step ST31, when it cannot be determined that the image is in focus, the processing returns to step ST22, the focus lens position of the imaging lens 103 is moved, and the same processing as described above is repeated.
  • step ST32 when it is determined that the image is in focus, the imaging lens 103 is instructed to drive the focus lens to the focus lens position of the AF detection value determined to be in focus by the AF control unit 112.
  • step ST33 the focusing lens is driven by the imaging lens 103 to the designated position.
  • step ST34 a series of AF processing is ended.
  • If the processing times of the AF detection process (step ST27) and the subject detection process (step ST26) in the flowchart of FIG. 6 coincide with each other, the detection value storage process (step ST28) becomes unnecessary.
  • However, the processing time of the subject detection process (step ST26) is longer than that of the AF detection process (step ST27). Therefore, when the AF detection value generation unit 203 generates an AF detection value, the AF detection storage unit 202 is required to temporarily store the AF detection values acquired for each AF detection area so that AF detection values whose timing matches the subject detection timing can be used.
  • As described above, the subject position detection process and the AF detection process are performed in parallel, and the AF detection values are stored in the AF detection storage unit 202, so that it is possible to refer to the AF detection values generated from the image data of the same frame as the frame used to detect the subject position. Therefore, the AF detection area is not always shifted with respect to a moving subject, the AF detection value for the subject can be obtained correctly, and high-accuracy AF control can be performed.
  • In addition, since the AF detection areas are spread over all or almost all of the imaging area (see FIG. 4), a predetermined number of AF detection areas corresponding to the subject position can be specified appropriately, the AF detection value for the subject can be obtained accurately, and AF control can be performed with high accuracy.
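  • A minimal sketch (assuming a simple dictionary keyed by image data ID; the capacity and names are hypothetical) of the role played by the AF detection storage unit 202: the detection side stores a detection value group per frame, and the slower subject-detection side later retrieves the group for the very frame it analysed.

```python
class DetectionValueStore:
    """Keep detection value groups per image data ID so that values from the same
    frame as the subject detection result can be referenced after the fact."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self._store = {}                       # image_data_id -> detection value group

    def put(self, image_data_id, detection_values):
        self._store[image_data_id] = list(detection_values)
        while len(self._store) > self.capacity:
            self._store.pop(next(iter(self._store)))   # drop the oldest frame

    def get(self, image_data_id):
        return self._store.get(image_data_id)

# detection side:           store.put(frame_id, af_values)          (steps ST27-ST28)
# generation side (later):  values = store.get(subject_frame_id)    (step ST29)
```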
  • Second embodiment [Configuration example of surveillance camera]
  • In the first embodiment described above, a predetermined number of AF detection areas corresponding to the subject position are appropriately specified based on the subject position information obtained by subject detection, and AF control is performed by generating an AF detection value (integral AF detection value) using the AF detection values of that predetermined number of AF detection areas.
  • Conversely, it is also conceivable to specify a predetermined number of AF detection areas excluding the subject position, and to perform AF control by generating an AF detection value (integral AF detection value) using the AF detection values of that predetermined number of AF detection areas excluding the subject position.
  • FIG. 7A shows a captured image (monitoring camera image) when working with the robot arm.
  • robot arms S1 and S3 and a work target S2 to be worked by the robot arms S1 and S3 exist.
  • Contrast AF easily focuses on the robot arms S1 and S3 because it tends to focus on the closest object.
  • the AF detection area (AF detection frame) DE is arranged at high density in the imaging area. Then, the robot arms S1 and S3 are recognized by image recognition, and as shown in FIG. 7C, the detected values of the predetermined number of AF detection areas DE excluding the recognized robot arms S1 and S3 are integrated. Thus, an AF detection value (integral AF detection value) is generated, and AF control is performed using this AF detection value. As a result, it becomes possible to focus on the work target S2 without being influenced by the robot arms S1 and S3 present on the near side.
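  • As an illustrative sketch only (the overlap test and names are assumptions, not the disclosed implementation), the exclusion described above can be expressed as averaging the detection values of the areas that do not overlap any recognized robot-arm region.

```python
def integral_value_excluding(values, areas, excluded_boxes):
    """Average the detection values of the detection areas that do not overlap any
    excluded region (e.g. the recognized robot arms S1 and S3)."""
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
    kept = [v for a, v in zip(areas, values)
            if not any(overlaps(a, box) for box in excluded_boxes)]
    return sum(kept) / len(kept) if kept else None
```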
  • FIG. 8 shows a configuration example of a monitoring camera 300 when working with a robot arm according to the second embodiment.
  • The monitoring camera 300 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, an AF detection processing unit 201, an AF detection storage unit 202, an AF control unit 112, an image recording processing unit 113, a recording medium 114, an object recognition unit 301, an object tracking unit 302, and an AF detection value generation unit 303.
  • the control unit 101 controls the operation of each unit of the monitoring camera 300 based on the control program.
  • the operation unit 102 is connected to the control unit 101, and constitutes a user interface that receives various operations by the user.
  • the imaging lens 103 includes a plurality of lenses and the like, and condenses light incident from a subject and guides the light to the imaging surface (imaging area) of the imaging element 104.
  • the imaging lens 103 has a focus lens, and by controlling the focus lens, focus control is enabled. Further, the imaging lens 103 has an aperture and a shutter, and exposure control can be performed by controlling the aperture position and the shutter speed.
  • The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 further applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the object recognition unit 301, the AF detection processing unit 201, and the image recording processing unit 113. The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
  • The object recognition unit 301 recognizes, from the image data, a robot arm to be excluded, and supplies information such as its feature amount to the object tracking unit 302 as tracking target information.
  • the object tracking unit 302 detects an object position, in this case, a robot arm position from image data using tracking target information from the object recognition unit 301, and obtains object position information (including shape information).
  • the object tracking unit 302 supplies object position information to the display control unit 106.
  • the display control unit 106 superimposes and displays a tracking frame on the captured image based on the object position information. Further, the object tracking unit 302 supplies the object position information to the AF detection value generation unit 303 together with the image data ID identifying the frame of the image data used when the object position is determined.
  • The AF detection processing unit 201 acquires, from the image data from the signal processing unit 105, the AF detection values (contrast detection values) of the plurality of AF detection areas (see FIG. 4) arranged in the imaging area.
  • the AF detection processing unit 201 supplies an AF detection value group to the AF detection storage unit 202 together with an image data ID identifying a frame of image data used when obtaining the detection value, and temporarily stores the AF detection value group.
  • Based on the object position information (including shape information) and the image data ID supplied from the object tracking unit 302, the AF detection value generation unit 303 takes out from the AF detection storage unit 202, among the detection value group associated with the same image data ID, the AF detection values of a predetermined number of AF detection areas excluding the object position specified by the object position information, and integrates (averages) them to generate an AF detection value (integral AF detection value) for the areas other than the robot arms.
  • The method of setting the predetermined number of AF detection areas excluding the object position specified by the object position information is not particularly limited. For example, all the AF detection areas excluding those in which the object is detected may be set as the predetermined number of AF detection areas to be specified. Alternatively, among the AF detection areas in which the object is detected, only areas in which the ratio occupied by the object position information is a certain value or more, for example 50% or more, may be excluded from the predetermined number of AF detection areas to be specified.
  • the AF detection value generation unit 303 supplies the generated AF detection value to the AF control unit 112.
  • The AF control unit 112 determines the focus lens position at which the highest contrast is shown based on the history of AF detection values obtained as the focus lens is moved, and notifies the imaging lens 103 of the result.
  • In the imaging lens 103, the focus lens is then driven to the focus lens position at which the highest contrast was shown.
  • The image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the MPEG (Moving Picture Experts Group) method, and records the compressed image data on the recording medium 114 as surveillance image data.
  • As described above, the object position detection process and the AF detection process are performed in parallel, and the AF detection values are stored in the AF detection storage unit 202, so that it is possible to refer to the AF detection values generated from the image data of the same frame as the frame used to detect the object position. Therefore, even if the object moves, the AF detection values for the areas other than the object position, that is, the areas other than the robot arms, can be obtained correctly, and AF control for focusing on the areas other than the robot arms can be performed with high accuracy.
  • FIG. 9 shows a configuration example of the digital still camera 400.
  • The digital still camera 400 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AE detection area setting unit 401, an AE detection processing unit 402, an AE control unit 403, an image recording processing unit 113, and a recording medium 114.
  • the control unit 101 controls the operation of each unit of the digital still camera 400 based on a control program.
  • the operation unit 102 is connected to the control unit 101, and constitutes a user interface that receives various operations by the user.
  • the imaging lens 103 includes a plurality of lenses and the like, and condenses light incident from a subject and guides the light to the imaging surface (imaging area) of the imaging element 104.
  • the imaging lens 103 has a focus lens, and by controlling the focus lens, focus control is enabled. Further, the imaging lens 103 has an aperture and a shutter, and exposure control can be performed by controlling the aperture position and the shutter speed.
  • The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 further applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, the AE detection processing unit 402, and the image recording processing unit 113. The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
  • the subject recognition unit 108 recognizes a subject to be tracked from the image data, and supplies information such as the feature amount to the subject tracking unit 109 as tracking target information.
  • The subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and supplies the subject position information (including shape information) to the display control unit 106 and the AE detection area setting unit 401.
  • the display control unit 106 superimposes and displays a tracking frame on the captured image based on the subject position information.
  • The AE detection area setting unit 401 sets an AE detection area (AE detection frame) as a photometric area at the subject position in the imaging area based on the subject position information supplied from the subject tracking unit 109, and supplies the setting information to the AE detection processing unit 402.
  • the AE detection processing unit 402 acquires an AE detection value (brightness level value) of the AE detection area from the image data based on setting information of the AE detection area, and supplies the AE detection value to the AE control unit 403.
  • the AE control unit 403 determines the aperture position and shutter speed of the imaging lens 103 and the gain of the signal processing unit 105 so that an appropriate exposure can be obtained from the AE detection value. Then, the AE control unit 403 notifies the imaging lens 103 of the determined aperture position and shutter speed, and controls the aperture position and shutter speed of the imaging lens 103. Further, the AE control unit 403 notifies the signal processing unit 105 of the determined gain, and controls the gain in the signal processing unit 105.
  • The image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114.
  • the recording medium 114 is, for example, a recording medium such as a memory card, and can be easily attached to and detached from the digital still camera 400.
  • step ST41 for example, the AE process is started by turning on the power. Note that, thereafter, the processing of this flowchart is repeated for each frame.
  • step ST42 an image signal is read from the imaging element 104, and in step ST43, the signal processing unit 105 generates image data.
  • step ST44 the subject recognition unit 108 recognizes, for example, the subject specified by the user's touch panel operation, and in step ST45, the subject tracking unit 109 detects the subject position from the image data, and subject position information (including shape information) indicating the position on the captured image is obtained.
  • step ST46 the AE detection area setting unit 401 sets an AE detection area (AE detection frame) at the object position in the imaging area.
  • step ST47 the AE detection processing unit 402 acquires an AE detection value of the AE detection area from the image data.
  • step ST48 the AE control unit 403 determines the aperture position, shutter speed, and gain so as to obtain an appropriate exposure from the AE detection value. Then, in step ST49, the AE control unit 403 instructs the imaging lens 103 on the determined diaphragm position and shutter speed, and instructs the signal processing unit 105 on the determined gain.
  • step ST50 driving to the instructed aperture position and setting of the instructed shutter speed are performed by the imaging lens 103.
  • step ST51 the signal processing unit 105 performs setting of the instructed gain.
  • step ST52 a series of AE processing is ended.
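  • For illustration only (the target level, the EV split, and all names are assumptions, not the disclosed AE program), the exposure adjustment derived from an AE detection value (brightness level) in steps ST48 to ST51 can be sketched as follows.

```python
import math

def ae_correction_ev(ae_detection_value, target_level=118.0):
    """EV steps needed so the measured brightness level reaches the target level
    (118 on an 8-bit scale stands in for 'proper exposure')."""
    return math.log2(target_level / max(ae_detection_value, 1e-6))

def apply_correction(aperture_f, shutter_s, gain_iso, delta_ev):
    """Naive policy: apply the whole correction to shutter speed; a real AE program
    would distribute it over aperture position, shutter speed, and gain."""
    return aperture_f, shutter_s * (2.0 ** delta_ev), gain_iso

# example: an AE detection value of 40 is about 1.56 EV below the target,
# so the shutter time is lengthened accordingly.
print(apply_correction(2.8, 1 / 250, 100, ae_correction_ev(40.0)))
```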
  • FIG. 11 shows a configuration example of a digital still camera 500 according to the third embodiment.
  • The digital still camera 500 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AE detection processing unit 501, an AE detection storage unit 502, an AE detection value generation unit 503, an AE control unit 403, an image recording processing unit 113, and a recording medium 114.
  • the control unit 101 controls the operation of each unit of the digital still camera 500 based on a control program.
  • the operation unit 102 is connected to the control unit 101, and constitutes a user interface that receives various operations by the user.
  • the imaging lens 103 includes a plurality of lenses and the like, and condenses light incident from a subject and guides the light to the imaging surface (imaging area) of the imaging element 104.
  • the imaging lens 103 has a focus lens, and by controlling the focus lens, focus control is enabled. Further, the imaging lens 103 has an aperture and a shutter, and exposure control can be performed by controlling the aperture position and the shutter speed.
  • The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 further applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, the AE detection processing unit 501, and the image recording processing unit 113. The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
  • the subject recognition unit 108 recognizes a subject from the image data, and supplies information such as its feature amount to the subject tracking unit 109 as tracking target information.
  • the subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and obtains subject position information (including shape information).
  • the subject tracking unit 109 supplies subject position information to the display control unit 106.
  • the display control unit 106 superimposes and displays a tracking frame on the captured image based on the subject position information.
  • the subject tracking unit 109 also supplies subject position information to the AE detection value generation unit 503 together with an image data ID identifying a frame of image data used when the subject position is determined.
  • The AE detection processing unit 501 acquires, from the image data from the signal processing unit 105, the AE detection values (brightness level values) of the AE detection areas, which are a plurality of photometric areas spread in the imaging area (arranged similarly to the AF detection areas shown in FIG. 4). The AE detection processing unit 501 supplies the AE detection value group to the AE detection storage unit 502 together with an image data ID identifying the frame of image data used to obtain the detection values, and the values are temporarily stored.
  • Based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AE detection value generation unit 503 takes out from the AE detection storage unit 502, among the detection value group associated with the same image data ID, the AE detection values of a predetermined number of AE detection areas corresponding to the subject position specified by the subject position information, and integrates (averages) them to generate an AE detection value (integral AE detection value) at the subject position.
  • The method of setting the predetermined number of AE detection areas corresponding to the subject position specified by the subject position information is not particularly limited. For example, all AE detection areas in which the subject is detected based on the subject position information may be set as the predetermined number of AE detection areas to be specified. Alternatively, among the AE detection areas in which the subject is detected, only areas in which the ratio occupied by the subject information is a certain value or more, for example 50% or more, may be set as the predetermined number of AE detection areas to be specified. Further, among the AE detection areas in which the subject is detected, areas that include information other than the subject (such as the background) may also be set as the predetermined number of AE detection areas to be specified.
  • the AE detection value generation unit 503 supplies the generated AE detection value to the AE control unit 403.
  • the AE control unit 403 determines the aperture position and shutter speed of the imaging lens 103 and the gain of the signal processing unit 105 so that an appropriate exposure can be obtained from the AE detection value. Then, the AE control unit 403 notifies the imaging lens 103 of the determined aperture position and shutter speed, and controls the aperture position and shutter speed of the imaging lens 103. Further, the AE control unit 403 notifies the signal processing unit 105 of the determined gain, and controls the gain in the signal processing unit 105.
  • The image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114.
  • the recording medium 114 is, for example, a recording medium such as a memory card, and can be easily attached to and detached from the digital still camera 500.
  • the flowchart of FIG. 12 shows an example of the procedure of the AE processing in the digital still camera 500 of FIG.
  • step ST61 for example, the AE process is started by turning on the power. Note that, thereafter, the processing of this flowchart is repeated for each frame.
  • step ST62 an image signal is read from the imaging element 104, and in step ST63, the signal processing unit 105 generates image data.
  • Next, the processes of steps ST64 and ST65 and the processes of steps ST66 and ST67 are performed in parallel.
  • step ST64 the subject recognition unit 108 recognizes, for example, the subject specified by the user's touch panel operation, and in step ST65, the subject tracking unit 109 detects the subject position from the image data, and subject position information (including shape information) indicating the position on the captured image is obtained.
  • step ST66 the AE detection processing unit 501 acquires the AE detection values of the plurality of AE detection areas arranged in the imaging area, and in step ST67, the detection values of all the detection areas, that is, the detection value group, are stored in the AE detection storage unit 502 together with an image data ID identifying the frame of image data used to obtain them.
  • step ST68 based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AE detection value generation unit 503 acquires from the AE detection storage unit 502, among the detection value group associated with the same image data ID, the AE detection values of a predetermined number of AE detection areas corresponding to the subject position specified by the subject position information, and integrates (averages) them to generate the AE detection value (integral AE detection value) at the subject position.
  • step ST69 the AE control unit 403 determines, from the AE detection value, the aperture position, the shutter speed, and the gain such that the appropriate exposure can be obtained. Then, in step ST70, the AE control unit 403 instructs the imaging lens 103 of the determined diaphragm position and shutter speed, and instructs the signal processing unit 105 of the determined gain.
  • step ST71 drive to the instructed aperture position and setting of the instructed shutter speed are performed by the imaging lens 103.
  • step ST72 the signal processing unit 105 performs setting of the instructed gain.
  • step ST73 a series of AE processing is ended.
  • As described above, the subject position detection process and the AE detection process are performed in parallel, and the AE detection values are stored in the AE detection storage unit 502, so that it is possible to refer to the AE detection values generated from the image data of the same frame as the frame used to detect the subject position. Therefore, the AE detection area is not always shifted with respect to a moving subject, the AE detection value for the subject can be obtained correctly, and highly accurate AE control becomes possible.
  • In addition, since the AE detection areas are spread over the entire area or almost the entire area of the imaging area (the same arrangement as the AF detection areas shown in FIG. 4), a predetermined number of AE detection areas corresponding to the subject position can be specified appropriately, the AE detection value for the subject can be obtained with high accuracy, and AE control can be performed with high accuracy.
  • the technology according to the present disclosure can be applied to various products.
  • The technology according to the present disclosure may be realized as a device mounted on any type of mobile object such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 13 is a block diagram showing a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • Vehicle control system 12000 includes a plurality of electronic control units connected via communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an external information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (Interface) 12053 are illustrated as a functional configuration of the integrated control unit 12050.
  • The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • Body system control unit 12020 controls the operation of various devices equipped on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device of various lamps such as a headlamp, a back lamp, a brake lamp, a blinker or a fog lamp.
  • The body system control unit 12020 may receive radio waves transmitted from a portable device substituting for a key, or signals of various switches.
  • The body system control unit 12020 receives the input of these radio waves or signals, and controls the door lock device, power window device, lamps, and the like of the vehicle.
  • Outside vehicle information detection unit 12030 detects information outside the vehicle equipped with vehicle control system 12000.
  • an imaging unit 12031 is connected to the external information detection unit 12030.
  • the out-of-vehicle information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle, and receives the captured image.
  • the external information detection unit 12030 may perform object detection processing or distance detection processing of a person, a vehicle, an obstacle, a sign, characters on a road surface, or the like based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of light received.
  • the imaging unit 12031 can output an electric signal as an image or can output it as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared light.
  • In-vehicle information detection unit 12040 detects in-vehicle information.
  • a driver state detection unit 12041 that detects a state of a driver is connected to the in-vehicle information detection unit 12040.
  • The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is falling asleep.
  • The microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the outside-vehicle information detection unit 12030 or the in-vehicle information detection unit 12040, and output a control command to the drive system control unit 12010.
  • The microcomputer 12051 can perform cooperative control for the purpose of automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the outside-vehicle information detection unit 12030 or the in-vehicle information detection unit 12040.
  • The microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the outside-vehicle information detection unit 12030.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of antiglare, such as switching the high beam to the low beam, by controlling the headlamp according to the position of a preceding vehicle or an oncoming vehicle detected by the outside-vehicle information detection unit 12030.
  • the audio image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or aurally notifying information to a passenger or the outside of a vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as the output device.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 14 is a diagram illustrating an example of the installation position of the imaging unit 12031.
  • imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, on the front nose of the vehicle 12100, a side mirror, a rear bumper, a back door, an upper portion of a windshield of a vehicle interior, and the like.
  • the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle cabin mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 included in the side mirror mainly acquire an image of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the imaging unit 12105 provided on the top of the windshield in the passenger compartment is mainly used to detect a leading vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 14 shows an example of the imaging range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided in the rear bumper or the back door. For example, by overlaying the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging devices, or an imaging device having pixels for phase difference detection.
  • The microcomputer 12051 obtains the distance to each three-dimensional object in the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative velocity with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, a three-dimensional object traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
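The preceding-vehicle extraction described above can be pictured with a short sketch. The data layout, the angular margin, and the selection of the nearest candidate are illustrative assumptions, not values or rules taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float           # current distance from the own vehicle
    relative_speed_kmh: float   # rate of change of that distance (negative = closing)
    heading_offset_deg: float   # angular difference from the own travel direction

def select_preceding_vehicle(objects, own_speed_kmh):
    """Pick the nearest object moving in roughly the same direction as the own
    vehicle at a speed of 0 km/h or more, as the preceding-vehicle candidate."""
    candidates = [
        o for o in objects
        if abs(o.heading_offset_deg) < 10.0               # roughly same direction (assumed margin)
        and own_speed_kmh + o.relative_speed_kmh >= 0.0   # object speed >= 0 km/h
    ]
    return min(candidates, key=lambda o: o.distance_m, default=None)

# Example: of two detected objects, the closer same-direction one is selected.
objs = [TrackedObject(45.0, -3.0, 2.0), TrackedObject(80.0, 0.0, 1.0)]
print(select_preceding_vehicle(objs, own_speed_kmh=60.0))
```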
  • The microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
  • The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of a collision, it can perform driving support for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
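A minimal sketch of the collision-risk check described above, assuming time-to-collision as the risk measure and arbitrary warning/braking thresholds; the disclosure does not specify how the risk value is computed.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until collision if the closing speed stays constant; None if not closing."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

def collision_support_action(distance_m, closing_speed_mps, warn_ttc=3.0, brake_ttc=1.5):
    """Return the driving-support action: warn the driver first, then force braking."""
    ttc = time_to_collision(distance_m, closing_speed_mps)
    if ttc is None:
        return "none"
    if ttc < brake_ttc:
        return "forced_braking"   # e.g. commanded via the drive system control unit
    if ttc < warn_ttc:
        return "warn_driver"      # e.g. via the audio speaker or the display unit
    return "none"

print(collision_support_action(20.0, 8.0))   # ~2.5 s to collision -> warn_driver
print(collision_support_action(10.0, 8.0))   # ~1.25 s -> forced_braking
```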
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light.
  • the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104.
  • Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
  • When a pedestrian is recognized, the audio image output unit 12052 controls the display unit 12062 so that a square contour line for highlighting the recognized pedestrian is superimposed and displayed. Further, the audio image output unit 12052 may control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
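For illustration only, the sketch below uses OpenCV's stock HOG people detector as a stand-in for the feature-point extraction and pattern-matching pipeline described above (a different technique from the one the text names), and superimposes a rectangular contour on each detection. It assumes the opencv-python and numpy packages.

```python
import cv2
import numpy as np

# Stock HOG + linear SVM people detector, used here only as a stand-in for the
# feature-extraction / pattern-matching procedure described in the text.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def highlight_pedestrians(image_bgr):
    """Detect pedestrians and superimpose a rectangular contour on each one."""
    rects, _weights = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    for (x, y, w, h) in rects:
        cv2.rectangle(image_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return image_bgr

# Example with a synthetic frame; a real camera frame would be used in practice.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
out = highlight_pedestrians(frame)
```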
  • The technology according to the present disclosure can also be used to obtain distance information to a subject (a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like) from the imaging units 12101 to 12104.
  • The method of measuring the distance to the subject based on the detection value detected from the distance measurement area is not particularly limited, but the following known method may be used. That is, the imaging unit has a distance measurement window and can measure the distance to the subject by triangulation based on the detection value detected from the distance measurement area.
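The triangulation relation referred to here can be written compactly; the variable names below (baseline, focal length in pixels, disparity) are generic stereo-triangulation terms and are not taken from the disclosure.

```python
def distance_by_triangulation(baseline_m, focal_px, disparity_px):
    """Classic triangulation: Z = f * B / d.
    baseline_m   : distance between the two viewpoints (m)
    focal_px     : focal length expressed in pixels
    disparity_px : shift of the subject between the two views (pixels)"""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Example: 12 cm baseline, 800 px focal length, 16 px disparity -> 6.0 m
print(distance_by_triangulation(0.12, 800.0, 16.0))
```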
  • the technology according to the present disclosure (the present technology) can be applied to various products.
  • the technology according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 15 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technology (the present technology) according to the present disclosure can be applied.
  • FIG. 15 illustrates a surgeon (doctor) 11131 performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000.
  • The endoscopic surgery system 11000 includes an endoscope 11100, other surgical instruments 11110 such as an insufflation tube 11111 and an energy treatment instrument 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
  • the endoscope 11100 includes a lens barrel 11101 whose region of a predetermined length from the tip is inserted into a body cavity of a patient 11132, and a camera head 11102 connected to a proximal end of the lens barrel 11101.
  • Although the endoscope 11100 configured as a so-called rigid endoscope having a rigid lens barrel 11101 is illustrated here, the endoscope 11100 may instead be configured as a so-called flexible endoscope having a flexible lens barrel.
  • The endoscope 11100 may be a forward-viewing endoscope, or may be an oblique-viewing endoscope or a side-viewing endoscope.
  • An optical system and an imaging device are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is condensed on the imaging device by the optical system.
  • the observation light is photoelectrically converted by the imaging element to generate an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image.
  • the image signal is transmitted as RAW data to a camera control unit (CCU: Camera Control Unit) 11201.
  • the CCU 11201 is configured by a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and centrally controls the operations of the endoscope 11100 and the display device 11202. Furthermore, the CCU 11201 receives an image signal from the camera head 11102 and performs various image processing for displaying an image based on the image signal, such as development processing (demosaicing processing), on the image signal.
  • the display device 11202 displays an image based on an image signal subjected to image processing by the CCU 11201 under control of the CCU 11201.
  • the light source device 11203 includes, for example, a light source such as an LED (light emitting diode), and supplies the endoscope 11100 with irradiation light at the time of imaging an operation part or the like.
  • the input device 11204 is an input interface to the endoscopic surgery system 11000.
  • the user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204.
  • the user inputs an instruction to change the imaging condition (type of irradiated light, magnification, focal length, and the like) by the endoscope 11100, and the like.
  • the treatment tool control device 11205 controls the drive of the energy treatment tool 11112 for ablation of tissue, incision, sealing of a blood vessel, and the like.
  • The insufflation apparatus 11206 sends gas into the body cavity of the patient 11132 via the insufflation tube 11111 in order to inflate the body cavity for the purpose of securing the field of view of the endoscope 11100 and securing a working space for the operator.
  • The recorder 11207 is a device capable of recording various types of information regarding surgery.
  • the printer 11208 is an apparatus capable of printing various types of information regarding surgery in various types such as text, images, and graphs.
  • The light source device 11203, which supplies irradiation light to the endoscope 11100 when imaging the surgical site, can be configured from, for example, an LED, a laser light source, or a white light source configured by a combination of these. When a white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision.
  • It is also possible to irradiate the observation target with the laser light from each of the RGB laser light sources in a time-division manner and to control the driving of the image sensor of the camera head 11102 in synchronization with the irradiation timing, thereby capturing images corresponding to each of R, G, and B in a time-division manner. According to this method, a color image can be obtained without providing a color filter in the image sensor.
  • the drive of the light source device 11203 may be controlled so as to change the intensity of the light to be output every predetermined time.
  • By controlling the driving of the image sensor of the camera head 11102 in synchronization with the timing of the change of the light intensity to acquire images in a time-division manner, and combining those images, an image with a high dynamic range free from so-called blocked-up shadows (blackout) and blown-out highlights (whiteout) can be generated.
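The time-division high-dynamic-range idea can be illustrated as follows; the simple hat-shaped pixel weighting and exposure normalization used here are assumptions for the sketch, not the method of the disclosure.

```python
import numpy as np

def merge_time_division_hdr(frames, exposures):
    """Combine frames captured under different illumination intensities / exposures
    into one radiance map, weighting mid-range pixels most so that blocked-up
    shadows and blown-out highlights contribute little."""
    frames = [f.astype(np.float32) for f in frames]
    acc = np.zeros_like(frames[0])
    weight_sum = np.zeros_like(frames[0])
    for frame, exposure in zip(frames, exposures):
        w = 1.0 - np.abs(frame / 255.0 - 0.5) * 2.0  # hat weight: 0 at the extremes
        acc += w * frame / exposure                   # normalize by relative exposure
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)

# Example: a short and a long exposure of the same scene.
short = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
long_ = np.clip(short.astype(np.int32) * 4, 0, 255).astype(np.uint8)
radiance = merge_time_division_hdr([short, long_], exposures=[1.0, 4.0])
```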
  • the light source device 11203 may be configured to be able to supply light of a predetermined wavelength band corresponding to special light observation.
  • In special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as a blood vessel in the mucosal surface layer is imaged with high contrast by irradiating light of a narrower band than the irradiation light at the time of normal observation (that is, white light), utilizing the wavelength dependence of light absorption in body tissue.
  • fluorescence observation may be performed in which an image is obtained by fluorescence generated by irradiation with excitation light.
  • In fluorescence observation, the body tissue can be irradiated with excitation light and the fluorescence from the body tissue observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) can be locally injected into the body tissue and the body tissue irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image.
  • the light source device 11203 can be configured to be able to supply narrow band light and / or excitation light corresponding to such special light observation.
  • FIG. 16 is a block diagram showing an example of a functional configuration of the camera head 11102 and the CCU 11201 shown in FIG. 15.
  • the camera head 11102 includes a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • the CCU 11201 includes a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
  • the lens unit 11401 is an optical system provided at a connection portion with the lens barrel 11101.
  • the observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and is incident on the lens unit 11401.
  • the lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
  • the imaging device constituting the imaging unit 11402 may be one (a so-called single-plate type) or a plurality (a so-called multi-plate type).
  • When the imaging unit 11402 is configured as a multi-plate type, for example, image signals corresponding to each of RGB may be generated by the respective imaging elements, and a color image may be obtained by combining them.
  • the imaging unit 11402 may be configured to have a pair of imaging devices for acquiring image signals for right eye and left eye corresponding to 3D (dimensional) display. By performing 3D display, the operator 11131 can more accurately grasp the depth of the living tissue in the operation site.
  • a plurality of lens units 11401 may be provided corresponding to each imaging element.
  • the imaging unit 11402 may not necessarily be provided in the camera head 11102.
  • the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
  • the driving unit 11403 is configured by an actuator, and moves the zoom lens and the focusing lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. Thereby, the magnification and the focus of the captured image by the imaging unit 11402 can be appropriately adjusted.
  • the communication unit 11404 is configured of a communication device for transmitting and receiving various types of information to and from the CCU 11201.
  • the communication unit 11404 transmits the image signal obtained from the imaging unit 11402 to the CCU 11201 as RAW data via the transmission cable 11400.
  • the communication unit 11404 also receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies the control signal to the camera head control unit 11405.
  • The control signal includes information about imaging conditions, such as information that designates the frame rate of the captured image, information that designates the exposure value at the time of imaging, and/or information that designates the magnification and focus of the captured image.
  • The imaging conditions such as the frame rate, exposure value, magnification, and focus described above may be designated by the user as appropriate, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function are incorporated in the endoscope 11100.
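As one concrete example of deriving such a setting from the acquired image signal, the sketch below implements a simple gray-world auto white balance; it is only an illustration of the idea, not the algorithm used by the CCU 11201.

```python
import numpy as np

def gray_world_awb_gains(image_rgb):
    """Gray-world auto white balance: assume the average scene color is gray and
    compute per-channel gains that equalize the R, G, and B channel means."""
    means = image_rgb.reshape(-1, 3).mean(axis=0)
    return means.mean() / np.maximum(means, 1e-6)

def apply_awb(image_rgb, gains):
    """Apply the per-channel gains and clip back to 8-bit range."""
    return np.clip(image_rgb.astype(np.float32) * gains, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
balanced = apply_awb(frame, gray_world_awb_gains(frame))
```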
  • the camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is configured by a communication device for transmitting and receiving various types of information to and from the camera head 11102.
  • the communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
  • the communication unit 11411 transmits a control signal for controlling driving of the camera head 11102 to the camera head 11102.
  • the image signal and the control signal can be transmitted by telecommunication or optical communication.
  • An image processing unit 11412 performs various types of image processing on an image signal that is RAW data transmitted from the camera head 11102.
  • the control unit 11413 performs various types of control regarding imaging of a surgical site and the like by the endoscope 11100 and display of a captured image obtained by imaging of the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
  • control unit 11413 causes the display device 11202 to display a captured image in which a surgical site or the like is captured, based on the image signal subjected to the image processing by the image processing unit 11412.
  • The control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific body sites, bleeding, mist generated when the energy treatment tool 11112 is used, and the like.
  • control unit 11413 may superimpose various surgical support information on the image of the surgery section using the recognition result.
  • the operation support information is superimposed and presented to the operator 11131, whereby the burden on the operator 11131 can be reduced and the operator 11131 can reliably proceed with the operation.
  • a transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electric signal cable corresponding to communication of an electric signal, an optical fiber corresponding to optical communication, or a composite cable of these.
  • communication is performed by wire communication using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
  • the technology according to the present disclosure can also be suitably used for a camera head 11102 used in endoscopic surgery.
  • For example, by recognizing a blood vessel or body tissue as the subject, specifying a detection frame based on the recognition result, and performing focus control based on the detection value, AF accuracy at the time of surgery can be improved. It is also possible to detect, as the subject, an artificial object such as the surgical instrument 11110 or waste material instead of the blood vessel or body tissue that should originally be focused on; in this case, the detection area can be set so as to exclude that subject.
  • FIG. 17 shows an example in which detection frames (distance measurement frames) using the present technology are combined with phase difference detection frames.
  • Although in FIG. 17 the number of distance measurement frames is larger than the number of phase difference detection frames, the number and area of the frames are not particularly limited to this example. The area of the distance measurement frames may be larger (or smaller) and their number smaller (or larger), and the number and area of the distance measurement frames may also be the same as those of the phase difference detection frames.
  • In particular, in hybrid AF, the effect of further improving AF accuracy can also be obtained.
  • In phase difference detection AF, it is known that AF accuracy deteriorates at high-luminance portions and repetitive patterns. The contrast AF detection values detected by the present technology enable high-luminance determination and repetitive-pattern detection within the phase difference detection frames.
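A rough sketch of how per-area detection values could be used for the high-luminance and repetitive-pattern checks mentioned above; the thresholds and the autocorrelation criterion are assumptions for illustration, not values from the disclosure.

```python
import numpy as np

def is_high_luminance(area_pixels, threshold=240, ratio=0.2):
    """Flag an area whose near-saturated pixels exceed a given ratio."""
    return np.mean(area_pixels >= threshold) > ratio

def looks_repetitive(row_profile, min_peak=0.8):
    """Flag a repetitive pattern: a strong secondary peak in the autocorrelation
    of a 1-D luminance profile taken across the area."""
    p = row_profile - row_profile.mean()
    ac = np.correlate(p, p, mode="full")[len(p) - 1:]
    if ac[0] <= 0:
        return False
    ac = ac / ac[0]
    return bool(np.any(ac[2:] > min_peak))  # ignore the zero-lag and adjacent lag

area = np.random.randint(0, 256, (64, 64))
print(is_high_luminance(area), looks_repetitive(area.mean(axis=0)))
```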
  • the present technology can also be configured as follows.
  • An image processing apparatus comprising: a detection processing unit that detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas; a subject detection unit that detects a subject based on the image signal; and a control unit that sets a detection area corresponding to the subject from all the detection areas based on the detection result of the subject detection unit, and specifies, from the detection values corresponding to all the detection areas generated by the detection processing unit, the detection value corresponding to the set detection area.
  • The image processing apparatus according to (6) above, wherein the detection value of the set detection area is specified from the detection values corresponding to all the detection areas generated by detecting the image signal of the frame used for the subject detection.
  • the image processing apparatus according to any one of (1) to (7), wherein the detection area is spread over the entire area of the imaging area.
  • the process of generating the detection value in the detection processing unit and the process of detecting the object in the object detection unit are performed in parallel.
  • The image processing apparatus according to (2) or (3), wherein a plurality of phase difference detection areas are further arranged in the imaging area, and the number of the distance measurement areas is larger than the number of the phase difference detection areas.
  • An image processing method in which: a detection processing unit detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas; a subject detection unit detects a subject based on the image signal; and a control unit sets a detection area corresponding to the subject from all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
  • An imaging apparatus comprising: an imaging unit; a detection processing unit that detects an image signal input from an imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas; a subject detection unit that detects a subject based on the image signal; and a control unit that sets a detection area corresponding to the subject from all the detection areas based on the detection result of the subject detection unit, and specifies, from the detection values corresponding to all the detection areas generated by the detection processing unit, the detection value corresponding to the set detection area.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automatic Focus Adjustment (AREA)
  • Studio Devices (AREA)

Abstract

The present invention improves the accuracy of AF control, AE control, etc., on a moving subject. An image signal inputted from the imaging area of an imaging unit is detected in a detection area of which a plurality are provided in the imaging area, and a detection value corresponding to each of all detection areas is generated. A subject is detected on the basis of the image signal. A detection area that corresponds to the subject is set from all detection areas on the basis of the result of subject detection, and a detection value that corresponds to the detection area set from the detection value that corresponds to each of all detection areas is specified.

Description

Image processing apparatus, image processing method, and imaging apparatus
The present technology relates to an image processing apparatus, an image processing method, and an imaging apparatus, and more particularly to an image processing apparatus and the like that can improve the accuracy of AF control, AE control, and the like for a moving subject.
For example, Patent Document 1 describes an automatic focus (AF: Auto Focus) system that is established by setting a detection area at an AF area or subject recognition position set by the user and detecting the contrast within it. In this case, if the subject moves when AF is started, the subject leaves the AF area that was set based on the subject recognition result obtained immediately before the start of AF. As a result, the contrast of the subject cannot be detected accurately, and AF accuracy deteriorates because the camera focuses on the background or false focusing occurs. Furthermore, for a subject that keeps moving, if subject recognition is performed first and the AF area is set using that result, the AF area and the subject position are always shifted from each other, and the in-focus rate during continuous shooting drops.
Similar disadvantages also exist in a conventional automatic exposure (AE: Automatic Exposure) system. For example, when an AE area is set based on the subject recognition result and AE is performed using its detection value, if the subject moves when AE is started, the subject leaves the AE area that was set based on the subject recognition result obtained immediately before the start of AE. As a result, the brightness of the subject cannot be detected accurately, and accurate automatic exposure cannot be performed. Furthermore, for a subject that keeps moving, this state occurs continuously, so a state in which automatic exposure cannot be performed accurately continues.
Patent Document 1: Japanese Patent Application Laid-Open No. 2010-160297
The purpose of the present technology is to improve the accuracy of AF control, AE control, and the like for a moving subject.
The concept of the present technology is an image processing apparatus including: a detection processing unit that detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas; a subject detection unit that detects a subject based on the image signal; and a control unit that sets a detection area corresponding to the subject from all the detection areas based on the detection result of the subject detection unit, and specifies, from the detection values corresponding to all the detection areas generated by the detection processing unit, the detection value corresponding to the set detection area.
In the present technology, the detection processing unit detects the image signal input from the imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas. The subject detection unit detects the subject based on the image signal. For example, the detection areas may be spread over the entire imaging area. Further, for example, the process of generating the detection values in the detection processing unit and the process of detecting the subject in the subject detection unit may be performed in parallel.

The control unit sets a detection area corresponding to the subject from all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit. For example, in setting the detection area corresponding to the subject, the area in which the subject is detected may be set as the detection area, or the area excluding the area in which the subject is detected may be set as the detection area.

Further, for example, a detection value storage unit that stores the detection values generated by the detection processing unit may be provided. In this case, the control unit may specify the detection value of the detection area set corresponding to the subject detected by the subject detection unit based on the image signal of a predetermined frame, from the detection values that are stored in the detection value storage unit and that correspond to all the detection areas generated by detecting the image signal of that predetermined frame.

Further, for example, the detection area may be a distance measurement area. In this case, the control unit may perform focus control based on the specified detection value. In this case, for example, a plurality of phase difference detection areas may be further arranged in the imaging area, and the number of distance measurement areas may be larger than the number of phase difference detection areas. Also, for example, the detection area may be a photometric area. In this case, the control unit may perform exposure control based on the specified detection value.

As described above, in the present technology, a detection area corresponding to the subject is set, based on the subject detection result, from all of the detection areas arranged in the imaging area, and the detection value corresponding to the set detection area is specified from the detection values corresponding to each of all the detection areas generated by the detection processing unit. Therefore, the detection value for a moving subject can be specified with high accuracy, and by using this specified detection value, the accuracy of AF control, AE control, and the like for a moving subject can be improved.
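The two ways of setting the detection area mentioned above (using the areas where the subject is detected, or using the areas excluding the subject) can be sketched as follows. This is a minimal illustration; the grid size and the bounding-box representation of the subject are assumptions, not part of the disclosure.

```python
import numpy as np

def subject_area_mask(bbox, image_shape, grid=(20, 20)):
    """Boolean grid mask of the detection areas overlapped by the subject
    bounding box (x, y, w, h) given in pixel coordinates."""
    rows, cols = grid
    h, w = image_shape
    x, y, bw, bh = bbox
    mask = np.zeros(grid, dtype=bool)
    r0, r1 = int(y * rows / h), min(rows - 1, int((y + bh - 1) * rows / h))
    c0, c1 = int(x * cols / w), min(cols - 1, int((x + bw - 1) * cols / w))
    mask[r0:r1 + 1, c0:c1 + 1] = True
    return mask

def detection_value(all_area_values, mask, exclude_subject=False):
    """Average the per-area detection values either over the subject areas or
    over everything except the subject (e.g. to ignore a surgical tool)."""
    selected = ~mask if exclude_subject else mask
    return float(all_area_values[selected].mean())

values = np.random.rand(20, 20)                      # detection values for all areas
mask = subject_area_mask((300, 200, 80, 120), (480, 640))
print(detection_value(values, mask))                 # focus/exposure on the subject
print(detection_value(values, mask, exclude_subject=True))  # ignore the subject instead
```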
Another concept of the present technology is an imaging apparatus including: an imaging unit; a detection processing unit that detects an image signal input from an imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas; a subject detection unit that detects a subject based on the image signal; and a control unit that sets a detection area corresponding to the subject from all the detection areas based on the detection result of the subject detection unit, and specifies, from the detection values corresponding to all the detection areas generated by the detection processing unit, the detection value corresponding to the set detection area.
According to the present technology, it is possible to improve the accuracy of AF control, AE control, and the like for a moving subject. The effects described in this specification are merely examples and are not limiting, and additional effects may be obtained.
FIG. 1 is a block diagram showing a configuration example of a digital still camera.
FIG. 2 is a flowchart showing an example of the procedure of contrast AF processing in the digital still camera of FIG. 1.
FIG. 3 is a block diagram showing a configuration example of a digital still camera as a first embodiment.
FIG. 4 is a diagram for explaining an arrangement example of a plurality of AF detection areas in an imaging area.
FIG. 5 is a diagram for explaining an example of a captured image, the arrangement of AF detection areas for it, and the AF detection area (AF detection frame) specified by subject position information.
FIG. 6 is a flowchart showing an example of the procedure of contrast AF processing in the digital still camera of FIG. 3.
FIG. 7 is a diagram showing an example of a captured image (monitoring camera image) and the like when working with a robot arm.
FIG. 8 is a block diagram showing a configuration example of a monitoring camera used when working with a robot arm, as a second embodiment.
FIG. 9 is a block diagram showing a configuration example of a digital still camera.
FIG. 10 is a flowchart showing an example of the procedure of contrast AE processing in the digital still camera of FIG. 9.
FIG. 11 is a block diagram showing a configuration example of a digital still camera as a third embodiment.
FIG. 12 is a flowchart showing an example of the procedure of contrast AE processing in the digital still camera of FIG. 11.
FIG. 13 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 14 is an explanatory diagram showing an example of the installation positions of an outside-vehicle information detection unit and imaging units.
FIG. 15 is a diagram showing an example of a schematic configuration of an endoscopic surgery system.
FIG. 16 is a block diagram showing an example of a functional configuration of a camera head and a CCU.
FIG. 17 is a diagram showing an example in which detection frames (distance measurement frames) using the present technology are combined with phase difference detection frames.
Hereinafter, modes for carrying out the invention (hereinafter referred to as "embodiments") will be described. The description will be given in the following order.
1. First embodiment
2. Second embodiment
3. Third embodiment
4. Application example to a mobile object
5. Application example to an endoscopic surgery system
6. Modification examples
<1. First Embodiment>
[Configuration Example of a Digital Still Camera]
FIG. 1 shows a configuration example of a digital still camera 100. The digital still camera 100 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AF detection area setting unit 110, an AF detection processing unit 111, an AF control unit 112, an image recording processing unit 113, and a recording medium 114.
The control unit 101 controls the operation of each unit of the digital still camera 100 based on a control program. The operation unit 102 is connected to the control unit 101, constitutes a user interface that receives various operations by the user, and includes keys, dials, buttons, a touch panel, a remote controller, and the like.

The imaging lens 103 includes a plurality of lenses and the like, condenses light incident from a subject, and guides it to the imaging surface (imaging area) of the imaging element 104. The imaging lens 103 has a focus lens, and focus control is possible by driving this focus lens. The imaging lens 103 also has an aperture and a shutter, and exposure control is possible by controlling the aperture position and the shutter speed.

The imaging element 104 constitutes an imaging unit, is composed of a CMOS (Complementary Metal Oxide Semiconductor) image sensor, a CCD (Charge Coupled Device) sensor, or the like having an imaging surface (imaging area) in which a plurality of pixels are arranged in a matrix, and receives, on the imaging surface, light incident from the subject through the imaging lens 103.

The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and further performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 also applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, and the image recording processing unit 113.

The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105. The display unit 107 is composed of an LCD (Liquid Crystal Display), an organic EL display (OELD: Organic Electroluminescence Display), or the like.

The subject recognition unit 108 recognizes the subject to be tracked from the image data, and supplies information such as its feature amount to the subject tracking unit 109 as tracking target information. The tracking target subject is designated by, for example, the user's touch panel operation. The subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and supplies subject position information (including shape information) to the display control unit 106 and the AF detection area setting unit 110. The display control unit 106 superimposes a tracking frame on the captured image based on the subject position information.

The AF detection area setting unit 110 sets an AF detection area (AF detection frame) as a distance measurement area at the subject position in the imaging area based on the subject position information supplied from the subject tracking unit 109, and supplies that information to the AF detection processing unit 111. The AF detection processing unit 111 acquires the AF detection value (contrast detection value) of the AF detection area from the image data based on the setting information of the AF detection area, and supplies it to the AF control unit 112.
The AF control unit 112 determines the focus lens position at which the highest contrast is obtained, based on the history of AF detection values obtained as the focus lens is moved, notifies the imaging lens 103 of the result, and drives the focus lens to the focus lens position at which the highest contrast was obtained.
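The peak search performed by the AF control unit 112 can be pictured with the small sketch below; scanning a recorded history and taking the position of the maximum contrast is a simplification of the actual hill-climbing control, and the data layout is an assumption.

```python
def best_focus_position(detection_history):
    """detection_history: list of (lens_position, contrast_detection_value) pairs
    collected while the focus lens is moved. Returns the lens position that gave
    the highest contrast, to which the focus lens is then driven."""
    if not detection_history:
        raise ValueError("no detection values recorded")
    position, _value = max(detection_history, key=lambda pv: pv[1])
    return position

# Example history: contrast peaks at lens position 2.
history = [(0, 10.2), (1, 14.8), (2, 21.5), (3, 18.0), (4, 12.3)]
print(best_focus_position(history))  # -> 2
```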
Under the control of the control unit 101, the image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as JPEG (Joint Photographic Experts Group), for example when the shutter button is pressed, and records the compressed image data on the recording medium 114. The recording medium 114 is, for example, a recording medium such as a memory card, and can be easily attached to and detached from the digital still camera 100.

The flowchart of FIG. 2 shows an example of the procedure of contrast AF processing in the digital still camera 100 of FIG. 1. In step ST1, AF processing is started when the user presses the shutter button halfway. Next, in step ST2, the focus lens of the imaging lens 103 is driven to the position at which detection is desired.

Next, in step ST3, an image signal is read from the imaging element 104, and in step ST4, the signal processing unit 105 generates image data. Next, in step ST5, the subject recognition unit 108 recognizes the subject designated, for example, by the user's touch panel operation, and in step ST6, the subject tracking unit 109 performs tracking processing of the subject present in the captured image, and subject position information (including shape information) indicating the position of the subject in the captured image is obtained.

Next, in step ST7, the AF detection area setting unit 110 sets the AF detection area (AF detection frame) at the subject position in the imaging area. Next, in step ST8, the AF detection processing unit 111 acquires the AF detection value (contrast detection value) of the AF detection area from the image data.

Next, in step ST9, the AF control unit 112 makes an in-focus determination from the history of AF detection values. When the in-focus determination cannot be made in step ST10, the process returns to step ST2, the focus lens position of the imaging lens 103 is moved, and the same processing as described above is repeated.

On the other hand, when the in-focus determination can be made, in step ST11 the AF control unit 112 instructs the imaging lens 103 to drive the focus lens to the focus lens position of the AF detection value for which the in-focus determination was made. Next, in step ST12, the imaging lens 103 drives the focus lens to the instructed position. Then, in step ST13, the series of AF processing ends.

In the contrast AF processing described above (see FIG. 2) in the digital still camera 100 of FIG. 1, the AF detection area (AF detection frame) is set after subject recognition is performed, and AF detection is then started to obtain the AF detection value of that area. Therefore, for a moving subject, the subject position when the subject was detected and the subject position when AF detection is started are shifted from each other, so it is difficult to obtain an AF detection value corresponding to the subject position, and highly accurate AF control is impossible.
FIG. 3 shows a configuration example of a digital still camera 200 as the first embodiment. In FIG. 3, portions corresponding to those in FIG. 1 are given the same reference numerals, and detailed description thereof is omitted as appropriate. The digital still camera 200 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AF control unit 112, an image recording processing unit 113, a recording medium 114, an AF detection processing unit 201, an AF detection storage unit 202, and an AF detection value generation unit 203.

The control unit 101 controls the operation of each unit of the digital still camera 200 based on a control program. The operation unit 102 is connected to the control unit 101, constitutes a user interface that receives various operations by the user, and includes keys, dials, buttons, a touch panel, a remote controller, and the like.

The imaging lens 103 includes a plurality of lenses and the like, condenses light incident from a subject, and guides it to the imaging surface (imaging area) of the imaging element 104. The imaging lens 103 has a focus lens, and focus control is possible by driving this focus lens. The imaging lens 103 also has an aperture and a shutter, and exposure control is possible by controlling the aperture position and the shutter speed.

The imaging element 104 constitutes an imaging unit, is composed of a CMOS (Complementary Metal Oxide Semiconductor) image sensor, a CCD (Charge Coupled Device) sensor, or the like having an imaging surface in which a plurality of pixels are arranged in a matrix, and receives, on the imaging surface, light incident from the subject through the imaging lens 103.

The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104, and further performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 also applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, the AF detection processing unit 201, and the image recording processing unit 113.

The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105. The display unit 107 is composed of an LCD (Liquid Crystal Display), an organic EL display (OELD: Organic Electroluminescence Display), or the like.

The subject recognition unit 108 recognizes the subject to be tracked from the image data, and supplies information such as its feature amount to the subject tracking unit 109 as tracking target information. The tracking target subject is designated by, for example, the user's touch panel operation. The subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and obtains subject position information (including shape information).

The subject tracking unit 109 supplies the subject position information to the display control unit 106. The display control unit 106 superimposes a tracking frame on the captured image based on this subject position information. The subject tracking unit 109 also supplies the subject position information to the AF detection value generation unit 203, together with an image data ID that identifies the frame of image data used when the subject position was determined.

The AF detection processing unit 201 acquires, from the image data supplied from the signal processing unit 105, the AF detection values (contrast detection values) of the AF detection areas, which are distance measurement areas arranged in plural in the imaging area. In this case, the AF detection areas are spread over the entire imaging area. The AF detection areas may instead be spread over the entire imaging area except its periphery.

FIGS. 4(a) and 4(b) show a state in which rectangular AF detection areas are spread over the entire imaging area; FIG. 4(a) shows an example in which the size of the AF detection areas is large, and FIG. 4(b) shows an example in which the size of the AF detection areas is small. In FIG. 4(a), 3 × 3 = 9 AF detection areas are arranged in the imaging area, and in FIG. 4(b), 20 × 20 = 400 AF detection areas are arranged in the imaging area, but the number of AF detection areas is not limited to these examples.
 また、図4(c),(d)は、撮像エリアの周辺を除いたエリアに矩形のAF検波エリアが敷き詰められている状態を示しており、図4(c)はAF検波エリアのサイズが大きい例であり、図4(d)はAF検波エリアのサイズが小さい例を示している。なお、図4(c)では3×3の9個のAF検波エリアが配置され、図4(d)では20×20の400個のAF検波エリアが配置されているが、AF検波エリアの個数はこの例に限定されない。 4C and 4D show a state in which rectangular AF detection areas are spread in the area excluding the periphery of the imaging area, and FIG. 4C shows the size of the AF detection area. FIG. 4D shows an example in which the size of the AF detection area is small. In FIG. 4C, 3 × 3 nine AF detection areas are arranged, and in FIG. 4D, 20 × 20 400 AF detection areas are arranged. However, the number of AF detection areas is Is not limited to this example.
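 For illustration only, the following Python sketch (the helper name tile_detection_areas and the pixel dimensions are assumptions, not part of the embodiment) generates such a rectangular grid of detection areas, either over the whole imaging area or with a border excluded as in FIGS. 4(c) and 4(d):

def tile_detection_areas(width, height, cols, rows, margin=0):
    # Return rectangles (x, y, w, h) tiling the imaging area.
    # margin > 0 excludes a border around the imaging area, as in
    # FIGS. 4(c) and 4(d); margin == 0 covers the whole area.
    usable_w = width - 2 * margin
    usable_h = height - 2 * margin
    cell_w = usable_w // cols
    cell_h = usable_h // rows
    areas = []
    for r in range(rows):
        for c in range(cols):
            areas.append((margin + c * cell_w, margin + r * cell_h, cell_w, cell_h))
    return areas

# e.g. 20 x 20 = 400 detection areas over an assumed 1920 x 1080 imaging area
areas = tile_detection_areas(1920, 1080, cols=20, rows=20)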
 The AF detection processing unit 201 supplies the AF detection values acquired in all of the AF detection areas arranged in the imaging area, that is, the AF detection value group, to the AF detection storage unit 202 together with an image data ID identifying the frame of image data used to obtain those detection values, and the values are temporarily stored there. Based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AF detection value generation unit 203 takes out, from the AF detection storage unit 202, the AF detection values of a predetermined number of AF detection areas corresponding to the subject position specified by the subject position information, out of the detection value group associated with the same image data ID, and integrates (averages) them to generate an AF detection value at the subject position (integral AF detection value).
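 This store-then-look-up flow can be pictured with the following minimal Python sketch; the dictionary keyed by the image data ID and the function names are assumptions for illustration, not the embodiment's actual implementation:

detection_store = {}  # image data ID -> list of per-area AF detection values

def store_detection_values(frame_id, values):
    # AF detection processing unit side: keep the whole detection value
    # group for the frame, keyed by the image data ID of that frame.
    detection_store[frame_id] = list(values)

def integrate_at_subject(frame_id, subject_area_indices):
    # AF detection value generation unit side: pick the detection values
    # whose areas correspond to the subject position in the SAME frame
    # and average them into one integral AF detection value.
    values = detection_store[frame_id]
    selected = [values[i] for i in subject_area_indices]
    return sum(selected) / len(selected)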
 Here, an example of a captured image, the arrangement of AF detection areas for it, and the AF detection areas (AF detection frames) specified by the subject position information will be described. FIG. 5(a) shows an example of a captured image, which contains a subject OB. FIG. 5(b) shows an example of the arrangement of the captured image and the AF detection areas DE tiled within the imaging area. FIG. 5(c) shows that, when the AF detection areas DE are arranged as in FIG. 5(b), the AF detection areas DE within the range indicated by the arrow P are specified as corresponding to the subject based on the subject position information.
 FIG. 5(d) also shows an arrangement example of the captured image and the AF detection areas DE tiled within the imaging area. In this arrangement example, the AF detection areas DE are smaller than in FIG. 5(b), and their number is correspondingly larger. FIG. 5(e) shows that, when the AF detection areas DE are arranged as in FIG. 5(d), the AF detection areas DE within the range indicated by the arrow P are specified as corresponding to the subject based on the subject position information.
 The above is one example of how to set the predetermined number of AF detection areas corresponding to the subject position specified by the subject position information, but the method is not limited to it. For example, all of the AF detection areas in which the subject is detected based on the subject position information may be set as the predetermined number of AF detection areas to be specified. Alternatively, among the AF detection areas in which the subject is detected, only the areas in which the subject occupies at least an arbitrary proportion, for example 50% or more, may be set as the predetermined number of AF detection areas to be specified. Further, among the AF detection areas in which the subject is detected, the areas containing anything other than the subject (such as background) may be excluded, and the remaining areas may be set as the predetermined number of AF detection areas to be specified.
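 As a rough sketch of the second criterion above (an occupancy threshold of, for example, 50%), assuming axis-aligned (x, y, w, h) rectangles for both the detection areas and the subject region; the names and the rectangle representation are illustrative only, not the embodiment's data format:

def overlap_ratio(area, subject_rect):
    # Fraction of the detection area covered by the subject rectangle.
    ax, ay, aw, ah = area
    sx, sy, sw, sh = subject_rect
    ix = max(0, min(ax + aw, sx + sw) - max(ax, sx))
    iy = max(0, min(ay + ah, sy + sh) - max(ay, sy))
    return (ix * iy) / (aw * ah)

def select_subject_areas(areas, subject_rect, threshold=0.5):
    # Keep only the detection areas in which the subject occupies at
    # least the threshold (e.g. 50%); their indices are then used to
    # pull the matching detection values out of the storage unit.
    return [i for i, a in enumerate(areas)
            if overlap_ratio(a, subject_rect) >= threshold]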
 Returning to FIG. 3, the AF detection value generation unit 203 supplies the generated AF detection value to the AF control unit 112. Based on the history of AF detection values obtained as the focus lens is moved, the AF control unit 112 determines the focus lens position at which the highest contrast was obtained, notifies the imaging lens 103 of the result, and drives the focus lens to the focus lens position at which the highest contrast was obtained.
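 The peak search performed by the AF control unit 112 can be pictured as follows; this is a minimal sketch assuming the history is available as (focus lens position, integral AF detection value) pairs, not the embodiment's actual control code:

def best_focus_position(history):
    # history: list of (focus_lens_position, integral_af_detection_value)
    # pairs collected while the focus lens is swept. Contrast AF simply
    # returns the position that produced the highest detection value.
    position, _ = max(history, key=lambda item: item[1])
    return position

# e.g. best_focus_position([(0, 10.2), (50, 31.7), (100, 18.4)]) -> 50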
 Under the control of the control unit 101, the image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114. The recording medium 114 is, for example, a recording medium such as a memory card, and is easily attachable to and detachable from the digital still camera 200.
 The flowchart of FIG. 6 shows an example of the procedure of the contrast AF processing in the digital still camera 200 of FIG. 3. In step ST21, the AF processing is started when the user presses the shutter button halfway. Next, in step ST22, the focus lens of the imaging lens 103 is driven to the position at which detection is to be performed.
 Next, in step ST23, an image signal is read from the imaging element 104, and in step ST24, the signal processing unit 105 generates image data. Next, the processing of steps ST25 and ST26 is performed, and in parallel, the processing of steps ST27 and ST28 is performed.
 In step ST25, the subject recognition unit 108 recognizes the subject designated, for example, by the user's touch panel operation, and in step ST26, the subject tracking unit 109 determines the subject position and obtains subject position information (including shape information). Meanwhile, in step ST27, the AF detection processing unit 201 acquires the AF detection values (contrast detection values) of the plurality of AF detection areas arranged in the imaging area, and in step ST28, the detection values of all the detection areas, that is, the detection value group, are stored in the AF detection storage unit 202 together with an image data ID identifying the frame of image data used to obtain them.
 Next, in step ST29, based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AF detection value generation unit 203 acquires, from the AF detection storage unit 202, the AF detection values of the predetermined number of AF detection areas corresponding to the subject position specified by the subject position information, out of the detection value group associated with the same image data ID, and integrates them to generate the AF detection value at the subject position (integral AF detection value).
 Next, in step ST30, the AF control unit 112 determines from the history of AF detection values whether focus has been achieved. Then, in step ST31, when it cannot be determined that focus has been achieved, the processing returns to step ST22, the focus lens of the imaging lens 103 is moved, and the same processing as described above is repeated.
 On the other hand, when it is determined that focus has been achieved, in step ST32, the AF control unit 112 instructs the imaging lens 103 to drive the focus lens to the focus lens position corresponding to the AF detection value for which focus was determined. Next, in step ST33, the imaging lens 103 drives the focus lens to the instructed position. Then, in step ST34, the series of AF processing ends.
 Here, the necessity of the AF detection storage unit 202 in the digital still camera 200 of FIG. 3 will be described. If the processing times of the AF detection processing (step ST27) and the subject detection processing (step ST26) in the flowchart of FIG. 6 were equal, the detection value storage processing (step ST28) would be unnecessary, and therefore the AF detection storage unit 202 in the digital still camera 200 of FIG. 3 would also be unnecessary.
 In many cases, however, the subject detection processing (step ST26) takes longer than the AF detection processing (step ST27). Therefore, when the AF detection value generation unit 203 generates the AF detection value, the AF detection storage unit 202, which temporarily accumulates the acquired AF detection values of the respective AF detection areas, is needed in order to use the AF detection values whose timing matches the subject detection timing.
 In the digital still camera 200 shown in FIG. 3, the subject position detection processing and the AF detection processing are performed in parallel, and the AF detection values are stored in the AF detection storage unit 202, so that it is possible to refer to the AF detection values generated from the image data of the same frame as the frame used when the subject position was detected. Therefore, the AF detection areas do not constantly lag behind a moving subject, the AF detection value for the subject can be obtained correctly, and highly accurate AF control becomes possible.
 Further, in the digital still camera 200 shown in FIG. 3, since the AF detection areas are tiled over all or almost all of the imaging area (see FIG. 4), the predetermined number of AF detection areas corresponding to the subject position can be specified appropriately, the AF detection value for the subject can be obtained accurately, and AF control can be performed with high accuracy.
 <2. Second embodiment>
 [Configuration example of surveillance camera]
 In the digital still camera 200 shown in FIG. 3 described above, a predetermined number of AF detection areas corresponding to the subject position are appropriately specified based on the subject position information obtained by subject detection, and an AF detection value (integral AF detection value) is generated using the predetermined number of AF detection values corresponding to the subject position to perform AF control. However, it is also conceivable to specify, based on the subject position information obtained by subject detection, a predetermined number of AF detection areas from which the subject position is excluded, and to perform AF control by generating an AF detection value (integral AF detection value) using the predetermined number of AF detection values excluding the subject position.
 For example, consider the image of a surveillance camera used while work is being done with a robot arm. FIG. 7(a) shows a captured image (surveillance camera image) during work with the robot arm. This captured image contains robot arms S1 and S3 and a work target S2 on which the robot arms S1 and S3 work. In general, contrast AF tends to focus on the robot arms S1 and S3 because it tries to focus on the closest object.
 Therefore, as shown in FIG. 7(b), AF detection areas (AF detection frames) DE are arranged at high density in the imaging area. The robot arms S1 and S3 are then recognized by image recognition, and as shown in FIG. 7(c), the detection values of the predetermined number of AF detection areas DE excluding the portions of the recognized robot arms S1 and S3 are integrated to generate an AF detection value (integral AF detection value), and AF control is performed using this AF detection value. This makes it possible to focus on the work target S2 without being influenced by the robot arms S1 and S3 present on the near side.
 FIG. 8 shows a configuration example of a surveillance camera 300 used for work with a robot arm, as a second embodiment. In this surveillance camera 300, parts corresponding to those of the digital still camera 200 in FIG. 3 are denoted by the same reference numerals, and detailed descriptions thereof are omitted as appropriate. The surveillance camera 300 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, an AF detection processing unit 201, an AF detection storage unit 202, an AF control unit 112, an image recording processing unit 113, a recording medium 114, an object recognition unit 301, an object tracking unit 302, and an AF detection value generation unit 303.
 The control unit 101 controls the operation of each unit of the surveillance camera 300 based on a control program. The operation unit 102 is connected to the control unit 101 and constitutes a user interface that accepts various operations by the user. The imaging lens 103 includes a plurality of lenses and the like, condenses light incident from the subject, and guides it to the imaging surface (imaging area) of the imaging element 104. The imaging lens 103 has a focus lens, and focus control is possible by driving this focus lens. The imaging lens 103 also has an aperture and a shutter, and exposure control is possible by controlling the aperture position and the shutter speed.
 The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104 and then performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 also applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the object recognition unit 301, the AF detection processing unit 201, and the image recording processing unit 113. The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
 The object recognition unit 301 recognizes, from the image data, the robot arm to be excluded, and supplies information such as its feature amount to the object tracking unit 302 as tracking target information. The object tracking unit 302 detects the object position, here the robot arm position, from the image data using the tracking target information from the object recognition unit 301, and obtains object position information (including shape information).
 The object tracking unit 302 supplies the object position information to the display control unit 106. Based on this object position information, the display control unit 106 displays a tracking frame superimposed on the captured image. The object tracking unit 302 also supplies the object position information to the AF detection value generation unit 303 together with an image data ID that identifies the frame of image data used when the object position was determined.
 The AF detection processing unit 201 acquires, from the image data supplied from the signal processing unit 105, AF detection values (contrast detection values) of a plurality of AF detection areas arranged in the imaging area, for example, a plurality of AF detection areas tiled over the imaging area (see FIG. 4). The AF detection processing unit 201 supplies the AF detection value group to the AF detection storage unit 202 together with an image data ID identifying the frame of image data used to obtain those detection values, and the values are temporarily stored there.
 Based on the object position information (including shape information) and the image data ID supplied from the object tracking unit 302, the AF detection value generation unit 303 takes out, from the AF detection storage unit 202, the AF detection values of a predetermined number of AF detection areas excluding the portion of the object position specified by the object position information, out of the detection value group associated with the same image data ID, and integrates (averages) them to generate an AF detection value (integral AF detection value) for the area other than the object position, that is, other than the robot arm.
 The method of setting the predetermined number of AF detection areas excluding the portion of the object position specified by the object position information is not particularly limited. For example, all of the AF detection areas in which the object position is detected may be excluded, and the remaining areas may be set as the predetermined number of AF detection areas to be specified. Alternatively, among the AF detection areas in which the object position is detected, only the areas in which the object position information occupies at least an arbitrary proportion, for example 50% or more, may be excluded, and the remaining areas may be set as the predetermined number of AF detection areas to be specified.
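 A minimal sketch of this exclusion rule, assuming that the fraction of each detection area covered by a recognized object (computed, for example, with an overlap helper like the one sketched for the first embodiment) is already available; the function name and inputs are illustrative only:

def select_areas_excluding_objects(area_object_overlaps, threshold=0.5):
    # area_object_overlaps: for each detection area, the largest fraction
    # of that area covered by any recognized object (e.g. a robot arm).
    # Areas whose object coverage is below the threshold are kept; their
    # detection values are then integrated into the AF detection value
    # used to focus on the work target side.
    return [i for i, ratio in enumerate(area_object_overlaps) if ratio < threshold]

# e.g. select_areas_excluding_objects([0.0, 0.8, 0.2, 1.0]) -> [0, 2]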
 The AF detection value generation unit 303 supplies the generated AF detection value to the AF control unit 112. Based on the history of AF detection values obtained as the focus lens is moved, the AF control unit 112 determines the focus lens position at which the highest contrast was obtained, notifies the imaging lens 103 of the result, and drives the focus lens to the focus lens position at which the highest contrast was obtained.
 Under the control of the control unit 101, the image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the MPEG (Moving Picture Experts Group) method, and records the compressed image data on the recording medium 114 as surveillance image data.
 In the surveillance camera 300 shown in FIG. 8, the object position detection processing and the AF detection processing are performed in parallel, and the AF detection values are stored in the AF detection storage unit 202, so that it is possible to refer to the AF detection values generated from the image data of the same frame as the frame used when the object position was detected. Therefore, even if the object moves, the AF detection value for the area other than the object position, that is, other than the robot arm, can be obtained correctly, and AF control that focuses on something other than the robot arm can be performed with high accuracy.
 <3. Third embodiment>
 [Configuration example of digital still camera]
 FIG. 9 shows a configuration example of a digital still camera 400. In this digital still camera 400, parts corresponding to those of the digital still camera 100 in FIG. 1 are denoted by the same reference numerals, and detailed descriptions thereof are omitted as appropriate. The digital still camera 400 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AE detection area setting unit 401, an AE detection processing unit 402, an AE control unit 403, an image recording processing unit 113, and a recording medium 114.
 The control unit 101 controls the operation of each unit of the digital still camera 400 based on a control program. The operation unit 102 is connected to the control unit 101 and constitutes a user interface that accepts various operations by the user. The imaging lens 103 includes a plurality of lenses and the like, condenses light incident from the subject, and guides it to the imaging surface (imaging area) of the imaging element 104. The imaging lens 103 has a focus lens, and focus control is possible by driving this focus lens. The imaging lens 103 also has an aperture and a shutter, and exposure control is possible by controlling the aperture position and the shutter speed.
 The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104 and then performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 also applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, the AE detection processing unit 402, and the image recording processing unit 113. The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
 The subject recognition unit 108 recognizes the subject to be tracked from the image data and supplies information such as its feature amount to the subject tracking unit 109 as tracking target information. The subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108, and supplies subject position information (including shape information) to the display control unit 106 and the AE detection area setting unit 401. Based on the subject position information, the display control unit 106 displays a tracking frame superimposed on the captured image.
 Based on the subject position information supplied from the subject tracking unit 109, the AE detection area setting unit 401 sets an AE detection area (AE detection frame) as a photometric area at the subject position within the imaging area, and supplies that information to the AE detection processing unit 402. Based on the setting information of the AE detection area, the AE detection processing unit 402 acquires the AE detection value (luminance level value) of the AE detection area from the image data and supplies it to the AE control unit 403.
 The AE control unit 403 determines, from the AE detection value, the aperture position and shutter speed of the imaging lens 103 and the gain of the signal processing unit 105 so that proper exposure is obtained. The AE control unit 403 then notifies the imaging lens 103 of the determined aperture position and shutter speed and controls the aperture position and shutter speed of the imaging lens 103. The AE control unit 403 also notifies the signal processing unit 105 of the determined gain and controls the gain of the signal processing unit 105.
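 Purely as an illustration of the kind of decision the AE control unit makes, the following Python sketch converts a measured luminance level into shutter and gain adjustments; the target level, the EV limits, and the split between shutter and gain are assumptions for illustration, not the embodiment's actual control law:

import math

def decide_exposure(ae_detection_value, target_level=118.0,
                    f_number=4.0, shutter=1 / 60, gain=1.0):
    # ae_detection_value: integral luminance level of the AE detection area(s).
    # error_ev is the correction, in stops, needed to reach the target level:
    # positive when the image is too dark, negative when it is too bright.
    error_ev = math.log2(target_level / max(ae_detection_value, 1e-6))
    # Spend up to +/-2 EV on the shutter time and put any remainder on the
    # gain; the aperture is left fixed in this toy example.
    shutter_step = max(-2.0, min(2.0, error_ev))
    shutter *= 2 ** shutter_step
    gain *= 2 ** (error_ev - shutter_step)
    return f_number, shutter, gain

# e.g. decide_exposure(10.0) lengthens the shutter by 2 stops and raises
# the gain by roughly 1.6 stops.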
 Under the control of the control unit 101, for example when the shutter button is pressed, the image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114. The recording medium 114 is, for example, a recording medium such as a memory card, and is easily attachable to and detachable from the digital still camera 400.
 The flowchart of FIG. 10 shows an example of the procedure of the AE processing in the digital still camera 400 of FIG. 9. In step ST41, the AE processing is started, for example, by turning on the power. Thereafter, the processing of this flowchart is repeated for each frame. After step ST41, in step ST42, an image signal is read from the imaging element 104, and in step ST43, the signal processing unit 105 generates image data.
 Next, in step ST44, the subject recognition unit 108 recognizes the subject designated, for example, by the user's touch panel operation, and in step ST45, the subject tracking unit 109 detects the subject position from the image data and obtains position information (including shape information) indicating the position of the subject on the captured image. Next, in step ST46, the AE detection area setting unit 401 sets an AE detection area (AE detection frame) at the subject position within the imaging area. Next, in step ST47, the AE detection processing unit 402 acquires the AE detection value of the AE detection area from the image data.
 Next, in step ST48, the AE control unit 403 determines the aperture position, shutter speed, and gain with which proper exposure is obtained from the AE detection value. Then, in step ST49, the AE control unit 403 instructs the imaging lens 103 on the determined aperture position and shutter speed, and instructs the signal processing unit 105 on the determined gain.
 Next, in step ST50, the imaging lens 103 is driven to the instructed aperture position and set to the instructed shutter speed. Next, in step ST51, the signal processing unit 105 sets the instructed gain. Then, in step ST52, the series of AE processing ends.
 In the AE processing (see FIG. 10) of the digital still camera 400 of FIG. 9 described above, the AE detection area (AE detection frame) is set after subject recognition is performed, and AE detection is then started to obtain the AE detection value of the AE detection area. Therefore, for a moving subject, the subject position at the time of subject detection and the subject position at the time AE detection is started differ from each other, so it is difficult to obtain an AE detection value corresponding to the subject position, and highly accurate AE control is impossible.
 FIG. 11 shows a configuration example of a digital still camera 500 as a third embodiment. In this digital still camera 500, parts corresponding to those of the digital still cameras 200 and 400 in FIGS. 3 and 9 are denoted by the same reference numerals, and detailed descriptions thereof are omitted as appropriate. The digital still camera 500 includes a control unit 101, an operation unit 102, an imaging lens 103, an imaging element 104, a signal processing unit 105, a display control unit 106, a display unit 107, a subject recognition unit 108, a subject tracking unit 109, an AE detection processing unit 501, an AE detection storage unit 502, an AE detection value generation unit 503, an AE control unit 403, an image recording processing unit 113, and a recording medium 114.
 The control unit 101 controls the operation of each unit of the digital still camera 500 based on a control program. The operation unit 102 is connected to the control unit 101 and constitutes a user interface that accepts various operations by the user. The imaging lens 103 includes a plurality of lenses and the like, condenses light incident from the subject, and guides it to the imaging surface (imaging area) of the imaging element 104. The imaging lens 103 has a focus lens, and focus control is possible by driving this focus lens. The imaging lens 103 also has an aperture and a shutter, and exposure control is possible by controlling the aperture position and the shutter speed.
 The signal processing unit 105 applies analog signal processing such as amplification to the analog image signal from the imaging element 104 and then performs A/D (Analog/Digital) conversion on the resulting image signal. The signal processing unit 105 also applies digital signal processing such as noise removal to the image data represented by the digital signal obtained by the A/D conversion, and supplies the resulting image data to the display control unit 106, the subject recognition unit 108, the AE detection processing unit 501, and the image recording processing unit 113. The display control unit 106 causes the display unit 107 to display a captured image corresponding to the image data from the signal processing unit 105.
 The subject recognition unit 108 recognizes the subject from the image data and supplies information such as its feature amount to the subject tracking unit 109 as tracking target information. The subject tracking unit 109 detects the subject position from the image data using the tracking target information from the subject recognition unit 108 and obtains subject position information (including shape information).
 The subject tracking unit 109 supplies the subject position information to the display control unit 106. Based on this subject position information, the display control unit 106 displays a tracking frame superimposed on the captured image. The subject tracking unit 109 also supplies the subject position information to the AE detection value generation unit 503 together with an image data ID that identifies the frame of image data used when the subject position was determined.
 The AE detection processing unit 501 acquires, from the image data supplied from the signal processing unit 105, AE detection values (luminance level values) of AE detection areas arranged as a plurality of photometric areas in the imaging area, for example, a plurality of AE detection areas tiled over the imaging area (arranged in the same manner as the AF detection areas shown in FIG. 4). The AE detection processing unit 501 supplies the AE detection value group to the AE detection storage unit 502 together with an image data ID identifying the frame of image data used to obtain those detection values, and the values are temporarily stored there.
 Based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AE detection value generation unit 503 takes out, from the AE detection storage unit 502, the AE detection values of a predetermined number of AE detection areas corresponding to the subject position specified by the subject position information, out of the detection value group associated with the same image data ID, and integrates (averages) them to generate an AE detection value at the subject position (integral AE detection value).
 The method of setting the predetermined number of AE detection areas corresponding to the subject position specified by the subject position information is not particularly limited. For example, all of the AE detection areas in which the subject is detected based on the subject position information may be set as the predetermined number of AE detection areas to be specified. Alternatively, among the AE detection areas in which the subject is detected, only the areas in which the subject information occupies at least an arbitrary proportion, for example 50% or more, may be set as the predetermined number of AE detection areas to be specified. Further, among the AE detection areas in which the subject is detected, the areas containing anything other than the subject information (such as background) may be excluded, and the remaining areas may be set as the predetermined number of AE detection areas to be specified.
 The AE detection value generation unit 503 supplies the generated AE detection value to the AE control unit 403. The AE control unit 403 determines, from the AE detection value, the aperture position and shutter speed of the imaging lens 103 and the gain of the signal processing unit 105 so that proper exposure is obtained. The AE control unit 403 then notifies the imaging lens 103 of the determined aperture position and shutter speed and controls the aperture position and shutter speed of the imaging lens 103. The AE control unit 403 also notifies the signal processing unit 105 of the determined gain and controls the gain of the signal processing unit 105.
 Under the control of the control unit 101, for example when the shutter button is pressed, the image recording processing unit 113 compresses the image data from the signal processing unit 105 according to a predetermined compression method such as the JPEG (Joint Photographic Experts Group) method, and records the compressed image data on the recording medium 114. The recording medium 114 is, for example, a recording medium such as a memory card, and is easily attachable to and detachable from the digital still camera 500.
 The flowchart of FIG. 12 shows an example of the procedure of the AE processing in the digital still camera 500 of FIG. 11. In step ST61, the AE processing is started, for example, by turning on the power. Thereafter, the processing of this flowchart is repeated for each frame. After the processing of step ST61, in step ST62, an image signal is read from the imaging element 104, and in step ST63, the signal processing unit 105 generates image data. Next, the processing of steps ST64 and ST65 is performed, and in parallel, the processing of steps ST66 and ST67 is performed.
 In step ST64, the subject recognition unit 108 recognizes the subject designated, for example, by the user's touch panel operation, and in step ST65, the subject tracking unit 109 detects the subject position from the image data and obtains position information (including shape information) indicating the position of the subject on the captured image. Meanwhile, in step ST66, the AE detection processing unit 501 acquires the AE detection values of the plurality of AE detection areas arranged in the imaging area, and in step ST67, the detection values of all the detection areas, that is, the detection value group, are stored in the AE detection storage unit 502 together with an image data ID identifying the frame of image data used to obtain them.
 Next, in step ST68, based on the subject position information (including shape information) and the image data ID supplied from the subject tracking unit 109, the AE detection value generation unit 503 acquires, from the AE detection storage unit 502, the AE detection values of the predetermined number of AE detection areas corresponding to the subject position specified by the subject position information, out of the detection value group associated with the same image data ID, and integrates (averages) them to generate the AE detection value at the subject position (integral AE detection value).
 Next, in step ST69, the AE control unit 403 determines the aperture position, shutter speed, and gain with which proper exposure is obtained from the AE detection value. Then, in step ST70, the AE control unit 403 instructs the imaging lens 103 on the determined aperture position and shutter speed, and instructs the signal processing unit 105 on the determined gain.
 Next, in step ST71, the imaging lens 103 is driven to the instructed aperture position and set to the instructed shutter speed. Next, in step ST72, the signal processing unit 105 sets the instructed gain. Then, in step ST73, the series of AE processing ends.
 In the digital still camera 500 shown in FIG. 11, the subject position detection processing and the AE detection processing are performed in parallel, and the AE detection values are stored in the AE detection storage unit 502, so that it is possible to refer to the AE detection values generated from the image data of the same frame as the frame used when the subject position was detected. Therefore, the AE detection areas do not constantly lag behind a moving subject, the AE detection value for the subject can be obtained correctly, and highly accurate AE control becomes possible.
 Further, in the digital still camera 500 shown in FIG. 11, since the AE detection areas are tiled over all or almost all of the imaging area (arranged in the same manner as the AF detection areas shown in FIG. 4), the predetermined number of AE detection areas corresponding to the subject position can be specified appropriately, the AE detection value for the subject can be obtained accurately, and AE control can be performed with high accuracy.
 <4. Application example to a mobile body>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 13 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 13, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. In addition, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (Interface) 12053 are shown as the functional configuration of the integrated control unit 12050.
 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle in accordance with various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like.
 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body in accordance with various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device substituting for a key, or signals of various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on a road surface, and the like.
 The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit 12031 can output the electric signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and the vehicle interior information detection unit 12040 may calculate the degree of fatigue or concentration of the driver or determine whether the driver is dozing off, based on the detection information input from the driver state detection unit 12041.
 The microcomputer 12051 can calculate control target values for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation of the vehicle, following travel based on the inter-vehicle distance, vehicle speed maintenance travel, vehicle collision warning, vehicle lane departure warning, and the like.
 The microcomputer 12051 can also perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 The microcomputer 12051 can also output control commands to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching the high beam to the low beam.
 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying an occupant of the vehicle or the outside of the vehicle of information. In the example of FIG. 13, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are shown as examples of the output device. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 図14は、撮像部12031の設置位置の例を示す図である。 FIG. 14 is a diagram illustrating an example of the installation position of the imaging unit 12031.
 図14では、撮像部12031として、撮像部12101、12102、12103、12104、12105を有する。 In FIG. 14 , imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.
 撮像部12101、12102、12103、12104、12105は、例えば、車両12100のフロントノーズ、サイドミラー、リアバンパ、バックドア及び車室内のフロントガラスの上部等の位置に設けられる。フロントノーズに備えられる撮像部12101及び車室内のフロントガラスの上部に備えられる撮像部12105は、主として車両12100の前方の画像を取得する。サイドミラーに備えられる撮像部12102、12103は、主として車両12100の側方の画像を取得する。リアバンパ又はバックドアに備えられる撮像部12104は、主として車両12100の後方の画像を取得する。車室内のフロントガラスの上部に備えられる撮像部12105は、主として先行車両又は、歩行者、障害物、信号機、交通標識又は車線等の検出に用いられる。 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, on the front nose of the vehicle 12100, a side mirror, a rear bumper, a back door, an upper portion of a windshield of a vehicle interior, and the like. The imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle cabin mainly acquire an image in front of the vehicle 12100. The imaging units 12102 and 12103 included in the side mirror mainly acquire an image of the side of the vehicle 12100. The imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100. The imaging unit 12105 provided on the top of the windshield in the passenger compartment is mainly used to detect a leading vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
 なお、図14には、撮像部12101ないし12104の撮影範囲の一例が示されている。撮像範囲12111は、フロントノーズに設けられた撮像部12101の撮像範囲を示し、撮像範囲12112,12113は、それぞれサイドミラーに設けられた撮像部12102,12103の撮像範囲を示し、撮像範囲12114は、リアバンパ又はバックドアに設けられた撮像部12104の撮像範囲を示す。例えば、撮像部12101ないし12104で撮像された画像データが重ね合わせられることにより、車両12100を上方から見た俯瞰画像が得られる。 Note that FIG. 14 shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above is obtained.
 撮像部12101ないし12104の少なくとも1つは、距離情報を取得する機能を有していてもよい。例えば、撮像部12101ないし12104の少なくとも1つは、複数の撮像素子からなるステレオカメラであってもよいし、位相差検出用の画素を有する撮像素子であってもよい。 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging devices, or an imaging device having pixels for phase difference detection.
 例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を基に、撮像範囲12111ないし12114内における各立体物までの距離と、この距離の時間的変化(車両12100に対する相対速度)を求めることにより、特に車両12100の進行路上にある最も近い立体物で、車両12100と略同じ方向に所定の速度(例えば、0km/h以上)で走行する立体物を先行車として抽出することができる。さらに、マイクロコンピュータ12051は、先行車の手前に予め確保すべき車間距離を設定し、自動ブレーキ制御(追従停止制御も含む)や自動加速制御(追従発進制御も含む)等を行うことができる。このように運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the travel path of the vehicle 12100 and is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and can perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. In this way, it is possible to perform cooperative control aimed at automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
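 The following Python sketch is added only as an illustration of the preceding-vehicle extraction described in the paragraph above; it is not part of the original disclosure, and the data structure, thresholds, and function names are assumptions introduced for the example.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackedObject:
    distance_m: float          # distance from the own vehicle
    speed_kmh: float           # object speed along the road
    heading_offset_deg: float  # deviation from the own travel direction
    on_travel_path: bool       # True if the object lies on the own travel path

def extract_preceding_vehicle(objects: List[TrackedObject],
                              min_speed_kmh: float = 0.0,
                              max_heading_offset_deg: float = 10.0) -> Optional[TrackedObject]:
    # Keep only objects on the travel path that move in roughly the same
    # direction at or above the predetermined speed, then take the nearest one.
    candidates = [o for o in objects
                  if o.on_travel_path
                  and o.speed_kmh >= min_speed_kmh
                  and abs(o.heading_offset_deg) <= max_heading_offset_deg]
    return min(candidates, key=lambda o: o.distance_m) if candidates else None

if __name__ == "__main__":
    objs = [TrackedObject(35.0, 40.0, 2.0, True),    # same-direction car ahead
            TrackedObject(20.0, 50.0, 180.0, False)] # oncoming vehicle, off the path
    print(extract_preceding_vehicle(objs))

 The selection mirrors the criterion in the text: objects on the travel path, moving in substantially the same direction at a predetermined speed (0 km/h or more), with the nearest one taken as the preceding vehicle.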
 例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を元に、立体物に関する立体物データを、2輪車、普通車両、大型車両、歩行者、電柱等その他の立体物に分類して抽出し、障害物の自動回避に用いることができる。例えば、マイクロコンピュータ12051は、車両12100の周辺の障害物を、車両12100のドライバが視認可能な障害物と視認困難な障害物とに識別する。そして、マイクロコンピュータ12051は、各障害物との衝突の危険度を示す衝突リスクを判断し、衝突リスクが設定値以上で衝突可能性がある状況であるときには、オーディオスピーカ12061や表示部12062を介してドライバに警報を出力することや、駆動系制御ユニット12010を介して強制減速や回避操舵を行うことで、衝突回避のための運転支援を行うことができる。 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify and extract three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, and use the data for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
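 As a rough, non-authoritative illustration of the collision-risk determination described above, the following sketch uses time-to-collision as the risk metric; the metric and the 2-second threshold are assumptions, not values taken from the disclosure.

def time_to_collision_s(distance_m: float, closing_speed_ms: float) -> float:
    # Seconds until collision if the current closing speed stays constant.
    if closing_speed_ms <= 0.0:          # not closing in on the obstacle
        return float("inf")
    return distance_m / closing_speed_ms

def assess_obstacle(distance_m: float, closing_speed_ms: float,
                    ttc_threshold_s: float = 2.0) -> str:
    # Warn (and request deceleration) when the time to collision falls below
    # the assumed threshold; otherwise keep monitoring the obstacle.
    ttc = time_to_collision_s(distance_m, closing_speed_ms)
    return "warn_and_brake" if ttc < ttc_threshold_s else "monitor"

print(assess_obstacle(15.0, 10.0))   # 1.5 s to collision -> warn_and_brake
print(assess_obstacle(60.0, 5.0))    # 12 s to collision  -> monitor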
 撮像部12101ないし12104の少なくとも1つは、赤外線を検出する赤外線カメラであってもよい。例えば、マイクロコンピュータ12051は、撮像部12101ないし12104の撮像画像中に歩行者が存在するか否かを判定することで歩行者を認識することができる。かかる歩行者の認識は、例えば赤外線カメラとしての撮像部12101ないし12104の撮像画像における特徴点を抽出する手順と、物体の輪郭を示す一連の特徴点にパターンマッチング処理を行って歩行者か否かを判別する手順によって行われる。マイクロコンピュータ12051が、撮像部12101ないし12104の撮像画像中に歩行者が存在すると判定し、歩行者を認識すると、音声画像出力部12052は、当該認識された歩行者に強調のための方形輪郭線を重畳表示するように、表示部12062を制御する。また、音声画像出力部12052は、歩行者を示すアイコン等を所望の位置に表示するように表示部12062を制御してもよい。 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 serving as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the outline of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is displayed superimposed on the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
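 The two-step pedestrian-recognition procedure described above (feature-point extraction followed by pattern matching on the contour) could be organized roughly as in the following sketch; the edge test, the matching criterion, and the tolerance are simplifying assumptions for illustration only.

import numpy as np

def extract_feature_points(image: np.ndarray, threshold: float) -> np.ndarray:
    # Step 1: candidate feature points = bright pixels that have at least one
    # non-bright 4-neighbour, i.e. points lying on the outline of an object.
    bright = image > threshold
    padded = np.pad(bright, 1, constant_values=False)
    all_neighbours_bright = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                             padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(bright & ~all_neighbours_bright)

def matches_pedestrian(points: np.ndarray, template: np.ndarray,
                       tolerance: float = 2.0) -> bool:
    # Step 2: crude pattern matching - every template point must have a
    # feature point within the given tolerance (in pixels).
    if len(points) == 0:
        return False
    return all(np.min(np.linalg.norm(points - t, axis=1)) <= tolerance
               for t in template)

img = np.zeros((6, 6))
img[2:5, 2:4] = 1.0                           # a small bright blob
pts = extract_feature_points(img, threshold=0.5)
print(matches_pedestrian(pts, template=pts))  # trivially True for its own outline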
 以上、本開示に係る技術が適用され得る車両制御システムの一例について説明した。本開示に係る技術は、以上説明した構成のうち、撮像部12101ないし12104から被写体(先行車両又は、歩行者、障害物、信号機、交通標識又は車線等)との距離情報を取得する際にも用いられる。 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can also be used when acquiring distance information to a subject (a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like) from the imaging units 12101 to 12104.
 測距エリアを全枠検波しておくことにより、移動体から被写体の距離を測定する場合であってもより精度の高い距離測定が可能となる。ここで、測距エリアから検波された検波値を基に被写体との距離を測定する方法としては、特に限定されないが、以下のような公知の方法が用いられる。すなわち、撮像部は測距窓を有しており、三角測量の方法により、測距エリアから検波される検波値に基づいて被写体との距離を測定することができる。 By performing detection over all frames of the ranging areas in advance, more accurate distance measurement becomes possible even when the distance to a subject is measured from a moving body. Here, the method of measuring the distance to the subject based on the detection values detected from the ranging areas is not particularly limited, and a known method such as the following may be used. That is, the imaging unit has a ranging window, and the distance to the subject can be measured by a triangulation method based on the detection values detected from the ranging areas.
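 Since the disclosure refers only to a known triangulation method, the following sketch shows generic stereo-baseline triangulation as one possibility; the focal length, baseline, and disparity values are assumed for the example and are not taken from the disclosure.

def distance_from_disparity(focal_length_px: float,
                            baseline_m: float,
                            disparity_px: float) -> float:
    # Similar triangles: depth Z = f * B / d, with f in pixels, baseline B in
    # metres, and disparity d in pixels between the two viewpoints.
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px

# Example: 1400 px focal length, 12 cm baseline, 8 px disparity -> 21 m.
print(distance_from_disparity(1400.0, 0.12, 8.0))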
 <5.内視鏡手術システムへの応用例>
 本開示に係る技術(本技術)は、様々な製品へ応用することができる。例えば、本開示に係る技術は、内視鏡手術システムに適用されてもよい。
<5. Application example to endoscopic surgery system>
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
 図15は、本開示に係る技術(本技術)が適用され得る内視鏡手術システムの概略的な構成の一例を示す図である。 FIG. 15 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technology (the present technology) according to the present disclosure can be applied.
 図15では、術者(医師)11131が、内視鏡手術システム11000を用いて、患者ベッド11133上の患者11132に手術を行っている様子が図示されている。図示するように、内視鏡手術システム11000は、内視鏡11100と、気腹チューブ11111やエネルギー処置具11112等の、その他の術具11110と、内視鏡11100を支持する支持アーム装置11120と、内視鏡下手術のための各種の装置が搭載されたカート11200と、から構成される。 FIG. 15 illustrates a situation in which a surgeon (doctor) 11131 is performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000. As illustrated, the endoscopic surgery system 11000 includes an endoscope 11100, other surgical instruments 11110 such as an insufflation tube 11111 and an energy treatment instrument 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
 内視鏡11100は、先端から所定の長さの領域が患者11132の体腔内に挿入される鏡筒11101と、鏡筒11101の基端に接続されるカメラヘッド11102と、から構成される。図示する例では、硬性の鏡筒11101を有するいわゆる硬性鏡として構成される内視鏡11100を図示しているが、内視鏡11100は、軟性の鏡筒を有するいわゆる軟性鏡として構成されてもよい。 The endoscope 11100 includes a lens barrel 11101, a region of a predetermined length from the tip of which is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the proximal end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid endoscope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible endoscope having a flexible lens barrel.
 鏡筒11101の先端には、対物レンズが嵌め込まれた開口部が設けられている。内視鏡11100には光源装置11203が接続されており、当該光源装置11203によって生成された光が、鏡筒11101の内部に延設されるライトガイドによって当該鏡筒の先端まで導光され、対物レンズを介して患者11132の体腔内の観察対象に向かって照射される。なお、内視鏡11100は、直視鏡であってもよいし、斜視鏡又は側視鏡であってもよい。 An opening into which an objective lens is fitted is provided at the tip of the lens barrel 11101. A light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101 and is emitted through the objective lens toward the observation target in the body cavity of the patient 11132. Note that the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 カメラヘッド11102の内部には光学系及び撮像素子が設けられており、観察対象からの反射光(観察光)は当該光学系によって当該撮像素子に集光される。当該撮像素子によって観察光が光電変換され、観察光に対応する電気信号、すなわち観察像に対応する画像信号が生成される。当該画像信号は、RAWデータとしてカメラコントロールユニット(CCU: Camera Control Unit)11201に送信される。 An optical system and an imaging device are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is condensed on the imaging device by the optical system. The observation light is photoelectrically converted by the imaging element to generate an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image. The image signal is transmitted as RAW data to a camera control unit (CCU: Camera Control Unit) 11201.
 CCU11201は、CPU(Central Processing Unit)やGPU(Graphics Processing Unit)等によって構成され、内視鏡11100及び表示装置11202の動作を統括的に制御する。さらに、CCU11201は、カメラヘッド11102から画像信号を受け取り、その画像信号に対して、例えば現像処理(デモザイク処理)等の、当該画像信号に基づく画像を表示するための各種の画像処理を施す。 The CCU 11201 is configured by a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and centrally controls the operations of the endoscope 11100 and the display device 11202. Furthermore, the CCU 11201 receives an image signal from the camera head 11102 and performs various image processing for displaying an image based on the image signal, such as development processing (demosaicing processing), on the image signal.
 表示装置11202は、CCU11201からの制御により、当該CCU11201によって画像処理が施された画像信号に基づく画像を表示する。 The display device 11202 displays an image based on an image signal subjected to image processing by the CCU 11201 under control of the CCU 11201.
 光源装置11203は、例えばLED(light emitting diode)等の光源から構成され、術部等を撮影する際の照射光を内視鏡11100に供給する。 The light source device 11203 includes, for example, a light source such as an LED (light emitting diode), and supplies the endoscope 11100 with irradiation light at the time of imaging an operation part or the like.
 入力装置11204は、内視鏡手術システム11000に対する入力インタフェースである。ユーザは、入力装置11204を介して、内視鏡手術システム11000に対して各種の情報の入力や指示入力を行うことができる。例えば、ユーザは、内視鏡11100による撮像条件(照射光の種類、倍率及び焦点距離等)を変更する旨の指示等を入力する。 The input device 11204 is an input interface to the endoscopic surgery system 11000. The user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204. For example, the user inputs an instruction to change the imaging condition (type of irradiated light, magnification, focal length, and the like) by the endoscope 11100, and the like.
 処置具制御装置11205は、組織の焼灼、切開又は血管の封止等のためのエネルギー処置具11112の駆動を制御する。気腹装置11206は、内視鏡11100による視野の確保及び術者の作業空間の確保の目的で、患者11132の体腔を膨らめるために、気腹チューブ11111を介して当該体腔内にガスを送り込む。レコーダ11207は、手術に関する各種の情報を記録可能な装置である。プリンタ11208は、手術に関する各種の情報を、テキスト、画像又はグラフ等各種の形式で印刷可能な装置である。 The treatment tool control device 11205 controls the driving of the energy treatment instrument 11112 for cauterizing or incising tissue, sealing blood vessels, and the like. The insufflation device 11206 sends gas into the body cavity of the patient 11132 through the insufflation tube 11111 in order to inflate the body cavity for the purpose of securing the field of view of the endoscope 11100 and securing a working space for the operator. The recorder 11207 is a device capable of recording various kinds of information regarding surgery. The printer 11208 is a device capable of printing various kinds of information regarding surgery in various formats such as text, images, and graphs.
 なお、内視鏡11100に術部を撮影する際の照射光を供給する光源装置11203は、例えばLED、レーザ光源又はこれらの組み合わせによって構成される白色光源から構成することができる。RGBレーザ光源の組み合わせにより白色光源が構成される場合には、各色(各波長)の出力強度及び出力タイミングを高精度に制御することができるため、光源装置11203において撮像画像のホワイトバランスの調整を行うことができる。また、この場合には、RGBレーザ光源それぞれからのレーザ光を時分割で観察対象に照射し、その照射タイミングに同期してカメラヘッド11102の撮像素子の駆動を制御することにより、RGBそれぞれに対応した画像を時分割で撮像することも可能である。当該方法によれば、当該撮像素子にカラーフィルタを設けなくても、カラー画像を得ることができる。 The light source device 11203, which supplies the endoscope 11100 with irradiation light for imaging the surgical site, can be configured, for example, by a white light source constituted by an LED, a laser light source, or a combination thereof. When the white light source is constituted by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision, so the white balance of the captured image can be adjusted in the light source device 11203. In this case, it is also possible to capture images corresponding to each of R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the imaging element of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter in the imaging element.
 また、光源装置11203は、出力する光の強度を所定の時間ごとに変更するようにその駆動が制御されてもよい。その光の強度の変更のタイミングに同期してカメラヘッド11102の撮像素子の駆動を制御して時分割で画像を取得し、その画像を合成することにより、いわゆる黒つぶれ及び白とびのない高ダイナミックレンジの画像を生成することができる。 The driving of the light source device 11203 may also be controlled so that the intensity of the output light is changed at predetermined intervals. By controlling the driving of the imaging element of the camera head 11102 in synchronization with the timing of the change in light intensity, acquiring images in a time-division manner, and combining those images, an image with a high dynamic range free from so-called blocked-up shadows and blown-out highlights can be generated.
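 As a hedged illustration of combining time-division images captured at different light intensities into a high-dynamic-range image, the following sketch uses a simple mid-tone weighting scheme; the weighting and normalization are assumptions and not the method of the disclosure.

import numpy as np

def fuse_exposures(frames, gains):
    # frames: list of float images in [0, 1] captured at different light levels.
    # gains: relative illumination intensity used for each frame (e.g. 1.0, 0.25).
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for frame, gain in zip(frames, gains):
        weight = 1.0 - np.abs(frame - 0.5) * 2.0   # trust mid-tones, distrust clipped pixels
        acc += weight * (frame / gain)             # normalise back to a common scale
        weight_sum += weight
    return acc / np.maximum(weight_sum, 1e-6)

bright = np.clip(np.linspace(0.0, 2.0, 8).reshape(2, 4), 0.0, 1.0)  # highlights clipped
dark = np.linspace(0.0, 0.5, 8).reshape(2, 4)                       # shadows preserved
print(fuse_exposures([bright, dark], [1.0, 0.25]))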
 また、光源装置11203は、特殊光観察に対応した所定の波長帯域の光を供給可能に構成されてもよい。特殊光観察では、例えば、体組織における光の吸収の波長依存性を利用して、通常の観察時における照射光(すなわち、白色光)に比べて狭帯域の光を照射することにより、粘膜表層の血管等の所定の組織を高コントラストで撮影する、いわゆる狭帯域光観察(Narrow Band Imaging)が行われる。あるいは、特殊光観察では、励起光を照射することにより発生する蛍光により画像を得る蛍光観察が行われてもよい。蛍光観察では、体組織に励起光を照射し当該体組織からの蛍光を観察すること(自家蛍光観察)、又はインドシアニングリーン(ICG)等の試薬を体組織に局注するとともに当該体組織にその試薬の蛍光波長に対応した励起光を照射し蛍光像を得ること等を行うことができる。光源装置11203は、このような特殊光観察に対応した狭帯域光及び/又は励起光を供給可能に構成され得る。 The light source device 11203 may also be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as a blood vessel in the mucosal surface layer is imaged with high contrast by irradiating light in a narrower band than the irradiation light during normal observation (that is, white light), utilizing the wavelength dependence of light absorption in body tissue. Alternatively, in special light observation, fluorescence observation may be performed in which an image is obtained from fluorescence generated by irradiation with excitation light. In fluorescence observation, it is possible, for example, to irradiate body tissue with excitation light and observe the fluorescence from the body tissue (autofluorescence observation), or to locally inject a reagent such as indocyanine green (ICG) into the body tissue and irradiate the body tissue with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image. The light source device 11203 can be configured to be able to supply narrow band light and/or excitation light corresponding to such special light observation.
 図16は、図15に示すカメラヘッド11102及びCCU11201の機能構成の一例を示すブロック図である。 FIG. 16 is a block diagram showing an example of the functional configuration of the camera head 11102 and the CCU 11201 shown in FIG. 15.
 カメラヘッド11102は、レンズユニット11401と、撮像部11402と、駆動部11403と、通信部11404と、カメラヘッド制御部11405と、を有する。CCU11201は、通信部11411と、画像処理部11412と、制御部11413と、を有する。カメラヘッド11102とCCU11201とは、伝送ケーブル11400によって互いに通信可能に接続されている。 The camera head 11102 includes a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 includes a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
 レンズユニット11401は、鏡筒11101との接続部に設けられる光学系である。鏡筒11101の先端から取り込まれた観察光は、カメラヘッド11102まで導光され、当該レンズユニット11401に入射する。レンズユニット11401は、ズームレンズ及びフォーカスレンズを含む複数のレンズが組み合わされて構成される。 The lens unit 11401 is an optical system provided at a connection portion with the lens barrel 11101. The observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and is incident on the lens unit 11401. The lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
 撮像部11402を構成する撮像素子は、1つ(いわゆる単板式)であってもよいし、複数(いわゆる多板式)であってもよい。撮像部11402が多板式で構成される場合には、例えば各撮像素子によってRGBそれぞれに対応する画像信号が生成され、それらが合成されることによりカラー画像が得られてもよい。あるいは、撮像部11402は、3D(dimensional)表示に対応する右目用及び左目用の画像信号をそれぞれ取得するための1対の撮像素子を有するように構成されてもよい。3D表示が行われることにより、術者11131は術部における生体組織の奥行きをより正確に把握することが可能になる。なお、撮像部11402が多板式で構成される場合には、各撮像素子に対応して、レンズユニット11401も複数系統設けられ得る。 The imaging device constituting the imaging unit 11402 may be one (a so-called single-plate type) or a plurality (a so-called multi-plate type). When the imaging unit 11402 is configured as a multi-plate type, for example, an image signal corresponding to each of RGB may be generated by each imaging element, and a color image may be obtained by combining them. Alternatively, the imaging unit 11402 may be configured to have a pair of imaging devices for acquiring image signals for right eye and left eye corresponding to 3D (dimensional) display. By performing 3D display, the operator 11131 can more accurately grasp the depth of the living tissue in the operation site. When the imaging unit 11402 is configured as a multi-plate type, a plurality of lens units 11401 may be provided corresponding to each imaging element.
 また、撮像部11402は、必ずしもカメラヘッド11102に設けられなくてもよい。例えば、撮像部11402は、鏡筒11101の内部に、対物レンズの直後に設けられてもよい。 In addition, the imaging unit 11402 may not necessarily be provided in the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
 駆動部11403は、アクチュエータによって構成され、カメラヘッド制御部11405からの制御により、レンズユニット11401のズームレンズ及びフォーカスレンズを光軸に沿って所定の距離だけ移動させる。これにより、撮像部11402による撮像画像の倍率及び焦点が適宜調整され得る。 The driving unit 11403 is configured by an actuator, and moves the zoom lens and the focusing lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. Thereby, the magnification and the focus of the captured image by the imaging unit 11402 can be appropriately adjusted.
 通信部11404は、CCU11201との間で各種の情報を送受信するための通信装置によって構成される。通信部11404は、撮像部11402から得た画像信号をRAWデータとして伝送ケーブル11400を介してCCU11201に送信する。 The communication unit 11404 is configured of a communication device for transmitting and receiving various types of information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 to the CCU 11201 as RAW data via the transmission cable 11400.
 また、通信部11404は、CCU11201から、カメラヘッド11102の駆動を制御するための制御信号を受信し、カメラヘッド制御部11405に供給する。当該制御信号には、例えば、撮像画像のフレームレートを指定する旨の情報、撮像時の露出値を指定する旨の情報、並びに/又は撮像画像の倍率及び焦点を指定する旨の情報等、撮像条件に関する情報が含まれる。 The communication unit 11404 also receives a control signal for controlling the driving of the camera head 11102 from the CCU 11201 and supplies it to the camera head control unit 11405. The control signal includes information about imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
 なお、上記のフレームレートや露出値、倍率、焦点等の撮像条件は、ユーザによって適宜指定されてもよいし、取得された画像信号に基づいてCCU11201の制御部11413によって自動的に設定されてもよい。後者の場合には、いわゆるAE(Auto Exposure)機能、AF(Auto Focus)機能及びAWB(Auto White Balance)機能が内視鏡11100に搭載されていることになる。 Note that the imaging conditions such as the frame rate, exposure value, magnification, and focus described above may be appropriately designated by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function are incorporated in the endoscope 11100.
 カメラヘッド制御部11405は、通信部11404を介して受信したCCU11201からの制御信号に基づいて、カメラヘッド11102の駆動を制御する。 The camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
 通信部11411は、カメラヘッド11102との間で各種の情報を送受信するための通信装置によって構成される。通信部11411は、カメラヘッド11102から、伝送ケーブル11400を介して送信される画像信号を受信する。 The communication unit 11411 is configured by a communication device for transmitting and receiving various types of information to and from the camera head 11102. The communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
 また、通信部11411は、カメラヘッド11102に対して、カメラヘッド11102の駆動を制御するための制御信号を送信する。画像信号や制御信号は、電気通信や光通信等によって送信することができる。 Further, the communication unit 11411 transmits a control signal for controlling driving of the camera head 11102 to the camera head 11102. The image signal and the control signal can be transmitted by telecommunication or optical communication.
 画像処理部11412は、カメラヘッド11102から送信されたRAWデータである画像信号に対して各種の画像処理を施す。 An image processing unit 11412 performs various types of image processing on an image signal that is RAW data transmitted from the camera head 11102.
 制御部11413は、内視鏡11100による術部等の撮像、及び、術部等の撮像により得られる撮像画像の表示に関する各種の制御を行う。例えば、制御部11413は、カメラヘッド11102の駆動を制御するための制御信号を生成する。 The control unit 11413 performs various types of control regarding imaging of a surgical site and the like by the endoscope 11100 and display of a captured image obtained by imaging of the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
 また、制御部11413は、画像処理部11412によって画像処理が施された画像信号に基づいて、術部等が映った撮像画像を表示装置11202に表示させる。この際、制御部11413は、各種の画像認識技術を用いて撮像画像内における各種の物体を認識してもよい。例えば、制御部11413は、撮像画像に含まれる物体のエッジの形状や色等を検出することにより、鉗子等の術具、特定の生体部位、出血、エネルギー処置具11112の使用時のミスト等を認識することができる。制御部11413は、表示装置11202に撮像画像を表示させる際に、その認識結果を用いて、各種の手術支援情報を当該術部の画像に重畳表示させてもよい。手術支援情報が重畳表示され、術者11131に提示されることにより、術者11131の負担を軽減することや、術者11131が確実に手術を進めることが可能になる。 Further, the control unit 11413 causes the display device 11202 to display a captured image showing the surgical site or the like, based on the image signal that has undergone image processing by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical instruments such as forceps, specific body parts, bleeding, mist during use of the energy treatment instrument 11112, and the like. When displaying the captured image on the display device 11202, the control unit 11413 may use the recognition result to superimpose various kinds of surgical support information on the image of the surgical site. By superimposing the surgical support information and presenting it to the surgeon 11131, the burden on the surgeon 11131 can be reduced and the surgeon 11131 can proceed with the surgery reliably.
 カメラヘッド11102及びCCU11201を接続する伝送ケーブル11400は、電気信号の通信に対応した電気信号ケーブル、光通信に対応した光ファイバ、又はこれらの複合ケーブルである。 A transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electric signal cable corresponding to communication of an electric signal, an optical fiber corresponding to optical communication, or a composite cable of these.
 ここで、図示する例では、伝送ケーブル11400を用いて有線で通信が行われていたが、カメラヘッド11102とCCU11201との間の通信は無線で行われてもよい。 Here, in the illustrated example, communication is performed by wire communication using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
 以上、本開示に係る技術が適用され得る内視鏡手術システムの一例について説明した。本開示に係る技術は、内視鏡手術で用いられるカメラヘッド11102に対しても好適に用いることができる。例えば、血管や体組織を被写体として認識し、その結果に基づいて検波枠を特定し、この検波値を基にフォーカス制御をすることで、術時のAF精度が向上する。また、被写体を自動検出する場合は、本来注目したい血管や体組織などではなく、術具11110やウェスなどの人工物等が被写体として認識されやすいということがある。この場合においては、被写体が検出された枠以外の検波枠を特定するように制御することにより、術時のAF精度を向上させることができる。なお、ここでは、一例として内視鏡手術システムについて説明したが、本開示に係る技術は、その他、例えば、顕微鏡手術システム等に適用されてもよい。 An example of an endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can also be suitably used for the camera head 11102 used in endoscopic surgery. For example, by recognizing a blood vessel or body tissue as a subject, specifying detection frames based on the recognition result, and performing focus control based on the detection values, the AF accuracy during surgery is improved. In addition, when a subject is automatically detected, an artificial object such as the surgical instrument 11110 or a cloth may be more easily recognized as the subject than the blood vessel or body tissue that should originally be focused on. In this case, by performing control so as to specify detection frames other than the frames in which such a subject is detected, the AF accuracy during surgery can be improved. Although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to, for example, a microscopic surgery system or the like.
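 One way the frame selection described above could be organized is sketched below: detection (AF) frames that overlap a region recognized as an artificial object such as a surgical instrument are excluded before the focus value is chosen. The data structures, the overlap test, and the fallback rule are assumptions introduced for this example, not the disclosed implementation.

from typing import Dict, List, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height)

def overlaps(a: Rect, b: Rect) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def select_focus_value(detection_values: Dict[Rect, float],
                       excluded_regions: List[Rect]) -> float:
    # Keep only detection (AF) frames that do not overlap any excluded region
    # (e.g. a region recognised as a surgical instrument), then take the best
    # contrast value; fall back to all frames if everything was excluded.
    usable = [value for frame, value in detection_values.items()
              if not any(overlaps(frame, region) for region in excluded_regions)]
    if not usable:
        usable = list(detection_values.values())
    return max(usable)

values = {(0, 0, 32, 32): 0.42, (32, 0, 32, 32): 0.77}
print(select_focus_value(values, excluded_regions=[(40, 8, 16, 16)]))  # -> 0.42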
 <6.変形例>
 なお、上述実施の形態においては、コントラスト検出式によるAFの例を示したが、本技術は、位相差検出式によるAFとの併用でも好適に用いることができる。図17は本技術を用いた検波枠(測距枠)と位相差検出式の枠を組み合わせた場合の例を示している。図17では測距枠の数とエリアはいずれも位相差検出式の枠の数とエリアよりも大きいが、特にこれに限定されず測距枠の数とエリアが位相差検出式の枠の数とエリアより少なくかつ小さくても良く、また測距枠のエリアが大きく(小さく)、枠の数は少なく(多く)てもよい。また、測距枠の数とエリアが位相差検出式の枠の数とエリアと同じであってもよい。
<6. Modified example>
In the embodiment described above, an example of AF using the contrast detection method has been shown, but the present technology can also be suitably used in combination with AF using the phase difference detection method. FIG. 17 shows an example in which detection frames (ranging frames) according to the present technology are combined with phase difference detection frames. In FIG. 17, the number and area of the ranging frames are both larger than the number and area of the phase difference detection frames, but this is not a limitation: the number and area of the ranging frames may be smaller than those of the phase difference detection frames, the area of the ranging frames may be larger (or smaller) while the number of frames is smaller (or larger), and the number and area of the ranging frames may be the same as the number and area of the phase difference detection frames.
 ここで、図17の場合は特にハイブリッドAFにおける更なるAF精度の向上という効果も得られる。位相差検出式のAFでは高輝度部分や繰り返しパターンにおいて、AFの精度が悪化することが知られている。図17のようにコントラストAF検波枠(測距枠)と位相差検出式の枠を組み合わせ、更に測距枠を位相差検出式の枠の数よりも多く配置することによって、測距枠から得られたコントラストAF検波値により位相差検出式の枠内の高輝度判定や繰り返しパターン検出が可能になる。この処理により、従来から行われてきた位相差検出式AFによる高輝度や繰り返しパターン検出に加えて、コントラストAF検波枠(測距枠)による高密度な検出ができるようになり最終的なAFの精度を向上できる。 Here, the case of FIG. 17 also provides the additional effect of further improving AF accuracy in hybrid AF. It is known that the accuracy of phase difference detection AF deteriorates in high-luminance regions and for repetitive patterns. By combining contrast AF detection frames (ranging frames) with phase difference detection frames as shown in FIG. 17, and arranging more ranging frames than phase difference detection frames, high-luminance determination and repetitive-pattern detection within each phase difference detection frame become possible based on the contrast AF detection values obtained from the ranging frames. With this processing, in addition to the high-luminance and repetitive-pattern detection conventionally performed by phase difference detection AF, high-density detection using the contrast AF detection frames (ranging frames) becomes possible, so that the final AF accuracy can be improved.
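 The following sketch illustrates, under assumed thresholds, how dense contrast-AF detection values lying inside one phase-difference detection frame could be used to flag high-luminance regions and repetitive patterns; the specific tests are illustrative and not taken from the disclosure.

import numpy as np

def phase_diff_frame_is_reliable(contrast_values: np.ndarray,
                                 luminance_values: np.ndarray,
                                 luminance_limit: float = 0.9,
                                 repeat_tolerance: float = 0.05) -> bool:
    # contrast_values / luminance_values: detections from the dense ranging
    # frames that fall inside one phase-difference frame, normalised to [0, 1].
    if np.any(luminance_values > luminance_limit):
        return False      # high-luminance region: phase-difference AF is distrusted
    if np.std(contrast_values) < repeat_tolerance and np.mean(contrast_values) > 0.2:
        return False      # strong but near-identical contrast suggests a repetitive pattern
    return True

# A frame with uniform, non-trivial contrast is flagged as unreliable here.
print(phase_diff_frame_is_reliable(np.array([0.50, 0.51, 0.49]),
                                   np.array([0.40, 0.50, 0.45])))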
 また、上述の実施の形態は、例示という形態で本技術を開示しており、本技術の要旨を逸脱しない範囲で当業者が実施の形態の修正や代用をなし得ることは自明である。すなわち、本技術の要旨を判断するためには、特許請求の範囲を参酌すべきである。 Further, the above-described embodiment discloses the present technology in the form of exemplification, and it is obvious that those skilled in the art can make modifications and substitutions of the embodiment without departing from the scope of the present technology. That is, in order to determine the gist of the present technology, the claims should be taken into consideration.
 また、本技術は、以下のような構成を取ることもできる。
 (1)撮像部の撮像エリアから入力される画像信号を上記撮像エリアに複数配置される検波エリアにおいて検波し、全検波エリアのそれぞれに対応する検波値を生成する検波処理部と、
 上記画像信号に基づいて被写体を検出する被写体検出部と、
 上記被写体検出部の検出結果に基づいて上記全検波エリアから上記被写体に対応した検波エリアを設定し、上記検波処理部で生成された上記全検波エリアのそれぞれに対応する検波値から上記設定した検波エリアに対応する検波値を特定する制御部を備える
 画像処理装置。
 (2)上記検波エリアは、測距エリアである
 前記(1)に記載の画像処理装置。
 (3)上記制御部は、上記特定した検波値に基づいてフォーカス制御をする
 前記(2)に記載の画像処理装置。
 (4)上記検波エリアは、測光エリアである
 前記(1)に記載の画像処理装置。
 (5)上記制御部は、上記特定した検波値に基づいて露出制御をする
 前記(4)に記載の画像処理装置。
 (6)上記検波処理部で生成された検波値を保存する検波値保存部をさらに備える
 前記(1)から(5)のいずれかに記載の画像処理装置。
 (7)上記制御部は、上記被写体検出部で所定フレームの画像信号に基づいて検出された被写体に対応して設定した検波エリアの検波値を、上記検波値保存部に保存されている上記所定フレームの画像信号を検波して生成された上記全検波エリアに対応する検波値から特定する
 前記(6)に記載の画像処理装置。
 (8)上記検波エリアは、上記撮像エリアの全エリアに敷き詰められている
 前記(1)から(7)のいずれかに記載の画像処理装置。
 (9)上記検波処理部における上記検波値を生成する処理と上記被写体検出部における上記被写体を検出する処理とが並行して行われる
 前記(1)から(8)のいずれかに記載の画像処理装置。
 (10)上記撮像エリアに複数の位相差検出エリアがさらに配置されており、
 上記測距エリアの数は上記位相差検出エリアの数より多く配置されている
 前記(2)または(3)に記載の画像処理装置。
 (11)上記被写体に対応した検波エリアの設定では、被写体検出がされるエリアが検波エリアとして設定される
 前記(1)から(10)のいずれかに記載の画像処理装置。
 (12)上記被写体に対応した検波エリアの設定では、被写体検出がされるエリアを除いたエリアが検波エリアとして設定される
 前記(1)から(10)のいずれかに記載の画像処理装置。
 (13)検波処理部が、撮像部の撮像エリアから入力される画像信号を上記撮像エリアに複数配置される検波エリアにおいて検波し、上記全検波エリアのそれぞれに対応する検波値を生成し、
 被写体検出部が、上記画像信号に基づいて被写体を検出し、
 制御部が、上記被写体検出部の検出結果に基づいて上記全検波エリアから上記被写体に対応した検波エリアを設定し、上記検波処理部で生成された上記全検波エリアのそれぞれに対応する検波値から上記設定した検波エリアに対応する検波値を特定する
 画像処理方法。
 (14)撮像部と、
 上記撮像部の撮像エリアから入力される画像信号を上記撮像エリアに複数配置される検波エリアにおいて検波し、上記全検波エリアのそれぞれに対応する検波値を生成する検波処理部と、
 上記画像信号に基づいて被写体を検出する被写体検出部と、
 上記被写体検出部の検出結果に基づいて上記全検波エリアから上記被写体に対応した検波エリアを設定し、上記検波処理部で生成された上記全検波エリアのそれぞれに対応する検波値から上記設定した検波エリアに対応する検波値を特定する制御部を備える
 撮像装置。
Furthermore, the present technology can also be configured as follows.
(1) An image processing apparatus including: a detection processing unit that detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas;
a subject detection unit that detects a subject based on the image signal; and
a control unit that sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
(2) The image processing apparatus according to (1), wherein the detection area is a ranging area.
(3) The image processing apparatus according to (2), wherein the control unit performs focus control based on the identified detection value.
(4) The image processing apparatus according to (1), wherein the detection area is a photometric area.
(5) The image processing apparatus according to (4), wherein the control unit performs exposure control based on the identified detection value.
(6) The image processing apparatus according to any one of (1) to (5), further including: a detection value storage unit that stores the detection value generated by the detection processing unit.
(7) The image processing apparatus according to (6), wherein the control unit specifies the detection value of the detection area set corresponding to the subject detected by the subject detection unit based on the image signal of a predetermined frame, from among the detection values, stored in the detection value storage unit, that correspond to all the detection areas and were generated by detecting the image signal of the predetermined frame.
(8) The image processing apparatus according to any one of (1) to (7), wherein the detection area is spread over the entire area of the imaging area.
(9) The image processing apparatus according to any one of (1) to (8), wherein the processing of generating the detection values in the detection processing unit and the processing of detecting the subject in the subject detection unit are performed in parallel.
(10) A plurality of phase difference detection areas are further arranged in the imaging area,
The image processing apparatus according to (2) or (3), wherein the number of the ranging areas is larger than the number of the phase difference detection areas.
(11) The image processing apparatus according to any one of (1) to (10), wherein in the setting of the detection area corresponding to the subject, the area where the subject is detected is set as a detection area.
(12) In the setting of the detection area corresponding to the subject, an area excluding the area where the subject is detected is set as a detection area. The image processing apparatus according to any one of (1) to (10).
(13) An image processing method in which: a detection processing unit detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas;
a subject detection unit detects a subject based on the image signal; and
a control unit sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
(14) An imaging apparatus including: an imaging unit;
a detection processing unit that detects an image signal input from an imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas;
a subject detection unit that detects a subject based on the image signal; and
a control unit that sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
 100・・・デジタルスチルカメラ
 101・・・制御部
 102・・・操作部
 103・・・撮像レンズ
 104・・・撮像素子
 105・・・信号処理部
 106・・・表示制御部
 107・・・表示部
 108・・・被写体認識部
 109・・・被写体追尾部
 110・・・AF検波エリア設定部
 111・・・AF検波処理部
 112・・・AF制御部
 113・・・画像記録処理部
 114・・・記録メディア
 200・・・デジタルスチルカメラ
 201・・・AF検波処理部
 202・・・AF検波格納部
 203・・・AF検波値生成部
 300・・・監視カメラ
 301・・・物体認識部
 302・・・物体追尾部
 303・・・AF検波値生成部
 400・・・デジタルスチルカメラ
 401・・・AE検波エリア設定部
 402・・・AE検波処理部
 403・・・AE制御部
 500・・・デジタルスチルカメラ
 501・・・AE検波処理部
 502・・・AE検波格納部
 503・・・AE検波値生成部
100: digital still camera
101: control unit
102: operation unit
103: imaging lens
104: imaging element
105: signal processing unit
106: display control unit
107: display unit
108: subject recognition unit
109: subject tracking unit
110: AF detection area setting unit
111: AF detection processing unit
112: AF control unit
113: image recording processing unit
114: recording medium
200: digital still camera
201: AF detection processing unit
202: AF detection storage unit
203: AF detection value generation unit
300: monitoring camera
301: object recognition unit
302: object tracking unit
303: AF detection value generation unit
400: digital still camera
401: AE detection area setting unit
402: AE detection processing unit
403: AE control unit
500: digital still camera
501: AE detection processing unit
502: AE detection storage unit
503: AE detection value generation unit

Claims (14)

  1.  撮像部の撮像エリアから入力される画像信号を前記撮像エリアに複数配置される検波エリアにおいて検波し、全検波エリアのそれぞれに対応する検波値を生成する検波処理部と、
     上記画像信号に基づいて被写体を検出する被写体検出部と、
     上記被写体検出部の検出結果に基づいて上記全検波エリアから上記被写体に対応した検波エリアを設定し、上記検波処理部で生成された上記全検波エリアのそれぞれに対応する検波値から上記設定した検波エリアに対応する検波値を特定する制御部を備える
     画像処理装置。
    An image processing apparatus comprising:
    a detection processing unit that detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas;
    a subject detection unit that detects a subject based on the image signal; and
    a control unit that sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
  2.  上記検波エリアは、測距エリアである
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein the detection area is a ranging area.
  3.  上記制御部は、上記特定した検波値に基づいてフォーカス制御をする
     請求項2に記載の画像処理装置。
    The image processing apparatus according to claim 2, wherein the control unit performs focus control based on the identified detection value.
  4.  上記検波エリアは、測光エリアである
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein the detection area is a photometric area.
  5.  上記制御部は、上記特定した検波値に基づいて露出制御をする
     請求項4に記載の画像処理装置。
    The image processing apparatus according to claim 4, wherein the control unit performs exposure control based on the identified detection value.
  6.  上記検波処理部で生成された検波値を保存する検波値保存部をさらに備える
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, further comprising a detection value storage unit that stores the detection value generated by the detection processing unit.
  7.  上記制御部は、上記被写体検出部で所定フレームの画像信号に基づいて検出された被写体に対応して設定した検波エリアの検波値を、上記検波値保存部に保存されている上記所定フレームの画像信号を検波して生成された上記全検波エリアに対応する検波値から特定する
     請求項6に記載の画像処理装置。
    The image processing apparatus according to claim 6, wherein the control unit specifies the detection value of the detection area set corresponding to the subject detected by the subject detection unit based on the image signal of a predetermined frame, from among the detection values, stored in the detection value storage unit, that correspond to all the detection areas and were generated by detecting the image signal of the predetermined frame.
  8.  上記検波エリアは、上記撮像エリアの全エリアに敷き詰められている
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein the detection area is spread over the entire area of the imaging area.
  9.  上記検波処理部における上記検波値を生成する処理と上記被写体検出部における上記被写体を検出する処理とが並行して行われる
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein the process of generating the detection value in the detection processing unit and the process of detecting the object in the object detection unit are performed in parallel.
  10.  上記撮像エリアに複数の位相差検出エリアがさらに配置されており、
     上記測距エリアの数は上記位相差検出エリアの数より多く配置されている
     請求項2に記載の画像処理装置。
    A plurality of phase difference detection areas are further arranged in the imaging area,
    The image processing apparatus according to claim 2, wherein the number of the ranging areas is larger than the number of the phase difference detection areas.
  11.  上記被写体に対応した検波エリアの設定では、被写体検出がされるエリアが検波エリアとして設定される
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein in the setting of the detection area corresponding to the subject, an area in which the subject is detected is set as a detection area.
  12.  上記被写体に対応した検波エリアの設定では、被写体検出がされるエリアを除いたエリアが検波エリアとして設定される
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein in the setting of the detection area corresponding to the subject, an area excluding the area where the subject is detected is set as a detection area.
  13.  検波処理部が、撮像部の撮像エリアから入力される画像信号を前記撮像エリアに複数配置される検波エリアにおいて検波し、全検波エリアのそれぞれに対応する検波値を生成し、
     被写体検出部が、上記画像信号に基づいて被写体を検出し、
     制御部が、上記被写体検出部の検出結果に基づいて上記全検波エリアから上記被写体に対応した検波エリアを設定し、上記検波処理部で生成された上記全検波エリアのそれぞれに対応する検波値から上記設定した検波エリアに対応する検波値を特定する
     画像処理方法。
    An image processing method in which:
    a detection processing unit detects an image signal input from an imaging area of an imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas;
    a subject detection unit detects a subject based on the image signal; and
    a control unit sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
  14.  撮像部と、
     上記撮像部の撮像エリアから入力される画像信号を前記撮像エリアに複数配置される検波エリアにおいて検波し、全検波エリアのそれぞれに対応する検波値を生成する検波処理部と、
     上記画像信号に基づいて被写体を検出する被写体検出部と、
     上記被写体検出部の検出結果に基づいて上記全検波エリアから上記被写体に対応した検波エリアを設定し、上記検波処理部で生成された上記全検波エリアのそれぞれに対応する検波値から上記設定した検波エリアに対応する検波値を特定する制御部を備える
     撮像装置。
    An imaging apparatus comprising:
    an imaging unit;
    a detection processing unit that detects an image signal input from an imaging area of the imaging unit in a plurality of detection areas arranged in the imaging area, and generates detection values corresponding to each of all the detection areas;
    a subject detection unit that detects a subject based on the image signal; and
    a control unit that sets a detection area corresponding to the subject from among all the detection areas based on the detection result of the subject detection unit, and specifies the detection value corresponding to the set detection area from the detection values corresponding to each of all the detection areas generated by the detection processing unit.
PCT/JP2018/038742 2017-10-24 2018-10-17 Image processing device, image processing method, and imaging device WO2019082775A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017205590 2017-10-24
JP2017-205590 2017-10-24

Publications (1)

Publication Number Publication Date
WO2019082775A1 true WO2019082775A1 (en) 2019-05-02

Family

ID=66246821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/038742 WO2019082775A1 (en) 2017-10-24 2018-10-17 Image processing device, image processing method, and imaging device

Country Status (1)

Country Link
WO (1) WO2019082775A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06230453A (en) * 1993-02-02 1994-08-19 Nikon Corp Camera provided with object tracking function
JP2009265239A (en) * 2008-04-23 2009-11-12 Nikon Corp Focus detecting apparatus, focus detection method, and camera
JP2016109766A (en) * 2014-12-03 2016-06-20 キヤノン株式会社 Imaging device
JP2017005443A (en) * 2015-06-09 2017-01-05 ソニー株式会社 Imaging control device, imaging apparatus, and imaging control method

Similar Documents

Publication Publication Date Title
JP7022057B2 (en) Imaging device
JP2022044653A (en) Imaging apparatus
KR102306190B1 (en) compound eye camera module, and electronic device
JPWO2018025659A1 (en) Imaging device, solid-state imaging device, camera module, drive control unit, and imaging method
CN110573922B (en) Imaging device and electronic apparatus
US11750932B2 (en) Image processing apparatus, image processing method, and electronic apparatus
JP2018200423A (en) Imaging device and electronic apparatus
US10750060B2 (en) Camera module, method of manufacturing camera module, imaging apparatus, and electronic apparatus
US11936979B2 (en) Imaging device
US20210021771A1 (en) Imaging device and electronic device
WO2019155944A1 (en) Image processing device, image processing method, and imaging apparatus
US20220279134A1 (en) Imaging device and imaging method
WO2022009674A1 (en) Semiconductor package and method for producing semiconductor package
WO2019082775A1 (en) Image processing device, image processing method, and imaging device
WO2018051819A1 (en) Imaging element, method for driving same, and electronic device
WO2024062813A1 (en) Imaging device and electronic equipment
WO2020045202A1 (en) Imaging device, correction method, and computer program
CN117693945A (en) Imaging apparatus, imaging method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18870889

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18870889

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP