WO2015146006A1 - Imaging device, and imaging control method - Google Patents

Imaging device, and imaging control method Download PDF

Info

Publication number
WO2015146006A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
image
image data
unit
exposure condition
Prior art date
Application number
PCT/JP2015/001170
Other languages
French (fr)
Japanese (ja)
Inventor
Toshiaki Shinohara
Hideo Saito
Yoshito Tozawa
Original Assignee
Panasonic IP Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic IP Management Co., Ltd.
Publication of WO2015146006A1 publication Critical patent/WO2015146006A1/en

Links

Images

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • H04N25/58Control of the dynamic range involving two or more exposures
    • H04N25/587Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
    • H04N25/589Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields with different integration times, e.g. short and long exposures

Definitions

  • The present invention relates to an imaging apparatus and an imaging control method for imaging using exposure conditions.
  • In an imaging device (for example, a digital camera), a technique for capturing and synthesizing a plurality of images with different exposure conditions (that is, camera parameters), known as HDR (High Dynamic Range) imaging, is used. With this technique, an image having gradation characteristics that are difficult to reproduce under a single exposure condition can be obtained.
  • Such a technique is needed because the dynamic range of the imaging environment may be wider than the dynamic range that the imaging device can capture.
  • As a prior art relating to the HDR technology, for example, the imaging apparatus shown in Patent Document 1 has been proposed.
  • the imaging apparatus shown in Patent Document 1 captures a plurality of images with different exposure conditions, and generates HDR image data with an expanded dynamic range by HDR combining the plurality of image data obtained by the imaging. Further, the imaging apparatus extracts a feature amount from at least one image data before HDR synthesis, and adds a special effect to the HDR image data based on the extracted feature amount. As a result, the imaging apparatus disclosed in Patent Document 1 can obtain an image with an HDR effect, that is, a dynamic range that is widened and a special effect added.
  • the imaging device shown in Patent Document 1 captures a plurality of images with different gradations at the same timing, and generates HDR image data by performing HDR composition using these plurality of images. For this reason, it is considered that HDR image data generated based on a plurality of images captured at a certain moment has a wide dynamic range and suppresses occurrence of whiteout or blackout.
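The HDR synthesis described above, which combines a plurality of images captured under different exposure conditions into one image with a wider dynamic range, can be sketched as a per-pixel weighted blend. This is only an illustrative scheme, not the algorithm of Patent Document 1 or of the present patent; frames are modeled as flat lists of normalized pixel values, and the triangle weighting function (favoring mid-tones, suppressing whiteout and blackout pixels) is an assumption.

```python
# Illustrative HDR composition (not the patent's actual algorithm).
# Pixel values are floats in [0, 1]; the weight is highest for mid-tones,
# so blown-out (white-out) and crushed (black-out) pixels contribute little.

def hdr_weight(v):
    """Triangle weight: 1.0 at mid-gray (0.5), 0.0 at the extremes."""
    return 1.0 - abs(2.0 * v - 1.0)

def hdr_compose(frames):
    """Blend per-pixel across frames captured under different exposures."""
    num_pixels = len(frames[0])
    out = []
    for i in range(num_pixels):
        num = den = 0.0
        for frame in frames:
            w = hdr_weight(frame[i])
            num += w * frame[i]
            den += w
        # If every frame is at an extreme, fall back to the middle exposure.
        out.append(num / den if den > 0 else frames[len(frames) // 2][i])
    return out
```

Because each pixel is re-weighted independently, a region that is overexposed in one frame can still be recovered from another frame where it falls in the mid-tone range.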
  • However, when the imaging apparatus shown in Patent Document 1 captures a moving image, it may be difficult to capture the image with exposure conditions appropriate to changes in the surrounding environment. For example, in the configuration of Patent Document 1, when the user holding the imaging device suddenly moves to a bright place, or when a lamp or illumination suddenly lights up near the user, the appropriate exposure conditions differ from those of the previous state, so it may be difficult to capture a moving image using appropriate exposure conditions.
  • An imaging apparatus includes: an imaging unit that captures an image; a signal processing unit that, using a plurality of different exposure conditions, generates image data of the image captured by the imaging unit according to each exposure condition; an image recognition unit that performs image recognition on the image data generated by the signal processing unit for each exposure condition; and a selection unit that selects an exposure condition suitable for imaging in the imaging unit based on the image recognition results of the image data for each exposure condition.
  • FIG. 1 is a block diagram illustrating in detail an example of the internal configuration of the imaging apparatus according to the present embodiment.
  • FIG. 2 is a conceptual explanatory diagram regarding a set of three exposure conditions.
  • FIG. 3 is an explanatory diagram showing examples of recognition image data under each exposure condition, and examples of the maximum and minimum values of the adjustment widths of the shutter speed and the image sensor gain among the camera parameters corresponding to the exposure conditions.
  • FIG. 4 is a flowchart illustrating an example of a detailed operation procedure of the imaging apparatus according to the present embodiment.
  • FIG. 5 is a time chart illustrating an example of a detailed operation procedure of the imaging apparatus according to the present embodiment.
  • FIG. 6 is a block diagram showing in detail an example of the system configuration of the imaging control system of the present modification.
  • Hereinafter, an embodiment of an imaging apparatus and an imaging control method according to the present invention (hereinafter referred to as "this embodiment") will be described with reference to the drawings.
  • the imaging apparatus of the present embodiment will be described, and the imaging control method according to the present invention will be described as necessary.
  • The present invention is not limited to the device-category invention of the imaging device and the method-category invention of the imaging control method; it may also be a system-category invention, that is, an imaging control system including an imaging device and an image recognition device (for example, a PC (Personal Computer)) described later.
  • the imaging apparatus of the present embodiment is a surveillance camera that is fixed to a predetermined position (for example, a ceiling surface) or supported by being suspended from a predetermined surface.
  • the imaging apparatus of the present embodiment may be a digital camera that is used by being held by a user.
  • the imaging apparatus according to the present embodiment may be used as an in-vehicle camera mounted on a moving body (for example, a railway, an airplane, a bus, a ship, an automobile, a motorcycle, or a bicycle).
  • The imaging apparatus includes an imaging unit that captures an image and, using a plurality of different exposure conditions at the time of imaging, generates image data of the image captured by the imaging unit according to each exposure condition. An image recognition unit performs image recognition on the image data generated by the signal processing unit for each exposure condition, and based on the image recognition result for each exposure condition, an exposure condition suitable for imaging in the imaging unit is selected.
  • FIG. 1 is a block diagram showing in detail an example of the internal configuration of the imaging apparatus 10 of the present embodiment.
  • The imaging apparatus 10 illustrated in FIG. 1 includes an imaging unit IM, a signal processing unit 13, an output switching unit 15, a transmission image storage buffer 17, a recognition image storage buffer 19, an image compression unit 21, an image transmission unit 23, a motion detection unit 25, an image recognition unit 27, and a camera parameter control driver 29.
  • The motion detection unit 25 is not necessarily required depending on the usage state of the imaging apparatus 10 of the present embodiment, and may be omitted. For example, when the imaging device 10 is attached to a vehicle, as in LPR (License Plate Recognition) processing, or when a moving user carries the imaging device 10, that is, when the imaging device 10 itself is used in a situation where it does not remain stationary, the motion detection unit 25 may be omitted.
  • When the imaging device 10 attached to the vehicle performs LPR processing, the range set by a UI (User Interface) operation signal, or the detected lower range of the vehicle ahead (for example, the range below the rear gate glass), becomes the recognition target.
  • the imaging unit IM includes a lens LS, a diaphragm IR, a shutter unit SH, and an image sensor 11.
  • The lens LS is configured using one or more optical lenses to form a subject image on the imaging surface of the image sensor 11, and is, for example, a single-focus lens, a zoom lens, a fisheye lens, or a wide-angle lens capable of obtaining an angle of view of a predetermined degree or more.
  • A diaphragm (aperture unit) IR is disposed behind the lens LS along the optical axis (toward the right side of the drawing in FIG. 1; the same applies hereinafter). The aperture unit IR has a variable aperture value and limits the amount of subject light that has passed through the lens LS.
  • the aperture value of the aperture unit IR is an example of camera parameters that constitute the exposure conditions during imaging of the imaging apparatus 10 of the present embodiment, and is changed by the camera parameter control driver 29 described later. Details of the relationship between the camera parameter and the signal processing unit 13 will be described later.
  • a shutter unit SH is disposed behind the aperture unit IR.
  • the shutter unit SH alternately performs an opening operation and a closing operation at a predetermined shutter speed when the image capturing apparatus 10 captures an image, and passes the subject light that has passed through the aperture unit IR to the image sensor 11.
  • the shutter speed of the shutter unit SH is an example of a camera parameter that constitutes an exposure condition at the time of imaging of the imaging apparatus 10 of the present embodiment, and is changed by a camera parameter control driver 29 described later.
  • The image sensor 11 is configured using, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) solid-state imaging device, and photoelectrically converts the subject image formed on its imaging surface into an electrical signal using the gain of the image sensor 11. The output of the image sensor 11 is input to the signal processing unit 13.
  • the signal processing unit 13 includes an exposure condition A signal processing unit 13a, an exposure condition B signal processing unit 13b, an exposure condition C signal processing unit 13c, and an HDR (High Dynamic Range) processing unit 13d.
  • The signal processing unit 13 uses a plurality of different exposure conditions and, according to each exposure condition, generates video image data in a predetermined format from the electrical signal of the subject image produced by the photoelectric conversion of the image sensor 11; specifically, it generates transmission image data and recognition image data.
  • The transmission image data is transmitted, for example, to an external device connected to the imaging apparatus 10 via a network, and is provided for viewing by a user who operates the external device.
  • the recognition image data is image data individually generated according to a plurality of different exposure conditions described later, and is used for image recognition in the image recognition unit 27.
  • the image recognition unit 27 selects an exposure condition suitable for the periphery of the imaging device 10 based on the image recognition result of the recognition image data generated according to a plurality of different exposure conditions.
  • The signal processing unit 13 uses, for example, three different exposure conditions A, B, and C to generate recognition image data according to each of the exposure conditions A, B, and C, and further synthesizes (so-called HDR synthesis) the recognition image data generated according to the conditions A, B, and C to generate transmission image data. In the following, the description uses three different exposure conditions A, B, and C.
  • the exposure conditions are not limited to three and may be two or more.
  • The exposure condition A signal processing unit 13a, the exposure condition B signal processing unit 13b, and the exposure condition C signal processing unit 13c do not start signal processing at the same operation start timing; instead, as in the pipeline processing shown in FIG. 5, they perform signal processing in parallel with different operation start timings.
  • For each of the exposure conditions A, B, and C, the camera parameters of the imaging device 10 (for example, one or a combination of the aperture value, the shutter speed, and the gain of the image sensor 11) are set by the camera parameter control driver 29.
  • The camera parameters are switched cyclically: after imaging under the camera parameters corresponding to the exposure condition A, the camera parameter control driver 29 changes them to the camera parameters corresponding to the exposure condition B; after imaging under the exposure condition B, it changes them to the camera parameters corresponding to the exposure condition C; and after imaging under the exposure condition C, it changes them back to the camera parameters corresponding to the exposure condition A. The signal processing unit 13 repeatedly performs such camera parameter setting.
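The cyclic switching of camera parameters across the exposure conditions A, B, and C can be sketched as a round-robin schedule over captured frames. The function name and the frame-indexed representation below are assumptions for illustration only, not the driver's actual implementation.

```python
# Illustrative round-robin assignment of exposure conditions to frames.
# In the device, the camera parameter control driver 29 performs this
# switching between captures; here each frame index simply receives the
# next condition in rotation.

import itertools

def condition_schedule(conditions, num_frames):
    """Assign one exposure condition to each captured frame in rotation."""
    cycle = itertools.cycle(conditions)
    return [next(cycle) for _ in range(num_frames)]
```

With three conditions, every third frame shares the same exposure condition, which is what allows the per-condition signal processing units to run as a pipeline.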
  • The exposure condition A signal processing unit 13a uses the camera parameters corresponding to the exposure condition A to generate a frame of recognition image data according to the exposure condition A, that is, image data called RAW data in an RGB (Red Green Blue) or YUV (luminance/color difference) format that has not undergone predetermined image processing.
  • The exposure condition B signal processing unit 13b likewise uses the camera parameters corresponding to the exposure condition B to generate a frame of recognition image data (RAW data) according to the exposure condition B.
  • The exposure condition C signal processing unit 13c likewise uses the camera parameters corresponding to the exposure condition C to generate a frame of recognition image data (RAW data) according to the exposure condition C.
  • The HDR processing unit 13d synthesizes the frames of recognition image data generated by the exposure condition A signal processing unit 13a, the exposure condition B signal processing unit 13b, and the exposure condition C signal processing unit 13c to generate transmission image data for the external device connected via the network.
  • Owing to the HDR effect, the transmission image data generated by the HDR processing unit 13d has a wider dynamic range than recognition image data generated according to a single exposure condition (for example, the exposure condition A). Thus, when the transmission image data generated by the HDR processing unit 13d is displayed on the display of the external device, the signal processing unit 13 can provide highly clear image data that covers, for example, the dynamic range of human vision and is suitable for viewing.
  • The output switching unit 15, as an example of the switching unit, switches the output of the signal processing unit 13, that is, the recognition image data generated according to a single exposure condition or the transmission image data generated by the HDR processing unit 13d, to the transmission image storage buffer 17 or the recognition image storage buffer 19 according to an external signal (for example, a UI operation signal generated by an input operation of the user of the imaging device 10). The output switching unit 15 may also switch the output of the recognition image data or the transmission image data to the transmission image storage buffer 17 or the recognition image storage buffer 19 in a time-division manner (in different time slots).
  • Specifically, the output switching unit 15 outputs the recognition image data generated according to each single exposure condition to the recognition image storage buffer 19 for storage, and outputs the transmission image data generated by the HDR processing unit 13d to the transmission image storage buffer 17. Note that the output switching unit 15 may also output the recognition image data generated according to a single exposure condition to the transmission image storage buffer 17 for storage.
  • The transmission image storage buffer 17, as an example of the second storage unit, is configured using, for example, a semiconductor memory such as a RAM (Random Access Memory) or a flash memory, and stores the transmission image data output from the output switching unit 15. The transmission image storage buffer 17 may also store the recognition image data output from the output switching unit 15.
  • The transmission image storage buffer 17 may instead be a hard disk device built into the imaging apparatus 10, or an externally connected medium (for example, a semiconductor memory such as a flash memory) connectable via, for example, a USB (Universal Serial Bus) terminal.
  • the recognition image storage buffer 19 as an example of the first storage unit is configured using a semiconductor memory such as a RAM or a flash memory, and stores the recognition image data output from the output switching unit 15.
  • This image data for recognition is image data (RAW data) generated by the exposure condition A signal processing unit 13a, the exposure condition B signal processing unit 13b, and the exposure condition C signal processing unit 13c.
  • the recognition image storage buffer 19 may be a hard disk device built in the imaging device 10 or an external connection medium (for example, a semiconductor memory such as a flash memory) that can be connected via, for example, a USB terminal.
  • The image compression unit 21 is configured using, for example, a codec, and generates encoded data by converting the transmission image data (or recognition image data) stored in the transmission image storage buffer 17 into a data format that can be stored and transmitted. The image compression unit 21 outputs the encoded data to the image transmission unit 23.
  • The image transmission unit 23 performs packet generation processing on the encoded data of the transmission image data (or recognition image data) generated by the image compression unit 21 for transmission to an external device (not shown), and transmits the packets of encoded data to the external device via the network. Thereby, the imaging device 10 can transmit the encoded data of the transmission image data (or recognition image data) to the external device.
  • The motion detection unit 25 is configured using, for example, a DSP (Digital Signal Processor), reads the recognition image data generated in accordance with the plurality of different exposure conditions A, B, and C from the recognition image storage buffer 19, and performs VMD (Video Motion Detection) processing to detect the presence or absence of motion in the recognition image data.
  • the motion detection unit 25 writes the recognition image data that has undergone the VMD process in the recognition image storage buffer 19, and outputs information related to the writing position (address) of the recognition image storage buffer 19 to the image recognition unit 27.
  • Since the recognition image data according to the exposure conditions A, B, and C are generated by parallel signal processing at different timings, the VMD processing in the motion detection unit 25 is also performed in parallel (see FIG. 5).
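One plausible minimal form of the VMD processing is frame differencing between consecutive recognition frames of the same exposure condition. The threshold values, the changed-pixel ratio, and the function name below are assumptions for illustration, not values specified by the patent.

```python
# Minimal frame-difference motion detector as a sketch of VMD processing.
# Frames are flat lists of normalized pixel values in [0, 1].
# pixel_threshold and ratio are assumed tuning constants.

def detect_motion(prev_frame, cur_frame, pixel_threshold=0.1, ratio=0.01):
    """Return True if enough pixels changed between consecutive frames."""
    changed = sum(
        1 for p, c in zip(prev_frame, cur_frame) if abs(p - c) > pixel_threshold
    )
    # Motion is declared when the fraction of changed pixels reaches `ratio`.
    return changed >= ratio * len(cur_frame)
```

Comparing frames of the same exposure condition (every third frame in the A/B/C rotation) avoids false motion caused purely by the brightness difference between conditions.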
  • The image recognition unit 27 is configured using, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP. Based on the address information output from the motion detection unit 25, it reads the recognition image data generated according to the plurality of different exposure conditions A, B, and C from the recognition image storage buffer 19, and performs image recognition (for example, FD (Face Detection) or FR (Face Recognition)). Note that the image recognition unit 27 may perform image recognition of the recognition image data using the VMD processing result of the motion detection unit 25.
  • The image recognition unit 27, as an example of the selection unit, selects, based on the image recognition results of the recognition image data generated according to the plurality of different exposure conditions A, B, and C, the exposure condition corresponding to the piece of recognition image data that yielded the best image recognition result, as the exposure condition suitable for the surroundings of the imaging apparatus 10. The image recognition unit 27 outputs information on the selected exposure condition to the camera parameter control driver 29.
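In the simplest reading, selecting the exposure condition whose recognition image data yielded the best image recognition result is an argmax over per-condition scores. The scoring dictionary below is a hypothetical stand-in for the recognition results (e.g. face-detection confidence); it is not the patent's actual data structure.

```python
# Sketch of exposure-condition selection as an argmax over recognition
# scores. Keys are condition names; values are hypothetical recognition
# scores (higher is better).

def select_exposure_condition(recognition_scores):
    """Return the condition whose recognition image data scored best."""
    return max(recognition_scores, key=recognition_scores.get)
```

The selected condition's name would then be passed to the camera parameter control driver to re-center the camera parameters.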
  • the camera parameter control driver 29 has a control circuit that adjusts the aperture amount of the aperture unit IR, the shutter speed of the shutter unit SH, and the gain of the image sensor 11 in accordance with the output of the image recognition unit 27.
  • The camera parameter control driver 29, as an example of the exposure condition control unit, uses, for example, the information on the exposure condition output from the image recognition unit 27 to set the camera parameters corresponding to the plurality of different exposure conditions A, B, and C for the aperture unit IR, the shutter unit SH, and the image sensor 11.
  • the camera parameters of the present embodiment are the aperture amount of the aperture unit IR, the shutter speed of the shutter unit SH, and the gain of the image sensor 11, but only one of them may be used.
  • Alternatively, a combination of a plurality of camera parameters (for example, aperture amount and shutter speed, shutter speed and gain, or gain and aperture amount) may be used.
  • FIG. 2 is a conceptual explanatory diagram regarding a set of three exposure conditions A, B, and C. Details will be described with reference to FIG. 3.
  • For example, the shutter speed can be adjusted from 30 to 300, and the gain of the image sensor 11 can be adjusted from 0 to 40.
  • A dotted line BTL indicates the best exposure condition selected each time the image recognition unit 27 performs image recognition of the recognition image data. In the first image recognition, the exposure condition yielding the best image recognition result is the exposure condition B; in the second image recognition, it is the exposure condition A; and similarly, in the n-th image recognition, it is the exposure condition B.
  • For example, when the exposure condition B is selected from A, B, and C by the image recognition unit 27 in the first image recognition (time t1), the camera parameter control driver 29 sets the camera parameters so that the best exposure condition B becomes the center, and sets the other exposure conditions at time t1 (exposure conditions A and C) by slightly changing (adjusting) the values of some or all of the camera parameters corresponding to the exposure condition B.
  • Times t1, t2, t3, ..., tn indicate exposure condition setting timings in the camera parameter control driver 29.
  • Similarly, when the exposure condition A is selected as the best of A, B, and C in the second image recognition (time t2), the camera parameter control driver 29 sets the camera parameters so that the exposure condition A becomes the center, and sets the other exposure conditions at time t2 (exposure conditions B and C) by slightly changing (adjusting) the values of some or all of the camera parameters corresponding to the exposure condition A. Thereafter, the exposure conditions A, B, and C are set similarly.
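The re-centering described above can be sketched as keeping the selected condition's parameters and deriving the other two conditions by small offsets around it. The offset values, parameter names, and dictionary layout are assumptions for illustration, not values given by the patent.

```python
# Illustrative re-centering of three exposure conditions around the best
# one. best_params holds the camera parameters of the selected condition;
# the other two conditions are derived by small assumed offsets.

def recenter_conditions(best_params, shutter_offset=30, gain_offset=3):
    """Return (lower, center, higher) conditions around the best one."""
    center = dict(best_params)
    lower = {"shutter": center["shutter"] - shutter_offset,
             "gain": max(0, center["gain"] - gain_offset)}
    higher = {"shutter": center["shutter"] + shutter_offset,
              "gain": center["gain"] + gain_offset}
    return lower, center, higher
```

This keeps the best-performing parameters in use while the neighboring conditions continue to probe slightly darker and brighter settings for the next round of image recognition.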
  • FIG. 3 is an explanatory diagram showing examples of recognition image data under each of the exposure conditions A, B, and C, and examples of the maximum and minimum values of the adjustment widths of the shutter speed and the gain of the image sensor 11 among the camera parameters corresponding to the exposure conditions.
  • the shutter speed can be adjusted from 30 to 300, and for example, the maximum value of the adjustment width is 30 and the minimum value of the adjustment width is 10. In other words, the shutter speed can be adjusted between 30 and 300 in increments of 10 to 30.
  • The gain of the image sensor 11 can be adjusted from 0 to 40; for example, the maximum value of the adjustment width is 3 and the minimum value is 1. That is, the gain of the image sensor 11 can be adjusted between 0 and 40 in increments of 1 to 3.
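The bounded adjustment widths above (shutter speed in steps of 10 to 30 within 30 to 300, gain in steps of 1 to 3 within 0 to 40) can be sketched as a clamped, step-limited parameter update. The function shape and its name are assumptions; only the numeric ranges come from the text.

```python
# Sketch of a step-limited camera-parameter update: move from the current
# value toward the requested value by a step no smaller than min_step and
# no larger than max_step, then clamp to the allowed range [lo, hi].

def quantize_step(current, requested, lo, hi, min_step, max_step):
    """Return the next parameter value after one bounded adjustment."""
    delta = requested - current
    if delta == 0:
        return current
    step = max(min_step, min(max_step, abs(delta)))
    value = current + step if delta > 0 else current - step
    return max(lo, min(hi, value))
```

For example, a shutter speed of 200 asked to jump to 300 moves at most 30 per adjustment, while the gain moves at most 3 per adjustment within 0 to 40.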
  • the camera parameters corresponding to the exposure condition A have a gain of the image sensor 11 of 0 and a shutter speed of 200.
  • For the camera parameters corresponding to the exposure condition B, the gain of the image sensor 11 is 0 and the shutter speed is 30.
  • the camera parameters corresponding to the exposure condition C are 40 for the gain of the image sensor 11 and 30 for the shutter speed.
  • FIG. 3 shows recognition image data IM2 generated by imaging with the camera parameters corresponding to the exposure condition A, recognition image data IM3 generated by imaging with the camera parameters corresponding to the exposure condition B, and recognition image data IM4 generated by imaging with the camera parameters corresponding to the exposure condition C. For comparison, recognition image data IM1 generated by imaging with other camera parameters (for example, a shutter speed of 300 and a gain of 0 for the image sensor 11) is also shown.
  • Since the recognition image data IM1 and IM2 have low brightness and low contrast as images, the image recognition unit 27 is unlikely to obtain a good image recognition result from them. The recognition image data IM4 has high brightness and high contrast, so the image recognition unit 27 may obtain a good image recognition result from it. The recognition image data IM3 has appropriate brightness and contrast as an image, and the image recognition unit 27 can obtain the best image recognition result from it.
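The brightness/contrast reasoning about IM1 through IM4 can be illustrated with a toy suitability score that favors mid-range mean brightness and a larger pixel-value spread. This metric is hypothetical and is not the recognition criterion actually used by the image recognition unit 27.

```python
# Toy suitability score for recognition image data: images with mid-range
# mean brightness and larger contrast (here, pixel-value spread) score
# higher, matching the qualitative ranking of IM1-IM4 in the text.

def suitability_score(pixels):
    """pixels: flat list of normalized values in [0, 1]."""
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)           # crude contrast proxy
    brightness_term = 1.0 - abs(mean - 0.5) * 2  # best at mid brightness
    return brightness_term * spread
```

Under this toy metric, a dark low-contrast image (like IM1 or IM2) and a very bright image both score lower than a well-exposed, higher-contrast one (like IM3).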
  • FIG. 4 is a flowchart illustrating an example of a detailed operation procedure of the imaging apparatus 10 according to the present embodiment.
  • the imaging apparatus 10 switches the output of the recognition image data or the transmission image data to the transmission image storage buffer 17 or the recognition image storage buffer 19 in time division (different time slots).
  • When the current time slot is not used for image recognition using the recognition image data (S1, NO), the time slot is used for image transmission using the transmission image data. In this case, the camera parameter control driver 29 sets the camera parameters of the exposure conditions A, B, and C for obtaining the transmission image data in the aperture unit IR, the shutter unit SH, and the image sensor 11 (S2).
  • the camera parameters of the exposure conditions A, B, and C are set to predetermined values (fixed values).
  • the imaging apparatus 10 stands by until the setting of the camera parameters in step S2 is completed and the contents of the camera parameters are reflected on the aperture unit IR, the shutter unit SH, and the image sensor 11 (S3).
  • After step S3, the imaging apparatus 10 generates transmission image data by combining (HDR synthesis) the recognition image data generated according to the exposure conditions A, B, and C (S4), and writes the transmission image data into the transmission image storage buffer 17 (S5). Furthermore, the imaging apparatus 10 compresses the transmission image data written in the transmission image storage buffer 17, generates encoded data of the transmission image data, and transmits it to the external device (S6). The time slot used for image transmission is provided in advance with a length sufficient for the operations from step S2 to step S6 to complete. After step S6, the operation of the imaging device 10 returns to step S1.
  • When the time slot is used for image recognition using the recognition image data (S1, YES), the camera parameter control driver 29 sets the camera parameters of the exposure conditions A, B, and C for obtaining the recognition image data (S7).
  • the camera parameters of the exposure conditions A, B, and C are variable values.
  • the imaging apparatus 10 stands by until the setting of the camera parameters in step S7 is completed and the contents of the camera parameters are reflected on the aperture unit IR, the shutter unit SH, and the image sensor 11 (S8).
  • After step S8, the imaging apparatus 10 generates the recognition image data according to the exposure conditions A, B, and C (S9), and writes the recognition image data into the recognition image storage buffer 19 via the output switching unit 15 (S10).
  • The imaging apparatus 10 performs VMD processing using each set of recognition image data generated according to the exposure conditions A, B, and C, and further acquires each set of recognition image data generated according to the exposure conditions A, B, and C and performs image recognition using it (S11).
  • The imaging apparatus 10 selects the camera parameters of the exposure condition corresponding to the recognition image data that yielded the best image recognition result among the respective sets of recognition image data generated according to the exposure conditions A, B, and C (S12). After step S12, the operation of the imaging apparatus 10 returns to step S1, and the operation for the time slot determined in step S1 is repeated until the imaging apparatus 10 is powered off.
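The two-branch time-slot flow of FIG. 4 (steps S1 through S12) can be sketched in Python. This is a minimal illustration only; the function names, parameter values, and the scoring function are hypothetical stand-ins, not part of the patent:

```python
# Hypothetical sketch of the FIG. 4 control loop: time slots alternate between
# image transmission (fixed camera parameters, S2-S6) and image recognition
# (variable parameters fed back from the selection in S12, S7-S12).

FIXED_PARAMS = {"A": 10, "B": 20, "C": 40}     # fixed values (e.g. gain) for transmission slots
variable_params = {"A": 10, "B": 20, "C": 40}  # variable values adjusted after recognition slots

def run_time_slot(slot_is_recognition, capture, hdr_combine, recognize, transmit):
    if not slot_is_recognition:                               # S1: NO -> transmission slot
        frames = [capture(FIXED_PARAMS[c]) for c in "ABC"]    # S2-S3: set params, capture
        transmission_image = hdr_combine(frames)              # S4: HDR combining
        transmit(transmission_image)                          # S5-S6: buffer, compress, send
        return None
    # S1: YES -> recognition slot
    frames = {c: capture(variable_params[c]) for c in "ABC"}   # S7-S10: capture per condition
    scores = {c: recognize(img) for c, img in frames.items()}  # S11: VMD + image recognition
    best = max(scores, key=scores.get)                         # S12: best recognition result
    return best  # the selected exposure condition is fed back to the driver

best = run_time_slot(
    True,
    capture=lambda gain: gain,             # stand-ins for the real camera pipeline
    hdr_combine=lambda frames: sum(frames),
    recognize=lambda img: -abs(img - 20),  # this toy metric prefers the mid exposure
    transmit=lambda img: None,
)
# best == "B" with these stand-in functions
```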
  • FIG. 5 is a time chart showing an example of a detailed operation procedure of the imaging apparatus 10 of the present embodiment.
  • FIG. 5 shows the time-series operation procedures of the image sensor 11, the motion detection unit 25, the camera parameter control driver 29 (denoted simply as "control driver" in FIG. 5), and the image recognition unit 27.
  • initial values of the gain of the image sensor 11 and the shutter speed of the shutter unit SH are set in advance.
  • the time for the signal processing unit 13 to generate one frame of image data for transmission is 33.3 ms.
  • The camera parameter control driver 29 sets camera parameters for the three different exposure conditions according to the camera parameters of the exposure condition selected in the previous time slot used for image recognition using the recognition image data.
  • An arrow FDBa indicates, for example, the timing when the camera parameter control driver 29 sets the camera parameter (for example, gain) of the exposure condition A in the image sensor 11.
  • The image sensor 11 performs imaging in the section IMa according to the camera parameters of the exposure condition A set at the timing of the arrow FDBa. Thereby, the recognition image data according to the exposure condition A is generated.
  • the camera parameter control driver 29 sets the camera parameter (for example, gain) of the exposure condition B in the image sensor 11.
  • An arrow FDBb indicates, for example, the timing when the camera parameter control driver 29 sets the camera parameter (for example, gain) of the exposure condition B in the image sensor 11.
  • imaging according to the camera parameter of the exposure condition B set by the arrow FDBb is performed in the section IMb. Thereby, image data for recognition according to the exposure condition B is generated.
  • the camera parameter control driver 29 sets the camera parameter (for example, gain) of the exposure condition C in the image sensor 11.
  • An arrow FDBc indicates, for example, the timing when the camera parameter control driver 29 sets the camera parameter (for example, gain) of the exposure condition C in the image sensor 11.
  • imaging according to the camera parameter of the exposure condition C set by the arrow FDBc is performed in the section IMc. Thereby, image data for recognition according to the exposure condition C is generated.
  • After the section IMa, the motion detection unit 25 performs VMD processing in the section VDa (for example, 66.6 ms) on the recognition image data generated by the imaging of the section IMa. After the end of the section VDa, the recognition image data subjected to the VMD processing is written into the recognition image storage buffer 19, and information on the writing position (address) in the recognition image storage buffer 19 is output to the camera parameter control driver 29 (see arrow Bfa).
  • After the section IMb, the motion detection unit 25 performs VMD processing in the section VDb (likewise, for example, 66.6 ms) on the recognition image data generated by the imaging of the section IMb. After the end of the section VDb, the recognition image data is written into the recognition image storage buffer 19, and information on the writing position (address) in the recognition image storage buffer 19 is output to the camera parameter control driver 29 (see arrow Bfb).
  • After the section IMc, the motion detection unit 25 performs VMD processing in the section VDc (likewise, for example, 66.6 ms) on the recognition image data generated by the imaging of the section IMc. After the end of the section VDc, the recognition image data is written into the recognition image storage buffer 19, and information on the writing position (address) in the recognition image storage buffer 19 is output to the camera parameter control driver 29 (see arrow Bfc).
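The document does not specify the internals of the VMD (video motion detection) processing performed in the sections VDa, VDb, and VDc. A common minimal form of motion detection is per-pixel frame differencing, sketched below purely for illustration (the function name, frame representation, and threshold are assumptions, not taken from the patent):

```python
def vmd(prev_frame, curr_frame, threshold=10):
    """Flag motion where the per-pixel absolute difference exceeds a threshold.

    Frames are flat lists of luminance values; returns (motion_detected, mask),
    where mask marks which pixels changed more than the threshold.
    """
    mask = [abs(a - b) > threshold for a, b in zip(prev_frame, curr_frame)]
    return any(mask), mask

moved, mask = vmd([10, 10, 10, 10], [10, 10, 80, 10])
# moved is True: one pixel changed by more than the threshold
```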
  • The image recognition unit 27 refers, in the section Dfa, to the address information held by the camera parameter control driver 29, and acquires the recognition image data generated according to the exposure condition A from the recognition image storage buffer 19.
  • the image recognition unit 27 performs image recognition of the recognition image data generated according to the exposure condition A in the section FDa after the section Dfa.
  • After the section FDa, the image recognition unit 27 refers, in the section Dfb, to the address information held by the camera parameter control driver 29, and acquires the recognition image data generated according to the exposure condition B from the recognition image storage buffer 19. The image recognition unit 27 performs image recognition of the recognition image data generated according to the exposure condition B in the section FDb after the section Dfb.
  • Similarly, the image recognition unit 27 refers, in the section Dfc, to the address information held by the camera parameter control driver 29, and acquires the recognition image data generated according to the exposure condition C from the recognition image storage buffer 19.
  • the image recognition unit 27 performs image recognition of the image data for recognition generated according to the exposure condition C in a similar section after the section Dfc.
  • As described above, using a plurality of different exposure conditions A, B, and C, the imaging apparatus 10 generates the recognition image data of the video imaged by the imaging unit IM in accordance with the camera parameters of the exposure conditions A, B, and C.
  • The imaging device 10 recognizes the recognition image data generated by the signal processing unit 13 for each of the exposure conditions A, B, and C, selects, based on the image recognition results of the recognition image data for each of the exposure conditions A, B, and C, the camera parameters of an exposure condition suitable for capturing video in the imaging unit IM, and feeds the selection back to change (adjust) the camera parameters in the imaging unit IM.
  • Accordingly, the imaging apparatus 10 selects the exposure condition with the best image recognition result among the image recognition results of the respective sets of recognition image data captured according to the plurality of different exposure conditions A, B, and C, so that an image can be captured using appropriate exposure conditions whether or not the environment around the user operating the apparatus 10 suddenly changes.
  • the imaging apparatus 10 can individually store the image recognition image data generated for each of a plurality of different exposure conditions A, B, and C in the recognition image storage buffer 19.
  • The imaging apparatus 10 generates the recognition image data for each of the plurality of different exposure conditions A, B, and C and stores it in the recognition image storage buffer 19, and also generates the transmission image data and stores it in the transmission image storage buffer 17; it can therefore maintain a plurality of storage units that store the recognition image data and the transmission image data separately.
  • The imaging apparatus 10 generates the transmission image data by combining (for example, HDR combining) the recognition image data generated for each of the plurality of different exposure conditions A, B, and C; compared with image data generated according to a single exposure condition, image data with an expanded dynamic range can be generated and transmitted to an external device.
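The patent does not specify the HDR-combining algorithm itself. As one minimal illustration of the idea, three exposures of the same pixel can be merged with weights that favor well-exposed samples; the weighting scheme and 8-bit value range below are assumptions for the sketch, not the patent's method:

```python
def hdr_combine(under, mid, over):
    """Merge three exposures of one pixel (0-255 each) into a single value.

    Weights favor well-exposed samples: values near 0 or 255 carry little
    usable gradation, so they contribute less to the merged result.
    """
    def weight(v):
        return max(1, 255 - 2 * abs(v - 127))  # peak weight at mid-gray
    samples = [under, mid, over]
    total_w = sum(weight(v) for v in samples)
    return round(sum(weight(v) * v for v in samples) / total_w)

# A blown-out highlight (255) and a crushed shadow (2) pull far less
# than the well-exposed middle sample (130):
merged = hdr_combine(2, 130, 255)
```

A full HDR pipeline would apply this per pixel across aligned frames, and real implementations (for example, exposure-fusion methods) also account for saturation and local contrast; this sketch only shows the weighting principle.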
  • the imaging apparatus 10 can easily switch the output of the recognition image data to the recognition image storage buffer 19 or the transmission image storage buffer 17 in accordance with a predetermined input operation (for example, a user input operation).
  • Thus, with a simple user input operation (for example, a UI (User Interface) operation), the imaging apparatus 10 can output the recognition image data to the transmission image storage buffer 17.
  • Even when the surrounding environment suddenly changes, the imaging apparatus 10 adjusts the camera parameters in the imaging unit IM according to the exposure condition (for example, the exposure condition B) selected based on the image recognition results of the recognition image data generated according to the plurality of different exposure conditions A, B, and C, so it can capture an image using an appropriate exposure condition suited to the surrounding environment.
  • The imaging apparatus 10 can adjust, as the predetermined camera parameters, any one of or a combination of, for example, the aperture amount of the aperture unit IR, the shutter speed of the shutter unit SH, and the gain of the image sensor 11, in accordance with the selected exposure condition (for example, the exposure condition B).
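This feedback step can be thought of as a lookup from the selected exposure condition to one or more camera parameters pushed to the driver. All parameter names and values below are invented for illustration; the patent only states that aperture, shutter speed, and gain may be adjusted singly or in combination:

```python
# Hypothetical mapping from a selected exposure condition to camera parameters;
# the driver may adjust any one of aperture, shutter speed, and sensor gain,
# or a combination of them.
EXPOSURE_TABLE = {
    "A": {"aperture_f": 2.0, "shutter_s": 1 / 30, "gain_db": 12.0},  # darker scenes
    "B": {"aperture_f": 4.0, "shutter_s": 1 / 120, "gain_db": 6.0},  # mid scenes
    "C": {"aperture_f": 8.0, "shutter_s": 1 / 500, "gain_db": 0.0},  # brighter scenes
}

def apply_exposure(condition, set_param):
    """Push each camera parameter of the chosen condition to the driver."""
    for name, value in EXPOSURE_TABLE[condition].items():
        set_param(name, value)

applied = {}
apply_exposure("B", applied.__setitem__)
# applied now holds the parameters of exposure condition B
```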
  • In a modification of the present embodiment, an imaging control system 100 in which the motion detection unit 25 and the image recognition unit 27 of the imaging device 10 described above are provided in a device different from the imaging device 10 (for example, the image recognition device 50 shown in FIG. 6) will be described with reference to FIG. 6.
  • FIG. 6 is a block diagram showing in detail an example of the system configuration of the imaging control system 100 of the present modification.
  • the imaging control system 100 shown in FIG. 6 has a configuration in which an imaging apparatus 10A and an image recognition apparatus 50 are connected via networks NW1 and NW2.
  • The imaging apparatus 10A illustrated in FIG. 6 is configured by omitting the motion detection unit 25 and the image recognition unit 27 from the configuration of the imaging apparatus 10 illustrated in FIG. 1, and includes the signal processing unit 13, the output switching unit 15, the transmission image storage buffer 17, the recognition image storage buffer 19, the image compression unit 21, the image transmission unit 23A, and the camera parameter control driver 29.
  • the image recognition device 50 includes a communication unit 31, a motion detection unit 25A, and an image recognition unit 27A.
  • The image transmission unit 23A reads the transmission image data or the recognition image data from the transmission image storage buffer 17 or the recognition image storage buffer 19, and transmits it to the image recognition apparatus 50 via the network NW1.
  • The communication unit 31 receives the transmission image data or the recognition image data transmitted from the image transmission unit 23A and outputs it to the motion detection unit 25A.
  • As in the above-described embodiment, the image recognition unit 27A recognizes each set of recognition image data generated for each of the plurality of different exposure conditions A, B, and C, selects the exposure condition corresponding to the recognition image data that yielded the best image recognition result, and outputs information on the selected exposure condition to the communication unit 31.
  • the communication unit 31 transmits information on the exposure condition selected by the image recognition unit 27A to the camera parameter control driver 29 of the imaging apparatus 10A via the network NW2.
  • Illustration of the communication unit of the imaging apparatus 10A, which lies between the communication unit 31 and the camera parameter control driver 29, is omitted.
  • The networks NW1 and NW2 may be the same network or different networks, and may be wireless networks (for example, a wireless LAN (Local Area Network) or a wireless WAN (Wide Area Network)).
  • As described above, in the imaging control system 100, the imaging apparatus 10A uses the plurality of different exposure conditions A, B, and C to generate the recognition image data of the video imaged by the imaging unit IM according to each of the exposure conditions A, B, and C, and transmits the data to the image recognition device 50 via the network NW1.
  • the image recognition device 50 recognizes the image data for recognition received from the imaging device 10A for each of the exposure conditions A, B, and C. Further, the image recognition device 50 selects an exposure condition suitable for image capturing in the imaging device 10A based on the image recognition result of the recognition image data for each of the exposure conditions A, B, and C, and sets the camera parameters in the imaging unit IM. Feedback for changes (adjustments).
  • Accordingly, the imaging apparatus 10A can have the image recognition apparatus 50 perform the image recognition processing of each set of recognition image data imaged according to the plurality of different exposure conditions A, B, and C, and its circuit configuration required for the image recognition processing can therefore be reduced.
  • The image recognition device 50 selects the exposure condition with the best image recognition result among the image recognition results for the plurality of sets of recognition image data transmitted from the imaging device 10A connected via the network NW1, and returns the selection result (that is, information on the selected exposure condition) to the imaging apparatus 10A. Therefore, in the imaging control system 100, as long as the imaging device 10A receives in real time the information on the appropriate exposure condition selected by, for example, the image recognition device 50 installed at a remote place, it can capture an image using exposure conditions that match the surrounding environment, whether or not the surrounding environment suddenly changes.
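The round trip of this modification can be sketched as two cooperating functions, one per device; the network transport (NW1/NW2) is omitted and all names and the scoring metric are hypothetical:

```python
# Sketch of the FIG. 6 split: the imaging device 10A only captures and sends;
# the image recognition device 50 scores each exposure and returns the best
# exposure condition to the imaging device (network transport omitted here).

def imaging_device_send(capture, params):
    """Capture one recognition frame per exposure condition (sent over NW1)."""
    return {cond: capture(p) for cond, p in params.items()}

def recognition_device_select(frames, recognize):
    """Score each frame and return the best exposure condition (sent over NW2)."""
    return max(frames, key=lambda cond: recognize(frames[cond]))

params = {"A": 5, "B": 50, "C": 200}
frames = imaging_device_send(lambda p: p, params)  # stand-in capture
best = recognition_device_select(frames, lambda img: -abs(img - 60))
# best == "B": the mid exposure scored highest with this stand-in metric
```

Splitting the work this way mirrors the stated benefit: the imaging device's circuit only needs capture and transmission, while the recognition workload runs on the remote device.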
  • An imaging apparatus according to an embodiment of the present invention includes: an imaging unit that captures video; a signal processing unit that, using a plurality of different exposure conditions, generates image data of the video captured by the imaging unit according to each exposure condition; an image recognition unit that recognizes the video image data generated by the signal processing unit for each exposure condition; and a selection unit that selects an exposure condition suitable for capturing the video in the imaging unit, based on the image recognition result of the video image data for each exposure condition in the image recognition unit.
  • the signal processing unit generates image data of a video imaged by the imaging unit according to each exposure condition using a plurality of different exposure conditions.
  • the image recognition unit recognizes the image data of the video generated by the signal processing unit for each exposure condition.
  • the selection unit selects an exposure condition suitable for image capturing in the imaging unit based on the image recognition result of the image data for each exposure condition in the image recognition unit.
  • With this configuration, the imaging apparatus selects the exposure condition with the best image recognition result among the image recognition results of the respective image data captured according to a plurality of different exposure conditions, so that an image can be captured using appropriate exposure conditions whether or not the surrounding environment suddenly changes.
  • an embodiment of the present invention is an imaging apparatus further comprising a first storage unit that stores image data of the video generated by the signal processing unit for each exposure condition.
  • the imaging apparatus can store the image data of the video generated for each of a plurality of different exposure conditions in the first storage unit (recognition image storage buffer 19).
  • In an embodiment of the present invention, the signal processing unit generates the recognition image data of the video for each exposure condition and stores it in the first storage unit, and also generates the transmission image data of the video captured by the imaging unit and stores it in a second storage unit different from the first storage unit.
  • With this configuration, the imaging device generates the recognition image data for each of the plurality of different exposure conditions and stores it in the first storage unit (recognition image storage buffer 19), and also generates the transmission image data and stores it in the second storage unit (transmission image storage buffer 17); it can therefore maintain a plurality of storage units that store the recognition image data and the transmission image data separately.
  • In an embodiment of the present invention, the signal processing unit generates the transmission image data of the video by combining the recognition image data of the video generated for each exposure condition.
  • With this configuration, the imaging apparatus generates the video transmission image data by combining (for example, HDR combining) the video recognition image data generated for each of a plurality of different exposure conditions; compared with video image data generated according to a single exposure condition, image data with an expanded dynamic range can be generated.
  • An embodiment of the present invention is an imaging device further including a switching unit that switches the output of the recognition image data to the first storage unit or the second storage unit in accordance with a predetermined input operation.
  • With this configuration, the imaging apparatus can easily switch the output of the recognition image data to the first storage unit (recognition image storage buffer 19) or the second storage unit (transmission image storage buffer 17) in accordance with a predetermined input operation (for example, a user input operation).
  • Thus, with a simple user input operation (for example, a UI operation), the imaging apparatus can output the recognition image data to the second storage unit (transmission image storage buffer 17).
  • an embodiment of the present invention is an imaging apparatus further comprising an exposure condition control unit that adjusts predetermined imaging parameters in the imaging unit in accordance with the exposure condition selected by the selection unit.
  • With this configuration, even when a sudden change in the surrounding environment occurs, the imaging apparatus adjusts the predetermined imaging parameters in the imaging unit according to the exposure condition selected based on the image recognition results of the recognition image data generated according to a plurality of different exposure conditions, so an image can be captured using an appropriate exposure condition suited to the surrounding environment.
  • In an embodiment of the present invention, the imaging unit includes a lens, a diaphragm unit that adjusts the amount of light passing through the lens, a shutter unit that alternately performs an opening operation and a closing operation during imaging, and an image sensor that photoelectrically converts an optical image formed on its imaging surface through the shutter, and the predetermined imaging parameters are one of, or a combination of, the aperture amount of the diaphragm unit, the shutter speed of the shutter unit, and the gain of the image sensor during photoelectric conversion.
  • With this configuration, the imaging apparatus can adjust, as the predetermined imaging parameters, one of or a combination of, for example, the aperture amount of the diaphragm unit, the shutter speed of the shutter unit, and the gain of the image sensor, according to the selected exposure condition.
  • One embodiment of the present invention is an imaging control method in an imaging apparatus, including: a step of capturing video; a step of generating, using a plurality of different exposure conditions, image data of the video captured according to each exposure condition; a step of recognizing the video image data generated for each exposure condition; and a step of selecting an exposure condition suitable for capturing the video, based on the image recognition result of the video image data for each exposure condition.
  • the signal processing unit generates image data of a video imaged by the imaging unit according to each exposure condition using a plurality of different exposure conditions.
  • the image recognition unit recognizes the image data of the video generated by the signal processing unit for each exposure condition.
  • the selection unit selects an exposure condition suitable for image capturing in the imaging unit based on the image recognition result of the image data for each exposure condition in the image recognition unit.
  • With this configuration, the imaging apparatus selects the exposure condition with the best image recognition result among the image recognition results of the respective image data captured according to a plurality of different exposure conditions, so that an image can be captured using appropriate exposure conditions whether or not the surrounding environment suddenly changes.
  • An embodiment of the present invention is an imaging control method in an imaging apparatus and an image recognition apparatus connected via a network, including: a step of capturing video; a step of generating, using a plurality of different exposure conditions, image data of the video captured according to each exposure condition; a step of transmitting the video image data generated for each exposure condition from the imaging device; a step of receiving, in the image recognition device, the video image data generated for each exposure condition; a step of recognizing the video image data generated for each exposure condition; and a step of selecting an exposure condition suitable for capturing the video, based on the image recognition result of the video image data for each exposure condition.
  • the imaging apparatus uses a plurality of different exposure conditions to generate image data of a video imaged by the imaging unit according to each exposure condition, and transmits the image data to the image recognition apparatus.
  • the image recognition device recognizes the image data of the video received from the imaging device for each exposure condition. Further, the image recognition apparatus selects an exposure condition suitable for image capturing in the imaging unit based on the image recognition result of the image data for each exposure condition.
  • the image pickup apparatus can cause the image recognition apparatus to perform image recognition processing of each image data picked up in accordance with a plurality of different exposure conditions, so that the circuit configuration required for the image recognition processing can be reduced.
  • The image recognition apparatus selects the exposure condition with the best image recognition result among the image recognition results for the plurality of sets of recognition image data transmitted from the imaging apparatus connected via the network, and returns the selection result to the imaging apparatus. Therefore, in the imaging control system, as long as the imaging apparatus receives in real time the information on the appropriate exposure condition selected by, for example, an image recognition apparatus installed at a remote place, it can capture an image using exposure conditions that match the surrounding environment, whether or not the surrounding environment suddenly changes.
  • the present invention is useful as an image pickup apparatus and an image pickup control method for picking up an image using appropriate exposure conditions regardless of whether the surrounding environment has changed.
  • 10 Imaging device; 11 Image sensor; 13 Signal processing unit; 13a Exposure condition A signal processing unit; 13b Exposure condition B signal processing unit; 13c Exposure condition C signal processing unit; 13d HDR processing unit; 15 Output switching unit; 17 Transmission image storage buffer; 19 Recognition image storage buffer; 21 Image compression unit; 23, 23A Image transmission unit; 25, 25A Motion detection unit; 27, 27A Image recognition unit; 29 Camera parameter control driver; 31 Communication unit; 50 Image recognition device; 100 Imaging control system; IR Aperture unit; LS Lens; SH Shutter unit


Abstract

An imaging device according to an embodiment of the present invention is provided with: an imaging unit for capturing video; a signal processing unit which uses a plurality of different exposure conditions to generate, in accordance with each of the exposure conditions, video image data for the video captured by the imaging unit; an image recognition unit for performing image recognition on the video image data generated for each of the exposure conditions by the signal processing unit; and a selection unit which selects, on the basis of the image recognition results obtained from the image recognition unit for the video image data for each of the exposure conditions, an exposure condition for the imaging unit which is suitable for capturing the video.

Description

Imaging apparatus and imaging control method
The present invention relates to an imaging apparatus and an imaging control method for imaging using exposure conditions.
Conventionally, when an imaging device (for example, a digital camera) captures a subject, HDR (High Dynamic Range) technology has been known as a technique for capturing a plurality of images with different exposure conditions (that is, camera parameters), combining them, and thereby expanding the dynamic range of the combined image. With HDR technology, an image having gradation characteristics that are difficult to reproduce under a single exposure condition can be obtained. For example, when the imaging device captures a subject outdoors during the daytime, the dynamic range of the imaging environment may be wider than the dynamic range the imaging device can capture. In such a case, gradation information outside the capturable dynamic range cannot be obtained, so phenomena such as blown highlights or crushed shadows occur in the captured image. HDR technology can suppress the occurrence of such phenomena.
Here, as prior art relating to HDR technology, for example, the imaging apparatus disclosed in Patent Document 1 has been proposed.
The imaging apparatus disclosed in Patent Document 1 captures a plurality of images with different exposure conditions, and generates HDR image data with an expanded dynamic range by HDR-combining the plurality of image data obtained by the imaging. The imaging apparatus also extracts a feature amount from at least one image data before HDR combining, and adds a special effect to the HDR image data based on the extracted feature amount. As a result, the imaging apparatus disclosed in Patent Document 1 can obtain an image with the HDR effect, that is, a widened dynamic range, to which a special effect is further added.
JP 2013-90095 A
In HDR-mode imaging, the imaging device disclosed in Patent Document 1 captures a plurality of images with different gradations at the same timing and generates HDR image data by HDR-combining these images. For this reason, HDR image data generated from a plurality of images captured at a given moment is considered to have a wide dynamic range, with the occurrence of blown highlights and crushed shadows suppressed.
However, when the imaging apparatus disclosed in Patent Document 1 is capturing a moving image, for example, it may be difficult to capture images under appropriate exposure conditions as the surrounding environment changes. For example, in the configuration of Patent Document 1, when the user holding the imaging device suddenly moves to a bright place, or when a lamp or lighting suddenly turns on near the user, the appropriate exposure conditions differ from those of the immediately preceding state, so capturing a moving image using appropriate exposure conditions may become difficult.
An imaging apparatus according to an embodiment of the present invention includes: an imaging unit that captures video; a signal processing unit that, using a plurality of different exposure conditions, generates image data of the video captured by the imaging unit according to each exposure condition; an image recognition unit that recognizes the video image data generated by the signal processing unit for each exposure condition; and a selection unit that selects an exposure condition suitable for capturing the video in the imaging unit, based on the image recognition result of the video image data for each exposure condition in the image recognition unit.
FIG. 1 is a block diagram showing in detail an example of the internal configuration of the imaging device of the present embodiment.
FIG. 2 is a conceptual diagram of a set of three exposure conditions.
FIG. 3 is an explanatory diagram showing recognition image data obtained under each exposure condition, together with examples of the maximum and minimum adjustment steps of the camera-parameter shutter speed and image sensor gain corresponding to the exposure conditions.
FIG. 4 is a flowchart showing an example of a detailed operation procedure of the imaging device of the present embodiment.
FIG. 5 is a time chart showing an example of a detailed operation procedure of the imaging device of the present embodiment.
FIG. 6 is a block diagram showing in detail an example of the system configuration of the imaging control system of the present modification.
 Hereinafter, an embodiment of an imaging device and an imaging control method according to the present invention (hereinafter, "the present embodiment") will be described with reference to the drawings. The imaging device of the present embodiment is described below, and the imaging control method according to the present invention is described as necessary. Note that the present invention is not limited to an invention in the device category (the imaging device) or in the method category (the imaging control method); it may also be an invention in the system category, namely an imaging control system including an imaging device and an image recognition device described later (for example, a PC (Personal Computer)).
 The imaging device of the present embodiment is a surveillance camera fixed to a predetermined position (for example, a ceiling surface) or supported by being suspended from a predetermined surface. The imaging device of the present embodiment may also be a digital camera held and used by a user. Further, the imaging device of the present embodiment may be used as an on-board camera mounted on a moving body (for example, a train, airplane, bus, ship, automobile, motorcycle, or bicycle).
 The imaging device of the present embodiment includes: an imaging unit that captures video; a signal processing unit that, using a plurality of different exposure conditions at the time of imaging, generates image data of the video captured by the imaging unit in accordance with each exposure condition; and an image recognition unit that performs image recognition, for each exposure condition, on the video image data generated by the signal processing unit. Based on the image recognition results of the video image data for each exposure condition, the imaging device selects an exposure condition suitable for video capture by the imaging unit.
 Hereinafter, a specific configuration will be described in detail with reference to FIG. 1.
 FIG. 1 is a block diagram showing in detail an example of the internal configuration of the imaging device 10 of the present embodiment. The imaging device 10 shown in FIG. 1 includes an imaging unit IM, a signal processing unit 13, an output switching unit 15, a transmission image storage buffer 17, a recognition image storage buffer 19, an image compression unit 21, an image transmission unit 23, a motion detection unit 25, an image recognition unit 27, and a camera parameter control driver 29. Depending on how the imaging device 10 is used, the motion detection unit 25 is not always necessary and may be omitted. For example, when the imaging device 10 is used in a situation where the device itself is not stationary, such as when it is attached to a vehicle for LPR (License Plate Recognition) processing or when a moving user carries the imaging device 10, the motion detection unit 25 may be omitted. When the imaging device 10 attached to a vehicle performs LPR processing, the target is a range set by a UI (User Interface) operation signal or the detected lower range of a vehicle ahead (for example, the range below the rear gate glass).
 The imaging unit IM includes a lens LS, a diaphragm unit IR, a shutter unit SH, and an image sensor 11. The lens LS is configured using one or more optical lenses to form a subject image on the imaging surface of the image sensor 11, and is, for example, a fixed-focal-length lens, a zoom lens, a fisheye lens, or a lens providing an angle of view wider than a predetermined degree.
 The diaphragm unit IR is disposed behind the lens LS on its optical axis (the right side of the sheet in FIG. 1; the same applies hereinafter). The diaphragm unit IR has a variable aperture value (aperture diameter) and limits the amount of subject light that has passed through the lens LS. The aperture value of the diaphragm unit IR is an example of the camera parameters constituting the exposure conditions at the time of imaging by the imaging device 10 of the present embodiment, and is changed by the camera parameter control driver 29 described later. Details of the relationship between the camera parameters and the signal processing unit 13 will be described later.
 The shutter unit SH is disposed behind the diaphragm unit IR. The shutter unit SH alternately opens and closes at a predetermined shutter speed when the imaging device 10 captures images, passing the subject light that has passed through the diaphragm unit IR to the image sensor 11. The shutter speed of the shutter unit SH is an example of the camera parameters constituting the exposure conditions at the time of imaging by the imaging device 10 of the present embodiment, and is changed by the camera parameter control driver 29 described later.
 The image sensor 11 is configured using, for example, a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) solid-state image sensor, and photoelectrically converts the subject image formed on its imaging surface into an electrical signal using the gain of the image sensor 11. The output of the image sensor 11 is input to the signal processing unit 13.
 The signal processing unit 13 includes an exposure condition A signal processing unit 13a, an exposure condition B signal processing unit 13b, an exposure condition C signal processing unit 13c, and an HDR (High Dynamic Range) processing unit 13d. Using a plurality of different exposure conditions, the signal processing unit 13 generates, from the electrical signal of the subject image produced by the photoelectric conversion of the image sensor 11 and in accordance with each exposure condition, video image data in a predetermined format, namely transmission image data and recognition image data.
 The transmission image data is transmitted to, for example, an external device connected to the imaging device 10 via a network, and is provided for viewing by a user operating the external device. The recognition image data, on the other hand, is image data generated individually according to a plurality of different exposure conditions described later, and is used for image recognition by the image recognition unit 27. Although details are described later, the image recognition unit 27 selects an exposure condition suited to the surroundings of the imaging device 10 based on the image recognition results of the recognition image data generated according to the plurality of different exposure conditions.
 Specifically, the signal processing unit 13 uses, for example, three different exposure conditions A, B, and C to generate recognition image data according to each of the exposure conditions A, B, and C, and further composites (so-called HDR composition) the recognition image data generated according to the exposure conditions A, B, and C to generate the transmission image data. In the present embodiment, three different exposure conditions A, B, and C are used for ease of explanation, but the number of exposure conditions is not limited to three and may be any number of two or more.
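 As a rough illustration only (not the implementation of the embodiment), HDR composition of frames captured under several exposure conditions can be sketched as a weighted per-pixel merge. The hat-shaped mid-tone weighting and the 8-bit value range below are assumptions for the example.

```python
# Minimal sketch of HDR composition from frames captured under
# exposure conditions A, B, and C. The weighting that favors
# mid-tone pixels is an assumption for illustration.

def hdr_merge(frames):
    """frames: list of 2D lists of 8-bit luminance values (0..255),
    one frame per exposure condition, all of the same size."""
    height, width = len(frames[0]), len(frames[0][0])
    merged = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            num, den = 0.0, 0.0
            for frame in frames:
                v = frame[y][x]
                # Weight mid-tones highest; near-black and near-white
                # pixels (crushed shadows / blown highlights) count less.
                w = 1.0 - abs(v - 127.5) / 127.5 + 1e-6
                num += w * v
                den += w
            merged[y][x] = int(round(num / den))
    return merged
```

 In this sketch a pixel that is blown out in one frame contributes almost nothing, so the merged value is dominated by the frame in which that pixel is well exposed, which is the qualitative effect described for the HDR processing unit 13d.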
 The exposure condition A signal processing unit 13a, the exposure condition B signal processing unit 13b, and the exposure condition C signal processing unit 13c do not start signal processing at the same timing; rather, as shown in FIG. 5, they perform signal processing in parallel with different operation start timings, as pipeline processing. In the present embodiment, for each of the exposure conditions A, B, and C, the camera parameters of the imaging device 10 (for example, one of, or a combination of, the aperture value, the shutter speed, and the gain of the image sensor 11) are set by the camera parameter control driver 29.
 Accordingly, in the signal processing unit 13, when the exposure condition A signal processing unit 13a has generated recognition image data according to exposure condition A, the camera parameter control driver 29 changes the camera parameters corresponding to exposure condition A to the camera parameters corresponding to exposure condition B.
 Similarly, when the exposure condition B signal processing unit 13b has generated recognition image data according to exposure condition B, the camera parameter control driver 29 changes the camera parameters corresponding to exposure condition B to the camera parameters corresponding to exposure condition C.
 Further, when the exposure condition C signal processing unit 13c has generated recognition image data according to exposure condition C, the camera parameter control driver 29 changes the camera parameters corresponding to exposure condition C back to the camera parameters corresponding to exposure condition A. The signal processing unit 13 repeats this setting of camera parameters.
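 The cyclic re-setting of camera parameters described above can be sketched as a simple round-robin driver loop. The parameter values below are taken from the example of FIG. 3; the helper itself is an illustrative sketch, not the driver of the embodiment.

```python
# Sketch of the camera-parameter cycle A -> B -> C -> A, advanced each
# time a recognition frame has been generated under the current condition.

from itertools import cycle

EXPOSURE_CONDITIONS = {
    "A": {"shutter_speed": 200, "gain": 0},
    "B": {"shutter_speed": 30, "gain": 0},
    "C": {"shutter_speed": 30, "gain": 40},
}

def condition_cycle(order=("A", "B", "C")):
    """Yield the name and camera parameters to set before each capture."""
    for name in cycle(order):
        yield name, EXPOSURE_CONDITIONS[name]

cycler = condition_cycle()
first_four = [name for name, _ in (next(cycler) for _ in range(4))]
# After condition C the cycle wraps back to condition A.
```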
 The exposure condition A signal processing unit 13a uses the camera parameters corresponding to exposure condition A to generate a frame of recognition image data according to exposure condition A, that is, a frame of image data called RAW data in RGB (Red Green Blue) format or YUV (luminance/chrominance) format that has not undergone predetermined image processing.
 The exposure condition B signal processing unit 13b uses the camera parameters corresponding to exposure condition B to generate a frame of recognition image data according to exposure condition B, likewise a frame of RAW data in RGB or YUV format that has not undergone predetermined image processing.
 The exposure condition C signal processing unit 13c uses the camera parameters corresponding to exposure condition C to generate a frame of recognition image data according to exposure condition C, likewise a frame of RAW data in RGB or YUV format that has not undergone predetermined image processing.
 The HDR processing unit 13d composites the frames of recognition image data generated by the exposure condition A signal processing unit 13a, the exposure condition B signal processing unit 13b, and the exposure condition C signal processing unit 13c, and generates transmission image data for, for example, an external device connected to the imaging device 10 via a network. Owing to the HDR effect, the transmission image data generated by the HDR processing unit 13d has a wider dynamic range than recognition image data generated according to a single exposure condition (for example, exposure condition A).
 As a result, when the transmission image data generated by the HDR processing unit 13d is displayed on the display of an external device, for example, it can cover the dynamic range of human vision, so the signal processing unit 13 can obtain highly clear image data suitable for human viewing.
 The output switching unit 15, as an example of a switching unit, switches the output of the signal processing unit 13 (that is, the recognition image data generated according to a single exposure condition or the transmission image data generated by the HDR processing unit 13d) to the transmission image storage buffer 17 or the recognition image storage buffer 19 in response to an external signal (for example, a UI operation signal generated by an input operation of the user of the imaging device 10).
 Besides responding to the above-described external signal (for example, a UI operation signal generated by an input operation of the user of the imaging device 10), the output switching unit 15 may also switch the output of the recognition image data or the transmission image data to the transmission image storage buffer 17 or the recognition image storage buffer 19 by time division (different time slots). There are two kinds of time slots here: time slots used for image processing with the recognition image data, and time slots used for image transmission with the transmission image data.
 For example, the output switching unit 15 outputs the transmission image data generated by the HDR processing unit 13d to the transmission image storage buffer 17 for storage, and outputs the recognition image data generated according to a single exposure condition to the recognition image storage buffer 19 for storage. Note that the output switching unit 15 may also output the recognition image data generated according to a single exposure condition to the transmission image storage buffer 17 for storage.
 The transmission image storage buffer 17, as an example of a second storage unit, is configured using a semiconductor memory such as a RAM (Random Access Memory) or flash memory, and stores the transmission image data output from the output switching unit 15. The transmission image storage buffer 17 may also store the recognition image data output from the output switching unit 15. The transmission image storage buffer 17 may also be a hard disk device built into the imaging device 10, or an externally connected medium (for example, a semiconductor memory such as flash memory) connectable via, for example, a USB (Universal Serial Bus) terminal.
 The recognition image storage buffer 19, as an example of a first storage unit, is configured using a semiconductor memory such as a RAM or flash memory, and stores the recognition image data output from the output switching unit 15. This recognition image data is the image data (RAW data) generated by the exposure condition A signal processing unit 13a, the exposure condition B signal processing unit 13b, and the exposure condition C signal processing unit 13c. The recognition image storage buffer 19 may also be a hard disk device built into the imaging device 10, or an externally connected medium (for example, a semiconductor memory such as flash memory) connectable via, for example, a USB terminal.
 The image compression unit 21 is configured using, for example, a codec. Using the transmission image data (or recognition image data) stored in the transmission image storage buffer 17, it generates encoded data for conversion into a data format in which the transmission image data (or recognition image data) can be stored and transmitted. The image compression unit 21 outputs the encoded data to the image transmission unit 23.
 The image transmission unit 23 performs packet generation processing on the encoded data of the transmission image data (or recognition image data) generated by the image compression unit 21, for transmission to, for example, an external device (not shown) serving as a destination, and transmits the packets of encoded data to the external device via the network. The imaging device 10 can thereby transmit the encoded data of the transmission image data (or recognition image data) to the external device.
 The motion detection unit 25 is configured using, for example, a DSP (Digital Signal Processor). It reads the recognition image data generated according to the plurality of different exposure conditions A, B, and C from the recognition image storage buffer 19, performs VMD (Video Motion Detector) processing, and detects the presence or absence of motion in the recognition image data.
 The motion detection unit 25 writes the recognition image data that has undergone VMD processing to the recognition image storage buffer 19, and outputs information on the write position (address) in the recognition image storage buffer 19 to the image recognition unit 27. As described above, since the recognition image data according to the exposure conditions A, B, and C is generated by parallel signal processing at different timings, the VMD processing in the motion detection unit 25 is also performed in parallel (see FIG. 5).
 The image recognition unit 27 is configured using, for example, a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or a DSP. Based on the address information output from the motion detection unit 25, it reads the recognition image data generated according to the plurality of different exposure conditions A, B, and C from the recognition image storage buffer 19 and performs image recognition (for example, FD (Face Detection) or FR (Face Recognition)). The image recognition unit 27 may also use the VMD processing result of the motion detection unit 25 when performing image recognition on the recognition image data.
 The image recognition unit 27, as an example of a selection unit, selects, based on the image recognition results of the recognition image data generated according to the plurality of different exposure conditions A, B, and C, the exposure condition corresponding to the one piece of recognition image data for which the best image recognition result was obtained, as the exposure condition suited to the surroundings of the imaging device 10. The image recognition unit 27 outputs information on the selected exposure condition to the camera parameter control driver 29.
 The camera parameter control driver 29 has control circuits that adjust, in accordance with the output of the image recognition unit 27, the aperture amount of the diaphragm unit IR, the shutter speed of the shutter unit SH, and the gain of the image sensor 11. The camera parameter control driver 29, as an example of an exposure condition control unit, uses, for example, the information on the exposure condition output from the image recognition unit 27 to set the camera parameters corresponding to the plurality of different exposure conditions A, B, and C in the diaphragm unit IR, the shutter unit SH, and the image sensor 11.
 As described above, the camera parameters of the present embodiment are the aperture amount of the diaphragm unit IR, the shutter speed of the shutter unit SH, and the gain of the image sensor 11; however, any one of these alone may be used, or a combination of a plurality of camera parameters (for example, aperture amount and shutter speed, shutter speed and gain, or gain and aperture amount).
 FIG. 2 is a conceptual diagram of a set of the three exposure conditions A, B, and C. Details are described with reference to FIG. 3; for example, the shutter speed is adjustable from 30 to 300, and the gain of the image sensor 11 is adjustable from 0 to 40. In the lower part of FIG. 2, the dotted line BTL indicates the best exposure condition selected each time the image recognition unit 27 performs image recognition on the recognition image data. In this example, in the first image recognition the exposure condition yielding the best image recognition result is exposure condition B; in the second image recognition it is exposure condition A; and, continuing in the same manner, in the n-th image recognition it is exposure condition B.
 In FIG. 2, when, for example, exposure condition B is selected by the image recognition unit 27 as a result of the first image recognition (time t1), the camera parameter control driver 29 sets the camera parameters so that the best of the exposure conditions A, B, and C, namely exposure condition B, becomes the center, and further sets the other exposure conditions (exposure conditions A and C at time t1) by slightly changing (adjusting) some or all of the camera parameter values corresponding to exposure condition B. Times t1, t2, t3, ..., tn indicate the timings at which the camera parameter control driver 29 sets the exposure conditions.
 Similarly, when, for example, exposure condition A is selected by the image recognition unit 27 as a result of the second image recognition (time t2), the camera parameter control driver 29 sets the camera parameters so that the best exposure condition A becomes the center, and sets the other exposure conditions (exposure conditions B and C at time t2) by slightly changing (adjusting) some or all of the camera parameter values corresponding to exposure condition A. The exposure conditions A, B, and C are set in the same manner thereafter.
 FIG. 3 is an explanatory diagram showing the recognition image data obtained under each of the exposure conditions A, B, and C, together with examples of the maximum and minimum adjustment steps of the camera-parameter shutter speed and the gain of the image sensor 11 corresponding to the exposure conditions.
 As shown in the middle part of FIG. 3, the shutter speed is, for example, adjustable from 30 to 300, with a maximum adjustment step of, for example, 30 and a minimum adjustment step of 10. That is, the shutter speed can be adjusted between 30 and 300 in steps of 10 to 30. Similarly, the gain of the image sensor 11 is, for example, adjustable from 0 to 40, with a maximum adjustment step of, for example, 3 and a minimum adjustment step of 1. That is, the gain of the image sensor 11 can be adjusted between 0 and 40 in steps of 1 to 3.
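 The re-centering behavior described with FIG. 2, combined with the adjustment ranges and step widths above, can be sketched as follows. The choice of exactly one step up and one step down around the best condition is an assumption for illustration, not the specified rule of the embodiment.

```python
# Sketch: re-center the three exposure conditions around the one that
# gave the best recognition result, honoring the adjustable ranges
# (shutter 30..300, gain 0..40) and the maximum step widths of FIG. 3.

SHUTTER_RANGE, SHUTTER_STEP = (30, 300), 30   # maximum step width
GAIN_RANGE, GAIN_STEP = (0, 40), 3            # maximum step width

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def recenter(best):
    """best: dict with the 'shutter_speed' and 'gain' of the winning
    condition. Returns new conditions A, B, C centered on it."""
    def shifted(sign):
        return {
            "shutter_speed": clamp(best["shutter_speed"] + sign * SHUTTER_STEP,
                                   *SHUTTER_RANGE),
            "gain": clamp(best["gain"] + sign * GAIN_STEP, *GAIN_RANGE),
        }
    # B keeps the winning parameters; A and C probe one step either side.
    return {"A": shifted(-1), "B": dict(best), "C": shifted(+1)}
```

 For example, re-centering on a winning condition with shutter speed 30 and gain 0 clamps the lower neighbor at the range minimums while the upper neighbor steps up to shutter speed 60 and gain 3.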
 As shown in the lower part of FIG. 3, the camera parameters corresponding to exposure condition A are a gain of 0 for the image sensor 11 and a shutter speed of 200. The camera parameters corresponding to exposure condition B are a gain of 0 and a shutter speed of 30. The camera parameters corresponding to exposure condition C are a gain of 40 and a shutter speed of 30.
 The upper part of FIG. 3 shows recognition image data IM2 generated by imaging with the camera parameters corresponding to exposure condition A, recognition image data IM3 generated by imaging with the camera parameters corresponding to exposure condition B, and recognition image data IM4 generated by imaging with the camera parameters corresponding to exposure condition C. The upper part of FIG. 3 also shows recognition image data IM1 generated by imaging with camera parameters corresponding to an exposure condition other than the exposure conditions A, B, and C (for example, a shutter speed of 300 and an image sensor 11 gain of 0).
 The recognition image data IM1 and IM2 have low brightness and low contrast as images, so it is unlikely that the image recognition unit 27 will obtain good image recognition results from them. The recognition image data IM4, on the other hand, has high brightness and high contrast as an image, so the image recognition unit 27 may obtain a good image recognition result from it. The recognition image data IM3 has appropriate brightness and contrast as an image, and the image recognition unit 27 obtains the best image recognition result from it.
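 One plausible way to rank the candidate frames in the spirit of the brightness/contrast comparison above is to score each frame by mid-tone brightness and luminance spread. This metric is an assumption for illustration; the embodiment ranks conditions by the actual recognition results (for example FD/FR), not by image statistics.

```python
# Sketch: score each recognition frame by how close its mean luminance
# is to a mid-tone target and how much contrast (spread) it has, then
# pick the exposure condition whose frame scores best.

from statistics import mean, pstdev

def frame_score(pixels, target=128.0):
    """pixels: flat list of 8-bit luminance values for one frame."""
    brightness_penalty = abs(mean(pixels) - target) / target
    contrast = pstdev(pixels) / 128.0
    return contrast - brightness_penalty

def pick_best_condition(frames_by_condition):
    """frames_by_condition: dict such as {'A': [...], 'B': [...], 'C': [...]}."""
    return max(frames_by_condition,
               key=lambda c: frame_score(frames_by_condition[c]))
```

 Under this scoring, a dark low-contrast frame (like IM1/IM2) and a nearly saturated frame both score poorly, while a mid-tone frame with good spread (like IM3) wins.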
Next, an example of a detailed operation procedure of the imaging apparatus 10 of the present embodiment will be described with reference to FIG. 4. FIG. 4 is a flowchart showing an example of the detailed operation procedure of the imaging apparatus 10 of the present embodiment. In the description of FIG. 4, it is assumed that the imaging apparatus 10 switches, in a time-division manner (in different time slots), between outputting recognition image data to the recognition image storage buffer 19 and outputting transmission image data to the transmission image storage buffer 17.
In FIG. 4, when the current time slot is not one used for image recognition with recognition image data (S1, NO), it is a time slot used for image transmission with transmission image data, so the camera parameter control driver 29 sets the camera parameters of exposure conditions A, B, and C for obtaining transmission image data in the aperture unit IR, the shutter unit SH, and the image sensor 11 (S2). In time slots used for image transmission with transmission image data, the camera parameters of exposure conditions A, B, and C are predetermined (fixed) values. The imaging apparatus 10 waits until the setting of the camera parameters in step S2 is completed and their contents are reflected in the aperture unit IR, the shutter unit SH, and the image sensor 11 (S3).
After step S3, the imaging apparatus 10 generates transmission image data by combining (HDR combining) the recognition image data generated under exposure conditions A, B, and C (S4), and writes the transmission image data into the transmission image storage buffer 17 via the output switching unit 15 (S5). The imaging apparatus 10 then compresses the transmission image data written in the transmission image storage buffer 17 to generate encoded data of the transmission image data, and transmits it to an external device (S6). Each time slot used for image transmission with transmission image data is provided in advance with enough length for the operations from step S2 to step S6 to complete. After step S6, the operation of the imaging apparatus 10 returns to step S1.
On the other hand, when the current time slot is one used for image recognition with recognition image data (S1, YES), the camera parameter control driver 29 sets the camera parameters of exposure conditions A, B, and C for obtaining recognition image data in the aperture unit IR, the shutter unit SH, and the image sensor 11 (S7). In time slots used for image recognition with recognition image data, as described with reference to FIG. 2, the camera parameters of exposure conditions A, B, and C are variable values. The imaging apparatus 10 waits until the setting of the camera parameters in step S7 is completed and their contents are reflected in the aperture unit IR, the shutter unit SH, and the image sensor 11 (S8).
After step S8, the imaging apparatus 10 generates recognition image data under each of exposure conditions A, B, and C (S9), and writes the recognition image data into the recognition image storage buffer 19 via the output switching unit 15 (S10). The imaging apparatus 10 performs VMD processing using each piece of recognition image data generated under exposure conditions A, B, and C, and further performs image recognition using each piece of recognition image data generated under exposure conditions A, B, and C (S11).
In the image recognition of step S11, the imaging apparatus 10 selects the camera parameters of the exposure condition corresponding to the recognition image data that yielded the best image recognition result among the recognition image data generated under exposure conditions A, B, and C (S12). After step S12, the operation of the imaging apparatus 10 returns to step S1, and the operation for each time slot determined in step S1 is repeated until the imaging apparatus 10 is powered off.
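The S1-S12 loop of FIG. 4 can be sketched as a single time-slot handler. This is a hedged sketch only; all function names below are hypothetical stand-ins, not APIs defined by the patent.

```python
# Hedged sketch of the S1-S12 loop in FIG. 4. Transmission slots capture
# three fixed-exposure frames and HDR-combine them for sending; recognition
# slots capture with three variable exposures and keep the best-recognized
# condition so it can be fed back for the next cycle.

def run_time_slot(is_recognition_slot, capture, recognize, hdr_combine, transmit, conditions):
    """Process one time slot; returns the selected condition (recognition slots) or None."""
    if not is_recognition_slot:                            # S1: NO
        frames = [capture(c) for c in conditions]          # S2-S3: set fixed params, capture
        transmit(hdr_combine(frames))                      # S4-S6: combine, buffer, send
        return None
    frames = {c: capture(c) for c in conditions}           # S7-S9: variable params, capture
    scores = {c: recognize(f) for c, f in frames.items()}  # S10-S11: VMD + recognition
    return max(scores, key=scores.get)                     # S12: best condition fed back
```

In use, `capture`, `recognize`, `hdr_combine`, and `transmit` would wrap the sensor, the image recognition unit 27, the signal processing unit 13, and the image transmission unit 23, respectively.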
FIG. 5 is a time chart showing an example of a detailed operation procedure of the imaging apparatus 10 of the present embodiment. FIG. 5 shows the time-series operations of the image sensor 11, the motion detection unit 25, the camera parameter control driver 29 (simply labeled "control driver" in FIG. 5), and the image recognition unit 27. The description of FIG. 5 assumes that initial values of the gain of the image sensor 11 and the shutter speed of the shutter unit SH have been set in advance. In FIG. 5, the time for the signal processing unit 13 to generate one frame of transmission image data is 33.3 ms.
In FIG. 5, the camera parameter control driver 29 sets camera parameters for three different exposure conditions according to the camera parameters of the exposure condition selected in the previous time slot used for image recognition with recognition image data. The arrow FDBa indicates, for example, the timing at which the camera parameter control driver 29 sets a camera parameter (for example, the gain) of exposure condition A in the image sensor 11. In the interval IMa, the image sensor 11 performs imaging according to the camera parameter of exposure condition A set at the arrow FDBa. Recognition image data according to exposure condition A is thereby generated.
Before the interval IMa ends, the camera parameter control driver 29 sets a camera parameter (for example, the gain) of exposure condition B in the image sensor 11. The arrow FDBb indicates the timing at which the camera parameter control driver 29 sets this camera parameter. In the interval IMb, the image sensor 11 performs imaging according to the camera parameter of exposure condition B set at the arrow FDBb. Recognition image data according to exposure condition B is thereby generated.
Likewise, before the interval IMb ends, the camera parameter control driver 29 sets a camera parameter (for example, the gain) of exposure condition C in the image sensor 11. The arrow FDBc indicates the timing at which the camera parameter control driver 29 sets this camera parameter. In the interval IMc, the image sensor 11 performs imaging according to the camera parameter of exposure condition C set at the arrow FDBc. Recognition image data according to exposure condition C is thereby generated.
After the interval IMa, the motion detection unit 25 performs VMD processing, in the interval VDa (for example, 66.6 ms), on the recognition image data generated by the imaging of the interval IMa. After the interval VDa ends, it writes the VMD-processed recognition image data into the recognition image storage buffer 19 and outputs information on the write position (address) in the recognition image storage buffer 19 to the camera parameter control driver 29 (see arrow Bfa).
After the interval IMb, the motion detection unit 25 performs VMD processing, in the interval VDb (likewise, for example, 66.6 ms), on the recognition image data generated by the imaging of the interval IMb. After the interval VDb ends, it writes the VMD-processed recognition image data into the recognition image storage buffer 19 and outputs information on the write position (address) in the recognition image storage buffer 19 to the camera parameter control driver 29 (see arrow Bfb).
After the interval IMc, the motion detection unit 25 performs VMD processing, in the interval VDc (likewise, for example, 66.6 ms), on the recognition image data generated by the imaging of the interval IMc. After the interval VDc ends, it writes the VMD-processed recognition image data into the recognition image storage buffer 19 and outputs information on the write position (address) in the recognition image storage buffer 19 to the camera parameter control driver 29 (see arrow Bfc).
After the interval VDa, the image recognition unit 27 refers, in the interval Dfa, to the address information held by the camera parameter control driver 29 and acquires the recognition image data generated under exposure condition A from the recognition image storage buffer 19. After the interval Dfa, in the interval FDa, the image recognition unit 27 performs image recognition on the recognition image data generated under exposure condition A.
After the interval FDa, the image recognition unit 27 refers, in the interval Dfb, to the address information held by the camera parameter control driver 29 and acquires the recognition image data generated under exposure condition B from the recognition image storage buffer 19. After the interval Dfb, in the interval FDb, the image recognition unit 27 performs image recognition on the recognition image data generated under exposure condition B.
Further, after the interval FDb, the image recognition unit 27 refers, in the interval Dfc, to the address information held by the camera parameter control driver 29 and acquires the recognition image data generated under exposure condition C from the recognition image storage buffer 19. After the interval Dfc, in a similar interval, the image recognition unit 27 performs image recognition on the recognition image data generated under exposure condition C.
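The pipelining in FIG. 5 can be approximated numerically. The sketch below is an assumption-laden model, not an exact reproduction of the figure: capture durations, the 66.6 ms VMD passes, and a 33.3 ms recognition pass are taken or inferred from the description above; captures for A, B, and C run back-to-back, each VMD pass starts when its capture ends, and each recognition pass starts only once its VMD result is ready and the recognition unit is free.

```python
# Illustrative timing model of FIG. 5 (durations are assumptions inferred
# from the text, in milliseconds).

def schedule(capture_ms=33.3, vmd_ms=66.6, recog_ms=33.3, conditions=("A", "B", "C")):
    """Return {condition: (vmd_done, recog_done)} completion times in ms."""
    timeline = {}
    recog_free = 0.0
    for i, cond in enumerate(conditions):
        capture_done = (i + 1) * capture_ms      # IMa, IMb, IMc back-to-back
        vmd_done = capture_done + vmd_ms         # VDa, VDb, VDc
        recog_start = max(vmd_done, recog_free)  # Dfa/FDa etc. wait for data + unit
        recog_free = recog_start + recog_ms
        timeline[cond] = (vmd_done, recog_free)
    return timeline
```

Running it shows the recognition passes for A, B, and C completing in order, with all three conditions evaluated roughly 200 ms after the first capture begins under these assumed durations.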
As described above, the imaging apparatus 10 of the present embodiment uses a plurality of different exposure conditions A, B, and C and generates recognition image data captured by the imaging unit IM according to the camera parameters of each of the exposure conditions A, B, and C. For each of the exposure conditions A, B, and C, the imaging apparatus 10 performs image recognition on the recognition image data generated by the signal processing unit 13, and, based on the image recognition results for the exposure conditions A, B, and C, selects the camera parameters of the exposure condition suitable for video capture in the imaging unit IM and feeds them back for changing (adjusting) the camera parameters of the imaging unit IM.
Since the imaging apparatus 10 thus selects the exposure condition that yielded the best image recognition result among the recognition image data captured under the plurality of different exposure conditions A, B, and C, it can capture video under an appropriate exposure condition whether or not the environment around the user operating the imaging apparatus 10 changes suddenly.
The imaging apparatus 10 can also individually store, in the recognition image storage buffer 19, the recognition image data of the video generated for each of the plurality of different exposure conditions A, B, and C.
Further, the imaging apparatus 10 generates recognition image data of the video for each of the plurality of different exposure conditions A, B, and C and stores it in the recognition image storage buffer 19, and generates transmission image data of the video and stores it in the transmission image storage buffer 17; it can therefore hold a plurality of storage units that store the recognition image data and the transmission image data separately.
Further, since the imaging apparatus 10 generates the transmission image data of the video by combining (for example, HDR combining) the recognition image data generated for each of the plurality of different exposure conditions A, B, and C, it can generate image data with a dynamic range expanded compared with video image data generated under a single exposure condition, and transmit it to an external device.
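The HDR combination mentioned above can be sketched as a simple exposure fusion. The patent does not specify a combining method, so the following is a minimal stand-in: each input frame is a list of 0-255 pixel values taken under one exposure condition, and each output pixel is a weighted average that favors well-exposed (near mid-gray) pixels from each frame.

```python
# Minimal exposure-fusion sketch (a simplified stand-in for the patent's
# unspecified HDR combining). Frames must all have the same length.

def hdr_combine(frames):
    """Fuse same-sized frames into one pixel list, weighting well-exposed pixels."""
    def weight(p):
        # triangle weight: 1.0 at mid-gray (127.5), near 0 at the extremes
        return max(1.0 - abs(p - 127.5) / 127.5, 1e-6)
    out = []
    for pixels in zip(*frames):
        wsum = sum(weight(p) for p in pixels)
        out.append(sum(weight(p) * p for p in pixels) / wsum)
    return out
```

With this weighting, a pixel that is crushed to black in one exposure or clipped to white in another contributes almost nothing, and the well-exposed measurement dominates the fused result.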
Further, the imaging apparatus 10 can easily switch, in accordance with a predetermined input operation (for example, a user input operation), whether the recognition image data is output to the recognition image storage buffer 19 or the transmission image storage buffer 17. For example, when the user chooses to transmit the large-volume recognition image data to an external device, the imaging apparatus 10 can output the recognition image data to the transmission image storage buffer 17 in response to a simple user input operation (for example, a UI (User Interface) operation).
Further, even when a sudden change in the surrounding environment occurs, the imaging apparatus 10 adjusts the camera parameters of the imaging unit IM according to the exposure condition (for example, exposure condition B) selected on the basis of the image recognition results of the recognition image data generated under the plurality of different exposure conditions A, B, and C, so it can capture video under an appropriate exposure condition suited to the surrounding environment.
Further, in accordance with the selected exposure condition (for example, exposure condition B), the imaging apparatus 10 can adjust, as predetermined camera parameters, any one or a combination of, for example, the aperture amount of the aperture unit IR, the shutter speed of the shutter unit SH, and the gain of the image sensor 11.
(Modification of the present embodiment)
In the present embodiment, the recognition image data is very large, so transmitting and receiving it directly over a network would place a considerable load on the network between the imaging apparatus 10 and an external device, making transmission at a high rate difficult, which is undesirable. However, as network transmission technology advances in the future, it is quite conceivable that even very large recognition image data could be transmitted and received directly over a network without imposing a significant load.
Therefore, in a modification of the present embodiment (hereinafter, "the present modification"), an imaging control system 100 in which the motion detection unit 25 and the image recognition unit 27 of the imaging apparatus 10 of the present embodiment described above are provided in an external device separate from the imaging apparatus 10 (for example, the image recognition device 50 shown in FIG. 6) will be described with reference to FIG. 6. FIG. 6 is a block diagram showing in detail an example of the system configuration of the imaging control system 100 of the present modification.
The imaging control system 100 shown in FIG. 6 has a configuration in which an imaging apparatus 10A and an image recognition device 50 are connected via networks NW1 and NW2.
The imaging apparatus 10A shown in FIG. 6 has the configuration of the imaging apparatus 10 shown in FIG. 1 with the motion detection unit 25 and the image recognition unit 27 omitted; specifically, it includes the imaging unit IM, the signal processing unit 13, the output switching unit 15, the transmission image storage buffer 17, the recognition image storage buffer 19, the image compression unit 21, the image transmission unit 23A, and the camera parameter control driver 29.
The image recognition device 50 includes a communication unit 31, a motion detection unit 25A, and an image recognition unit 27A. In the present modification, for the operations of the units of the imaging apparatus 10A and the image recognition device 50, description of operations identical to those of the corresponding units of the imaging apparatus 10 is simplified or omitted, and only the differences are described.
Specifically, in the imaging apparatus 10A, the image transmission unit 23A reads transmission image data or recognition image data from the transmission image storage buffer 17 or the recognition image storage buffer 19 and transmits it to the image recognition device 50 via the network NW1.
In the image recognition device 50, the communication unit 31 receives the transmission image data or recognition image data transmitted from the image transmission unit 23A and outputs it to the motion detection unit 25A. As in the embodiment described above, the image recognition unit 27A performs image recognition on each piece of recognition image data generated for each of the plurality of different exposure conditions A, B, and C, selects the exposure condition corresponding to the recognition image data that yielded the best image recognition result, and outputs information on the selected exposure condition to the communication unit 31.
The communication unit 31 transmits the information on the exposure condition selected by the image recognition unit 27A to the camera parameter control driver 29 of the imaging apparatus 10A via the network NW2. To keep FIG. 6 uncluttered, the communication unit provided in the imaging apparatus 10A between the communication unit 31 and the camera parameter control driver 29 is not shown. The operation of the camera parameter control driver 29 is the same as that of the camera parameter control driver 29 of the embodiment described above, so its description is omitted. The networks NW1 and NW2 may be the same network or different networks, and are wireless networks (for example, a wireless LAN (Local Area Network) or a wireless WAN (Wide Area Network)).
As described above, in the imaging control system 100 of the present modification, the imaging apparatus 10A uses the plurality of different exposure conditions A, B, and C, generates recognition image data captured by the imaging unit IM according to each of the exposure conditions A, B, and C, and transmits it to the image recognition device 50 via the network NW1. The image recognition device 50 performs image recognition, for each of the exposure conditions A, B, and C, on the recognition image data received from the imaging apparatus 10A. Further, based on the image recognition results of the recognition image data for the exposure conditions A, B, and C, the image recognition device 50 selects the exposure condition suitable for video capture in the imaging apparatus 10A and feeds it back for changing (adjusting) the camera parameters of the imaging unit IM.
This allows the imaging apparatus 10A to have the image recognition device 50 perform the image recognition processing of the recognition image data captured under the plurality of different exposure conditions A, B, and C, so the circuit configuration required for image recognition processing can be reduced.
The image recognition device 50 also selects the exposure condition that yielded the best image recognition result among the image recognition results for the plurality of recognition image data transmitted from the imaging apparatus 10A connected via the network NW1, and returns the selection result (that is, information on the selected exposure condition) to the imaging apparatus 10A. Accordingly, in the imaging control system 100, whether or not the surrounding environment changes suddenly, the imaging apparatus 10A can capture video under an exposure condition suited to the surrounding environment by receiving, in real time, information on the appropriate exposure condition selected by the image recognition device 50 installed, for example, at a remote location.
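The feedback round trip of the present modification can be sketched with the network abstracted away as direct function calls. All names below are illustrative stand-ins, not interfaces from the patent: the device side captures under each condition and ships the frames; the recognition side scores them and returns the chosen condition, which the device then applies to its camera parameters.

```python
# Hedged sketch of the modified configuration's feedback cycle, with NW1/NW2
# replaced by direct calls for illustration.

def recognition_server(frames, recognize):
    """Image recognition device 50 side: score each frame, return the best condition."""
    scores = {cond: recognize(frame) for cond, frame in frames.items()}
    return max(scores, key=scores.get)

def device_feedback_cycle(capture, send_and_wait, apply_params, conditions):
    """Imaging apparatus 10A side: capture, transmit, then apply the returned condition."""
    frames = {c: capture(c) for c in conditions}
    selected = send_and_wait(frames)  # out over NW1, back over NW2
    apply_params(selected)
    return selected
```

In a real deployment, `send_and_wait` would serialize the frames, send them to the image recognition device 50, and block on (or asynchronously await) the selected-condition reply.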
 以下、上述した本発明に係る撮像装置及び撮像制御方法の構成、作用及び効果を説明する。 Hereinafter, the configuration, operation, and effect of the above-described imaging apparatus and imaging control method according to the present invention will be described.
 本発明の一実施形態は、映像を撮像する撮像部と、複数の異なる露光条件を用いて、各露光条件に応じて、前記撮像部により撮像された前記映像の画像データを生成する信号処理部と、前記露光条件毎に、前記信号処理部により生成された前記映像の画像データを画像認識する画像認識部と、前記画像認識部における前記露光条件毎の前記映像の画像データの画像認識結果を基に、前記撮像部における映像の撮像に適する露光条件を選択する選択部と、を備える、撮像装置である。 In one embodiment of the present invention, an image pickup unit that picks up an image and a signal processing unit that generates image data of the image picked up by the image pickup unit according to each exposure condition using a plurality of different exposure conditions And an image recognition unit for recognizing the video image data generated by the signal processing unit for each exposure condition, and an image recognition result of the video image data for each exposure condition in the image recognition unit. And a selection unit that selects an exposure condition suitable for capturing an image in the imaging unit.
 この構成では、信号処理部は、複数の異なる露光条件を用いて、各露光条件に応じて、撮像部により撮像された映像の画像データを生成する。画像認識部は、露光条件毎に、信号処理部により生成された映像の画像データを画像認識する。選択部は、画像認識部における露光条件毎の画像データの画像認識結果を基に、撮像部における映像の撮像に適する露光条件を選択する。 In this configuration, the signal processing unit generates image data of a video imaged by the imaging unit according to each exposure condition using a plurality of different exposure conditions. The image recognition unit recognizes the image data of the video generated by the signal processing unit for each exposure condition. The selection unit selects an exposure condition suitable for image capturing in the imaging unit based on the image recognition result of the image data for each exposure condition in the image recognition unit.
 これにより、撮像装置は、複数の異なる露光条件に従って撮像されたそれぞれの画像データの画像認識結果の中で最も画像認識結果が良好な露光条件を選択するので、周囲の環境が急に変化した場合でも急な変化が生じていない場合でも、適正な露光条件を用いて映像を撮像することができる。 As a result, the imaging apparatus selects the exposure condition with the best image recognition result among the image recognition results of the respective image data captured according to a plurality of different exposure conditions, so that the surrounding environment suddenly changes However, even if no sudden change occurs, an image can be taken using appropriate exposure conditions.
 また、本発明の一実施形態は、前記露光条件毎に、前記信号処理部により生成された前記映像の画像データを記憶する第1の記憶部、を更に備える、撮像装置である。 Further, an embodiment of the present invention is an imaging apparatus further comprising a first storage unit that stores image data of the video generated by the signal processing unit for each exposure condition.
 この構成によれば、撮像装置は、複数の異なる露光条件毎に生成された映像の画像データを第1の記憶部(認識用画像記憶バッファ19)に記憶できる。 According to this configuration, the imaging apparatus can store the image data of the video generated for each of a plurality of different exposure conditions in the first storage unit (recognition image storage buffer 19).
 また、本発明の一実施形態は、前記信号処理部は、前記露光条件毎に、前記映像の認識用画像データを生成して第1の記憶部に記憶し、前記撮像部により撮像された前記映像の伝送用画像データを生成して前記第1の記憶部とは異なる第2の記憶部に記憶する、撮像装置である。 In one embodiment of the present invention, the signal processing unit generates image data for recognizing the video for each exposure condition, stores the image data in a first storage unit, and the image is captured by the imaging unit. An imaging apparatus that generates image data for video transmission and stores the generated image data in a second storage unit different from the first storage unit.
 この構成によれば、撮像装置は、複数の異なる露光条件毎に映像の認識用画像データを生成して第1の記憶部(認識用画像記憶バッファ19)に記憶し、また、映像の伝送用画像データを生成して第2の記憶部(伝送用画像記憶バッファ17)に記憶するので、認識用画像データと伝送用画像データとを区別して記憶する複数の記憶部を保持できる。 According to this configuration, the imaging device generates image recognition image data for each of a plurality of different exposure conditions, stores the image recognition data in the first storage unit (recognition image storage buffer 19), and also for image transmission. Since the image data is generated and stored in the second storage unit (transmission image storage buffer 17), a plurality of storage units for distinguishing and storing the recognition image data and the transmission image data can be held.
 また、本発明の一実施形態は、前記信号処理部は、前記露光条件毎に生成された前記映像の認識用画像データを合成することで、前記映像の伝送用画像データを生成する、撮像装置である。 In one embodiment of the present invention, the signal processing unit generates the image data for transmission of the video by synthesizing the image data for recognition of the video generated for each exposure condition. It is.
 この構成によれば、撮像装置は、複数の異なる露光条件毎に生成された映像の認識用画像データを合成(例えばHDR合成)することにより、映像の伝送用画像データを生成するので、単一の露光条件に従って生成された映像の画像データに比べて、ダイナミックレンジが拡大された画像データを生成することができる。 According to this configuration, the imaging apparatus generates video transmission image data by combining (for example, HDR combining) video recognition image data generated for each of a plurality of different exposure conditions. Compared with video image data generated according to the exposure conditions, image data with an expanded dynamic range can be generated.
 また、本発明の一実施形態は、所定の入力操作に応じて、前記映像の認識用画像データの前記第1の記憶部又は前記第2の記憶部への出力を切り換える切換部、を更に備える、撮像装置である。 The embodiment of the present invention further includes a switching unit that switches output of the image recognition image data to the first storage unit or the second storage unit in accordance with a predetermined input operation. , An imaging device.
 この構成によれば、撮像装置は、所定の入力操作(例えばユーザの入力操作)に応じて、映像の認識用画像データを第1の記憶部(認識用画像記憶バッファ19)又は第2の記憶部(伝送用画像記憶バッファ17)への出力を簡易に切り換えることができる。例えばユーザがデータ容量の大きい認識用画像データを外部機器に伝送することを選んだ場合には、撮像装置は、ユーザの簡易な入力操作(例えばUI操作)により、認識用画像データを第2の記憶部(伝送用画像記憶バッファ17)に出力できる。 According to this configuration, the imaging apparatus stores the image recognition image data in the first storage unit (recognition image storage buffer 19) or the second storage in accordance with a predetermined input operation (for example, a user input operation). Output to the transmission unit (transmission image storage buffer 17) can be easily switched. For example, when the user chooses to transmit the recognition image data having a large data capacity to the external device, the imaging apparatus transfers the recognition image data to the second image by a simple input operation (for example, UI operation) by the user. The data can be output to the storage unit (transmission image storage buffer 17).
 In one embodiment of the present invention, the imaging device further comprises an exposure condition control unit that adjusts predetermined imaging parameters of the imaging unit in accordance with the exposure condition selected by the selection unit.
 With this configuration, even when the surrounding environment changes abruptly, the imaging device adjusts the predetermined imaging parameters of the imaging unit in accordance with the exposure condition selected on the basis of the image recognition results of the image data for recognition generated under a plurality of different exposure conditions, and can therefore capture video using an appropriate exposure condition suited to the surrounding environment.
 In one embodiment of the present invention, the imaging unit includes a lens, a diaphragm unit that adjusts the amount of light passing through the lens, a shutter unit that alternately opens and closes during imaging, and an image sensor that photoelectrically converts an optical image formed on an imaging surface by light having passed through the shutter unit, and the predetermined imaging parameters are any one of, or a combination of, the aperture amount of the diaphragm unit, the shutter speed of the shutter unit, and the gain of the image sensor during photoelectric conversion.
 With this configuration, the imaging device can adjust, as the predetermined imaging parameters in accordance with the selected exposure condition, any one of, or a combination of, for example, the aperture amount of the diaphragm unit, the shutter speed of the shutter unit, and the gain of the image sensor.
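Applying a selected exposure condition to the imaging unit amounts to pushing a small parameter set (aperture, shutter speed, gain) to the hardware. The sketch below is a hypothetical illustration only; the class and field names are assumptions and do not appear in the disclosure, and a real driver would program sensor and lens registers rather than store values:

```python
from dataclasses import dataclass

@dataclass
class ExposureCondition:
    # Hypothetical parameter set; field names are illustrative.
    name: str
    f_number: float   # aperture (diaphragm amount)
    shutter_s: float  # shutter speed in seconds
    gain_db: float    # sensor gain during photoelectric conversion

class CameraParameterDriver:
    """Sketch of a driver that applies a selected exposure condition's
    parameters to the imaging unit (cf. camera parameter control driver 29)."""

    def __init__(self):
        self.applied = None

    def apply(self, cond: ExposureCondition):
        # A real driver would write these values to sensor/lens registers;
        # here we just record what would be written.
        self.applied = (cond.f_number, cond.shutter_s, cond.gain_db)
        return self.applied

driver = CameraParameterDriver()
driver.apply(ExposureCondition("B", f_number=2.8, shutter_s=1 / 60, gain_db=6.0))
```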
 In one embodiment of the present invention, an imaging control method in an imaging device comprises: capturing video; generating, using a plurality of different exposure conditions, image data of the captured video in accordance with each exposure condition; performing image recognition on the image data of the video generated for each exposure condition; and selecting, on the basis of the image recognition results of the image data of the video for each exposure condition, an exposure condition suitable for capturing the video.
 According to this method, the signal processing unit generates, using a plurality of different exposure conditions, image data of the video captured by the imaging unit in accordance with each exposure condition. The image recognition unit performs image recognition, for each exposure condition, on the image data of the video generated by the signal processing unit. The selection unit selects an exposure condition suitable for capturing video with the imaging unit on the basis of the image recognition results of the image data for each exposure condition from the image recognition unit.
 As a result, the imaging device selects the exposure condition yielding the best image recognition result among the image recognition results of the respective image data captured under the plurality of different exposure conditions, and can therefore capture video using an appropriate exposure condition whether or not the surrounding environment changes abruptly.
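The selection step above — scoring each exposure condition by its recognition result and keeping the best — can be sketched as follows. The numeric score is a stand-in (for example, a detection count or a detector confidence from the image recognition unit); its exact form is an assumption, not part of the disclosure:

```python
def select_exposure_condition(recognition_results):
    """Pick the exposure condition whose recognition result scored best.

    recognition_results: dict mapping exposure condition name -> numeric
    score (e.g. detection count or confidence; assumed metric).
    """
    return max(recognition_results, key=recognition_results.get)

# Example: condition B's frame yielded the most reliable recognitions,
# so B becomes the exposure condition used for subsequent capture.
scores = {"A": 0.42, "B": 0.91, "C": 0.17}
chosen = select_exposure_condition(scores)
```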
 In one embodiment of the present invention, an imaging control method in an imaging device and an image recognition device connected via a network comprises: capturing video; generating, using a plurality of different exposure conditions, image data of the captured video in accordance with each exposure condition; transmitting the image data of the video generated for each exposure condition from the imaging device; receiving the image data of the video generated for each exposure condition at the image recognition device; performing image recognition on the image data of the video generated for each exposure condition; and selecting, on the basis of the image recognition results of the image data of the video for each exposure condition, an exposure condition suitable for capturing the video.
 According to this method, the imaging device generates, using a plurality of different exposure conditions, image data of the video captured by the imaging unit in accordance with each exposure condition, and transmits the image data to the image recognition device. The image recognition device performs image recognition, for each exposure condition, on the image data of the video received from the imaging device. The image recognition device also selects an exposure condition suitable for capturing video with the imaging unit on the basis of the image recognition results of the image data for each exposure condition.
 As a result, the imaging device can delegate the image recognition processing of the image data captured under the plurality of different exposure conditions to the image recognition device, so the circuitry the imaging device needs for image recognition processing can be reduced. The image recognition device selects the exposure condition yielding the best image recognition result among the results for the plural sets of image data for recognition transmitted from the imaging device connected via the network, and returns the selection result to the imaging device. Accordingly, in the imaging control system, whether or not the surrounding environment changes abruptly, the imaging device can capture video using an exposure condition suited to the surrounding environment by receiving in real time information on the appropriate exposure condition selected by, for example, an image recognition device installed at a remote site.
 The present invention is useful as an imaging device and an imaging control method that capture video using an appropriate exposure condition regardless of whether the surrounding environment changes.
10, 10A Imaging device
11 Image sensor
13 Signal processing unit
13a Exposure condition A signal processing unit
13b Exposure condition B signal processing unit
13c Exposure condition C signal processing unit
13d HDR processing unit
15 Output switching unit
17 Transmission image storage buffer
19 Recognition image storage buffer
21 Image compression unit
23, 23A Image transmission unit
25, 25A Motion detection unit
27, 27A Image recognition unit
29 Camera parameter control driver
31 Communication unit
50 Image recognition device
100 Imaging control system
IR Diaphragm unit
LS Lens
SH Shutter unit

Claims (9)

  1.  An imaging device comprising:
      an imaging unit that captures video;
      a signal processing unit that generates, using a plurality of different exposure conditions, image data of the video captured by the imaging unit in accordance with each exposure condition;
      an image recognition unit that performs image recognition, for each exposure condition, on the image data of the video generated by the signal processing unit; and
      a selection unit that selects an exposure condition suitable for capturing video with the imaging unit on the basis of image recognition results of the image data of the video for each exposure condition from the image recognition unit.
  2.  The imaging device according to claim 1, further comprising:
      a first storage unit that stores, for each exposure condition, the image data of the video generated by the signal processing unit.
  3.  The imaging device according to claim 1, wherein the signal processing unit:
      generates, for each exposure condition, image data for recognition of the video and stores it in a first storage unit; and
      generates image data for transmission of the video captured by the imaging unit and stores it in a second storage unit different from the first storage unit.
  4.  The imaging device according to claim 3, wherein the signal processing unit generates the image data for transmission of the video by combining the image data for recognition of the video generated for each exposure condition.
  5.  The imaging device according to claim 3 or 4, further comprising:
      a switching unit that, in response to a predetermined input operation, switches the output of the image data for recognition of the video to the first storage unit or the second storage unit.
  6.  The imaging device according to any one of claims 1 to 5, further comprising:
      an exposure condition control unit that adjusts predetermined imaging parameters of the imaging unit in accordance with the exposure condition selected by the selection unit.
  7.  The imaging device according to claim 6,
      wherein the imaging unit includes a lens, a diaphragm unit that adjusts the amount of light passing through the lens, a shutter unit that alternately opens and closes during imaging, and an image sensor that photoelectrically converts an optical image formed on an imaging surface by light having passed through the shutter unit, and
      wherein the predetermined imaging parameters are any one of, or a combination of, the aperture amount of the diaphragm unit, the shutter speed of the shutter unit, and the gain of the image sensor during photoelectric conversion.
  8.  An imaging control method in an imaging device, comprising:
      capturing video;
      generating, using a plurality of different exposure conditions, image data of the captured video in accordance with each exposure condition;
      performing image recognition on the image data of the video generated for each exposure condition; and
      selecting an exposure condition suitable for capturing the video on the basis of image recognition results of the image data of the video for each exposure condition.
  9.  An imaging control method in an imaging device and an image recognition device connected via a network, comprising:
      capturing video;
      generating, using a plurality of different exposure conditions, image data of the captured video in accordance with each exposure condition;
      transmitting the image data of the video generated for each exposure condition from the imaging device;
      receiving the image data of the video generated for each exposure condition at the image recognition device;
      performing image recognition on the image data of the video generated for each exposure condition; and
      selecting an exposure condition suitable for capturing the video on the basis of image recognition results of the image data of the video for each exposure condition.
PCT/JP2015/001170 2014-03-27 2015-03-05 Imaging device, and imaging control method WO2015146006A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014066689A JP2015192222A (en) 2014-03-27 2014-03-27 Imaging apparatus and imaging control method
JP2014-066689 2014-03-27

Publications (1)

Publication Number Publication Date
WO2015146006A1 true WO2015146006A1 (en) 2015-10-01

Family

ID=54194560

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/001170 WO2015146006A1 (en) 2014-03-27 2015-03-05 Imaging device, and imaging control method

Country Status (2)

Country Link
JP (1) JP2015192222A (en)
WO (1) WO2015146006A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6604297B2 (en) 2016-10-03 2019-11-13 株式会社デンソー Imaging device
JP6973163B2 (en) * 2018-02-22 2021-11-24 三菱電機株式会社 Shooting system, shooting unit and shooting device
WO2020085028A1 (en) 2018-10-26 2020-04-30 パナソニックIpマネジメント株式会社 Image recognition device and image recognition method
JP2023074056A (en) * 2021-11-17 2023-05-29 株式会社日立製作所 Image recognition system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007228458A (en) * 2006-02-27 2007-09-06 Fujifilm Corp Photographic conditions setting method and photographic apparatus employing the same
JP2007235640A (en) * 2006-03-02 2007-09-13 Fujifilm Corp Photographing device and method
JP2014052487A (en) * 2012-09-06 2014-03-20 Canon Inc Image capturing device, method of controlling the same, program, and recording medium


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020204668A1 (en) 2019-04-05 2020-10-08 Samsung Electronics Co., Ltd. Electronic device and method for controlling camera using external electronic device
EP3939247A4 (en) * 2019-04-05 2022-05-04 Samsung Electronics Co., Ltd. Electronic device and method for controlling camera using external electronic device
US11563889B2 (en) 2019-04-05 2023-01-24 Samsung Electronics Co., Ltd. Electronic device and method for controlling camera using external electronic device
TWI744876B (en) * 2019-05-10 2021-11-01 日商索尼半導體解決方案公司 Image recognition device, solid-state imaging device and image recognition method
US11805330B2 (en) 2019-05-10 2023-10-31 Sony Semiconductor Solutions Corporation Image recognition device, solid-state imaging device, and image recognition method
CN114500870A (en) * 2021-12-30 2022-05-13 北京罗克维尔斯科技有限公司 Image processing method and device and electronic equipment
CN114500870B (en) * 2021-12-30 2024-05-31 北京罗克维尔斯科技有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
JP2015192222A (en) 2015-11-02

Similar Documents

Publication Publication Date Title
WO2015146006A1 (en) Imaging device, and imaging control method
KR102557794B1 (en) Image processing apparatus, method for controlling image processing apparatus, and non-transitory computer-readable storage medium
KR102157675B1 (en) Image photographing apparatus and methods for photographing image thereof
JP6471953B2 (en) Imaging apparatus, imaging system, and imaging method
JP6319340B2 (en) Movie imaging device
US10757384B2 (en) Desaturation control
WO2011151867A1 (en) Imaging device, imaging means and program
KR101352440B1 (en) Image processing apparatus, image processing method, and recording medium
JP6732726B2 (en) Imaging device, imaging method, and program
JP2016096430A (en) Imaging device and imaging method
EP3402179B1 (en) Image-capturing system, image-capturing method, and program
US10600170B2 (en) Method and device for producing a digital image
JP2015109563A (en) Video signal processing device and control method thereof
JP7403242B2 (en) Image processing system, image processing method, and program
US11238285B2 (en) Scene classification for image processing
US20160112640A1 (en) Imaging apparatus and imaging method
JP2007288805A (en) Camera system and automatic exposure control method
JP2011171842A (en) Image processor and image processing program
JP2008271024A (en) Image processing apparatus and method and program
JP2017092876A (en) Imaging device, imaging system and imaging method
JP2017092850A (en) Imaging device, imaging method, and conference terminal equipment
JP2012134745A (en) Image signal processing device
CN112702588B (en) Dual-mode image signal processor and dual-mode image signal processing system
JP2015080157A (en) Image processing device, image processing method and program
JP7277236B2 (en) IMAGING DEVICE, IMAGING SYSTEM, PROGRAM, RECORDING MEDIUM, AND CONTROL METHOD

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15769206

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15769206

Country of ref document: EP

Kind code of ref document: A1