WO2020071108A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2020071108A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
display
area
feature amount
Application number
PCT/JP2019/036388
Other languages
French (fr)
Japanese (ja)
Inventor
康一 郡司 (Koichi Gunji)
Original Assignee
キヤノン株式会社 (Canon Inc.)
Priority claimed from JP2019054487A (published as JP2020061726A)
Application filed by キヤノン株式会社 (Canon Inc.)
Publication of WO2020071108A1
Priority to US17/220,080 (published as US20210218887A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/20: Circuitry for controlling amplitude response

Definitions

  • The present invention relates to an image processing device and an image processing method.
  • An imaging device such as a digital camera or a digital video camera can shoot (capture and record images) while displaying the captured image on an EVF (electronic viewfinder).
  • For example, a display panel built into the imaging device, or a display device (external device) connected to the imaging device, is used as the EVF, and the photographer views the captured image displayed on the EVF to check various feature amounts of the captured image.
  • In the prior art, the display device itself needs a function for acquiring the feature amount of an image; when additional information (MaxCLL or MaxFALL) input to the display device is used instead, a display luminance that does not match the photographer's intention is realized.
  • An object of the present invention is to provide a technique that makes it possible to more reliably realize the display luminance intended by the photographer or the like.
  • A first aspect of the present invention is an image processing apparatus comprising: generating means for generating output image data based on target image data; acquiring means for acquiring a feature amount from the output image data; and output means for outputting the output image data and feature information based on the feature amount, wherein, when the image area of the output image data includes a first area that is the image area of the target image data and a second area that is the image area of predetermined image data, the acquiring means acquires the feature amount of the first area.
  • A third aspect of the present invention is a program for causing a computer to function as each unit of the above-described image processing apparatus.
  • FIG. 1 is a block diagram illustrating configuration examples of the imaging devices according to the first to fourth embodiments.
  • FIG. 2 is a flowchart illustrating examples of processing flows of the imaging devices according to the first to fourth embodiments.
  • FIG. 3 is a diagram illustrating examples of display image data and the like according to the first to fourth embodiments.
  • FIG. 1A is a block diagram illustrating a configuration example of an imaging device according to the present embodiment.
  • The lens group 100 includes at least one lens and guides light from a subject to the imaging sensor unit 101.
  • The lens group 100 is configured so that the amount of light incident on the imaging sensor unit 101 from the lens group 100, the focusing state, and the like can be controlled.
  • Each pixel of the imaging element consists of an R sub-pixel having a red color filter, a G sub-pixel having a green color filter, and a B sub-pixel having a blue color filter.
  • In each pixel of the imaging element, the R, G, and B sub-pixels are arranged in a predetermined pattern; specifically, one R sub-pixel, one B sub-pixel, and two G sub-pixels are arranged in a mosaic.
  • Such an arrangement is called a "Bayer array", and the image data output from the imaging sensor unit 101 (its A/D conversion unit) is accordingly Bayer-array image data (Bayer image data).
  • The development processing unit 102 performs development processing on the Bayer image data output from the imaging sensor unit 101 and outputs the developed image data to the display image generation unit 103.
  • The development processing includes offset adjustment, which adds an offset value to gradation values (R, G, and B values, etc.), gain adjustment, which multiplies gradation values by a gain value, and gamma conversion, which converts the gradation characteristic.
  • The conversion characteristic of the gamma conversion (gamma value, gamma curve, and so on) is determined in consideration of the characteristics of the lens group 100, the imaging sensor unit 101, and the like.
  • The development processing includes processing that converts the Bayer image data (RGB image data in which each pixel consists of one R sub-pixel, one B sub-pixel, and two G sub-pixels) into YCbCr image data.
  • The YCbCr image data is image data in which each pixel value consists of a luminance value (Y value) and color difference values (Cb and Cr values).
  • The development processing also includes correction processing that corrects image distortion caused by distortion of the lens group 100, image stabilization processing that reduces shake of the image (of the subject shown in the image) caused by vibration of the imaging device, and noise reduction processing that reduces image noise.
  • The image data output from the imaging sensor unit 101 and the development processing unit 102 is captured image data representing a subject and is the image data to be processed by the imaging device (target image data).
  • The target image data is not limited to captured image data.
  • For example, the target image data may be CG (computer graphics) image data.
  • The display image generation unit 103 also composites image data representing a predetermined graphic image with the YCbCr image data so that the graphic image is superimposed on the image represented by the YCbCr image data.
  • The predetermined graphic image is, for example, an image that presents shooting-assist information as figures or characters.
  • Furthermore, when the aspect ratio of the YCbCr image data differs from that of the display surface, the display image generation unit 103 appends predetermined image data to the YCbCr image data so that the aspect ratio of the display image data matches the aspect ratio of the display surface. Through these processes, the display image data is generated.
  • FIG. 3A shows an example of display image data to which predetermined image data has been added.
  • In FIG. 3A, the additional image (the image represented by the predetermined image data) is a black band image (a band-shaped black image), and black band images are added above and below the target image (the image represented by the YCbCr image data); a minimal sketch of such padding follows below.
  • The state shown in FIG. 3A is called "letterbox" or the like.
  • Black band images may instead be added to the left and right of the target image; such a state is called "pillarbox" or the like.
  • The additional image need not be a black band image, nor an image added to adjust the aspect ratio of the display image data.
  • The shape and color of the additional image are not particularly limited.
  • The additional image may be an image on which a picture is drawn.
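  • For illustration, here is a minimal sketch (not from the patent; the names and the padding luma value are hypothetical) of letterbox padding that records the target-area rectangle at generation time, so that later stages can tell the bands and the target image apart:

```python
import numpy as np

def letterbox(target, out_h, out_w, pad_luma=16):
    """Pad a YCbCr target image with black bands to out_h x out_w.

    Returns the padded display frame together with the rectangle
    (top, left, height, width) of the target image area, so that
    downstream stages can exclude the added bands.
    target: (H, W, 3) uint8 YCbCr image already scaled to fit.
    pad_luma: luma code value of the bands (16 = video black; an
    assumption, the patent only says the bands are black).
    """
    h, w, _ = target.shape
    assert h <= out_h and w <= out_w
    frame = np.zeros((out_h, out_w, 3), dtype=np.uint8)
    frame[..., 0] = pad_luma   # Y plane of the bands
    frame[..., 1:] = 128       # neutral chroma for 8-bit Cb/Cr
    top, left = (out_h - h) // 2, (out_w - w) // 2
    frame[top:top + h, left:left + w] = target
    return frame, (top, left, h, w)
```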
  • The feature amount acquisition unit 104 acquires feature amounts from the display image data generated by the display image generation unit 103 and outputs them to the additional information generation unit 105.
  • The feature amounts are not particularly limited; in the present embodiment, MaxCLL (Maximum Content Light Level), which indicates the maximum luminance value of a scene for each scene, and MaxFALL (Maximum Frame Average Light Level), which indicates the maximum of the average luminance values of the frames for each scene, are acquired as feature amounts.
  • The feature amounts (the MaxCLL and MaxFALL values) may therefore change dynamically from scene to scene. MaxCLL and MaxFALL can also treat one frame as one scene: MaxCLL then indicates the maximum luminance value of each frame, and MaxFALL indicates the average luminance value of each frame.
  • In the present embodiment, MaxCLL indicating the maximum luminance value of each frame and MaxFALL indicating the average luminance value of each frame are acquired as feature amounts.
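  • As a concrete reading of these definitions, the following sketch (illustrative only; it assumes NumPy luma planes) computes the two quantities, first treating each frame as one scene and then aggregating over a multi-frame scene:

```python
def frame_light_levels(luma):
    """MaxCLL/MaxFALL of a single frame treated as one scene.

    luma: 2-D NumPy array of per-pixel luminance values.
    Returns (max_cll, max_fall): the frame's maximum luminance
    value and the frame's average luminance value.
    """
    return float(luma.max()), float(luma.mean())

def scene_light_levels(frames):
    """Scene-level MaxCLL/MaxFALL over an iterable of luma planes."""
    per_frame = [frame_light_levels(f) for f in frames]
    max_cll = max(m for m, _ in per_frame)   # brightest pixel in the scene
    max_fall = max(a for _, a in per_frame)  # largest frame-average level
    return max_cll, max_fall
```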
  • In the conventional technique, the average luminance value (MaxFALL) of the display image data shown in FIG. 3A is obtained over the entire image area of the display image data (the whole image area consisting of the image area of the target image and the image area of the black band image).
  • The average luminance value obtained this way differs from the average luminance value of the target image, and thus from the value intended by the photographer (the user of the image processing apparatus).
  • The broken line in FIG. 3B indicates the MaxFALL obtained from the display image data of FIG. 3A by the conventional technique.
  • In the present embodiment, therefore, the feature amount acquisition unit 104 acquires the average luminance value of the image area of the target image as the average luminance value (MaxFALL) of the display image data, without considering the image area of the black band image.
  • This makes it possible to obtain a MaxFALL that indicates the average luminance value intended by the photographer (the user of the image processing apparatus).
  • The solid line in FIG. 3B indicates the MaxFALL obtained from the display image data of FIG. 3A in the present embodiment.
  • Because of the additional image, the MaxFALL of the conventional technique (broken line) indicates an average luminance value lower than the MaxFALL of the present embodiment (solid line). In the conventional technique, a display luminance lower than the one the photographer intended is therefore realized based on MaxFALL (broken line).
  • In the present embodiment, a MaxFALL indicating the average luminance value intended by the photographer can be obtained, so the display luminance intended by the photographer can be realized based on MaxFALL (solid line).
  • Since the display image data is generated by the imaging device itself, the imaging device can individually identify the image area of the black band image and the image area of the target image.
  • In the present embodiment the additional image is black, so it does not affect the maximum luminance value (MaxCLL) of the display image data; the feature amount acquisition unit 104 may therefore acquire, as the maximum luminance value (MaxCLL) of the display image data, either the maximum luminance value of the entire image area or the maximum luminance value of the image area of the target image.
  • FIG. 3C shows the MaxFALL and MaxCLL obtained from the display image data of FIG. 3A in the present embodiment.
  • When the additional image contains colors other than black (such as white), it may affect the maximum luminance value (MaxCLL) of the display image data.
  • In that case, the feature amount acquisition unit 104 may acquire the maximum luminance value of the image area of the target image as the maximum luminance value (MaxCLL) of the display image data, without considering the image area of the additional image.
  • In this way, a MaxCLL indicating the maximum luminance value intended by the photographer can be obtained more reliably, and the display luminance intended by the photographer can be realized more reliably.
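  • The exclusion described above can be sketched as follows (illustrative only; it assumes a NumPy luma plane and the target-area rectangle recorded when the bands were added, as in the earlier letterbox sketch):

```python
def target_area_light_levels(frame_luma, target_rect):
    """MaxCLL/MaxFALL over the target image only, ignoring added bands.

    frame_luma: 2-D NumPy luma plane of the display image data.
    target_rect: (top, left, height, width) of the target image area,
    which the imaging device knows because it added the bands itself.
    """
    top, left, h, w = target_rect
    roi = frame_luma[top:top + h, left:left + w]
    return float(roi.max()), float(roi.mean())
```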
  • The additional information generation unit 105 generates additional information to be attached to the display image data, based on the feature amounts (MaxCLL and MaxFALL) acquired by the feature amount acquisition unit 104, and outputs the additional information to the IF processing unit 106.
  • The additional information is, for example, information based on the HDMI standard or the like, and includes feature information indicating MaxCLL, MaxFALL, and so on.
  • The additional information further includes area information indicating the image area from which the feature amounts (MaxCLL and MaxFALL) were acquired. In the present embodiment, the area information indicates whether MaxFALL was acquired from the entire image area of the display image data or only from the image area of the target image. The area information need not be included in the additional information.
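  • The additional information can be pictured as a small per-frame record. The sketch below is illustrative only (the field names are placeholders; the actual HDMI InfoFrame encoding is not shown):

```python
def build_additional_info(max_cll, max_fall, measured_target_only):
    """Bundle feature information and area information for the sink.

    measured_target_only: True if the values were taken from the
    target image area only, False if from the whole display image
    area. The field names are placeholders, not an HDMI encoding.
    """
    return {
        "MaxCLL": int(round(max_cll)),
        "MaxFALL": int(round(max_fall)),
        "area_info": "target_area" if measured_target_only else "full_area",
    }
```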
  • The IF processing unit 106 outputs the display image data generated by the display image generation unit 103, with the additional information generated by the additional information generation unit 105 attached, to the display device 107 (external device) connected to the imaging device.
  • Specifically, the display device 107 is connected to the imaging device by a connection method conforming to the HDMI standard or the like.
  • The IF processing unit 106 generates, as a signal containing the display image data and the additional information, a signal in a format conforming to the HDMI standard or the like, and outputs the generated signal to the display device 107.
  • The display image data and the additional information may instead be output separately.
  • The display image data and the additional information may also be recorded in a storage device rather than output to the display device 107.
  • The display device 107 can be used as an electronic viewfinder (EVF) or the like of the imaging device.
  • The display device 107 extracts the display image data and the additional information from the signal received from the imaging device, and displays an image based on the display image data on its display surface at a display luminance based on the additional information (MaxCLL, MaxFALL, and the like).
  • The display luminance is the luminance on the display surface.
  • When the display device 107 is a liquid crystal display device, the display luminance can be adjusted by adjusting the emission luminance of the backlight unit, the transmittance of the liquid crystal panel, and the like.
  • Specifically, the display luminance can be adjusted by adjusting the voltage and current supplied to the backlight unit, the voltage and current supplied to the liquid crystal panel, and the like.
  • In step S100, the imaging sensor unit 101 starts imaging.
  • In step S101, the development processing unit 102 performs the development processing.
  • In step S102, the display image generation unit 103 generates the display image data from the developed image data.
  • The image area of the display image data includes at least the image area of the target image.
  • When the display image data is in a letterbox or pillarbox state, its image area further includes the image area of a black band image.
  • In step S103, the feature amount acquisition unit 104 determines whether the image area of the display image data generated in step S102 includes the image area of a black band image. If the image area of a black band image exists (for example, if the display image data is in a letterbox or pillarbox state), the process proceeds to step S104; otherwise, it proceeds to step S106.
  • In step S104, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the image area of the target image (the target area) from the display image data generated in step S102, without acquiring luminance values in the image area of the black band image.
  • In step S105, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S104.
  • For example, MaxCLL indicates, for each frame, the maximum luminance value of the frame (the maximum of the per-pixel luminance values), and MaxFALL indicates, for each frame, the average luminance value of the frame (the average of the per-pixel luminance values).
  • In step S106, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the entire image area (the entire image area of the display image data) from the display image data generated in step S102.
  • In step S107, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S106.
  • In step S108, the additional information generation unit 105 generates the additional information based on the feature amounts (MaxCLL and MaxFALL) calculated in step S105 or step S107.
  • In step S109, the IF processing unit 106 outputs the display image data generated in step S102 to the display device 107 with the additional information generated in step S108 attached. The display device 107 then starts displaying an image based on the display image data output from the IF processing unit 106 on its display surface, at a display luminance based on the additional information (MaxCLL, MaxFALL, and the like) output from the IF processing unit 106.
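  • Tying steps S102 to S109 together, a per-frame driver might look like the following sketch (illustrative only; it reuses the hypothetical helpers from the earlier sketches and stands in for the actual hardware pipeline):

```python
def process_frame(ycbcr_target, out_h, out_w):
    # S102: generate display image data (letterbox if aspect ratios differ)
    frame, rect = letterbox(ycbcr_target, out_h, out_w)
    luma = frame[..., 0]
    _, _, h, w = rect
    # S103: does the display image contain a black band image area?
    has_bands = (h, w) != (out_h, out_w)
    if has_bands:
        # S104/S105: measure the target image area only
        max_cll, max_fall = target_area_light_levels(luma, rect)
    else:
        # S106/S107: measure the entire image area
        max_cll, max_fall = frame_light_levels(luma)
    # S108: generate the additional information
    info = build_additional_info(max_cll, max_fall, has_bands)
    # S109: hand frame and additional information to the output interface
    return frame, info
```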
  • In the second embodiment, the display unit 109, built into the imaging device, can be used as an electronic viewfinder (EVF) or the like.
  • The display unit 109 displays an image based on the display image data output from the display processing unit 108 on its display surface, at a display luminance based on the control information (MaxCLL, MaxFALL, and the like) output from the display processing unit 108.
  • When the display unit 109 is a combination of a liquid crystal panel and a backlight unit, the display luminance can be adjusted by adjusting the emission luminance of the backlight unit, the transmittance of the liquid crystal panel, and the like.
  • When the display unit 109 is a display panel such as an organic EL panel or a plasma panel, the display luminance can be adjusted by adjusting the emission luminance of the display panel.
  • In step S109 of this embodiment, the display processing unit 108 generates control information based on the additional information generated in step S108 and outputs the display image data (from step S102) and the control information to the display unit 109. The display unit 109 then starts displaying an image based on the display image data output from the display processing unit 108 on its display surface, at a display luminance based on the control information (MaxCLL, MaxFALL, and the like).
  • FIG. 1C is a block diagram illustrating a configuration example of the imaging device according to the present embodiment.
  • The imaging device according to the present embodiment combines the configuration of the first embodiment (FIG. 1A) with that of the second embodiment (FIG. 1B).
  • The display image generation unit 103 outputs the display image data to the feature amount acquisition unit 104, the IF processing unit 106, and the display processing unit 108.
  • The additional information generation unit 105 outputs the additional information to the IF processing unit 106 and the display processing unit 108.
  • FIG. 2B is a flowchart illustrating an example of the processing flow of the imaging device according to the present embodiment.
  • In step S200, the imaging sensor unit 101 starts imaging.
  • In step S202, the development processing unit 102 performs the development processing.
  • As the developed image data, image data common to the display device 107 and the display unit 109 may or may not be obtained.
  • For example, the development processing for the display device 107 and the development processing for the display unit 109 may be performed separately, so that image data for the display device 107 and image data for the display unit 109 are obtained separately as the developed image data.
  • In step S203, the feature amount acquisition unit 104 determines whether the image area of the display image data generated in step S202 includes the image area of a black band image.
  • The determination in step S203 is made individually for the display image data for the display device 107 and for the display image data for the display unit 109.
  • Steps S204 and S205 are performed on display image data that has a black band image area, and steps S206 and S207 are performed on display image data that has no black band image area.
  • For example, the processes of steps S204 and S205 are performed on the display image data for the display device 107, while, when the display image data for the display unit 109 is the image data of FIG. 3D, the processes of steps S206 and S207 are performed on the display image data for the display unit 109.
  • In step S204, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the entire image area (the whole image area consisting of the image area of the target image and the image area of the black band image) from the display image data generated in step S202.
  • In step S205, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S204. Specifically, as in the first and second embodiments, the feature amount acquisition unit 104 acquires the feature amounts of the image area of the target image using the luminance value of each pixel in the image area of the target image, and additionally acquires the feature amounts of the entire image area using the luminance value of each pixel in the entire image area.
  • In step S206, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the entire image area (the entire image area of the display image data) from the display image data generated in step S202.
  • In step S207, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S206. Specifically, as in the first and second embodiments, the feature amount acquisition unit 104 acquires the feature amounts of the entire image area using the luminance value of each pixel in the entire image area.
  • In step S208, the additional information generation unit 105 generates the additional information based on the feature amounts (MaxCLL and MaxFALL) calculated in step S205 or step S207.
  • Since the processes of steps S204 and S205 are performed on the display image data for the display device 107, the additional information for the display device 107 includes both the feature amounts of the image area of the target image and the feature amounts of the entire image area.
  • In that additional information, area information indicating the image area of the target image is associated with the feature amounts of the image area of the target image.
  • Steps S206 and S207 are performed on the display image data for the display unit 109; the additional information for the display unit 109 therefore includes information indicating that the feature amounts are not those of an image containing a black band image.
  • In step S209, as in the first embodiment, the IF processing unit 106 and the display device 107 display an image based on the display image data generated in step S202 on the display surface of the display device 107, at a display luminance based on the additional information generated in step S208.
  • Similarly, as in the second embodiment, the display processing unit 108 and the display unit 109 display an image based on the display image data generated in step S202 on the display surface of the display unit 109, at a display luminance based on the additional information generated in step S208.
  • As the feature amounts (MaxCLL and MaxFALL) for the display device 107, both the feature amounts of the image area of the target image and the feature amounts of the entire image area (the image area of the target image plus the image area of the black band image) are obtained. When displaying an image on the display device 107, these two sets of feature amounts can therefore be switched and used. If the feature amounts of the image area of the target image are used, the display luminance intended by the photographer can be realized, as in the first embodiment. If the feature amount (MaxFALL) of the entire image area is used, the display luminance is lowered by the black band image, so the power consumption of the display device 107 can be reduced; a sketch of this switch follows.
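  • The switch can be sketched as follows (illustrative only; it assumes both measurements were produced in steps S204 and S205):

```python
def pick_max_fall(target_fall, full_fall, prefer_power_saving):
    """Choose which MaxFALL the display device acts on.

    target_fall: MaxFALL of the target image area (photographer's intent).
    full_fall: MaxFALL of the whole area including the black bands; it
    is lower, so acting on it lowers display luminance and saves power.
    """
    return full_fall if prefer_power_saving else target_fall
```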
  • A pair of the feature amount acquisition unit 104 and the additional information generation unit 105 may also be provided separately for the display device 107 and for the display unit 109.
  • In the present embodiment, the additional image (the image represented by the predetermined image data) is a character image (for example, a time code image representing the shooting time or the playback time), and the character image is added at the lower right of the target image (the image represented by the YCbCr image data).
  • The character image may be surrounded by a frame as shown in FIG. 3E, or may consist of characters only as shown in FIG. 3F.
  • The additional image of FIG. 3G is a frame image and indicates, for example, the area of the display image data that is actually recorded.
  • The shape and color of the additional image (character image or frame image) and its superimposition position in the display image data are not particularly limited.
  • The additional image may be an image on which characters and patterns are drawn.
  • The feature amount acquisition unit 104 acquires, without considering the image area of the character image, the average luminance value and the maximum luminance value of the image area of the target image as the average luminance value (MaxFALL) and the maximum luminance value (MaxCLL) of the display image data.
  • This makes it possible to obtain a MaxFALL and a MaxCLL indicating the average and maximum luminance values intended by the photographer (the user of the image processing apparatus), and thus to realize the display luminance intended by the photographer.
  • This applies whether the character image is black (has a low luminance value) or white (has a high luminance value).
  • As in the black band case, the imaging device can individually identify the image area of the character image and the image area of the target image. Since the determination can be made not only from area information but also from the pixel values and gradation values of the image area of the character image, the image area of the additional image can also be identified by a unique pixel value or gradation value designated by the photographer. If the target image contains the pixel value or gradation value designated for the additional image, the value in the target image needs to be changed to a value that is not designated (for example, the designated value plus or minus 1) so that the designated pixel value or gradation value remains unique; a sketch of this key-value identification follows.
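  • The identification by a unique pixel value can be sketched as follows (illustrative only; the reserved key value and the names are hypothetical). Target pixels that would collide with the key are nudged by one code value so the key stays unique to the overlay:

```python
import numpy as np

KEY = 255  # luma code value reserved for overlay pixels (an assumption)

def reserve_key(target_luma, key=KEY):
    """Nudge target pixels that equal the key so the key stays unique."""
    out = target_luma.copy()
    out[out == key] = key - 1   # e.g. designated value minus 1
    return out

def overlay_excluded_light_levels(frame_luma, key=KEY):
    """MaxCLL/MaxFALL over non-overlay pixels, found by the key value.

    Assumes at least one non-overlay pixel exists in the frame.
    """
    vals = frame_luma[frame_luma != key]
    return float(vals.max()), float(vals.mean())
```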
  • FIG. 2C is a flowchart illustrating an example of the processing flow of the imaging device according to the present embodiment.
  • In step S300, the imaging sensor unit 101 starts imaging.
  • In step S301, the development processing unit 102 performs the development processing.
  • In step S302, the display image generation unit 103 generates the display image data from the developed image data.
  • The image area of the display image data includes at least the image area of the target image.
  • When a character image is superimposed, the image area of the display image data further includes the image area of the character image.
  • Here, an example in which a character image is superimposed is described, but the superimposed image may also be, for example, the frame image shown in FIG. 3G.
  • In step S303, the feature amount acquisition unit 104 determines whether the image area of the display image data generated in step S302 includes the image area of the character image. If the image area of the character image exists, the process proceeds to step S304; otherwise, it proceeds to step S306.
  • In steps S304 and S305, as in steps S104 and S105 of the first embodiment, the feature amount acquisition unit 104 acquires the luminance value of each pixel in the image area of the target image, excluding the image area of the character image, and calculates the feature amounts (MaxCLL and MaxFALL) from those luminance values.
  • In step S306, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the entire image area (the entire image area of the display image data) from the display image data generated in step S302.
  • In step S307, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S306.
  • In step S308, the additional information generation unit 105 generates the additional information based on the feature amounts (MaxCLL and MaxFALL) calculated in step S305 or step S307.
  • In step S309, the IF processing unit 106 outputs the display image data generated in step S302 to the display device 107 with the additional information generated in step S308 attached. The display device 107 then starts displaying an image based on the display image data output from the IF processing unit 106 on its display surface, at a display luminance based on the additional information (MaxCLL, MaxFALL, and the like) output from the IF processing unit 106.
  • As described above, in each embodiment, the imaging device (image processing apparatus) acquires feature amounts that match the photographer's intention and generates feature information that matches that intention.
  • This makes it possible to more reliably realize the display luminance intended by the photographer. For example, even if the display device does not itself have a function for acquiring suitable feature amounts, a display luminance matching the photographer's intention can be realized based on the feature information generated by the imaging device.
  • The configuration of the imaging device described in the present embodiment for the case where there is an image area of a character image is only an example; a display luminance matching the photographer's intention can also be realized with the configurations of the second and third embodiments.
  • Each block of the first to fourth embodiments may or may not be implemented as individual hardware.
  • The functions of two or more blocks may be realized by common hardware.
  • Each of several functions of one block may be realized by individual hardware.
  • Two or more functions of one block may be realized by common hardware.
  • Each block may or may not be realized by hardware at all.
  • For example, the device may have a processor and a memory storing a control program, and the functions of at least some of its blocks may be realized by the processor reading the control program from the memory and executing it.
  • The first to fourth embodiments are merely examples; configurations obtained by appropriately modifying or changing the configurations of the first to fourth embodiments within the scope of the gist of the present invention are also included in the present invention, as are configurations obtained by appropriately combining the configurations of the first to fourth embodiments.
  • The present invention can also be realized by processing in which a program that implements one or more functions of the above-described embodiments is supplied to a system or apparatus via a network or a storage medium, and one or more processors in a computer of that system or apparatus read and execute the program. It can likewise be realized by a circuit (for example, an ASIC) that implements one or more of those functions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing device according to the present invention has a generation means for generating output image data on the basis of target image data, an acquisition means for acquiring feature values from the output image data, and an output means for outputting the output image data and feature information based on the feature values, wherein when the image area of the output image data includes a first area, which is an image area of the target image data, and a second area, which is an image area of prescribed image data, the acquisition means acquires the feature values of the first area.

Description

Image processing apparatus and image processing method

The present invention relates to an image processing device and an image processing method.
An imaging device such as a digital camera or a digital video camera can shoot (capture and record images) while displaying the captured image on an EVF (electronic viewfinder). For example, a display panel built into the imaging device, or a display device (external device) connected to the imaging device, is used as the EVF, and the photographer views the captured image displayed on the EVF to check various feature amounts of the captured image.
The feature amounts that the photographer wants to check include the luminance value (luminance level) of the captured image. In recent years, shooting and display in HDR (High Dynamic Range), which has a relatively wide dynamic range (luminance range), have come into full use, and standardization and commercialization related to HDR are advancing. For example, standards such as HDR10+ define additional information such as MaxCLL (Maximum Content Light Level), which indicates the maximum luminance value of a scene for each scene, and MaxFALL (Maximum Frame Average Light Level), which indicates the maximum of the average luminance values of the frames for each scene. In this additional information, the values (MaxCLL and MaxFALL) may change dynamically from scene to scene. MaxCLL and MaxFALL can also treat one frame as one scene; that is, MaxCLL can indicate the maximum luminance value of each frame, and MaxFALL can indicate the average luminance value of each frame.
This additional information can be transmitted from one device to another by communication conforming to the HDMI standard or the like, for example from an imaging device to a display device. The display device can use the additional information as a luminance evaluation value for display and easily adjust the display luminance (the luminance on the display surface). However, a predetermined image may be added to the edges of the captured image, and additional information that does not match the photographer's intention may then be generated. Specifically, such additional information may be generated from an image in which black band images (band-shaped black images) have been added above and below, or to the left and right of, the captured image. The state in which black band images are added above and below an image is called "letterbox", and the state in which they are added to the left and right is called "pillarbox".
Patent Literature 1 discloses a display device that excludes the area of a predetermined image from the area from which a feature amount is acquired, acquires the feature amount from the image, and controls the emission luminance of a backlight light source based on the acquired feature amount.
JP 2007-140483 A
However, with the technique disclosed in Patent Literature 1, the display device itself must have a function for acquiring the feature amount of an image; when additional information (MaxCLL or MaxFALL) input to the display device is used instead, a display luminance that does not match the photographer's intention is realized.
An object of the present invention is to provide a technique that makes it possible to more reliably realize the display luminance intended by the photographer or the like.
A first aspect of the present invention is an image processing apparatus comprising: generating means for generating output image data based on target image data; acquiring means for acquiring a feature amount from the output image data; and output means for outputting the output image data and feature information based on the feature amount, wherein, when an image area of the output image data includes a first area that is an image area of the target image data and a second area that is an image area of predetermined image data, the acquiring means acquires the feature amount of the first area.
A second aspect of the present invention is an image processing method comprising: a generating step of generating output image data based on target image data; an acquiring step of acquiring a feature amount from the output image data; and an output step of outputting the output image data and feature information based on the feature amount, wherein, when an image area of the output image data includes a first area that is an image area of the target image data and a second area that is an image area of predetermined image data, the acquiring step acquires the feature amount of the first area.
A third aspect of the present invention is a program for causing a computer to function as each unit of the above-described image processing apparatus.
According to the present invention, the display luminance intended by the photographer or the like can be realized more reliably.
FIG. 1 is a block diagram illustrating configuration examples of the imaging devices according to the first to fourth embodiments. FIG. 2 is a flowchart illustrating examples of processing flows of the imaging devices according to the first to fourth embodiments. FIG. 3 is a diagram illustrating examples of display image data and the like according to the first to fourth embodiments.
<Embodiment 1>
Hereinafter, Embodiment 1 of the present invention will be described. An example in which the image processing apparatus according to the present embodiment is an imaging device is described here, but the image processing apparatus may instead be a personal computer (PC) or the like.
FIG. 1A is a block diagram illustrating a configuration example of the imaging device according to the present embodiment.
The lens group 100 includes at least one lens and guides light from a subject to the imaging sensor unit 101. The lens group 100 is configured so that the amount of light incident on the imaging sensor unit 101 from the lens group 100, the focusing state, and the like can be controlled.
The imaging sensor unit 101 converts the light incident from the lens group 100 into image data and outputs (transmits) the image data to the development processing unit 102. Specifically, the imaging sensor unit 101 includes an imaging element such as a CCD or CMOS sensor and an A/D conversion unit that converts analog signals into digital signals. The imaging element converts the light that enters from the lens group 100 and forms an image on the imaging element into an analog signal (photoelectric conversion), and the A/D conversion unit converts the analog signal obtained by the imaging element into a digital signal (the image data).
Each pixel of the imaging element consists of an R sub-pixel having a red color filter, a G sub-pixel having a green color filter, and a B sub-pixel having a blue color filter. In each pixel of the imaging element, the R, G, and B sub-pixels are arranged in a predetermined pattern; specifically, one R sub-pixel, one B sub-pixel, and two G sub-pixels are arranged in a mosaic. Such an arrangement is called a "Bayer array", and the image data output from the imaging sensor unit 101 (its A/D conversion unit) is accordingly Bayer-array image data (Bayer image data).
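For illustration, the following sketch (not from the patent; it assumes an RGGB tile order, which in practice depends on the sensor) slices the four sample planes out of a Bayer mosaic:

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Split a Bayer mosaic into its four sample planes.

    mosaic: (H, W) array with H and W even; 2x2 tile layout assumed:
        R  G
        G  B
    Returns the R, G1, G2, and B planes, each of shape (H/2, W/2).
    """
    r = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b = mosaic[1::2, 1::2]
    return r, g1, g2, b
```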
The development processing unit 102 performs development processing on the Bayer image data output from the imaging sensor unit 101 and outputs the developed image data to the display image generation unit 103. The development processing includes offset adjustment, which adds an offset value to gradation values (R, G, and B values, etc.), gain adjustment, which multiplies gradation values by a gain value, and gamma conversion, which converts the gradation characteristic. The conversion characteristic of the gamma conversion (gamma value, gamma curve, and so on) is determined in consideration of the characteristics of the lens group 100, the imaging sensor unit 101, and the like. By changing the conversion characteristic of the gamma conversion, image data for broadcasting or image data for theaters (for example, image data reproducing the texture and gradation of movie film) can be generated. The development processing includes processing that converts the Bayer image data (RGB image data in which each pixel consists of one R sub-pixel, one B sub-pixel, and two G sub-pixels) into YCbCr image data. The YCbCr image data is image data in which each pixel value consists of a luminance value (Y value) and color difference values (Cb and Cr values). The development processing also includes correction processing that corrects image distortion caused by distortion of the lens group 100, image stabilization processing that reduces shake of the image (of the subject shown in the image) caused by vibration of the imaging device, and noise reduction processing that reduces image noise.
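The offset, gain, and gamma adjustments described above can be sketched as elementwise operations (illustrative only; the patent gives no concrete values, so the parameters below are placeholders):

```python
import numpy as np

def develop_adjust(rgb, offset=0.0, gain=1.0, gamma=1.0 / 2.2):
    """Offset adjustment, gain adjustment, and gamma conversion.

    rgb: float array of gradation values normalized to [0, 1].
    The gamma exponent is a placeholder; in practice the conversion
    characteristic is chosen considering the lens and the sensor.
    """
    x = np.clip(rgb + offset, 0.0, 1.0)   # offset adjustment
    x = np.clip(x * gain, 0.0, 1.0)       # gain adjustment
    return np.power(x, gamma)             # gamma conversion
```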
The image data output from the development processing unit 102 need not be YCbCr image data. For example, the development processing may include debayer processing, by which the development processing unit 102 converts the Bayer image data into RGB image data in which each pixel consists of one R sub-pixel, one G sub-pixel, and one B sub-pixel, and outputs that. Such RGB image data may also be obtained (generated) by converting YCbCr image data. RGB values (R, G, and B values) can be calculated from YCbCr values (Y, Cb, and Cr values), and YCbCr values can be calculated from RGB values.
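The RGB-to-YCbCr calculation mentioned above can be written as a fixed per-pixel matrix. The patent does not specify the coefficients; the sketch below assumes BT.709 with values normalized to [0, 1]:

```python
import numpy as np

def rgb_to_ycbcr_bt709(rgb):
    """Convert (..., 3) non-linear R'G'B' in [0, 1] to Y'CbCr.

    Y' is the luminance (Y) value; Cb and Cr are the color difference
    values, returned centered on zero in [-0.5, 0.5] (BT.709 assumed).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return np.stack([y, cb, cr], axis=-1)
```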
The image data output from the imaging sensor unit 101 and the development processing unit 102 is captured image data representing a subject, and is the image data to be processed by the imaging device (target image data). The target image data is not limited to captured image data; for example, it may be CG (computer graphics) image data.
The display image generation unit 103 generates display image data (output image data) based on the YCbCr image data output from the development processing unit 102, and outputs the display image data to the feature amount acquisition unit 104 and the IF processing unit 106. The display image data is the image data to be displayed on the display surface. Specifically, the display image generation unit 103 converts the resolution (image size) of the YCbCr image data to the resolution of the display surface and adjusts the data size (bit width) of the gradation values (Y, Cb, and Cr values, etc.) of the YCbCr image data. The display image generation unit 103 may also composite image data representing a predetermined graphic image with the YCbCr image data so that the graphic image is superimposed on the image represented by the YCbCr image data; the predetermined graphic image is, for example, an image that presents shooting-assist information as figures or characters. Furthermore, when the aspect ratio of the YCbCr image data differs from that of the display surface, the display image generation unit 103 appends predetermined image data to the YCbCr image data so that the aspect ratio of the display image data matches the aspect ratio of the display surface. Through these processes, the display image data is generated.
FIG. 3A shows an example of display image data to which predetermined image data has been added. In FIG. 3A, the additional image (the image represented by the predetermined image data) is a black band image (a band-shaped black image), and black band images are added above and below the target image (the image represented by the YCbCr image data). The state shown in FIG. 3A is called "letterbox" or the like. Black band images may instead be added to the left and right of the target image; such a state is called "pillarbox" or the like. The additional image need not be a black band image, nor an image added to adjust the aspect ratio of the display image data. The shape and color of the additional image are not particularly limited, and the additional image may be an image on which a picture is drawn.
The feature amount acquisition unit 104 acquires feature amounts from the display image data generated by the display image generation unit 103 and outputs the feature amounts to the additional information generation unit 105. The feature amounts are not particularly limited; in the present embodiment, MaxCLL (Maximum Content Light Level), which indicates the maximum luminance value of a scene for each scene, and MaxFALL (Maximum Frame Average Light Level), which indicates the maximum of the average luminance values of the frames for each scene, are acquired as feature amounts. The feature amounts (the MaxCLL and MaxFALL values) may therefore change dynamically from scene to scene. MaxCLL and MaxFALL can also treat one frame as one scene; that is, MaxCLL can indicate the maximum luminance value of each frame, and MaxFALL can indicate the average luminance value of each frame. In the present embodiment, MaxCLL indicating the maximum luminance value of each frame and MaxFALL indicating the average luminance value of each frame are acquired as feature amounts.
In the conventional technique, the average luminance value (MaxFALL) of the display image data shown in FIG. 3A is obtained over the entire image area of the display image data (the whole image area consisting of the image area of the target image and the image area of the black band image). The average luminance value obtained this way differs from the average luminance value of the target image, and thus from the value intended by the photographer (the user of the image processing apparatus). The broken line in FIG. 3B indicates the MaxFALL obtained from the display image data of FIG. 3A by the conventional technique.
In the present embodiment, therefore, the feature amount acquisition unit 104 acquires the average luminance value of the image area of the target image as the average luminance value (MaxFALL) of the display image data, without considering the image area of the black band image. This makes it possible to obtain a MaxFALL that indicates the average luminance value intended by the photographer (the user of the image processing apparatus). The solid line in FIG. 3B indicates the MaxFALL obtained from the display image data of FIG. 3A in the present embodiment. Because of the additional image, the MaxFALL of the conventional technique (broken line) indicates an average luminance value lower than the MaxFALL of the present embodiment (solid line); in the conventional technique, a display luminance lower than the one the photographer intended is therefore realized based on MaxFALL (broken line). In the present embodiment, a MaxFALL indicating the average luminance value intended by the photographer can be obtained, so the display luminance intended by the photographer can be realized based on MaxFALL (solid line). Since the display image data is generated by the imaging device itself, the imaging device can individually identify the image area of the black band image and the image area of the target image.
In the present embodiment, the color of the additional image is black, so the additional image does not affect the maximum luminance value (MaxCLL) of the display image data. The feature amount acquisition unit 104 may therefore acquire, as the maximum luminance value (MaxCLL) of the display image data, either the maximum luminance value of the entire image area of the display image data or the maximum luminance value of the image area of the target image. FIG. 3C shows the MaxFALL and MaxCLL obtained from the display image data of FIG. 3A in the present embodiment. When the additional image contains colors other than black (such as white), it may affect the maximum luminance value (MaxCLL) of the display image data; in that case, the feature amount acquisition unit 104 acquires the maximum luminance value of the image area of the target image as the maximum luminance value (MaxCLL) of the display image data, without considering the image area of the additional image. In this way, a MaxCLL indicating the maximum luminance value intended by the photographer can be obtained more reliably, and the display luminance intended by the photographer can be realized more reliably.
The additional information generation unit 105 generates additional information to be attached to the display image data, based on the feature amounts (MaxCLL and MaxFALL) acquired by the feature amount acquisition unit 104, and outputs the additional information to the IF processing unit 106. The additional information is, for example, information based on the HDMI standard or the like, and includes feature information indicating MaxCLL, MaxFALL, and so on. The additional information further includes area information indicating the image area from which the feature amounts (MaxCLL and MaxFALL) were acquired. In the present embodiment, the area information indicates whether MaxFALL was acquired from the entire image area of the display image data or only from the image area of the target image. The area information need not be included in the additional information.
The IF processing unit 106 outputs the display image data generated by the display image generation unit 103, with the additional information generated by the additional information generation unit 105 attached, to the display device 107 (external device) connected to the imaging device. Specifically, the display device 107 is connected to the imaging device by a connection method conforming to the HDMI standard or the like, and the IF processing unit 106 generates, as a signal containing the display image data and the additional information, a signal in a format conforming to the HDMI standard or the like and outputs the generated signal to the display device 107. The display image data and the additional information may instead be output separately, or may be recorded in a storage device rather than output to the display device 107.
The display device 107 can be used as an electronic viewfinder (EVF) of the imaging apparatus. The display device 107 extracts the display image data and the additional information from the signal received from the imaging apparatus, and displays an image based on the display image data on its display surface at a display luminance based on the additional information (MaxCLL, MaxFALL, and the like). The display luminance is the luminance on the display surface. When the display device 107 is a liquid crystal display device, the display luminance can be adjusted by adjusting the emission luminance of the backlight unit, the transmittance of the liquid crystal panel, and so on; specifically, by adjusting the voltage or current supplied to the backlight unit or to the liquid crystal panel. When the display device 107 is an organic EL display device or a plasma display device, the display luminance can be adjusted by adjusting the emission luminance of the display panel (organic EL panel or plasma panel); specifically, by adjusting the voltage or current supplied to the display panel.
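As one conceivable use of the feature information on the display side, a liquid crystal display device might derive its backlight drive level from MaxCLL. The sketch below assumes a panel peak of 1000 cd/m^2 and a simple linear mapping; both are illustrative assumptions, not something the embodiment prescribes:

```python
def backlight_level(max_cll: int, panel_peak_nits: int = 1000) -> float:
    """Return a backlight duty ratio in [0, 1] for a frame's MaxCLL.

    The backlight need not be driven harder than the brightest pixel
    in the frame requires, which also saves power on dark content.
    """
    return min(max_cll, panel_peak_nits) / panel_peak_nits
```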
FIG. 2(A) is a flowchart illustrating an example of the processing flow of the imaging apparatus according to this embodiment.
In step S100, the imaging sensor unit 101 starts imaging. In step S101, the development processing unit 102 performs development processing. In step S102, the display image generation unit 103 generates display image data from the developed image data. The image area of the display image data includes at least the image area of the target image. When the display image data is in a letterbox or pillarbox state, the image area of the display image data further includes the image area of the black band images.
In step S103, the feature amount acquisition unit 104 determines whether the image area of the display image data generated in step S102 includes the image area of a black band image. If a black band image area exists (for example, when the display image data is in a letterbox or pillarbox state), the process proceeds to step S104; otherwise, the process proceeds to step S106.
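The determination in step S103 can be realized, for example, by checking whether border rows and columns of the display image consist entirely of black pixels. A minimal sketch follows, assuming a luma plane in which black bands are exactly code value 0 (a real pipeline may need a small threshold instead):

```python
import numpy as np

def find_target_area(luma: np.ndarray) -> tuple[slice, slice]:
    """Return (row_slice, col_slice) of the non-black target area.

    Rows and columns whose pixels are all zero are treated as black
    bands (letterbox/pillarbox); the rest is the target image area.
    """
    rows = np.where(luma.max(axis=1) > 0)[0]
    cols = np.where(luma.max(axis=0) > 0)[0]
    if rows.size == 0 or cols.size == 0:  # entirely black frame
        return slice(0, luma.shape[0]), slice(0, luma.shape[1])
    return slice(rows[0], rows[-1] + 1), slice(cols[0], cols[-1] + 1)
```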
In step S104, the feature amount acquisition unit 104 acquires (extracts) from the display image data generated in step S102 the luminance value of each pixel in the image area of the target image (the target area), without acquiring luminance values in the image area of the black band images. In step S105, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S104. For example, MaxCLL indicates, for each frame, the maximum luminance value of the frame (the maximum of the per-pixel luminance values), and MaxFALL indicates, for each frame, the average luminance value of the frame (the average of the per-pixel luminance values).
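Steps S104 and S105 thus amount to taking the maximum and the mean of the per-pixel luminance over the target area only. A sketch of the per-frame calculation, reusing the hypothetical `find_target_area()` above and assuming the luminance values are already in the units carried by the feature information:

```python
import numpy as np

def frame_features(luma: np.ndarray, target: tuple[slice, slice]) -> tuple[int, float]:
    """Compute (MaxCLL, MaxFALL) for one frame from its target area only.

    `target` is the (row_slice, col_slice) of the target image area;
    black bands outside it contribute to neither value.
    """
    area = luma[target]
    return int(area.max()), float(area.mean())
```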
In step S106, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the full image area (the entire image area of the display image data) from the display image data generated in step S102. In step S107, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S106.
In step S108, the additional information generation unit 105 generates additional information based on the feature amounts (MaxCLL and MaxFALL) calculated in step S105 or step S107.
In step S109, the IF processing unit 106 adds the additional information generated in step S108 to the display image data generated in step S102 and outputs the result to the display device 107. The display device 107 then displays an image based on the display image data output from the IF processing unit 106 on its display surface at a display luminance based on the additional information (MaxCLL, MaxFALL, and the like) output from the IF processing unit 106 (start of display).
As described above, according to this embodiment, an imaging apparatus (image processing apparatus) separate from the display device acquires feature amounts that match the intention of the photographer (the user of the image processing apparatus) and generates feature information that matches that intention. This makes it possible to realize the display luminance intended by the photographer more reliably. For example, even if the display device does not have a function of acquiring suitable feature amounts, it can realize a display luminance that matches the photographer's intention based on the feature information generated by the imaging apparatus.
<Example 2>
Hereinafter, a second embodiment of the present invention will be described. In the following, points that differ from the first embodiment (configuration, processing, and so on) are described in detail, and description of points that are the same as in the first embodiment is omitted.
FIG. 1(B) is a block diagram illustrating a configuration example of the imaging apparatus according to this embodiment. The imaging apparatus according to this embodiment is not connected to a display device (external device), and includes a display processing unit 108 in place of the IF processing unit 106 of the first embodiment (FIG. 1(A)). The imaging apparatus according to this embodiment further includes a display unit 109.
The display processing unit 108 generates control information for controlling the display luminance and the like based on the additional information (MaxCLL, MaxFALL, and the like) generated by the additional information generation unit 105. The control information can also be described as "information based on the feature amounts acquired by the feature amount acquisition unit 104". The display processing unit 108 then outputs (transmits) the display image data generated by the display image generation unit 103 and the control information generated based on the additional information to the display unit 109. For example, the display unit 109 is connected to the display processing unit 108 by a connection method compliant with the MIPI standard or the like, and the display processing unit 108 generates and outputs a signal in a format compliant with the MIPI standard or the like as the control information signal.
The display unit 109 can be used as an electronic viewfinder (EVF) or the like. The display unit 109 displays an image based on the display image data output from the display processing unit 108 on its display surface at a display luminance based on the control information (MaxCLL, MaxFALL, and the like) output from the display processing unit 108. When the display unit 109 is a combination of a liquid crystal panel and a backlight unit, the display luminance can be adjusted by adjusting the emission luminance of the backlight unit, the transmittance of the liquid crystal panel, and so on. When the display unit 109 is a display panel such as an organic EL panel or a plasma panel, the display luminance can be adjusted by adjusting the emission luminance of the display panel.
The processing flow of the imaging apparatus according to this embodiment is the same as in the first embodiment (FIG. 2(A)), except that in step S109 the display processing unit 108 generates control information based on the additional information generated in step S108 and outputs the display image data (from step S102) and the control information to the display unit 109. The display unit 109 then displays an image based on the display image data output from the display processing unit 108 on its display surface at a display luminance based on the control information (MaxCLL, MaxFALL, and the like) output from the display processing unit 108 (start of display).
As described above, according to this embodiment, the imaging apparatus (image processing apparatus) alone can more reliably realize the display luminance intended by the photographer (the user of the image processing apparatus).
<Example 3>
Hereinafter, a third embodiment of the present invention will be described. In the following, points that differ from the first and second embodiments (configuration, processing, and so on) are described in detail, and description of points that are the same as in the first and second embodiments is omitted.
FIG. 1(C) is a block diagram illustrating a configuration example of the imaging apparatus according to this embodiment. The imaging apparatus according to this embodiment combines the configuration of the first embodiment (FIG. 1(A)) with that of the second embodiment (FIG. 1(B)). Specifically, the display image generation unit 103 outputs the display image data to the feature amount acquisition unit 104, the IF processing unit 106, and the display processing unit 108, and the additional information generation unit 105 outputs the additional information to the IF processing unit 106 and the display processing unit 108.
FIG. 2(B) is a flowchart illustrating an example of the processing flow of the imaging apparatus according to this embodiment.
In step S200, the imaging sensor unit 101 starts imaging. In step S201, the development processing unit 102 performs development processing. A single type of development processing may or may not yield image data common to the display device 107 and the display unit 109 as the developed image data. Development processing for the display device 107 and development processing for the display unit 109 may be performed separately, so that image data for the display device 107 and image data for the display unit 109 are obtained individually as the developed image data.
In step S202, the display image generation unit 103 generates display image data from the developed image data. In this embodiment, the display image generation unit 103 generates display image data for the display device 107 and display image data for the display unit 109 individually. A black band image may therefore exist in only one of the two sets of display image data, or in both. In this embodiment, it is assumed that the display image data for the display device 107 is image data as shown in FIG. 3(A) (image data containing black band images; letterbox image data), and that the display image data for the display unit 109 is image data as shown in FIG. 3(D) (image data containing no black band image; image data containing only the target image). It is also possible for no black band image to exist in the display image data for the display device 107 while a black band image exists in the display image data for the display unit 109. Note that display image data common to the display device 107 and the display unit 109 may instead be generated.
In step S203, the feature amount acquisition unit 104 determines whether the image area of the display image data generated in step S202 includes the image area of a black band image. The determination in step S203 is made individually for the display image data for the display device 107 and the display image data for the display unit 109. For display image data in which a black band image area exists, the processing of steps S204 and S205 is performed; for display image data in which no black band image area exists, the processing of steps S206 and S207 is performed. In this embodiment, since the display image data for the display device 107 is the image data of FIG. 3(A), the processing of steps S204 and S205 is performed on it, and since the display image data for the display unit 109 is the image data of FIG. 3(D), the processing of steps S206 and S207 is performed on it.
In step S204, the feature amount acquisition unit 104 acquires (extracts) from the display image data generated in step S202 the luminance value of each pixel in the full image area (the entire image area consisting of the image area of the target image and the image area of the black band images). In step S205, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S204. Specifically, as in the first and second embodiments, the feature amount acquisition unit 104 acquires the feature amounts of the image area of the target image using the luminance value of each pixel in that area, and further acquires the feature amounts of the full image area using the luminance value of each pixel in the full image area.
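Steps S204 and S205 therefore yield two sets of values per frame. A sketch of computing both in one pass, under the same assumptions as the earlier sketches:

```python
import numpy as np

def dual_frame_features(luma: np.ndarray, target: tuple[slice, slice]) -> dict:
    """Return target-area and full-area (MaxCLL, MaxFALL) for one frame.

    The target-area values preserve the photographer's intent; the
    full-area values describe the frame as displayed, black bands included.
    """
    t = luma[target]
    return {
        "target": (int(t.max()), float(t.mean())),
        "full": (int(luma.max()), float(luma.mean())),
    }
```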
In step S206, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the full image area (the entire image area of the display image data) from the display image data generated in step S202. In step S207, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S206. Specifically, as in the first and second embodiments, the feature amount acquisition unit 104 acquires the feature amounts of the full image area using the luminance value of each pixel in the full image area.
In step S208, the additional information generation unit 105 generates additional information based on the feature amounts (MaxCLL and MaxFALL) calculated in step S205 or step S207. As described above, in this embodiment the processing of steps S204 and S205 is performed on the display image data for the display device 107. In the additional information for the display device 107, therefore, area information indicating the image area of the target image is associated with the feature amounts of that area, and area information indicating the full image area is associated with the feature amounts of the full image area. For the display image data for the display unit 109, the processing of steps S206 and S207 is performed, so the additional information for the display unit 109 includes information indicating, for example, that the feature amounts are not those of an image containing black band images.
In step S209, the IF processing unit 106 and the display device 107, as in the first embodiment, display an image based on the display image data generated in step S202 on the display surface of the display device 107 at a display luminance based on the additional information generated in step S208. Likewise, the display processing unit 108 and the display unit 109, as in the second embodiment, display an image based on the display image data generated in step S202 on the display surface of the display unit 109 at a display luminance based on the additional information generated in step S208.
In this embodiment, as the feature amounts (MaxCLL and MaxFALL) for the display device 107, the feature amounts of the image area of the target image and the feature amounts of the full image area (the entire image area consisting of the image area of the target image and the image area of the black band images) are both acquired. Therefore, when displaying an image on the display device 107, these two sets of feature amounts can be switched between. Using the feature amounts of the image area of the target image realizes the display luminance intended by the photographer, as in the first embodiment. Using the feature amount of the full image area (MaxFALL) lowers the display luminance because of the black band images, so the power consumption of the display device 107 can be reduced.
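The switch between the two sets of feature amounts could be as simple as the following sketch, where `power_save` stands for a hypothetical user or device setting:

```python
def select_features(features: dict, power_save: bool) -> tuple[int, float]:
    """Pick which (MaxCLL, MaxFALL) pair drives the display luminance.

    Full-area MaxFALL is lower when black bands are present, so the
    power-save choice dims the display; the target-area values keep
    the luminance the photographer intended.
    """
    return features["full"] if power_save else features["target"]
```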
As described above, according to this embodiment, both the processing of the first embodiment and the processing of the second embodiment are performed, so both of their effects can be obtained. Furthermore, since the additional information for the display device 107 and the additional information for the display unit 109 are generated by the same apparatus, the appearance of the image displayed on the display device 107 and the appearance of the image displayed on the display unit 109 can be brought close to each other. When displaying an image, the feature amounts of the target image and the feature amounts of the image including the target image and the black band images can be switched between, so the display luminance matching the intention of the photographer (the user of the image processing apparatus) and the display luminance for reduced power consumption can also be suitably switched between.
Note that, as shown in FIG. 1(D), a pair for the display device 107 and a pair for the display unit 109 may be provided separately as pairs of the feature amount acquisition unit 104 and the additional information generation unit 105.
<Example 4>
Hereinafter, a fourth embodiment of the present invention will be described. In the following, points that differ from the first to third embodiments (configuration, processing, and so on) are described in detail, and description of points that are the same as in the first to third embodiments is omitted.
The block diagram illustrating a configuration example of the imaging apparatus according to this embodiment is the same as in the first embodiment (FIG. 1(A)).
FIGS. 3(E), 3(F), and 3(G) show examples of display image data to which predetermined image data generated by the display image generation unit 103 has been added. In FIG. 3(E), the additional image (the image represented by the predetermined image data) is a character image (a time code image representing the shooting time or the playback time), and the character image is added at the lower right of the target image (the image represented by the YCbCr image data). The characters of a character image may be surrounded by a frame, as shown in FIG. 3(E), or may consist of characters only, as shown in FIG. 3(F). The additional image in FIG. 3(G) is a frame line image, which indicates, for example, the area actually recorded within the display image data. The shape and color of the additional image (character image or frame line image) and its superimposition position within the display image data are not particularly limited. The additional image may also be an image in which characters and graphics are drawn.
In this embodiment, the feature amount acquisition unit 104 acquires the average luminance value and the maximum luminance value of the image area of the target image as the average luminance value (MaxFALL) and the maximum luminance value (MaxCLL) of the display image data, without considering the image area of the character image. This makes it possible to obtain a MaxFALL indicating the average luminance value and a MaxCLL indicating the maximum luminance value intended by the photographer (the user of the image processing apparatus), so the display luminance intended by the photographer can be realized. If the average and maximum luminance values were acquired over the entire image area of the display image data, including the image area of the character image, a display luminance contrary to the photographer's intention would result when, for example, the character image is black (low luminance) or white (high luminance). Since the display image data is generated by the imaging apparatus, the imaging apparatus can distinguish the image area of the character image from the image area of the target image. This distinction can be made not only from area information but also from the pixel values or gradation values of the image area of the character image, so the image area of the additional image can also be defined as the image area containing a unique pixel value or gradation value designated by the photographer. If a pixel value or gradation value designated by the photographer for the image area of the additional image also occurs in the target image, the designated value must be changed to a pixel value or gradation value not contained in the target image (for example, the designated value ±1) so that it becomes unique.
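The adjustment of a designated value that collides with the target image can be sketched as follows; the ±1 step mirrors the example in the text, and 8-bit code values are an assumption made for illustration:

```python
import numpy as np

def unique_marker_value(target_luma: np.ndarray, designated: int) -> int:
    """Return a code value not present in the target image.

    Starts from the photographer-designated value and steps outward
    by +/-1 within the 8-bit range until an unused value is found.
    """
    used = set(np.unique(target_luma).tolist())
    if designated not in used:
        return designated
    for step in range(1, 256):
        for candidate in (designated - step, designated + step):
            if 0 <= candidate <= 255 and candidate not in used:
                return candidate
    raise ValueError("target image uses all 256 code values")
```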
FIG. 2(C) is a flowchart illustrating an example of the processing flow of the imaging apparatus according to this embodiment.
In step S300, the imaging sensor unit 101 starts imaging. In step S301, the development processing unit 102 performs development processing. In step S302, the display image generation unit 103 generates display image data from the developed image data. The image area of the display image data includes at least the image area of the target image. When a character image is superimposed in generating the display image data, the image area of the display image data further includes the image area of the character image. Although this embodiment uses an image on which a character image is superimposed as an example, the superimposed image may instead be the frame line image shown in FIG. 3(G), a figure or picture indicating shooting information or playback information, or other characters.
In step S303, the feature amount acquisition unit 104 determines whether the image area of the display image data generated in step S302 includes the image area of a character image. If a character image area exists, the process proceeds to step S304; otherwise, the process proceeds to step S306.
In step S304, the feature amount acquisition unit 104 acquires (extracts) from the display image data generated in step S302 the luminance value of each pixel in the image area of the target image (the target area), without acquiring luminance values in the image area of the character image. In step S305, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S304. For example, MaxCLL indicates, for each frame, the maximum luminance value of the frame (the maximum of the per-pixel luminance values), and MaxFALL indicates, for each frame, the average luminance value of the frame (the average of the per-pixel luminance values).
In step S306, the feature amount acquisition unit 104 acquires (extracts) the luminance value of each pixel in the full image area (the entire image area of the display image data) from the display image data generated in step S302. In step S307, the feature amount acquisition unit 104 calculates the feature amounts (MaxCLL and MaxFALL) using the luminance values acquired in step S306.
In step S308, the additional information generation unit 105 generates additional information based on the feature amounts (MaxCLL and MaxFALL) calculated in step S305 or step S307.
In step S309, the IF processing unit 106 adds the additional information generated in step S308 to the display image data generated in step S302 and outputs the result to the display device 107. The display device 107 then displays an image based on the display image data output from the IF processing unit 106 on its display surface at a display luminance based on the additional information (MaxCLL, MaxFALL, and the like) output from the IF processing unit 106 (start of display).
As described above, according to this embodiment as well, an imaging apparatus (image processing apparatus) separate from the display device acquires feature amounts that match the intention of the photographer (the user of the image processing apparatus) and generates feature information that matches that intention. This makes it possible to realize the display luminance intended by the photographer more reliably. For example, even if the display device does not have a function of acquiring suitable feature amounts, it can realize a display luminance that matches the photographer's intention based on the feature information generated by the imaging apparatus.
The configuration described in this embodiment for the case where a character image area exists is only one example; the configurations of the second and third embodiments can also realize a display luminance matching the photographer's intention.
Note that each block of the first to fourth embodiments (FIGS. 1(A) to 1(D)) may or may not be individual hardware. The functions of two or more blocks may be realized by common hardware. Each of a plurality of functions of one block may be realized by individual hardware. Two or more functions of one block may be realized by common hardware. Also, each block may or may not be realized by hardware. For example, the apparatus may have a processor and a memory storing a control program, and the functions of at least some of the blocks of the apparatus may be realized by the processor reading the control program from the memory and executing it.
Note that the first to fourth embodiments (including the modifications described above) are merely examples, and configurations obtained by appropriately modifying or changing the configurations of the first to fourth embodiments within the scope of the gist of the present invention are also included in the present invention. Configurations obtained by appropriately combining the configurations of the first to fourth embodiments are also included in the present invention.
<Other Examples>
The present invention can also be realized by processing in which a program that realizes one or more functions of the above-described embodiments is supplied to a system or apparatus via a network or a storage medium, and one or more processors in a computer of the system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that realizes one or more functions.
The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Therefore, the following claims are appended to make the scope of the present invention public.
This application claims priority based on Japanese Patent Application No. 2018-188954 filed on October 4, 2018 and Japanese Patent Application No. 2019-054487 filed on March 22, 2019, the entire contents of which are incorporated herein by reference.
103: display image generation unit, 104: feature amount acquisition unit, 105: additional information generation unit, 106: IF processing unit, 108: display processing unit

Claims (11)

1.  An image processing apparatus comprising:
    generating means for generating output image data based on target image data;
    acquiring means for acquiring a feature amount from the output image data; and
    output means for outputting the output image data and feature information based on the feature amount,
    wherein, in a case where an image area of the output image data includes a first area that is an image area of the target image data and a second area that is an image area of predetermined image data, the acquiring means acquires the feature amount of the first area.
2.  The image processing apparatus according to claim 1, wherein the feature amount is an average luminance value.
3.  The image processing apparatus according to claim 1, wherein the feature amount is a maximum luminance value.
4.  The image processing apparatus according to any one of claims 1 to 3, wherein the output means outputs the output image data and the feature information to a display device.
5.  The image processing apparatus according to any one of claims 1 to 4, wherein the image processing apparatus is an imaging apparatus, and the target image data is captured image data.
6.  The image processing apparatus according to any one of claims 1 to 5, wherein the predetermined image data is image data added to adjust an aspect ratio of the output image data.
7.  The image processing apparatus according to any one of claims 1 to 5, wherein the predetermined image data is image data added to indicate shooting information or playback information of the output image data.
8.  The image processing apparatus according to any one of claims 1 to 7, wherein, in a case where the image area of the output image data includes the first area that is the image area of the target image data and the second area that is the image area of the predetermined image data, the acquiring means further acquires a feature amount of the entire image area of the output image data.
9.  The image processing apparatus according to any one of claims 1 to 8, wherein the output means further outputs area information indicating the image area from which the feature amount was acquired.
10.  An image processing method comprising:
    a generating step of generating output image data based on target image data;
    an acquiring step of acquiring a feature amount from the output image data; and
    an output step of outputting the output image data and feature information based on the feature amount,
    wherein, in a case where an image area of the output image data includes a first area that is an image area of the target image data and a second area that is an image area of predetermined image data, the feature amount of the first area is acquired in the acquiring step.
11.  A program for causing a computer to function as each means of the image processing apparatus according to any one of claims 1 to 9.
PCT/JP2019/036388 2018-10-04 2019-09-17 Image processing device and image processing method WO2020071108A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/220,080 US20210218887A1 (en) 2018-10-04 2021-04-01 Imaging apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018188954 2018-10-04
JP2018-188954 2018-10-04
JP2019-054487 2019-03-22
JP2019054487A JP2020061726A (en) 2018-10-04 2019-03-22 Image processing device and image processing method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/220,080 Continuation US20210218887A1 (en) 2018-10-04 2021-04-01 Imaging apparatus

Publications (1)

Publication Number Publication Date
WO2020071108A1

Family

ID=70054527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/036388 WO2020071108A1 (en) 2018-10-04 2019-09-17 Image processing device and image processing method

Country Status (1)

Country Link
WO (1) WO2020071108A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000295577A (en) * 1999-04-12 2000-10-20 Olympus Optical Co Ltd Image recorder and electronic camera
JP2007140483A (en) * 2005-10-18 2007-06-07 Sharp Corp Liquid crystal display
JP2012137509A (en) * 2009-04-24 2012-07-19 Panasonic Corp Display device

Legal Events

121 EP: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19869112; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122 EP: PCT application non-entry in European phase (Ref document number: 19869112; Country of ref document: EP; Kind code of ref document: A1)