WO2011161883A1 - Image processing apparatus and image processing program - Google Patents
Image processing apparatus and image processing program
- Publication number
- WO2011161883A1 (PCT/JP2011/003181)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image processing
- gradation conversion
- image data
- processing apparatus
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/73—Colour balance circuits, e.g. white balance circuits or colour temperature control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/196—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/197—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters including determination of the initial value of an encoding parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/68—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
- H04N9/69—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2101/00—Still video cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/82—Camera processing pipelines; Components thereof for controlling camera response irrespective of the scene brightness, e.g. gamma correction
- H04N23/83—Camera processing pipelines; Components thereof for controlling camera response irrespective of the scene brightness, e.g. gamma correction specially adapted for colour signals
Definitions
- the present invention relates to an image processing apparatus and an image processing program.
- image processing such as color interpolation processing is performed on RAW image data output from an imaging device, and the image-processed data is compressed into a general image file format.
- the image-processed data is compressed in a format such as JPEG (Joint Photographic Experts Group) for still images or MPEG (Moving Picture Experts Group) for moving images.
- the compressed data is stored in a storage medium or the like.
- RAW image data is used, for example, when the color tone or contrast of an image is adjusted with high accuracy after shooting.
- the amount of data in RAW image data is larger than that of data subjected to image processing and compression processing. For this reason, a technique has been proposed in which RAW image data when a still image is captured is compressed for each color component, and the compressed data of each color component is stored in a storage medium or the like (for example, see Patent Document 1).
- because RAW image data has a pixel structure that depends on the color arrangement (for example, the Bayer arrangement) of the image sensor, the correlation between adjacent pixels in each frame is low, and the correlation between frames is also low. For this reason, even if RAW image data from moving-image capture is compressed in the MPEG format, the compression efficiency that MPEG is normally capable of cannot be obtained.
- a method for efficiently compressing RAW image data of continuously captured images such as moving images has not been proposed.
- An object of the present invention is to provide an image processing apparatus and an image processing program capable of efficiently compressing RAW image data of continuously captured images.
- the image processing apparatus includes an imaging unit that generates RAW image data of a captured image, and a gradation conversion unit.
- the imaging unit includes an imaging element that converts an image of a subject into an electrical signal. Then, the gradation conversion unit performs gradation conversion corresponding to the imaging condition for each color signal based on the pixel array of the image sensor on the RAW image data of continuously captured images.
- FIG. 1 shows an embodiment of the present invention.
- the image processing apparatus 10 of this embodiment is, for example, a digital camera that can capture a moving image.
- the image processing apparatus 10 is also referred to as a digital camera 10.
- the digital camera 10 includes a photographic lens 20, an image sensor 22, an analog processing unit 24, an A / D conversion unit 26, a control unit 28, an image processing unit 30, a compression unit 32, a buffer unit 34, a memory 36, a storage medium 38, a monitor 40, and an operation unit 42.
- the A / D conversion unit 26, the control unit 28, the image processing unit 30, the compression unit 32, the buffer unit 34, the memory 36, the storage medium 38, and the monitor 40 are connected to the bus BUS.
- broken arrows in the figure show an example of the flow of image data RAWDATA and RAWCOMP.
- the photographing lens 20 forms an image of the subject on the light receiving surface of the image sensor 22.
- the image sensor 22 is, for example, a CCD image sensor or a CMOS image sensor.
- the image sensor 22 converts an image of a subject incident through the photographing lens 20 into an electric signal (hereinafter also referred to as an image signal), and outputs the converted electric signal to the analog processing unit 24.
- the analog processing unit 24 is an analog front-end circuit that performs analog signal processing on the image signal received from the image sensor 22. For example, the analog processing unit 24 performs gain control for adjusting the gain of the image signal, correlated double sampling processing for reducing the noise component of the image signal, and the like, and generates analog image data. Then, the analog processing unit 24 outputs the generated analog image data to the A / D conversion unit 26.
- the A / D conversion unit 26 generates RAW image data RAWDATA by converting the analog image data received from the analog processing unit 24 into digital image data. Then, the A / D conversion unit 26 outputs the RAW image data RAWDATA to the compression unit 32 and the buffer unit 34.
- the imaging element 22, the analog processing unit 24, and the A / D conversion unit 26 function as an imaging unit that generates RAW image data of a captured image.
- in the RAW image data RAWDATA, luminance information of one color is stored per pixel, based on the pixel array (for example, Bayer array) of the image sensor 22.
- a data format in which luminance information of one color is stored in one pixel is also referred to as a RAW format.
- the control unit 28 is, for example, a microprocessor, and controls the operation of the digital camera 10 based on a program stored in the memory 36. For example, the control unit 28 performs autofocus control, exposure control, white balance control, image data recording, and the like.
- the image processing unit 30 performs image processing including at least color interpolation processing on the RAW image data RAWDATA stored in the buffer unit 34 to generate main image data of the captured image.
- the digital camera 10 can store the main image data in the storage medium 38 or the like as necessary.
- the color interpolation process is, for example, a process of interpolating luminance information of a color that is insufficient for each pixel using color information (luminance information) of surrounding pixels. Therefore, in the main image data, which is image data that has been subjected to image processing such as color interpolation processing, each pixel has luminance information of all colors (for example, red, green, and blue).
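The color interpolation described above can be illustrated with a minimal bilinear-demosaic sketch (an illustration only, not the patent's method; an RGGB Bayer layout and NumPy are assumed):

```python
import numpy as np

def conv3x3(a, kernel):
    """3x3 weighted neighborhood sum with zero padding."""
    p = np.pad(a.astype(np.float64), 1)
    out = np.zeros(a.shape, dtype=np.float64)
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * p[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Fill in each pixel's two missing colors as a weighted average
    of the nearest samples of those colors (RGGB Bayer assumed)."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R samples
    masks[0::2, 1::2, 1] = True   # Gr samples
    masks[1::2, 0::2, 1] = True   # Gb samples
    masks[1::2, 1::2, 2] = True   # B samples
    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(masks[:, :, c], raw, 0.0)
        # Average of the known samples of color c in each 3x3 window.
        counts = conv3x3(masks[:, :, c].astype(np.float64), kernel)
        rgb[:, :, c] = conv3x3(known, kernel) / np.maximum(counts, 1e-9)
    return rgb
```

After this step, each pixel carries luminance information for all three colors, which is what distinguishes the main image data from the RAW data.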
- the image processing unit 30 may perform image processing such as white balance processing, contour compensation processing, gamma processing, and noise reduction processing on the RAW image data RAWDATA in addition to the color interpolation processing.
- the compression unit 32 compresses the RAW image data RAWDATA of the moving image in the RAW format, and generates compressed image data RAWCOMP.
- the compression unit 32 performs gradation conversion corresponding to the shooting condition for each color signal based on the pixel arrangement of the image sensor 22 on the RAW image data RAWDATA of continuously shot images. Then, the compression unit 32 performs a compression process using the correlation in the spatial direction and the temporal direction for each color signal on the RAW image data RAWDATA subjected to the gradation conversion.
- the compressed image data RAWCOMP generated by the compression unit 32 is stored in the storage medium 38, for example.
- the compression unit 32 functions as a gradation conversion unit that performs gradation conversion corresponding to the shooting conditions for each color signal, based on the pixel arrangement of the image sensor 22, on the RAW image data RAWDATA of continuously shot images. The compression unit 32 also functions as an image compression unit that performs compression processing using the correlation in the spatial direction and the temporal direction, for each color signal, on the RAW image data RAWDATA subjected to the gradation conversion. That is, in this embodiment, the compression unit 32 includes a gradation conversion unit and an image compression unit.
- the shooting conditions include, for example, at least one of exposure conditions at the time of shooting, white balance at the time of shooting, and luminance of the image.
- the shooting conditions may include, for example, transition information based on at least one time-series change in exposure conditions during shooting, white balance during shooting, and image brightness.
- the compression unit 32 acquires shooting information SHINF indicating the exposure conditions at the time of shooting, the white balance, and the like from the control unit 28. Further, for example, when the image brightness is included in the shooting conditions, the compression unit 32 analyzes the brightness signal of the RAW image data RAWDATA and calculates brightness information indicating the brightness of the image.
- the luminance information calculated from the RAW image data RAWDATA is information corresponding to the amount of light at the time of shooting such as the luminance distribution of the image and the average luminance of the entire screen.
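As a hedged sketch of how such luminance information might be computed (the exact analysis is not specified; using the green samples follows the later description of step S130, and the bin count is an assumption):

```python
import numpy as np

def luminance_info(raw, bins=16, max_val=4095):
    """Extract the average luminance and a coarse luminance
    distribution from the green (Gr) samples of an RGGB Bayer
    mosaic (sketch only)."""
    green = raw[0::2, 1::2]                  # Gr positions, RGGB assumed
    avg = float(green.mean())
    hist, _ = np.histogram(green, bins=bins, range=(0, max_val))
    dist = hist / hist.sum()                 # normalized distribution
    return avg, dist
```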
- the compression unit 32 may compress the RAW image data RAWDATA of the still image as it is in the RAW format.
- the compression unit 32 performs a compression process using the correlation in the spatial direction for each color signal on the RAW image data RAWDATA subjected to the gradation conversion.
- An existing compression encoding method such as JPEG (Joint Photographic Experts Group) format can be applied to the compression processing for each color signal of the RAW image data RAWDATA in the still image.
- the compression unit 32 may have a function of compressing the main image data (image data after image processing).
- for example, the compression unit 32 compresses the main image data of a moving image using a compression encoding method such as the MPEG (Moving Picture Experts Group) format or the H.264/MPEG-4 AVC format.
- the compression unit 32 compresses the main image data of the still image using a compression encoding method such as the JPEG format.
- the buffer unit 34 temporarily stores the RAW image data RAWDATA received from the A / D conversion unit 26, for example.
- the memory 36 is a built-in memory formed by a non-volatile memory such as a flash memory, for example, and stores a program and the like for controlling the operation of the digital camera 10.
- the storage medium 38 stores compressed image data RAWCOMP and the like of the captured image via a storage medium interface (not shown).
- the monitor 40 is a liquid crystal display, for example, and displays a through image, a menu screen, and the like.
- the operation unit 42 includes a release button and other various switches, and is operated by the user in order to operate the digital camera 10.
- FIG. 2 shows an example of the operation of the compression unit 32 shown in FIG.
- FIG. 2 shows an example of the operation of the compression unit 32 when moving image shooting is performed.
- steps S100 to S220 are performed by the compression unit 32 according to the image processing program stored in the memory 36.
- step S100 is triggered by the start of moving-image shooting.
- in step S100, the compression unit 32 creates a recording file to be used for recording the compressed image data RAWCOMP in the storage medium 38.
- in step S110, the compression unit 32 sequentially acquires the RAW image data RAWDATA corresponding to each frame of the moving image from the A / D conversion unit 26.
- the RAW image data RAWDATA is sequentially stored in the buffer unit 34 as described with reference to FIG. Therefore, the compression unit 32 may sequentially read the RAW image data RAWDATA from the buffer unit 34. In this case, the A / D conversion unit 26 may not output the RAW image data RAWDATA to the compression unit 32.
- in step S120, the compression unit 32 sequentially acquires the shooting information SHINF of each frame from the control unit 28, and sequentially stores the acquired shooting information SHINF in the buffer unit 34 and the like.
- the shooting information SHINF is information indicating an exposure condition and white balance at the time of shooting each frame of the moving image, and is referred to, for example, when determining the content of gradation conversion processing (step S140). Therefore, the compression unit 32 may store the shooting information SHINF for a predetermined time (for example, about 1 second) before the current frame in the buffer unit 34 or the like. Note that the compression unit 32 may include a buffer that stores the shooting information SHINF.
- in step S130, the compression unit 32 sequentially extracts the luminance information of each frame of the moving image from the RAW image data RAWDATA, and sequentially stores the extracted luminance information in the buffer unit 34 and the like.
- the luminance information is the luminance distribution of the image, the average luminance of the frame (entire screen), and the like, and is referred to, for example, when determining the content of gradation conversion processing (step S140).
- the compression unit 32 may store the luminance information for a predetermined time (for example, about 1 second) before the current frame in the buffer unit 34 or the like.
- the compression unit 32 may include a buffer that stores luminance information.
- the extraction of the luminance information is performed by analyzing the luminance signal of the RAW image data RAWDATA.
- the compression unit 32 extracts the luminance distribution of the image based on the luminance signal of the green component of the RAW image data RAWDATA.
- in step S140, the compression unit 32 determines the processing content of the gradation conversion and the number of compressed frames based on the shooting information SHINF (exposure conditions at the time of shooting, white balance at the time of shooting, etc.) and the luminance information (such as the luminance distribution of the image) stored in the buffer unit 34.
- for example, the compression unit 32 comprehensively evaluates the exposure conditions at the time of shooting, the white balance at the time of shooting, and the luminance of the image, and determines the processing content of the gradation conversion and the number of compressed frames.
- the compression unit 32 may determine the processing content for gradation conversion and the number of compressed frames based on at least one of the exposure conditions at the time of shooting, the white balance at the time of shooting, and the luminance of the image.
- the compression unit 32 may also determine the processing content of the gradation conversion and the number of compressed frames according to transition information based on a time-series change in at least one of the exposure conditions at the time of shooting, the white balance at the time of shooting, and the luminance of the image.
- the processing content of the gradation conversion is determined by selecting, from a plurality of gradation conversion tables prepared in advance (for example, those shown in FIG. 4), the gradation conversion table to be used for the gradation conversion.
- for example, when the image luminance (luminance distribution, average luminance, etc.) is on the low-luminance side, the compression unit 32 selects, from the plurality of gradation conversion tables prepared in advance, a gradation conversion table that preserves the gradation of the low-luminance portion.
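A minimal sketch of such a selection, assuming the tables are keyed by where the average luminance falls in the input range (the actual decision logic is not given in the text):

```python
def select_tone_table(avg_luminance, tables, max_val=4095):
    """Pick one of several gradation conversion tables based on where
    the average luminance sits in the input range: low-luminance
    scenes get the table that preserves low-luminance gradation."""
    ratio = avg_luminance / max_val
    index = min(int(ratio * len(tables)), len(tables) - 1)
    return tables[index]
```

With four tables, an average luminance near 0 would select the first (TB1-like) table and one near 4095 would select the last (TB4-like) table.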
- the determination of the number of compressed frames is to determine the period of a frame (hereinafter also referred to as a reference frame) that is a reference for compression processing in the time direction. That is, the number of compressed frames corresponds to the period of the reference frame.
- for example, in H.264/MPEG-4 AVC, the number of compressed frames corresponds to the period of an IDR (Instantaneous Decoder Refresh) picture.
- the number of compressed frames may be constant or variable.
- for example, the compression unit 32 changes the number of compressed frames when the time-series change in at least one of the exposure conditions at the time of shooting, the white balance at the time of shooting, and the luminance information is large.
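One way this adjustment could look, as an illustrative sketch only (the threshold and the period values are invented for illustration, not taken from the text):

```python
def choose_reference_period(prev_avg, cur_avg,
                            base_period=30, short_period=10,
                            threshold=0.15):
    """Shorten the reference-frame (e.g. IDR) period when the average
    luminance changes quickly between frames, so that the gradation
    table and additional information are refreshed sooner."""
    change = abs(cur_avg - prev_avg) / max(prev_avg, 1e-9)
    return short_period if change > threshold else base_period
```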
- the compression unit 32 selects a gradation conversion table to be used for gradation conversion for each reference frame from a plurality of gradation conversion tables prepared in advance. Thereby, the processing content of the gradation conversion is updated for each reference frame.
- in step S150, the compression unit 32 separates the RAW image data RAWDATA acquired in step S110 for each color signal (color component), and generates frame data for each color component.
- for example, as shown in FIG. 3, the compression unit 32 generates four sets of frame data FRM12, FRM14, FRM16, and FRM18 for the R (red), Gr (green), Gb (green), and B (blue) components.
- the frame data FRM12, FRM14, FRM16, and FRM18 are stored in the buffer unit 34, for example.
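The separation in step S150 can be sketched for an RGGB Bayer layout (a sketch under that assumption, not the patent's exact implementation):

```python
import numpy as np

def separate_bayer_planes(frame):
    """Split one RGGB Bayer frame (FRM10) into four single-color
    planes (FRM12/FRM14/FRM16/FRM18).  Each plane keeps the spatial
    order of its samples, so adjacent values stay highly correlated
    and each plane can be compressed as a monochrome frame."""
    return {
        "R":  frame[0::2, 0::2],   # FRM12
        "Gr": frame[0::2, 1::2],   # FRM14
        "Gb": frame[1::2, 0::2],   # FRM16
        "B":  frame[1::2, 1::2],   # FRM18
    }
```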
- in step S160, the compression unit 32 generates additional information necessary for decoding the compressed image data RAWCOMP.
- the additional information includes, for example, the shooting information SHINF, information indicating the color component of each set of frame data generated in step S150, information indicating the gradation conversion processing content determined in step S140, and information on the compression processing (the number of compressed frames and the compression encoding system).
- the compression unit 32 may generate additional information for each frame or may generate additional information for each reference frame. For example, when the additional information is generated for each frame, the content of the additional information of the frames other than the reference frame (for example, the content of the shooting information SHINF) may be only the difference from the reference frame.
- in step S170, the compression unit 32 performs gradation conversion on the frame data of each color component generated in step S150, based on the processing content of the gradation conversion determined in step S140. That is, the compression unit 32 performs gradation conversion of the RAW image data RAWDATA for each color component according to the shooting conditions.
- the frame data of each color component subjected to gradation conversion is stored, for example, in the buffer unit 34 or the like.
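The per-plane gradation conversion of step S170 amounts to a table lookup; a minimal sketch (the linear table shown is a placeholder assumption, not one of the patent's tables):

```python
import numpy as np

def apply_gradation(plane, table):
    """Convert one color plane through a gradation conversion table
    (12-bit input index -> 10-bit output) via array indexing."""
    return table[plane]

# Placeholder linear 12-bit -> 10-bit table (an assumption; the
# camera would select one of several non-linear tables instead).
linear_table = (np.arange(4096) >> 2).astype(np.uint16)
```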
- in step S180, the compression unit 32 performs compression processing using the correlation in the spatial direction and the temporal direction on the frame data of each color component subjected to the gradation conversion in step S170. That is, the compression unit 32 performs, for each color component, compression processing using the correlation in the spatial direction and the temporal direction on the RAW image data RAWDATA subjected to the gradation conversion.
- the frame data generated in step S150 can each be handled as monochrome frame data by themselves. Further, in the frame data of each color component, the correlation between adjacent pixels is high, and the correlation between time-series continuous frame data is also high. For this reason, an existing compression encoding method such as the MPEG format or the H.264/MPEG-4 AVC format can be applied to the compression processing of the frame data of each color component.
- in step S190, the compression unit 32 compresses and encodes the additional information generated in step S160.
- An existing compression coding method such as Huffman coding can be applied to the compression coding of the additional information.
- the compression unit 32 may encrypt the additional information instead of performing compression encoding.
- an existing encryption method such as AES (Advanced Encryption Standard) can be applied to encrypt additional information.
- the additional information is compressed and encoded (or encrypted) by a predetermined method.
- in step S200, the compression unit 32 records the frame data of each color component compressed in step S180 and the additional information compressed in step S190 in the recording file created in the storage medium 38 in step S100.
- the additional information may be recorded in the recording file once for each reference frame, or the additional information (whose contents are updated for each reference frame) may be recorded in the recording file for each frame.
- in step S210, the compression unit 32 determines whether or not shooting has ended. For example, the compression unit 32 can determine whether or not shooting has ended by receiving a signal indicating the end of shooting from the control unit 28. If shooting has not ended (No in step S210), the operation of the compression unit 32 returns to step S110. On the other hand, if shooting has ended (Yes in step S210), the compression unit 32 closes the recording file created in the storage medium 38 in step S220, and ends the moving-image compression processing.
- the compressed image data RAWCOMP stored in the storage medium 38 in steps S100 to S220 is decoded into moving image data in RAW format suitable for image quality adjustment by a procedure reverse to the above-described compression processing.
- the compressed additional information is decoded, and the frame data of each color component is decoded based on the decoded additional information.
- the user can edit the moving image data in the RAW format, and can edit the moving image with high accuracy.
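The decoding side's reassembly of the color planes into a RAW-format Bayer frame might look like this sketch (RGGB assumed; the inverse gradation conversion and entropy decoding are omitted):

```python
import numpy as np

def recombine_bayer_planes(planes):
    """Reassemble four decoded color planes into one RGGB Bayer
    frame in RAW format (the reverse of the step S150 separation)."""
    h, w = planes["R"].shape
    frame = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    frame[0::2, 0::2] = planes["R"]
    frame[0::2, 1::2] = planes["Gr"]
    frame[1::2, 0::2] = planes["Gb"]
    frame[1::2, 1::2] = planes["B"]
    return frame
```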
- steps S110 and S120 may be executed in parallel, or the order in which they are executed may be reversed.
- the compression unit 32 may omit the process of step S130.
- the operation of controlling the storage medium 38 such as steps S100 and S220 may be performed by the control unit 28.
- FIG. 3 shows an example of the frame data FRM for each color component.
- FIG. 3 shows an example of the frame data FRM when the pixel array of the image sensor 22 is a Bayer array.
- the frame data FRM10 of the RAW image data RAWDATA has rows in which red (R) pixels and green (Gr) pixels are alternately arranged and rows in which green (Gb) pixels and blue (B) pixels are alternately arranged. The red (R), green (Gr, Gb), and blue (B) pixels have luminance information of red (R), green (Gr, Gb), and blue (B), respectively.
- the frame data FRM10 is separated into frame data FRM12, FRM14, FRM16, and FRM18 for each color component as described in step S150 of FIG.
- in the frame data FRM12, the red (R) pixels are arranged in a state where the correlation between the red (R) pixels in the frame data FRM10 is maintained.
- in the frame data FRM14, the green (Gr) pixels are arranged in a state where the correlation between the green (Gr) pixels in the frame data FRM10 is maintained.
- in the frame data FRM16, the green (Gb) pixels are arranged in a state where the correlation between the green (Gb) pixels in the frame data FRM10 is maintained.
- in the frame data FRM18, the blue (B) pixels are arranged in a state where the correlation between the blue (B) pixels in the frame data FRM10 is maintained.
- by separating the frame data in this way, the correlation between adjacent pixels can be increased. That is, in this embodiment, the correlation between adjacent pixels within a frame can be increased, and the correlation between frames that are continuous in time series can also be increased.
- FIG. 4 shows an example of input / output characteristics of the gradation conversion table TB.
- FIG. 4 shows an example of the input / output characteristics of the gradation conversion table TB having an input gradation and an output gradation of 12 bits (0 to 4095) and 10 bits (0 to 1023), respectively.
- the gradation conversion table TB is a table for reducing the number of gradation bits of image data after gradation conversion compared to the number of gradation bits of image data before gradation conversion. Therefore, the digital camera 10 has a plurality of gradation conversion tables TB having different input / output characteristics in order to suppress deterioration in image quality before and after gradation conversion.
- the gradation conversion table TB1 has input / output characteristics with little deterioration in gradation accuracy of the low luminance part.
- the gradation conversion tables TB2 and TB3 have input / output characteristics in which the deterioration of the gradation accuracy of the middle luminance portion is less than that of the gradation conversion table TB1.
- the gradation conversion table TB4 has input / output characteristics with little deterioration in gradation accuracy of the high luminance part.
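FIG. 4 characterizes the tables only qualitatively, so the curves below are assumptions: a gamma-style sketch of a 12-bit to 10-bit table that favors low luminance (like TB1) and one that favors high luminance (like TB4), applied as a per-pixel lookup:

```python
import numpy as np

# Illustrative 12-bit -> 10-bit gradation conversion tables; the exact
# input/output characteristics of TB1..TB4 are not specified in the
# embodiment, so these curve shapes are assumptions.
x = np.arange(4096)

# TB1-like: spends more output codes on the low-luminance part.
tb1 = np.round(1023 * (x / 4095.0) ** 0.5).astype(np.uint16)

# TB4-like: spends more output codes on the high-luminance part.
tb4 = np.round(1023 * (1 - (1 - x / 4095.0) ** 0.5)).astype(np.uint16)

def convert(plane_12bit, table):
    """Apply a gradation conversion table as a per-pixel lookup."""
    return table[plane_12bit]

plane = np.array([[0, 64, 1024, 4095]], dtype=np.uint16)
out = convert(plane, tb1)
```

Note how the TB1-like curve maps the small input 64 all the way up to output 128, leaving fine steps in the shadows, while compressing the highlights.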
- the compression unit 32 selects, from the plurality of gradation conversion tables TB1, TB2, TB3, and TB4, the gradation conversion table TB that preserves the accuracy of the information desired to remain in the original image data, based on the shooting conditions. For example, when the luminance of the image (for example, the luminance distribution indicated by the luminance information of the image) is biased toward the low-luminance side, the gradation conversion table TB1, which preserves the gradation of the low-luminance portion, is selected. Conversely, when the luminance of the image is biased toward the high-luminance side, the gradation conversion table TB4, which preserves the gradation of the high-luminance portion, is selected.
- the compression unit 32 may select the gradation conversion table TB based on the white balance at the time of shooting, or may select the gradation conversion table TB based on the exposure conditions at the time of shooting. Alternatively, the compression unit 32 may select the gradation conversion table TB based on a plurality of pieces of information such as image brightness information and white balance at the time of shooting.
- because the gradation conversion table TB corresponding to the shooting conditions is selected, deterioration in image quality before and after gradation conversion can be suppressed. Therefore, in this embodiment, the amount of image data after tone conversion (the image data on which compression processing is performed) can be reduced compared to the amount before tone conversion, while suppressing deterioration in image quality.
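The embodiment does not fix a concrete selection criterion, so the rule below is a hypothetical heuristic: pick TB1 through TB4 from where the mean of the luminance histogram falls:

```python
def select_table(luminance_histogram):
    """Pick a gradation conversion table name from the mean of a
    luminance histogram. This threshold scheme is an assumption for
    illustration; the embodiment only says the choice is based on
    the luminance distribution of the image.

    luminance_histogram: pixel counts over luminance bins (dark first).
    """
    total = sum(luminance_histogram)
    bins = len(luminance_histogram)
    mean_bin = sum(i * c for i, c in enumerate(luminance_histogram)) / total
    position = mean_bin / (bins - 1)  # 0.0 = dark scene, 1.0 = bright
    if position < 0.25:
        return "TB1"   # preserve low-luminance gradation
    elif position < 0.5:
        return "TB2"   # preserve lower-middle luminance
    elif position < 0.75:
        return "TB3"   # preserve upper-middle luminance
    return "TB4"       # preserve high-luminance gradation

# A histogram concentrated in the dark bins selects TB1;
# one concentrated in the bright bins selects TB4.
dark_hist = [100, 50, 10, 0, 0, 0, 0, 0]
bright_hist = [0, 0, 0, 0, 0, 10, 50, 100]
```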
- the digital camera 10 need not have gradation conversion tables TB corresponding to the white balance.
- in that case, the compression unit 32 first multiplies each color signal by the gain value for that color signal corresponding to the white balance. The compression unit 32 then performs gradation conversion using the gradation conversion table TB selected according to the luminance information of the image. This reduces the number of gradation conversion tables TB that must be prepared in advance.
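That order of operations — a per-color white-balance gain first, then a single luminance-selected table shared by all colors — might look like the sketch below. The gain values and the linear 12-to-10-bit table are illustrative, not from the embodiment:

```python
import numpy as np

def wb_then_convert(planes, gains, table):
    """Apply white-balance gains per color signal, then one shared
    gradation conversion table. `planes` and `gains` are dicts keyed
    by color-signal name; the gain values used here are hypothetical.
    """
    out = {}
    for name, plane in planes.items():
        # Scale by the WB gain, round, and clamp to the 12-bit range.
        scaled = np.clip(np.round(plane * gains[name]), 0, 4095).astype(np.uint16)
        out[name] = table[scaled]  # the same LUT serves every color
    return out

# A simple linear 12-bit -> 10-bit table (divide by 4).
table = np.minimum(np.arange(4096) // 4, 1023).astype(np.uint16)
planes = {"R": np.array([1000], dtype=np.uint16),
          "B": np.array([1000], dtype=np.uint16)}
gains = {"R": 1.8, "B": 1.2}  # hypothetical white-balance gains
result = wb_then_convert(planes, gains, table)
```

Because the gains fold the white balance into the signal before lookup, one table per luminance condition suffices instead of one table per (luminance, white balance) pair.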
- the digital camera 10 includes the compression unit 32 that compresses the RAW image data RAWDATA of continuously captured images in the RAW format.
- the compression unit 32 performs, on the RAW image data RAWDATA of continuously captured images such as moving images, tone conversion based on the shooting conditions for each color signal, and compresses the tone-converted RAW image data RAWDATA for each color signal. Accordingly, in this embodiment, the RAW image data RAWDATA of continuously shot images can be compressed efficiently while suppressing deterioration in image quality before and after gradation conversion. That is, this embodiment can provide an image processing apparatus and an image processing program capable of efficiently compressing RAW image data of continuously captured images.
- the compressed image data RAWCOMP generated by the digital camera 10 of this embodiment is decoded into RAW format moving image data suitable for image quality adjustment by a procedure reverse to the compression process shown in FIG.
- the compressed image data RAWCOMP is decoded into RAW image data RAWDATA before gradation conversion.
- the user can edit the moving image accurately. That is, this embodiment can provide moving image data that allows a moving image to be edited accurately.
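The decoding side is described only as the reverse of the procedure in FIG. 2. One step of that reversal, undoing the many-to-one gradation table, cannot be exact once bits were dropped; the sketch below shows one reasonable decoder-side convention (map each output code back to the first input code that produced it):

```python
import numpy as np

def approximate_inverse(table, out_levels=1024):
    """Build an approximate inverse of a many-to-one gradation table.

    Each 10-bit output code is mapped back to the first 12-bit input
    code that produced it. This convention is an assumption; exact
    inversion is impossible after the bit reduction.
    """
    inv = np.zeros(out_levels, dtype=np.uint16)
    seen = np.zeros(out_levels, dtype=bool)
    for x, y in enumerate(table):
        if not seen[y]:
            inv[y] = x
            seen[y] = True
    return inv

# Linear 12-bit -> 10-bit table (divide by 4) and its approximate inverse.
table = np.minimum(np.arange(4096) // 4, 1023).astype(np.uint16)
inv = approximate_inverse(table)
restored = inv[table[np.array([0, 100, 4092], dtype=np.uint16)]]
```

Values that sit exactly on a code boundary round-trip unchanged; other values come back within the quantization step of the table.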
- RAW image data RAWDATA of the moving image is compressed in the RAW format.
- the present invention is not limited to such an embodiment.
- RAW image data RAWDATA of an image obtained by continuous shooting may be compressed in the RAW format. Also in this case, the same effect as the above-described embodiment can be obtained.
- the frame data FRM10 may be separated into three frame data of red (R), green (G), and blue (B).
- the two green (Gr, Gb) frame data FRM14 and FRM16 are combined into one green (G) frame data by averaging the two green (Gr, Gb) values.
- the additional information includes information indicating that two green colors (Gr, Gb) in the Bayer array are combined into one green color (G). Also in this case, the same effect as the above-described embodiment can be obtained.
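The Gr/Gb merge described above reduces to a per-pixel average of the two green planes. A sketch, using integer averaging (one of several possible rounding choices):

```python
import numpy as np

def merge_greens(frm14_gr, frm16_gb):
    """Combine the two green planes of a Bayer frame into one G plane
    by averaging corresponding pixels. The additional information would
    then record that Gr and Gb were merged, so a decoder knows the two
    original green planes cannot be recovered exactly.
    """
    # Sum in a wider type to avoid 16-bit overflow, then halve.
    total = frm14_gr.astype(np.uint32) + frm16_gb.astype(np.uint32)
    return (total // 2).astype(np.uint16)

gr = np.array([[100, 200]], dtype=np.uint16)
gb = np.array([[110, 201]], dtype=np.uint16)
g = merge_greens(gr, gb)
```

This trades a small, irreversible loss in the green channel for one fewer plane to compress and store.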
- the RAW image data RAWDATA is a Bayer array.
- the present invention is not limited to such an embodiment.
- the RAW image data RAWDATA may be a color array other than the Bayer array.
- the RAW image data RAWDATA may be in a color arrangement other than red (R), green (Gr, Gb), and blue (B).
- the RAW image data RAWDATA may be a CMY (cyan, magenta, yellow) color array. Also in this case, the same effect as the above-described embodiment can be obtained.
- the image processing apparatus 10 is applied to a digital camera.
- the present invention is not limited to such an embodiment.
- the image processing apparatus of the present invention may be applied to electronic devices such as a mobile phone with a camera or a digital video camera having continuous shooting and moving image shooting functions. In this case as well, the same effect as the above-described embodiment can be obtained.
- the image processing apparatus 10 may have at least the compression unit 32.
- the processing (steps S100 to S220 in FIG. 2) executed by the compression unit 32 may be executed by an external processing device such as a computer. That is, the image processing program may cause an external processing device such as a computer to execute the processing executed by the compression unit 32 (steps S100 to S220 in FIG. 2).
- the image processing program is installed in the external processing apparatus via a storage medium readable by the external processing apparatus, such as a CD-ROM, or via a communication network such as the Internet.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computing Systems (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Physics & Mathematics (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
- Color Television Image Signal Generators (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Facsimile Image Signal Circuits (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
Abstract
Description
Claims (11)
- 1. An image processing apparatus comprising: an imaging unit that has an image sensor for converting a subject image into an electric signal and that generates RAW image data of captured images; and a gradation conversion unit that performs, on the RAW image data of continuously captured images, gradation conversion according to shooting conditions for each color signal based on the pixel array of the image sensor.
- 2. The image processing apparatus according to claim 1, further comprising a buffer unit that temporarily stores the RAW image data.
- 3. The image processing apparatus according to claim 1, further comprising an image compression unit that performs, on the RAW image data after gradation conversion, compression processing that uses spatial and temporal correlation for each color signal.
- 4. The image processing apparatus according to claim 1, wherein the shooting conditions include at least one of an exposure condition at the time of shooting, a white balance at the time of shooting, and a luminance of the image.
- 5. The image processing apparatus according to claim 1, further comprising a plurality of gradation conversion tables for making the number of gradation bits of the image after gradation conversion smaller than the number of gradation bits of the image before gradation conversion, wherein the gradation conversion unit selects the gradation conversion table to be used for gradation conversion from the plurality of gradation conversion tables based on the shooting conditions.
- 6. The image processing apparatus according to claim 1, wherein the shooting conditions include transition information based on a time-series change in at least one of an exposure condition at the time of shooting, a white balance at the time of shooting, and a luminance of the image.
- 7. An image processing program for compressing RAW image data containing color signals based on a pixel array of an image sensor, the program causing a computer to perform, on the RAW image data of continuously captured images, gradation conversion according to shooting conditions for each color signal based on the pixel array of the image sensor.
- 8. The image processing program according to claim 7, further causing the computer to perform, on the RAW image data after gradation conversion, compression processing that uses spatial and temporal correlation for each color signal.
- 9. The image processing program according to claim 7, wherein the shooting conditions include at least one of an exposure condition at the time of shooting, a white balance at the time of shooting, and a luminance of the image.
- 10. The image processing program according to claim 7, wherein the gradation conversion table to be used for gradation conversion is selected, based on the shooting conditions, from a plurality of gradation conversion tables for making the number of gradation bits of the image after gradation conversion smaller than the number of gradation bits of the image before gradation conversion.
- 11. The image processing program according to claim 7, wherein the shooting conditions include transition information based on a time-series change in at least one of an exposure condition at the time of shooting, a white balance at the time of shooting, and a luminance of the image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012521282A JP5924262B2 (ja) | 2010-06-25 | 2011-06-06 | 画像処理装置および画像処理プログラム |
US13/805,941 US9554108B2 (en) | 2010-06-25 | 2011-06-06 | Image processing device and storage medium storing image processing program |
CN201180031594.7A CN102959956B (zh) | 2010-06-25 | 2011-06-06 | 图像处理装置和图像处理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010144604 | 2010-06-25 | ||
JP2010-144604 | 2010-06-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011161883A1 true WO2011161883A1 (ja) | 2011-12-29 |
Family
ID=45371090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/003181 WO2011161883A1 (ja) | 2010-06-25 | 2011-06-06 | 画像処理装置および画像処理プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US9554108B2 (ja) |
JP (1) | JP5924262B2 (ja) |
WO (1) | WO2011161883A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9693010B2 (en) * | 2014-03-11 | 2017-06-27 | Sony Corporation | Method, electronic device, and server for generating digitally processed pictures |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008067315A (ja) * | 2006-09-11 | 2008-03-21 | Olympus Corp | 撮像装置、画像処理装置、撮像システム及び画像処理プログラム |
WO2008132791A1 (ja) * | 2007-04-13 | 2008-11-06 | Panasonic Corporation | 画像処理装置、集積回路及び画像処理方法 |
JP4372686B2 (ja) * | 2002-07-24 | 2009-11-25 | パナソニック株式会社 | 撮像システム |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002125241A (ja) | 2001-08-02 | 2002-04-26 | Konica Corp | スチルビデオカメラ |
JP4470485B2 (ja) * | 2003-12-25 | 2010-06-02 | 株式会社ニコン | 固定ビット長の予測差分圧縮データを生成する画像圧縮装置および画像圧縮プログラム、画像伸張装置および画像伸張プログラム、並びに電子カメラ |
JP4875833B2 (ja) | 2004-03-16 | 2012-02-15 | オリンパス株式会社 | 撮像装置、画像処理装置、画像処理システム、及び画像処理方法 |
US7880771B2 (en) * | 2004-03-16 | 2011-02-01 | Olympus Corporation | Imaging apparatus, image processing apparatus, image processing system and image processing method |
JP4107302B2 (ja) * | 2005-03-22 | 2008-06-25 | セイコーエプソン株式会社 | 印刷装置、画像処理装置、印刷方法、画像処理方法、および変換テーブルの作成方法 |
JP4759363B2 (ja) * | 2005-10-27 | 2011-08-31 | Hoya株式会社 | 画像信号処理ユニット |
JP2007198831A (ja) | 2006-01-25 | 2007-08-09 | Fujifilm Corp | 画像データの処理方法および処理プログラム |
JP5211521B2 (ja) * | 2007-03-26 | 2013-06-12 | 株式会社ニコン | 画像処理装置、画像処理方法、画像処理プログラム、およびカメラ |
JP4785799B2 (ja) * | 2007-07-17 | 2011-10-05 | 富士フイルム株式会社 | 画像処理装置、画像処理方法及び撮影装置 |
JP4973372B2 (ja) * | 2007-08-06 | 2012-07-11 | 株式会社ニコン | 画像処理装置、撮像装置および画像処理プログラム |
JP5220677B2 (ja) * | 2009-04-08 | 2013-06-26 | オリンパス株式会社 | 画像処理装置、画像処理方法および画像処理プログラム |
JP5780764B2 (ja) * | 2011-01-17 | 2015-09-16 | オリンパス株式会社 | 撮像装置 |
- 2011-06-06 JP JP2012521282A patent/JP5924262B2/ja active Active
- 2011-06-06 US US13/805,941 patent/US9554108B2/en active Active
- 2011-06-06 WO PCT/JP2011/003181 patent/WO2011161883A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN102959956A (zh) | 2013-03-06 |
JP5924262B2 (ja) | 2016-05-25 |
US20130100313A1 (en) | 2013-04-25 |
US9554108B2 (en) | 2017-01-24 |
JPWO2011161883A1 (ja) | 2013-08-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8009924B2 (en) | Method and apparatus for recording image data | |
US10674110B2 (en) | Image encoding apparatus, and control method thereof | |
JPWO2019142821A1 (ja) | 符号化装置、復号装置、符号化方法、復号方法、符号化プログラム、および復号プログラム | |
KR101931631B1 (ko) | 카메라 장치의 이미지 부호화장치 및 방법 | |
JP2003199019A (ja) | 撮像装置および方法、記録媒体、並びにプログラム | |
US10244199B2 (en) | Imaging apparatus | |
WO2016171006A1 (ja) | 符号化装置および符号化方法、並びに、復号装置および復号方法 | |
JP6700798B2 (ja) | 撮像装置及びその制御方法 | |
JP5407651B2 (ja) | 画像処理装置、画像処理プログラム | |
JP5924262B2 (ja) | 画像処理装置および画像処理プログラム | |
EP2515543B1 (en) | Image capturing apparatus and image capturing method | |
JP2009268032A (ja) | 撮像装置 | |
JP6741532B2 (ja) | 撮像装置および記録方法 | |
US11823422B2 (en) | Image processing apparatus, control method of the same, and storage medium | |
KR100827680B1 (ko) | 썸네일 데이터 전송 방법 및 장치 | |
JP6152642B2 (ja) | 動画像圧縮装置、動画像復号装置およびプログラム | |
JP5167385B2 (ja) | デジタルカメラ | |
JP2010124114A (ja) | デジタルカメラおよび画像データ処理プログラム | |
JP6907004B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
CN102959956B (zh) | 图像处理装置和图像处理方法 | |
JP2005217493A (ja) | 撮像装置 | |
JP2017200199A (ja) | 動画像圧縮装置、動画像復号装置およびプログラム | |
KR100771138B1 (ko) | 촬영 장치 및 영상 보정 방법 | |
JP2009033629A (ja) | 撮像装置及びその制御方法、並びにプログラム及び媒体、画像処理装置 | |
JP2019036992A (ja) | 圧縮装置、復号装置およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180031594.7 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11797772 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2012521282 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 13805941 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 11797772 Country of ref document: EP Kind code of ref document: A1 |