WO2012026122A1 - Imaging device - Google Patents
- Publication number
- WO2012026122A1 (PCT/JP2011/004721, JP2011004721W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- compression
- image
- compression rate
- pixel data
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/333—Mode signalling or mode changing; Handshaking therefor
- H04N1/3333—Mode signalling or mode changing; Handshaking therefor during transmission, input or output of the picture signal; within a single document or page
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/41—Bandwidth or redundancy reduction
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2101/00—Still video cameras
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N2201/333—Mode signalling or mode changing; Handshaking therefor
- H04N2201/33307—Mode signalling or mode changing; Handshaking therefor of a particular mode
- H04N2201/33342—Mode signalling or mode changing; Handshaking therefor of a particular mode of transmission mode
- H04N2201/33357—Compression mode
Definitions
- the present invention relates to an imaging apparatus capable of compressing a video signal.
- technologies that support high-speed imaging functions have made remarkable progress in imaging devices such as digital still cameras.
- technologies supporting a high-speed imaging function include increasing the number of pixels by miniaturizing the pixel cells of a single-plate color image sensor and increasing the functionality and speed of operation due to technological advances of MOS type image sensors.
- image sensors incorporating these technologies that support the high-speed imaging function are widely used as the image sensors constituting such imaging apparatuses.
- the pixel signals output from such an image sensor have a high data rate because of the increased pixel count and high-speed operation such as high-speed readout.
- this higher data rate gives rise to the problems described below. That is, in order to cope with the higher data rate, it is necessary to provide 1) a faster interface circuit on the signal receiving side, 2) greater processing capacity in the digital circuit that processes the video signal, and 3) higher data bandwidth and larger capacity in the memory that temporarily stores the video signal.
- in order to respond, Patent Document 1 discloses a technique for reducing the volume of video signal data, and thus the amount of data handled per unit time, by performing lossy compression through quantization of digitally converted image data.
- Patent Document 2 discloses a technique for preventing image quality degradation due to compression distortion caused by irreversible compression of the sensor pixel signal, in which compression processing of the sensor pixel signal is switched on or off according to shooting conditions.
- with these techniques, the data amount per unit time of the pixel data can be reduced.
- however, because the compression ratio of the irreversible compression is fixed, increasing the suppression of the data amount per unit time causes unacceptable image quality deterioration due to compression distortion, for example on a subject with high spatial frequency, which the image observer perceives as unnatural.
- the present invention has been made in view of the above-described circumstances, and an object thereof is to provide an imaging apparatus that can always reduce the data amount per unit time by a certain amount or more regardless of shooting conditions.
- in order to achieve the above object, an imaging apparatus according to the present invention includes: a photoelectric conversion unit that has a plurality of pixels arranged two-dimensionally to convert incident light into electrical signals, and that outputs the plurality of electrical signals converted by the plurality of pixels as pixel data; a detection unit that detects a feature for each region of an image based on the pixel data output from the photoelectric conversion unit; a compression rate setting unit that sets a compression rate for each region based on the features detected by the detection unit; and a compression unit that performs irreversible compression on the pixel data output from the photoelectric conversion unit according to the compression rate set by the compression rate setting unit.
- according to the present invention, it is possible to realize an imaging apparatus capable of reducing the data amount per unit time by a certain amount or more regardless of shooting conditions.
- FIG. 1 is a block diagram illustrating a configuration of the solid-state imaging device according to the first embodiment.
- FIG. 2 is a diagram illustrating a detailed configuration of the photoelectric conversion unit according to the first embodiment.
- FIG. 3 is a diagram showing an equivalent circuit of the pixel cell in the first embodiment.
- FIG. 4 is a diagram illustrating a detailed configuration of the compression unit and the expansion unit in the first embodiment.
- FIG. 5 is a flowchart for explaining the image coding method according to the first embodiment.
- FIG. 6 is a diagram for explaining an arrangement of pixels used for calculation of a predicted value in the first embodiment.
- FIG. 7 is a flowchart for explaining the image decoding method according to the first embodiment.
- FIG. 8A is a diagram showing an example when a plurality of areas are set in the first exemplary embodiment.
- FIG. 8B is a diagram illustrating an example when a plurality of regions are set in the first exemplary embodiment.
- FIG. 9 is an example of a flowchart for explaining the compression rate selection method in the rectangular area according to the first embodiment.
- FIG. 10A is a diagram illustrating an example of a data format of the solid-state imaging element in the first embodiment.
- FIG. 10B is a diagram illustrating an example of a data format of the solid-state imaging element in the first embodiment.
- FIG. 11 is a flowchart for explaining a procedure for storing and transmitting independent compressed imaging data for each region in the first embodiment.
- FIG. 12A is a diagram illustrating an example of a data format of the solid-state imaging element in the first embodiment.
- FIG. 12B is a diagram illustrating an example of a data format of the solid-state imaging element in the first embodiment.
- FIG. 13A is a diagram showing an LVDS output of the solid-state imaging device in the first exemplary embodiment.
- FIG. 13B is a diagram illustrating an LVDS output of the solid-state imaging device according to Embodiment 1.
- FIG. 14 is a flowchart for explaining a frame data reception procedure in the first embodiment.
- FIG. 15 is a block diagram illustrating a configuration of a camera that performs motion detection of the entire subject according to the first exemplary embodiment.
- FIG. 16 is a diagram schematically illustrating an example of an image captured by the solid-state imaging device according to the first embodiment.
- FIG. 17A is a block diagram illustrating a configuration of a camera according to the second embodiment.
- FIG. 17B is an example of a motion vector output to the feature detection unit according to the second embodiment.
- FIG. 18 is a block diagram illustrating a detailed configuration of the image encoding unit according to the second embodiment.
- FIG. 19A is a schematic diagram of a face area when applied to the face area in the third embodiment.
- FIG. 19B is a schematic diagram of a face area when applied to the face area in the third embodiment.
- FIG. 20 is a diagram for explaining a procedure for converting the frame rate by the compression unit according to the fourth embodiment.
- FIG. 21 is a block diagram illustrating a configuration of the solid-state imaging device according to the fifth embodiment.
- FIG. 22 is a diagram illustrating an example of imaging data used for AF detection information in the fifth embodiment.
- FIG. 23 is a flowchart for explaining the feature detection processing when the AF detection information according to the fifth embodiment is used.
- FIG. 24 is a diagram illustrating an example of contrast in the fifth embodiment.
- FIG. 25 is a block diagram illustrating a configuration of a digital camera according to the sixth embodiment.
- FIG. 26 is a block diagram illustrating a configuration of the solid-state imaging device according to the second embodiment.
- FIG. 27 is a diagram illustrating a relationship between the pixel data level of each pixel of one horizontal line selected in the feature detection unit according to the second embodiment and the selected compression rate.
- FIG. 1 is a block diagram showing the configuration of the solid-state imaging device according to Embodiment 1 of the present invention.
- the solid-state imaging device 1 shown in FIG. 1 includes a solid-state imaging device 10, a feature detection unit 14, a decompression unit 15, and an image processing unit 16.
- the feature detection unit 14 detects a feature for each region of the image from the pixel data output from the photoelectric conversion unit 11. Then, the feature detection unit 14 inputs the feature amount to the compression rate setting unit 13. Specifically, the feature detection unit 14 extracts a predetermined feature amount from the input image data, and feeds back the extracted feature amount to the compression rate setting unit 13 of the solid-state imaging device 10.
- the decompression unit 15 decompresses the pixel data compressed by the compression unit 12. Specifically, the decompression unit 15 receives the compressed pixel data from the solid-state imaging device 10, decompresses the compressed pixel data, and inputs the decompressed pixel data to the image processing unit 16.
- the image processing unit 16 processes the pixel data decompressed by the decompression unit 15. Specifically, the pixel data (RAW data) input from the decompression unit 15 is processed, converted into a luminance (Y) signal and a color difference (C) signal, and output to the feature detection unit 14.
- the solid-state imaging device 10 includes a photoelectric conversion unit 11, a compression unit 12, and a compression rate setting unit 13. Imaging data (pixel data) of the solid-state imaging device 10 is output from the compression unit 12 to the outside. In the solid-state imaging device 10, the feature amount is input to the compression rate setting unit 13 from the external feature detection unit 14.
- the photoelectric conversion unit 11 has a plurality of pixels, arranged in a two-dimensional array, that convert incident light into electrical signals, and outputs the converted electrical signals as pixel data. Specifically, the photoelectric conversion unit 11 converts incident light into an electrical signal (digital imaging signal) whose magnitude is proportional to the incident light, and transfers the converted digital imaging signal to the compression unit 12 as pixel data.
- the compression unit 12 performs irreversible compression on the pixel data output from the photoelectric conversion unit 11.
- the compression unit 12 compresses the pixel data according to the compression rate set by the compression rate setting unit 13.
- specifically, the compression unit 12 compresses the pixel data output from the photoelectric conversion unit 11 using the compression rate that the compression rate setting unit 13 has set for the region to which each pixel belongs.
- the compression unit 12 outputs the compressed pixel data to the decompression unit 15.
- the compression unit 12 outputs the compressed pixel data to the outside based on a predetermined output method.
- the compression rate setting unit 13 sets the compression rate for each area of the image based on the features detected by the feature detection unit 14. Specifically, the compression rate setting unit 13 sets the compression rate based on the feature amount output from the feature detection unit 14, which detects a feature for each region of the image from the pixel data output from the photoelectric conversion unit 11.
- the solid-state imaging device 1 is configured as described above.
- the compression rate is set as follows, for example. That is, the feature detection unit 14 detects, from the pixel data decompressed by the decompression unit 15, information on the magnitude relationship of the pixel values in each area of the image with respect to a predetermined threshold as a feature, and the compression rate setting unit 13 variably sets the compression rate based on the magnitude relationship information detected by the feature detection unit 14.
- alternatively, the feature detection unit 14 detects, from the pixel data processed by the image processing unit 16, information on the magnitude relationship of the pixel values in each region of the image with respect to a predetermined value, and the compression rate setting unit 13 variably sets the compression rate based on the magnitude relationship information detected by the feature detection unit 14.
- the method of determining the compression rate is not limited to this method.
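- as an illustration of such a threshold-based policy, the following is a minimal C sketch, assuming that a per-region mean pixel value is compared against a fixed threshold to choose between two compression rates; the function and constant names, and the two-rate policy itself, are hypothetical and not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical rate codes: a higher value means higher compression. */
#define RATE_LOW  0x12  /* low compression: preserve detail */
#define RATE_HIGH 0x34  /* high compression                 */

/* Choose a compression rate for one region from the magnitude relationship
 * of its mean pixel value to a threshold, in the spirit of the feature
 * detection unit 14 and compression rate setting unit 13 described above. */
uint8_t select_region_rate(const uint16_t *pixels, size_t count,
                           uint16_t threshold)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += pixels[i];
    uint16_t mean = (uint16_t)(sum / (count ? count : 1));
    /* Toy policy: regions above the threshold tolerate stronger compression. */
    return (mean > threshold) ? RATE_HIGH : RATE_LOW;
}
```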
- FIG. 2 is a diagram showing a detailed configuration of the photoelectric conversion unit 11.
- the photoelectric conversion unit 11 illustrated in FIG. 2 will be described as an example including 8 horizontal pixels and 8 vertical pixels.
- the photoelectric conversion unit 11 includes, for example, a MOS solid-state imaging device, and includes a pixel cell array 111, a timing generator 112, an AD conversion unit 113, and a horizontal scanning selector 114.
- a plurality of pixel cells Pxy are arranged in the pixel cell array 111 in a two-dimensional horizontal and vertical grid, one per pixel, and each converts incident light into an electrical signal (analog imaging signal) corresponding to the incident light.
- the pixel cell Pxy is connected to the common readout signal line Lx and the selection signal line S1y.
- the AD conversion unit 113 is connected to each of the common readout signal line L1 to the common readout signal line L8, and converts the input data (analog imaging signal) into digital data (pixel data).
- the horizontal scanning selector 114 includes switches S41 to S48 connected to the AD conversion unit 113. When the switches S41 to S48 are on, the data (pixel data) output from the AD conversion unit 113 is output externally.
- the timing generator 112 operates based on the vertical synchronization signal VD, the horizontal synchronization signal HD, and the mode selection signal that are generated outside the photoelectric conversion unit 11 (outside the solid-state imaging device 10). That is, the timing generator 112 selects any one of the selection signal readout line S11 to the selection signal readout line S18 according to a predetermined vertical position from the frame head.
- the timing generator 112 selects the selection signal readout line S11 when, for example, a predetermined vertical position from the top of the frame is the first line with reference to the vertical synchronization signal VD.
- each of the pixel cells P11 to P18 outputs the accumulated signal accumulated therein to the corresponding common signal readout line L1 to common signal readout line L8.
- the output accumulated signal is input to the corresponding AD conversion unit 113.
- the AD conversion unit 113 converts the data read from the pixel cells P11 to P18 into a digital signal (digital image pickup signal) binarized into N bits (N is a natural number).
- the selection signal of any one of the switches S41 to S48 becomes valid (ON) at a predetermined horizontal timing with the horizontal synchronization signal HD as a reference. Then, the data output from the AD conversion unit 113 connected to the switches S41 to S48 is output to the outside of the photoelectric conversion unit 11.
- the photoelectric conversion unit 11 operates as described above.
- FIG. 3 is a diagram showing an equivalent circuit of the pixel cell.
- the pixel cell Pxy includes a photodiode 1101, a read transistor 1102, a floating diffusion 1103, a reset transistor 1104, and an amplifier 1105.
- the photodiode 1101 photoelectrically converts incident light (incident light) to generate electric charge according to the incident light.
- the read transistor 1102 has one of its source and drain connected to the cathode of the photodiode 1101 and the other connected to the floating diffusion 1103. Further, the gate of the read transistor 1102 is connected to the readout signal line. Therefore, the read transistor 1102 reads the charge generated in the photodiode 1101 to the floating diffusion 1103 when a read signal is applied (turned on) on the readout signal line.
- the floating diffusion 1103 has a capacitance, and converts the signal charge read through the read transistor 1102 into a voltage according to the capacitance.
- the source or drain of the reset transistor 1104 is connected to the floating diffusion 1103, and when the reset signal is input to the gate, the charge of the floating diffusion 1103 is reset. In other words, the floating diffusion 1103 is reset via the reset transistor 1104 to which the reset signal is input to the gate before the charge from the photodiode 1101 is read.
- a readout signal line transmits the read signal input to the read transistor 1102, and a reset signal line (not shown) transmits the reset signal input to the reset transistor 1104.
- both the readout signal line and the reset signal line are driven in common for the pixel cells Pxy in each row from the timing generator 112.
- a reset signal is applied to the reset transistor 1104 to reset the floating diffusion 1103.
- the readout signal is applied (turned on) to the readout transistor 1102, whereby the charge of the photodiode 1101 is read out to the floating diffusion 1103.
- Such driving is performed in units of rows of the pixel cells Pxy.
- a high pulse indicating a conduction signal is applied to the selection signal readout line S11 to the selection signal readout line S18 commonly connected to the pixel cells Pxy in each row.
- an electric signal (analog pixel signal, imaging signal) proportional to the light incident on the pixel cell Pxy is output from the amplifier 1105.
- regarding the term compression rate: the higher the compression rate, the smaller the ratio of the code amount after compression to the code amount before compression; conversely, the lower the compression rate, the larger that ratio becomes, approaching the code amount before compression. Therefore, high compression means that compression is performed at a high compression rate, and low compression means that compression is performed at a low compression rate. "Non-compressed" means that compression is not performed and the code amount does not change.
- FIG. 4 is a diagram illustrating detailed configurations of the compression unit 12 and the decompression unit 15.
- the compression unit 12 illustrated in FIG. 4 includes a processing target pixel value input unit 121, a prediction pixel generation unit 122, a code conversion unit 123, a change extraction unit 124, a quantization width determination unit 125, a quantization processing unit 126, and a packing unit 127.
- the processing target pixel value input unit 121 receives, from the photoelectric conversion unit 11, the value (pixel data) of the pixel to be encoded by the compression unit 12 (hereinafter referred to as the target pixel).
- the processing target pixel value input unit 121 receives pixel data from the photoelectric conversion unit 11 and outputs the input pixel data to the prediction pixel generation unit 122 and the code conversion unit 123 at an appropriate timing.
- when the input pixel is a leading target pixel, the processing target pixel value input unit 121 omits the quantization process and outputs the input pixel data directly to the packing unit 127.
- otherwise, the processing target pixel value input unit 121 outputs the pixel data to the prediction pixel generation unit 122 and the code conversion unit 123 at an appropriate timing.
- the predicted pixel generation unit 122 generates a predicted value of the current target pixel of interest using the input pixel data.
- the prediction pixel generation unit 122 outputs the generated prediction value to the code conversion unit 123.
- the code conversion unit 123 performs code conversion on each of the encoding target pixel (pixel data) input from the processing target pixel value input unit 121 and the prediction value input from the prediction pixel generation unit 122, and outputs the results to the change extraction unit 124.
- the change extraction unit 124 performs an exclusive OR operation on the input encoding target pixel code and the prediction value code to calculate bit change information.
- the change extraction unit 124 outputs the calculated bit change information to the quantization width determination unit 125 and the quantization processing unit 126.
- the quantization width determination unit 125 determines the quantization width based on the bit change information input from the change extraction unit 124 and outputs the quantization width to the quantization processing unit 126 and the packing unit 127.
- the quantization processing unit 126 performs a quantization process for quantizing the bit change information input from the change extraction unit 124 using the quantization width calculated by the quantization width determination unit 125.
- the packing unit 127 packs data into a combination of at least one or more leading target pixels, a plurality of quantized values, and at least one quantized width information. Then, the packing unit 127 outputs the packed packing data to a memory such as an SDRAM or the unpacking unit 151.
- FIG. 5 is a flowchart for explaining the image coding method according to the present embodiment.
- the compression unit 12 may be configured by hardware such as LSI (Large Scale Integration), a program executed by a CPU (Central Processing Unit), or the like. The same applies to the following.
- each pixel data is N-bit digital data
- pixel data after quantization (hereinafter referred to as a quantized value) corresponding to each pixel data is M-bit length.
- a leading target pixel of at least one pixel, a quantization value corresponding to a plurality of pixel data, and a code representing a quantization width of the quantization value (hereinafter referred to as quantization width information) are packed into the S bit length by the packing unit 127. Then, the packed packing data is output.
- the natural numbers N, M, and S are determined in advance.
- pixel (target pixel) data (pixel data) to be encoded is input to the processing target pixel value input unit 121 by the photoelectric conversion unit 11.
- the pixel data input to the processing target pixel value input unit 121 is output from the processing target pixel value input unit 121 to the prediction pixel generation unit 122 and the code conversion unit 123 at an appropriate timing.
- in step S101, when the encoding target pixel of interest is input as the leading (first) target pixel (YES in step S101), the quantization process is omitted, and the pixel data input to the processing target pixel value input unit 121 is input directly to the packing unit 127.
- otherwise (NO in step S101), the process proceeds to the prediction pixel generation process.
- the pixel data input to the prediction pixel generation unit 122 is any of the following first to third data. That is, the first data is a leading target pixel that was input to the processing target pixel value input unit 121 before the encoding target pixel of interest.
- the second data is a preceding encoding target pixel that was input to the processing target pixel value input unit 121 earlier.
- the third data is pixel data that was encoded by the compression unit 12, whose encoded data was sent to the decompression unit 15, and which was then decoded by the decompression unit 15.
- the predicted pixel generation unit 122 generates a predicted value of the current target pixel of interest using the input pixel data (step S102).
- Predictive coding is a method of generating a prediction value for an encoding target pixel and encoding a difference value between the encoding target pixel and the prediction value.
- in general, a pixel value is likely to be the same as or close to the values of pixels near the target pixel. Based on this, the value of the encoding target pixel is predicted from neighboring pixel data, and the difference value is made as small as possible to suppress the quantization width.
- FIG. 6 is a diagram for explaining the arrangement of pixels used for calculation of a predicted value.
- FIG. 6 shows the pixel value of the target pixel together with the pixel values a, b, and c of three adjacent pixels, which are used to obtain the predicted value y of the target pixel.
- the predicted pixel generation unit 122 obtains the predicted value y of the target pixel using the pixel values a, b, and c of the neighboring pixels of the target pixel.
- the prediction pixel generation unit 122 calculates the prediction value using any one of (Expression 1) to (Expression 7) used in the predictive coding described above, and outputs the calculated prediction value to the code conversion unit 123. An illustrative sketch of such predictors follows below.
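- (Expression 1) to (Expression 7) themselves do not survive in this text; as a hedged sketch, the following C function shows classic DPCM predictors of the kind such schemes typically use, computing the predicted value y from the neighbors a, b, and c of FIG. 6. The mode numbering is illustrative only and is not claimed to match the patent's expressions.

```c
#include <stdint.h>

static uint16_t clamp16(int v)
{
    return (uint16_t)(v < 0 ? 0 : v > 0xFFFF ? 0xFFFF : v);
}

/* Illustrative DPCM predictors for the target pixel from its neighbors:
 * a (left), b (above), c (above-left), as laid out in FIG. 6. */
uint16_t predict(int mode, uint16_t a, uint16_t b, uint16_t c)
{
    switch (mode) {
    case 1:  return a;                              /* left       */
    case 2:  return b;                              /* above      */
    case 3:  return c;                              /* above-left */
    case 4:  return clamp16((int)a + b - c);        /* plane      */
    case 5:  return clamp16(a + ((int)b - c) / 2);
    case 6:  return clamp16(b + ((int)a - c) / 2);
    default: return (uint16_t)(((int)a + b) / 2);   /* average    */
    }
}
```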
- next, the code conversion unit 123 code-converts each of the encoding target pixel received from the processing target pixel value input unit 121 and the prediction value received from the prediction pixel generation unit 122 into a code expressed in N bits.
- the converted code corresponding to the encoding target pixel (hereinafter, the target pixel code) and the converted code corresponding to the prediction value (hereinafter, the prediction value code) are sent to the change extraction unit 124 by the code conversion unit 123 (step S103).
- the change extraction unit 124 performs an exclusive OR operation between the code of the pixel to be encoded and the code of the prediction value expressed in N bits, and calculates bit change information E having an N-bit length.
- the bit change information is a code from which the code of the target pixel can be calculated using the code of the predicted value. Then, the change extraction unit 124 outputs the calculated bit change information E to the quantization width determination unit 125 and the quantization processing unit 126.
- the quantization width determination unit 125 determines the quantization width J based on the bit change information E sent from the change extraction unit 124, and outputs the determined quantization width J to the quantization processing unit 126 and the packing unit 127 (step S104).
- the quantization width J indicates a value obtained by subtracting the bit length M of the quantization value from the number of effective bit digits of the bit change information E.
- J is a non-negative integer; when the number of significant bit digits of the bit change information E is smaller than M, J is set to 0.
- the quantization processing unit 126 performs quantization processing for quantizing the bit change information E received from the change extraction unit 124 using the quantization width J calculated by the quantization width determination unit 125 (step S106).
- the quantization process using the quantization width J means that the bit change information E between the encoding target pixel and the corresponding predicted value is shifted downward (bit-shifted) by J bits. When the quantization width J is 0, the quantization processing unit 126 does not perform quantization. A sketch of steps S103 to S106 follows below.
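- a minimal sketch of steps S103 to S106 in C, assuming Gray code as the code conversion (the decoder description below explicitly mentions Gray code conversion as an example); the helper names are hypothetical.

```c
#include <stdint.h>

/* Code conversion: Gray code, g = v XOR (v >> 1). */
static uint16_t to_gray(uint16_t v) { return v ^ (v >> 1); }

/* Change extraction (step S103): XOR of the target pixel code and the
 * prediction value code yields the bit change information E. */
static uint16_t bit_change_info(uint16_t target, uint16_t predicted)
{
    return to_gray(target) ^ to_gray(predicted);
}

/* Quantization width J (step S104): significant bit digits of E minus the
 * quantized-value bit length M, floored at 0. */
static unsigned quant_width(uint16_t e, unsigned m)
{
    unsigned sig = 0;
    for (uint16_t t = e; t != 0; t >>= 1)
        sig++;
    return sig > m ? sig - m : 0;
}

/* Quantization (step S106): shift E down by J bits (a no-op when J is 0). */
static uint16_t quantize(uint16_t e, unsigned j) { return (uint16_t)(e >> j); }
```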
- the quantization result (quantized value) is then sent from the quantization processing unit 126 to the packing unit 127.
- the packing unit 127 combines at least one or more leading target pixels, a plurality of quantization values, and at least one or more quantization width information of Q bit length (Q is a natural number). , Packing into S-bit data (step S107). Then, the packing unit 127 outputs the packed packing data to a memory such as an SDRAM or the unpacking unit 151.
- the fixed bit length S to be set may be the same number of bits as the data transfer bus width of the integrated circuit to be used. In addition, when unused bits remain in the rear portion of the packing data bits, dummy data is recorded so as to reach the S bits.
- when the packing process for the encoding target pixel is completed, the process proceeds to step S108.
- in step S108, the compression unit 12 determines whether or not the image encoding process has been completed for the number of pixels Pix packed into S bits.
- Pix is calculated in advance by (Equation 8); one possible reading of that relation is sketched below.
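- (Equation 8) itself does not survive in this text. One plausible reading, stated purely as an assumption, is that an S-bit packet holds one N-bit leading pixel, one Q-bit quantization width, and (Pix - 1) M-bit quantized values, giving the sketch below; for example, S = 64, N = 14, Q = 3, M = 7 would give Pix = 7 with 5 trailing dummy bits, consistent with the dummy-padding rule above.

```c
/* Assumed reading of (Equation 8): pixels per S-bit packet when the packet
 * carries one N-bit leading pixel, one Q-bit quantization width, and
 * (Pix - 1) quantized values of M bits each. Leftover bits are dummy data. */
static unsigned pixels_per_packet(unsigned s, unsigned n, unsigned q,
                                  unsigned m)
{
    return 1u + (s - n - q) / m;
}
```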
- if the result of the determination is NO (NO in step S108), the process returns to step S101, and the compression unit 12 executes the processing from step S101 to step S107 on the next pixel data received by the processing target pixel value input unit 121. On the other hand, if the determination result in step S108 is YES (YES in step S108), the compression unit 12 outputs the encoded data in the buffer memory in units of S bits, and proceeds to step S109.
- in step S109, the compression unit 12 determines whether the encoding process for one image has been completed with the encoded pixel data output in step S108. If the determination result is YES (YES in step S109), the encoding process ends. Conversely, if NO (NO in step S109), the process returns to step S101, and the processing from step S101 to step S108 is executed again.
- the compression unit 12 performs compression processing (image encoding processing).
- as described above, this image encoding method is a method of inputting the image data of a pixel to be compressed (processing target pixel value input unit 121) and compressing the input pixel data into a fixed-length code. It includes: a prediction pixel generation step (prediction pixel generation unit 122) of generating a prediction value of the pixel data from at least one peripheral pixel located around the pixel to be compressed; a code conversion step (code conversion unit 123) of code-converting the pixel data to generate a code; a change extraction step (change extraction unit 124) of extracting bit change information between the code of the code-converted pixel data and the code of the prediction value generated in the prediction pixel generation step; and a quantization step (quantization processing unit 126) of quantizing the bit change information into a quantized value whose bit length is fixed and smaller than the bit length of the bit change information, thereby compressing the pixel data.
- the decompression unit 15 illustrated in FIG. 4 includes an unpacking unit 151, a prediction pixel generation unit 152, a quantization width determination unit 153, an inverse quantization processing unit 154, a code generation unit 155, an inverse code conversion unit 156, and an output unit 157.
- the unpacking unit 151 analyzes the encoded data input (sent) from the packing unit 127 or a memory such as an SDRAM and separates it into a plurality of data.
- the unpacking unit 151 outputs the analyzed encoded data to the quantization width determination unit 153, the inverse quantization processing unit 154, and the output unit 157 at an appropriate timing.
- the quantization width determination unit 153 determines, from the encoded data (quantization width information) input from the unpacking unit 151, the quantization width used in the inverse quantization process for each decoding target pixel, and outputs it to the inverse quantization processing unit 154.
- the inverse quantization processing unit 154 performs inverse quantization using the quantization width in the inverse quantization process input from the quantization width determining unit 153.
- the code generation unit 155 performs code conversion similar to the code conversion in the code conversion unit 123 on the prediction value input from the prediction pixel generation unit 152, and generates a code of the prediction value.
- the reverse code conversion unit 156 performs the reverse of the code conversion performed by the code conversion unit 123 on the target pixel code input from the code generation unit 155 to restore the pixel data. Then, the reverse code conversion unit 156 outputs the pixel data subjected to the reverse code conversion to the output unit 157.
- the predicted pixel generation unit 152 generates a predicted value using the input pixel data.
- the input data is data that is input before the target decoding target pixel and is output from the output unit 157.
- FIG. 7 is a flowchart for explaining the image decoding method according to the present embodiment.
- FIG. 7 shows image decoding processing (decompression processing) performed by the decompression unit 15.
- encoded data is input to the unpacking unit 151.
- the encoded data input to the unpacking unit 151 is encoded data necessary for restoring each pixel data.
- the unpacking unit 151 analyzes the S-bit fixed-length encoded data sent from the packing unit 127 or sent from a memory such as SDRAM, and converts the fixed-length encoded data into a plurality of pieces. Separate into data.
- the unpacking unit 151 separates the sent fixed-length encoded data into an N-bit leading target pixel, Q-bit quantization width information, and M-bit decoding target pixels (quantized values) (step S201).
- the encoded data analyzed by the unpacking unit 151 is transmitted to the quantization width determining unit 153, the inverse quantization processing unit 154, and the output unit 157 at an appropriate timing.
- when the encoded data of interest is a leading target pixel, the unpacking unit 151 omits the inverse quantization process and transmits the encoded data directly to the output unit 157.
- otherwise, the unpacking unit 151 transmits the encoded data to the quantization width determination unit 153, and the process proceeds to the quantization width determination process in the inverse quantization (step S204).
- the quantization width determination unit 153 determines the quantization width J ′ in the inverse quantization process corresponding to each decoding target pixel from the encoded data (quantization width information) received from the unpacking unit 151 and determines The quantized width J ′ thus output is output to the inverse quantization processing unit 154.
- the unpacking unit 151 also transmits the encoded data to the inverse quantization processing unit 154, and the process shifts to the inverse quantization process.
- the inverse quantization processing unit 154 performs inverse quantization using the quantization width J ′ in the inverse quantization processing received from the quantization width determination unit 153.
- the inverse quantization process using the quantization width J ′ is a process of bit-shifting the encoded data (quantized value) received from the unpacking unit 151 upward by the number of J ′.
- the inverse quantization processing unit 154 calculates bit change information E ′ represented by N bits by inverse quantization processing. Note that the inverse quantization processing unit 154 may be understood not to perform the inverse quantization when the quantization width J ′ is “0” (step S205).
- the data input to the prediction pixel generation unit 152 is data that is input before the target pixel to be decoded and output from the output unit 157.
- This data is either the first target pixel or pixel data decoded first and output from the output unit 157 (hereinafter referred to as decoded pixel data).
- the prediction pixel generation unit 152 generates a prediction value represented by N bits using the pixel data input to the prediction pixel generation unit 152 as described above.
- the prediction value generation method uses any one of the above-described prediction expressions (Expression 1) to (Expression 7), namely the same expression as that used in the prediction pixel generation unit 122.
- the predicted pixel generation unit 152 calculates a predicted value.
- the calculated prediction value is output to the code generation unit 155 by the prediction pixel generation unit 152 (step S206).
- the code generation unit 155 performs code conversion similar to the code conversion in the code conversion unit 123 on the prediction value received from the prediction pixel generation unit 152, and the code of the prediction value after the conversion is performed. Generate. That is, the received predicted value is a value before code conversion such as Gray code conversion.
- the code generation unit 155 performs the same code conversion as the code conversion performed by the code conversion unit 123 on the received predicted value, and calculates the converted code. Thereby, the code generation unit 155 calculates a code corresponding to the received predicted value.
- the code generation unit 155 then performs an exclusive OR operation between the N-bit bit change information E′ received from the inverse quantization processing unit 154 and the code of the predicted value after the code conversion, thereby generating the N-bit target pixel code, and outputs the generated target pixel code to the inverse code conversion unit 156 (step S207).
- the reverse code conversion unit 156 performs reverse conversion of the code conversion performed by the code conversion unit 123 on the target pixel code received from the code generation unit 155 to restore the pixel data.
- the pixel data after the reverse code conversion is transmitted to the output unit 157 by the reverse code conversion unit 156 (step S208).
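- the decoding core of steps S205 to S208 can be sketched as follows, under the same Gray-code assumption as the encoder sketch above; E′, J′, and the predicted value are as defined in the surrounding text.

```c
#include <stdint.h>

static uint16_t to_gray(uint16_t v) { return v ^ (v >> 1); }

/* Inverse Gray-code conversion (inverse code conversion, step S208). */
static uint16_t from_gray(uint16_t g)
{
    uint16_t v = g;
    for (unsigned shift = 1; shift < 16; shift <<= 1)
        v ^= (uint16_t)(v >> shift);
    return v;
}

/* Steps S205-S208: inverse quantization (shift up by J'), XOR with the
 * prediction value code (step S207), then inverse code conversion. */
uint16_t decode_pixel(uint16_t qval, unsigned j_prime, uint16_t predicted)
{
    uint16_t e_prime = (uint16_t)(qval << j_prime); /* bit change info E' */
    uint16_t target_code = e_prime ^ to_gray(predicted);
    return from_gray(target_code);
}
```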
- step S209 the unpacking unit 151 determines whether or not the image decoding process has been completed for the number of pixels Pix packed into S bits by the packing unit 127.
- Pix is calculated in advance using (Equation 8), as in the image encoding process.
- step S209 if the determination result for Pix is NO (NO in step S209), the process proceeds to step S203, and the decompression unit 15 receives the next code received by the unpacking unit 151. At least one of steps S203 to S208 is performed on the digitized data. The decompressing unit 15 repeatedly performs the processing from step S203 to step S208 on the pixels P3 to P6, and sequentially outputs the target pixels obtained by the processing.
- if the determination result regarding Pix is YES (YES in step S209), the process moves to step S210.
- step S210 the unpacking unit 151 determines whether or not the decoding process for one image has been completed with the decoded pixel data output from the output unit 157. If the determination result is YES (YES in step S210), the decoding process is terminated. Conversely, if NO (NO in step S210), the process proceeds to step S201, and at least one process from step S201 to step S209 is executed.
- the decompression unit 15 performs decompression processing (image decoding processing).
- Compression rate setting unit: Next, details of the compression rate setting method of the compression rate setting unit 13 constituting the solid-state imaging device 10 will be described using an example.
- the compression rate setting unit 13 sets the compression rate based on the feature amount output from the feature detection unit 14 as described above.
- various types of information can be used as the feature amount input from the feature detection unit 14.
- here, a case where a compression rate is set individually for specific target areas in the imaging screen will be described.
- specifically, a case where the number of target areas, the reference coordinates of each area, and the size of each area are input as the feature amount will be described.
- the compression rate setting unit 13 receives the feature amount from the feature detection unit 14 through a connected control signal, and holds the input feature amount in, for example, an internal register.
- for example, registers may be prepared that can hold all pieces of area information for N areas (N is a natural number).
- alternatively, the internal register storing the information of region n may be newly updated with the information of region n.
- the compression rate setting unit 13 calculates coordinates from the reference position in the horizontal direction and the vertical direction with the upper left corner of the screen to be imaged as a reference. Then, the compression rate setting unit 13 compares and determines whether or not the calculated horizontal coordinate and vertical coordinate are included in up to N regions that are outputs of the feature detection unit 14.
- as for the area information, one area may be set for the entire image; alternatively, the information is not limited to one area, and a plurality of areas such as a first area, a second area, and so on may be set.
- FIGS. 8A and 8B are diagrams illustrating an example in which a plurality of areas are set.
- FIG. 8B is an example showing a non-rectangular region.
- a compression rate Q(n) is selected for each region n, and a standard compression rate Q is selected for regions that do not belong to any of the N rectangular regions. That is, the feature detection unit 14 selects a region of interest in the image as a rectangular region. Then, the compression rate setting unit 13 lowers the compression rate of the region of interest to preserve detail and sets the compression rate so that the non-target regions are highly compressed at the standard compression rate Q; the compression unit 12 can thereby compress efficiently.
- FIG. 9 is an example of a flowchart for explaining a compression rate selection method in a rectangular area.
- in step S301, the feature detection information of the regions is input and initialization is performed, and the process proceeds to step S302.
- in step S302, it is determined by (Equation 9) whether the coordinates (x, y) relative to the upper left of the screen are included in any of the N areas.
- in step S302, a coordinate that does not belong to any of the N target rectangular regions (a non-selected region) is assigned the standard compression rate outside the regions, which is separate from the compression rates set for the up to N regions.
- the process then moves via step S304 to step S305.
- in step S305, it is determined whether or not the determination for all N areas has been completed; when completed (YES in step S305), the process moves to step S306, and when not completed (NO), the area number n is incremented by 1 and the process returns to step S302 to determine the next area n+1.
- in step S306, it is determined whether or not the horizontal pixel scanning is completed. If YES, the process moves to step S307; if NO, the horizontal coordinate x is incremented by 1 and the process returns to step S302 to continue.
- in step S307, it is determined whether or not the vertical pixel scanning is completed. If YES, the compression rate selection for one frame is completed; if NO, the vertical coordinate y is incremented by 1 and the process returns to step S302 to continue.
- note that, for a pixel included in two or more of the N rectangular regions, the compression rate Q(n) of any one of those regions n may be applied.
- alternatively, at the stage where the upper left reference coordinate of rectangular region n is matched, the same compression rate Q(n) set for that region may be applied to the region as a whole. In that case it is not necessary to scan all the pixels of the horizontal W pixel by vertical H pixel region one by one; pixels can be skipped, and the processing amount is reduced accordingly. The per-pixel selection is sketched below.
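- (Equation 9) is likewise missing from this text; for a rectangular region with upper-left reference (x0, y0), width W, and height H, the containment test is presumably x0 ≤ x < x0 + W and y0 ≤ y < y0 + H. The per-pixel selection of FIG. 9 can then be sketched as follows, with the first matching region winning, consistent with the overlap rule above.

```c
#include <stdint.h>

typedef struct {
    unsigned x0, y0; /* upper-left reference coordinate        */
    unsigned w, h;   /* horizontal W pixels, vertical H pixels */
    uint8_t  rate;   /* compression rate Q(n) for region n     */
} region_t;

/* Presumed form of (Equation 9): x0 <= x < x0+W and y0 <= y < y0+H.
 * Pixels in no region get the standard compression rate Q. */
uint8_t rate_at(unsigned x, unsigned y, const region_t *regions,
                unsigned n_regions, uint8_t std_rate)
{
    for (unsigned n = 0; n < n_regions; n++) {
        const region_t *r = &regions[n];
        if (x >= r->x0 && x < r->x0 + r->w &&
            y >= r->y0 && y < r->y0 + r->h)
            return r->rate; /* first match wins on overlap */
    }
    return std_rate;
}
```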
- the region shape is a rectangular region has been described as an example, but the shape is not limited to a rectangular region, and any shape may be used.
- the case where the region shape is not rectangular may be, for example, the case of FIG. 8B, which can be dealt with by giving region information as shown in FIG. 8B.
- the compression rate selection method in the area information in FIG. 8B is the same as the method described in FIG. 9, but unlike the rectangular area, the upper left coordinate cannot be used as the reference coordinate. Therefore, for example, the feature detection unit 14 may give information to the compression rate setting unit 13 as follows.
- FIGS. 10A and 10B are diagrams showing examples of the data format of the solid-state imaging device.
- scanning is performed in the horizontal direction and continuous data is output in units of lines.
- FIG. 10A is an example schematically showing data stored in units of one line in the horizontal direction.
- data for one line is arranged in the horizontal direction and data for each line is arranged in the vertical direction, and the data is stored with the synchronization code (header) added to the head as the storage form of the data of each line.
- the synchronization code does not straddle between lines, and is added and stored independently for each line.
- the first line is data of the frame start line (first line), and a frame start synchronization code (SOF) is added to the head of this data.
- an identification synchronization code (SOR (n)) indicating each area n is added.
- the synchronization code SOR(n) may carry additional information, for example a 1-byte code indicating the compression rate, 2 bytes for the number of pixels to be compressed, and 4 bytes for the total number of bits after compression. The number of bytes may be set for each solid-state imaging device and is not a fixed value. By adding this SOR(n), the decompression unit 15 can detect SOR(n) and decompress only a specific area.
- a line end code EOL is added to the end of the last data of one line
- a frame end code EOF is added to the end of the frame last line.
- the identification headers such as SOF, SOL, and SOR are used to identify and decode the imaging data correctly, and are realized by generating in advance bit patterns of 0s and 1s whose length exceeds the bit length of the imaging data.
- FIG. 10B is an example showing a sequence of SOR synchronization codes.
- the code is, for example, 8 bits, with the maximum value (255) and 0 reserved, but it is not limited to that.
- a conversion table between a value of 1 to 254 and a compression rate may be prepared and converted and defined.
- the lower 4 bits may be defined as a denominator and the upper 4 bits as a numerator, and each may be defined in terms of a 4-bit value.
- for example, when the code amount after compression is 75% of that before compression, the value may be expressed as 0x34, as sketched below.
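- a sketch of this nibble encoding, assuming the convention just stated (upper 4 bits the numerator, lower 4 bits the denominator, so 0x34 reads as 3/4, i.e. 75%):

```c
#include <stdint.h>

/* Upper nibble = numerator, lower nibble = denominator of the post/pre
 * code-amount ratio; 0x34 thus encodes 3/4 (75%). */
static uint8_t encode_ratio(uint8_t numerator, uint8_t denominator)
{
    return (uint8_t)(((numerator & 0x0F) << 4) | (denominator & 0x0F));
}

static unsigned ratio_percent(uint8_t code)
{
    unsigned num = code >> 4, den = code & 0x0F;
    return den ? num * 100u / den : 100u; /* ratio_percent(0x34) == 75 */
}
```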
- FIG. 11 is a flowchart for explaining a procedure for transmitting compressed image data that is independent for each region.
- in step S401, feature information such as region information and compression rate information input from the feature detection unit 14 is acquired and set, and the process proceeds to step S402.
- in step S407, since it is the left end of the line, an area start code SOR0 indicating the first area is added, and information on the compression rate, the number of pixels in area i, and the number of bits after compression is added to the area start code.
- in this way, the compression unit 12 adds at least identification information including compression rate information and data amount information to the compressed pixel data. The solid-state imaging device 10 then outputs the imaging data to which the identification information is added.
- the format of the compressed data has been described based on FIGS. 10A and 10B, but is not limited thereto.
- the format shown in FIGS. 12A and 12B may be used. That is, as shown in FIG. 12A, the SOL may be added after the SOF is added in the frame start line, and the EOL may be added before the EOF is added in the final frame end line. Further, as shown in FIG. 12B, in the last line of the frame, the processing may be ended only by EOL instead of adding EOF.
- the data format described above is particularly effective when the solid-state imaging device 10 outputs with LVDS (Low Voltage Differential Signaling).
- LVDS is an interface in which two signal lines form one set (pair) carrying a differential signal; operating at low voltage and high frequency, it converts data into serial signals for high-speed transfer.
- the solid-state imaging device 10 includes one or more LVDS output pairs.
- the pairs are divided and assigned based on a predetermined rule.
- a method for allocating and dividing pairs will be described with reference to FIGS. 13A and 13B.
- FIG. 13A and FIG. 13B are diagrams showing LVDS output of the solid-state imaging device in the first embodiment.
- FIG. 13A shows an example in which the number of bits of one pixel data per pair is set to 10 bits, and four pairs A to D are output.
- serial conversion is performed by the serial conversion unit 171, output control is performed by the output control unit 172, and the data is output to each of the connected LVDS transceivers A to D.
- Fig. 13B describes the output format of each pair.
- the output control unit 172 distributes the SOL header and adds it to each of the LVDS pairs A to D.
- next, the code SOR(0) of the first area is stored for each of the pairs A, B, C, and D.
- the compressed data is then distributed and output to the pairs A to D in units of 10 bits.
- when the total size of the compressed data in an area is not a multiple of 40 bits, dummy data in which all surplus bits are filled with 1 is output, and the process moves to the next area. Thereafter, data is stored in the same way up to the final EOL.
- the pairs A to D each output independently.
- the number of pairs per LVDS port and the number of bits per pixel are not limited to this example; the settings can be changed within the range up to the number of pairs provided by this solid-state imaging device and the bit width of its AD conversion unit. A sketch of this pair distribution follows below.
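- the 4-pair, 10-bit-per-pair distribution of FIGS. 13A and 13B can be sketched as follows; the emit() callback stands in for one serialized 10-bit transfer on one pair and is a hypothetical helper, and the compressed stream is simplified to a sequence of 10-bit words.

```c
#include <stdint.h>
#include <stddef.h>

#define PAIRS         4  /* LVDS pairs A to D             */
#define BITS_PER_PAIR 10 /* bits per pixel datum per pair */

/* Distribute one region's compressed data across the pairs round-robin,
 * then pad the final PAIRS * BITS_PER_PAIR (40-bit) cycle with all-ones
 * dummy words before moving to the next region, as FIG. 13B describes. */
void send_region(const uint16_t *words, size_t n_words,
                 void (*emit)(int pair, uint16_t word10))
{
    size_t i = 0;
    for (; i < n_words; i++)
        emit((int)(i % PAIRS), (uint16_t)(words[i] & 0x3FF));
    for (; i % PAIRS != 0; i++)        /* pad to a 40-bit boundary   */
        emit((int)(i % PAIRS), 0x3FF); /* dummy bits all set to 1    */
}
```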
- FIG. 14 is a flowchart for explaining a frame data reception procedure in the first embodiment.
- after reception is started, the decompression unit 15 repeats the operation of converting the data of each LVDS pair into parallel data in units of a predetermined number of bits.
- each pair waits to receive a frame start synchronization code (SOF).
- after receiving the SOF (YES in step S501), the process moves to step S503. On the other hand, when the SOF is not received (NO in step S501), reception of the line start code SOL is determined in step S502. If it is received (YES in step S502), the process moves to step S503; if not (NO in step S502), the process returns to step S501 and continues until the SOF or SOL start code is received.
- step S503 after receiving the synchronization code SOR (n) of the region n (YES in step S503), information such as the number of pixels, the compression rate, and the data size of the region n is acquired in step S504.
- step S505 decompression processing of the compressed data in the region n is performed.
- step S506 reception determination of the frame end code EOF is performed. If the frame end code EOF is received (YES in step S506), the process ends. On the other hand, if the frame end code EOF is not received (NO in step S506), the process proceeds to step S507 to determine whether or not the line end code EOL is received.
- step S507 If the line end code EOL is received (YES in step S507), the process returns to step S501 to continue the processing for the next line and thereafter. On the other hand, if not received (NO in step S507), the process returns to step S503 to continue the process for the next area.
- the decompression in step S505 is performed, for example, by the processing of the decompression unit 15 described with reference to FIG. 7; a sketch of the overall reception flow follows below.
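- the reception flow of FIG. 14 can be sketched as a small state machine; next_code() and the other helper functions are placeholders for the actual synchronization-code parser and are not defined by the patent.

```c
typedef enum { SOF, SOL, SOR, EOL, EOF_CODE, DATA } code_t;

code_t next_code(void);         /* placeholder: next parallel-converted code */
void   read_region_info(void);  /* step S504: pixel count, rate, data size   */
void   decompress_region(void); /* step S505: see FIG. 7                     */

void receive_frame(void)
{
    code_t c;
    /* Steps S501-S502: wait for a frame start (SOF) or line start (SOL). */
    do { c = next_code(); } while (c != SOF && c != SOL);
    for (;;) {
        c = next_code();
        if (c == EOF_CODE)              /* step S506: frame complete */
            return;
        if (c == EOL) {                 /* step S507: next line      */
            do { c = next_code(); } while (c != SOF && c != SOL);
            continue;
        }
        if (c == SOR) {                 /* step S503: region header  */
            read_region_info();         /* step S504 */
            decompress_region();        /* step S505 */
        }
    }
}
```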
- Example 1 In this example, a case where the solid-state imaging device 1 according to Embodiment 1 is applied to, for example, an in-vehicle camera or the like will be described. Specifically, an example in which features are detected based on motion information of the entire image will be described.
- FIG. 15 is a block diagram illustrating a configuration of a camera that performs motion detection of the entire subject in the first embodiment. Elements similar to those in FIG. 1 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the camera 2 shown in FIG. 15 differs from the solid-state imaging device 1 according to Embodiment 1 in the configuration of the feature detection unit 24 and in the addition of the speed detection unit 28.
- the configuration of the solid-state imaging element 10 is the same as that in the first embodiment.
- the feature detection unit 24 receives speed information from the speed detection unit 28 as a parameter, in addition to the image data from the image processing unit 16.
- the speed detection unit 28 is a speedometer or the like, and outputs a control signal indicating speed information to the feature detection unit 24.
- FIG. 16 is a diagram schematically illustrating an example of an image captured by the solid-state imaging device 10 according to the first embodiment.
- FIG. 16 schematically illustrates an image captured when the solid-state imaging element 10 is installed facing forward in the traveling direction of an automobile.
- the distance L from the vanishing point coordinate P(Xp, Yp) to a target coordinate (X, Y) is calculated concentrically using (Equation 10), and, for example, a distance T from P is taken as a threshold value.
- control is performed such that the compression rate is reduced in the region at distance L < T from the vanishing point P (referred to as the region of interest 1602) and increased in the region beyond the radius T (the peripheral region 1603).
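- as a minimal sketch of this control, assuming (Equation 10) is the Euclidean distance from the vanishing point (the formula itself is not reproduced in this text) and using illustrative rate values:

    import math

    # Sketch of the region-of-interest control around the vanishing point.
    # Assumes (Equation 10) is L = sqrt((x - xp)^2 + (y - yp)^2); the rate
    # values 0.2 / 0.8 are placeholders, not values from the patent.
    def rate_for_pixel(x, y, xp, yp, t, low_rate=0.2, high_rate=0.8):
        dist = math.hypot(x - xp, y - yp)   # distance L from P(xp, yp)
        if dist < t:
            return low_rate                 # region of interest 1602
        return high_rate                    # peripheral region 1603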
- although the above example calculates the distance from the vanishing point P based on perspective, the method is not restricted to this.
- the higher the speed, the narrower the human field of view; since attention concentrates near the center and the periphery, which appears to move especially quickly, falls outside the object of interest, this property may also be exploited.
- the area and the compression ratio of the area may be changed according to the speed information from the speed detection unit 28, using the speed value against a threshold.
- an external input unit may be provided in the camera 2 and the area may be designated in advance by manual input or the like.
- in that case, the feature detection unit 24 determines whether each region is the one selected through the external input unit. Taking FIG. 16 as an example, the feature detection unit 24 may set the area other than the externally designated area (that is, other than the peripheral area 1603) as the detection target (that is, the attention area 1602), and perform feature detection using the speed detected by the speed detection unit 28 only in that attention area 1602.
- the feature detection unit 24 detects the overall motion of the image as a feature, using difference information between the image data composed of the plurality of pixel data processed by the image processing unit 16 and image data processed by the image processing unit 16 in the past.
- the compression rate setting unit 13 variably sets the compression rate for each region based on the detected motion information of the entire image.
- furthermore, the feature detection unit 24 can add the speed information input from the speed detection unit 28 to the parameters given to the compression rate setting unit 13; as the speed increases, it can give the compression rate setting unit 13 a parameter indicating that the threshold radius T from the vanishing point should be made smaller.
- this widens the peripheral area 1603 in which the compression ratio is increased, and by raising the compression ratio of the non-target area, the amount of data in the non-target area is reduced and the compression efficiency is improved.
- the feature detection unit 24 may also control the entire screen uniformly. Further, the camera 2 may be provided with additional detection capabilities, such as tracking a specific object.
- Example 2 In this example, as a modification of Example 1, a case will be described in which the overall motion is detected using motion vectors based on inter-frame differences instead of the speed detection unit 28.
- FIG. 17A is a block diagram illustrating a configuration of the camera 3 according to the second embodiment. Elements similar to those in FIGS. 1 and 15 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the camera 3 shown in FIG. 17A differs from the solid-state imaging device 1 according to Embodiment 1 in the configuration of a feature detection unit 34 and a moving image encoding unit 39.
- the camera 3 illustrated in FIG. 17A differs from the camera 2 according to Example 1 in the configuration of the feature detection unit 34, in that the moving image encoding unit 39 is provided, and in that the speed detection unit 28 is not provided.
- the moving image encoding unit 39 receives the luminance Y and color difference C information (also referred to as YC data) output from the image processing unit 16.
- the moving image encoding unit 39 performs encoding based on the input YC data, and outputs motion vector information 1908 generated during encoding to the feature detection unit 34.
- FIG. 17B is an example of a motion vector output to the feature detection unit 34.
- the arrows in FIG. 17B schematically show motion vectors for the forward-facing scene of Example 1 described above; each vector is represented by two-dimensional coordinates in the horizontal and vertical directions.
- each partition of the grid on the screen indicates a macroblock, which is the coding unit of the moving image encoding unit 39.
- the arrows grow longer outward from the vanishing point 1909, indicating that the absolute value of the motion vector of a macroblock increases toward the periphery.
- the moving image encoding unit 39 can hold at least one motion vector for each macroblock.
- FIG. 18 is a block diagram illustrating a detailed configuration of the moving image encoding unit 39 according to the second embodiment.
- a configuration example in the case where the encoding format is H.264 is shown.
- the moving image encoding unit 39 includes a subtractor 3902, a DCT quantization unit 3903, an inverse quantization / inverse DCT unit 3904, an intra prediction unit 3905, a deblocking filter 3906, an external memory 3907, a motion detection unit 3908, a motion compensation unit 3909, an inter prediction unit 3910, a prediction determination unit 3911, a variable length coding unit 3912, and an adder 3913.
- the moving image encoding unit 39 performs processing by dividing one frame into macroblocks of 16 horizontal pixels × 16 vertical pixels.
- the YC data 3901 is YC data input from the image processing unit 16 and is input to the motion detection unit 3908 and the subtracter 3902 in units of macroblocks.
- the subtractor 3902 subtracts the prediction value selected by the prediction determination unit 3911 from the input YC data 3901, and the result is input to the DCT quantization unit 3903. The DCT quantization unit 3903 performs DCT and quantization, and inputs the result to the inverse quantization / inverse DCT unit 3904, the intra prediction unit 3905, and the variable length coding unit 3912.
- the intra prediction unit 3905 performs processing only on the image in the frame to calculate prediction data, and sends the calculated prediction data to the prediction determination unit 3911.
- the prediction determination unit 3911 determines the prediction values of the intra prediction unit 3905 and the inter prediction unit 3910 based on a predetermined determination criterion, and outputs the prediction values selected according to the determination result to the subtracter 3902 and the adder 3913.
- the inverse quantization / inverse DCT unit 3904 restores the data quantized by the DCT quantization unit 3903; the adder 3913 then adds the prediction value output from the prediction determination unit 3911 and outputs the sum to the deblocking filter 3906. After the deblocking filter 3906 performs filtering to reduce block noise generated between macroblocks, the result is stored in the external memory 3907 as a reference image. Using the reference image stored in the external memory 3907, motion between frames is detected: the motion vector information 1908 is calculated from difference information with respect to the input data (YC data 3901) and is output to the motion compensation unit 3909 and the feature detection unit 34.
- the motion compensation unit 3909 performs motion compensation using the output motion vector and the like, and outputs the result to the inter prediction unit 3910.
- the inter prediction unit 3910 calculates pixel prediction values between frames using the reference image data of a plurality of frames, and the results are used by the prediction determination unit 3911.
- in the variable length coding unit 3912, after code amount control is performed with the DCT quantization unit 3903, variable length coding such as CAVLC or CABAC is performed, and code data is output.
- the motion detection unit 3908, the motion compensation unit 3909, and the inter prediction unit 3910 perform their processing once the data of at least one preceding frame has been prepared.
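- the patent does not spell out the motion search itself; as a simplified stand-in for the role of the motion detection unit 3908, full-search block matching per 16 × 16 macroblock against the reference frame can be sketched as follows:

    import numpy as np

    def estimate_motion(cur, ref, mb=16, search=8):
        """Return one (dy, dx) motion vector per 16x16 macroblock by
        minimizing the sum of absolute differences (SAD) over a small
        search window; a simplified stand-in, not the H.264 search."""
        h, w = cur.shape
        vectors = {}
        for by in range(0, h - mb + 1, mb):
            for bx in range(0, w - mb + 1, mb):
                block = cur[by:by + mb, bx:bx + mb].astype(np.int32)
                best, best_sad = (0, 0), None
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + mb > h or x + mb > w:
                            continue
                        cand = ref[y:y + mb, x:x + mb].astype(np.int32)
                        sad = np.abs(block - cand).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
                vectors[(by // mb, bx // mb)] = best
        return vectors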
- as described above, the camera 3 further includes, compared with the solid-state imaging device 1 of Embodiment 1, the moving image encoding unit 39 that encodes the image data processed by the image processing unit 16; the feature detection unit 34 detects, as a feature, motion vector information for each region of the image based on the motion vector information output from the moving image encoding unit 39, and the compression rate setting unit 13 variably sets the compression rate for each region based on the detected motion information.
- although an example using H.264 has been described, the present invention is not limited thereto; needless to say, it can also be applied to other encoding methods that use inter-frame motion vectors, such as MPEG.
- for example, the average values (Xave, Yave) of the horizontal and vertical components are calculated over the W × H motion vectors.
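- a short sketch of that averaging over the per-macroblock vectors (for example, those returned by estimate_motion() above); treating the average as a global-motion indicator is an assumption consistent with Example 1:

    def average_motion(vectors):
        """Average the horizontal/vertical components (Xave, Yave) of the
        per-macroblock motion vectors produced by estimate_motion()."""
        if not vectors:
            return 0.0, 0.0
        ys = [v[0] for v in vectors.values()]
        xs = [v[1] for v in vectors.values()]
        return sum(xs) / len(xs), sum(ys) / len(ys)   # (Xave, Yave)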
- Example 3 In this example, another application example of the solid-state imaging device 1 according to Embodiment 1 will be described. Specifically, as an example of a method for the feature detection unit 14 to perform feature detection, a method for detecting a human face and selecting a region including the detected human face will be described.
- the feature detection unit 14 gives (outputs) to the compression rate setting unit 13 a parameter that changes the compression rate depending on whether or not a face region is included.
- since the face of a person is information that is easily noticed in video, it is desirable that image processing and recording be performed on it while maintaining high image quality.
- the feature detection unit 14 is caused to detect a human face based on face detection condition information such as skin color information, information on the approximate shape of each face component such as the eyes, nose, mouth, ears, and hair, information on the positional relationship between components, and contour information. That is, the feature detection unit 14 detects the face detection condition information and determines whether a face, or a region including a face, exists.
- FIG. 19A and FIG. 19B are schematic views of face areas when applied to face detection in the third embodiment.
- FIG. 19A shows a case where one person's face is shown in the image
- FIG. 19B shows a case where two people's faces are shown.
- when the feature detection unit 14 determines that a face exists, it increases the number of regions n by one for each additional person.
- in FIG. 19A, the feature detection unit 14 detects one area and, using a representative point C (not shown) of the face as the starting center point, scans to the edges of the area to calculate the face area.
- the shape of the detected area may be a rectangle spanned by (horizontal minimum, vertical minimum) and (horizontal maximum, vertical maximum).
- the area is not limited to a rectangle; as shown in FIG. 8B of Embodiment 1, area information for an arbitrary shape may be given line by line as a horizontal left end and a width.
- the feature detection unit 14 outputs region information including the face region detected as described above to the compression rate setting unit 13. Then, as shown in FIG. 19A, the compression rate setting unit 13 sets a low compression rate A for the face region to preserve image quality, and a high compression rate B for the region other than the face to achieve high compression. In this way, the amount of data can be efficiently reduced while maintaining image quality.
- in FIG. 19B, the feature detection unit 14 detects two areas: the first face area and the second face area.
- the feature detection unit 14 may set individual compression rates for the two areas after calculating them.
- alternatively, the feature detection unit 14 may combine the plurality of regions into one region as shown in FIG. 19B and reset it as a single area (the first face area after combination).
- the feature detection unit 14 may output region information including the face region detected in this way to the compression rate setting unit 13.
- as described above, in this example an image composed of a plurality of pixels has a face area, that is, an area showing a human face.
- the feature detection unit 14 detects region information indicating a face region in the image as a feature from the pixel data output from the photoelectric conversion unit 11. Based on the region information detected by the feature detection unit 14, the compression rate setting unit 13 variably sets the compression rate individually for at least a region including a face region and a region not including a face region.
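- a minimal sketch of this region handling, with rectangles as (x0, y0, x1, y1) tuples and illustrative rates A and B; the bounding-box union models the combination of FIG. 19B:

    def merge_boxes(boxes):
        """Combine several face rectangles (x0, y0, x1, y1) into one
        bounding box, as in FIG. 19B (first face area after combination)."""
        x0 = min(b[0] for b in boxes)
        y0 = min(b[1] for b in boxes)
        x1 = max(b[2] for b in boxes)
        y1 = max(b[3] for b in boxes)
        return (x0, y0, x1, y1)

    def region_rates(face_boxes, rate_a=0.2, rate_b=0.8, combine=False):
        """Assign the low rate A to each face region and the high rate B
        elsewhere; the rate values are illustrative only."""
        if combine and len(face_boxes) > 1:
            face_boxes = [merge_boxes(face_boxes)]
        return [(box, rate_a) for box in face_boxes] + [("background", rate_b)]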
- Example 4 In this example, a procedure will be described in which the frame rate is converted based on the compression rate set for each region by the compression rate setting unit 13 (for example, based on the amount by which the imaging data is reduced).
- FIG. 20 is a diagram for explaining a procedure in which the compression unit 12 according to the fourth embodiment converts the frame rate.
- as the solid-state imaging device 10, the case of a MOS sensor (MOS type solid-state imaging element) has been described with reference to FIG. 2 and FIG. 3.
- in the solid-state imaging device 10, exposure is performed after reading is completed.
- to raise the frame rate, it is therefore necessary to shorten the period of the synchronization signal while keeping the interval of the synchronization signal of each line equal. This is done as follows.
- the feature detection unit 14 divides the image into a plurality of areas.
- the number of divided areas is N
- the number of pixels in each area n is P (n)
- the compression rate is C (n).
- the total data amount SIZE (l) per line l can be expressed as (Equation 12).
- using (Equation 13), it is determined whether the maximum value MAX over the L lines of SIZE(l) is less than the total amount of data per line without compression.
- if it is, the frame rate can be improved according to that amount.
- for example, when the data size after compression per line in FIG. 20(a) is 80% of the uncompressed size, the interval of one line of the synchronization signal can be reduced as shown from FIG. 20(a) to FIG. 20(b); the period of one line can be shortened to 80%, which corresponds to a calculated speed improvement of 25%.
- if the maximum line value within the frame is calculated and the period of the synchronization signal is set to a value larger than it, the data output of one line does not straddle the next line, so the data of all lines can be output while an equally spaced period is maintained.
- as described above, the solid-state imaging device 1 outputs the pixel data compressed by the compression unit 12, and the frame rate can be changed by changing the cycle of one line based on the relationship between the data amount after compression (that is, the data amount of the compressed pixel data) and the data amount of the pixel data before compression by the compression unit 12.
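- a sketch of this relationship, assuming (Equation 12) is SIZE(l) = Σn P(n) · C(n) · B with B bits per pixel and C(n) the compressed-to-original ratio, and that (Equation 13) compares the per-frame line maximum with the uncompressed line size (the exact formulas are not reproduced in this text):

    def line_period_scale(pixels, rates, bits=12):
        """pixels[l][n], rates[l][n]: per-line lists of area pixel counts P(n)
        and compression ratios C(n) (compressed/original, e.g. 0.8).
        Returns the factor by which one line period can be shortened."""
        uncompressed = max(sum(p) for p in pixels) * bits      # per line, C = 1
        size = [sum(p * c for p, c in zip(pl, cl)) * bits      # (Equation 12)
                for pl, cl in zip(pixels, rates)]
        worst = max(size)                                      # MAX(1..L)
        return min(1.0, worst / uncompressed)                  # (Equation 13)

    # e.g. if every line compresses to 80% of its original size, the scale is
    # 0.8: the one-line period shortens to 80%, a 25% speed-up (1 / 0.8 = 1.25).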
- Example 5 In this example, another application example of the solid-state imaging device 1 according to Embodiment 1 will be described. Specifically, a method in which the feature detection unit 14 controls the compression rate of pixel data using detection information for AF control will be described as another example of the feature detection method.
- AF control methods include an active method using an external sensor and passive methods using the image sensor, such as the phase difference detection method and the contrast detection method.
- the contrast detection method will be described as an example.
- FIG. 21 is a block diagram illustrating a configuration of the solid-state imaging device 4 according to the fifth embodiment.
- the image processing unit 16 includes a preprocessing circuit 461 and a YC processing circuit 462 inside. Further, the feature detection unit 14 includes an HPF circuit 443, an area-by-area contrast integration circuit 444, and a central processing circuit (hereinafter referred to as CPU) 445.
- the pixel data output from the decompression unit 15 undergoes preprocessing in the preprocess circuit 461, such as OB (Optical Black) clamp processing, which detects the level of the imaging data in the light-shielded area and subtracts it for correction, and flaw correction of pixel data degraded by deterioration and the like; the YC processing circuit 462 then performs synchronization processing, edge enhancement processing, and gamma conversion processing.
- the output of the preprocess circuit 461 is input to the HPF circuit 443, formed of a digital high-pass filter, where a spatial high-frequency component signal of the subject is extracted and input to the area-by-area contrast integration circuit 444.
- the area-by-area contrast integration circuit 444 integrates, for each imaging frame, the output signals of the HPF circuit 443 over each predesignated divided area of the imaging screen (hereinafter referred to as an AF area) within one frame, and generates a subject contrast value for each AF area, which is output to the CPU 445.
- the AF areas are, for example, the 81 areas obtained by dividing the imaging screen into 9 parts vertically and 9 parts horizontally as shown in FIG. 22.
- a to i are horizontal area addresses
- 1 to 9 are vertical area addresses.
- the upper left AF area is called the address a1, and the lower right AF area is called the address i9.
- the contrast value of each AF area is denoted AFC(afp), where afp ranges from a1 to i9.
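- a sketch of the HPF circuit 443 and the area-by-area contrast integration circuit 444, using a simple horizontal difference as a stand-in high-pass filter (the actual filter is not specified in this text):

    import numpy as np

    def af_contrast(frame, grid=9):
        """Integrate |high-pass output| over each of the grid x grid AF areas
        (81 areas for grid=9) and return AFC values keyed 'a1'..'i9'."""
        hp = np.abs(np.diff(frame.astype(np.int32), axis=1))   # stand-in HPF
        h, w = hp.shape
        afc = {}
        for r in range(grid):
            for c in range(grid):
                area = hp[r * h // grid:(r + 1) * h // grid,
                          c * w // grid:(c + 1) * w // grid]
                afc[chr(ord('a') + c) + str(r + 1)] = int(area.sum())
        return afc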
- a subject image is formed by the optical lens 410 on the pixel cell array 111 in the photoelectric conversion unit 11 of the solid-state imaging device 10. The optical lens 410 is driven and controlled along the lens optical axis by the lens driving unit 411, which includes an actuator such as a stepping motor, so that the focus position of the subject image formed on the pixel cell array 111 can be adjusted.
- the lens driving unit 411 is driven and controlled by the CPU 445 to set the focus position of the optical lens 410.
- for each frame, the CPU 445 sets in the compression rate setting unit 13 a compression rate corresponding to each AF area, that is, to each of the regions P(N) described above with reference to FIGS. 8A and 8B, where N is 1 to 81. The CPU 445 assigns AF area a1 to P(1), b1 to P(2), ..., i9 to P(81), and sets the horizontal pixel count W(N) and the vertical pixel count H(N) in the compression rate setting unit 13 together with them.
- in other words, the solid-state imaging device 4 further includes the lens driving unit 411 that drives the optical lens 410, and the feature detection unit 14 detects, as a feature, the contrast of each predetermined region of the image using the pixel data output from the photoelectric conversion unit 11.
- the compression rate setting unit 13 variably sets the compression rate individually for at least the in-focus region and the out-of-focus regions.
- an AF operation is started by inputting a trigger for instructing the start of autofocus (hereinafter referred to as AF) from a user (not shown).
- step S601: the CPU 445 controls the lens driving unit 411 so that the lens position is set to the infinity focus end.
- step S602: the CPU 445 resets the accumulated contrast value of each area in the area-by-area contrast integration circuit 444.
- step S603: the lens driving unit 411 is controlled to move the lens position of the optical lens 410 toward the closest focus position by a predetermined amount.
- step S604: for one frame output from the solid-state imaging device 10 after the lens drive in step S603, the contrast values of all the AF areas, accumulated by the area-by-area contrast integration circuit 444 via the decompression unit 15, the preprocess circuit 461, and the HPF circuit 443, are acquired.
- step S605: it is determined whether the lens position of the optical lens 410 is at the close end. If it is not (NO in step S605), the process returns to step S603. If it is (YES in step S605), the process proceeds to step S606.
- step S606: based on the contrast value profiles obtained by the above operation, the profile and AF area having the contrast value peak closest to the close end are selected, and the lens position at the peak of that profile is calculated as the in-focus lens position.
- FIG. 24 shows an example of a contrast value profile.
- a profile 2601 is acquired in the AF areas covering the person indicated by the dotted line in FIG. 22, and a profile 2602 is acquired in the other AF areas.
- the lens position fp1 that is the peak of the profile 2601 in FIG. 24 is determined as the focus lens position.
- step S607: the lens driving unit 411 is controlled to drive the optical lens 410 to the in-focus lens position determined in step S606. As a result, the AF operation is completed.
- step S608: a compression rate Q giving low compression is set in the compression rate setting unit 13 for the AF area selected in the determination of the in-focus lens position, that is, the focused AF area (the AF area indicated by the dotted line in FIG. 22), and a compression rate Q giving high compression is set for the out-of-focus AF areas not selected in that determination.
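- steps S601 to S608 can be sketched as the following scan, reusing af_contrast() above; the lens-drive and frame-capture calls are hypothetical stand-ins:

    def contrast_af(read_frame, positions):
        """positions: lens positions ordered from the infinity end to the
        close end (S601, S603, S605). read_frame(pos) returns one frame
        captured at that position. Returns (in-focus position, AF area)."""
        profiles = {}                                   # S602: cleared profiles
        for pos in positions:                           # S603-S605 scan loop
            afc = af_contrast(read_frame(pos))          # S604: per-area values
            for area, value in afc.items():
                profiles.setdefault(area, []).append((pos, value))
        # S606: pick the profile whose peak lies closest to the close end
        best_area, best_pos, best_idx = None, None, -1
        for area, prof in profiles.items():
            idx = max(range(len(prof)), key=lambda i: prof[i][1])
            if idx > best_idx:                          # later index = closer end
                best_idx, best_area, best_pos = idx, area, prof[idx][0]
        # S607: drive the lens to best_pos; S608: set low compression for
        # best_area and high compression for the remaining AF areas.
        return best_pos, best_area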
- since the compression rate is set so that pixel data in the AF area containing the focused subject is compressed with low compression, deterioration of the focused subject portion due to compression distortion is avoided.
- meanwhile, pixel data in the out-of-focus areas is highly compressed within the solid-state image sensor, so the amount of compressed pixel data output from the solid-state image sensor can be reduced, enabling a higher frame rate or lower power consumption.
- Example 6 In this example, a case where the solid-state imaging device 1 according to Embodiment 1 is applied to, for example, a digital camera will be described.
- FIG. 25 is a block diagram illustrating a configuration of the digital camera 5 according to the sixth embodiment. Elements similar to those in FIGS. 1 and 17A are denoted by the same reference numerals, and detailed description thereof is omitted.
- the digital camera 5 shown in FIG. 25 includes the solid-state imaging device 10, the decompression unit 15, the image processing unit 16, the feature detection unit 14, the moving image encoding unit 39, a recording unit 508, a recording memory 509, a display unit 510, a display device 511, a CPU 512, a program memory 513, and an external memory 514.
- input from an external switch 515 is given to the digital camera 5. Note that the configurations of the solid-state imaging device 10, the decompression unit 15, the image processing unit 16, and the feature detection unit 14 are the same as those described in FIG. 1; the differences are described below.
- at power-on, the CPU 512 reads the program from the program memory 513, determines the mode designated in advance by the input of the external switch 515, for example a recording mode or a reproduction mode, and starts the system in the designated mode.
- the decompression unit 15 decompresses the compressed image data input from the compression unit 12 of the solid-state image sensor 10, and the image processing unit 16 generates YC data.
- the moving image encoding unit 39 encodes a still image / moving image with respect to the YC image data obtained by the image processing unit 16 to generate image data for recording.
- for still images, encoding is performed in, for example, the JPEG format; for moving images, encoding is performed in a format such as H.264 or MPEG.
- the recording unit 508 receives the code data from the moving image encoding unit 39, performs processing such as header generation for recording and area management of the recording medium, and records the data in the recording memory 509.
- the display unit 510 receives processing data of the image processing unit 16 and performs processing for displaying on the display device 511. That is, the image data is converted into an image size and an image format corresponding to the display device 511, and data such as OSD (On Screen Display) is added and transferred to the display device 511.
- the display device 511 includes, for example, a liquid crystal (LCD) display, and displays an image from the input signal. Further, feature information is input from the feature detection unit 14 to the CPU 512, and compression rates can be set in the compression rate setting unit 13 using the various feature amounts obtained as described in Embodiment 1. Note that the feature detection may be performed by the CPU 512 instead of the feature detection unit 14.
- in the reproduction mode, the recorded code data is read out from the recording memory 509 and input to the moving image encoding unit 39, where it is decoded into YC data.
- the decoded YC data is temporarily stored in the external memory 514, then read out from the external memory 514 and processed by the display unit 510, and sent to the display device 511 for display.
- the image data output from the solid-state image sensor 10 may be stored and used in the external memory 514.
- the external memory 514 also serves as a work memory for the CPU 512: data processed by the image processing unit 16 is stored there and read out by the display unit 510 for display, and the output code data of the moving image encoding unit 39 is temporarily stored there and read out by the recording unit 508. Because the external memory is accessed this frequently, first storing the compressed data output from the solid-state imaging device 10 in the external memory 514 and then reading it out into the decompression unit 15 for processing makes it possible to reduce the memory access amount for the imaging data.
- as described above, the digital camera 5 includes the solid-state imaging device 1 according to Embodiment 1, the moving image encoding unit 39 that encodes the pixel data processed by the image processing unit 16, the recording unit 508 that records the pixel data encoded by the moving image encoding unit 39 in the recording memory 509, the display unit 510 that displays the image data processed by the image processing unit 16, the program memory 513 that stores a program, and the CPU 512 that performs system control of the digital camera 5 based on the program read from the program memory 513.
- the CPU 512 performs system control of a recording operation or a reproduction operation based on a setting given from the outside.
- FIG. 26 is a block diagram showing a configuration of the solid-state imaging device according to Embodiment 2 of the present invention. Elements similar to those in FIG. 1 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the solid-state imaging device 6 shown in FIG. 26 differs from the solid-state imaging device 1 according to Embodiment 1 in that the feature detection unit 64 is included in the solid-state imaging element 60.
- the pixel data output from the photoelectric conversion unit 11 is input to the compression unit 12 and also to the feature detection unit 64 built into the solid-state imaging element 60; the feature detection unit 64 performs feature detection for each horizontal line and outputs the result to the compression rate setting unit 13. Note that the operation of the compression rate setting unit 13, the operation of the compression unit 12, and their mutual relationship are the same as in Embodiment 1, and the description thereof is omitted.
- in synchronization with vertical and horizontal synchronization signals (not shown), the feature detection unit 64 selects a compression rate according to the signal level of the pixel data of each pixel in one horizontal line input from the photoelectric conversion unit 11, and outputs to the compression rate setting unit 13 the pixel area within the horizontal line whose pixels are treated as attention-area pixels to be compressed with low compression.
- FIG. 27 is a diagram showing the relationship between the pixel data level of each pixel of one horizontal line selected in the feature detection unit 64 in the second embodiment and the selected compression rate.
- the feature detection unit 64 holds two preset threshold values for the pixel data level, and performs a level comparison with thresholds and hysteresis on the pixels P1, P2, ..., Pn of one horizontal line input from the photoelectric conversion unit 11 to select the compression rate. Specifically, pixels following a level change from below threshold 1 to above it are identified as pixels to be highly compressed, while pixels following a level change from above threshold 2, which is set to a level lower than threshold 1, to below it are recognized as attention pixels; the area information and compression rates are set so that the latter are compressed at the prestored low compression rate.
- in this way, a high compression rate is set for high pixel levels, that is, high-luminance portions of the subject, and a low compression rate is set for medium and low luminance, and the pixel data is compressed accordingly.
- that is, compression is applied aggressively to the high-luminance parts of the subject, where the human eye is insensitive to fine luminance changes and where the level is compressed anyway by the non-linear gamma conversion performed by the subsequent image processing unit 16.
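- a sketch of this two-threshold selection with hysteresis along one horizontal line; the threshold and rate values are illustrative only:

    def select_rates(line, th1=200, th2=150, high=0.8, low=0.2):
        """Walk the pixels P1..Pn of one line and pick a rate per pixel:
        after the level rises above th1 -> high compression; after it falls
        below th2 (th2 < th1) -> low compression (attention pixels)."""
        rates = []
        state = low                       # start in the low-compression state
        prev = line[0]
        for level in line:
            if prev < th1 <= level:       # upward crossing of threshold 1
                state = high
            elif prev > th2 >= level:     # downward crossing of threshold 2
                state = low
            rates.append(state)
            prev = level
        return rates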
- since the pixel data is compressed in this way within the solid-state imaging element, the data amount of the compressed pixel data output from the solid-state imaging element can be reduced, and a high frame rate or low power consumption can be achieved.
- furthermore, since the pixel data compression process is completed within the solid-state imaging element, the number of signal lines and terminals entering the element from outside can be reduced, allowing the apparatus to be made smaller; and since the compression rate is determined in real time, the delay in reflecting the compression rate setting in the compression process can be minimized, so that in principle there is no delay in units of frames.
- in the above, a black-and-white solid-state imaging element that implicitly has no per-pixel color filter has been described, but the present invention is not limited to this.
- a single-plate color solid-state imaging element having a Bayer-array on-chip color filter may also be used; in that case, the feature detection unit 64 is naturally configured to apply the hysteresis to level changes between adjacent pixels of the same color phase.
- as described above, the imaging device of the present invention includes the solid-state imaging device 10 and, outside the solid-state imaging device 10, the feature detection unit 14, the decompression unit 15, and the image processing unit 16; the solid-state imaging device 10 includes the photoelectric conversion unit 11, the compression unit 12, and the compression rate setting unit 13.
- the pixel data of the photoelectric conversion unit 11 is subjected to irreversible compression processing by the compression unit 12 and output to the outside.
- Pixel data output from the solid-state imaging device 10 and decoded into pixel data by the decompression unit 15 is subjected to image processing by the image processing unit 16 and input to the feature detection unit 14 as a frame image.
- the feature detection unit 14 extracts the feature of the image in the frame, generates optimum compression rate information in the individual area, and gives information about the feature to the compression rate setting unit 13.
- the compression rate setting unit 13 sets the compression rate information from the information regarding the feature. Based on the compression rate information, the compression unit 12 performs the lossy compression operation by adaptively changing the compression rate within the frame.
- as a result, the pixel data can be compressed in accordance with subject and shooting conditions, so that it is output from the solid-state imaging device 10 with the quantization error of the compressed and quantized pixel data kept in an optimum state.
- the amount of data per unit time can always be reduced by at least a certain amount.
- thus, an imaging apparatus can be realized without impairing image quality performance; for example, it is possible to suppress an increase in the circuit scale of a camera system including an imaging device such as an image sensor, and to reduce power consumption.
- although the imaging device and the digital camera of the present invention have been described based on the embodiments, the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceived by those skilled in the art, without departing from the spirit of the invention, are also included within the scope of the present invention.
- the present invention can be used for imaging apparatuses, in particular devices that electronically generate and display or record moving images or still images, such as digital still cameras, digital video cameras, surveillance cameras, and drive recorder cameras.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An imaging device capable of consistently reducing the volume of data per unit of time by at least a certain amount, regardless of the imaging conditions, is provided. This imaging device is provided with: a photoelectric conversion unit (11), in which a plurality of pixels (Pxy), which convert incoming light to electric signals, are arranged in a two-dimensional shape, and which outputs a plurality of converted electric signals as pixel data; a feature detection unit (14) that detects the features of each area of an image on the basis of the pixel data output by the photoelectric conversion unit; a compression ratio setting unit (13) that sets the compression ratio for each area on the basis of the features detected by the feature detection unit (14); and a compression unit (12) that performs lossy compression on the pixel data output by the photoelectric conversion unit (11) according to the compression ratios set by the compression ratio setting unit (13).
Description
The present invention relates to an imaging apparatus capable of compressing a video signal.
In recent years, technologies that support high-speed imaging functions have made remarkable progress in imaging devices such as digital still cameras. For example, technologies supporting a high-speed imaging function include increasing the number of pixels by miniaturizing the pixel cells of a single-plate color image sensor and increasing the functionality and speed of operation due to technological advances of MOS type image sensors. An image sensor corresponding to the technology that supports the high-speed imaging function is widely used as an image sensor constituting the imaging apparatus.
In the image sensors constituting imaging devices, the pixel signals output from the image sensor itself have reached high data rates in order to support larger pixel counts and high-speed operation such as high-speed readout. However, coping with this higher data rate raises the following problems: it requires 1) a faster signal-receiving interface circuit, 2) higher processing capacity in the digital circuits that process the video signal, and 3) higher data bandwidth and capacity in the memory that temporarily stores the video signal. Meeting these requirements increases the circuit scale of the camera system as a whole, causing higher cost and higher current consumption.
To deal with this problem, for example, Patent Document 1 discloses a technique for reducing the volume of video signal data, and thus the amount of data handled per unit time, by performing lossy compression through quantization of digitally converted image data.
Further, for example, Patent Document 2 discloses a technique for preventing image quality degradation due to compression distortion caused by irreversible compression of the sensor pixel signal, in which the compression processing applied to the sensor pixel signal is switched ON or OFF according to shooting conditions and circumstances.
However, the above conventional techniques have the following problems.
That is, with the technique disclosed in Patent Document 1 for irreversibly compressing the pixel data of an image sensor, the amount of data per unit time can be reduced. However, since the compression rate of the irreversible compression is fixed, when the suppression of the data amount per unit time is strengthened, unacceptable image quality deterioration due to compression distortion occurs under some conditions, for example for subjects with high spatial frequency, making the image observer feel uncomfortable.
As a technique for avoiding this, Patent Document 2 can be cited; however, the technique disclosed in Patent Document 2 merely switches the compression processing ON or OFF in units of frames according to subject and shooting conditions. Therefore, depending on the shooting conditions, no compression of the pixel signal is performed at all, and the function of suppressing the data amount per unit time is itself not carried out.
The present invention has been made in view of the above circumstances, and an object thereof is to provide an imaging apparatus that can always reduce the data amount per unit time by at least a certain amount, regardless of shooting conditions.
In order to achieve the above object, an imaging apparatus according to one aspect of the present invention includes: a photoelectric conversion unit that has a plurality of pixels arranged two-dimensionally to convert incident light into electrical signals, and that outputs the plurality of converted electrical signals as pixel data; a detection unit that detects a feature for each region of an image based on the pixel data output from the photoelectric conversion unit; a compression rate setting unit that sets a compression rate for each region based on the feature detected by the detection unit; and a compression unit that irreversibly compresses the pixel data output from the photoelectric conversion unit according to the compression rate set by the compression rate setting unit.
According to the present invention, it is possible to realize an imaging apparatus capable of reducing the data amount per unit time by at least a certain amount regardless of shooting conditions.
(Embodiment 1)
Embodiments of the present invention will be described below with reference to the drawings. The embodiment described below is merely an example, and various modifications can be made.
FIG. 1 is a block diagram showing the configuration of the solid-state imaging device according to Embodiment 1 of the present invention.
The solid-state imaging device 1 shown in FIG. 1 includes a solid-state imaging device 10, a feature detection unit 14, an expansion unit 15, and an image processing unit 16.
The feature detection unit 14 detects a feature for each region of the image from the pixel data output from the photoelectric conversion unit 11. Then, the feature detection unit 14 inputs the feature amount to the compression rate setting unit 13. Specifically, the feature detection unit 14 extracts a predetermined feature amount from the input image data, and feeds back the extracted feature amount to the compression rate setting unit 13 of the solid-state imaging device 10.
The decompression unit 15 decompresses the pixel data compressed by the compression unit 12. Specifically, the decompression unit 15 receives the compressed pixel data from the solid-state imaging device 10, decompresses the compressed pixel data, and inputs the decompressed pixel data to the image processing unit 16.
The image processing unit 16 processes the pixel data expanded by the expansion unit 15. Specifically, the pixel data (RAW data) input from the decompression unit 15 is processed, converted into a luminance (Y) signal and a color difference (C) signal, and output to the feature detection unit 14.
The solid-state imaging device 10 includes a photoelectric conversion unit 11, a compression unit 12, and a compression rate setting unit 13. Imaging data (pixel data) of the solid-state imaging device 10 is output from the compression unit 12 to the outside. In the solid-state imaging device 10, the feature amount is input to the compression rate setting unit 13 from the external feature detection unit 14.
The photoelectric conversion unit 11 has a plurality of two-dimensionally arranged pixels that convert incident light into electrical signals, and outputs the plurality of converted electrical signals as pixel data. Specifically, the photoelectric conversion unit 11 converts incident light into an electrical signal (digital imaging signal) whose magnitude is proportional to the incident light, and transfers the converted digital imaging signal to the compression unit 12 as pixel data.
The compression unit 12 irreversibly compresses the pixel data output from the photoelectric conversion unit 11. Here, the compression unit 12 compresses the pixel data according to the compression rate set by the compression rate setting unit 13. Specifically, the pixel data output from the photoelectric conversion unit 11 is compressed using the compression rate set based on the compression rate information, given by the compression rate setting unit 13, of the region to which each pixel belongs. The compression unit 12 outputs the compressed pixel data to the decompression unit 15. The compression unit 12 also outputs the compressed pixel data to the outside based on a predetermined output method.
The compression rate setting unit 13 sets the compression rate for each region of the image based on the features detected by the feature detection unit 14. Specifically, the compression rate setting unit 13 sets the compression rate based on the feature amounts output from the feature detection unit 14, which detects a feature for each region of the image from the pixel data output from the photoelectric conversion unit 11.
As described above, the solid-state imaging device 1 is configured.
Note that the compression rate is set, for example, as follows. That is, the feature detection unit 14 detects, as a feature from the pixel data decompressed by the decompression unit 15, information on the magnitude relationship of the pixel values of each image region with respect to a predetermined threshold, and the compression rate setting unit 13 variably sets the compression rate based on the detected magnitude relationship information. Alternatively, the feature detection unit 14 detects such magnitude relationship information from the pixel data processed by the image processing unit 16, and the compression rate setting unit 13 variably sets the compression rate based on it. Of course, as will be described later, it goes without saying that the method of determining the compression rate is not limited to this one.
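As an illustrative sketch of this threshold-based setting (not the patent's own implementation; the region grid, threshold, and rate values are assumptions):

    import numpy as np

    def set_rates_by_threshold(frame, grid=4, threshold=128, low=0.2, high=0.8):
        """Feature detection: compare the mean pixel value of each region with
        a threshold; compression rate setting: pick a rate per region."""
        h, w = frame.shape
        rates = np.empty((grid, grid))
        for r in range(grid):
            for c in range(grid):
                region = frame[r * h // grid:(r + 1) * h // grid,
                               c * w // grid:(c + 1) * w // grid]
                # brighter regions get stronger compression in this sketch
                rates[r, c] = high if region.mean() > threshold else low
        return rates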
Next, details of each component constituting the solid-state imaging device 1 will be described.
(Details of photoelectric conversion unit 11)
First, the details of the photoelectric conversion unit 11 constituting the solid-state imaging device 10 will be described.
FIG. 2 is a diagram showing a detailed configuration of the photoelectric conversion unit 11. Here, the photoelectric conversion unit 11 illustrated in FIG. 2 will be described as an example including 8 horizontal pixels and 8 vertical pixels.
As shown in FIG. 2, the photoelectric conversion unit 11 includes, for example, a MOS solid-state imaging device, and includes a pixel cell array 111, a timing generator 112, an AD conversion unit 113, and a horizontal scanning selector 114.
The pixel cell array 111 has a plurality of pixel cells Pxy (x = horizontal coordinate, y = vertical coordinate), to which the vertical common readout signal lines Lx (x = 1, 2, ..., 8) and the horizontal selection signal lines S1y, S2y, S3y (y = 1, 2, ..., 8) are connected.
The common readout signal lines Lx (x = 1, 2, ..., 8) are arranged in the vertical direction and are connected in common to the pixel cells Pxy of each column; they are the wiring through which the signals of the commonly connected pixel cells Pxy are read out.
The selection signal lines S1y, S2y, S3y (y = 1, 2, ..., 8) are arranged in the horizontal direction and are the wiring for transmitting, to the pixel cells Pxy of the selected row, the signal (a conduction signal or a non-conduction signal) for reading out the accumulated signals stored in those pixel cells. The selection signal lines are also connected to the timing generator 112, and the corresponding row responds when a high pulse, which is the conduction signal, is applied.
The plurality of pixel cells Pxy are arranged in the pixel cell array 111, one per pixel, in a two-dimensional horizontal and vertical grid, and accumulate the input incident light as an electrical signal (analog imaging signal) corresponding to that light. Each pixel cell Pxy is connected to a common readout signal line Lx and a selection signal line S1y.
The AD conversion unit 113 is connected to each of the common readout signal line L1 to the common readout signal line L8, and converts the input data (analog imaging signal) into digital data (pixel data).
The horizontal scanning selector 114 includes switches S41 to S48 connected to the respective AD conversion units 113; when one of the switches S41 to S48 is on, the data (pixel data) output from the corresponding AD conversion unit 113 is output to the outside.
The timing generator 112 operates based on the vertical synchronization signal VD, the horizontal synchronization signal HD, and the mode selection signal that are generated outside the photoelectric conversion unit 11 (outside the solid-state imaging device 10). That is, the timing generator 112 selects any one of the selection signal readout line S11 to the selection signal readout line S18 according to a predetermined vertical position from the frame head.
Next, the operation of the photoelectric conversion unit 11 configured as described above will be described.
First, the timing generator 112 selects the selection signal readout line S11 when, for example, a predetermined vertical position from the top of the frame is the first line with reference to the vertical synchronization signal VD.
Next, each of the pixel cells P11 to P18 outputs the accumulated signal accumulated therein to the corresponding common signal readout line L1 to common signal readout line L8. The output accumulated signal is input to the corresponding AD conversion unit 113.
Next, the AD conversion unit 113 converts the data read from the pixel cells P11 to P18 into a digital signal (digital image pickup signal) binarized into N bits (N is a natural number).
Subsequently, in the horizontal scanning selector 114, the selection signal of any one of the switches S41 to S48 becomes valid (ON) at a predetermined horizontal timing with the horizontal synchronization signal HD as a reference. Then, the data output from the AD conversion unit 113 connected to the switches S41 to S48 is output to the outside of the photoelectric conversion unit 11.
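The readout sequence described above can be sketched behaviorally as follows; the analog stage is idealized and the bit depth is an assumption, not a value from the patent:

    def read_out(pixel_array, bits=10):
        """pixel_array: 8x8 list of analog levels in [0.0, 1.0].
        Models row selection by S11..S18, AD conversion per column, and
        horizontal scanning by switches S41..S48."""
        frame = []
        full_scale = (1 << bits) - 1
        for row in pixel_array:              # timing generator selects one line
            digital = [round(v * full_scale) for v in row]   # AD conversion
            for code in digital:             # horizontal scan: S41..S48 in turn
                frame.append(code)           # output to the outside
        return frame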
The photoelectric conversion unit 11 operates as described above.
Here, an example of a specific circuit (equivalent circuit) constituting the pixel cell Pxy described above will be described. FIG. 3 is a diagram showing an equivalent circuit of the pixel cell.
As shown in FIG. 3, the pixel cell Pxy includes a photodiode 1101, a read transistor 1102, a floating diffusion 1103, a reset transistor 1104, and an amplifier 1105.
The photodiode 1101 photoelectrically converts incident light (incident light) to generate electric charge according to the incident light.
The read transistor 1102 has one of a source and a drain connected to the gate (cathode) of the photodiode 1101 and the other of the source and the drain connected to the floating diffusion 1103. Further, the read transistor 1102 is connected to the read signal line at the gate. Therefore, the read transistor 1102 reads the charge generated in the photodiode 1101 to the floating diffusion 1103 when a read signal is applied (turned on) to the read signal line.
The floating diffusion 1103 has a capacitance, and converts the signal charge read through the read transistor 1102 into a voltage according to the capacitance.
The source or drain of the reset transistor 1104 is connected to the floating diffusion 1103, and when the reset signal is input to the gate, the charge of the floating diffusion 1103 is reset. In other words, the floating diffusion 1103 is reset via the reset transistor 1104 to which the reset signal is input to the gate before the charge from the photodiode 1101 is read.
The amplifier 1105 has a floating diffusion 1103 connected to its input terminal and a common signal readout line Lx (x = 1, 2,..., 8) connected to its output terminal. Therefore, the amplifier 1105 amplifies the voltage of the floating diffusion 1103 and outputs it to the common signal readout line Lx (x = 1, 2,..., 8). For example, in the pixel cell P11 shown in FIG. 2, the amplifier 1105 outputs the amplified voltage to the common signal readout line L11 via a switching element (not shown).
Next, the operation of the pixel cell Pxy configured as described above will be described. Note that FIG. 2 does not show the signal line (readout signal line) for transmitting the readout signal input to the readout transistor 1102 or the signal line (reset signal line) for transmitting the reset signal input to the reset transistor 1104. Both the readout signal line and the reset signal line are driven in common for the pixel cells Pxy of each row by the timing generator 112.
First, a reset signal is applied to the reset transistor 1104 to reset the floating diffusion 1103.
Next, a readout signal is applied to (turns on) the readout transistor 1102, whereby the charge of the photodiode 1101 is read out to the floating diffusion 1103. Such driving is performed row by row for the pixel cells Pxy.
Next, a high pulse indicating a conduction signal is applied to the selection signal readout lines S11 to S18, which are commonly connected to the pixel cells Pxy of each row.
As a result, an electric signal (analog pixel signal, imaging signal) proportional to the light incident on the pixel cell Pxy is output from the amplifier 1105.
Next, details of the compression unit 12 and the decompression unit 15 constituting the solid-state imaging device 10 will be described.
Regarding the "compression rate" used below, a higher compression rate means a smaller ratio of the code amount after compression to the code amount before compression; conversely, a lower compression rate means that this ratio is larger and the code amount after compression is closer to the code amount before compression. Accordingly, "high compression" means compressing at a high compression rate, and "low compression" means compressing at a low compression rate. "Non-compressed" means that no compression is performed and the code amount does not change.
(Details of the compression unit)
FIG. 4 is a diagram illustrating the detailed configurations of the compression unit 12 and the decompression unit 15.
First, the compression unit 12 will be described. The compression unit 12 illustrated in FIG. 4 includes a processing target pixel value input unit 121, a prediction pixel generation unit 122, a code conversion unit 123, a change extraction unit 124, a quantization width determination unit 125, a quantization processing unit 126, and a packing unit 127.
The processing target pixel value input unit 121 receives, from the photoelectric conversion unit 11, the value (pixel data) of a pixel to be encoded by the compression unit 12 (hereinafter referred to as a target pixel), and outputs the input pixel data to the prediction pixel generation unit 122 and the code conversion unit 123 at an appropriate timing. Note that when the input pixel data is the pixel data of the leading target pixel, the processing target pixel value input unit 121 skips the quantization process and outputs the pixel data directly to the packing unit 127. When the input pixel data is not the pixel data of the leading target pixel, it outputs the pixel data to the prediction pixel generation unit 122 and the code conversion unit 123 at an appropriate timing.
The prediction pixel generation unit 122 generates a predicted value of the current target pixel of interest using the input pixel data, and outputs the generated predicted value to the code conversion unit 123.
The code conversion unit 123 code-converts each of the encoding target pixel (pixel data) input from the processing target pixel value input unit 121 and the predicted value input from the prediction pixel generation unit 122, and outputs the results to the change extraction unit 124.
The change extraction unit 124 performs an exclusive OR operation between the input code of the encoding target pixel and the code of the predicted value to calculate bit change information, and outputs the calculated bit change information to the quantization width determination unit 125 and the quantization processing unit 126.
The quantization width determination unit 125 determines the quantization width based on the bit change information input from the change extraction unit 124, and outputs it to the quantization processing unit 126 and the packing unit 127.
The quantization processing unit 126 performs quantization processing that quantizes the bit change information input from the change extraction unit 124 using the quantization width calculated by the quantization width determination unit 125.
The packing unit 127 packs at least one leading target pixel, a plurality of quantized values, and at least one piece of quantization width information into combined data, and outputs the packed data to a memory such as an SDRAM or to the unpacking unit 151.
Next, the compression method (image encoding) of the compression unit 12 configured as described above will be described.
<Image encoding processing>
FIG. 5 is a flowchart for explaining the image encoding method according to the present embodiment.
Note that all or part of the encoding processing of the compression unit 12 described below need not be implemented within the solid-state imaging device 10. The compression unit 12 may be configured by hardware such as an LSI (Large Scale Integration) or by a program executed by a CPU (Central Processing Unit). The same applies below.
In this embodiment, each pixel data is N-bit digital data, and the quantized pixel data corresponding to each pixel data (hereinafter referred to as a quantized value) has a bit length of M. The compression unit 12 packs at least one leading target pixel, quantized values corresponding to a plurality of pixel data, and a code representing the quantization width of the quantized values (hereinafter referred to as quantization width information) into an S-bit length using the packing unit 127, and outputs the packed data. Here, the natural numbers N, M, and S are assumed to be determined in advance.
First, the pixel data of a pixel to be encoded (target pixel) is input from the photoelectric conversion unit 11 to the processing target pixel value input unit 121.
Next, the pixel data input to the processing target pixel value input unit 121 is output by the processing target pixel value input unit 121 to the prediction pixel generation unit 122 and the code conversion unit 123 at an appropriate timing.
However, when the encoding target pixel of interest is input as the leading target pixel (YES in step S101), the quantization process is skipped and the processing target pixel value input unit 121 inputs the pixel data directly to the packing unit 127. On the other hand, when the encoding target pixel of interest is not the leading target pixel (NO in step S101), the process proceeds to the prediction pixel generation process.
Here, the pixel data input to the prediction pixel generation unit 122 is one of three kinds of data. The first is the leading target pixel input to the processing target pixel value input unit 121 before the current encoding target pixel. The second is a previous encoding target pixel input earlier to the processing target pixel value input unit 121. The third is pixel data that was previously encoded by the compression unit 12, sent to the decompression unit 15 as encoded data, and decoded by the decompression unit 15.
Next, the prediction pixel generation unit 122 generates a predicted value of the current target pixel of interest using the input pixel data (step S102).
Note that predictive encoding is one encoding method for pixel data. Predictive encoding generates a predicted value for the encoding target pixel and encodes the difference between the encoding target pixel and the predicted value. For pixel data, the value of a pixel adjacent to the pixel of interest is highly likely to be identical or close to the value of the pixel of interest. Based on this property, the value of the encoding target pixel is predicted from neighboring pixel data, so that the difference value is kept as small as possible and the quantization width is suppressed.
Here, the calculation of the predicted value will be described.
FIG. 6 is a diagram for explaining the arrangement of pixels used to calculate the predicted value.
In FIG. 6, x denotes the pixel value of the pixel of interest (target pixel), and a, b, and c denote the pixel values of three pixels adjacent to the target pixel that are used to obtain the predicted value y of the pixel of interest.
Here, commonly used prediction formulas are shown in (Formula 1) to (Formula 7).

y = a ... (Formula 1)
y = b ... (Formula 2)
y = c ... (Formula 3)
y = a + b - c ... (Formula 4)
y = a + (b - c)/2 ... (Formula 5)
y = b + (a - c)/2 ... (Formula 6)
y = (a + b)/2 ... (Formula 7)
In this way, the prediction pixel generation unit 122 obtains the predicted value y of the pixel of interest using the pixel values a, b, and c of its adjacent pixels. The prediction pixel generation unit 122 obtains the prediction error Δ (= y − x) between the predicted value y and the encoding target pixel x, and this prediction error Δ is encoded. The prediction pixel generation unit 122 calculates the predicted value using one of the prediction formulas (Formula 1) to (Formula 7) used in the predictive encoding described above, and outputs the calculated predicted value to the code conversion unit 123. Note that when an internal memory buffer usable for the compression process can be secured, prediction is not limited to the formulas above: peripheral pixels other than those adjacent to the pixel of interest may also be held in the memory buffer and used for prediction, and a prediction formula with improved prediction accuracy can then be used. In this embodiment, as an example, the prediction formula (Formula 1) is used in step S102.
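For illustration only, the seven prediction formulas can be sketched in Python as follows; the function name and interface are assumptions and not part of the original disclosure. Integer division stands in for the fixed-point arithmetic a hardware implementation would use.

```python
def predict(a: int, b: int, c: int, formula: int = 1) -> int:
    """Predicted value y of the pixel of interest from the neighbor
    pixel values a, b, c, per (Formula 1) to (Formula 7)."""
    if formula == 1: return a
    if formula == 2: return b
    if formula == 3: return c
    if formula == 4: return a + b - c
    if formula == 5: return a + (b - c) // 2
    if formula == 6: return b + (a - c) // 2
    if formula == 7: return (a + b) // 2
    raise ValueError("formula must be 1..7")

# The embodiment uses (Formula 1), y = a; the prediction error is
# delta = y - x for an encoding target pixel x.
```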
Next, the code conversion unit 123 code-converts each of the encoding target pixel received from the processing target pixel value input unit 121 and the predicted value received from the prediction pixel generation unit 122 into a code expressed in N bits. The converted code corresponding to the encoding target pixel (hereinafter, the code of the target pixel) and the code corresponding to the predicted value (hereinafter, the code of the predicted value) are then sent by the code conversion unit 123 to the change extraction unit 124 (step S103).
Next, the change extraction unit 124 performs an exclusive OR operation between the N-bit code of the encoding target pixel and the N-bit code of the predicted value, and calculates N-bit bit change information E. The bit change information is a code from which the code of the target pixel can be recovered using the bit change information and the code of the predicted value. The change extraction unit 124 then outputs the calculated bit change information E to the quantization width determination unit 125 and the quantization processing unit 126.
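As a minimal sketch of step S103 and the bit-change extraction, assuming the code conversion is the Gray-code conversion mentioned later in the decoding description (the function names are assumptions):

```python
def to_gray(v: int) -> int:
    """Code-convert an N-bit value to its Gray code, one example of the
    conversion performed by the code conversion unit 123."""
    return v ^ (v >> 1)

def bit_change_info(x: int, y: int) -> int:
    """Bit change information E: XOR of the code of the target pixel x
    and the code of the predicted value y (change extraction unit 124)."""
    return to_gray(x) ^ to_gray(y)
```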
Next, the quantization width determination unit 125 determines the quantization width J based on the bit change information E sent from the change extraction unit 124, and outputs the determined quantization width J to the quantization processing unit 126 and the packing unit 127 (step S104). The quantization width J is the value obtained by subtracting the bit length M of the quantized value from the number of significant bit digits of the bit change information E. Here, J is a non-negative integer; when the number of significant bit digits of the bit change information E is smaller than M, J is set to 0.
Next, the quantization processing unit 126 performs quantization processing that quantizes the bit change information E received from the change extraction unit 124 using the quantization width J calculated by the quantization width determination unit 125 (step S106). The quantization processing using the quantization width J shifts the bit change information E between the encoding target pixel and its corresponding predicted value toward the lower bits by J bits (a bit shift). When the quantization width J is 0, the quantization processing unit 126 may be understood as performing no quantization. The quantization result (quantized value) output from the quantization processing unit 126 is then sent by the quantization processing unit 126 to the packing unit 127.
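Under the definitions above (N-bit bit change information E quantized to an M-bit value), steps S104 and S106 reduce to the following sketch; the names are assumptions:

```python
def quantization_width(e: int, m: int) -> int:
    """Quantization width J = (significant bit digits of E) - M,
    clamped at 0 (quantization width determination unit 125, step S104)."""
    return max(e.bit_length() - m, 0)

def quantize(e: int, j: int) -> int:
    """Quantize E by shifting it down by J bits; J = 0 means no
    quantization (quantization processing unit 126, step S106)."""
    return e >> j
```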
Next, the packing unit 127 combines at least one leading target pixel, a plurality of quantized values, and at least one piece of Q-bit (Q is a natural number) quantization width information, and packs them into S-bit data (step S107). The packing unit 127 then outputs the packed data to a memory such as an SDRAM or to the unpacking unit 151. As the fixed bit length S, the same number of bits as the data transfer bus width of the integrated circuit used is conceivable. When unused bits remain at the tail of the packed data, dummy data is recorded so that the data reaches S bits.
When the packing processing of the encoding target pixel is completed, the process proceeds to step S108.
Next, in step S108, the compression unit 12 determines whether the image encoding processing has been completed for the number of pixels Pix packed into S bits. Here, Pix is assumed to be calculated in advance by the following (Equation 8).
Pix = S / (Q + M) ... (Equation 8)
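For illustration, a hedged sketch of the packing of step S107 together with the pixel count of (Equation 8); the field order and the dummy-fill value are assumptions:

```python
def pack_s_bits(head_pixel: int, jw_pairs, n: int, q: int, m: int,
                s: int) -> int:
    """Concatenate a leading N-bit pixel followed by (quantization width,
    quantized value) pairs of Q + M bits each into one S-bit word, filling
    any remaining tail bits with dummy data (packing unit 127, step S107)."""
    pix = s // (q + m)                 # pixels per word, cf. (Equation 8)
    word, used = head_pixel, n
    for j, v in jw_pairs[:pix]:
        word = (word << (q + m)) | (j << m) | v
        used += q + m
    assert used <= s, "S must be large enough for the chosen N, Q, M"
    return word << (s - used)          # dummy fill to reach S bits
```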
If the result of the determination in step S108 is NO (NO in step S108), the process returns to step S101, and the compression unit 12 executes at least one of the processes from step S101 to step S107 on the pixel data received by the processing target pixel value input unit 121. On the other hand, if the result of the determination is YES (YES in step S108), the compression unit 12 outputs the encoded data in the buffer memory in units of S bits, and the process proceeds to step S109.
Finally, in step S109, the compression unit 12 determines whether the encoding processing for one image has been completed with the output of the encoded pixel data in step S108. If the determination result is YES (YES in step S109), the encoding processing ends. Conversely, if NO (NO in step S109), the process returns to step S101 and at least one of the processes from step S101 to step S108 is executed.
As described above, the compression unit 12 performs the compression processing (image encoding processing).
As described above, this image encoding method receives the pixel data of a pixel to be compressed (processing target pixel value input unit 121) and compresses the input pixel data (into a fixed-length code). The method includes: a prediction pixel generation step (prediction pixel generation unit 122) of generating a predicted value of the pixel data from at least one peripheral pixel located around the pixel to be compressed; a code conversion step (code conversion unit 123) of code-converting the pixel data to generate a code; and a quantization step (quantization processing unit 126) of compressing the pixel data into a quantized value by quantizing the bit change information (change extraction unit 124) between the code of the pixel data generated in the code conversion step and the code of the predicted value generated in the prediction pixel generation step, the quantized value having a number of bits smaller than that of the bit change information (and a fixed length).
(Details of the decompression unit)
Next, the decompression unit 15 will be described.
The decompression unit 15 shown in FIG. 4 includes an unpacking unit 151, a prediction pixel generation unit 152, a quantization width determination unit 153, an inverse quantization processing unit 154, a code generation unit 155, an inverse code conversion unit 156, and an output unit 157.
The unpacking unit 151 analyzes the encoded data input from the packing unit 127 or from a memory such as an SDRAM, separates it into a plurality of data, and outputs the analyzed encoded data to the quantization width determination unit 153, the inverse quantization processing unit 154, and the output unit 157 at appropriate timings.
The quantization width determination unit 153 determines, from the encoded data (quantization width information) input from the unpacking unit 151, the quantization width for the inverse quantization processing corresponding to each decoding target pixel, and outputs it to the inverse quantization processing unit 154.
The inverse quantization processing unit 154 performs inverse quantization using the quantization width for the inverse quantization processing input from the quantization width determination unit 153.
The code generation unit 155 applies, to the predicted value input from the prediction pixel generation unit 152, the same code conversion as that performed by the code conversion unit 123, and generates the code of the predicted value.
The inverse code conversion unit 156 applies, to the code of the target pixel input from the code generation unit 155, the inverse of the code conversion performed by the code conversion unit 123, thereby restoring the pixel data, and outputs the inversely converted pixel data to the output unit 157.
The prediction pixel generation unit 152 generates a predicted value using the input pixel data. Here, the input data is data that was input before the current decoding target pixel and output from the output unit 157.
Next, the decompression method (image decoding) of the decompression unit 15 configured as described above will be described.
<Decompression processing>
FIG. 7 is a flowchart for explaining the image decoding method according to the present embodiment. FIG. 7 shows the image decoding processing (decompression processing) performed by the decompression unit 15.
First, encoded data is input to the unpacking unit 151. The encoded data input here is the encoded data necessary for restoring each pixel data.
Next, the unpacking unit 151 analyzes the S-bit fixed-length encoded data sent from the packing unit 127 or from a memory such as an SDRAM, and separates the fixed-length encoded data into a plurality of data. That is, the unpacking unit 151 separates the received fixed-length encoded data into the N-bit leading target pixel, the Q-bit quantization width information, and the M-bit pixels to be decoded (hereinafter referred to as decoding target pixels: quantized values) (step S201).
Next, the encoded data analyzed by the unpacking unit 151 is transmitted to the quantization width determination unit 153, the inverse quantization processing unit 154, and the output unit 157 at appropriate timings.
When the encoded data of interest is input as the leading target pixel (YES in step S202), it is received as N-bit pixel data that retains the dynamic range before encoding. The unpacking unit 151 therefore skips the inverse quantization processing and transmits the encoded data of interest directly to the output unit 157.
Next, when the encoded data of interest is quantization width information (YES in step S203), the unpacking unit 151 transmits the encoded data to the quantization width determination unit 153, and the process proceeds to the quantization width determination processing for inverse quantization (step S204). The quantization width determination unit 153 determines, from the encoded data (quantization width information) received from the unpacking unit 151, the quantization width J' for the inverse quantization processing corresponding to each decoding target pixel, and outputs the determined quantization width J' to the inverse quantization processing unit 154.
On the other hand, when the encoded data of interest is not quantization width information (NO in step S203), the unpacking unit 151 transmits the encoded data to the inverse quantization processing unit 154, and the process proceeds to the inverse quantization processing.
Next, in the inverse quantization processing, the inverse quantization processing unit 154 performs inverse quantization using the quantization width J' received from the quantization width determination unit 153. The inverse quantization processing using the quantization width J' shifts the encoded data (quantized value) received from the unpacking unit 151 toward the upper bits by J' bits. By this inverse quantization, the inverse quantization processing unit 154 calculates bit change information E' expressed in N bits. When the quantization width J' is 0, the inverse quantization processing unit 154 may be understood as performing no inverse quantization (step S205).
Here, the data input to the prediction pixel generation unit 152 is data that was input before the current decoding target pixel and output from the output unit 157. This data is either the leading target pixel or pixel data that was decoded earlier and output from the output unit 157 (hereinafter referred to as decoded pixel data).
Next, the prediction pixel generation unit 152 generates a predicted value expressed in N bits using the pixel data input as described above. The predicted value is generated using one of the prediction formulas (Formula 1) to (Formula 7) described above; the prediction pixel generation unit 152 calculates the predicted value using the same prediction formula as that used by the prediction pixel generation unit 122. The calculated predicted value is output by the prediction pixel generation unit 152 to the code generation unit 155 (step S206).
Next, the code generation unit 155 applies, to the predicted value received from the prediction pixel generation unit 152, the same code conversion as that performed by the code conversion unit 123, and generates the converted code of the predicted value. That is, the received predicted value is a value before code conversion such as Gray code conversion; the code generation unit 155 applies the same code conversion as that of the code conversion unit 123 to the received predicted value, thereby calculating the code corresponding to the received predicted value. Furthermore, the code generation unit 155 performs an exclusive OR operation between the N-bit bit change information E' received from the inverse quantization processing unit 154 and the code of the predicted value after code conversion, generates the N-bit code of the target pixel, and outputs the generated code of the target pixel to the inverse code conversion unit 156 (step S207).
Next, the inverse code conversion unit 156 applies, to the code of the target pixel received from the code generation unit 155, the inverse of the code conversion performed by the code conversion unit 123 to restore the pixel data. The pixel data after the inverse code conversion is transmitted by the inverse code conversion unit 156 to the output unit 157 (step S208).
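For illustration, a minimal sketch of steps S205 to S208, again assuming the Gray-code conversion and reusing the to_gray helper sketched earlier; the names are assumptions:

```python
def from_gray(g: int) -> int:
    """Inverse of the Gray-code conversion (inverse code conversion
    unit 156): XOR together all right shifts of g."""
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v

def decode_pixel(qval: int, j: int, y_pred: int) -> int:
    """Restore one pixel from its quantized value, the quantization width
    J', and the predicted value."""
    e = qval << j                       # inverse quantization, step S205
    code = e ^ to_gray(y_pred)          # code of the target pixel, step S207
    return from_gray(code)              # inverse code conversion, step S208
```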
Next, in step S209, it is determined, for example by the unpacking unit 151, whether the image decoding processing has been completed for the number of pixels Pix packed into S bits by the packing unit 127. Here, Pix is assumed to be calculated in advance by (Equation 8), as in the image encoding processing.
If the determination result for Pix in step S209 is NO (NO in step S209), the process returns to step S203, and the decompression unit 15 executes at least one of the processes from step S203 to step S208 on the next encoded data received by the unpacking unit 151. The decompression unit 15 repeats the processes from step S203 to step S208 for the pixels P3 to P6 and sequentially outputs the target pixels obtained by the processing.
On the other hand, if the determination result for Pix in step S209 is YES (YES in step S209), the process proceeds to step S210.
Next, in step S210, the unpacking unit 151 or the like determines whether the decoding processing for one image has been completed with the decoded pixel data output from the output unit 157. If the determination result is YES (YES in step S210), the decoding processing ends. Conversely, if NO (NO in step S210), the process returns to step S201 and at least one of the processes from step S201 to step S209 is executed.
As described above, the decompression unit 15 performs the decompression processing (image decoding processing).
(Compression rate setting unit)
Next, details of the compression rate setting method of the compression rate setting unit 13 constituting the solid-state imaging device 10 will be described using an example.
The compression rate setting unit 13 sets the compression rate based on the feature amount output by the feature detection unit 14, as described above. Various kinds of information can serve as the feature amount input from the feature detection unit 14. Below, as an example, the case where compression rates are set individually for specific target regions of the imaging screen detected by the feature detection unit 14 is described, with the number of target regions, the reference coordinates of each region, and the size of each region as inputs.
The compression rate setting unit 13 receives the feature amount from the feature detection unit 14 through the connected control signal and holds the input feature amount in, for example, an internal register. Here, the compression rate setting unit 13 is assumed to have an internal register and to hold the information in that register.
When the input feature amounts are held in internal registers, N registers capable of holding all the region information for up to N regions (N is a natural number) may be prepared. Alternatively, the elapsed time during output scanning from the head of the frame may be observed, and when scanning of a region m (m is a natural number equal to or less than N) has clearly finished, the internal register storing the information of that region may be updated with the information of a new region.
The compression rate setting unit 13 calculates coordinates from the reference position in the horizontal and vertical directions, with the upper left corner of the imaging screen as the reference. The compression rate setting unit 13 then compares the calculated horizontal and vertical coordinates against the up to N regions output by the feature detection unit 14 to determine whether the coordinates fall within any of them. As the region information, the entire image may be set as one region, or, not limited to one, a plurality of N regions may be set: a first region, a second region, ..., an N-th region. An example in which a plurality of regions are set is described below. FIG. 8A and FIG. 8B are diagrams showing examples in which a plurality of regions are set. FIG. 8A is an example showing rectangular regions for N = 2, while FIG. 8B is an example showing a non-rectangular region.
In FIG. 8A, the region information input consists of the region number N, the reference coordinates P(N) = (horizontal left edge, vertical top edge) of each region N, the number of horizontal pixels W(N) of the region, and the number of vertical pixels H(N).
Here, a compression rate Q(N) for each region N and a standard compression rate Q for regions not belonging to any of the N rectangular regions are selected. That is, the feature detection unit 14 selects a region of interest in the image as a rectangular region. If the compression rate setting unit 13 lowers the compression rate of the region of interest to preserve detail and sets the compression rate of the non-interest region to Q for high compression, the compression unit 12 can perform efficient compression.
Next, the region determination method for the compression rate will be described.
FIG. 9 is an example of a flowchart for explaining the compression rate selection method for rectangular regions.
First, after frame imaging starts, the feature detection information of the regions is input and initialization is performed (step S301), and the process proceeds to step S302.
Next, in step S302, whether the coordinates (x, y) referenced to the upper left of the screen fall within any of the N regions is determined by the following (Equation 9).
Inside rectangular region 1: (X1 <= x < X1 + W(1), Y1 <= y < Y1 + H(1))
Inside rectangular region 2: (X2 <= x < X2 + W(2), Y2 <= y < Y2 + H(2))
:
Inside rectangular region N: (XN <= x < XN + W(N), YN <= y < YN + H(N)) ... (Equation 9)
Next, when the coordinates (x, y) in the image are included in rectangular region n (n = 1, 2, ..., N) out of the N regions and the result is YES (YES in step S302), the compression rate is set to Q(n) (step S303) and the process proceeds to step S305.
On the other hand, in the case of NO (NO in step S302), the pixel is treated as belonging to a region not included in any of the up to N target rectangular regions (a non-selected region), the standard compression rate Q outside the regions is used separately from the compression rates set for the up to N regions (step S304), and the process proceeds to step S305.
Next, in step S305, it is determined whether the determination has been completed for all N regions. When completed (YES in step S305), the process proceeds to step S306; when not completed (NO), the region number n is incremented by 1, the process returns to step S302, and the determination is performed for the next region n + 1.
In step S306, it is determined whether the pixel scan in the horizontal direction has been completed. If YES, the process proceeds to step S307; if NO, the horizontal coordinate x is incremented by 1 and the process returns to step S302 and continues.
In step S307, it is determined whether the pixel scan in the vertical direction has been completed. If YES, the compression rate selection for one frame ends; if NO, the vertical coordinate y is incremented by 1 and the process returns to step S302 and continues.
For pixels included in two or more of the N rectangular regions, the compression rate Q(n) of one of those regions n may be applied. In this example, since the compression rate regions are determined for rectangular regions, the same compression rate Q(n) is set for the whole region once the scan in step S302 matches the upper-left reference coordinates of rectangular region n. It is therefore unnecessary to scan all pixels of the W-by-H region one by one; pixels can be skipped, and skipping reduces the processing amount. A selection sketch follows.
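As an illustrative sketch of the per-pixel selection of FIG. 9 using (Equation 9); the layout of the region list and the first-match tie-break are assumptions:

```python
def select_rate(x: int, y: int, regions, q_std):
    """Return the compression rate for pixel (x, y).

    regions: list of (Xn, Yn, Wn, Hn, Qn) tuples for the N rectangular
    regions; q_std is the standard rate for non-selected regions.
    For pixels covered by two or more regions, the first match wins."""
    for xn, yn, wn, hn, qn in regions:
        if xn <= x < xn + wn and yn <= y < yn + hn:   # (Equation 9)
            return qn
    return q_std
```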
Although the case where the region shape is rectangular has been described here as an example, the shape is not limited to a rectangle and may be arbitrary. The case where the region shape is not rectangular is, for example, that of FIG. 8B, and can be handled by giving region information as shown in FIG. 8B.
The compression rate selection method for the region information in FIG. 8B is the same as the method described for FIG. 9, but unlike a rectangular region, the upper-left coordinates cannot be used as the reference coordinates. Therefore, for example, the feature detection unit 14 may give information to the compression rate setting unit 13 as follows.
That is, when data is output from the solid-state imaging device 10 in units of lines, information such as the region number n, the reference coordinates P(l, n) = (Xn, Yl) of the left edge of each line l that contains region n, and the number of horizontal pixels W(n) of region n may be given for each line.
(Data format)
Next, the data format used when the solid-state imaging device 10 outputs the compressed data compressed by the compression unit 12 will be described.
FIG. 10A and FIG. 10B are diagrams showing examples of the data format of the solid-state imaging device. In the following, output from the solid-state imaging device 10 is performed by scanning in the horizontal direction and outputting continuous data in units of lines.
FIG. 10A is an example schematically showing data stored in units of one horizontal line. Here, the data of one line is arranged in the horizontal direction and the data of successive lines in the vertical direction; as the storage form of each line's data, a synchronization code (header) is added at the head. The synchronization code never straddles lines and is added and stored independently for each line.
As shown in FIG. 10A, the first row is the data of the frame head line (first line), and a frame start synchronization code (SOF) is added to the head of this data. Similarly, from the second line onward, a line start synchronization code (SOL) is added to the head of each line's data.
Immediately after the SOL, an identification synchronization code (SOR(n)) indicating each region n is added. The synchronization code SOR(n) may carry, for example, 1 byte for a code indicating the compression rate, 2 bytes for the number of pixels to be compressed, and 4 bytes for the total number of bits after compression; these byte counts may be set for each individual solid-state imaging apparatus and are not unique values. By adding this SOR(n), the decompression unit 15 can also extract and decompress only a specific region by detecting SOR(n).
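For illustration, a sketch of an SOR(n) payload with the example field sizes above (1-byte rate code, 2-byte pixel count, 4-byte bit count); the byte order and field order are assumptions:

```python
import struct

def make_sor(rate_code: int, n_pixels: int, n_bits: int) -> bytes:
    """Pack the example SOR(n) payload: 1-byte compression rate code,
    2-byte number of compressed pixels, 4-byte total bit count after
    compression (big-endian assumed)."""
    return struct.pack(">BHI", rate_code, n_pixels, n_bits)

# e.g. a 75% code-amount ratio written directly as 0x4B, 640 pixels,
# 3840 bits after compression:
header = make_sor(0x4B, 640, 3840)
```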
As the size of the region synchronization code SOR(n) increases and the number of regions N per line grows, the proportion of each line occupied by headers rises and the data compression efficiency falls; it goes without saying that a smaller header size is preferable.
Furthermore, a line end code EOL is added at the end of the last data of each line, and a frame end code EOF is added at the end of the last line of the frame.
Here, the identification headers such as SOF, SOL, and SOR are used so that the imaging data can be correctly identified and decoded; they are realized by generating, in advance, a consecutive run of 0s or 1s whose number of bits exceeds the bit length of the imaging data.
FIG. 10B is an example showing the layout of the SOR synchronization code.
As shown in FIG. 10B, for the compression rate described in the header, when for example 1 byte is prepared, the case of non-compression, that is, a code amount ratio after compression of 100% relative to before compression, is expressed as the 8-bit maximum value or as 0, but the representation is not limited to this.
At compression time, for example, a conversion table between the values 1 to 254 and compression rates may be prepared and used for the specification. Alternatively, the lower 4 bits may be defined as the denominator and the upper 4 bits as the numerator, each specified as a 4-bit value. Under this definition, the value for a code amount ratio after compression of 75% relative to before compression may be expressed as, for example, 0x34. Combinations other than 0x34, such as 0x68 and 0x9C, also exist, but this does not matter.
It is not always necessary to form the numerator-denominator relationship from the upper and lower bits as above; it suffices that the compression rate is uniquely known, without duplication, within one byte. That is, the code amount ratio after compression relative to before compression may be written directly, such as 0x4B (= 75) for a 75% ratio. Alternatively, the quantization step used at compression time may be written.
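A sketch of the two example rate-byte encodings above (numerator/denominator nibbles, or the percentage written directly); these helper names are illustrative assumptions:

```python
def rate_byte_fraction(numerator: int, denominator: int) -> int:
    """Upper 4 bits = numerator, lower 4 bits = denominator:
    75% -> 0x34 (3/4); 0x68 (6/8) and 0x9C (9/12) encode the same ratio."""
    return (numerator << 4) | denominator

def rate_byte_percent(percent: int) -> int:
    """Write the code-amount ratio directly: 75% -> 0x4B (= 75)."""
    return percent

assert rate_byte_fraction(3, 4) == 0x34
assert rate_byte_percent(75) == 0x4B
```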
Next, the procedure for transmitting compressed imaging data independently for each region will be described.
FIG. 11 is a flowchart for explaining the procedure for transmitting compressed imaging data independently for each region.
First, transfer of the frame data starts.
Next, in step S401, feature information such as the region information and compression rate information input from the feature detection unit 14 is acquired and set, and the process proceeds to step S402.
Next, in step S402, the line counter is reset to j = 1.
Next, line-by-line processing begins; in step S403, the region counter that counts the number of regions in each line is initialized to i = 1.
Next, in step S404, if the value of the line counter j indicates the head line (j = 1) (YES in step S404), a frame start code (SOF) is added (step S405). Otherwise (NO in step S404), a line start header (SOL) is added (step S406).
Next, in step S407, since this is the left end of the line, the region start code SOR(0) indicating the first region is added; information on the compression rate, the number of pixels of region i, and the number of bits after compression is included in the region start code.
Next, in step S408, the compressed imaging RAW data is stored, and in step S409 it is determined whether the region counter i has reached the set number of regions N. If not (NO in step S409), i is incremented to i + 1 (step S412), the process returns to step S407, and storage of the data of the second and subsequent regions continues. On the other hand, when the set number of regions has been reached in step S409 (YES in step S409), the process proceeds to step S410.
Next, in step S410, if the line is not the last line (NO in step S410), a line end header (EOL) is added in step S413, the process returns to step S403, and processing of the next line continues. On the other hand, when j = L and the line is the last line (YES in step S410), the frame end code EOF is added and the process ends.
As described above, based on the compression rate information set by the compression rate setting unit 13, the compression unit 12 compresses the pixel data while adding identification information including at least compression rate information and data amount information. The solid-state imaging device 10 then outputs the image data to which the identification information has been added.
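For illustration, a compact sketch of the per-line assembly of FIG. 11, reusing the make_sor helper sketched earlier; the sync-code byte values and the region-tuple layout are assumptions:

```python
# Assumed placeholder byte patterns for the synchronization codes.
SOF, SOL, EOL, EOF = b"\xff\xf0", b"\xff\xf1", b"\xff\xf2", b"\xff\xf3"

def build_line(j: int, num_lines: int, regions) -> bytes:
    """Assemble one line: SOF on line 1 or SOL otherwise, then for each
    region an SOR header followed by its compressed RAW data, then EOL
    (or EOF on the last line of the frame).

    regions: list of (rate_code, n_pixels, payload) tuples, where payload
    is that region's compressed RAW data as bytes."""
    out = bytearray(SOF if j == 1 else SOL)
    for rate_code, n_pixels, payload in regions:
        out += make_sor(rate_code, n_pixels, 8 * len(payload))
        out += payload
    out += EOF if j == num_lines else EOL
    return bytes(out)
```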
The format of the compressed data has been described based on FIG. 10A and FIG. 10B, but is not limited thereto. For example, as modifications, the formats shown in FIG. 12A and FIG. 12B may be used. That is, as in FIG. 12A, on the frame head line, the SOL may be added after the SOF, and on the final line of the frame, the EOL may be added before the EOF. Also, as in FIG. 12B, the last line of the frame may end with only an EOL instead of adding an EOF.
The data format described above is particularly effective when the solid-state imaging device 10 outputs via LVDS (Low Voltage Differential Signalling). LVDS is an interface that treats two differential signal lines as one pair and operates them at low voltage and high frequency, converting data into a serial signal for high-speed transfer.
Next, the case where the solid-state imaging device 10 outputs via LVDS will be described.
First, the solid-state imaging device 10 includes one or more LVDS output pairs; when it has two or more output pairs, the data is divided and assigned among the pairs according to a predetermined rule. The method of dividing and assigning the data among the pairs is described below with reference to FIGS. 13A and 13B.
FIGS. 13A and 13B are diagrams showing the LVDS output of the solid-state imaging device in Embodiment 1. FIG. 13A shows an example in which one pixel of data per pair is 10 bits and output is performed on the four pairs A to D.
After one line is compressed by the compression unit 12 and serialized by the serial conversion 171, the output control unit 172 performs output control and outputs the data to each of the connected LVDS transceivers A to D.
FIG. 13B illustrates the output format of each pair.
Since each LVDS pair transmits and receives independently, at the left end of each line the output control unit 172 adds an SOL header to each of the LVDS pairs A to D among which the data is distributed.
Thereafter, the code SOR(0) of the first area is first stored across the pairs, from A to B, C, and D. Next, the compressed data is distributed and output to the pairs A to D in 10-bit units. When the total size of an area's compressed data leaves a remainder against the 40-bit unit, dummy data in which the remaining bits are all filled with 1 is output, and the process moves to the next area and continues storing. This continues until the end of the line, through EOL.
In this way, the pairs A to D each perform output independently.
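As an illustration of the distribution rule above, the following sketch divides one area's compressed bit stream into 10-bit words round-robin over the four pairs A to D, padding any remainder against the 40-bit group with dummy 1 bits. The bit-packing details are assumptions for illustration.

```python
PAIRS = 4         # LVDS pairs A to D
WORD_BITS = 10    # bits of one pixel's data per pair
GROUP_BITS = PAIRS * WORD_BITS  # 40 bits distributed per round

def distribute(area_bits):
    """area_bits: '0'/'1' string of one area's compressed data. Returns a
    list of 10-bit words per pair: lanes[0] = pair A, ..., lanes[3] = pair D."""
    pad = (-len(area_bits)) % GROUP_BITS       # remainder against 40 bits
    bits = area_bits + '1' * pad               # dummy data filled with 1s
    lanes = [[] for _ in range(PAIRS)]
    for start in range(0, len(bits), WORD_BITS):
        word = int(bits[start:start + WORD_BITS], 2)
        lanes[(start // WORD_BITS) % PAIRS].append(word)
    return lanes

lanes = distribute('1' * 25)                  # 25 bits are padded up to 40 bits
assert all(len(lane) == 1 for lane in lanes)  # one 10-bit word on each pair
```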
Note that the number of pairs per LVDS port and the number of bits per pixel are not limited to this example; the settings can be changed up to the number of pairs connected in this solid-state imaging apparatus and the number of bits of the AD conversion unit of the solid-state imaging device.
<Frame data processing>
Here, the procedure by which the decompression unit 15 receives the LVDS output compressed data from the solid-state imaging device 10, decompresses it, and restores the imaging RAW data usable by the image processing unit 16 will be described with reference to FIG. 14.
FIG. 14 is a flowchart for explaining the frame data reception procedure in Embodiment 1. Note that, once decompression has started, the decompression unit 15 repeatedly converts the LVDS serial data input of each LVDS pair into parallel data in units of a predetermined number of bits.
First, in step S501, each pair waits to receive the frame start synchronization code (SOF).
After SOF is received (YES in step S501), the process moves to step S503. If SOF is not received (NO in step S501), reception of the line start code SOL is checked in step S502. If SOL is received (YES in step S502), the process moves to step S503; if not (NO in step S502), the process returns to step S501 and repeats until either the SOF or the SOL start code is received.
Next, in step S503, after the synchronization code SOR(n) of area n is received (YES in step S503), information such as the number of pixels, the compression rate, and the data size of area n is acquired in step S504. The process then moves to step S505, where the compressed data in area n is decompressed. After decompression is complete, the process moves to step S506, where reception of the frame end code EOF is checked. If the frame end code EOF has been received (YES in step S506), the process ends. If not (NO in step S506), the process moves to step S507 to determine whether the line end code EOL has been received. If the line end code EOL has been received (YES in step S507), the process returns to step S501 and processing continues from the next line. If not (NO in step S507), the process returns to step S503 and processing continues with the next area.
Note that the decoding in step S505 has been illustrated using the processing of the decompression unit 15 described with reference to FIG. 7 as an example.
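The reception flow of FIG. 14 amounts to a small state machine. The following sketch assumes the incoming stream has already been tokenized into start/end markers and SOR tuples; the token representation and the decompress() callback are hypothetical stand-ins, not part of this description.

```python
def receive_frame(stream, decompress):
    """stream: iterator of tokens, assumed pre-parsed from the LVDS input:
    the strings 'SOF', 'SOL', 'EOL', 'EOF', or tuples ('SOR', n, info, payload).
    decompress(payload, info) stands in for the decoding of FIG. 7."""
    lines, current = [], []
    tok = next(stream)
    while True:
        if tok in ('SOF', 'SOL'):                 # steps S501 and S502
            tok = next(stream)
        elif isinstance(tok, tuple) and tok[0] == 'SOR':
            _, n, info, payload = tok             # step S504: pixels, rate, size
            current.append(decompress(payload, info))   # step S505
            tok = next(stream)
        elif tok == 'EOL':                        # step S507: next line
            lines.append(current)
            current = []
            tok = next(stream)
        elif tok == 'EOF':                        # step S506: frame complete
            lines.append(current)
            return lines
        else:
            raise ValueError('unexpected token: %r' % (tok,))
```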
(Example 1)
In this example, a case where the solid-state imaging device 1 according to Embodiment 1 is applied to, for example, an in-vehicle camera will be described. Specifically, an example in which features are detected based on motion information of the entire image is described.
FIG. 15 is a block diagram showing the configuration of a camera that detects the motion of the entire subject in Example 1. Elements similar to those in FIG. 1 are given the same reference numerals, and detailed description is omitted. The camera 2 shown in FIG. 15 differs from the solid-state imaging device 1 according to Embodiment 1 in that it includes a feature detection unit 24 and a speed detection unit 28. The configuration of the solid-state imaging device 10 itself is the same as in Embodiment 1.
In addition to the image data from the image processing unit 16, the feature detection unit 24 receives speed information from the speed detection unit 28 as a parameter.
The speed detection unit 28 is, for example, a speedometer or the like when the camera 2 is mounted on an automobile, and outputs a control signal indicating speed information to the feature detection unit 24.
FIG. 16 is a diagram schematically showing an example of the video captured by the solid-state imaging device 10 in Example 1. FIG. 16 schematically shows the captured video when, for example, the solid-state imaging device 10 is installed facing forward in the traveling direction of an automobile.
As shown in FIG. 16, when the solid-state imaging device 10 is moving straight forward, the captured image contains a point near the center of the screen at which the scenery hardly moves (vanishing point 1601), corresponding, by perspective, to the point at infinity in the traveling direction. The apparent motion becomes larger toward the edges of the screen, away from the vanishing point, and appears faster as the distance increases.
For example, the distance L of a target coordinate (X, Y) from the vanishing point coordinate P(Xp, Yp) is calculated concentrically using (Equation 10), and, for example, a distance T from P is used as a threshold. In that case, as shown in (Equation 11), control is performed such that the compression rate is lowered in the region at distance L < T from the vanishing point P (referred to as the region of interest 1602) and raised in the region beyond radius T (the peripheral region 1603).
Distance L = √((X − Xp)^2 + (Y − Yp)^2)   (Equation 10)
Compression rate = low compression (L ≤ T), high compression (L > T)   (Equation 11)
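A minimal sketch of (Equation 10) and (Equation 11) follows; the concrete rate values (expressed as the fraction of data kept, so that "low compression" keeps more data) and the coordinates are illustrative assumptions.

```python
import math

def compression_rate(x, y, xp, yp, t, low_compression=0.75, high_compression=0.25):
    """Rates as the fraction of data kept: 'low compression' keeps more data."""
    l = math.hypot(x - xp, y - yp)                            # Equation 10
    return low_compression if l <= t else high_compression    # Equation 11

# Vanishing point P at the image center, threshold radius T = 100 pixels.
assert compression_rate(320, 240, 320, 240, 100) == 0.75   # region of interest 1602
assert compression_rate(0, 0, 320, 240, 100) == 0.25       # peripheral region 1603
```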
In the above, the distance from the vanishing point P is calculated based on perspective, but the method is not limited to this. For example, since the human field of view narrows as speed increases, one may exploit the fact that attention concentrates near the center of the screen while the edges, which appear to move especially fast, tend to drop out of the viewer's attention. In that case, the regions and their compression rates may be changed according to the speed information from the speed detection unit 28, using the speed value as a threshold.
Since regions near the edges of the screen easily fall outside the viewer's attention as described above, the camera 2 may further include an external input unit so that regions can be designated in advance by manual input or the like. In that case, the feature detection unit 24 determines whether a region is one selected via the external input unit. Taking FIG. 16 as an example, the feature detection unit 24 may treat the region that is not the externally designated region (that is, the peripheral region 1603) as the detection target (that is, the region of interest 1602) and perform feature detection using the speed detected by the speed detection unit 28 only in the region of interest 1602.
As described above, the feature detection unit 24 detects the motion of the entire image as a feature from the difference information between the image data composed of the plurality of pixel data processed by the image processing unit 16 and image data previously processed by the image processing unit 16, and the compression rate setting unit 13 variably sets the compression rate for each region based on the detected motion information of the entire image. Specifically, by using the speed parameter input from the speed detection unit 28, the feature detection unit 24 can add speed information to the parameters given to the compression rate setting unit 13, indicating that the radius threshold T from the vanishing point should be reduced as the speed increases. As a result, the peripheral region 1603 in which the compression rate is raised becomes wider, improving compression efficiency; likewise, raising the compression rate of the non-attention region reduces its data amount and improves compression efficiency.
In FIG. 16, the case where the solid-state imaging device 10 of the camera 2 captures images in the forward traveling direction is taken as an example, but the invention is not limited to this. When imaging in the direction transverse to the traveling direction, that is, a direction in which no vanishing point exists, the entire screen moves in the same direction, so the feature detection unit 24 may control the entire screen uniformly. The camera 2 may also be further configured to perform feature detection by, for example, tracking a specific object.
(Example 2)
In Example 2, as a modification of Example 1, a case will be described in which the overall motion is detected using motion vectors based on inter-frame differences instead of the speed detection unit 28.
FIG. 17A is a block diagram showing the configuration of the camera 3 in Example 2. Elements similar to those in FIGS. 1 and 15 are given the same reference numerals, and detailed description is omitted. The camera 3 shown in FIG. 17A differs from the solid-state imaging device 1 according to Embodiment 1 in that it includes a feature detection unit 34 and a moving image encoding unit 39. It differs from the camera 2 according to Example 1 in the configuration of the feature detection unit 34 and in that it includes the moving image encoding unit 39 instead of the speed detection unit 28.
The moving image encoding unit 39 receives the luminance Y and color difference C information (also called YC data) output by the image processing unit 16. The moving image encoding unit 39 performs encoding based on the input YC data and outputs motion vector information 1908 generated during encoding to the feature detection unit 34.
FIG. 17B shows an example of the motion vectors output to the feature detection unit 34. The arrows in FIG. 17B schematically show the motion vectors when imaging forward in the traveling direction as in Example 1 described above, represented as two-dimensional coordinates with horizontal and vertical components. The grid divisions on the screen indicate macroblocks, the coding units of the moving image encoding unit 39. The arrows grow larger outward from the vanishing point 1909, indicating that the absolute values of the macroblock motion vectors increase with distance from it.
The moving image encoding unit 39 can hold at least one motion vector per macroblock.
Next, a detailed configuration example of the moving image encoding unit 39 will be described. FIG. 18 is a block diagram showing the detailed configuration of the moving image encoding unit 39 in Example 2. FIG. 18 shows a configuration example in which H.264 is used as the encoding format.
The moving image encoding unit 39 shown in FIG. 18 receives YC data 3901 from the image processing unit 16. The moving image encoding unit 39 includes a subtractor 3902, a DCT quantization unit 3903, an inverse quantization/inverse DCT unit 3904, an intra prediction unit 3905, a deblocking filter 3906, an external memory 3907, a motion detection unit 3908, a motion compensation unit 3909, an inter prediction unit 3910, a prediction determination unit 3911, a variable length coding unit 3912, and an adder 3913.
Here, the processing of the moving image encoding unit 39 will be described. First, the moving image encoding unit 39 divides one frame into macroblocks of 16 horizontal × 16 vertical pixels and processes them.
The YC data 3901 is the YC data input from the image processing unit 16, and is input to the motion detection unit 3908 and the subtractor 3902 in units of macroblocks.
The subtractor 3902 subtracts the immediately preceding determination result from the input YC data 3901, and the result is input to the DCT quantization unit 3903. The DCT quantization unit 3903 performs DCT and quantization, and the result is input to the inverse quantization/inverse DCT unit 3904, the intra prediction unit 3905, and the variable length coding unit 3912.
The intra prediction unit 3905 calculates prediction data by processing only the image within the frame and sends the calculated prediction data to the prediction determination unit 3911. The prediction determination unit 3911 evaluates the prediction values from the intra prediction unit 3905 and the inter prediction unit 3910 based on a predetermined criterion, and outputs the prediction value selected according to the determination result to the subtractor 3902 and the adder 3913.
The inverse quantization/inverse DCT unit 3904 restores the data after DCT quantization by the DCT quantization unit 3903; the adder 3913 then adds the determination value output from the prediction determination unit 3911 and outputs the result to the deblocking filter 3906. After the deblocking filter 3906 performs filtering to reduce block noise arising between macroblocks, the result is stored in the external memory 3907 as a reference image. The reference image stored in the external memory 3907 is used as an input by the inter-frame motion detection unit 3908, which calculates the motion vector information 1908 from the difference information with the input data (YC data 3901) and outputs it to the motion compensation unit 3909 and the feature detection unit 34.
The motion compensation unit 3909 performs motion compensation using the output motion vectors and the like and outputs the result to the inter prediction unit 3910; the inter prediction unit 3910 calculates pixel prediction values using reference image data of multiple frames, and these are used by the prediction determination unit 3911.
The variable length coding unit 3912 performs variable length coding such as CAVLC or CABAC after code amount control is performed by the DCT quantization unit 3903, and outputs the code data. For intra frames, no reference image exists. For inter frames, the variable length coding unit 3912 performs its processing after the motion detection unit 3908, the motion compensation unit 3909, and the inter prediction unit 3910 have assembled the data of at least one preceding frame.
As described above, compared with the solid-state imaging device 1 of Embodiment 1, the camera 3 further includes the moving image encoding unit 39 that encodes the image data processed by the image processing unit 16; the feature detection unit 34 detects, as a feature, motion vector information for each region of the image from the pixel data processed by the image processing unit 16 based on the motion vector information output by the moving image encoding unit 39; and the compression rate setting unit 13 variably sets the compression rate for each region based on the detected motion information.
In this example, the case of H.264 has been described, but the encoding format is not limited to it. Needless to say, the invention can be applied to any encoding method that uses inter-frame motion vectors, such as MPEG.
Whether the entire screen is moving is determined as follows, based on the motion vector information 1908 output by the moving image encoding unit 39. For the horizontal and vertical components of each motion vector MV(x, y) = (X, Y) output per macroblock (where x, y are the horizontal and vertical macroblock coordinates and X, Y are the horizontal and vertical motion vector values), the average is calculated and its direction evaluated. For example, when the frame is divided into W horizontal × H vertical macroblocks, the average (Xave, Yave) of the horizontal and vertical components over the W × H macroblocks is calculated. It is then determined whether the direction of the motion vector MV(x, y) of each macroblock points the same way as the average direction (Xave, Yave); if the macroblocks pointing in the same direction as the average are distributed uniformly over the entire W × H image, it can be determined that the entire screen is moving.
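The following sketch illustrates this test. The cosine-similarity threshold and the per-quadrant uniformity check are illustrative assumptions, since the description above only requires that macroblocks agreeing with the average direction be distributed uniformly.

```python
def whole_screen_moving(mv, w, h, cos_thresh=0.7, share=0.5):
    """mv[y][x] = (X, Y) motion vector of macroblock (x, y); w, h even."""
    xave = sum(v[0] for row in mv for v in row) / (w * h)
    yave = sum(v[1] for row in mv for v in row) / (w * h)
    norm_a = (xave * xave + yave * yave) ** 0.5
    if norm_a == 0:
        return False
    agree = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vx, vy = mv[y][x]
            n = (vx * vx + vy * vy) ** 0.5
            # "same direction" as the average taken as high cosine similarity
            agree[y][x] = n > 0 and (vx * xave + vy * yave) / (n * norm_a) > cos_thresh
    # "uniformly distributed": every quadrant holds enough agreeing blocks
    for qy in (0, h // 2):
        for qx in (0, w // 2):
            cnt = sum(agree[yy][xx]
                      for yy in range(qy, qy + h // 2)
                      for xx in range(qx, qx + w // 2))
            if cnt < share * (w // 2) * (h // 2):
                return False
    return True
```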
(Example 3)
In this example, another application example of the solid-state imaging device 1 according to Embodiment 1 will be described. Specifically, as an example of how the feature detection unit 14 performs feature detection, a method of detecting a human face and selecting the region containing the detected face is described.
Specifically, the feature detection unit 14 gives (outputs) to the compression rate setting unit 13 a parameter that changes the compression rate depending on whether a region contains a face.
Since a person's face is information that easily attracts attention in video, it is desirable to process or record it while maintaining high image quality. For example, the feature detection unit 14 detects human faces based on face detection condition information such as skin color information, the approximate shapes of facial components such as the eyes, nose, mouth, ears, and hair, the positional relationships between those components, and contour information. That is, the feature detection unit 14 is made to detect the face detection condition information and determine whether a face, or a region containing a face, exists.
FIGS. 19A and 19B are schematic diagrams of face regions when applied to face detection in Example 3. FIG. 19A shows the case where one face appears in the image, and FIG. 19B shows the case where two faces appear.
When the feature detection unit 14 determines that faces are present, it increases the number of regions n by one for each additional person.
For example, when one face is detected as shown in FIG. 19A, the feature detection unit 14 detects one region and at the same time scans outward from a representative point C of the face (not shown) to the edges of the face in the image, thereby calculating the face region. The detected region may be a rectangular region delimited by (horizontal minimum, vertical minimum) and (horizontal maximum, vertical maximum), but it is not limited to a rectangle; as shown in FIG. 8B of Embodiment 1, the region information may be held per line, with a horizontal left end and a width given for each line to form an arbitrary shape. As described in Embodiment 1, the feature detection unit 14 outputs region information including the face region detected in this way to the compression rate setting unit 13. The compression rate setting unit 13 then sets a lowered compression rate A for the face region, as shown in FIG. 19A, improving image quality through low compression, and a raised compression rate B for the regions other than the face, achieving high compression. In this way, the data amount can be reduced efficiently while maintaining image quality.
When two or more faces appear as shown in FIG. 19B, the feature detection unit 14 detects two regions, a first face region and a second face region. After calculating the two regions, the feature detection unit 14 may set individual compression rates for them. Alternatively, when the two regions overlap, or when the distance between their boundaries is smaller than a predetermined value Th, the feature detection unit 14 may merge the regions into a single first region (the merged first face region), as shown in FIG. 19B. The feature detection unit 14 may then output region information including the face regions detected in this way to the compression rate setting unit 13.
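The region handling described above can be sketched as follows, assuming rectangular regions; the boundary-distance computation and the merge loop are illustrative, not a prescribed implementation.

```python
def gap(a, b):
    """Distance between two rectangles' boundaries (0 if they overlap).
    Rectangles are (x_min, y_min, x_max, y_max)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def merge_regions(regions, th):
    """Merge overlapping regions, or regions closer than threshold th."""
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                if gap(regions[i], regions[j]) < th:
                    a, b = regions[i], regions[j]
                    regions[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                  max(a[2], b[2]), max(a[3], b[3]))
                    del regions[j]
                    merged = True
                    break
            if merged:
                break
    return regions

# Two overlapping face regions collapse into one combined first face region.
print(merge_regions([(10, 10, 50, 50), (40, 20, 90, 60)], th=5))
# -> [(10, 10, 90, 60)]
```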
As described above, an image composed of a plurality of pixels has a face region, a region showing a person's face. The feature detection unit 14 detects, as a feature, region information indicating the face region in the image from the pixel data output from the photoelectric conversion unit 11. Based on the region information detected by the feature detection unit 14, the compression rate setting unit 13 variably sets the compression rate individually, at least for regions containing a face and regions not containing a face.
(Example 4)
In this example, a procedure is described in which the compression unit 12 converts the frame rate based on the compression rate set for each region by the compression rate setting unit 13 (for example, from the amount of imaging data that is reduced).
FIG. 20 is a diagram for explaining the procedure by which the compression unit 12 converts the frame rate in Example 4.
In Embodiment 1, the case of a MOS sensor (MOS type solid-state imaging device) was described as the solid-state imaging device 10 with reference to FIGS. 2 and 3. In this solid-state imaging device 10, exposure is performed after readout is completed; to keep the exposure time of each line the same, the period of the synchronization signal must be reduced while keeping the intervals between the line synchronization signals equal. This is done as follows.
In the following, it is assumed that the image is divided into a plurality of regions by the feature detection unit 14. That is, in a frame composed of L lines, let each line l be divided into N regions, with P(n) pixels and compression rate C(n) for each region n. Then the total data amount SIZE(l) per line l can be expressed as (Equation 12).
SIZE(l) = SOL byte count + Σ(SOR byte count + P(n) × C(n)) (n = 1, 2, ..., N) + EOL byte count   (Equation 12)
Further, to eliminate variation in the data amount between lines and align them, the value of SIZE(l) is calculated for all lines l = 1, 2, ..., L, and the maximum value is computed.
MAX(SIZE(1), SIZE(2), ..., SIZE(L))   (Equation 13)
It is determined whether the maximum value MAX(1 to L) over the L lines in (Equation 13) above is smaller than the total data amount per line without compression; if it is, the frame rate can be increased according to the amount of compression.
For example, based on the period of one line unit, if in the one-line period shown in FIG. 20(a) the compressed data size per line in FIG. 20(b) becomes 80% of the data amount before compression, the interval of one line of the synchronization signal can be shortened, as shown from FIG. 20(a) to FIG. 20(b). The period of one line can thus be shortened to 80%, a calculated speed improvement of 25%. Since the maximum line size within the frame has been calculated, setting the synchronization signal period to a value at least as large as this maximum ensures that the data output of one line does not spill into the next line, that all lines of data can be output, and that an equal-interval period is maintained.
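A minimal sketch of (Equation 12), (Equation 13), and the resulting line-period reduction follows; the header byte counts are assumptions, and C(n) is treated as the fraction of data kept.

```python
SOL_BYTES, SOR_BYTES, EOL_BYTES = 2, 4, 2  # assumed header sizes

def line_size(regions):
    """regions: list of (pixel count P(n), rate C(n)) for one line. Equation 12."""
    return (SOL_BYTES
            + sum(SOR_BYTES + p * c for p, c in regions)
            + EOL_BYTES)

def min_line_period(frame, uncompressed_bytes, base_period_us):
    """Shorten the synchronization period in proportion to the largest
    compressed line (Equation 13), keeping all line intervals equal."""
    worst = max(line_size(line) for line in frame)   # Equation 13
    ratio = min(worst / uncompressed_bytes, 1.0)
    return base_period_us * ratio

# Example: two 100-pixel regions per line kept at 80%, plus header overhead.
frame = [[(100, 0.8), (100, 0.8)]] * 4
print(min_line_period(frame, uncompressed_bytes=200, base_period_us=10.0))
```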
As described above, the solid-state imaging device 1 outputs the pixel data compressed by the compression unit 12; the compression unit 12 can change the frame rate by changing the period of one line based on the relationship between the data amount after compression and the data amount before compression, that is, the data amount of the pixel data before the compression unit 12 compresses it. Here, the data amount after compression is the data amount of the compressed pixel data, calculated from the sum, over the regions of the image within one line of the compressed pixel data output from the solid-state imaging device 1, of the compression rate and the number of pixels.
(Example 5)
In this example, yet another application example of the solid-state imaging device 1 according to Embodiment 1 will be described. Specifically, as another example of the feature detection method, a method is described in which the feature detection unit 14 controls the compression rate of the pixel data using detection information for AF control.
AF control methods include an active method using an external sensor, a passive method using the image sensor, and, within the passive method, a phase difference detection method, a contrast detection method, and the like. Below, the contrast detection method is taken as an example.
FIG. 21 is a block diagram showing the configuration of the solid-state imaging device 4 in Example 5.
The image processing unit 16 internally includes a preprocess circuit 461 and a YC processing circuit 462. The feature detection unit 14 internally includes an HPF circuit 443, a per-region contrast integration circuit 444, and a central processing unit (hereinafter, CPU) 445.
The pixel data output from the decompression unit 15 undergoes preprocessing in the preprocess circuit 461, such as OB (Optical Black) clamp processing, in which the black level is detected from the imaging data of the light-shielded region and corrected by subtraction, and defect correction processing for pixel defects caused by deterioration and the like; the YC processing circuit 462 then performs demosaicing, edge enhancement, and gamma conversion. In parallel, the output of the preprocess circuit 461 is input to the HPF circuit 443, a digital high-pass filter, which extracts the spatial high-frequency component signal of the subject and inputs it to the per-region contrast integration circuit 444. For each imaging frame, the per-region contrast integration circuit 444 integrates the output signal of the HPF circuit 443 for each predesignated divided region of the imaging screen within the frame (hereinafter, AF region), generates a subject contrast value for each AF region, and outputs it to the CPU 445.
Here, the AF regions are, for example, the 81 regions obtained by dividing the imaging screen (the image) into 9 × 9, as shown in FIG. 22. In FIG. 22, a to i are horizontal region addresses and 1 to 9 are vertical region addresses; for example, the AF region at the upper left of the screen is called address a1, and the AF region at the lower right is called address i9. The contrast value of each AF region is denoted AFC(afp) (where afp ranges over a1 to i9).
Meanwhile, a subject image is formed by the optical lens 410 on the pixel cell array 111 in the photoelectric conversion unit 11 of the solid-state imaging device 10. The optical lens 410 is driven and controlled along the lens optical axis by a lens driving unit 411 equipped with an actuator such as a stepping motor, so that the subject-side focus position imaged onto the pixel cell array 111 can be adjusted. The focus position of the optical lens 410 via the lens driving unit 411 is drive-controlled by the CPU 445.
The CPU 445 also sets, in the compression rate setting unit 13, the compression rate Q(N) for each of the aforementioned regions P(N) described with reference to FIGS. 8A and 8B, corresponding to each AF region of each frame. Here, N ranges from 1 to 81. The CPU 445 sets the horizontal pixel count W(N) and vertical pixel count H(N) in the compression rate setting unit 13 so that, for example, AF region a1 corresponds to P(1), b1 to P(2), ..., and i9 to P(81).
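The address mapping described above (a1 to P(1), b1 to P(2), ..., i9 to P(81)) can be sketched as follows; the row-major ordering shown matches the correspondence given above.

```python
def region_index(afp):
    """Map AF region address afp ('a1'..'i9') to region index N (1..81)."""
    col = ord(afp[0]) - ord('a')   # a..i -> 0..8 (horizontal address)
    row = int(afp[1]) - 1          # 1..9 -> 0..8 (vertical address)
    return row * 9 + col + 1

assert region_index('a1') == 1 and region_index('b1') == 2
assert region_index('i9') == 81
```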
As described above, the solid-state imaging device 4 further includes the lens driving unit 411 that drives the optical lens 410, and the feature detection unit 14 includes the per-region contrast integration circuit 444, which detects a contrast value for a predetermined region of the image from the pixel data output from the photoelectric conversion unit 11, and the CPU 445, which detects information indicating that the predetermined region is an in-focus region by determining from the contrast value whether the predetermined region is in focus, and which controls the lens driving unit 411 so that the predetermined region comes into focus. Based on this information, the compression rate setting unit 13 variably sets the compression rate individually for regions including at least the in-focus region and the out-of-focus region.
Next, the operation of the CPU 445 will be described with reference to the flowchart of FIG. 23.
First, the AF operation is started when a trigger instructing the start of autofocus (hereinafter, AF) is input from a user (not shown).
Next, in step S601, the CPU 445 first controls the lens driving unit 411 so that the lens position focuses at infinity.
Next, in step S602, the CPU 445 resets the integrated contrast value of each region in the per-region contrast integration circuit 444.
Next, in step S603, the lens driving unit 411 is controlled so as to move the lens position of the optical lens 410 by a predetermined amount toward the close focus position.
Next, in step S604, the contrast values of all AF regions of one frame are acquired; these are output from the solid-state imaging device 10 after the lens drive in step S603 and obtained by the per-region contrast integration circuit via the decompression unit 15, the preprocess circuit 461, and the HPF circuit 443.
Next, in step S605, it is determined whether the lens position of the optical lens 410 is at the close end. If it is not at the close end (NO in step S605), the process proceeds to step S603; if it is (YES in step S605), the process proceeds to step S606.
In this way, a profile of the contrast value AFC(afp) versus lens position is acquired for all AF regions over the lens travel from the infinity end to the close end.
Next, in step S606, based on the contrast value profiles obtained by the above operation, the profile and AF region having the contrast value peak closest to the close end are selected, and the lens position at the peak of that profile is calculated as the in-focus lens position.
For example, FIG. 24 shows examples of contrast value profiles. When, as in the subject scene of FIG. 22, a person is in the foreground with a distant view behind, profile 2601 is acquired in the AF regions falling on the person (indicated by the dotted line in FIG. 22) and profile 2602 is acquired in the other AF regions. In this case, in step S606 the lens position fp1 at the peak of profile 2601 in FIG. 24 is determined as the in-focus lens position.
Next, in step S607, the lens driving unit 411 is controlled to drive the optical lens 410 to the in-focus lens position determined in step S606. The AF operation is thereby completed.
Next, in step S608, the compression rate Q is set in the compression rate setting unit 13 such that the AF regions selected in determining the in-focus lens position (the AF regions indicated by the dotted line in FIG. 22), that is, the in-focus AF regions, are given low compression, and the out-of-focus AF regions not selected in determining the in-focus lens position are given high compression.
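The flow of FIG. 23 can be condensed into the following sketch; contrast_at(), set_rate(), and the rate values are hypothetical stand-ins, and the selection of the peak nearest the close end follows step S606.

```python
def af_scan_and_set_rates(lens_positions, af_regions, contrast_at,
                          set_rate, low_q=0.8, high_q=0.2):
    # Steps S601 to S605: step the lens from the infinity end to the close
    # end and build a contrast profile AFC(afp) for every AF region.
    profile = {afp: [] for afp in af_regions}
    for pos in lens_positions:            # ordered infinity end -> close end
        for afp in af_regions:
            profile[afp].append(contrast_at(pos, afp))
    # Step S606: among the per-region peaks, choose the one nearest the close end.
    peak = {afp: profile[afp].index(max(profile[afp])) for afp in af_regions}
    best_afp = max(af_regions, key=lambda afp: peak[afp])
    focus_pos = lens_positions[peak[best_afp]]
    # Step S608: low compression for regions peaking at the chosen focus
    # position (in focus), high compression for the rest.
    for afp in af_regions:
        set_rate(afp, low_q if peak[afp] == peak[best_afp] else high_q)
    return focus_pos  # step S607: drive the lens here
```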
As described above, since the compression rate is set so that the pixel data of AF regions containing the in-focus subject is compressed at a low rate, deterioration due to compression distortion is avoided for the subject parts that are in focus, achieving high image quality, while the pixel data is compressed within the solid-state imaging device so that out-of-focus regions are aggressively given high compression. This reduces the data amount of the compressed pixel data output from the solid-state imaging device and achieves a higher frame rate or lower power consumption.
By further extending this example, when there are multiple in-focus AF regions, a process can be applied in which AF regions with higher contrast values are given lower compression rates for their pixel data; it is thus also easy to improve image quality by adaptively varying the compression rate among in-focus subjects according to subject contrast and spatial frequency.
(Example 6)
In this example, a case where the solid-state imaging device 1 according to Embodiment 1 is applied to, for example, a digital camera will be described.
FIG. 25 is a block diagram showing the configuration of the digital camera 5 in Example 6. Elements similar to those in FIGS. 1 and 17A are given the same reference numerals, and detailed description is omitted.
The digital camera 5 shown in FIG. 25 includes the solid-state imaging device 10, the decompression unit 15, the image processing unit 16, the feature detection unit 14, the moving image encoding unit 39, a recording unit 508, a recording memory 509, a display unit 510, a display device 511, a CPU 512, a program memory 513, and an external memory 514. The digital camera 5 is configured to receive input from an external switch 515. The configurations of the solid-state imaging device 10, the decompression unit 15, the image processing unit 16, and the feature detection unit 14 are the same as those described with reference to FIG. 1 in Embodiment 1. The differences are described below.
At power-on, the CPU 512 reads the program from the program memory 513, determines the mode designated in advance by input from the external switch 515, for example, a recording mode or a playback mode, and starts the system in the designated mode.
The operations described here are controlled by the CPU 512.
For example, in recording mode, the decompression unit 15 decompresses the compressed imaging data input from the solid-state imaging device 10 (compressed by the compression unit 12), and the image processing unit 16 generates YC data. The moving image encoding unit 39 encodes the YC image data obtained by the image processing unit 16 as a still image or a moving image to generate image data for recording. Here, still images are encoded in, for example, JPEG format, and moving images in a format such as H.264 or MPEG.
Further, the recording unit 508 receives the code data from the moving image encoding unit 39, performs processing such as header generation for recording in the recording memory 509 and area management of the recording medium, and records the data in the recording memory 509. The display unit 510 receives the processed data from the image processing unit 16 and performs processing for display on the display device 511; that is, it converts the data to an image size and format compatible with the display device 511, adds data such as an OSD (On Screen Display), and transfers the result to the display device 511.
The display device 511 is, for example, a liquid crystal (LCD) display or the like, and displays video by outputting the input signal. Furthermore, feature information is input from the feature detection unit 14 to the CPU 512, and the various feature quantities obtained as described in Embodiment 1 can be used to set feature quantities in the compression rate setting unit 13. Note that the feature detection performed by the feature detection unit 14 may instead be performed by the CPU 512.
On the other hand, in playback mode, for example, the recorded code data is read from the recording memory 509 and input to the moving image encoding unit 39, which decodes the input code data into YC data. The decoded YC data is temporarily stored in the external memory 514, then read out and processed by the display unit 510, sent to the display device 511, and displayed.
In the digital camera 5, the imaging data output from the solid-state imaging device 10 may also be stored in the external memory 514 for use. In this case, the external memory 514 is also used by the CPU 512 as a work memory; it stores the data processed by the image processing unit 16, which the display unit 510 reads out for display, and it temporarily stores the output code data of the moving image encoding unit 39, which the recording unit 508 reads out for use. Because the external memory is accessed this heavily, first storing the compressed data output by the solid-state imaging device 10 in the external memory 514 and then reading it out into the decompression unit 15 for processing makes it possible to reduce the memory access volume by the amount saved on the imaging data.
As described above, the digital camera 5 includes the solid-state imaging device 1 according to Embodiment 1; the moving image encoding unit 39 that encodes the pixel data processed by the image processing unit 16; the recording unit 508 that records the pixel data encoded by the moving image encoding unit 39 in the recording memory 509; the display unit 510 that displays the image data processed by the image processing unit 16; the program memory 513 that stores programs; and the CPU 512 that performs system control of the digital camera 5 based on the programs read from the program memory 513. The CPU 512 performs system control of the recording operation or the playback operation based on settings given from outside.
As a result, in recording and playback with the digital camera 5 as well, a compression rate can be set for each region extracted by the feature detection unit 14, and recording can be performed with an optimal distribution of compression rates.
(Embodiment 2)
In Embodiment 2, a configuration different from that of the solid-state imaging device 1 of Embodiment 1 will be described.
FIG. 26 is a block diagram showing the configuration of the solid-state imaging device according to Embodiment 2 of the present invention. Elements similar to those in FIG. 1 are given the same reference numerals, and detailed description is omitted. The solid-state imaging device 6 shown in FIG. 26 differs from the solid-state imaging device 1 according to Embodiment 1 in that the feature detection unit 64 is included in the solid-state imaging device 60.
Specifically, the pixel data output from the photoelectric conversion unit 11 is input to the compression unit 12 and also to the feature detection unit 64 built into the solid-state imaging device 60; the feature detection unit 64 performs feature detection for each horizontal line and outputs the result to the compression rate setting unit 13. The operation of the compression rate setting unit 13, the operation of the compression unit 12, and their mutual relationship are the same as in Embodiment 1, so description is omitted.
In synchronization with vertical and horizontal synchronization signals (not shown), the feature detection unit 64 selects a compression rate according to the signal level of the pixel data of each pixel in one horizontal line input from the photoelectric conversion unit 11, treats the pixels given low compression as attention region pixels, and outputs the corresponding pixel region within that horizontal line to the compression rate setting unit 13.
Here, FIG. 27 is a diagram showing the relationship between the pixel data level of each pixel in one horizontal line and the compression rate selected by the feature detection unit 64 in Embodiment 2.
The feature detection unit 64 holds two preset thresholds for the pixel data level, and selects the compression rate by comparing the pixels P1, P2, ..., Pn of one horizontal line input from the photoelectric conversion unit 11 against these thresholds with hysteresis. Specifically, from the pixel at which the level changes from below threshold 1 to above it, subsequent pixels are identified as pixels to be compressed at a high compression rate; from the pixel at which the level changes from above threshold 2, which is set lower than threshold 1, to below it, subsequent pixels are recognized as attention pixels to be compressed at a low compression rate. The feature detection unit 64 outputs this area information, together with the corresponding compression rates held in advance, to the compression rate setting unit 13. As a result, a high compression rate is set where the pixel level is high, that is, in high-luminance portions of the subject, a low compression rate is set at medium and low luminance, and the pixel data is compressed accordingly.
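The two-threshold comparison described above is a standard hysteresis scheme. The following is a minimal sketch of it in Python; the threshold values, rate labels, and function names are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the per-line rate selection in feature detection unit 64:
# rising through threshold 1 switches to high compression; falling
# below threshold 2 (< threshold 1) switches back to low compression.
HIGH, LOW = "high", "low"  # compression-rate labels (assumed)

def select_rates(line, threshold1=200, threshold2=160):
    assert threshold2 < threshold1
    rates = []
    state = LOW          # each line starts in the low-compression state
    prev = None
    for level in line:
        if prev is not None:
            if prev < threshold1 <= level:
                state = HIGH   # rose above threshold 1
            elif prev > threshold2 >= level:
                state = LOW    # fell below threshold 2
        rates.append(state)
        prev = level
    return rates

# A bright highlight is flagged for high compression; the level 180,
# which lies between the two thresholds, keeps the previous state --
# exactly the hysteresis that suppresses rate flicker.
print(select_rates([50, 120, 210, 230, 180, 150, 90]))
# ['low', 'low', 'high', 'high', 'high', 'low', 'low']
```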
In this way, high-compression processing of the pixel data is applied aggressively to the high-luminance portions of the subject, where the human eye is insensitive to fine luminance changes and where the levels are in any case compressed by the non-linear gamma conversion performed by the subsequent image processing unit 16. Deterioration due to compression distortion is thus avoided for medium- and low-luminance subjects, where image quality degradation is easily noticed, while high compression is applied aggressively to high-luminance portions, where degradation is hard to notice. Because the pixel data is compressed inside the solid-state image sensor, the amount of compressed pixel data output from the sensor is reduced, achieving a higher frame rate or lower power consumption. Furthermore, with the configuration of this embodiment the compression of the pixel data is completed inside the solid-state image sensor, so input signal lines and terminals from outside the sensor can be eliminated, allowing the apparatus to be miniaturized; and since the compression rate is determined in real time, the delay before a compression rate setting takes effect in the compression processing is minimized, so that in principle there is no frame-level delay.
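To see why quantization error in highlights is comparatively benign, consider a standard gamma curve; the exponent 2.2 below is an assumption for illustration, since the patent does not specify the transfer function used by the image processing unit 16.

```python
# A gamma encode compresses level differences at the bright end, so
# coarse quantization applied there before gamma is less visible.
def gamma_encode(v, gamma=2.2):
    """Map a linear-light value v in [0, 1] to a display value in [0, 1]."""
    return v ** (1.0 / gamma)

# Equal steps of 0.05 in linear light shrink near white after encoding:
for lo, hi in [(0.05, 0.10), (0.90, 0.95)]:
    print(f"{lo}->{hi}: encoded step {gamma_encode(hi) - gamma_encode(lo):.3f}")
# dark step ~0.095 vs bright step ~0.024: about a quarter the size
```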
Although this embodiment implicitly describes a monochrome solid-state image sensor without a per-pixel color filter, the present invention is not limited to this. For example, a single-chip color solid-state image sensor having a Bayer-array on-chip color filter may be used; in that case the feature detection unit 64 is naturally configured so that the hysteresis applies to level changes between adjacent pixels of the same color phase.
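For a Bayer line, same-color-phase neighbors are two pixels apart, so the comparison simply runs independently on each phase. A minimal sketch, reusing select_rates from the example above (the even/odd phase split is an assumption for a line such as R G R G ...):

```python
def select_rates_bayer(line, threshold1=200, threshold2=160):
    # Run the hysteresis comparison separately on each color phase,
    # then interleave the results back into pixel order.
    rates = [None] * len(line)
    for phase in (0, 1):                 # even pixels, then odd pixels
        positions = range(phase, len(line), 2)
        for i, r in zip(positions, select_rates(line[phase::2],
                                                threshold1, threshold2)):
            rates[i] = r
    return rates
```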
As described above, the present invention realizes an imaging apparatus that can always reduce the amount of data per unit time by at least a fixed amount, regardless of the shooting conditions.
Specifically, the imaging apparatus of the present invention includes the solid-state image sensor 10 and, outside the solid-state image sensor 10, the feature detection unit 14, the decompression unit 15, and the image processing unit 16; the solid-state image sensor 10 is composed of the photoelectric conversion unit 11, the compression unit 12, and the compression rate setting unit 13. The pixel data from the photoelectric conversion unit 11 is subjected to lossy compression by the compression unit 12 and output to the outside. The pixel data output from the solid-state image sensor 10 and decoded back into pixel data by the decompression unit 15 is processed by the image processing unit 16 and input to the feature detection unit 14 as a frame image. The feature detection unit 14 extracts the features of the image within the frame, generates optimum compression rate information for each individual area, and supplies the feature information to the compression rate setting unit 13. The compression rate setting unit 13 sets the compression rate information from the feature information. Based on this compression rate information, the compression unit 12 adaptively varies the compression rate within the frame and performs lossy compression.
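The units therefore form a feedback loop: features detected on one decompressed frame set the per-region rates used to compress the next. A toy sketch of that loop follows; the region layout, the bit-shift "compression", and all function names are assumptions for illustration, not the patent's actual coding scheme.

```python
# One pass of the feedback loop: compress on-sensor with per-region
# rates, decompress and process off-sensor, detect features, and feed
# the resulting rates back for the next frame.
def lossy_compress(raw, shifts):        # stands in for compression unit 12
    return {r: [v >> shifts[r] for v in px] for r, px in raw.items()}

def decompress(comp, shifts):           # stands in for decompression unit 15
    return {r: [v << shifts[r] for v in px] for r, px in comp.items()}

def detect_features(frame):             # stands in for feature detection unit 14
    return {r: sum(px) / len(px) for r, px in frame.items()}

def make_rates(features, threshold=128):  # stands in for rate setting unit 13
    # Bright regions tolerate high compression (drop 4 bits), others 1 bit.
    return {r: 4 if mean > threshold else 1 for r, mean in features.items()}

raw = {"highlight": [200] * 16, "shadow": [40] * 16}  # from photoelectric unit 11
shifts = {"highlight": 1, "shadow": 1}                # initial rates
frame = decompress(lossy_compress(raw, shifts), shifts)
shifts = make_rates(detect_features(frame))           # applied to the next frame
print(shifts)  # {'highlight': 4, 'shadow': 1}
```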
With this configuration, compression processing suited to the subject conditions and the shooting situation can be applied to the pixel data, so the amount of data output from the solid-state image sensor 10 per unit time can always be reduced by at least a fixed amount while the quantization error of the compressed pixel data is kept in an optimum state.
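Claim 11 below makes the per-line accounting explicit: the compressed data amount of one line is the sum over its regions of the compression rate times the pixel count, and its ratio to the uncompressed amount can drive the line period. A worked sketch (all numbers are illustrative assumptions):

```python
# Data amount of one output line as the sum over its regions of
# (pixels in region) x (bits per pixel after compression).
def line_data_amount(regions):
    """regions: list of (pixel_count, compressed_bits_per_pixel)."""
    return sum(count * bits for count, bits in regions)

raw_bits = 1000 * 12                            # 1000 pixels at 12 bits each
compressed_bits = line_data_amount([(200, 8),   # attention area, low compression
                                    (800, 6)])  # background, high compression
print(compressed_bits, "/", raw_bits)           # 6400 / 12000
# Since each line now needs ~53% of the original bandwidth, the
# one-line output period can be shortened, raising the frame rate.
```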
This makes it possible to output a high-quality, high-speed video signal and to realize an imaging apparatus without impairing image quality. Consequently, for a camera system including this imaging apparatus, built around an image sensor for example, an increase in circuit scale can be suppressed and power consumption can be reduced.
The imaging apparatus and digital camera of the present invention have been described above based on the embodiments, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable to those skilled in the art to the embodiments, and forms constructed by combining components of different embodiments, are also included within the scope of the present invention, provided they do not depart from the spirit of the present invention.
INDUSTRIAL APPLICABILITY
The present invention is applicable to imaging apparatuses, and in particular to imaging apparatuses that electronically generate and display or record moving images or still images, such as digital still cameras, digital video cameras, surveillance cameras, and drive recorder cameras.
DESCRIPTION OF SYMBOLS
1, 4, 6 Solid-state imaging device
2, 3 Camera
5 Digital camera
10, 60 Solid-state image sensor
11 Photoelectric conversion unit
12 Compression unit
13 Compression rate setting unit
14, 24, 34, 54, 64 Feature detection unit
15 Decompression unit
16 Image processing unit
28 Speed detection unit
39 Moving image encoding unit
111 Pixel cell array
112 Timing generator
113 AD conversion unit
114 Horizontal scan selector
121 Target pixel value input unit
122 Predicted pixel generation unit
123 Code conversion unit
124 Change extraction unit
125, 153 Quantization width determination unit
126 Quantization processing unit
127 Packing unit
151 Unpacking unit
152 Predicted pixel generation unit
154 Inverse quantization processing unit
155 Code generation unit
156 Inverse code conversion unit
157 Output unit
171 Serial conversion
172 Output control unit
410 Optical lens
411 Lens driving unit
443 HPF circuit
444 Per-region contrast integration circuit
445, 512 CPU
461 Preprocessing circuit
462 YC processing circuit
508 Recording unit
509 Recording memory
510 Display unit
511 Display device
513 Program memory
514, 3907 External memory
515 External switch
1101 Photodiode
1102 Readout transistor
1103 Floating diffusion
1104 Reset transistor
1105 Amplifier
1601 Vanishing point
1602 Region of interest
1603 Peripheral region
1901, 3901 YC data
1908 Motion vector information
1909 Vanishing point
2601, 2602 Profile
3902 Subtractor
3903 DCT quantization unit
3904 Inverse quantization / inverse DCT unit
3905 Intra prediction unit
3906 Deblocking filter
3908 Motion detection unit
3909 Motion compensation unit
3910 Inter prediction unit
3911 Prediction determination unit
3912 Variable-length encoding unit
3913 Adder
L1~L8 Common signal readout lines
S11~S18 Selection signal readout lines
S41~S48 Switches
P11~P88 Pixel cells
Claims (12)
- An imaging apparatus comprising:
a photoelectric conversion unit having a plurality of pixels arranged two-dimensionally that convert incident light into electrical signals, the photoelectric conversion unit outputting the plurality of electrical signals converted by the plurality of pixels as pixel data;
a detection unit that detects a feature of each region of an image based on the pixel data output from the photoelectric conversion unit;
a compression rate setting unit that sets a compression rate for each region based on the feature detected by the detection unit; and
a compression unit that performs lossy compression on the pixel data output from the photoelectric conversion unit according to the compression rate set by the compression rate setting unit.
- The imaging apparatus according to claim 1, further comprising:
a decompression unit that decompresses the pixel data compressed by the compression unit; and
an image processing unit that processes the pixel data decompressed by the decompression unit,
wherein the detection unit detects the feature of each region of the image from the pixel data processed by the image processing unit.
- The imaging apparatus according to claim 1 or 2, wherein the photoelectric conversion unit constitutes a MOS solid-state image sensor.
- The imaging apparatus according to any one of claims 1 to 3, wherein the detection unit detects, as the feature, motion of the entire image from difference information between image data composed of a plurality of pixel data processed by the image processing unit and image data previously processed by the image processing unit, and
the compression rate setting unit variably sets the compression rate for each region based on the detected motion information of the entire image.
- The imaging apparatus according to any one of claims 1 to 3, further comprising an encoding unit that encodes the image data processed by the image processing unit,
wherein the detection unit detects, as the feature, motion vector information for each region of the image from the pixel data processed by the image processing unit, based on the motion vector information output by the encoding unit, and
the compression rate setting unit variably sets the compression rate for each region based on the detected motion information.
- The imaging apparatus according to any one of claims 1 to 3, wherein the image has a face region that is a region showing a human face,
the detection unit detects, as the feature, region information indicating the face region in the image based on the pixel data output from the photoelectric conversion unit, and
the compression rate setting unit variably sets the compression rate individually for at least a region including the face region and a region not including the face region, based on the region information detected by the detection unit.
- The imaging apparatus according to any one of claims 1 to 6, wherein the compression unit further compresses the pixel data while appending identification information including at least compression rate information and data amount information, based on the compression rate information set by the compression rate setting unit.
- The imaging apparatus according to any one of claims 1 to 4, further comprising a lens driving unit that drives a lens,
wherein the detection unit includes:
a contrast detection unit that detects a contrast value for a predetermined region of the image based on the pixel data output from the photoelectric conversion unit; and
a CPU (Central Processing Unit) that detects information indicating that the predetermined region is an in-focus region by determining, based on the contrast value, whether the predetermined region is in focus, and that controls the lens driving unit so that the predetermined region comes into focus, and
the compression rate setting unit variably sets the compression rate individually for regions including at least the in-focus region and an out-of-focus region, based on the information.
- The imaging apparatus according to any one of claims 1 to 4, wherein the detection unit detects, as the feature, information on the magnitude relationship of the pixel values of each region of the image with respect to a predetermined threshold from the pixel data decompressed by the decompression unit, and
the compression rate setting unit variably sets the compression rate based on the magnitude relationship information detected by the detection unit.
- The imaging apparatus according to any one of claims 1 to 4, wherein the detection unit detects information on the magnitude relationship of the pixel values of each region of the image with respect to a predetermined value from the pixel data processed by the image processing unit, and
the compression rate setting unit variably sets the compression rate based on the magnitude relationship information detected by the detection unit.
- The imaging apparatus according to any one of claims 1 to 4, wherein the imaging apparatus outputs the pixel data compressed by the compression unit,
the data amount of the compressed pixel data is calculated based on the sum, over the regions of the image, of the compression rate and the number of pixels per line of the compressed pixel data output from the imaging apparatus, and
the compression unit further changes the frame rate by changing the period of one line based on the relationship between the data amount of the compressed pixel data and the data amount of the pixel data before compression by the compression unit.
- A digital camera comprising:
the imaging apparatus according to any one of claims 2 to 9;
an encoding unit that encodes the pixel data processed by the image processing unit;
a recording unit that records the pixel data encoded by the encoding unit in a recording memory;
a display unit that displays the image data processed by the image processing unit;
a program memory storing a program; and
a CPU that performs system control of the digital camera based on a program read from the program memory,
wherein the CPU performs system control of a recording operation or a reproduction operation based on a setting given from the outside.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010190092 | 2010-08-26 | ||
JP2010-190092 | 2010-08-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012026122A1 (en) | 2012-03-01 |
Family
ID=45723144
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/004721 WO2012026122A1 (en) | 2010-08-26 | 2011-08-25 | Imaging device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2012026122A1 (en) |
- 2011-08-25: WO PCT/JP2011/004721 patent/WO2012026122A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004200739A (en) * | 2002-12-16 | 2004-07-15 | Sanyo Electric Co Ltd | Image processor |
JP2005109606A (en) * | 2003-09-29 | 2005-04-21 | Sony Corp | Signal processing method, signal processing apparatus, recording apparatus, and reproducing apparatus |
JP2006197005A (en) * | 2005-01-11 | 2006-07-27 | Sharp Corp | Image coding apparatus and method of coding |
JP2006303690A (en) * | 2005-04-18 | 2006-11-02 | Sony Corp | Image signal processing apparatus, camera system, and image signal processing method |
JP2008113070A (en) * | 2006-10-27 | 2008-05-15 | Sony Corp | Imaging device and imaging method |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016174353A (en) * | 2015-03-17 | 2016-09-29 | キヤノン株式会社 | Imaging device, control method of the same, and computer program |
WO2018231087A1 (en) * | 2017-06-14 | 2018-12-20 | Huawei Technologies Co., Ltd. | Intra-prediction for video coding using perspective information |
CN111066322A (en) * | 2017-06-14 | 2020-04-24 | 华为技术有限公司 | Intra-prediction for video coding via perspective information |
US11240512B2 (en) | 2017-06-14 | 2022-02-01 | Huawei Technologies Co., Ltd. | Intra-prediction for video coding using perspective information |
CN111066322B (en) * | 2017-06-14 | 2022-08-26 | 华为技术有限公司 | Intra-prediction for video coding via perspective information |
WO2024168495A1 (en) * | 2023-02-13 | 2024-08-22 | 北京小米移动软件有限公司 | Photographing device and solid-state photographing element |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220248038A1 (en) | Rate control in video coding | |
EP2129105B1 (en) | Image reproducing apparatus, image reproducing method, imaging apparatus and method for controlling the imaging apparatus | |
KR100820528B1 (en) | Digital camera, memory control device usable for it, image processing device and image processing method | |
JP4844305B2 (en) | Imaging device | |
JP4641892B2 (en) | Moving picture encoding apparatus, method, and program | |
US20090273717A1 (en) | Noise reduction processing apparatus, noise reduction processing method, and image sensing apparatus | |
US20110080503A1 (en) | Image sensing apparatus | |
US8823832B2 (en) | Imaging apparatus | |
CN107306335B (en) | Image pickup apparatus and control method thereof | |
JP2007221273A (en) | Imaging apparatus and control method thereof, program, and storage medium | |
KR101046012B1 (en) | Dynamic image processing device, dynamic image processing method, and computer-readable recording medium having recorded dynamic image processing program | |
JP4190576B2 (en) | Imaging signal processing apparatus, imaging signal processing method, and imaging apparatus | |
KR20090071481A (en) | Imaging apparatus and control method therefor | |
WO2012026122A1 (en) | Imaging device | |
JP6341598B2 (en) | Image encoding device, image decoding device, image encoding program, and image decoding program | |
JP2010103823A (en) | Imaging apparatus | |
JP2023052939A (en) | Coding device, decoding device, coding method, decoding method, coding program, and decoding program | |
JP6152642B2 (en) | Moving picture compression apparatus, moving picture decoding apparatus, and program | |
JPH08275049A (en) | Image pickup device | |
JP2018148379A (en) | Image processing device, image processing method, and image processing program | |
JP2021078008A (en) | Image processing apparatus, image processing method and imaging apparatus | |
JP2017200199A (en) | Video compression device, video decoding device, and program | |
JP7289642B2 (en) | IMAGE PROCESSING DEVICE, CONTROL METHOD FOR IMAGE PROCESSING DEVICE, AND PROGRAM | |
US10791334B2 (en) | Image processing apparatus and image processing method | |
JP2009124278A (en) | Imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11819598; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 11819598; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: JP |