WO2010064374A1 - Image processing device

Image processing device

Info

Publication number
WO2010064374A1
Authority
WO
WIPO (PCT)
Prior art keywords
image processing
temporary storage
image
processing apparatus
image data
Prior art date
Application number
PCT/JP2009/006285
Other languages
French (fr)
Japanese (ja)
Inventor
古田暁広
高橋雄一郎
坪田一広
Original Assignee
Panasonic Corporation (パナソニック株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Publication of WO2010064374A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/21 Intermediate information storage
    • H04N1/2104 Intermediate information storage for one or a few pictures
    • H04N1/2112 Intermediate information storage for one or a few pictures using still video cameras
    • H04N1/2137 Intermediate information storage for one or a few pictures using still video cameras with temporary storage before final recording, e.g. in a frame buffer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3285 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device using picture signal storage, e.g. at transmitter
    • H04N2201/329 Storage of less than a complete document page or image frame
    • H04N2201/3292 Storage of less than a complete document page or image frame of one or two complete lines

Definitions

  • The present invention relates to an image processing apparatus that processes image data.
  • An image processing apparatus offloads tasks with a large processing load, such as adjustment and quality enhancement of video signals captured from a camera module and compression and decompression of moving images, from the host that controls the entire image processing system, and is generally configured using dedicated processing engines.
  • FIG. 1 is a block diagram showing the configuration of a conventional image processing apparatus.
  • The image processing apparatus 10 includes an image processing unit 11 that performs fixed processing A, an image processing unit 12 that performs fixed processing B, an image processing unit 13 that performs fixed processing C, a shared memory 14 such as an SDRAM (Synchronous Dynamic Random Access Memory), and a bus 15 interconnecting these parts.
  • A camera module 16 incorporating a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) image sensor is connected to the image processing unit 11, and captured image data is input to it as a video signal.
  • The image processing unit 11 is a processing engine that performs fixed processing A (for example, RAW data processing: black level correction, gain adjustment, white balance correction, and gamma correction).
  • The image processing unit 12 is a processing engine that performs fixed processing B (for example, YC (luminance and color difference) separation).
  • The image processing unit 13 is a processing engine that performs fixed processing C (for example, output processing).
  • The shared memory 14 is connected to the bus 15 and temporarily stores data transferred between the processing engines (image processing units 11 to 13).
  • The shared memory 14 is controlled by an SDRAM controller (not shown).
  • The SDRAM controller performs access control of the SDRAM by exchanging addresses, data, and control signals with each processing engine.
  • Patent Document 1 also discloses, in FIG. 2 thereof, a buffer memory which is controlled by a DMA control unit, is connected to a bus, and temporarily stores data transferred between a plurality of processing engines.
  • The present invention has been made in view of the above, and its object is to eliminate the memory I/O bottleneck and to provide a high-performance, programmable image processing apparatus.
  • An image processing apparatus according to the present invention processes image data composed of a plurality of divided image data, and includes: a plurality of image processing means, each executing a specific fixed process on divided image data; a plurality of temporary storage means for temporarily storing divided image data; a bus for selectively connecting the image processing means and the temporary storage means; and a switching control means for controlling the selective connections of the bus.
  • With this configuration, each image processing means inputs and outputs data in units of divided image data (for example, one line) over the selectively connectable bus.
  • Unnecessary processing can therefore be skipped and the processing order can be changed, so the apparatus can serve as a common platform applicable to a plurality of different camera products with different applications.
  • In addition, high-resolution image processing can be performed without using a plurality of external memories, so high performance and low cost can be achieved at the same time.
  • Diagram showing a bus switching control sequence of the image processing apparatus according to Embodiment 1 of the present invention
  • Diagram showing the processing timing of the image processing apparatus according to Embodiment 1 of the present invention
  • Block diagram showing the configuration of an image processing apparatus according to Embodiment 2 of the present invention
  • Diagram showing the data flow of the bus of the image processing apparatus according to Embodiment 2 of the present invention
  • Block diagram showing the configuration of an image processing apparatus according to Embodiment 3 of the present invention
  • Diagram showing the configuration of the temporary storage unit of an image processing apparatus according to Embodiment 4 of the present invention
  • Diagram showing the configuration of the temporary storage unit of an image processing apparatus according to Embodiment 5 of the present invention
  • Block diagram showing the configuration of an image processing apparatus according to Embodiment 6 of the present invention
  • Diagram showing the configuration of the temporary storage unit of the image processing apparatus according to Embodiment 6 of the present invention
  • Diagram showing the processing timing of the image processing apparatus according to Embodiment 1 of the present invention
  • Diagram showing the processing timing of the image processing apparatus according to Embodiment 6 of the present invention
  • Block diagram showing the configuration of an image processing apparatus according to Embodiment 7 of the present invention
  • FIG. 2 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention.
  • The present embodiment is an example in which the invention is applied to an apparatus that performs signal processing for a digital camera.
  • The image processing apparatus 200 is mainly composed of an input interface (I/F) 201, an output I/F 202, image processing units 211 to 214, a bus 220, and temporary storage units 231 to 233.
  • The image processing apparatus 200 performs predetermined signal processing based on a control signal input from the external main control unit 250.
  • The image processing apparatus 200 performs signal processing for a digital camera: it receives the output signal of an image sensor (Bayer-array RAW data from the camera) and generates and outputs a YUV signal.
  • The resolution of the image sensor is, for example, 2560 horizontal pixels × 1920 vertical pixels (about 4.9 Mpixels).
  • The input I/F 201 captures the RAW data into the image processing apparatus 200 using an input clock and the horizontal and vertical synchronization signals associated with it.
  • The output I/F 202 adds an output clock and horizontal/vertical synchronization signals to the YUV signal and outputs it to the outside of the image processing apparatus 200.
  • The image processing unit 211 performs RAW data processing (black level correction, gain adjustment, white balance correction, and gamma correction).
  • The image processing unit 212 is a two-dimensional digital filter with variable coefficients.
  • The image processing unit 212 incorporates a data buffer for four horizontal lines and executes a filter operation of up to 100 taps (20 horizontal × 5 vertical).
  • The image processing unit 213 performs YC (luminance and color difference) separation.
  • The image processing unit 213 has a built-in data buffer for four horizontal lines and generates luminance/color-difference data (a YUV signal) while performing interpolation with the data in this buffer.
  • The image processing unit 214 performs graphic superposition (superimposing a rectangle or a rectangular frame on an image). Superimposing a rectangle or rectangular frame on an image can alert the viewer or protect the privacy of a photographed subject.
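As a concrete illustration of the kind of fixed processing the image processing unit 211 performs, the sketch below chains the four named RAW-processing steps for a single sensor sample. All numeric parameters (black level, gains, gamma, the 10-bit range) are illustrative assumptions, not values from the patent.

```python
def raw_pixel_pipeline(sample, black_level=64, gain=1.2, wb_gain=1.5,
                       gamma=2.2, max_val=1023):
    """Apply black level correction, gain adjustment, white balance
    correction, and gamma correction to one 10-bit RAW sample.
    All parameter values are hypothetical."""
    v = max(sample - black_level, 0)              # black level correction
    v = v * gain                                  # gain adjustment
    v = v * wb_gain                               # white balance (per-color gain)
    v = min(v, max_val)                           # clip to the 10-bit range
    v = max_val * (v / max_val) ** (1.0 / gamma)  # gamma correction
    return round(v)
```

In the actual apparatus this chain runs per pixel inside the dedicated engine; the sketch only shows the order of the four corrections.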
  • The bus 220 selectively couples the image processing units 211 to 214 with the temporary storage units 231 to 233.
  • The bus 220 is composed of 2-input, 1-output selectors 221 to 224 (selectors S1 to S4) and a register 225 that switches and controls the selectors 221 to 224.
  • Each of the selectors 221 to 224 connects one of its two input buses to its output.
  • The register 225 outputs control signals that switch the selectors 221 to 224 for each application, according to an instruction from the main control unit 250 outside the image processing apparatus 200.
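The selector-and-register arrangement can be sketched in software as follows. The class and method names are invented for illustration; the 4-character control string simply assigns one control bit per selector, as the text describes.

```python
class Selector:
    """2-input, 1-output selector: forwards the input named by its control bit."""
    def __init__(self, ctrl="a"):
        self.ctrl = ctrl

    def route(self, a, b):
        return a if self.ctrl == "a" else b


class SelectorRegister:
    """Models register 225: holds one control bit per selector and rewrites
    them when the host (main control unit 250) programs a new application."""
    def __init__(self, count):
        self.selectors = [Selector() for _ in range(count)]

    def program(self, bits):
        for sel, bit in zip(self.selectors, bits):
            sel.ctrl = bit


bus = SelectorRegister(4)
bus.program("baab")   # the S1=b, S2=a, S3=a, S4=b setting given in the text
```

Programming a different 4-bit pattern re-routes the data paths without touching the processing engines, which is the whole point of the register-controlled bus.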
  • The temporary storage units 231 to 233 are 2-port SRAMs, each capable of storing data for two horizontal lines (equivalent to 5120 pixels).
  • Each 2-port SRAM is, for example, a 1-read/1-write memory of 16 bits × 5120 words.
  • The image processing units 211 to 214 are connected, through the bus 220, to signal lines for the read address (not shown), the write address (not shown), the read data, and the write data.
  • FIG. 3 is a diagram showing the processing contents and the selector control method of the bus 220 for each of three types of applications in which the present apparatus is used.
  • Information encoding the content of this figure is stored in the internal memory of the main control unit 250 outside the image processing apparatus 200.
  • For each of “1. normal use”, “2. monitoring under a dark environment”, and “3. medical use”, the information stored in the main control unit 250 includes the processing content (filter coefficients) of the image processing unit 212 (two-dimensional digital filter) and the selection directions of the selectors 221 to 224 of the bus 220.
  • The register 225 of FIG. 2 outputs a control signal (for example, 1-bit information) to each of the selectors 221 to 224 (selectors S1 to S4) according to the value set by the main control unit 250, thereby switching and controlling the selectors 221 to 224.
  • Each of the selectors 221 to 224 selects one of its two input buses, “a” or “b” (see FIG. 5). In FIG. 2, the upper input of each selector is “a” and the lower input is “b”.
  • In “3. medical use”, for example, processing is performed in the order: RAW data processing in the image processing unit 211 → YC separation in the image processing unit 213 → filter processing in the image processing unit 212 → graphic superposition in the image processing unit 214.
  • In this case the filter processing of the image processing unit 212 (two-dimensional digital filter) is capillary blood vessel enhancement (enhancement of red high-frequency components).
  • To realize this order, the selector 221 (S1) selects “b”, the selector 222 (S2) selects “a”, the selector 223 (S3) selects “a”, and the selector 224 (S4) selects “b”.
  • Because of the nature of their processing, the image processing units 211 to 214 all input and output data in units of one line of the image. For example, when the image processing unit 212 (two-dimensional digital filter) receives one new line of image data from the bus 220, it performs a filter operation of up to 100 taps on the five lines consisting of the four lines held in its built-in data buffer plus the new line, outputs one line of image data, and replaces the oldest line in the buffer with the new line.
  • The image processing unit 212 processes an entire screen by repeating this processing for each line.
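The line-buffer behaviour described above can be sketched as follows. A 4-line buffer plus the incoming line forms a 5-line window; for simplicity this sketch applies a 5-tap vertical convolution (a stand-in for the up-to-100-tap filter) and passes the first lines through unfiltered until the buffer is full. All names are illustrative.

```python
from collections import deque

def make_line_filter(vertical_taps):
    """Return a per-line processor with a built-in 4-line data buffer,
    mirroring how image processing unit 212 is described. The real unit
    applies up to a 20x5-tap filter; this sketch uses a 5x1 vertical one."""
    buffer = deque(maxlen=4)          # the built-in 4-line data buffer

    def process(new_line):
        if len(buffer) < 4:           # not enough lines buffered yet
            buffer.append(new_line)
            return new_line
        window = list(buffer) + [new_line]      # 5 lines in total
        out = [sum(t * window[k][x] for k, t in enumerate(vertical_taps))
               for x in range(len(new_line))]
        buffer.append(new_line)       # oldest line is replaced by the new one
        return out

    return process
```

With identity taps `[0, 0, 1, 0, 0]` the filter reproduces the middle line of each 5-line window, which makes the buffering behaviour easy to check.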
  • FIG. 4 is a diagram showing the processing timing of the image processing apparatus 200 in “1. normal use” of FIG. 3.
  • It shows the timing of image input; the processing latency of the image processing unit 211 (RAW data processing); the write operation to the temporary storage unit 231 by the image processing unit 211; the read operation from the temporary storage unit 231 by the image processing unit 213 (YC separation); the processing latency of the image processing unit 213; the write operation to the temporary storage unit 233 by the image processing unit 213; the read operation from the temporary storage unit 233 by the image processing unit 214 (graphic superposition); the processing latency of the image processing unit 214; and the timing of image output.
  • The vertical axes for the temporary storage units 231 and 233 indicate addresses.
  • RAW data from the input I/F 201 is input over a 16 μs period within a 40 μs cycle per line, so the data rate is 160 Mpixels/s (2560 pixels / 16 μs).
  • The image processing unit 211 (RAW data processing), the image processing unit 213 (YC separation), and the image processing unit 214 (graphic superposition) operate synchronously on the 40 μs horizontal synchronization cycle, with processing latencies of 10 μs, 24 μs, and 6 μs, respectively.
  • Each of the image processing units 211, 213, and 214 generates read/write addresses for the temporary storage units 231 and 233, and the write area and the read area are switched alternately in a 40 μs cycle.
  • The image processing unit 211 performs RAW data processing on the input image data with a processing latency of 10 μs and sequentially writes the processed data into the temporary storage unit 231. 40 μs after the start of processing by the image processing unit 211, the image processing unit 213 sequentially reads data from the temporary storage unit 231, performs YC separation with a processing latency of 24 μs, and sequentially writes the processed data into the temporary storage unit 233. 40 μs after the start of processing by the image processing unit 213, the image processing unit 214 sequentially reads data from the temporary storage unit 233, performs graphic superposition with a processing latency of 6 μs, and sequentially outputs the processed data.
  • In other words, the image processing units 211 to 214 operate in parallel, pipelining the processing in units of lines.
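The line-level pipelining can be sketched as a schedule: in cycle t, stage k works on line t−k, so once the pipeline is full all stages are busy simultaneously on different lines. The function and stage names are illustrative, and the per-stage latencies within a cycle are ignored; only the line-level parallelism is modelled.

```python
def pipeline_schedule(n_lines, stages):
    """For each 40-us cycle, list the (stage, line) pairs that are active,
    assuming each stage finishes one line per cycle and hands it to the
    next stage through a temporary storage unit."""
    n = len(stages)
    schedule = []
    for cycle in range(n_lines + n - 1):
        active = [(name, cycle - k) for k, name in enumerate(stages)
                  if 0 <= cycle - k < n_lines]
        schedule.append(active)
    return schedule

sched = pipeline_schedule(4, ["RAW", "YC separation", "superposition"])
# In cycle 2 the three units are processing lines 2, 1 and 0 at once.
```

The last stage finishes n_stages − 1 cycles after the last line enters, which matches the fixed 40 μs offsets between units in FIG. 4.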
  • The image processing apparatus 200 is used for the three types of applications shown in FIG. 3, and switches which of the image processing units 211 to 214 execute, and in what order, for each application.
  • The processing content of the image processing unit 212 also differs depending on the application. For example, when the image processing apparatus 200 is used for “2. monitoring under a dark environment”, the amount of light reaching the image sensor decreases and noise increases, so the image processing unit 212 removes high-frequency components after RAW data processing, improving the image quality so that it is easier for the observer to see.
  • FIG. 5 is a diagram showing a data flow of the bus 220 of the image processing apparatus 200. Thick solid lines in FIG. 5 indicate buses through which data flows by selection of the selectors 221 to 224 (selectors S1 to S4), and broken lines in the same figure indicate buses through which data does not flow due to non-selection of the selectors 221 to 224.
  • FIG. 5A shows the data flow for the processing order in “1. normal use” (image processing unit 211 (RAW data processing) → image processing unit 213 (YC separation) → image processing unit 214 (graphic superposition)).
  • FIG. 5B shows the data flow for the processing order in “2. monitoring under a dark environment” (image processing unit 211 (RAW data processing) → image processing unit 212 (two-dimensional digital filter) → image processing unit 213 (YC separation) → image processing unit 214 (graphic superposition)).
  • FIG. 5C shows the data flow for the processing order in “3. medical use” (image processing unit 211 (RAW data processing) → image processing unit 213 (YC separation) → image processing unit 212 (two-dimensional digital filter) → image processing unit 214 (graphic superposition)).
  • As described above, by setting the four selectors 221 to 224 (S1 to S4) of the bus 220 as shown in FIG. 3, the image processing apparatus 200 selectively couples the image processing units 211 to 214 with the temporary storage units 231 to 233; as a result, which image processing units execute, and in what order, is changed.
  • Since line-unit memories are used instead of a shared memory, the memory I/O bottleneck can be eliminated and cost can be kept lower than in the conventional device.
  • The conventional device uses a shared memory such as an SDRAM. Since the maximum throughput of currently available DDR SDRAM is about 1.3 GByte/s, the memory I/O becomes a bottleneck with a single SDRAM and the device cannot be configured; two SDRAMs are required. The conventional device therefore has to provide a plurality of external memories, not because the storage capacity is needed, but to secure the throughput, which is expensive.
  • In the present apparatus, three small-capacity SRAMs of 640 MByte/s each are sufficient; they can be integrated on one semiconductor device (LSI), which is inexpensive.
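The throughput figures quoted here follow from the numbers given earlier (2560-pixel lines, 16-bit data, one line transferred during the 16 μs active period of each 40 μs cycle). The short calculation below reproduces them; treating the 16 μs burst as the rate each memory must sustain is an interpretation on our part.

```python
pixels_per_line = 2560
bits_per_pixel = 16
burst_us = 16            # one line is moved during 16 us of each 40 us cycle

bytes_per_line = pixels_per_line * bits_per_pixel // 8        # 5120 bytes
one_stream_mb_s = bytes_per_line / burst_us                   # MByte/s, one direction
per_buffer_mb_s = 2 * one_stream_mb_s    # simultaneous 1-read + 1-write per SRAM
shared_total_mb_s = 3 * per_buffer_mb_s  # three intermediate transfers on one shared memory

print(one_stream_mb_s)     # 320.0 MByte/s per stream
print(per_buffer_mb_s)     # 640.0 MByte/s per temporary storage unit
print(shared_total_mb_s)   # 1920.0 MByte/s, above the ~1300 MByte/s DDR SDRAM limit
```

Three independent 640 MByte/s SRAMs carry the same traffic trivially, whereas a single shared SDRAM would have to absorb all 1920 MByte/s, which is why the conventional design needs two external memories.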
  • As described above, the register 225 switches and controls the selectors 221 to 224 under external control, changing how the image processing units 211 to 214 are combined with the temporary storage units 231 to 233 and thereby specifying the execution order of the image processing units 211 to 214.
  • Unnecessary processing can thus be skipped and the processing order changed, the memory I/O bottleneck can be eliminated, and a high-performance, programmable image processing apparatus can be realized.
  • The platform can also be shared among various camera products with different applications.
  • Furthermore, the clock supply or power supply to image processing units 211 to 214 that are skipped as unnecessary, and to unused temporary storage units 231 to 233, can be stopped. This reduces power consumption and, in battery-powered devices, extends the continuous operation time.
  • The second embodiment is an example in which an external memory is connected to the image processing apparatus described in Embodiment 1.
  • FIG. 6 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 2 of the present invention.
  • The same components as in FIG. 2 are assigned the same reference numerals, and overlapping description is omitted.
  • The image processing apparatus 300 includes an image processing apparatus main body 310 and an external memory 320.
  • The image processing apparatus main body 310 includes an input I/F 201, an output I/F 202, image processing units 211 to 214A, a DMA (Direct Memory Access) controller 311, a bus 220A, and temporary storage units 231 to 234.
  • Like the image processing apparatus 200 of FIG. 2, the image processing apparatus 300 performs signal processing for a digital camera: it takes the output signal of an image sensor (Bayer-array RAW data) as input, and generates and outputs a YUV signal.
  • In addition to having the same function as the image processing unit 214 in FIG. 2, the image processing unit 214A reads one line of image data at the same position from each of the temporary storage unit 233 and the temporary storage unit 234, and determines the presence or absence of a moving object (i.e., whether the image is still).
  • The bus 220A selectively couples the image processing units 211 to 214A, the DMA controller 311, and the temporary storage units 231 to 234.
  • The bus 220A adds to the bus 220 of FIG. 2 a bus and a selector for connecting the DMA controller 311; it is essentially an expansion of the bus 220 of FIG. 2.
  • Like the temporary storage units 231 to 233, the temporary storage unit 234 is a 2-port SRAM that can store data for two horizontal lines (equivalent to 5120 pixels).
  • The 2-port SRAM is, for example, a 1-read/1-write memory of 16 bits × 5120 words.
  • The DMA controller 311 transfers data between the bus 220A and the external memory 320 in units of one line.
  • The DMA controller 311 reads one line of image data from the temporary storage unit 233 and writes it to the external memory 320. It also reads the one line of image data at the same position in the old frame from the external memory 320 and writes it into the temporary storage unit 234.
  • The external memory 320 is a memory (for example, an SDRAM) with a large capacity compared with the temporary storage units 231 to 234.
  • The external memory 320 is provided to hold past image data in frame units.
  • The external memory 320 stores the image data of two frames, old and new.
  • FIG. 7 is a diagram showing the data flow of the bus 220A of the image processing apparatus 300.
  • The operation from the input of image data to the image processing unit 211 until the image data is written into the temporary storage unit 233 is the same as in the image processing apparatus 200 (FIG. 5) of Embodiment 1, so its description is omitted.
  • When the image data has been written to the temporary storage unit 233, the DMA controller 311 reads the one line of image data from the temporary storage unit 233 and writes it to the external memory 320. At the same time, that line of image data is input to the image processing unit 214A.
  • The DMA controller 311 also reads one line of image data of the old frame from the external memory 320 and writes it into the temporary storage unit 234.
  • The image processing unit 214A reads the one line of image data of the old frame from the temporary storage unit 234.
  • The DMA transfer is controlled so that the one line of the old frame and the one line of the new frame are at the same position in the original frame.
  • The image processing unit 214A determines the presence or absence of a moving object from the difference between the old and new lines of image data at the same position.
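The per-line moving-object test of the image processing unit 214A can be sketched as a comparison of the new-frame line against the old-frame line at the same position. The threshold value and the function name are illustrative assumptions, not details from the patent.

```python
def line_has_motion(new_line, old_line, threshold=8):
    """Return True when any pixel of the new-frame line differs from the
    same-position old-frame line by more than the (hypothetical) threshold."""
    return any(abs(new - old) > threshold
               for new, old in zip(new_line, old_line))
```

For a still scene every line compares equal within the threshold, so the frame is judged to be a still image; a single line exceeding the threshold indicates a moving object.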
  • As described above, the image processing apparatus 300 includes the external memory 320, which holds past image data in frame units, and the DMA controller 311 transfers data between the bus 220A and the external memory 320. This makes signal processing that refers to past frames possible.
  • Because the large-capacity external memory 320 is an external component, it can be omitted when signal processing that refers to past frames is unnecessary. This makes it possible to build a platform that can serve both high-performance and low-cost products.
  • The third embodiment is an example in which an external device is connected to the image processing apparatus described in Embodiment 2.
  • FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 3 of the present invention.
  • The same components as in FIG. 6 are assigned the same reference numerals, and overlapping descriptions are omitted.
  • The image processing apparatus 300 comprises an image processing apparatus main body 310, an external memory 320, a bus 330, and an external device 340.
  • The image processing apparatus main body 310 includes an input I/F 201, an output I/F 202, image processing units 211 to 214A, a DMA controller 311, a bus 220A, and temporary storage units 231 to 234.
  • The DMA controller 311 writes image data to, and reads image data from, the external memory 320 via the bus 330.
  • The external device 340 is, for example, a general-purpose processor or an FPGA (Field-Programmable Gate Array).
  • The external device 340 reads image data from the external memory 320 and performs predetermined processing, for example still-image object recognition such as face recognition. The external device 340 then outputs a control signal indicating the result of the processing to the main control unit 250. For example, when the external device 340 performs face recognition and a person's face is recognized in the image data, it outputs a control signal indicating this to the main control unit 250.
  • The main control unit 250 controls the type of signal processing of the image processing apparatus 300 at the timing when the control signal is input from the external device 340. For example, while the image processing apparatus 300 is operating in “1. normal use”, if the main control unit 250 receives from the external device 340 a control signal indicating that a human face has been recognized, it instructs the image processing apparatus 300 to process at a higher resolution.
  • As described above, the fixed processing of the image processing units 211 to 214A and the variable processing of the external device 340 can be freely combined in any execution order, so a highly flexible signal processing platform can be constructed.
  • For example, face recognition can be changed to vehicle recognition merely by changing the program of the external device 340.
  • The external device 340 (a general-purpose processor), the bus 330, the image processing units 211 to 214A, the DMA controller 311, the bus 220A, and the temporary storage units 231 to 234 can be integrated on one semiconductor device (LSI). This configuration reduces the number of parts and the mounting area, enabling downsizing and cost reduction of the device.
  • Embodiment 4 is a configuration example of the temporary storage unit.
  • FIG. 9 is a diagram showing the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 4 of the present invention.
  • The same components as in FIG. 2 are assigned the same reference numerals.
  • FIG. 9 representatively shows the temporary storage unit 231 among the temporary storage units 231 to 233 in FIG. 2.
  • The temporary storage unit 231 includes a FIFO (First-In First-Out) buffer 251, which can store divided image data on its own, and a control circuit 252 that controls the FIFO 251.
  • The image processing units 211 to 214 need to recognize the positional relationship, within the image, of the divided images they process. For example, in camera signal processing, the color arrangement of the image sensor generally differs between even and odd lines, so a unit must know whether the line being processed is even or odd.
  • Conventionally, this positional relationship is encoded in the addresses of the temporary storage unit: for example, even-numbered lines are stored at addresses 0 to 2559 and odd-numbered lines at addresses 2560 to 5119.
  • With that scheme, the bus 220 needs a signal line for the write address and a signal line for the read address.
  • In the present embodiment, when storing divided image data in the FIFO 251 of the temporary storage unit 231, the control circuit 252 adds information indicating the position of the divided image within the image. For example, when the divided image is one line, a line number is added to the stored line.
  • For example, the control circuit 252 stores the position information of divided image 1 in the FIFO 251 followed by the data of divided image 1, and then the position information of divided image 2 followed by the data of divided image 2.
  • The FIFO 251 outputs data in the order in which it was input.
  • The image processing unit 212 can therefore recognize the position of each divided image simply by reading the divided image data from the FIFO 251 of the temporary storage unit 231 in order.
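The address-free FIFO scheme can be sketched as below. The control circuit interleaves each line's position information with its data; the interleaving order chosen here (position first, then data) and the class name are assumptions for illustration.

```python
from collections import deque

class PositionTaggedFIFO:
    """Sketch of temporary storage unit 231 in Embodiment 4: a FIFO where the
    control circuit writes a line number ahead of each line of data, so the
    reader recovers the position just by popping in order -- no address lines."""
    def __init__(self):
        self._fifo = deque()

    def write_line(self, line_number, data):
        self._fifo.append(line_number)   # position information
        self._fifo.append(data)          # divided image data

    def read_line(self):
        line_number = self._fifo.popleft()
        data = self._fifo.popleft()
        return line_number, data
```

The reader never supplies an address; ordering alone carries the positional relationship, which is exactly what lets the address lines be removed from the bus.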
  • According to this configuration, the image processing units 211 to 214 do not need to generate addresses for the temporary storage units 231 to 233, so a large number of signal lines (address lines) can be removed from the bus 220.
  • Each circuit, such as an image processing unit (processing engine), can then be placed closer together, shortening signal transmission times and allowing the operation speed to be increased.
  • The information indicating the positional relationship of the divided images need only identify the top line of the image (1-bit flag information is sufficient); the image processing units 211 to 214 can then count lines from the top line and calculate the positional relationship of each divided image they process.
  • the fifth embodiment is another configuration example of the temporary storage unit.
  • FIG. 10 is a diagram showing the configuration of a temporary storage unit of the image processing apparatus according to the fifth embodiment of the present invention. The same components as in FIG. 2 are assigned the same reference numerals.
  • FIG. 10 is a representative of the temporary storage unit 231 among the temporary storage units 231 to 233 in FIG.
  • the temporary storage unit 231 includes a control circuit 261, two or more SRAMs 263 and 265, and registers 262 and 264 associated with the SRAMs 263 and 265.
  • the control circuit 261 controls the division image data input from the bus 220 to be alternately stored in the SRAM 263 or the SRAM 265.
  • The control circuit 261 rewrites the value of the register 262 when starting storage of divided image data in the SRAM 263, and rewrites the value of the register 264 when starting storage of divided image data in the SRAM 265. Further, when outputting divided image data from the SRAM 263 or the SRAM 265 of the temporary storage unit 231, the control circuit 261 attaches the value of the corresponding register 262 or 264 (information indicating the positional relationship of the corresponding divided image in the image) to the output.
  • the registers 262 and 264 hold information indicating the positional relationship of the corresponding divided image.
  • The SRAMs 263 and 265 are memories each capable of storing the data of one divided image on its own.
  • The divided image data is written alternately to the two SRAMs 263 and 265 in a 2-bank configuration, in which data is read from one bank while the other bank is being written.
  • The image processing units 211 to 214 need to recognize the positional relationship between the divided images to be processed. Usually, this positional relationship is recognized by storing predetermined data at a predetermined address of the temporary storage unit.
  • In the present embodiment, when the control circuit 261 stores divided image data in the SRAM 263 or the SRAM 265 of the temporary storage unit 231, it stores information indicating the positional relationship of that divided image in the register (262 or 264) associated with that SRAM. For example, when the divided image is one line, the line number of the stored line is stored.
  • When outputting divided image data from the SRAM 263 or the SRAM 265 of the temporary storage unit 231, the control circuit 261 attaches the value of the register 262 or 264 as information indicating the positional relationship of the corresponding divided image in the image.
  • the image processing unit 212 can recognize the positional relationship of the corresponding divided image with respect to the input divided image data.
  • The image processing units 211 to 214 do not need to generate addresses for the temporary storage units 231 to 233, so a large number of signal lines (address lines) can be eliminated from the bus 220.
  • By eliminating a large number of signal lines, the circuits (processing engines) can be arranged close to each other, thereby shortening the signal transmission time and increasing the operating speed.
  • The values of the registers 262 and 264 may be anything that indicates the top line of the image (1-bit flag information); if the image processing units 211 to 214 know the top line of the image, they can calculate the positional relationship of the divided image to be processed.
  • For the configuration of the temporary storage unit, either the configuration of the fourth embodiment or that of the fifth embodiment may be adopted.
  • In the fifth embodiment, since the temporary storage unit can be configured with two 1-port SRAMs, the circuit scale can be reduced compared with the configuration of the fourth embodiment using a FIFO (which is usually configured with a 2-port SRAM).
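The 2-bank arrangement can be sketched as follows, assuming each bank's register simply holds the line number of the divided image currently stored in that bank (class and method names are illustrative):

```python
class TwoBankTempStore:
    """Embodiment-5 sketch: two single-port SRAM banks used ping-pong
    style, each with a register holding the positional information
    (line number) of the divided image stored in it."""

    def __init__(self):
        self.banks = [None, None]      # SRAM 263 / SRAM 265
        self.registers = [None, None]  # register 262 / register 264
        self._write_bank = 0

    def store(self, line_number, line_data):
        b = self._write_bank
        self.registers[b] = line_number  # rewritten when storage starts
        self.banks[b] = line_data
        self._write_bank ^= 1            # alternate banks on each line

    def read(self, bank):
        # On output, the register value (positional information) is
        # attached to the divided-image data.
        return self.registers[bank], self.banks[bank]

ts = TwoBankTempStore()
ts.store(0, "line0-data")
ts.store(1, "line1-data")
print(ts.read(0))  # → (0, 'line0-data')
print(ts.read(1))  # → (1, 'line1-data')
```

While one bank is being written, the other can be read, which is what allows each bank to be a simple 1-port SRAM.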
  • FIG. 11 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 6 of the present invention.
  • the same components as in FIG. 2 will be assigned the same reference numerals.
  • the image processing apparatus 400 includes an input I / F 201, an output I / F 202, image processing units 211 to 214B, a bus 220B, and temporary storage units 231B to 233B.
  • The image processing units 212B to 214B receive, from the temporary storage units 231B to 233B, a control signal indicating that storage of divided image data is completed, and start operation at the timing when this signal is input.
  • the bus 220B selectively couples the image processing units 211 to 214B and the temporary storage units 231B to 233B.
  • The bus 220B is an expanded configuration of the bus 220 of FIG. 2, in which signal lines for supplying the control signals from the temporary storage units 231B to 233B to the image processing units 212B to 214B are added.
  • FIG. 12 is a diagram showing the configuration of a temporary storage unit of the image processing apparatus 400. In FIG. 12, the same components as in FIG. 10 are assigned the same reference numerals. Among the temporary storage units 231B to 233B in FIG. 11, the temporary storage unit 231B is representatively shown.
  • the temporary storage unit 231B includes a control circuit 261B, two or more SRAMs 263 and 265, and registers 262 and 264 associated with the SRAMs 263 and 265.
  • When storing divided image data in the SRAMs 263 and 265, the control circuit 261B writes, in the registers 262 and 264, information indicating the positional relationship of the corresponding divided image in the image. Furthermore, the control circuit 261B generates a control signal indicating that storage of the divided image data in the SRAMs 263 and 265 is completed, and outputs the control signal to the image processing units 212B to 214B. The control circuit 261B also performs bank switching simultaneously with the output of the control signal and accepts input of new divided image data.
  • The present embodiment is characterized by the bank switching timing of the temporary storage units 231B to 233B and the operation start timing of each of the image processing units 212B to 214B.
  • FIG. 13 is a diagram showing processing timings of the image processing apparatus according to the first embodiment for comparison with the present embodiment.
  • FIG. 14 is a diagram showing processing timings of the image processing apparatus 400 according to the present embodiment.
  • FIGS. 13 and 14 each show the processing timing in "1. normal use" shown in FIG. 3.
  • The bus 220 is connected so that data flows in the order: image processing unit 211 (RAW data processing) → temporary storage unit 231B → image processing unit 213B (YC separation) → temporary storage unit 233B → image processing unit 214B (graphics superposition).
  • RAW data from the input I/F 201 is input in a 16 µs period, and the data rate is 160 MHz/pixel.
  • the image processing unit 211 (RAW data processing), the image processing unit 213B (YC separation), and the image processing unit 214B (graphics superposition) have processing latencies of 10 ⁇ s, 24 ⁇ s, and 6 ⁇ s, respectively.
  • In the image processing apparatus according to the first embodiment, when bank switching is performed simultaneously, the switching timing is limited by the processing time of the image processing unit 213B, which requires the longest processing time. Therefore, the image processing apparatus according to the first embodiment cannot achieve its full processing performance.
  • The image processing apparatus of the first embodiment requires a 40 µs period to output one line of image data, so only 40% of the original performance is obtained.
  • In the present embodiment, the image processing units 212B to 214B operate at the timings at which control signals indicating that storage of divided image data is completed are input from the temporary storage units 231B to 233B. Therefore, bank switching can be performed individually for each of the temporary storage units 231B to 233B, and the original performance can be obtained.
  • The image processing unit 211 processes the RAW data with a processing latency of 10 µs on the input divided image data of 16 µs per line, and sequentially writes the processed data in the temporary storage unit 231B. The image processing unit 211 processes the divided image data of the first line, and subsequently processes the divided image data of the second line.
  • the image processing unit 213B sequentially reads data from the temporary storage unit 231B, performs YC separation with a processing latency of 24 ⁇ s, and sequentially stores processed data in the temporary storage unit 233B. Write. Further, the image processing unit 213B processes the divided image data of the first line, and subsequently processes the divided image data of the second line.
  • the image processing unit 214B sequentially reads data from the temporary storage unit 233B, performs graphic superposition with a processing latency of 6 ⁇ s, and sequentially outputs processed data.
  • the image processing unit 214B processes the divided image data of the first line, and subsequently processes the divided image data of the second line.
  • the image processing apparatus 400 can output image data of one line in a period of 16 ⁇ s with respect to RAW data input in a period of 16 ⁇ s per line.
  • Even if the processing times of the image processing units 211 to 214B are uneven, the processing performance of the entire apparatus can be improved compared with the case where bank switching is performed simultaneously.
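One hedged way to reproduce the figures quoted in this embodiment (a 40 µs line period and 40% performance with simultaneous bank switching, versus the full 16 µs period with individual switching) is to assume the simultaneous-switching period equals the input period plus the longest stage latency; this model is an assumption, not taken from the patent:

```python
input_period_us = 16          # one line of RAW data arrives every 16 µs
latencies_us = {"RAW": 10, "YC": 24, "graphics": 6}

# Assumption: with simultaneous bank switching, the line period is limited
# by the input period plus the slowest stage (YC separation, 24 µs).
simultaneous_period = input_period_us + max(latencies_us.values())  # 40 µs

# With per-unit bank switching, the stages pipeline and the line period
# equals the input period.
pipelined_period = input_period_us                                   # 16 µs

print(simultaneous_period)                     # → 40
print(pipelined_period / simultaneous_period)  # → 0.4 (the "40%" in the text)
```

Under this model, the 40% figure is just the ratio of the pipelined 16 µs period to the 40 µs simultaneous-switching period.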
  • FIG. 15 is a diagram showing a configuration of a temporary storage unit of an image processing apparatus according to Embodiment 7 of the present invention.
  • the same components as in FIG. 12 will be assigned the same reference numerals.
  • the temporary storage unit 231C having the same configuration as the temporary storage unit 231B is connected to the bus.
  • Temporary storage units 231B and 231C are configured of, for example, an SRAM.
  • one temporary storage unit 231D having a large capacity can be configured using a plurality of temporary storage units 231B and 231C. This can be realized by using an SRAM for the temporary storage unit 231D.
  • For example, a temporary storage unit 231D for one line of 2560 horizontal pixels can be configured by using two temporary storage units 231B and 231C, each of which stores a divided image of one line of 1280 horizontal pixels. In this case, the register indicated by the broken line in FIG. 15B can be eliminated.
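The idea of combining two one-line storage units into one wider unit can be sketched in software as follows. The widths follow the example in the text (two 1280-pixel units forming one 2560-pixel unit), but the class names and interface are illustrative assumptions, not the patent's circuit:

```python
class LineBuffer:
    """A 1280-pixel one-line temporary storage unit (e.g. 231B or 231C)."""
    def __init__(self, width=1280):
        self.width = width
        self.data = [0] * width

    def write(self, pixels):
        assert len(pixels) == self.width
        self.data = list(pixels)

class WideLineBuffer:
    """Two 1280-pixel units combined into one 2560-pixel unit (231D).
    Only one unit's positional register would be kept; the other
    (broken line in FIG. 15B) becomes redundant."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.width = low.width + high.width

    def write(self, pixels):
        assert len(pixels) == self.width
        self.low.write(pixels[:self.low.width])   # first half of the line
        self.high.write(pixels[self.low.width:])  # second half of the line

    def read(self):
        return self.low.data + self.high.data

wide = WideLineBuffer(LineBuffer(), LineBuffer())
wide.write(list(range(2560)))
print(len(wide.read()))  # → 2560
```

The same two physical units can thus serve either as two independent 1280-pixel channel buffers or as one 2560-pixel buffer, which is the basis of the common-platform argument below.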
  • FIGS. 16 and 17 are block diagrams showing the configuration of an image processing apparatus according to a seventh embodiment of the present invention that performs stereo camera input processing.
  • the same components as in FIG. 2 will be assigned the same reference numerals and overlapping explanations will be omitted.
  • The image processing apparatus 500 includes input I/Fs 201A and 201B, an output I/F 202, image processing units 511A, 511B, 512A, 512B, 213, and 214, a bus 220C, and temporary storage units 531A, 531B, 532A, 532B, and 234.
  • The image processing units 511A, 511B, 512A, and 512B are two sets of the image processing units 211 and 212 of FIG. 2, provided for 2-channel support.
  • The temporary storage units 531A, 531B, 532A, and 532B are two sets of the temporary storage units 231 and 232 shown in FIG. 2.
  • The bus 220C corresponds to the above and has the same basic configuration as the bus 220 of FIG. 2.
  • the temporary storage units 531A and 532A in FIG. 16 correspond to the temporary storage unit 231B in FIG. 15A
  • The temporary storage units 531B and 532B correspond to the temporary storage unit 231C in FIG. 15A.
  • The image processing units 511A and 511B perform RAW data processing, and the image processing units 512A and 512B perform YC separation. Further, the image processing unit 213 performs distance measurement, and the image processing unit 214 performs output screen generation processing.
  • the image processing apparatus 500 can process a 2-channel stereo camera input (horizontal resolution: 1280 pixels), add distance information of an object to a screen, and output it.
  • FIG. 17 shows an example in which the same image processing apparatus 500 as that of FIG. 16 processes a 1-channel camera input whose horizontal resolution is doubled (2560 pixels).
  • a plurality of temporary storage units 531A and 531B are used to configure one temporary storage unit 531C having a large capacity.
  • one temporary storage unit 532C having a large capacity is configured by using a plurality of temporary storage units 532A and 532B.
  • Temporary storage units 531C and 532C in FIG. 17 correspond to the temporary storage unit 231D in FIG.
  • A high-resolution single-channel processing system and a lower-resolution multi-channel processing system can be constructed on the same platform. This makes it possible to share a common platform across various camera products (e.g., multiview cameras and high-resolution cameras).
  • Eighth Embodiment: FIGS. 18 and 19 are block diagrams showing the configuration of an image processing apparatus according to an eighth embodiment of the present invention.
  • In FIGS. 18 and 19, the same components as in FIG. 2 and FIG. 16 are assigned the same reference numerals and overlapping explanations are omitted.
  • The image processing apparatus 600 includes an input I/F 201, an output I/F 202, image processing units 211 to 214, a bus 220D, temporary storage units 631A, 631B, 632A, 632B, 633A, and 633B, and a power control unit 610.
  • the image processing apparatus 600 has, for example, two or more operation modes such as a low power mode and a normal mode.
  • In the low power mode, the image processing apparatus 600 reduces power consumption by lowering the processing resolution of the image or by using only some of the image processing units 211 to 214.
  • When a specific factor is detected, the image processing apparatus 600 increases the processing resolution and the number of image processing units 211 to 214 to be used.
  • The specific factors relate to intelligent monitoring, such as detection of intruders and detection of suspicious objects.
  • FIG. 18 is an example of the low power mode.
  • the image processing apparatus 600 reduces the horizontal resolution to 1280 pixels and does not perform color processing in YC separation.
  • The image processing unit 211 performs RAW data processing.
  • The image processing unit 212 performs only luminance processing.
  • The image processing unit 213 performs color processing (skipped in the low power mode), and the image processing unit 214 performs output screen generation and abnormality detection.
  • the power control unit 610 turns off the image processing unit 213, the temporary storage unit 631B, the temporary storage unit 632B, and the temporary storage unit 633B.
  • FIG. 19 shows an example of the normal mode in the same image processing apparatus 600 as FIG.
  • When an abnormality is detected, the detection signal is sent to the power control unit 610, and the image processing apparatus 600 shifts to the normal mode.
  • the image processing apparatus 600 performs color processing in YC separation while increasing the horizontal resolution to 2560 pixels.
  • the power control unit 610 turns on the image processing unit 213, the temporary storage unit 631B, the temporary storage unit 632B, and the temporary storage unit 633B.
  • According to the present embodiment, it is possible to suppress power consumption at normal times while processing the necessary images with high performance. This is very effective as power control for continuously operating (24-hour) devices such as surveillance cameras.
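The mode behavior described in this embodiment can be sketched as follows. The unit names follow the text (FIG. 18 turns off unit 213 and temporary storage units 631B, 632B, 633B in the low power mode); the mode-transition logic itself is an illustrative assumption:

```python
# Units powered off in the low power mode, per the description of FIG. 18.
LOW_POWER_OFF = {"213", "631B", "632B", "633B"}
ALL_UNITS = {"211", "212", "213", "214",
             "631A", "631B", "632A", "632B", "633A", "633B"}

class PowerControl:
    """Sketch of the power control unit 610: low power mode by default,
    shifting to normal mode when an abnormality is detected."""

    def __init__(self):
        self.mode = "low_power"

    def powered_units(self):
        if self.mode == "low_power":
            return ALL_UNITS - LOW_POWER_OFF
        return set(ALL_UNITS)

    def on_abnormality_detected(self):
        # The detection signal from the abnormality-detection stage
        # shifts the apparatus to the normal mode (FIG. 19).
        self.mode = "normal"

pc = PowerControl()
print("213" in pc.powered_units())  # → False (off in low power mode)
pc.on_abnormality_detected()
print("213" in pc.powered_units())  # → True (back on in normal mode)
```

This mirrors the described flow: run lean by default, then power the extra units back on only when intelligent monitoring flags something worth processing at full quality.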
  • the image processing apparatus is not limited to camera signal processing.
  • the processing unit of the image (the size of the divided image) is not limited to “one line”, and may be a plurality of lines, or may be one tile in a tile configuration in which the image is divided in a grid shape.
  • the configuration of the bus is not limited to the configuration with four selectors.
  • The execution order of the image processing units 211 to 214 is not limited to configurations such as 1→2→3→4, 1→3→2→4, and 1→3→4.
  • The circuits (processing engines) and shared memories that constitute the image processing apparatus, their number, their connection method, and the like are not limited to those described above.
  • the image processing apparatus is suitable for use in the camera field and in programmable image processing systems such as intelligent surveillance.
  • 200, 300, 400, 500, 600 Image processing apparatus; 201 Input I/F; 202 Output I/F; 211 to 214 Image processing unit; 220 Bus; 221 to 224 Selector; 225 Register; 231 to 233 Temporary storage unit; 311 DMA controller; 320 External memory; 330 Bus; 340 External device


Abstract

Provided is a high-performance, programmable image processing device that solves the problem of the memory IO bottleneck. The image processing device (200) includes: image processing units (211 to 214) which each execute a particular process in units of a partition image (such as one line of an image); temporary storage units (231 to 233) which temporarily store data in units of the partition image; selectors (221 to 224) which selectively connect the image processing units (211 to 214) to the temporary storage units (231 to 233); and a register (225) which controls switching of the selectors (221 to 224). The image processing device (200) defines the execution order of the image processing units (211 to 214) by means of the register (225), which switches the selectors (221 to 224) under external control so as to modify the connection state between the image processing units (211 to 214) and the temporary storage units (231 to 233).

Description

Image processing device
The present invention relates to an image processing apparatus that processes image data.
An image processing apparatus performs, on behalf of the host that controls the entire image processing system, processing with a large processing load, such as adjustment and quality improvement of video signals captured from a camera module and compression and decompression of moving images, in order to reduce the processing load on the host. It is generally configured using dedicated processing engines.
FIG. 1 is a block diagram showing the configuration of a conventional image processing apparatus. As shown in FIG. 1, the image processing apparatus 10 includes an image processing unit 11 that performs fixed processing A, an image processing unit 12 that performs fixed processing B, an image processing unit 13 that performs fixed processing C, a shared memory 14 composed of an SDRAM (Synchronous Dynamic Random Access Memory) or the like, and a bus 15 interconnecting these units.
A camera module 16 incorporating a CCD (Charge Coupled Device) camera or a CMOS (Complementary Metal Oxide Semiconductor) camera is connected to the image processing unit 11, and captured image data is input as a video signal. The image processing unit 11 is a processing engine that performs fixed processing A (for example, RAW data processing (black level correction, gain adjustment, white balance correction, gamma correction)). The image processing unit 12 is a processing engine that performs fixed processing B (for example, YC (luminance and color difference) separation). The image processing unit 13 is a processing engine that performs fixed processing C (for example, output processing).
The shared memory 14 is connected to the bus 15 and temporarily stores data to be transferred between the plurality of processing engines (image processing units 11 to 13). The shared memory 14 is controlled by an SDRAM controller (not shown). The SDRAM controller controls access to the SDRAM by exchanging addresses, data, and control signals with each processing engine.
Patent Document 1 also discloses, in FIG. 2 thereof, a buffer memory that is controlled by a DMA control unit, is connected to a bus, and temporarily stores data transferred between a plurality of processing engines.
Japanese Patent Application Publication No. 2002-344710
However, since the above conventional image processing apparatus uses a shared memory for data transfer between a plurality of processing engines, the memory IO (In-Out) becomes a bottleneck, and even if the performance of each processing engine is sufficient, processing may break down in the apparatus as a whole. In the case of FIG. 1, the bottleneck occurs between the shared memory 14 and the bus 15.
For example, in the camera field, recent advances in semiconductor technology have significantly increased the pixel count and operating speed of CCD and CMOS cameras, and apparatuses that process the output image of a camera module are required to have the processing performance to handle high resolutions and frame rates. As higher resolution, higher frame rates, and other performance improvements progress further, the throughput of data handled by the processing engines will increase, and the memory IO may become a bottleneck, causing processing to break down.
An object of the present invention, made in view of the above, is to eliminate the memory IO bottleneck and to provide a high-performance, programmable image processing apparatus.
An image processing apparatus according to the present invention is an image processing apparatus for processing image data composed of a plurality of divided image data, and adopts a configuration including: a plurality of image processing means for executing specific fixed processing on the divided image data; a plurality of temporary storage means for temporarily storing the divided image data; a bus for selectively coupling the image processing means and the temporary storage means; and switching control means for selectively controlling the form of coupling of the bus.
According to the present invention, combining the property that the image processing means performs input and output in units of divided image data (for example, one line) with a selectively connectable bus makes it possible to skip unnecessary processing and to change the processing order, so the apparatus can be used as a common platform applicable to multiple types of camera products with different applications. In addition, since high-resolution image processing can be performed without using a plurality of external memories, both high performance and low cost can be achieved.
FIG. 1 is a block diagram showing the configuration of a conventional image processing apparatus.
FIG. 2 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 1 of the present invention.
FIG. 3 is a diagram showing a bus switching control sequence of the image processing apparatus according to Embodiment 1 of the present invention.
FIG. 4 is a diagram showing processing timing of the image processing apparatus according to Embodiment 1 of the present invention.
FIG. 5 is a diagram showing the data flow of the bus of the image processing apparatus according to Embodiment 1 of the present invention.
FIG. 6 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 2 of the present invention.
FIG. 7 is a diagram showing the data flow of the bus of the image processing apparatus according to Embodiment 2 of the present invention.
FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 3 of the present invention.
FIG. 9 is a diagram showing the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 4 of the present invention.
FIG. 10 is a diagram showing the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 5 of the present invention.
FIG. 11 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 6 of the present invention.
FIG. 12 is a diagram showing the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 6 of the present invention.
FIG. 13 is a diagram showing processing timing of the image processing apparatus according to Embodiment 1 of the present invention.
FIG. 14 is a diagram showing processing timing of the image processing apparatus according to Embodiment 6 of the present invention.
FIG. 15 is a diagram showing the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 7 of the present invention.
FIG. 16 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 7 of the present invention.
FIG. 17 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 7 of the present invention.
FIG. 18 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 8 of the present invention.
FIG. 19 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 8 of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
Embodiment 1
FIG. 2 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention. The present embodiment is an example applied to an apparatus that performs signal processing of a digital camera.
As shown in FIG. 2, the image processing apparatus 200 mainly comprises an input interface (I/F) 201, an output I/F 202, image processing units 211 to 214, a bus 220, and temporary storage units 231 to 233. The image processing apparatus 200 performs predetermined signal processing based on a control signal input from an external main control unit 250.
The image processing apparatus 200 is an apparatus that performs signal processing of a digital camera: it receives the output signal (Bayer-array RAW data) of an image sensor (camera), and generates and outputs a YUV signal. The resolution of the image sensor is, for example, 2560 horizontal pixels × 1920 vertical pixels (about 4.9 Mpixels).
The input I/F 201 captures RAW data into the image processing apparatus 200 using the input clock and horizontal and vertical synchronization signals accompanying it.
The output I/F 202 adds an output clock and horizontal and vertical synchronization signals to the YUV signal and outputs it to the outside of the image processing apparatus 200.
The image processing unit 211 performs RAW data processing (black level correction, gain adjustment, white balance correction, gamma correction).
The image processing unit 212 is a two-dimensional digital filter with variable coefficients. It incorporates a data buffer for four horizontal lines and executes filter operations of up to 100 taps (20 horizontal × 5 vertical).
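As a rough illustration of the kind of operation the image processing unit 212 performs, the following sketch applies a coefficient-variable 2-D FIR filter over a window of the image. The function name and the 5×5 averaging kernel are illustrative assumptions; the actual unit supports up to 100 taps (20 horizontal × 5 vertical):

```python
def filter2d(image, coeffs):
    """Apply a 2-D FIR filter; `coeffs` is rows x cols (e.g. 5 x 20)."""
    kh, kw = len(coeffs), len(coeffs[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for y in range(h - kh + 1):
        for x in range(w - kw + 1):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    acc += coeffs[j][i] * image[y + j][x + i]
            out[y][x] = acc
    return out

# A 5x5 averaging kernel is one possible noise-removal (low-pass) setting,
# as used in "2. monitoring in a dark environment".
k = [[1 / 25.0] * 5 for _ in range(5)]
flat = [[10.0] * 8 for _ in range(6)]
# A constant image stays constant (up to float rounding): values ≈ 10.0.
print(filter2d(flat, k))
```

Swapping in a different coefficient matrix changes the filter's behavior without changing the hardware pipeline, which is the point of making the coefficients variable.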
The image processing unit 213 performs YC (luminance and color difference) separation. It incorporates a data buffer for four horizontal lines and generates luminance and color difference data (a YUV signal) while performing interpolation processing with the data in this buffer.
The image processing unit 214 performs graphic superposition (superimposing a rectangle or a rectangular frame on an image). Superimposing a rectangle or a rectangular frame on an image can alert the photographer or protect the privacy of the subject being photographed.
The bus 220 selectively couples the image processing units 211 to 214 with the temporary storage units 231 to 233. The bus 220 is composed of 2-input, 1-output selectors 221 to 224 (selectors S1 to S4) and a register 225 that switches and controls the selectors 221 to 224.
Each of the selectors 221 to 224 switches between its two input connection buses. The register 225 outputs control signals for switching the selectors 221 to 224 according to the application, following instructions from the main control unit 250 outside the image processing apparatus 200.
The temporary storage units 231 to 233 are each a 2-port SRAM capable of storing data for two horizontal lines (equivalent to 5120 pixels). Each 2-port SRAM has, for example, 1 read port and 1 write port and 16 bits × 5120 words.
In this way, the image processing units 211 to 214 are coupled via the bus 220 to the signal lines for the read address (not shown), write address (not shown), read data, and write data.
FIG. 3 is a diagram showing the processing contents and the selector control method of the bus 220 for each of three types of applications in which the apparatus is used. Information encoding the content of this figure is stored in the internal memory of the main control unit 250 outside the image processing apparatus 200. As illustrated in FIG. 3, the information stored in the main control unit 250 specifies, for each of "1. normal use", "2. monitoring in a dark environment", and "3. medical use", the filter processing (filter coefficients) of the image processing unit 212 (two-dimensional digital filter) and the selection directions of the selectors 221 to 224 of the bus 220.
 The register 225 of FIG. 2 controls the switching of the selectors 221 to 224 (selectors S1 to S4) by outputting a control signal (for example, 1 bit of information) to them according to the value set by the main control unit 250. Each of the selectors 221 to 224 selects one of the connection buses at its two inputs "a" and "b" (see FIG. 5). In FIG. 2, the upper input of each of the selectors 221 to 224 is "a" and the lower input is "b".
 In "1. Normal use", processing is performed in the order RAW data processing in the image processing unit 211 → YC separation in the image processing unit 213 → graphics superimposition in the image processing unit 214. The filter processing of the image processing unit 212 (two-dimensional digital filter) is skipped. In this case, the selector 223 (selector S3) selects "a" and the selector 224 (selector S4) selects "b". Since the image processing unit 212 (two-dimensional digital filter) is skipped, the selection directions of the selector 221 (selector S1) and the selector 222 (selector S2) are arbitrary.
 In "2. Monitoring in a dark environment", processing is performed in the order RAW data processing in the image processing unit 211 → filter processing in the image processing unit 212 → YC separation in the image processing unit 213 → graphics superimposition in the image processing unit 214. The filter processing of the image processing unit 212 (two-dimensional digital filter) is noise removal (high-frequency component removal). In this case, the selector 221 (selector S1) selects "a", the selector 222 (selector S2) selects "b", the selector 223 (selector S3) selects "b", and the selector 224 (selector S4) selects "a".
 In "3. Medical use", processing is performed in the order RAW data processing in the image processing unit 211 → YC separation in the image processing unit 213 → filter processing in the image processing unit 212 → graphics superimposition in the image processing unit 214. The filter processing of the image processing unit 212 (two-dimensional digital filter) is capillary enhancement (red high-frequency component enhancement). In this case, the selector 221 (selector S1) selects "b", the selector 222 (selector S2) selects "a", the selector 223 (selector S3) selects "a", and the selector 224 (selector S4) selects "b".
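 The per-application selector settings above amount to a small routing table. The following sketch models, in software, how a register value per selector ("a" or "b", or don't-care) could be derived from such a table; all identifiers are illustrative and do not appear in the patent, and the encoding of "a" as 0 and "b" as 1 is an assumption.

```python
# Hypothetical model of the FIG. 3 configuration table: for each
# application, the processing chain and the setting of each selector
# S1-S4 ("a" = upper input, "b" = lower input, None = don't-care).
CONFIG = {
    "normal": {
        "chain": ["RAW", "YC", "overlay"],
        "selectors": {"S1": None, "S2": None, "S3": "a", "S4": "b"},
    },
    "dark_monitoring": {
        "chain": ["RAW", "filter", "YC", "overlay"],
        "selectors": {"S1": "a", "S2": "b", "S3": "b", "S4": "a"},
    },
    "medical": {
        "chain": ["RAW", "YC", "filter", "overlay"],
        "selectors": {"S1": "b", "S2": "a", "S3": "a", "S4": "b"},
    },
}

def register_bits(application: str) -> dict:
    """Return a 1-bit control value per selector ('a' -> 0, 'b' -> 1).

    Don't-care selectors default to 0, mirroring the note that their
    selection direction is arbitrary when the filter is skipped.
    """
    sel = CONFIG[application]["selectors"]
    return {name: (0 if v in (None, "a") else 1) for name, v in sel.items()}
```

 Such a table would live in the internal memory of the main control unit 250, which writes the resulting bits into the register 225 when the application changes.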
 The operation of the image processing apparatus 200 configured as described above will now be described.
 [Line-by-line processing]
 First, line-by-line processing will be described.
 Owing to the nature of their processing, the image processing units 211 to 214 all input and output data in units of one line of the image. For example, when the image processing unit 212 (two-dimensional digital filter) receives one new line of image data from the bus 220, it performs a filter operation of up to 100 taps on a total of five lines of image data (the four lines held in its internal data buffer plus the new line) and outputs one line of image data, while replacing the oldest line in the internal data buffer with the new line. By repeating this line-by-line processing, the image processing unit 212 processes an entire screen.
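 The rolling five-line buffer described above can be sketched behaviorally as follows. This is a simplified model: a plain vertical average stands in for the actual up-to-100-tap filter, whose coefficients are not given in the description.

```python
from collections import deque

class LineFilter2D:
    """Simplified model of image processing unit 212: keeps the last
    four input lines and, on each new line, filters a five-line window,
    emitting one output line per input line. A 5x1 vertical average
    stands in for the real up-to-100-tap kernel."""

    def __init__(self):
        self.buffer = deque(maxlen=4)  # four previously received lines

    def process_line(self, new_line):
        window = list(self.buffer) + [new_line]  # up to 5 lines
        out = [sum(col) // len(window) for col in zip(*window)]
        self.buffer.append(new_line)  # oldest line drops out automatically
        return out
```

 The `deque` with `maxlen=4` reproduces the replacement of the oldest buffered line by the newest one; a hardware implementation would do the same with line memories and a rotating write pointer.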
 [Operation timing]
 Next, the operation timing of the image processing apparatus 200 will be described. FIG. 4 shows the processing timing of the image processing apparatus 200 for "1. Normal use" shown in FIG. 3.
 Specifically, it shows the image input timing; the processing latency of the image processing unit 211 (RAW data processing); the write operation by the image processing unit 211 to the temporary storage unit 231; the read operation from the temporary storage unit 231 by the image processing unit 213 (YC separation); the processing latency of the image processing unit 213; the write operation by the image processing unit 213 to the temporary storage unit 233; the read operation from the temporary storage unit 233 by the image processing unit 214 (graphics superimposition); the processing latency of the image processing unit 214; and the image output timing. In FIG. 4, the vertical axes of the temporary storage units 231 and 233 indicate addresses.
 In the example of FIG. 4, the bus 220 is connected so that data flows in the order image processing unit 211 (RAW data processing) → temporary storage unit 231 → image processing unit 213 (YC separation) → temporary storage unit 233 → image processing unit 214 (graphics superimposition).
 The RAW data from the input I/F 201 is input during a 16 µs period within each 40 µs line cycle, at a pixel rate of 160 MHz.
 The image processing unit 211 (RAW data processing), the image processing unit 213 (YC separation), and the image processing unit 214 (graphics superimposition) operate synchronized to the 40 µs horizontal synchronization signal, with processing latencies of 10 µs, 24 µs, and 6 µs, respectively.
 Each of the image processing units 211, 213, and 214 generates read and write addresses for the temporary storage units 231 and 233, and the write area and the read area alternate every 40 µs cycle.
 Thus, in the example of FIG. 4, the image processing unit 211 performs RAW data processing on the input image data with a processing latency of 10 µs and sequentially writes the processed data to the temporary storage unit 231. Then, 40 µs after the image processing unit 211 starts processing, the image processing unit 213 sequentially reads data from the temporary storage unit 231, performs YC separation with a processing latency of 24 µs, and sequentially writes the processed data to the temporary storage unit 233. Then, 40 µs after the image processing unit 213 starts processing, the image processing unit 214 sequentially reads data from the temporary storage unit 233, performs graphics superimposition with a processing latency of 6 µs, and sequentially outputs the processed data.
 In this way, the image processing apparatus 200 is configured so that the image processing units 211 to 214 operate in parallel as a line-by-line pipeline.
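 The timing relationships of FIG. 4 can be checked with a small calculation. In the sketch below, the latencies and the 40 µs line cycle are taken from the description; the function and stage names are illustrative. Each stage starts one full line cycle after the previous one, so the pipeline is two line cycles deep from RAW input to superimposition output.

```python
LINE_CYCLE_US = 40  # horizontal synchronization period

# (stage name, processing latency in microseconds); each stage starts
# one full line cycle after the previous stage, as in FIG. 4.
STAGES = [("RAW", 10), ("YC", 24), ("overlay", 6)]

def stage_start_times(line_index: int) -> dict:
    """Start time [µs] of each pipeline stage for a given input line."""
    base = line_index * LINE_CYCLE_US
    return {name: base + i * LINE_CYCLE_US
            for i, (name, _latency) in enumerate(STAGES)}
```

 Because every latency (10, 24, 6 µs) fits within one 40 µs cycle, each stage finishes its line before the next stage begins reading it, which is what allows the write and read areas of each temporary storage unit to alternate every cycle.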
 [Bus switching method]
 Next, the bus switching method will be described.
 The image processing apparatus 200 is used in the three applications shown in FIG. 3, and switches, for each application, whether each of the image processing units 211 to 214 executes and in what order.
 Here, the processing content of the image processing unit 212 (two-dimensional digital filter) differs depending on the application. For example, when the image processing apparatus 200 is used for "2. Monitoring in a dark environment", the amount of light reaching the image sensor decreases and noise increases, so the image processing unit 212 removes high-frequency components after RAW data processing, improving image quality so that the image is easier for the observer to view.
 When the image processing apparatus 200 is used for "3. Medical use", the red high-frequency components must be enhanced so that medical staff can easily see capillaries, so the image processing unit 212 performs this red high-frequency component enhancement after YC separation.
 These special processes are realized by the main control unit 250 changing the filter coefficients of the image processing unit 212 (two-dimensional digital filter).
 FIG. 5 shows the data flow on the bus 220 of the image processing apparatus 200. The thick solid lines in FIG. 5 indicate the buses through which data flows as a result of the selection of the selectors 221 to 224 (selectors S1 to S4); the broken lines indicate the buses through which no data flows because they are not selected.
 FIG. 5A shows the data flow for the processing order of "1. Normal use" (image processing unit 211 (RAW data processing) → image processing unit 213 (YC separation) → image processing unit 214 (graphics superimposition)).
 FIG. 5B shows the data flow for the processing order of "2. Monitoring in a dark environment" (image processing unit 211 (RAW data processing) → image processing unit 212 (two-dimensional digital filter) → image processing unit 213 (YC separation) → image processing unit 214 (graphics superimposition)).
 FIG. 5C shows the data flow for the processing order of "3. Medical use" (image processing unit 211 (RAW data processing) → image processing unit 213 (YC separation) → image processing unit 212 (two-dimensional digital filter) → image processing unit 214 (graphics superimposition)).
 As shown in FIG. 5, the image processing apparatus 200 selectively couples the image processing units 211 to 214 to the temporary storage units 231 to 233 by setting the four selectors 221 to 224 (selectors S1 to S4) included in the bus 220 as shown in FIG. 3. This changes whether each of the image processing units 211 to 214 executes and in what order.
 Further, according to the present embodiment, line-unit memories are used instead of a shared memory, so the memory I/O bottleneck is eliminated and cost is reduced compared with the conventional apparatus.
 For example, when processing corresponding to "1. Normal use" in FIG. 3 is performed by the conventional apparatus, the maximum throughput required of the shared memory is as follows:
  Data rate: R = 160 [MHz]
  Number of I/O streams: N = 6 (3 read and 3 write)
  Data per pixel: B = 2 [Byte]
  Throughput: S = R × N × B = 1920 [MByte/s]
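 The arithmetic behind this comparison can be laid out explicitly. The figures (160 MHz, 6 streams, 2 bytes/pixel, a DDR SDRAM limit of about 1.3 GByte/s) come from the description; the per-SRAM figure of 640 MByte/s follows from each line buffer carrying one read and one write stream (160 MHz × 2 bytes × 2).

```python
R_MHZ = 160           # pixel data rate [MHz]
N_STREAMS = 6         # 3 read + 3 write streams through the shared memory
BYTES_PER_PIXEL = 2

# Required shared-memory throughput S = R x N x B [MByte/s]
s_mbyte_per_s = R_MHZ * N_STREAMS * BYTES_PER_PIXEL

# A single DDR SDRAM at ~1300 MByte/s cannot sustain this, so the
# conventional shared-memory apparatus needs two of them.
DDR_LIMIT_MBYTE_PER_S = 1300
sdrams_needed = -(-s_mbyte_per_s // DDR_LIMIT_MBYTE_PER_S)  # ceiling division

# With three line-buffer SRAMs the load splits evenly: one read plus one
# write stream per SRAM, i.e. 160 MHz x 2 bytes x 2 = 640 MByte/s each.
per_sram_mbyte_per_s = R_MHZ * BYTES_PER_PIXEL * 2
```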
 The conventional apparatus uses a shared memory such as an SDRAM. Since the maximum throughput of currently available DDR SDRAM is about 1.3 [GByte/s], the memory I/O becomes a bottleneck with a single SDRAM, and the apparatus cannot be built; building the conventional apparatus requires two SDRAMs. Thus, even though the extra storage capacity is not needed, the conventional apparatus must provide multiple external memories to secure the throughput, making it expensive.
 In contrast, the apparatus of the present embodiment only needs three small-capacity SRAMs of 640 [MByte/s] each, so they can be integrated on a single semiconductor device (LSI) and the apparatus can be built inexpensively.
 As described above in detail, according to the present embodiment, the register 225 switches the selectors 221 to 224 under external control, changing how the image processing units 211 to 214 are coupled to the temporary storage units 231 to 233 and thereby defining the execution order of the image processing units 211 to 214. This makes it possible to skip unnecessary processing and to reorder processing, eliminating the memory I/O bottleneck and realizing a high-performance, programmable image processing apparatus. For example, a common platform can be used for various camera products with different applications.
 Further, according to the present embodiment, the clock supply or the power supply to the image processing units 211 to 214 skipped as unnecessary, and to the unused temporary storage units 231 to 233, can be stopped. This reduces power consumption and, in battery-powered equipment, extends the continuous operating time.
 (Embodiment 2)
 Embodiment 2 is an example in which an external memory is connected to the image processing apparatus described in Embodiment 1.
 FIG. 6 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 2 of the present invention. In describing this embodiment, the same components as in FIG. 2 are given the same reference numerals, and duplicate descriptions are omitted.
 As shown in FIG. 6, the image processing apparatus 300 comprises an image processing apparatus main body 310 and an external memory 320. The image processing apparatus main body 310 includes an input I/F 201, an output I/F 202, image processing units 211 to 214A, a DMA (Direct Memory Access) controller 311, a bus 220A, and temporary storage units 231 to 234.
 Like the image processing apparatus 200 of FIG. 2, the image processing apparatus 300 performs digital camera signal processing: it takes the output signal of an image sensor (camera), i.e. Bayer-array RAW data, as input and generates and outputs a YUV signal.
 In addition to the same functions as the image processing unit 214 of FIG. 2, the image processing unit 214A reads one line of image data at the same position from each of the temporary storage unit 233 and the temporary storage unit 234, and determines the presence or absence of a moving object (whether the image is still) from the difference between the two lines of image data.
 The bus 220A selectively couples the image processing units 211 to 214A and the DMA controller 311 to the temporary storage units 231 to 234. The bus 220A is the bus 220 of FIG. 2 with a bus and a selector added for connecting the DMA controller 311; it is basically an extension of the bus 220 of FIG. 2.
 Like the temporary storage units 231 to 233, the temporary storage unit 234 is a 2-port SRAM capable of storing two horizontal lines of data (equivalent to 5120 pixels). This 2-port SRAM has, for example, one read port and one write port, and holds 16 bits × 5120 words.
 The DMA controller 311 transfers data between the bus 220A and the external memory 320 in units of one line. The DMA controller 311 reads one line of image data from the temporary storage unit 233 and writes it to the external memory 320. The DMA controller 311 also reads, from the external memory 320, the line of image data at the same position in the old frame and writes it to the temporary storage unit 234.
 The external memory 320 is a memory (for example, an SDRAM) with a large capacity compared with the temporary storage units 231 to 234. The external memory 320 is provided to hold past image data in units of frames. Here, the external memory 320 stores two frames of image data, old and new.
 The operation of the image processing apparatus 300 configured as described above will now be described with reference to FIG. 7. FIG. 7 shows the data flow on the bus 220A of the image processing apparatus 300. In all of "1. Normal use", "2. Monitoring in a dark environment", and "3. Medical use", the operation from when image data is input to the image processing unit 211 until image data is written to the temporary storage unit 233 is the same as in the image processing apparatus 200 of Embodiment 1 (FIG. 5), so its description is omitted.
 When image data is written to the temporary storage unit 233, the DMA controller 311 reads one line of image data from the temporary storage unit 233 and writes it to the external memory 320. At the same time, this line of image data is input to the image processing unit 214A.
 Next, the DMA controller 311 reads one line of image data of the old frame from the external memory 320 and writes it to the temporary storage unit 234.
 Next, the image processing unit 214A reads the line of image data of the old frame from the temporary storage unit 234. DMA control is performed so that this line of the old frame and the above line of the new frame are at the same position in the original frame. The image processing unit 214A determines the presence or absence of a moving object from the difference between the new and old lines of image data at the same position.
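 The line comparison performed by the image processing unit 214A can be sketched as follows. The description only states that motion is judged from the difference between the two co-located lines; the sum-of-absolute-differences measure and the threshold value used here are illustrative assumptions.

```python
def motion_in_line(new_line, old_line, threshold=16):
    """Compare the same line of the new and old frames; report motion
    when the sum of absolute pixel differences exceeds a threshold.
    (SAD and the threshold value are illustrative choices, not taken
    from the patent.)"""
    sad = sum(abs(a - b) for a, b in zip(new_line, old_line))
    return sad > threshold
```

 In the apparatus, `new_line` would come from the temporary storage unit 233 and `old_line` from the temporary storage unit 234, with the DMA controller 311 guaranteeing that both correspond to the same line position.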
 Thus, according to the present embodiment, the image processing apparatus 300 has the external memory 320, which holds past image data in units of frames, and the DMA controller 311, which transfers data between the bus 220A and the external memory 320. In addition to the functions of Embodiment 1, it can therefore perform image processing that uses past frames, such as determining the presence or absence of a moving object.
 Further, according to the present embodiment, the large-capacity external memory 320 is an external component, so when signal processing that references past frames is unnecessary, mounting of the external memory can be omitted. This makes it possible to build a platform that can serve both high-performance and low-cost products.
 (Embodiment 3)
 Embodiment 3 is an example in which an external device is connected to the image processing apparatus described in Embodiment 2.
 FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 3 of the present invention. In describing this embodiment, the same components as in FIG. 6 are given the same reference numerals, and duplicate descriptions are omitted.
 As shown in FIG. 8, the image processing apparatus 300 comprises an image processing apparatus main body 310, an external memory 320, a bus 330, and an external device 340. The image processing apparatus main body 310 includes an input I/F 201, an output I/F 202, image processing units 211 to 214A, a DMA controller 311, a bus 220A, and temporary storage units 231 to 234.
 The DMA controller 311 writes image data to the external memory 320 and reads image data from the external memory 320 via the bus 330.
 The external device 340 is, for example, a general-purpose processor or an FPGA (Field Programmable Gate Array). The external device 340 reads image data from the external memory 320 and performs predetermined processing, for example still-image object recognition such as face recognition. The external device 340 then outputs a control signal indicating the result of the processing to the main control unit 250. For example, when the external device 340 performs face recognition and recognizes a person's face in the image data, it outputs a control signal to that effect to the main control unit 250.
 The main control unit 250 controls the type of signal processing of the image processing apparatus 300 at the timing when the control signal is input from the external device 340. For example, while the image processing apparatus 300 is performing "1. Normal use", when the main control unit 250 receives from the external device 340 a control signal indicating that a person's face has been recognized, it instructs the image processing apparatus 300 to process at a higher resolution.
 Thus, according to the present embodiment, the fixed processing by the image processing units 211 to 214A and the variable processing by the external device 340 can be combined in any execution order, so a highly flexible signal processing platform can be built. For example, face recognition can be changed to vehicle recognition simply by changing the program of the external device 340.
 The external device 340 (a general-purpose processor), the bus 330, the image processing units 211 to 214A, the DMA controller 311, the bus 220A, and the temporary storage units 231 to 234 can also be integrated on a single semiconductor device (LSI). This configuration reduces the number of parts and the mounting area of the equipment, realizing smaller size and lower cost.
 (Embodiment 4)
 Embodiment 4 is a configuration example of a temporary storage unit.
 FIG. 9 shows the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 4 of the present invention. The same components as in FIG. 2 are given the same reference numerals. FIG. 9 shows the temporary storage unit 231 as representative of the temporary storage units 231 to 233 of FIG. 2.
 As shown in FIG. 9, the temporary storage unit 231 consists of a FIFO (First-In First-Out) 251, which by itself can store partial-image data, and a control circuit 252, which controls the FIFO 251.
 Here, the image processing units 211 to 214 (see FIG. 2) need to recognize the positional relationship of the partial image being processed. For example, in camera signal processing, the color arrangement of the image sensor generally differs between even and odd lines, so processing must be performed while recognizing whether the line being processed is even or odd.
 Usually, the even/odd parity of the line being processed is recognized by storing predetermined data at predetermined addresses of the temporary storage unit. For example, in the temporary storage unit 231 of Embodiment 1, even lines are stored at addresses 0 to 2559 and odd lines at addresses 2560 to 5119. To do this, the bus 220 requires signal lines for the write address and for the read address.
 In contrast, in the present embodiment, when storing partial-image data in the FIFO 251 of the temporary storage unit 231, the control circuit 252 appends information indicating the position of that partial image within the image. For example, when the partial image is one line, the line number of the stored line is appended.
 In the example of FIG. 9, the control circuit 252 stores, in the FIFO 251, the position information of partial image 1 followed by the data of partial image 1, and then the position information of partial image 2 followed by the data of partial image 2. When predetermined data is input, the FIFO 251 outputs the data in the order in which it was input.
 As a result, the image processing unit 212 can recognize the positional relationship of each partial image simply by reading the partial-image data sequentially from the FIFO 251 of the temporary storage unit 231.
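 The interleaving of position information and line data in the FIFO 251 can be modeled behaviorally as below. The tag representation (a tuple with a kind marker) is purely an illustration of the word ordering; the actual hardware format is not specified beyond "position information precedes the data".

```python
from collections import deque

class TaggedLineFIFO:
    """Behavioral model of FIFO 251 plus control circuit 252: each
    partial image (here, one line) is preceded by its position
    information (here, its line number), so the reader recovers the
    position without any address lines on the bus."""

    def __init__(self):
        self.fifo = deque()

    def write_line(self, line_number, line_data):
        self.fifo.append(("pos", line_number))  # position word first
        self.fifo.append(("data", line_data))   # then the line itself

    def read_line(self):
        kind, line_number = self.fifo.popleft()
        assert kind == "pos"
        kind, line_data = self.fifo.popleft()
        assert kind == "data"
        return line_number, line_data
```

 Reading always yields the position together with the data, which is what frees the image processing units from generating read and write addresses.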
 Thus, according to the present embodiment, the image processing units 211 to 214 no longer need to generate addresses for the temporary storage units 231 to 233, so a large number of signal lines (address lines) can be removed from the bus 220. When integrating on an LSI, removing these signal lines also allows the circuits (processing engines) such as the image processing units to be placed closer together, shortening signal transmission times and raising the operating speed.
 Note that the information indicating the positional relationship of a partial image in this embodiment need only indicate the top line of the image (1-bit flag information); if the image processing units 211 to 214 know which line is the top line of the image, they can calculate the positional relationship of the partial image being processed.
 (Embodiment 5)
 Embodiment 5 is another configuration example of a temporary storage unit.
 FIG. 10 shows the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 5 of the present invention. The same components as in FIG. 2 are given the same reference numerals. FIG. 10 shows the temporary storage unit 231 as representative of the temporary storage units 231 to 233 of FIG. 2.
 As shown in FIG. 10, the temporary storage unit 231 consists of a control circuit 261, two or more SRAMs 263 and 265, and registers 262 and 264 associated with the SRAMs 263 and 265.
 制御回路261は、バス220から入力した区分画像データを、SRAM263またはSRAM265に交互に格納するように制御する。また、制御回路261は、SRAM263に対して区分画像データの格納を開始する際にレジスタ262の値を書き換え、SRAM265に対して区分画像データの格納を開始する際にレジスタ264の値を書き換える。また、制御回路261は、一時記憶部231のSRAM263またはSRAM265から区分画像データを出力する際に、レジスタ262,264の値(画像における該当区分画像の位置関係を示す情報)を付加する。 The control circuit 261 controls the division image data input from the bus 220 to be alternately stored in the SRAM 263 or the SRAM 265. The control circuit 261 rewrites the value of the register 262 when starting storage of the divided image data in the SRAM 263, and rewrites the value of the register 264 when starting storage of the divided image data in the SRAM 265. Further, when outputting the divided image data from the SRAM 263 or the SRAM 265 of the temporary storage unit 231, the control circuit 261 adds the values of the registers 262 and 264 (information indicating the positional relationship of the corresponding divided image in the image).
 レジスタ262,264は、該当区分画像の位置関係を示す情報を保持する。SRAM263,265は、単体で区分画像データを格納することができるメモリである。 The registers 262 and 264 hold information indicating the positional relationship of the corresponding divided image. The SRAMs 263 and 265 are memories capable of storing divided image data alone.
 以上の構成において、区分画像データは、2つのSRAM263,265に交互に書き込まれ、一方の書き込み中に他方から読み出す2-Bankの構成を採っている。 In the above configuration, the divided image data is alternately written in the two SRAMs 263 and 265, and has a 2-bank configuration in which one is read from the other during writing.
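 The two-bank behavior described above can be sketched in software as follows. This is a functional sketch only: the class and method names are illustrative, and the "register" is modeled as a simple line-number tag, which is one of the uses suggested later in this embodiment.

```python
class TwoBankBuffer:
    """Software sketch of the two-bank temporary storage unit:
    while one SRAM bank is being written, the other can be read."""

    def __init__(self):
        self.banks = [None, None]   # models SRAM 263 and SRAM 265
        self.tags = [None, None]    # models registers 262 and 264
        self.write_bank = 0         # bank currently being filled

    def store(self, line_data, line_number):
        # Storing a new divided image begins by rewriting the associated
        # register (here: a line-number tag), then filling the bank.
        b = self.write_bank
        self.tags[b] = line_number
        self.banks[b] = list(line_data)
        self.write_bank = 1 - b     # alternate banks on each store

    def read(self):
        # Reading returns the most recently written bank, with the
        # register value appended as position information.
        b = 1 - self.write_bank
        return self.banks[b], self.tags[b]
```

 A downstream processing engine then receives the position information together with the data itself, which is why it never needs to generate an address of its own.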
 As described in Embodiment 4, the image processing units 211 to 214 need to recognize the positional relationship of the divided images they process. Usually, this positional relationship is conveyed by storing specific data at a predetermined address in the temporary storage unit.
 In contrast, in the present embodiment, when the control circuit 261 stores divided image data in the SRAM 263 or the SRAM 265 of the temporary storage unit 231, it stores information indicating the position of that divided image within the image in the associated register (262 or 264). For example, when each divided image is one line, the line number of the stored line is written to the register.
 Furthermore, when outputting divided image data from the SRAM 263 or the SRAM 265 of the temporary storage unit 231, the control circuit 261 appends the value of the register 262 or 264 as information indicating the position of the divided image within the image.
 This allows the image processing unit 212 to recognize the position of the divided image corresponding to the input divided image data.
 As described above, according to the present embodiment, the image processing units 211 to 214 no longer need to generate addresses for the temporary storage units 231 to 233, so a large number of signal lines (address lines) can be eliminated from the bus 220. Furthermore, when the apparatus is integrated in an LSI, eliminating these signal lines allows the circuits (processing engines) to be placed close to one another, shortening signal transmission time and increasing operating speed.
 Note that the values of the registers 262 and 264 (information indicating the positional relationship of the divided images) need only indicate the top line of the image (one-bit flag information); once the top line of the image is known, the image processing units 211 to 214 can calculate the positional relationship of the divided image to be processed.
 Either the configuration of Embodiment 4 or that of Embodiment 5 may be adopted for the temporary storage unit. In Embodiment 5, the temporary storage unit can be built from two single-port SRAMs, so the circuit scale can be reduced compared with the configuration of Embodiment 4, which uses a FIFO (normally built from a dual-port SRAM).
 (Embodiment 6)
 FIG. 11 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 6 of the present invention. In FIG. 11, the same components as in FIG. 2 are assigned the same reference numerals.
 As shown in FIG. 11, the image processing apparatus 400 includes an input I/F 201, an output I/F 202, image processing units 211 to 214B, a bus 220B, and temporary storage units 231B to 233B.
 In addition to the functions of the image processing units 212 to 214 in FIG. 2, the image processing units 212B to 214B each receive from the temporary storage units 231B to 233B a control signal indicating that storage of divided image data has been completed, and start operating at the timing at which this control signal is input.
 The bus 220B selectively couples the image processing units 211 to 214B to the temporary storage units 231B to 233B. The bus 220B is essentially the bus 220 of FIG. 2 extended with signal lines for supplying the above control signals from the temporary storage units 231B to 233B to the image processing units 212B to 214B.
 FIG. 12 is a diagram showing the configuration of a temporary storage unit of the image processing apparatus 400. In FIG. 12, the same components as in FIGS. 10 and 11 are assigned the same reference numerals. Temporary storage unit 231B is shown as representative of the temporary storage units 231B to 233B in FIG. 11.
 As shown in FIG. 12, the temporary storage unit 231B comprises a control circuit 261B, two or more SRAMs 263 and 265, and registers 262 and 264 associated with the SRAMs 263 and 265.
 Like the control circuit 261 of FIG. 10, the control circuit 261B writes information indicating the position of the divided image within the image to the registers 262 and 264 when storing divided image data in the SRAMs 263 and 265. In addition, the control circuit 261B generates a control signal indicating that storage of the divided image data in the SRAM 263 or 265 has been completed, and outputs this control signal to the image processing units 212B to 214B. The control circuit 261B also performs bank switching simultaneously with the output of the control signal and begins accepting new divided image data.
 The present embodiment is characterized by the bank-switching timing of the temporary storage units 231B to 233B and the operation start timing of each of the image processing units 212B to 214B.
 FIG. 13 is a diagram showing the processing timing of the image processing apparatus of Embodiment 1, for comparison with the present embodiment. FIG. 14 is a diagram showing the processing timing of the image processing apparatus 400 of the present embodiment. Both FIG. 13 and FIG. 14 show the processing timing for "1. Normal use" shown in FIG. 3.
 In the examples of FIGS. 13 and 14, the bus is connected so that data flows in the order: image processing unit 211 (RAW data processing), temporary storage unit 231B, image processing unit 213B (YC separation), temporary storage unit 233B, image processing unit 214B (graphics superimposition).
 The RAW data from the input I/F 201 is input over a 16 μs period, at a data rate of 160 MHz/pixel.
 The image processing unit 211 (RAW data processing), the image processing unit 213B (YC separation), and the image processing unit 214B (graphics superimposition) have processing latencies of 10 μs, 24 μs, and 6 μs, respectively.
 In the image processing apparatus of Embodiment 1, when bank switching is performed simultaneously for all units, the switching timing is limited by the processing time of the image processing unit 213B, which takes the longest. For this reason, the image processing apparatus of Embodiment 1 cannot improve its processing performance.
 In the example of FIG. 13, although RAW data is input at a rate of one line per 16 μs, the image processing apparatus of Embodiment 1 requires a 40 μs period to output one line of image data, achieving only 40% of its inherent performance.
 In contrast, in the image processing apparatus 400 of the present embodiment, each of the image processing units 212B to 214B can begin operating at the timing at which it receives, from the temporary storage units 231B to 233B, the control signal indicating that storage of divided image data has been completed. Bank switching can therefore be performed individually for each of the temporary storage units 231B to 233B, and the inherent performance can be fully exploited.
 In the example of FIG. 14, the image processing unit 211 performs RAW data processing with a latency of 10 μs on the input divided image data, which arrives at one line per 16 μs, and writes the processed data sequentially to the temporary storage unit 231B. After processing the divided image data of the first line, the image processing unit 211 immediately proceeds to the divided image data of the second line.
 Then, 26 μs after the image processing unit 211 starts processing, the image processing unit 213B sequentially reads data from the temporary storage unit 231B, performs YC separation with a latency of 24 μs, and writes the processed data sequentially to the temporary storage unit 233B. After processing the divided image data of the first line, the image processing unit 213B immediately proceeds to the divided image data of the second line.
 Then, 40 μs after the image processing unit 213B starts processing, the image processing unit 214B sequentially reads data from the temporary storage unit 233B, performs graphics superimposition with a latency of 6 μs, and sequentially outputs the processed data. After processing the divided image data of the first line, the image processing unit 214B immediately proceeds to the divided image data of the second line.
 As a result, in the example of FIG. 14, the image processing apparatus 400 of the present embodiment can output one line of image data every 16 μs for RAW data that is input at one line per 16 μs.
 Thus, according to the present embodiment, even when the processing times of the image processing units 211 to 214B are non-uniform, the overall processing performance of the apparatus can be improved compared with performing bank switching simultaneously for all units.
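 The figures behind FIGS. 13 and 14 can be reproduced with a short calculation. The formula for the synchronized case is our reading of the timing example (the common switching period must cover the 16 μs line time plus the slowest stage latency); it is not an equation stated in the specification.

```python
# Stage latencies from the example: RAW 10 us, YC separation 24 us,
# graphics superimposition 6 us; one line of RAW data arrives every 16 us.
line_period_us = 16
latencies_us = {"RAW": 10, "YC": 24, "overlay": 6}

# With simultaneous bank switching (Embodiment 1, FIG. 13), the common
# switching period must cover the 16 us of line data plus the largest
# processing latency, here the 24 us of YC separation.
sync_period_us = line_period_us + max(latencies_us.values())   # 40 us

# Throughput relative to the input rate: one line per 40 us instead of
# one line per 16 us, i.e. 40% of the inherent performance.
utilization = line_period_us / sync_period_us

# With per-buffer bank switching (Embodiment 6, FIG. 14), the stages
# pipeline freely, so the output period matches the input period.
pipelined_period_us = line_period_us                           # 16 us
```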
 (Embodiment 7)
 FIG. 15 is a diagram showing the configuration of a temporary storage unit of an image processing apparatus according to Embodiment 7 of the present invention. In FIG. 15, the same components as in FIG. 12 are assigned the same reference numerals.
 In FIG. 15(a), a temporary storage unit 231C having the same configuration as the temporary storage unit 231B is connected to the bus. The temporary storage units 231B and 231C are configured of, for example, SRAMs.
 As shown in FIG. 15(b), a plurality of temporary storage units 231B and 231C can be combined to form a single temporary storage unit 231D of larger capacity. This is made possible by using SRAMs for the temporary storage unit 231D.
 For example, two temporary storage units 231B and 231C, each storing a divided image of one line of 1280 horizontal pixels, can be combined into a temporary storage unit 231D for one line of 2560 horizontal pixels. In this case, the register indicated by the broken line in FIG. 15(b) can be eliminated.
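 How two fixed-width line buffers present themselves as one buffer of twice the width can be sketched as follows. The chaining logic and all names are ours, given only for illustration; widths of 4 pixels stand in for the 1280-pixel units of the example.

```python
class LineBuffer:
    """One temporary storage unit holding a single line of fixed width."""
    def __init__(self, width):
        self.width = width
        self.data = [0] * width

    def store(self, pixels):
        assert len(pixels) == self.width
        self.data = list(pixels)


class ChainedLineBuffer:
    """Two line buffers combined into one buffer of twice the width.

    Only the combined unit needs position bookkeeping, which mirrors
    how the redundant register can be eliminated in FIG. 15(b)."""
    def __init__(self, first, second):
        self.first, self.second = first, second
        self.width = first.width + second.width

    def store(self, pixels):
        # Split one wide line across the two underlying buffers.
        assert len(pixels) == self.width
        self.first.store(pixels[:self.first.width])
        self.second.store(pixels[self.first.width:])

    def read(self):
        return self.first.data + self.second.data
```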
 FIGS. 16 and 17 are block diagrams showing the configuration of an image processing apparatus according to Embodiment 7 of the present invention that performs stereo-camera input processing. In FIGS. 16 and 17, the same components as in FIG. 2 are assigned the same reference numerals, and duplicate descriptions are omitted.
 As shown in FIGS. 16 and 17, the image processing apparatus 500 comprises input I/Fs 201A and 201B, an output I/F 202, image processing units 511A, 511B, 512A, 512B, 213, and 214, a bus 220C, and temporary storage units 531A, 531B, 532A, 532B, and 234.
 The image processing units 511A, 511B, 512A, and 512B correspond to two copies each of the image processing units 211 and 212 of FIG. 2, provided for two-channel operation. Similarly, the temporary storage units 531A, 531B, 532A, and 532B correspond to two copies each of the temporary storage units 231 and 232 of FIG. 2. The bus 220C accommodates these units and otherwise has the same configuration as the bus 220 of FIG. 2. The temporary storage units 531A and 532A in FIG. 16 correspond to the temporary storage unit 231B in FIG. 15(a), and the temporary storage units 531B and 532B correspond to the temporary storage unit 231C in FIG. 15(a).
 The image processing units 511A and 511B perform RAW data processing, and the image processing units 512A and 512B perform YC separation. The image processing unit 213 performs distance measurement, and the image processing unit 214 performs output-screen generation.
 The image processing apparatus 500 can process a two-channel stereo-camera input (horizontal resolution: 1280 pixels) and output a screen to which object distance information has been added.
 FIG. 17 shows the same image processing apparatus 500 as FIG. 16, here processing a single-channel camera input with twice the horizontal resolution (2560 pixels). In this case, the temporary storage units 531A and 531B are combined into a single temporary storage unit 531C of larger capacity. Similarly, the temporary storage units 532A and 532B are combined into a single temporary storage unit 532C of larger capacity. The temporary storage units 531C and 532C in FIG. 17 correspond to the temporary storage unit 231D in FIG. 15(b).
 Thus, according to the present embodiment, for example, a single-channel high-resolution processing system and a multi-channel low-resolution processing system can be built on the same platform. This makes it possible to share a common platform across different camera products (for example, multi-view cameras and high-resolution cameras).
 (Embodiment 8)
 FIGS. 18 and 19 are block diagrams showing the configuration of an image processing apparatus according to Embodiment 8 of the present invention. In FIGS. 18 and 19, the same components as in FIGS. 2 and 16 are assigned the same reference numerals, and duplicate descriptions are omitted.
 As shown in FIGS. 18 and 19, the image processing apparatus 600 comprises an input I/F 201, an output I/F 202, image processing units 211 to 214, a bus 220D, temporary storage units 631A, 631B, 632A, 632B, 633A, and 633B, and a power control unit 610.
 The image processing apparatus 600 has two or more operation modes, for example a low-power mode and a normal mode.
 In the low-power mode, the image processing apparatus 600 reduces power consumption by, for example, lowering the processing resolution of the image or leaving some of the image processing units 211 to 214 unused.
 When a specific trigger causes a transition from the low-power mode to the normal mode, the image processing apparatus 600 increases the processing resolution and the number of image processing units 211 to 214 in use. Such triggers are typically related to intelligent surveillance, such as detection of a registered object or detection of a suspicious object.
 FIG. 18 shows an example of the low-power mode. In the low-power mode, the image processing apparatus 600, for example, lowers the horizontal resolution to 1280 pixels and does not perform the color processing of YC separation. In this case, the image processing unit 211 performs RAW data processing, and the image processing unit 212 performs luminance processing only. The image processing unit 213 skips color processing, and the image processing unit 214 performs output-screen generation and anomaly detection. The power control unit 610 turns off the image processing unit 213 and the temporary storage units 631B, 632B, and 633B.
 FIG. 19 shows an example of the normal mode in the same image processing apparatus 600 as FIG. 18. When the image processing unit 214 detects a suspicious object during low-power mode, it sends a detection signal to the power control unit 610, and the image processing apparatus 600 transitions to the normal mode. In the normal mode, the image processing apparatus 600 increases the horizontal resolution to 2560 pixels and performs the color processing of YC separation. The power control unit 610 turns on the image processing unit 213 and the temporary storage units 631B, 632B, and 633B.
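 The mode transition described for FIGS. 18 and 19 amounts to a small state machine. The sketch below models only which gated blocks are powered and which resolution is active; the block names and the interface are illustrative, not part of the embodiment.

```python
# Blocks that the power control unit 610 gates in the example of
# FIGS. 18 and 19 (names are ours, chosen after the reference numerals).
GATED_BLOCKS = {"image_proc_213", "temp_store_631B",
                "temp_store_632B", "temp_store_633B"}


class PowerController:
    """Sketch of the power control unit: low-power mode by default,
    full power after the anomaly-detecting engine reports a detection."""

    def __init__(self):
        self.mode = "low_power"
        self.powered = set()      # gated blocks currently switched on

    def horizontal_resolution(self):
        # 1280 pixels in low-power mode, 2560 pixels in normal mode.
        return 1280 if self.mode == "low_power" else 2560

    def on_detection(self):
        # Detection signal (e.g. suspicious object) triggers the
        # transition to normal mode and powers the gated blocks on.
        self.mode = "normal"
        self.powered = set(GATED_BLOCKS)
```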
 Thus, according to the present embodiment, power consumption during normal operation can be suppressed while the images that matter are still processed at full performance. This is highly effective for limiting the power consumption of equipment that operates continuously (24 hours a day), such as surveillance cameras.
 The above description is an illustration of preferred embodiments of the present invention, and the scope of the present invention is not limited thereto.
 For example, the image processing apparatus is not limited to camera signal processing. The processing unit of an image (the size of a divided image) is not limited to one line; it may be a plurality of lines, or one tile in a tiled arrangement in which the image is divided into a grid. Furthermore, the bus configuration is not limited to one built from four selectors.
 Furthermore, although FIG. 3, for example, shows execution orders of the image processing units 211 to 214 such as 1-2-3-4, 1-3-2-4, and 1-3-4, these are merely examples, and the configuration is not limited to them.
 Although the term "image processing apparatus" is used in each of the above embodiments, this is for convenience of description, and other names may be used.
 Furthermore, the types, numbers, and connection methods of the circuits (processing engines) and shared memories constituting the image processing apparatus may be of any kind.
 The disclosures of the specification, drawings, and abstract included in Japanese Patent Application No. 2008-310799, filed on December 5, 2008, are incorporated herein by reference in their entirety.
 The image processing apparatus according to the present invention is suitable for use in the camera field and in programmable image processing systems such as intelligent surveillance.
 200, 300, 400, 500, 600 Image processing apparatus
 201 Input I/F
 202 Output I/F
 211 to 214 Image processing unit
 220 Bus
 221 to 224 Selector
 225 Register
 231 to 233 Temporary storage unit
 311 DMA controller
 320 External memory
 330 Bus
 340 External device

Claims (10)

  1.  An image processing apparatus for processing image data composed of a plurality of divided image data, the apparatus comprising:
     a plurality of image processing units that execute mutually different processes on the divided image data;
     a plurality of temporary storage units that temporarily store the divided image data;
     a bus that selectively couples the image processing units and the temporary storage units; and
     a switching control unit that selectively controls a form of coupling of the bus.
  2.  The image processing apparatus according to claim 1, further comprising:
     an external memory capable of storing at least one frame of image data; and
     a DMA controller that transfers the divided image data between the external memory and the temporary storage units via the bus.
  3.  The image processing apparatus according to claim 2, further comprising an external device that is connected to the external memory and performs specific processing using the image data stored in the external memory,
     wherein the switching control unit selectively controls the form of coupling of the bus based on a processing result of the external device.
  4.  The image processing apparatus according to claim 3, wherein the external device, the bus, the image processing units, the DMA controller, and the temporary storage units are integrated on a single semiconductor device.
  5.  The image processing apparatus according to claim 1, wherein each temporary storage unit comprises:
     a FIFO (First-In First-Out) memory capable of storing divided image data on its own; and
     a control unit that, when storing divided image data in the FIFO, appends information indicating the position of the divided image within the image.
  6.  The image processing apparatus according to claim 1, wherein each temporary storage unit comprises:
     a plurality of SRAMs each capable of storing divided image data on its own;
     a plurality of registers that hold information indicating the position of the corresponding divided image within the image; and
     a control unit that rewrites the value of the corresponding register when storing divided image data in an SRAM, and appends the value of the register when outputting the divided image data from the SRAM.
  7.  The image processing apparatus according to claim 6, wherein the temporary storage unit generates a control signal indicating that storage of the divided image data has been completed, and
     the image processing unit starts processing when the control signal is input.
  8.  The image processing apparatus according to claim 5, wherein a plurality of the temporary storage units are configured as a single temporary storage unit of larger capacity.
  9.  The image processing apparatus according to claim 1, wherein clock supply or power supply to unused ones of the image processing units and the temporary storage units is stopped.
  10.  The image processing apparatus according to claim 9, further comprising a power control unit that suppresses power consumption by lowering the processing resolution of the image or by reducing the number of image processing units in use.


PCT/JP2009/006285 2008-12-05 2009-11-20 Image processing device WO2010064374A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008-310799 2008-12-05
JP2008310799A JP2010134743A (en) 2008-12-05 2008-12-05 Image processor

Publications (1)

Publication Number Publication Date
WO2010064374A1 true WO2010064374A1 (en) 2010-06-10

Family

ID=42233032

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/006285 WO2010064374A1 (en) 2008-12-05 2009-11-20 Image processing device

Country Status (2)

Country Link
JP (1) JP2010134743A (en)
WO (1) WO2010064374A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012008715A (en) * 2010-06-23 2012-01-12 Fuji Xerox Co Ltd Data processing device
JP2013029962A (en) * 2011-07-28 2013-02-07 Nikon Corp Electronic apparatus and program
JP2015508528A (en) * 2011-12-28 2015-03-19 インテル・コーポレーション Pipelined image processing sequencer

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MY165813A (en) * 2011-02-25 2018-04-27 Photonis Netherlands B V Acquiring and displaying images in real-time
JP5899918B2 (en) 2011-12-27 2016-04-06 株式会社リコー Image processing apparatus and image processing method
KR102305470B1 (en) 2015-02-13 2021-09-28 삼성전자주식회사 Image signal processing device performing image signal processing in parallel through plurality of image processing channels

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08171536A (en) * 1994-12-16 1996-07-02 Com Syst:Kk Data processor
JP2004021645A (en) * 2002-06-17 2004-01-22 Canon Inc Image processing device and its control method
JP2004240885A (en) * 2003-02-07 2004-08-26 Makoto Ogawa Image processor and image processing method
JP2006157977A (en) * 2006-03-13 2006-06-15 Ricoh Co Ltd Image processing apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008293267A (en) * 2007-05-24 2008-12-04 Canon Inc Image forming apparatus equipped with original conveyance device and original reading device and method for controlling image processor

Also Published As

Publication number Publication date
JP2010134743A (en) 2010-06-17

Similar Documents

Publication Publication Date Title
KR101034493B1 (en) Image transforming apparatus, dma apparatus for image transforming, and camera interface supporting image transforming
WO2010064374A1 (en) Image processing device
US20020159656A1 (en) Image processing apparatus, image processing method and portable imaging apparatus
KR100997619B1 (en) Techniques to facilitate use of small line buffers for processing of small or large images
JP6210743B2 (en) Data processing device and data transfer control device
TWI360347B (en) Image processor, imaging device, and image process
JP4189252B2 (en) Image processing apparatus and camera
JP2004304387A (en) Image processing apparatus
JPH11103407A (en) Ccd data pixel interpolating circuit and digital still camera provided with the same
JP6722278B2 (en) Image processing device
JP2007088806A (en) Image signal processor and image signal processing method
US20120203942A1 (en) Data processing apparatus
US10453166B2 (en) Image processing device and image processing method
JP2008172410A (en) Imaging apparatus, image processing apparatus, image processing method, program for image processing method, and recording medium recorded with program for image processing method
JP4487454B2 (en) Electronic camera and control IC for electronic camera
JP2011030268A (en) Image processing apparatus
US20120144150A1 (en) Data processing apparatus
JP3810685B2 (en) Resolution converter and digital camera
US20230388661A1 (en) Integrated circuit with multi-application image processing
JP5224492B2 (en) Image data transfer control device, image data transfer method, and camera having the image data transfer device
JP2009260788A (en) Imaging unit
US20070247644A1 (en) Image processing system
JP2000059800A (en) Image signal processing circuit
US9277145B2 (en) Imaging device dividing imaging region into first divided image data and second divided image data
JP2005159596A (en) Digital camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09830145

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09830145

Country of ref document: EP

Kind code of ref document: A1