US20040130553A1 - Image processing apparatus and image processing method - Google Patents

Image processing apparatus and image processing method

Info

Publication number
US20040130553A1
Authority
US
United States
Prior art keywords
image processing
rectangular area
memory
image data
data
Prior art date
Legal status
Granted
Application number
US10/739,344
Other versions
US7495669B2
Inventor
Katsutoshi Ushida
Yuichi Naoi
Yoshiaki Katahira
Yasuyuki Nakamura
Koichi Morishita
Makoto Fukuo
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Publication of US20040130553A1
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FUKUO, MAKOTO, KATAHIRA, YOSHIAKI, MORISHITA, KOICHI, NAKAMURA, YASUYUKI, NAOI, YUICHI, USHIDA, KATSUTOSHI
Priority to US10/956,129 (US7043134B2)
Priority to US11/487,370 (US7675523B2)
Application granted
Publication of US7495669B2
Expired - Fee Related
Adjusted expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition

Definitions

  • the present invention relates to an image processing technique which is compatible with both a CCD (Charge Coupled Device) and a CIS (Contact Image Sensor) as image reading devices, and which controls the storage of image data read by each device in a memory and the readout of the stored data for each rectangular area.
  • FIG. 27 is a block diagram showing the composition of a scanner image processing circuit in a conventional image processing apparatus.
  • an optical element such as a CCD 2010 or CIS 2110 is used.
  • Data according to a predetermined output format is A/D-converted by a CCD interface (I/F) circuit 2000 or CIS interface (I/F) circuit 2100 and stored in a main memory 2200 for each line in the main scanning direction.
  • the CCD 2010 outputs data corresponding to R, G, and B in parallel.
  • the CIS 2110 serially outputs the signals of R, G, and B data in accordance with the order of LED lighting.
  • the CCD and CIS have dedicated interface circuits.
  • the read image data is stored in the main memory (SDRAM) 2200 .
  • The image processing blocks include shading correction (SHD) 2300 , character determination processing 2320 , filter processing 2340 , and the like.
  • image processing blocks have dedicated line buffers 2400 a to 2400 d .
  • data corresponding to a plurality of lines which are stored in the main memory (SDRAM) 2200 , are read out in the main scanning direction, stored in the dedicated line buffers ( 2400 a to 2400 d ), and subjected to individual image processing operations.
  • a signal output from the CCD 2010 or CIS 2110 serving as an image reading device is processed by the dedicated interface circuit ( 2000 or 2100 ) in accordance with the output format.
  • Bitmapping of read image data on the main memory 2200 depends on which device (e.g., the CCD or CIS) has been used, and image data input processing must inevitably be specialized. That is, the image processing circuit is customized depending on the employed image reading device. This impedes generalization and cost reduction of the image processing circuit.
  • the present invention has been proposed to solve the above problems, and has as its object to provide an image processing apparatus which is compatible with various image reading devices such as a CCD and CIS. It is another object of the present invention to provide an image processing apparatus which controls data processing, including storage of image data read by each image reading device in a memory and processing by an image processing section, by extracting data in a main memory as a predetermined unit appropriate for each image processing mode without intervention of individual line buffers.
  • an image processing apparatus is characterized by mainly comprising memory area control means for setting, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction; address generation means for generating address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area; memory control means for reading out the image data corresponding to the rectangular area and DMA-transferring the image data to a second memory in accordance with the generated address information; and image processing means for executing image processing for each rectangular area of the DMA-transferred data by using the second memory.
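As an orientation aid only (not part of the patent text), the following C sketch shows how the claimed elements could map onto simple data structures: a rectangular-area descriptor set by the memory area control means, per-line addresses produced by the address generation means, and a block copy standing in for the DMA transfer performed by the memory control means. All names and field choices are assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor of one rectangular area set on the first memory. */
typedef struct {
    uint32_t start_addr;    /* address of the first byte of the block (SA)      */
    uint32_t width_bytes;   /* block width in the main scanning direction (XA)  */
    uint32_t height_lines;  /* block height in the sub-scanning direction (YA)  */
    uint32_t line_pitch;    /* bytes per full page line on the first memory     */
} rect_area_t;

/* "Address generation means": source address of line n of the block. */
static uint32_t rect_line_addr(const rect_area_t *r, uint32_t n)
{
    return r->start_addr + n * r->line_pitch;
}

/* "Memory control means": copy one rectangular area from the first memory
 * into a second (block buffer) memory; memcpy stands in for DMA transfer. */
static void dma_copy_rect(const uint8_t *first_mem, const rect_area_t *r,
                          uint8_t *second_mem)
{
    for (uint32_t y = 0; y < r->height_lines; ++y)
        memcpy(second_mem + (size_t)y * r->width_bytes,
               first_mem + rect_line_addr(r, y),
               r->width_bytes);
}
```

The image processing means would then operate on the second memory one rectangular area at a time, as the embodiments below describe.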
  • FIG. 1 is a block diagram showing the schematic composition of an image processing apparatus 200 according to an embodiment of the present invention
  • FIG. 2 is a block diagram showing the schematic composition of a scanner I/F section 10 ;
  • FIGS. 3A to 3 D are views showing output signals by a CCD 17 ;
  • FIG. 4 is a timing chart related to lighting control of an LED 19 for a CIS 18 ;
  • FIG. 5A is a timing chart showing the relationship between an output ( 51 e ) and the ON states ( 51 b to 51 d ) of LEDs corresponding to R, G, and B according to the timing chart shown in FIG. 4;
  • FIG. 5B is a timing chart showing timings when the R, G, and B LEDs 19 are sequentially turned on within one period of a sync signal (SP) in association with control of the CIS 18 ;
  • SP sync signal
  • FIG. 5C is a view showing outputs when two channels of a CIS 18 are arranged in the main scanning direction;
  • FIG. 6 is a block diagram for explaining processing of an AFE 15 ;
  • FIG. 7 is a block diagram showing the schematic composition of an LDMAC_A which DMA-transfers image data read by an image reading device to a main memory and an LDMAC_B which controls DMA between the main memory and the scanner image processing section;
  • FIGS. 8A and 8B are views for explaining processing for causing an LDMAC_A 105 a to write 1-channel data in a main memory 100 ;
  • FIGS. 9A and 9B are views for explaining processing for causing the LDMAC_A 105 a to write data of two channels in the main memory 100 ;
  • FIG. 10 is a view for explaining processing for causing the LDMAC_A 105 a to write data of three channels in the main memory 100 ;
  • FIGS. 11A and 11B are views showing a state in which the main memory 100 is divided into predetermined rectangular areas (blocks);
  • FIGS. 12A to 12 C are views showing capacities necessary for the main memory in the respective image processing modes
  • FIG. 13 is a flow chart for explaining the flow of data storage processing in the copy mode
  • FIG. 14 is a flow chart for explaining the flow of data storage processing in the scanner mode
  • FIGS. 15A and 15B are views for explaining a data read when image data in a rectangular area is to be transferred to the block buffer RAM of a scanner image processing section 20 ;
  • FIG. 16 is a block diagram for explaining the schematic composition of the scanner image processing section 20 ;
  • FIG. 17 is a view schematically showing an area to be subjected to image processing and a reference area where filter processing and the like for the image processing are to be executed;
  • FIGS. 18A and 18B are views showing overlap widths in the respective image processing modes (color copy mode, monochrome copy mode, and scanner mode);
  • FIGS. 19A to 19 D are views schematically showing the sizes of rectangular areas necessary for the respective image processing modes
  • FIG. 20 is a view for explaining the start point in the main scanning direction for DMA-transferring image data of the next rectangular area after the end of processing of one rectangular area;
  • FIG. 21 is a flow chart for explaining the flow of a data read and image processing in the copy mode;
  • FIG. 22 is a flow chart for explaining the flow of a data read and image processing in the scanner mode
  • FIG. 23 is a view for explaining processing for transferring magnified rectangular data from a magnification processing block (LIP) 27 to the main memory 100 ;
  • FIG. 24 is a view showing connection between the magnification processing block (LIP) 27 and the LDMAC_B ( 105 b );
  • FIG. 25 is a timing chart showing the relationship between data and signals sent from the magnification processing block (LIP) 27 to the LDMAC_B ( 105 b ) to DMA-transfer data that has undergone image processing to the main memory 100 ;
  • FIG. 26 is a view for explaining a state in which data is bitmapped on the main memory 100 in accordance with a line end signal and block end signal;
  • FIG. 27 is a block diagram showing the composition of a scanner image processing circuit in a conventional image processing apparatus.
  • FIG. 1 is a block diagram showing the schematic composition of an image processing apparatus 200 according to an embodiment of the present invention.
  • a CCD 17 and CIS 18 are connected to a scanner interface (to be referred to as a “scanner I/F” hereinafter) section 10 through an analog front end (AFE) 15 .
  • Read data can be input to the image processing apparatus 200 without the intervention of individual dedicated circuits. Data processing by the scanner I/F section 10 will be described later in detail.
  • a scanner image processing section 20 executes image processing corresponding to an image processing operation mode (color copy, monochrome copy, color scan, monochrome scan, and the like) for image data that is bitmapped on a main memory 100 by processing of the scanner I/F section 10 .
  • the scanner image processing section 20 will be described later in detail.
  • A printer image processing section 30 is a processing unit which outputs image data obtained by image processing to a printer.
  • the printer image processing section 30 executes processing for outputting an image processing result to a laser beam printer (LBP) 45 which is connected through an LBP interface (I/F) 40 .
  • a JPEG module 50 and JBIG module 60 are processing sections which execute compression and expansion processing of image data on the basis of predetermined standards.
  • a memory control section 70 is connected to a first BUS 80 of the image processing system and a second BUS 85 of the computer system.
  • the memory control section 70 systematically controls processing units (LDMAC_A to LDMAC_F ( 105 a to 105 f )) which execute DMA control related to a data write and read for the main memory (SDRAM) 100 .
  • DMA (Direct Memory Access) means processing for directly moving data between the main storage device and the peripheral devices.
  • the processing units (LDMAC_A to LDMAC_F ( 105 a to 105 f )) which execute DMA control of image data are connected between the first BUS 80 and the above-described scanner I/F section 10 , scanner image processing section 20 , printer image processing section 30 , LBP I/F section 40 , JPEG processing section 50 , and JBIG processing section 60 in correspondence with the respective processing sections ( 10 to 60 ).
  • the LDMAC_A to LDMAC_F ( 105 a to 105 f ) generate predetermined address information to execute DMA control and control DMA on the basis of this information.
  • the LDMAC_A 105 a generates, for each DMA channel, address information (e.g., a start address to start DMA or offset information to switch the address of the memory) to DMA-transfer image data read by the scanner I/F section 10 to the main memory 100 .
  • the LDMAC_B ( 105 b ) generates, in accordance with a DMA channel, address information to read out image data bitmapped on the main memory 100 .
  • the LDMAC_C to LDMAC_F ( 105 c to 105 f ) can also generate predetermined address information and, on the basis of the information, execute DMA control related to data transmission/reception to/from the main memory 100 . More specifically, the LDMAC_C to LDMAC_F ( 105 c to 105 f ) have channels corresponding to the data write and read and generate address information corresponding to the channels to control DMA.
  • the first BUS 80 allows data transmission/reception between the processing sections ( 10 to 60 ) of the image processing system.
  • the second BUS 85 of the computer system is connected to a CPU 180 , communication & user interface control section 170 , mechatronics system control section 125 , and ROM 95 .
  • the CPU 180 can control the above-described LDMAC_A to LDMAC_F ( 105 a to 105 f ) on the basis of control parameters or a control program stored in the ROM 95 .
  • the mechatronics system control section 125 includes a motor control section 110 and an interrupt timer control section 120 which executes timing control to control the motor drive timings or synchronization of processing of the image processing system.
  • An LCD control section 130 is a unit which executes display control to display various settings or processing situations of the image processing apparatus on an LCD 135 .
  • USB interface sections 140 and 150 enable connection to the peripheral devices.
  • FIG. 1 shows a state in which a BJ-printer 175 is connected.
  • a media access control (MAC) section 160 is a unit which controls data transmission (access) timings to a connected device.
  • the CPU 180 controls the entire operation of the image processing apparatus 200 .
  • the scanner I/F section 10 is compatible with the CCD 17 and CIS 18 serving as image reading devices.
  • the scanner I/F section 10 executes input processing of signals from these image reading devices.
  • the input image data is DMA-transferred by the LDMAC_A ( 105 a ) and bitmapped on the main memory 100 .
  • FIG. 2 is a block diagram showing the schematic composition of the scanner I/F section 10 .
  • a timing control section 11 a generates a read device control signal corresponding to the read speed and outputs the control signal to the CCD 17 /CIS 18 .
  • the device control signal synchronizes with a sync signal generated by the scanner I/F section 10 so that the read timing in the main scanning direction and read processing can be synchronized.
  • An LED lighting control section 11 b is a unit which controls lighting of an LED 19 serving as a light source for the CCD 17 /CIS 18 .
  • the LED lighting control section 11 b controls sync signals (TG and SP; FIGS. 3A and 4) for sequential lighting control of LEDs corresponding to R, G, and B color components, a clock signal (CLK; FIG. 4), and brightness control suitable for the CCD 17 /CIS 18 and also controls the start/end of lighting.
  • the control timing is based on a sync signal received from the above-described timing control section. Lighting of the LED 19 is controlled in synchronism with drive of the image reading device.
  • FIGS. 3A to 3 D are views showing output signals by the CCD 17 .
  • An original surface is irradiated with light emitted from the LED 19 . Reflected light is guided to the CCD 17 and photoelectrically converted. For example, the original surface is sequentially scanned for each line in the main scanning direction while moving the read position at a constant speed in a direction (sub-scanning direction) perpendicular to the line direction, i.e., the main scanning direction of the CCD 17 . Accordingly, the image on the entire original surface can be read.
  • As shown in FIG. 3A, on the basis of the sync signal (TG) output from the timing control section 11 a , signals corresponding to the R, G, and B elements of one line of the CCD 17 are output in parallel (FIGS. 3B, 3C, and 3D).
  • FIG. 4 is a timing chart related to lighting control of the LED 19 for the CIS 18 .
  • the lighting start and end timings of the R, G, and B LEDs are controlled.
  • the period of the sync signal (SP) is represented by Tstg.
  • Within this time (Tstg), lighting of one of the LEDs (R, G, and B) or a combination thereof is controlled.
  • Tled indicates the LED ON time during one period (Tstg) of the sync signal (SP).
  • FIG. 5A is a timing chart showing ON states ( 51 a to 51 d ) of the LEDs corresponding to R, G, and B and an output 51 e obtained by photoelectrically converting LED reflected light accumulated in the ON time in accordance with the timing chart shown in FIG. 4 described above.
  • As indicated by 51 e in FIG. 5A, the R output, G output, and B output corresponding to the R, G, and B colors are output as serial data, unlike the output signals of the CCD 17 described above.
  • FIG. 5B is a timing chart showing timings when the R, G, and B LEDs 19 are sequentially turned on within one period of the sync signal (SP) in association with control of the CIS 18 .
  • With this control, the signal from the image reading device can be input to the image processing apparatus 200 as monochrome image data.
  • FIG. 5C is a view showing outputs when two channels of the CIS 18 are arranged in the main scanning direction.
  • Channel 1 and channel 2 are indicated by 53 c and 53 d , respectively, in FIG. 5C.
  • channel 2 outputs 2,794 bits as effective bits from the 3,255th effective bit (the bit that follows the last 3,254th bit of the sensor output of channel 1 ) in synchronism with the trailing edge of the Nth CLK signal.
  • the maximum number of channels of the CIS is not limited to two. Even when, e.g., a 3-channel structure is employed, the scope of the present invention is not limited, and only the number of effective bit outputs changes.
  • the output signal from the image reading device (CCD 17 /CIS 18 ) is input to the AFE (Analog Front End) 15 .
  • the AFE 15 executes gain adjustment ( 15 a and 15 d ) and A/D conversion processing ( 15 b , 15 c , and 15 e ) for the output signals from the CCD 17 and CIS 18 .
  • the AFE 15 converts the analog signal output from each image reading device into a digital signal and inputs the digital signal to the scanner I/F section 10 .
  • the AFE 15 can also convert parallel data output from each image reading device into serial data and output the serial data.
  • a sync control section 11 c shown in FIG. 2 sets, for the AFE 15 , a predetermined threshold level corresponding to the analog signal from each device ( 17 or 18 ) and adjusts the output signal level according to the difference between the image reading devices.
  • the sync control section 11 c also generates and outputs a sync clock to execute sampling control of the analog signal to cause the AFE 15 to output a digital signal and receives read image data by a predetermined digital signal from the AFE 15 .
  • This data is input to an output data control section 11 d through the sync control section 11 c .
  • the output data control section 11 d stores the image data received from the AFE 15 in buffers ( 11 e , 11 f , and 11 g ) in accordance with the output mode of the scanner I/F section 10 .
  • the output mode of the scanner I/F section 10 can be switched between a single mode, 2-channel (2-ch) mode, and 3-channel (3-ch) mode in accordance with the connected image reading device.
  • the single mode is selected when main-scanning data should be input from the AFE 15 .
  • only one buffer is usable.
  • the 2-ch mode is selected when data input from the AFE 15 should be input at the same timing as 2-channel information of the image reading device.
  • Two buffers (e.g., 11 e and 11 f ) are set in the usable state.
  • the 3-ch mode is selected when image data received from the AFE 15 should be input at the same timing as R, G, and B outputs.
  • Three buffers ( 11 e , 11 f , and 11 g ) are set in the usable state.
  • data received from the AFE 15 contains R, G, and B data outputs which are serially sequenced in accordance with the lighting order of the LEDs, as indicated by 51 e in FIG. 5A.
  • the output data control section 11 d stores the data in one buffer (e.g., the first buffer ( 11 e )) in accordance with the sequence.
  • This processing also applies to a case wherein monochrome image data is read by the CIS 18 .
  • the monochrome image data is stored in one buffer.
  • the above-described 2-ch mode is set.
  • Data received from the AFE 15 contains data for each of two regions divided in the main scanning direction, as indicated by 53 c and 53 d in FIG. 5C.
  • the output data control section 11 d stores the received data in two buffers (e.g., the first buffer ( 11 e ) and second buffer ( 11 f )). This processing also applies to a case wherein monochrome image data is read by the CIS having two channels.
  • the output data control section 11 d can separately store the data received from the AFE 15 , which contains R, G, and B data, in three buffers (first, second, and third buffers ( 11 e , 11 f , and 11 g )) in the above-described 3-ch mode.
  • FIG. 7 is a block diagram showing the schematic composition of the LDMAC_A ( 105 a ) which DMA-transfers image data read by the image reading device ( 17 to 18 ) to the main memory (SDRAM) 100 and the LDMAC_B ( 105 b ) which controls DMA between the main memory 100 and the scanner image processing section 20 .
  • a buffer controller 75 controls the LDMAC_A ( 105 a ) and LDMAC_B ( 105 b ) to arbitrate the data write and read.
  • the LDMAC_A ( 105 a ) has a data arbitration unit 71 a , first write data interface (I/F) section 71 b , and I/O interface section 71 c.
  • the I/O interface section 71 c sets, in the first write data I/F section 71 b , predetermined address information generated by the LDMAC_A to store data in the main memory 100 .
  • the I/O interface section 71 c also receives image data from the scanner I/F section 10 and stores them in buffer channels (to be referred to as “channels” hereinafter) (ch 0 to ch 2 ) in the LDMAC_A ( 105 a ).
  • the first write data I/F section 71 b is connected to a third BUS 73 to be used to write data in the main memory 100 .
  • the first write data I/F section 71 b DMA-transfers data stored in the channels (ch 0 to ch 2 ) to the main memory 100 in accordance with the generated predetermined address information.
  • the data arbitration unit 71 a reads the data stored in each channel and transfers the data in each channel in accordance with the write processing of the first write data I/F section 71 b.
  • the first write data I/F section 71 b is connected to the buffer controller 75 and controlled such that memory access does not conflict with the data read or write by the LDMAC_B ( 105 b ) (to be described later).
  • With access control for the main memory 100 , even when the main memory 100 is used as a ring buffer, data is never overwritten at the same memory address before the data stored in the main memory 100 has been read out. Hence, the memory resource can be used effectively.
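The overwrite guarantee described above can be pictured with a small bookkeeping sketch (hypothetical names, not the patent's implementation): the write side is only allowed to proceed while it stays at most one buffer's worth of data ahead of the read side.

```c
#include <stdbool.h>
#include <stdint.h>

/* Ring-buffer bookkeeping for the arbitration by the buffer controller 75:
 * the writer (LDMAC_A) must not overwrite data the reader (LDMAC_B) has not
 * yet consumed.  Counters are running totals in bytes; names are illustrative. */
typedef struct {
    uint32_t size;      /* ring buffer capacity in bytes */
    uint32_t written;   /* total bytes written so far    */
    uint32_t consumed;  /* total bytes read so far       */
} ring_state_t;

/* True if `nbytes` more may be written without overtaking the reader. */
static bool ring_write_allowed(const ring_state_t *rs, uint32_t nbytes)
{
    return (rs->written - rs->consumed) + nbytes <= rs->size;
}
```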
  • FIGS. 8A and 8B are views for explaining processing for causing the LDMAC_A 105 a to write 1-channel data in the main memory (SDRAM) 100 .
  • SDRAM main memory
  • a composition which causes the data arbitration unit 71 a and first write data I/F section 71 b to DMA-transfer data of channel (ch 0 ) and store it in the main memory 100 will be referred to as "first LDMAC".
  • a composition which processes data of channel (ch 1 ) will be referred to as “second LDMAC”.
  • a composition which processes data of channel (ch 2 ) will be referred to as “third LDMAC”.
  • FIG. 8A is a view showing processing for separating 1-channel color image data into R, G, and B data and storing them.
  • the first LDMAC writes, of the R, G, and B in the line order, R data corresponding to one line (R 1 to R 2 ) in the main scanning direction in an R area ( 1000 a ) of the main memory 100 and switches the write address to a start address (G 1 ) of a G area ( 1000 b ) as the next write area.
  • the first LDMAC writes, of the R, G, and B, G data corresponding to one line (G 1 to G 2 ) in the main scanning direction in the G area ( 1000 b ) of the main memory 100 and switches the write address to a start address (B 1 ) of a B area ( 1000 c ) as the next write area.
  • the first LDMAC writes B data corresponding to one line (B 1 to B 2 ) in the main scanning direction in the B area ( 1000 c ) of the main memory 100 and switches the address to the start address (R 2 ) of the second line of the R area ( 1000 a ).
  • the data write address is shifted to the second line in the sub-scanning direction, and the data are written.
  • the memory address as the storage destination of data corresponding to each of the R, G, and B data is given as offset information (A or B), and the storage area for each color data is switched.
  • the R, G, and B data in the line order can be separated and stored in the main memory 100 as R data, G data, and B data.
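A minimal C sketch of this write pattern (FIG. 8A), assuming the two offsets play the roles described above: `off_a` moves the write pointer from the end of one colour line to the start of the next colour plane, and `off_b` moves it from the end of the B line back to the next line of the R area. The names and the use of plain memcpy in place of DMA are assumptions.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scatter one line of line-sequential R, G, B data into the separate
 * R, G and B plane areas of the main memory (cf. FIG. 8A).
 *   wr    : current write position (R1, then R2, ... on later calls)
 *   off_a : jump applied after the R line and after the G line
 *   off_b : jump applied after the B line (back to the next R line)   */
static uint8_t *write_rgb_line_planar(uint8_t *wr,
                                      const uint8_t *r_line,
                                      const uint8_t *g_line,
                                      const uint8_t *b_line,
                                      size_t line_len,
                                      ptrdiff_t off_a, ptrdiff_t off_b)
{
    const uint8_t *lines[3] = { r_line, g_line, b_line };

    for (int p = 0; p < 3; ++p) {
        memcpy(wr, lines[p], line_len);        /* write one colour line   */
        wr += line_len;
        wr += (p < 2) ? off_a : off_b;         /* switch the storage area */
    }
    return wr;   /* start address of the next R line in the R area */
}
```

Calling such a routine once per scanned line leaves the R, G, and B planes separated on the memory, independently of the line-sequential order in which the device delivered them.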
  • FIG. 8B is a view for explaining write processing of monochrome image data by the CIS 18 , which is obtained at the LED lighting timing shown in FIG. 5B.
  • Monochrome image data need not be separated into R, G, and B data.
  • data corresponding to one line (M 1 to M 2 ) is written in the main scanning direction of the main memory 100 .
  • the write address is shifted in the sub-scanning direction of an area ( 1000 d ), and the next data corresponding to the second line (M 3 to M 4 ) is written.
  • the monochrome image data can be stored in the area ( 1000 d ) of the main memory.
  • FIG. 9A is a view showing processing for separating 2-channel color image data into R, G, and B data and storing them, as shown in FIG. 5C.
  • the memory area in the main scanning direction is divided in correspondence with the two channels.
  • Image data read by the CIS 18 having two channels (chip 0 and chip 1 ) are stored in two buffers ( 11 e and 11 f ) of the scanner I/F section 10 .
  • the data in the two buffers ( 11 e and 11 f ) are transferred to the channels (ch 0 and ch 1 ) in the LDMAC_A 105 a under the control by the LDMAC_A 105 a.
  • the first LDMAC stores the data (chip 0 _data) of channel (ch 0 ) in areas indicated by a first R area ( 1100 a ), first G area ( 1200 a ), and first B area ( 1300 a ) in FIG. 9A.
  • the first LDMAC writes, of the R, G, and B input from chip 0 , R data (RA 1 to RA 2 ) in the first R area ( 1100 a ) of the main memory and switches the write address to a start address (GA 1 ) of the first G area ( 1200 a ) as the next write area (offset information C).
  • the first LDMAC writes, of the R, G, and B data, G data (GA 1 to GA 2 ) in the first G area ( 1200 a ) of the main memory and switches the write address to a start address (BA 1 ) of the first B area ( 1300 a ) as the next write area (offset information C).
  • the first LDMAC writes, of the R, G, and B data, B data (BA 1 to BA 2 ) in the B area ( 1300 a ) of the main memory, and after the end of processing, switches the address to a start address (RA 3 ) of the second line in the sub-scanning direction of the R area ( 1100 a ) (offset information D).
  • the data write address is shifted to the second line in the sub-scanning direction, and the data are written.
  • the first LDMAC can arbitrarily set a memory address as data storage destination for each of the first R area ( 1100 a ), first G area ( 1200 a ), and first B area ( 1300 a ) in FIG. 9A as the data storage area of the main memory.
  • the data stored in the channel (ch 0 ) is stored in the main memory 100 in accordance with the settings.
  • the second LDMAC stores the data (chip 1 _data) of channel (ch 1 ) in areas indicated by a second R area ( 1100 b ), second G area ( 1200 b ), and second B area ( 1300 b ) in FIG. 9A.
  • the second LDMAC writes, of the R, G, and B input from chip 1 , R data (RB 1 to RB 2 ) in the second R area ( 1100 b ) of the main memory and switches the write address to a start address (GB 1 ) of the second G area ( 1200 b ) as the next write area (offset information E).
  • the second LDMAC writes, of the R, G, and B data, G data (GB 1 to GB 2 ) in the second G area ( 1200 b ) of the main memory 100 and switches the write address to a start address (BB 1 ) of the second B area ( 1300 b ) as the next write area (offset information E).
  • the second LDMAC writes, of the R, G, and B data, B data (BB 1 to BB 2 ) in the second B area ( 1300 b ) of the main memory 100 , and after the end of processing, switches the address to a start address (RB 3 ) of the second line in the sub-scanning direction of the second R area ( 1100 b ) (offset information F).
  • the data write address is shifted to the second line in the sub-scanning direction, and the data are written.
  • the memory address as the storage destination of data corresponding to each of the R, G, and B data is given as offset information (C, D, E, or F), and the storage area for each color data is switched.
  • the R, G, and B data in the line order can be separated and stored in the main memory 100 as R data, G data, and B data.
  • the start addresses (RA 1 and RB 1 in FIG. 9A) at which DMA transfer is started and offset information (C, D, E, and F) are generated by the LDMAC_A ( 105 a ) described above.
  • FIG. 9B is a view for explaining write processing of monochrome image data by the CIS having two channels, which is obtained at the LED lighting timing shown in FIG. 5B.
  • Monochrome image data need not be separated into R, G, and B data, unlike the above-described color image data.
  • data corresponding to one line (MA 1 to MA 2 or MB 1 to MB 2 ) is written in the main scanning direction of the main memory 100 .
  • the write address is shifted in the sub-scanning direction of an area ( 1400 a or 1400 b ), and the next data corresponding to the second line (MA 3 to MA 4 or MB 3 to MB 4 ) is written.
  • the monochrome image data can be stored in the areas ( 1400 a and 1400 b ) of the main memory.
  • FIG. 10 is a view for explaining processing in which when the output data control section 11 d of the scanner I/F section 10 processes image data read by the CCD 17 as 3-channel data (R data, G data, and B data), the first to third LDMACs corresponding to the respective channels write data in the main memory 100 .
  • Data stored in three buffers are transferred to channels (ch 0 , ch 1 , and ch 2 ) in the LDMAC_A 105 a under the control of the LDMAC_A ( 105 a ).
  • the data transferred to ch 0 is written in the main memory 100 by the first LDMAC.
  • the data transferred to ch 1 is written in the main memory 100 by the second LDMAC.
  • the data transferred to ch 2 is written in the main memory 100 by the third LDMAC.
  • the first to third LDMACs write the data in areas corresponding to an R area ( 1500 a ), G area ( 1500 b ), and B area ( 1500 c ) of the main memory 100 so that the R, G, and B data can be separately stored on the main memory 100 .
  • image data read by the image reading device (CCD 17 or CIS 18 ) is distributed to channels that control DMA transfer in accordance with the output format. Address information and offset information, which control DMA for the distributed data, are generated.
  • the image processing apparatus can be compatible with various image reading devices.
  • R, G, and B data are separated, and the image data are stored on the main memory 100 independently of the output format of the image reading device (CCD 17 or CIS 18 ). For this reason, DMA transfer corresponding to the output format of the image reading device (CCD 17 or CIS 18 ) need not be executed for the image processing section (to be described later) on the output side. Only DMA transfer corresponding to necessary image processing needs to be executed. Hence, an image processing apparatus that can be compatible with the output format of the image reading device (CCD 17 or CIS 18 ) with a very simple arrangement and control can be provided.
  • FIG. 11A is a view showing a state in which the main memory 100 is divided into predetermined rectangular areas (blocks).
  • FIG. 11B is a view showing a case wherein the main memory 100 is used as a ring buffer.
  • address information to define a rectangular area is set in accordance with the image processing mode (copy mode or scanner mode).
  • SA indicates the start address of DMA.
  • An area in the main scanning direction (X-axis direction) is divided by a predetermined byte length (XA or XB).
  • An area in the sub-scanning direction (Y-axis direction) is divided by a predetermined number of lines (YA or YB).
  • hatched areas 101 a and 101 b are the same memory area.
  • DMA for a rectangular area (0,0) starts from the start address SA.
  • an address represented by offset data (OFF 1 A) is set as a transfer address shifted in the sub-scanning direction by one line.
  • transfer in the main scanning direction and address shift by offset data (OFF 1 A) are controlled.
  • For DMA of the rectangular area (1,0), the address first jumps to an address represented by the offset data (OFF 2 A). Transfer in the main scanning direction and the address shift by offset data (OFF 1 A) are then controlled, as in the rectangular area (0,0).
  • When DMA for the rectangular area (1,0) is ended, processing advances to the next rectangular area (2,0). In this way, DMA for YA lines is executed until an area (n,0). Then, the address jumps to an address represented by offset data (OFF 3 ). Processing advances to processing for a rectangular area (0,1).
  • DMA for areas (1,1), (2,1), . . . is controlled in the same way as described above. For example, if there are rectangular areas having different sizes (defined by XB and YB) because of the memory capacity, offset data (OFF 1 B and OFF 2 B) corresponding to the area sizes are further set to control DMA.
  • the number of pixels in the main scanning direction and the number of lines in the sub-scanning direction are set in accordance with the resolution in the main scanning direction and the pixel area to be referred to in accordance with the set image processing mode so that assignment (segmentation) of image data bitmapped on the memory is controlled.
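As a sketch of the address sequence just described (FIG. 11A), the loop below walks the blocks (0,0), (1,0), …, (n,0), (0,1), … and applies OFF 1 A after each block line, OFF 2 A between blocks of the same row, and OFF 3 between block rows. How the three offsets are derived from the block geometry is not spelled out here, so the code only mirrors the order in which they are applied; all names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Block segmentation parameters for FIG. 11A (illustrative names). */
typedef struct {
    uint32_t sa;       /* DMA start address (SA)                          */
    uint32_t xa;       /* bytes transferred per block line (XA)           */
    uint32_t ya;       /* lines per block (YA)                            */
    uint32_t nx, ny;   /* blocks per row / number of block rows           */
    int32_t  off1a;    /* applied after each block line                   */
    int32_t  off2a;    /* applied after each block of a row               */
    int32_t  off3;     /* applied after the last block of a row           */
} block_map_t;

/* Print the (address, length) pairs the DMA controller would follow. */
static void scan_blocks(const block_map_t *m)
{
    uint32_t addr = m->sa;

    for (uint32_t by = 0; by < m->ny; ++by) {            /* block rows (…,0), (…,1)  */
        for (uint32_t bx = 0; bx < m->nx; ++bx) {         /* blocks (0,by) … (n,by)   */
            for (uint32_t line = 0; line < m->ya; ++line) {
                printf("block(%u,%u) line %u: addr=0x%08lx len=%u\n",
                       (unsigned)bx, (unsigned)by, (unsigned)line,
                       (unsigned long)addr, (unsigned)m->xa);
                addr += m->xa + (uint32_t)m->off1a;       /* main scan + OFF1A shift  */
            }
            addr += (uint32_t)m->off2a;                   /* jump to next block       */
        }
        addr += (uint32_t)m->off3;                        /* jump to next block row   */
    }
}
```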
  • FIGS. 12A to 12 C are views showing capacities necessary for the main memory in the respective image processing modes.
  • the capacity in each processing mode is set in the following way.
  • Character determination processing: 11 lines on each of the upper and lower sides, 12 pixels on the left side, and 13 pixels on the right side
  • Color determination filtering processing: two lines on each of the upper and lower sides, and two pixels on each of the left and right sides
  • the transfer efficiency is defined as the area ratio of the effective pixel area to the image area including the overlap area. As described above, in the copy mode, ensuring the overlap area is essential. Hence, the transfer efficiency is low. In the scanner mode, however, no overlap width is necessary except for magnification processing. Hence, the transfer efficiency is high.
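The transfer efficiency defined above is simply the ratio of the effective pixel area to the whole transferred area including the overlap; a one-line helper makes the comparison concrete (the numbers in the comment are illustrative, not taken from the figures).

```c
/* Transfer efficiency = effective pixel area / transferred area (with overlap).
 * Example (illustrative values): a 256 x 24 effective block with the copy-mode
 * overlap (12/13 pixels, 11/11 lines) gives roughly 0.48, while the same block
 * with only a one-line overlap in the scanner mode gives roughly 0.96.          */
static double transfer_efficiency(unsigned eff_w, unsigned eff_h,
                                  unsigned ov_left, unsigned ov_right,
                                  unsigned ov_top, unsigned ov_bottom)
{
    double total = (double)(eff_w + ov_left + ov_right) *
                   (double)(eff_h + ov_top + ov_bottom);
    return ((double)eff_w * (double)eff_h) / total;
}
```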
  • the contents of image processing change between the scanner mode and the copy mode.
  • a necessary memory area is appropriately set in accordance with the processing contents. For example, as shown in FIG. 12A, in the color copy mode, character determination processing or color determination processing is necessary.
  • the overlap area required to extract the effective pixel area (indicated in FIG. 12A as an area ensured around the effective pixels) becomes large.
  • the overlap area is decided by tradeoff with the memory area.
  • In the scanner mode, on the other hand, the overlap width need not be ensured except for magnification processing.
  • However, about 1,200 dpi must be ensured as the number of effective pixels in the main scanning direction.
  • the number of lines in the sub-scanning direction is set to, e.g., 24 lines. With this setting, the assignment amount of the main memory in the color copy mode can be almost the same as that in the scanner mode.
  • FIG. 13 is a flow chart for explaining the flow of data storage processing in the copy mode.
  • In step S 10 , it is determined whether the copy mode is the color copy mode. In the color copy mode (YES in step S 10 ), the processing advances to step S 20 to set address information for DMA transfer in the color copy mode as follows.
  • Effective pixels: e.g., a resolution of 600 dpi in the main scanning direction
  • An overlap width of 11 lines on each of the upper and lower sides, 12 pixels on the left side, and 13 pixels on the right side is set around the effective pixels (FIG. 12A).
  • In the monochrome copy mode (NO in step S 10 ), the processing advances to step S 30 , and address information for DMA transfer is set in the following way.
  • Effective pixels: e.g., a resolution of 600 dpi in the main scanning direction
  • An overlap width of two lines on each of the upper and lower sides and two pixels on each of the left and right sides is set.
  • Start address SA = start address (BUFTOP) of memory + number of pixels (TOTALWIDTH) in the main scanning direction of a 1-page image containing the overlap width × 2 (overlap width (upper) in the sub-scanning direction) + 2 (overlap width (left) in the main scanning direction)
  • UA = end address (BUFFBOTTOM) of memory + 1
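The SA and UA settings above amount to skipping the upper and left overlap so that DMA starts at the first effective pixel. A small sketch of that arithmetic for the monochrome copy case with a 2-line/2-pixel overlap (hypothetical function names):

```c
#include <stdint.h>

/* Start address for the monochrome copy mode (step S30): skip the upper
 * overlap (2 lines of TOTALWIDTH pixels) and the left overlap (2 pixels). */
static uint32_t copy_mode_start_addr(uint32_t buftop, uint32_t totalwidth)
{
    return buftop + totalwidth * 2u + 2u;   /* SA = BUFTOP + TOTALWIDTH x 2 + 2 */
}

/* Upper limit address: one past the end of the memory area. */
static uint32_t copy_mode_upper_addr(uint32_t buffbottom)
{
    return buffbottom + 1u;                 /* UA = BUFFBOTTOM + 1 */
}
```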
  • After step S 20 or S 30 , the processing advances to step S 40 to start DMA transfer.
  • Data stored in the channels in the LDMAC_A 105 a are sequentially read and DMA-transferred in accordance with the predetermined address information (S 50 and S 60 ).
  • When the read of the data stored in the channels (ch 0 to ch 2 ) is ended (S 70 ), DMA transfer is ended (S 80 ).
  • FIG. 14 is a flow chart for explaining the flow of data storage processing in the scanner mode.
  • address information for DMA transfer in the scanner mode is set as follows.
  • Effective pixels: e.g., a resolution of 1,200 dpi in the main scanning direction
  • an overlap width of one line on the lower side in the sub-scanning direction is ensured.
  • Start address SA = start address (BUFTOP) of memory
  • After the setting, the processing advances to step S 110 to start DMA transfer.
  • Data stored in the channels in the LDMAC_A 105 a are sequentially read out and DMA-transferred in accordance with the predetermined address information (S 120 and S 130 ).
  • When the read of the data stored in the channels is ended (S 140 ), DMA transfer is ended (S 150 ).
  • image data is bitmapped on the main memory 100 in accordance with the set processing mode.
  • the overlap width shown in FIG. 13 or 14 is an arbitrarily settable parameter, and the scope of the present invention is not limited by this condition. For example, in color transmission of a photo, the character determination processing may be omitted.
  • An arbitrary overlap width can be set in accordance with the necessary number of reference pixels in image processing such that only filter processing is to be executed.
  • the image data bitmapped on the main memory 100 is loaded to the scanner image processing section 20 as corresponding R, G, and B data or monochrome image data for each predetermined rectangular area.
  • Image processing is executed for each rectangular area.
  • the CPU 180 prepares, in the main memory 100 , shading (SHD) correction data that corrects a variation in sensitivity of the light-receiving element of the image reading device (CCD 17 /CIS 18 ) or a variation in light amount of the LED 19 .
  • the shading data of the rectangular area and image data of the rectangular area are DMA-transferred to the scanner image processing section 20 by the LDMAC_B ( 105 b ) (to be described later).
  • FIGS. 15A and 15B are views for explaining a data read when image data in a rectangular area is to be transferred to a block buffer RAM 210 (FIG. 16) of the scanner image processing section 20 .
  • An overlap area AB 1 CD 1 is set for the effective pixel area (abcd) of an area (0,0) (FIG. 15A).
  • corresponding data is read from the start address A to the address B 1 in the main scanning direction.
  • the address of data to be read next is shifted to the address A 2 in FIG. 15A by one line in the sub-scanning direction.
  • the data is read until the pixel at the address B 3 in the main scanning direction.
  • the data is read in a similar manner.
  • Data from the address C to the address D 1 which correspond to the last line of the overlap area, is read in the main scanning direction.
  • the read of the data in the area (0,0) is thus ended.
  • An overlap area B 2 ED 2 F is set for the effective pixel area (bedf) of an area (0,1) (FIG. 15B).
  • corresponding data is read from the start address B 2 to the address E in the main scanning direction.
  • the address of data to be read next is shifted to the address B 4 in FIG. 15B by one line in the sub-scanning direction.
  • the data is read until the pixel at the address B 5 in the main scanning direction.
  • Data from the address D 2 to the address F, which correspond to the last line of the overlap area, is read out in the main scanning direction.
  • the read of the data in the second area is thus ended.
  • the data of the rectangular area including the overlap area is read. The same processing as described above is executed for each rectangular area.
  • the read of data stored in the main memory 100 is controlled by the LDMAC_B ( 105 b ) shown in FIG. 7.
  • a read data I/F section 72 a is connected to the main memory 100 through a fourth bus 74 for the data read.
  • the read data I/F section 72 a can read predetermined image data from the main memory 100 by referring to address information generated by the LDMAC_B ( 105 b ).
  • the read data are set to a plurality of predetermined channels (ch 3 to ch 6 ) by a data setting unit 72 b .
  • image data for shading correction is set to channel 3 (ch 3 ).
  • Plane-sequential R data is set to channel 4 (ch 4 ).
  • Plane-sequential G data is set to channel 5 (ch 5 ).
  • Plane-sequential B data is set to channel 6 (ch 6 ).
  • the data set to the channels (ch 3 to ch 6 ) are sequentially DMA-transferred through an I/O interface 72 c under the control of the LDMAC_B ( 105 b ) and loaded to the block buffer RAM 210 (FIG. 16) of the scanner image processing section 20 .
  • Channel 7 (ch 7 ) in the LDMAC_B ( 105 b ) is a channel which stores dot-sequential image data output from the scanner image processing section 20 to store data that has undergone predetermined image processing in the main memory 100 .
  • the scanner image processing section 20 outputs address information (block end signal and line end signal) in accordance with the output of dot-sequential image data.
  • a second write data I/F 72 d stores the image data stored in channel 7 in the main memory 100 . The contents of this processing will be described later in detail.
  • FIG. 16 is a block diagram for explaining the schematic composition of the scanner image processing section 20 . Processing corresponding to each image processing mode is executed for the data loaded to the block buffer RAM 210 .
  • FIGS. 19A to 19 D are views schematically showing the sizes of rectangular areas necessary for the respective image processing modes.
  • the scanner image processing section 20 executes processing while switching the rectangular pixel area to be referred to for the rectangular area in accordance with the set image processing mode. The contents of the image processing will be described below with reference to FIG. 16. The sizes of the rectangular areas to be referred to at that processing will be described with reference to FIGS. 19A to 19 D.
  • a shading correction block (SHD) 22 is a processing block which corrects a variation in light amount distribution of the light source (LED 19 ) in the main scanning direction, a variation between light-receiving elements of the image reading device, and the offset of the dark output.
  • As shading data, correction data corresponding to one pixel is plane-sequentially stored in the order of bright R, bright G, bright B, dark R, dark G, and dark B on the main memory 100 .
  • Pixels (XA pixels in the main scanning direction and YA pixels in the sub-scanning direction (FIG. 19A)) corresponding to the rectangular area are input.
  • the input plane-sequential correction data is converted into dot-sequential data by an input data processing section 21 and stored in the block buffer RAM 210 of the scanner image processing section 20 .
  • the processing shifts to image data transfer.
  • the input data processing section 21 is a processing section which executes processing for reconstructing plane-sequential data separated into R, G, and B data to dot-sequential data.
  • Data of one pixel is stored on the main memory 100 as plane-sequential data for each of the R, G, and B colors.
  • the input data processing section 21 extracts 1-pixel data for each color data and reconstructs the data as R, G, or B data of one pixel.
  • the reconstruction processing is executed for each pixel, thereby converting the plane-sequential image data into dot-sequential image data.
  • the reconstruction processing is executed for all pixels (XA pixels ⁇ YA pixels) in the rectangle.
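A compact C sketch of this reconstruction (the buffer layout is an assumption): the three plane-sequential buffers covering one XA × YA rectangle are interleaved pixel by pixel into dot-sequential RGB data.

```c
#include <stddef.h>
#include <stdint.h>

/* Plane-sequential to dot-sequential conversion for one rectangular area.
 * r_plane/g_plane/b_plane each hold xa*ya bytes; rgb_out holds xa*ya*3.   */
static void planar_to_dot_sequential(const uint8_t *r_plane,
                                     const uint8_t *g_plane,
                                     const uint8_t *b_plane,
                                     uint8_t *rgb_out,
                                     size_t xa, size_t ya)
{
    for (size_t i = 0; i < xa * ya; ++i) {
        rgb_out[3 * i + 0] = r_plane[i];   /* R of pixel i */
        rgb_out[3 * i + 1] = g_plane[i];   /* G of pixel i */
        rgb_out[3 * i + 2] = b_plane[i];   /* B of pixel i */
    }
}
```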
  • FIG. 17 is a view schematically showing an area to be subjected to image processing and a reference area (ABCD) where filter processing and the like for the processing are to be executed.
  • “Na” and “Nb” pixels are set as overlap widths in the main scanning direction (X direction), and “Nc” and “Nd” pixels are set as overlap widths in the sub-scanning direction (Y direction), for the effective pixel area (abcd).
  • FIGS. 18A and 18B are views showing overlap widths in the respective image processing modes (color copy mode, monochrome copy mode, and scanner mode).
  • In the magnification mode, the size of the reference area becomes larger by m pixels and n lines than that in the 1× mode because of the necessity of magnification processing.
  • In the color copy mode, the largest reference area of all the image processing modes is necessary because black characters must be determined.
  • halftone dots and black characters must properly be determined.
  • a reference area having (24+m) pixels in the main scanning direction and (21+n) pixels in the sub-scanning direction (in the magnification mode) is set.
  • a reference area having (4+m) pixels in the main scanning direction and (4+n) pixels in the sub-scanning direction (in the magnification mode) is set.
  • In the scanner mode, no reference area is necessary in the 1× mode because the necessary image processing is executed by the scanner driver or application on the host computer.
  • a reference area having m pixels in the main scanning direction and n pixels in the sub-scanning direction is set in accordance with the magnification factor.
  • the present invention is not limited to the overlap widths described here. The overlap width can arbitrarily be set.
  • An averaging processing section is a processing block which executes sub-sampling (simple thinning) or averaging processing for decreasing the read resolution in the main scanning direction.
  • An input masking processing section is a processing block which calculates color correction of input R, G, and B data.
  • a correction processing section is a processing block which applies a predetermined gray level characteristic to input data.
  • a character determination processing block 24 is a processing block which determines black characters and the pixels of a line drawing contour in input image data.
  • In black character determination processing, an area larger than the period of halftone dots must be referred to, as described above.
  • an overlap area corresponding to (24+m) pixels in the main scanning direction and (21+n) pixels (lines) in the sub-scanning direction (m and n are defined by the magnification factor) is preferably referred to.
  • data corresponding to XA pixels in the main scanning direction (effective pixels+overlap width) ⁇ YA pixels in the sub-scanning direction (effective pixels+overlap width) (FIG. 19A) is referred to, like the input to the shading correction block (SHD) 22 . That is, all pixels (XA pixels ⁇ YA pixels) in the rectangle are referred to.
  • An MTF correction processing section is a processing section which executes correction of the MTF difference when the image reading device is changed, and filter processing in the main scanning direction to reduce moiré when reducing the image.
  • This block executes multiplication/addition processing of coefficients for predetermined pixels in the main scanning direction in an area of interest. Referring to FIG. 19B, two pixels of a left hatching portion (b 1 ) and three pixels of a right hatching portion (b 2 ) are ensured for an area G 1 of interest, and processing for the area G 1 is executed. That is, the area G 1 of interest in the rectangle and the areas of the hatching portions b 1 and b 2 are read to obtain the area G 1 of interest.
  • An RGB-to-(L, Ca, Cb) conversion processing section executes conversion processing of multilevel image data of each of the R, G, and B colors for the filtering (brightness enhancement, saturation enhancement, and color determination) executed by a filter processing block 26 on the output side.
  • a background density adjustment processing section executes processing for automatically recognizing the background density of an original and correcting the background density value to the white side to obtain binary data suitable for facsimile communication or the like.
  • the filter processing block 26 executes edge enhancement processing of the brightness component (L) of the image and enhancement processing of saturation (Ca, Cb) as processing for executing color determination and filtering for the data obtained in the preceding CTRN processing.
  • the filter processing block 26 also determines the chromatism of the input image and outputs the result.
  • the filter processing block 26 can also change the parameter of the enhancement amount on the basis of the character or line drawing contour portion determination signal generated by the character determination processing block 24 .
  • the data that has undergone the filter processing is converted from L, Ca, and Cb to R, G, and B data and output. When monochrome image data is to be processed, this processing block functions as an edge enhancement filter for 5 × 5 pixels.
  • the above-described filter processing is executed by using an area (hatched area) corresponding to two pixels (lines) on each of the upper and lower sides and two pixels on each of the left and right sides as reference data. That is, for the area G 1 processed by the MTF correction processing section, the area G 2 after filter processing is obtained.
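For the monochrome case mentioned above, a 5 × 5 edge-enhancement filter over one block with its 2-pixel overlap might look like the sketch below. The kernel coefficients are a generic sharpening kernel chosen for illustration (the patent does not give the actual coefficients), and the buffer layout is an assumption.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative 5x5 edge-enhancement filter over one rectangular block.
 * The input block is (w + 4) x (h + 4) pixels: the effective area plus a
 * 2-pixel overlap on every side, as described above.                      */
static void edge_enhance_5x5(const uint8_t *in, size_t in_stride,
                             uint8_t *out, size_t w, size_t h)
{
    static const int k[5][5] = {
        { 0,  0, -1,  0,  0 },
        { 0, -1, -2, -1,  0 },
        {-1, -2, 17, -2, -1 },
        { 0, -1, -2, -1,  0 },
        { 0,  0, -1,  0,  0 },
    };  /* coefficients sum to 1, so flat areas are preserved */

    for (size_t y = 0; y < h; ++y) {
        for (size_t x = 0; x < w; ++x) {
            int acc = 0;
            for (int dy = 0; dy < 5; ++dy)
                for (int dx = 0; dx < 5; ++dx)
                    acc += k[dy][dx] * in[(y + dy) * in_stride + (x + dx)];
            if (acc < 0)   acc = 0;       /* clamp to the 8-bit range */
            if (acc > 255) acc = 255;
            out[y * w + x] = (uint8_t)acc;
        }
    }
}
```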
  • a magnification processing (LIP) block 27 is a processing block which executes linear interpolation magnification processing in the main and sub-scanning directions.
  • an area G 3 is obtained as a result of linear interpolation magnification processing.
  • the area of the area G 3 is decided by magnifying the hatched area of image data (d: (X−(Na+Nb)) pixels × (Y−(Nc+Nd)) pixels) in the main and sub-scanning directions in accordance with a predetermined magnification factor (main scanning direction (+m pixels) and sub-scanning direction (+n lines)). That is, the area G 2 after filter processing is input, thereby obtaining the area G 3 after magnification.
  • “Na” and “Nb” indicate the numbers of pixels which are set as overlap widths in the main scanning direction (X direction), and “Nc” and “Nd” indicate the numbers of pixels which are set as overlap widths in the sub-scanning direction (Y direction), as in FIG. 17.
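A simple model of linear-interpolation magnification in both directions is sketched below (hypothetical interface; the real LIP block 27 presumably works in fixed point and streams the data, which is omitted here).

```c
#include <stddef.h>
#include <stdint.h>

/* Linear-interpolation magnification of a (w_in x h_in) effective area to
 * (w_out x h_out), in both the main and sub-scanning directions.          */
static void lip_magnify(const uint8_t *in, size_t w_in, size_t h_in,
                        uint8_t *out, size_t w_out, size_t h_out)
{
    for (size_t yo = 0; yo < h_out; ++yo) {
        double fy = (h_out > 1) ? (double)yo * (h_in - 1) / (h_out - 1) : 0.0;
        size_t y0 = (size_t)fy, y1 = (y0 + 1 < h_in) ? y0 + 1 : y0;
        double wy = fy - (double)y0;

        for (size_t xo = 0; xo < w_out; ++xo) {
            double fx = (w_out > 1) ? (double)xo * (w_in - 1) / (w_out - 1) : 0.0;
            size_t x0 = (size_t)fx, x1 = (x0 + 1 < w_in) ? x0 + 1 : x0;
            double wx = fx - (double)x0;

            /* interpolate along the main scanning direction, then between lines */
            double top = (1.0 - wx) * in[y0 * w_in + x0] + wx * in[y0 * w_in + x1];
            double bot = (1.0 - wx) * in[y1 * w_in + x0] + wx * in[y1 * w_in + x1];
            out[yo * w_out + xo] = (uint8_t)((1.0 - wy) * top + wy * bot + 0.5);
        }
    }
}
```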
  • the above image processing is executed for image data of each rectangular area in accordance with the set image processing mode (copy mode or scanner mode).
  • Since a rectangular area corresponding to an image processing mode is set on the memory and the unit of the rectangular area is switched, a resolution and high-resolution processing corresponding to the image processing mode can be implemented.
  • Each rectangular area contains an overlap width necessary for image processing of each processing block.
  • the image data of an adjacent area need not be read for each rectangular area to process the end portion of the rectangular image data to be processed.
  • the work memory can further be reduced as compared to the method which simply segments an image into rectangular areas and executes image processing. In this way, image data corresponding to the maximum rectangle necessary for each image processing section is loaded to the block buffer RAM 210 in advance.
  • a necessary image data amount is transferred between the image processing sections. Only with this operation, a series of image processing operations necessary for each mode such as a color copy, monochrome copy, or scanner mode can be implemented. Hence, a line buffer dedicated for an image processing block can be omitted.
  • image processing can be executed independently of the main scanning width or resolution. For this reason, the capacity of the line buffer of each image processing section need not be increased in accordance with the main scanning width or resolution, unlike the prior art.
  • an apparatus such as a copying machine or scanner which executes necessary image processing at appropriate time can be provided with a very simple arrangement.
  • FIG. 20 is a view for explaining the start point in the main scanning direction for DMA-transferring the image data of the next rectangular area after the end of processing of one rectangular area.
  • When DMA of the first rectangular area ABCD is ended and transfer up to the pixel at a point D is complete, the start point in the main scanning direction is set at a position (point S 1 in FIG. 20) that is returned by Na+Nb pixels in the main scanning direction.
  • When DMA of the rectangular data corresponding to one line is sequentially ended and DMA of a point E corresponding to the final data of the first line has been transferred, the start point, shifted in the sub-scanning direction to transfer the rectangular data of the next line, is set at a position (S 2 in FIG. 20) that is returned by Nc+Nd pixels.
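In other words, the next start points are obtained by stepping back by the sum of the two overlap widths in each direction. As a tiny sketch (illustrative names):

```c
/* Start-point bookkeeping described for FIG. 20: after a block has been
 * transferred, the next block in the main scanning direction starts Na+Nb
 * pixels before the end of the previous one, and the next block row starts
 * Nc+Nd lines above the end of the previous row.                            */
static unsigned next_block_start_x(unsigned prev_end_x, unsigned na, unsigned nb)
{
    return prev_end_x - (na + nb);   /* point S1 in FIG. 20 */
}

static unsigned next_row_start_y(unsigned prev_end_y, unsigned nc, unsigned nd)
{
    return prev_end_y - (nc + nd);   /* point S2 in FIG. 20 */
}
```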
  • FIGS. 21 and 22 are flow charts for explaining the flow of DMA transfer processing and image processing in the respective image processing modes. Referring to FIGS. 21 and 22, detailed numerical values are used as address information. However, the present invention is not limited to these numerical values, and the address information can arbitrarily be set.
  • FIG. 21 is a flow chart for explaining the flow of a data read and image processing in the copy mode.
  • In step S 200 , it is determined whether the copy mode is the color mode. In the color mode (YES in step S 200 ), the processing advances to step S 210 . In the monochrome mode (NO in step S 200 ), the processing advances to step S 220 .
  • In step S 210 , address information for the read in the color copy mode is set as follows. This address information is generated by the LDMAC_B ( 105 b ) (this also applies to step S 220 ). On the basis of the address information, the LDMAC_B ( 105 b ) controls DMA.
  • XA = rectangular effective main scanning pixels + overlap width (number of pixels of the left overlap width (12 pixels) and number of pixels of the right overlap width (13 pixels))
  • YA = rectangular effective sub-scanning pixels (lines) + overlap width (number of pixels of the upper overlap width (11 pixels (lines)) and number of pixels of the lower overlap width (11 pixels (lines)))
  • OFF 2 A ⁇ (TOTALWIDTH ⁇ YA+overlap width (12 pixels on left side and 13 pixels on right side))
  • OFF 3 A ⁇ (TOTALWIDTH ⁇ (overlap width (11 pixels on upper side and 11 pixels on lower side)+effective main scanning pixels+overlap width (12 pixels on left side and 13 pixels on right side))
  • TOTALWIDTH number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width)
  • XANUM effective main scanning pixels/rectangular effective main scanning pixels
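  • The above listing can be read as ordinary integer arithmetic. The following C sketch computes the same quantities from the rectangle size and the overlap widths; the struct, the function name, and the sign convention that OFF2A and OFF3A are backward (negative) offsets are assumptions made for illustration, not register definitions of the apparatus.

```c
#include <stddef.h>

/* Overlap widths of one image processing mode (cf. FIGS. 18A and 18B). */
typedef struct {
    size_t left, right;   /* main scanning overlap, pixels         */
    size_t top, bottom;   /* sub-scanning overlap, pixels (lines)  */
} overlap_t;

typedef struct {
    size_t xa, ya;        /* rectangle size including the overlap  */
    long   off2a, off3a;  /* backward jumps between rectangles     */
    size_t totalwidth, xanum;
} read_addr_info_t;

/* Address information of steps S210/S220: the color copy mode uses overlaps
 * of {12, 13, 11, 11}; the monochrome copy mode uses {2, 2, 2, 2}. */
static read_addr_info_t set_read_addr_info(size_t eff_x,   /* effective main scanning pixels per page line */
                                            size_t rect_x,  /* rectangular effective main scanning pixels   */
                                            size_t rect_y,  /* rectangular effective sub-scanning lines     */
                                            overlap_t ov)
{
    read_addr_info_t a;
    a.totalwidth = eff_x + ov.left + ov.right;                    /* TOTALWIDTH */
    a.xa = rect_x + ov.left + ov.right;                           /* XA         */
    a.ya = rect_y + ov.top + ov.bottom;                           /* YA         */
    a.off2a = -(long)(a.totalwidth * a.ya + ov.left + ov.right);  /* OFF2A      */
    a.off3a = -(long)(a.totalwidth * (ov.top + ov.bottom)
                      + eff_x + ov.left + ov.right);              /* OFF3A      */
    a.xanum = eff_x / rect_x;                                     /* XANUM      */
    return a;
}
```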
  • In step S220, address information for the read in the monochrome copy mode is set as follows.
  • UA=end address (BUFFBOTTOM) of memory+1
  • XA=rectangular effective main scanning pixels+overlap width (number of pixels of left overlap width (2 pixels) and number of pixels of right overlap width (2 pixels))
  • YA=rectangular effective sub-scanning pixels (lines)+overlap width (number of pixels of upper overlap width (2 pixels (lines)) and number of pixels of lower overlap width (2 pixels (lines)))
  • OFF2A=−(TOTALWIDTH×YA+overlap width (2 pixels on left side and 2 pixels on right side))
  • OFF3A=−(TOTALWIDTH×(overlap width (2 pixels on upper side and 2 pixels on lower side))+effective main scanning pixels+overlap width (2 pixels on left side and 2 pixels on right side))
  • TOTALWIDTH=number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width
  • XANUM=effective main scanning pixels/rectangular effective main scanning pixels
  • In step S230, it is determined whether the LDMAC_B (105 b) is in a data-readable state. For example, when the buffer controller 75 inhibits a buffer read, the processing waits until that state is canceled (NO in step S230). If a buffer read can be executed (YES in step S230), the processing advances to step S240.
  • In step S240, the read data I/F section 72 a reads data in accordance with the set address information.
  • The data setting unit 72 b sets the data in predetermined channels (ch3 to ch6).
  • The LDMAC_B (105 b) DMA-transfers the data set in the respective channels to the buffer RAM 210 of the scanner image processing section 20.
  • The DMA-transferred data is loaded into the buffer of the scanner image processing section 20 and subjected to the image processing corresponding to each image processing mode. The contents of each image processing operation have already been described above, and a detailed description thereof will be omitted here.
  • The loaded shading correction data and image data are converted by the above-described input data processing section 21 from plane-sequential data into dot-sequential data and subjected to the following image processing.
  • In step S250, it is determined whether the copy mode is the color copy mode.
  • In the color copy mode, the processing advances to step S260 to execute character determination processing.
  • In the monochrome copy mode, step S260 (character determination processing) is skipped.
  • Filter processing is executed in step S270, and magnification processing is executed in step S280.
  • In step S290, the dot-sequential image data that has undergone the image processing is further DMA-transferred to and stored in a predetermined memory area where data that has undergone image processing is to be stored. This storage processing will be described later in detail.
  • In step S300, it is determined whether the image processing of the rectangular area and the data storage processing are ended. If NO in step S300, the processing returns to step S250 to execute the same processing as described above. If the processing of the rectangular area is ended (YES in step S300), the processing advances to step S310 to determine whether the processing of the rectangular areas that construct the entire page is ended (S310). If the processing of the entire page is not ended (NO in step S310), the processing returns to step S230 to read out the subsequent image data from the main memory 100 and execute image processing (steps from S230) for that data.
  • When the page processing is ended (YES in step S310), the processing advances to step S320 to end the DMA transfer to the scanner image processing section 20 (S320) and the data write processing to the buffer by the scanner image processing section 20 (S330). Thus, the image processing by the scanner image processing section 20 is ended (S340).
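  • The control flow of FIG. 21 can be summarized in a few lines of C. The helper functions below are hypothetical stand-ins for the hardware blocks and do not exist in the apparatus; the sketch only mirrors the order of steps S230 to S340.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the blocks that appear in FIG. 21. */
bool copy_is_color(void);
bool buffer_readable(void);               /* step S230: buffer controller 75       */
void dma_rectangle_to_scanner_ip(void);   /* step S240: LDMAC_B to buffer RAM 210  */
void character_determination(void);       /* step S260 (color copy mode only)      */
void filter_processing(void);             /* step S270                             */
void magnification_processing(void);      /* step S280                             */
void dma_result_to_main_memory(void);     /* step S290                             */
bool rectangle_done(void);                /* step S300                             */
bool page_done(void);                     /* step S310                             */

/* Per-page control flow of the copy mode (steps S230-S340). */
void copy_mode_page(void)
{
    do {
        while (!buffer_readable())        /* wait until a buffer read is allowed   */
            ;
        dma_rectangle_to_scanner_ip();    /* load one rectangle, overlaps included */

        do {
            if (copy_is_color())
                character_determination();/* skipped in the monochrome copy mode   */
            filter_processing();
            magnification_processing();
            dma_result_to_main_memory();
        } while (!rectangle_done());
    } while (!page_done());
    /* steps S320-S340: stop DMA transfer and the buffer write, end the processing */
}
```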
  • FIG. 22 is a flow chart for explaining the flow of a data read and image processing in the scanner mode.
  • First, address information to read out data from the main memory 100 is set as follows. This address information is generated by the LDMAC_B (105 b). On the basis of the address information, the LDMAC_B (105 b) controls DMA.
  • UA=end address (BUFFBOTTOM) of memory+1
  • YA=rectangular effective sub-scanning pixels (lines)+overlap width (number of pixels of lower overlap width (1 pixel (line)))
  • OFF3A=−(TOTALWIDTH×(overlap width (number of pixels of lower overlap width (1 pixel)))+effective main scanning pixels)
  • TOTALWIDTH=number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width
  • XANUM=effective main scanning pixels/rectangular effective main scanning pixels
  • In step S410, it is determined whether the LDMAC_B (105 b) is in a data-readable state. For example, when the buffer controller 75 inhibits a buffer read, the processing waits until that state is canceled (NO in step S410). If a buffer read can be executed (YES in step S410), the processing advances to step S420.
  • In step S420, the read data I/F section 72 a reads data in accordance with the set address information.
  • The data setting unit 72 b sets the data in predetermined channels (ch3 to ch6).
  • The LDMAC_B (105 b) DMA-transfers the data set in the respective channels to the buffer of the scanner image processing section 20.
  • The DMA-transferred data is loaded into the buffer of the scanner image processing section 20 and subjected to the image processing corresponding to each image processing mode.
  • The contents of the image processing have already been described above, and a detailed description thereof will be omitted.
  • The loaded image data is converted by the above-described input data processing section 21 from plane-sequential data into dot-sequential data and subjected to magnification processing in step S430.
  • In step S440, the dot-sequential image data that has undergone the image processing is further DMA-transferred to and stored in a predetermined memory area where data that has undergone image processing is to be stored. This storage processing will be described later in detail.
  • In step S450, it is determined whether the image processing of the rectangular area and the data storage processing are ended. If NO in step S450, the processing returns to step S430 to execute the same processing as described above. If the processing of the rectangular area is ended (YES in step S450), the processing advances to step S460 to determine whether the processing of the entire page is ended (S460). If the processing of the entire page is not ended (NO in step S460), the processing returns to step S410 to read out the subsequent image data from the main memory 100 and execute image processing for that data.
  • When the page processing is ended (YES in step S460), the processing advances to step S470 to end the DMA transfer to the scanner image processing section 20 (S470) and the data write processing to the buffer by the scanner image processing section 20 (S480). Thus, the image processing by the scanner image processing section 20 is ended (S490).
  • In this manner, predetermined image processing can be executed without intervention of individual line buffers of each image processing section.
  • FIG. 23 is a view for explaining processing for transferring magnified rectangular data from the magnification processing block (LIP) 27 to the main memory 100 .
  • The appearance probability of rectangular areas having different sizes is controlled in accordance with the result of magnification processing so that predetermined magnified image data can be obtained.
  • The signal that controls DMA is sent from the magnification processing block (LIP) 27 to the LDMAC_B (105 b).
  • FIG. 25 is a timing chart showing the relationship between data and signals sent from the magnification processing block (LIP) 27 to the LDMAC_B ( 105 b ) to DMA-transfer the data that has undergone image processing to the main memory 100 .
  • The LDMAC_B (105 b) starts DMA transfer without knowing the main scanning length and sub-scanning length of a rectangular area.
  • When the magnification processing block 27 transfers the final data (XA1 and XA2) of the main scanning width in one rectangle, a line end signal is output. With the line end signal, the magnification processing block 27 notifies the LDMAC_B (105 b) of the main scanning length of the rectangle.
  • Similarly, when the final data of one rectangle is transferred, a block end signal is output to the LDMAC_B (105 b). With this signal, the sub-scanning length can be recognized.
  • Then, DMA transfer shifts to the areas B21 and B22 (FIG. 23), and the data XA in the main scanning direction is sent.
  • DMA is controlled by the line end signal and block end signal. Accordingly, the rectangular area of DMA can dynamically be switched in accordance with the calculation result of the magnification processing block 27 .
  • The above-described line end signal, block end signal, and dot-sequential image data are input to the interface section 72 c of the LDMAC_B (105 b). Of these data, the image data is stored in channel (ch) 7.
  • The line end signal and block end signal are used as address information in bitmapping the data stored in channel (ch) 7 on the main memory 100.
  • The second write data I/F section 72 d reads out the data in ch7 and stores it on the main memory 100.
  • FIG. 26 is a view for explaining a state in which the data is bitmapped on the main memory 100 in accordance with the line end signal and block end signal.
  • SA represents the start address of DMA transfer.
  • Dot-sequential R, G, and B data are stored from this address in the main scanning direction.
  • On the basis of the line end signal, the address of DMA transfer is switched by the offset information (OFF1A), and data is stored in the main scanning direction from an address shifted in the sub-scanning direction by one pixel (line).
  • On the basis of the block end signal of the rectangular area (0,0), processing shifts to data storage for the next rectangular area (1,0).
  • The address of DMA transfer is switched by the offset information (OFF2A). In this case, OFF2A selects an address shifted in the main scanning direction by one pixel with respect to the area (0,0) and returned to the first line in the sub-scanning direction.
  • Similarly, OFF3 selects an address shifted in the sub-scanning direction by one pixel (line) with respect to the pixel of the final line of the area (0,0) and returned to the first pixel in the main scanning direction.
  • In this manner, the data that has undergone the image processing can be DMA-transferred to a predetermined area of the main memory 100 and stored.
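  • A minimal sketch of how the write-back side might apply OFF1A, OFF2A, and OFF3 in response to the line end and block end signals is given below. The structure, the per-pixel call, the rectangle counter, and the one-array-element-per-pixel simplification are assumptions of the sketch; in the apparatus these are hardware signals handled inside the LDMAC_B.

```c
#include <stddef.h>

typedef struct {
    unsigned char *mem;     /* main memory 100                                 */
    ptrdiff_t addr;         /* current write position, starts at SA            */
    ptrdiff_t off1a;        /* applied on a line end: next line of the block   */
    ptrdiff_t off2a;        /* applied on a block end: next rectangle in a row */
    ptrdiff_t off3;         /* applied after the last rectangle of a row       */
    size_t rects_per_row;   /* rectangles per row of the page (XANUM)          */
    size_t rect_in_row;     /* rectangles completed in the current row         */
} wb_dma_t;

/* Store one dot-sequential pixel and switch the address with the offset that
 * the accompanying line end / block end signal selects (FIGS. 25 and 26). */
static void wb_dma_write(wb_dma_t *d, unsigned char pixel,
                         int line_end, int block_end)
{
    d->mem[d->addr++] = pixel;

    if (block_end) {                       /* sub-scanning length is now known  */
        if (++d->rect_in_row == d->rects_per_row) {
            d->addr += d->off3;            /* first rectangle of the next row   */
            d->rect_in_row = 0;
        } else {
            d->addr += d->off2a;           /* next rectangle in the same row    */
        }
    } else if (line_end) {                 /* main scanning length is now known */
        d->addr += d->off1a;               /* shift one line in the sub-scanning direction */
    }
}
```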
  • In the above embodiment, the present invention has been described as a composite image processing apparatus having various image input/output functions.
  • However, the present invention is not limited to this and can also be applied to a scanner apparatus or printer apparatus having a single function, or to an optical card connected as an extension to another apparatus.
  • The unit composition of the apparatus according to the present invention is not limited to the above description.
  • The apparatus or system according to the present invention may also be constituted such that it is achieved by a plurality of apparatuses connected through a network.
  • The object of the present invention can also be achieved by supplying a storage medium which stores software program codes for implementing the functions of the above-described embodiment to a system or apparatus and causing the computer (or a CPU or MPU) of the system or apparatus to read out and execute the program codes stored in the storage medium.
  • In this case, the program codes read out from the storage medium implement the functions of the above-described embodiment by themselves, and the storage medium which stores the program codes constitutes the present invention.
  • As the storage medium for supplying the program codes, for example, a floppy (trademark) disk, hard disk, optical disk, magnetooptical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like can be used.
  • As described above, according to the present invention, an image processing apparatus which is compatible with various image reading devices can be provided. More specifically, image data read by an image reading device is distributed to channels that control DMA transfer in accordance with the output format. Address information and offset information, which control DMA for the distributed data, are generated. With this composition, the image processing apparatus can be compatible with various image reading devices.
  • In addition, R, G, and B data are separated, and the image data are stored on the main memory 100 independently of the output format of the image reading device (CCD 17 or CIS 18). For this reason, DMA transfer corresponding to the output format of the image reading device (CCD 17 or CIS 18) need not be executed for the image processing section on the output side. Only DMA transfer corresponding to the necessary image processing needs to be executed. Hence, an image processing apparatus that can be compatible with the output format of the image reading device (CCD 17 or CIS 18) with a very simple arrangement and control can be provided.
  • Furthermore, predetermined image processing can be executed without intervention of individual line buffers of each image processing section. Since intervention of a line buffer is unnecessary, the apparatus can be compatible with any flexible change in its main scanning width or resolution with a very simple arrangement.


Abstract

To provide an image processing technique compatible with both a CCD and a CIS, which controls storage of image data read by each device in a memory and the read of the stored data for each rectangular area to obtain a high memory efficiency, an image processing apparatus includes: a memory area control section which sets, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction; an address generation section which generates address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area; a memory control section which reads out the image data corresponding to the rectangular area and DMA-transfers the image data to a second memory in accordance with the generated address information; and an image processing section which executes image processing for each rectangular area of the DMA-transferred data by using the second memory.

Description

    FIELD OF THE INVENTION
  • The present invention relates to an image processing technique which is compatible with both an image reading device (CCD (Charge Coupled Device)) and a CIS (Contact Image Sensor) and controls storage of image data read by each device in a memory and a read of the stored data for each rectangular area. [0001]
  • BACKGROUND OF THE INVENTION
  • FIG. 27 is a block diagram showing the composition of a scanner image processing circuit in a conventional image processing apparatus. As an image reading device, an optical element such as a [0002] CCD 2010 or CIS 2110 is used. Data according to a predetermined output format is A/D-converted by a CCD interface (I/F) circuit 2000 or CIS interface (I/F) circuit 2100 and stored in a main memory 2200 for each line in the main scanning direction. In this case, the CCD 2010 outputs data corresponding to R, G, and B in parallel. The CIS 2110 serially outputs the signals of R, G, and B data in accordance with the order of LED lighting. Depending on different output data characteristics, the CCD and CIS have dedicated interface circuits. After predetermined A/D conversion processing, the read image data is stored in the main memory (SDRAM) 2200.
  • Referring to FIG. 27, image processing blocks (shading correction (SHD) [0003] 2300, character determination processing 2320, filter processing 2340, and the like) have dedicated line buffers 2400 a to 2400 d. In this circuit composition, data corresponding to a plurality of lines, which are stored in the main memory (SDRAM) 2200, are read out in the main scanning direction, stored in the dedicated line buffers (2400 a to 2400 d), and subjected to individual image processing operations.
  • However, in the circuit composition that prepares [0004] dedicated line buffers 2400 a to 2400 d for the respective processing sections, the maximum number of pixels that can be processed in the main scanning direction depends on the memory capacity of the dedicated line buffer of each processing section. This restricts the throughput of processing.
  • If the capacity of the line buffer is increased in the hardware configuration of the image processing circuit to improve the processing capability, the cost increases. This impedes cost reduction of the entire image processing apparatus. For example, when the resolution or main scanning width of the apparatus should be increased, the capacity of the line buffer must be increased. [0005]
  • A signal output from the [0006] CCD 2010 or CIS 2110 serving as an image reading device is processed by the dedicated interface circuit (2000 or 2100) in accordance with the output format. Bitmapping of read image data on the main memory 2200 depends on which device (e.g., the CCD or CIS) has been used, and image data input processing must inevitably be specialized. That is, the image processing circuit is customized depending on the employed image reading device. This impedes generalization and cost reduction of the image processing circuit.
  • A prior art having the above composition is disclosed in, e.g., Japanese Patent Laid-Open No. 7-170372. [0007]
  • SUMMARY OF THE INVENTION
  • The present invention has been proposed to solve the above problems, and has as its object to provide an image processing apparatus which is compatible with various image reading devices such as a CCD and CIS. It is another object of the present invention to provide an image processing apparatus which controls data processing, including storage of image data read by each image reading device in a memory and processing by an image processing section, by extracting data in a main memory as a predetermined unit appropriate for each image processing mode without intervention of individual line buffers. [0008]
  • In order to achieve the above objects, an image processing apparatus according to the present invention is characterized by mainly comprising memory area control means for setting, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction; address generation means for generating address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area; memory control means for reading out the image data corresponding to the rectangular area and DMA-transferring the image data to a second memory in accordance with the generated address information; and image processing means for executing image processing for each rectangular area of the DMA-transferred data by using the second memory. [0009]
  • Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.[0010]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. [0011]
  • FIG. 1 is a block diagram showing the schematic composition of an [0012] image processing apparatus 200 according to an embodiment of the present invention;
  • FIG. 2 is a block diagram showing the schematic composition of a scanner I/[0013] F section 10;
  • FIGS. 3A to [0014] 3D are views showing output signals by a CCD 17;
  • FIG. 4 is a timing chart related to lighting control of an [0015] LED 19 for a CIS 18;
  • FIG. 5A is a timing chart showing the relationship between an output ([0016] 51 e) and the ON states (51 b to 51 d) of LEDs corresponding to R, G, and B according to the timing chart shown in FIG. 4;
  • FIG. 5B is a timing chart showing timings when the R, G, and [0017] B LEDs 19 are sequentially turned on within one period of a sync signal (SP) in association with control of the CIS 18;
  • FIG. 5C is a view showing outputs when two channels of a [0018] CIS 18 are arranged in the main scanning direction;
  • FIG. 6 is a block diagram for explaining processing of an [0019] AFE 15;
  • FIG. 7 is a block diagram showing the schematic composition of an LDMAC_A which DMA-transfers image data read by an image reading device to a main memory and an LDMAC_B which controls DMA between the main memory and the scanner image processing section; [0020]
  • FIGS. 8A and 8B are views for explaining processing for causing an [0021] LDMAC_A 105 a to write 1-channel data in a main memory 100;
  • FIGS. 9A and 9B are views for explaining processing for causing the LDMAC_A [0022] 105 a to write data of two channels in the main memory 100;
  • FIG. 10 is a view for explaining processing for causing the LDMAC_A [0023] 105 a to write data of three channels in the main memory 100;
  • FIGS. 11A and 11B are views showing a state in which the [0024] main memory 100 is divided into predetermined rectangular areas (blocks);
  • FIGS. 12A to [0025] 12C are views showing capacities necessary for the main memory in the respective image processing modes;
  • FIG. 13 is a flow chart for explaining the flow of data storage processing in the copy mode; [0026]
  • FIG. 14 is a flow chart for explaining the flow of data storage processing in the scanner mode; [0027]
  • FIGS. 15A and 15B are views for explaining a data read when image data in a rectangular area is to be transferred to the block buffer RAM of a scanner [0028] image processing section 20;
  • FIG. 16 is a block diagram for explaining the schematic composition of the scanner [0029] image processing section 20;
  • FIG. 17 is a view schematically showing an area to be subjected to image processing and a reference area where filter processing and the like for the image processing are to be executed; [0030]
  • FIGS. 18A and 18B are views showing overlap widths in the respective image processing modes (color copy mode, monochrome copy mode, and scanner mode); [0031]
  • FIGS. 19A to [0032] 19D are views schematically showing the sizes of rectangular areas necessary for the respective image processing modes;
  • FIG. 20 is a view for explaining the start point in the DMA main scanning direction to DMA-transfer image data of the next rectangular data after the end of processing of one rectangular data; [0033]
  • FIG. 21 is a flow chart for explaining the flow of a data read and image processing in the copy mode; [0034]
  • FIG. 22 is a flow chart for explaining the flow of a data read and image processing in the scanner mode; [0035]
  • FIG. 23 is a view for explaining processing for transferring magnified rectangular data from a magnification processing block (LIP) [0036] 27 to the main memory 100;
  • FIG. 24 is a view showing connection between the magnification processing block (LIP) [0037] 27 and the LDMAC_B (105 b);
  • FIG. 25 is a timing chart showing the relationship between data and signals sent from the magnification processing block (LIP) [0038] 27 to the LDMAC_B (105 b) to DMA-transfer data that has undergone image processing to the main memory 100;
  • FIG. 26 is a view for explaining a state in which data is bitmapped on the [0039] main memory 100 in accordance with a line end signal and block end signal; and
  • FIG. 27 is a block diagram showing the composition of a scanner image processing circuit in a conventional image processing apparatus.[0040]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • A preferred embodiment of the present invention will now be described in detail in accordance with the accompanying drawings. [0041]
  • FIG. 1 is a block diagram showing the schematic composition of an [0042] image processing apparatus 200 according to an embodiment of the present invention. A CCD 17 and CIS 18 are connected to a scanner interface (to be referred to as a “scanner I/F” hereinafter) section 10 through an analog front end (AFE) 15. Read data can be input to the image processing apparatus 200 without intervening individual dedicated circuits. Data processing by the scanner I/F section 10 will be described later in detail.
  • A scanner [0043] image processing section 20 executes image processing corresponding to an image processing operation mode (color copy, monochrome copy, color scan, monochrome scan, and the like) for image data that is bitmapped on a main memory 100 by processing of the scanner I/F section 10. The scanner image processing section 20 will be described later in detail.
  • A printer [0044] image processing section 30 is a processing unit to printer-output image data obtained by image processing. The printer image processing section 30 executes processing for outputting an image processing result to a laser beam printer (LBP) 45 which is connected through an LBP interface (I/F) 40.
  • A [0045] JPEG module 50 and JBIG module 60 are processing sections which execute compression and expansion processing of image data on the basis of predetermined standards.
  • A [0046] memory control section 70 is connected to a first BUS 80 of the image processing system and a second BUS 85 of the computer system. The memory control section 70 systematically controls processing units (LDMAC_A to LDMAC_F (105 a to 105 f)) which execute DMA control related to a data write and read for the main memory (SDRAM) 100. “DMA (Direct Memory Access)” means processing for directly moving data between the main storage device and the peripheral devices.
  • The processing units (LDMAC_A to LDMAC_F ([0047] 105 a to 105 f)) which execute DMA control of image data are connected between the first BUS 80 and the above-described scanner I/F section 10, scanner image processing section 20, printer image processing section 30, LBP I/F section 40, JPEG processing section 50, and JBIG processing section 60 in correspondence with the respective processing sections (10 to 60).
  • In association with data transmission/reception between the respective image processing sections ([0048] 10 to 60) and the main memory 100, the LDMAC_A to LDMAC_F (105 a to 105 f) generate predetermined address information to execute DMA control and controls DMA on the basis of the information. For example, the LDMAC_A 105 a generates, for each DMA channel, address information (e.g., a start address to start DMA or offset information to switch the address of the memory) to DMA-transfer image data read by the scanner I/F section 10 to the main memory 100. The LDMAC_B (105 b) generates, in accordance with a DMA channel, address information to read out image data bitmapped on the main memory 100.
  • The LDMAC_C to LDMAC_F ([0049] 105 c to 105 f) can also generate predetermined address information and, on the basis of the information, execute DMA control related to data transmission/reception to/from the main memory 100. More specifically, the LDMAC_C to LDMAC_F (105 c to 105 f) have channels corresponding to the data write and read and generate address information corresponding to the channels to control DMA.
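  • As a rough picture of the address information mentioned above, the per-channel descriptor generated by an LDMAC could be imagined as follows; the structure and field names are assumptions made for this sketch and are not register definitions of the apparatus.

```c
#include <stddef.h>

/* Illustrative shape of the address information generated per DMA channel:
 * the start address at which DMA begins plus the offset values used to
 * switch the memory address during the transfer. */
typedef struct {
    unsigned  channel;      /* DMA channel, e.g. ch0 to ch7                   */
    size_t    start_addr;   /* address at which DMA transfer starts (SA)      */
    ptrdiff_t offset[3];    /* offset information such as OFF1A, OFF2A, OFF3  */
} ldmac_addr_info_t;
```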
  • The [0050] first BUS 80 allows data transmission/reception between the processing sections (10 to 60) of the image processing system. The second BUS 85 of the computer system is connected to a CPU 180, communication & user interface control section 170, mechatronics system control section 125, and ROM 95. The CPU 180 can control the above-described LDMAC_A to LDMAC_F (105 a to 105 f) on the basis of control parameters or control program stored in the ROM 95.
  • The mechatronics [0051] system control section 125 includes a motor control section 110 and an interrupt timer control section 120 which executes timing control to control the motor drive timings or synchronization of processing of the image processing system.
  • An [0052] LCD control section 130 is a unit which executes display control to display various settings or processing situations of the image processing apparatus on an LCD 135.
  • [0053] USB interface sections 140 and 150 enable connection to the peripheral devices. FIG. 1 shows a state in which a BJ-printer 175 is connected.
  • A media access control (MAC) [0054] section 160 is a unit which controls data transmission (access) timings to a connected device.
  • The [0055] CPU 180 controls the entire operation of the image processing apparatus 200.
  • <Composition of Scanner I/[0056] F Section 10>
  • The scanner I/[0057] F section 10 is compatible with the CCD 17 and CIS 18 serving as image reading devices. The scanner I/F section 10 executes input processing of signals from these image reading devices. The input image data is DMA-transferred by the LDMAC_A (105 a) and bitmapped on the main memory 100.
  • FIG. 2 is a block diagram showing the schematic composition of the scanner I/[0058] F section 10. A timing control section 11 a generates a read device control signal corresponding to the read speed and outputs the control signal to the CCD 17/CIS 18. The device control signal synchronizes with a sync signal generated by the scanner I/F section 10 so that the read timing in the main scanning direction and read processing can be synchronized.
  • An LED [0059] lighting control section 11 b is a unit which controls lighting of an LED 19 serving as a light source for the CCD 17/CIS 18. The LED lighting control section 11 b controls sync signals (TG and SP; FIGS. 3A and 4) for sequential lighting control of LEDs corresponding to R, G, and B color components, a clock signal (CLK; FIG. 4), and brightness control suitable for the CCD 17/CIS 18 and also controls the start/end of lighting. The control timing is based on a sync signal received from the above-described timing control section. Lighting of the LED 19 is controlled in synchronism with drive of the image reading device.
  • FIGS. 3A to [0060] 3D are views showing output signals by the CCD 17. An original surface is irradiated with light emitted from the LED 19. Reflected light is guided to the CCD 17 and photoelectrically converted. For example, the original surface is sequentially scanned for each line in the main scanning direction while moving the read position at a constant speed in a direction (sub-scanning direction) perpendicular to the line direction, i.e., the main scanning direction of the CCD 17. Accordingly, the image on the entire original surface can be read. As shown in FIG. 3A, on the basis of the sync signal (TG) output from the timing control section 11 a, signals corresponding to the R, G, and B elements of one line of the CCD 17 are output in parallel (FIGS. 3B, 3C, and 3D).
  • FIG. 4 is a timing chart related to lighting control of the [0061] LED 19 for the CIS 18. On the basis of the sync signal (SP) and clock signal (CLK) generated by the LED lighting control section 11 b, the lighting start and end timings of the R, G, and B LEDs are controlled. The period of the sync signal (SP) is represented by Tstg. Within this time, lighting of one of the LEDs (R, G, and B) or a combination thereof is controlled. Tled indicates the LED ON time during one period (Tstg) of the sync signal (SP).
  • FIG. 5A is a timing chart showing ON states ([0062] 51 a to 51 d) of the LEDs corresponding to R, G, and B and an output 51 e obtained by photoelectrically converting LED reflected light accumulated in the ON time in accordance with the timing chart shown in FIG. 4 described above. As is apparent from 51 e in FIG. 5A, the R output, G output, and B output corresponding to the R, G, and B colors are output as serial data. It is different from the output signals of the CCD 17 described above.
  • FIG. 5B is a timing chart showing timings when the R, G, and [0063] B LEDs 19 are sequentially turned on within one period of the sync signal (SP) in association with control of the CIS 18. In this case, when the R, G, and B data are synthesized, the input from the image reading device can be input to the image processing apparatus 200 as monochrome image data.
  • FIG. 5C is a view showing outputs when two channels of the [0064] CIS 18 are arranged in the main scanning direction. Channel 1 (53 c in FIG. 5C) outputs an arbitrary dummy bitstream in synchronism with the trailing edge of an Nth CLK signal (53 b in FIG. 5C) and then outputs a signal corresponding to 3,254 effective bits (53 c in FIG. 5C). On the other hand, channel 2 (53 d in FIG. 5C) outputs 2,794 bits as effective bits from the 3,255th effective bit (the bit that follows the last 3,254th bit of the sensor output of channel 1) in synchronism with the trailing edge of the Nth CLK signal.
  • With the sensor outputs of two channels, data corresponding to one line in the main scanning direction can be segmented and read. The maximum number of channels of the CIS is not limited to two. Even when, e.g., a 3-channel structure is employed, the scope of the present invention is not limited, and only the number of effective bit outputs changes. [0065]
  • Referring back to FIG. 2, the output signal from the image reading device ([0066] CCD 17/CIS 18) is input to the AFE (Analog Front End) 15. As shown in FIG. 6, the AFE 15 executes gain adjustment (15 a and 15 d) and A/D conversion processing (15 b, 15 c, and 15 e) for the output signals from the CCD 17 and CIS 18. The AFE 15 converts the analog signal output from each image reading device into a digital signal and inputs the digital signal to the scanner I/F section 10. The AFE 15 can also convert parallel data output from each image reading device into serial data and output the serial data.
  • A [0067] sync control section 11 c shown in FIG. 2 sets, for the AFE 15, a predetermined threshold level corresponding to the analog signal from each device (17 or 18) and adjusts the output signal level by the difference in image reading device. The sync control section 11 c also generates and outputs a sync clock to execute sampling control of the analog signal to cause the AFE 15 to output a digital signal and receives read image data by a predetermined digital signal from the AFE 15. This data is input to an output data control section 11 d through the sync control section 11 c. The output data control section 11 d stores the image data received from the AFE 15 in buffers (11 e, 11 f, and 11 g) in accordance with the output mode of the scanner I/F section 10.
  • The output mode of the scanner I/[0068] F section 10 can be switched between a single mode, 2-channel (2-ch) mode, and 3-channel (3-ch) mode in accordance with the connected image reading device.
  • The single mode is selected when main-scanning data should be input from the [0069] AFE 15. In this case, only one buffer is usable.
  • The 2-ch mode is selected when data input from the [0070] AFE 15 should be input at the same timing as 2-channel information of the image reading device. In this case, two buffers (e.g., 11 e and 11 f) are set in the usable state.
  • The 3-ch mode is selected when image data received from the [0071] AFE 15 should be input at the same timing as R, G, and B outputs. In this case, three buffers (11 e, 11 f, and 11 g) are set in the usable state.
  • When color image data is read by the [0072] CIS 18 in the single mode, data received from the AFE 15 contains R, G, and B data outputs which are serially sequenced in accordance with the lighting order of the LEDs, as indicated by 51 e in FIG. 5A. The output data control section 11 d stores the data in one buffer (e.g., the first buffer (11 e)) in accordance with the sequence. This processing also applies to a case wherein monochrome image data is read by the CIS 18. The monochrome image data is stored in one buffer.
  • When color image data is read by the [0073] CIS 18 having two channels, the above-described 2-ch mode is set. Data received from the AFE 15 contains data for each of two regions divided in the main scanning direction, as indicated by 53 c and 53 d in FIG. 5C. To store the data in each region, the output data control section 11 d stores the received data in two buffers (e.g., the first buffer (11 e) and second buffer (11 f)). This processing also applies to a case wherein monochrome image data is read by the CIS having two channels.
  • When color image data is read by the [0074] CCD 17, the output data control section 11 d can separately store the data received from the AFE 15, which contains R, G, and B data, in three buffers (first, second, and third buffers (11 e, 11 f, and 11 g)) in the above-described 3-ch mode.
  • Processing for causing the scanner I/[0075] F section 10 to DMA-transfer image data stored in a predetermined buffer (11 e, 11 f, or 11 g) to the main memory (SDRAM) 100 and store the image data in the main memory will be described next. The processing for DMA-transferring image data to the main memory 100 and storing the data in the main memory is controlled by the LDMAC_A (105 a).
  • FIG. 7 is a block diagram showing the schematic composition of the LDMAC_A ([0076] 105 a) which DMA-transfers image data read by the image reading device (17 to 18) to the main memory (SDRAM) 100 and the LDMAC_B (105 b) which controls DMA between the main memory 100 and the scanner image processing section 20.
  • When the [0077] main memory 100 is to be used as a ring buffer, a buffer controller 75 controls the LDMAC_A (105 a) and LDMAC_B (105 b) to arbitrate the data write and read.
  • <Composition of LDMAC_A ([0078] 105 a)>
  • The LDMAC_A ([0079] 105 a) has a data arbitration unit 71 a, first write data interface (I/F) section 71 b, and I/O interface section 71 c.
  • The I/[0080] O interface section 71 c sets, in the first write data I/F section 71 b, predetermined address information generated by the LDMAC_A to store data in the main memory 100. The I/O interface section 71 c also receives image data from the scanner I/F section 10 and stores them in buffer channels (to be referred to as “channels” hereinafter) (ch0 to ch2) in the LDMAC_A (105 a).
  • The first write data I/[0081] F section 71 b is connected to a third BUS 73 to be used to write data in the main memory 100. The first write data I/F section 71 b DMA-transfers data stored in the channels (ch0 to ch2) to the main memory 100 in accordance with the generated predetermined address information. The data arbitration unit 71 a reads the data stored in each channel and transfers the data in each channel in accordance with the write processing of the first write data I/F section 71 b.
  • The first write data I/[0082] F section 71 b is connected to the buffer controller 75 and controlled such that memory access does not conflict with the data read or write by the LDMAC_B (105 b) (to be described later). With access control for the main memory 100, even when the main memory 100 is used as a ring buffer, data is never overwritten at the same memory address before the read of data stored in the main memory 100. Hence, the memory resource can be effectively used.
  • <(1) Storage of 1-Channel Data>[0083]
  • FIGS. 8A and 8B are views for explaining processing for causing the LDMAC_A [0084] 105 a to write 1-channel data in the main memory (SDRAM) 100. As in the output example indicated by 51 e in FIG. 5A, when R, G, and B data corresponding to one line in the main scanning direction are serially output and stored in one buffer (11 e in FIG. 2) of the scanner I/F section 10, the data are transferred to one corresponding channel (ch0; FIG. 7) in the LDMAC_A (105 a). Referring to FIGS. 8A, 8B, 9A, 9B, and 10, a composition which causes the data arbitration unit 71 a and first write data I/F section 71 b to DMA-transfer data of channel (ch0) and store it in the main memory 100 will be referred to as "first LDMAC". A composition which processes data of channel (ch1) will be referred to as "second LDMAC". A composition which processes data of channel (ch2) will be referred to as "third LDMAC".
  • FIG. 8A is a view showing processing for separating 1-channel color image data into R, G, and B data and storing them. The first LDMAC writes, of the R, G, and B in the line order, R data corresponding to one line (R[0085] 1 to R2) in the main scanning direction in an R area (1000 a) of the main memory 100 and switches the write address to a start address (G1) of a G area (1000 b) as the next write area. The first LDMAC writes, of the R, G, and B, G data corresponding to one line (G1 to G2) in the main scanning direction in the G area (1000 b) of the main memory 100 and switches the write address to a start address (B1) of a B area (1000 c) as the next write area. The first LDMAC writes B data corresponding to one line (B1 to B2) in the main scanning direction in the B area (1000 c) of the main memory 100 and switches the address to the start address (R2) of the second line of the R area (1000 a). For the G and B data as well, the data write address is shifted to the second line in the sub-scanning direction, and the data are written.
  • In DMA control for the data write by the first LDMAC, the memory address as the storage destination of data corresponding to each of the R, G, and B data is given as offset information (A or B), and the storage area for each color data is switched. With this composition, the R, G, and B data in the line order can be separated and stored in the [0086] main memory 100 as R data, G data, and B data.
  • The start address (R[0087] 1 in FIG. 8A) at which DMA transfer is started and offset information (A and B) are generated by the LDMAC_A (105 a) described above.
  • FIG. 8B is a view for explaining write processing of monochrome image data by the [0088] CIS 18, which is obtained at the LED lighting timing shown in FIG. 5B. Monochrome image data need not be separated into R, G, and B data. For the monochrome image data in the line order, data corresponding to one line (M1 to M2) is written in the main scanning direction of the main memory 100. The write address is shifted in the sub-scanning direction of an area (1000 d), and the next data corresponding to the second line (M3 to M4) is written. By sequentially executing this processing, the monochrome image data can be stored in the area (1000 d) of the main memory.
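  • A software analogue of this 1-channel de-interleaving is sketched below, assuming one byte per color sample and a simple planar layout in the main memory; the names and the helper function are illustrative only and are not part of the LDMAC_A hardware.

```c
#include <stddef.h>

/* Planar destination areas of FIG. 8A (1000 a to 1000 c); layout is assumed. */
typedef struct {
    unsigned char *r_area, *g_area, *b_area;
    size_t line_pixels;   /* pixels per main scanning line                       */
    size_t line_stride;   /* address step per line in the sub-scanning direction */
} planar_dst_t;

/* Write one serially ordered line of channel ch0 (R segment, then G, then B)
 * into the three planar areas.  The switch from the R area to the G area and
 * from the G area to the B area corresponds to offset information A; the
 * return to the next line of the R area corresponds to offset information B. */
static void store_serial_rgb_line(const unsigned char *ch0_line,
                                  planar_dst_t *dst, size_t line_no)
{
    size_t n = dst->line_pixels;
    size_t base = line_no * dst->line_stride;
    for (size_t i = 0; i < n; i++) {
        dst->r_area[base + i] = ch0_line[i];          /* R1..R2 */
        dst->g_area[base + i] = ch0_line[n + i];      /* G1..G2 */
        dst->b_area[base + i] = ch0_line[2 * n + i];  /* B1..B2 */
    }
}
```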
  • <(2) Storage of 2-Channel Data by CIS>[0089]
  • FIG. 9A is a view showing processing for separating 2-channel color image data into R, G, and B data and storing them, as shown in FIG. 5C. The memory area in the main scanning direction is divided in correspondence with the two channels. [0090]
  • Image data read by the [0091] CIS 18 having two channels (chip0 and chip1) are stored in two buffers (11 e and 11 f) of the scanner I/F section 10. The data in the two buffers (11 e and 11 f) are transferred to the channels (ch0 and ch1) in the LDMAC_A 105 a under the control by the LDMAC_A 105 a.
  • The first LDMAC stores the data (chip[0092] 0_data) of channel. (ch0) in areas indicated by a first R area (1100 a), first G area (1200 a), and first B area (1300 a) in FIG. 9A.
  • Referring to FIG. 9A, the first LDMAC writes, of the R, G, and B input from chip[0093] 0, R data (RA1 to RA2) in the first R area (1100 a) of the main memory and switches the write address to a start address (GA1) of the first G area (1200 a) as the next write area (offset information C). The first LDMAC writes, of the R, G, and B data, G data (GA1 to GA2) in the first G area (1200 a) of the main memory and switches the write address to a start address (BA1) of the first B area (1300 a) as the next write area (offset information C). The first LDMAC writes, of the R, G, and B data, B data (BA1 to BA2) in the B area (1300 a) of the main memory, and after the end of processing, switches the address to a start address (RA3) of the second line in the sub-scanning direction of the R area (1100 a) (offset information D). For the G and B data as well, the data write address is shifted to the second line in the sub-scanning direction, and the data are written.
  • On the basis of the offset information (C or D), the first LDMAC can arbitrarily set a memory address as data storage destination for each of the first R area ([0094] 1100 a), first G area (1200 a), and first B area (1300 a) in FIG. 9A as the data storage area of the main memory. The data stored in the channel (ch0) is stored in the main memory 100 in accordance with the settings.
  • The second LDMAC stores the data (chip[0095] 1_data) of channel (ch1) in areas indicated by a second R area (1100 b), second G area (1200 b), and second B area (1300 b) in FIG. 9A.
  • Referring to FIG. 9A, the second LDMAC writes, of the R, G, and B input from chip[0096] 1, R data (RB1 to RB2) in the second R area (1100 b) of the main memory and switches the write address to a start address (GB1) of the second G area (1200 b) as the next write area (offset information E). The second LDMAC writes, of the R, G, and B data, G data (GB1 to GB2) in the second G area (1200 b) of the main memory 100 and switches the write address to a start address (BB1) of the second B area (1300 b) as the next write area (offset information E). The second LDMAC writes, of the R, G, and B data, B data (BB1 to BB2) in the second B area (1300 b) of the main memory 100, and after the end of processing, switches the address to a start address (RB3) of the second line in the sub-scanning direction of the second R area (1100 b) (offset information F). For the G and B data as well, the data write address is shifted to the second line in the sub-scanning direction, and the data are written.
  • In DMA control for the data write by the first and second LDMACs, the memory address as the storage destination of data corresponding to each of the R, G, and B data is given as offset information (C, D, E, or F), and the storage area for each color data is switched. With this composition, the R, G, and B data in the line order can be separated and stored in the [0097] main memory 100 as R data, G data, and B data.
  • The start addresses (RA[0098] 1 and RB1 in FIG. 9A) at which DMA transfer is started and offset information (C, D, E, and F) are generated by the LDMAC_A (105 a) described above.
  • FIG. 9B is a view for explaining write processing of monochrome image data by the CIS having two channels, which is obtained at the LED lighting timing shown in FIG. 5B. Monochrome image data need not be separated into R, G, and B data, unlike the above-described color image data. For the monochrome image data in the line order, data corresponding to one line (MA[0099] 1 to MA2 or MB1 to MB2) is written in the main scanning direction of the main memory 100. The write address is shifted in the sub-scanning direction of an area (1400 a or 1400 b), and the next data corresponding to the second line (MA3 to MA4 or MB3 to MB4) is written. By sequentially executing this processing, the monochrome image data can be stored in the areas (1400 a and 1400 b) of the main memory.
  • <(3) Storage of 3-Channel Data>[0100]
  • FIG. 10 is a view for explaining processing in which when the output [0101] data control section 11 d of the scanner I/F section 10 processes image data read by the CCD 17 as 3-channel data (R data, G data, and B data), the first to third LDMACs corresponding to the respective channels write data in the main memory 100.
  • Data stored in three buffers ([0102] 11 e, 11 f, and 11 g in FIG. 2) are transferred to channels (ch0, ch1, and ch2) in the LDMAC_A 105 a under the control of the LDMAC_A (105 a). The data transferred to ch0 is written in the main memory 100 by the first LDMAC. The data transferred to ch1 is written in the main memory 100 by the second LDMAC. The data transferred to ch2 is written in the main memory 100 by the third LDMAC. The first to third LDMACs write the data in areas corresponding to an R area (1500 a), G area (1500 b), and B area (1500 c) of the main memory 100 so that the R, G, and B data can be separately stored on the main memory 100.
  • In this case, the start addresses (SA[0103] 1, SA2, and SA3 in FIG. 10) at which DMA transfer is started are generated by the LDMAC_A (105 a) described above.
  • As described above, image data read by the image reading device ([0104] CCD 17 or CIS 18) is distributed to channels that control DMA transfer in accordance with the output format. Address information and offset information, which control DMA for the distributed data, are generated. With this composition, the image processing apparatus can be compatible with various image reading devices.
  • In this embodiment, R, G, and B data are separated, and the image data are stored on the [0105] main memory 100 independently of the output format of the image reading device (CCD 17 or CIS 18). For this reason, DMA transfer corresponding to the output format of the image reading device (CCD 17 or CIS 18) need not be executed for the image processing section (to be described later) on the output side. Only DMA transfer corresponding to necessary image processing needs to be executed. Hence, an image processing apparatus that can be compatible with the output format of the image reading device (CCD 17 or CIS 18) with a very simple arrangement and control can be provided.
  • <Area Setting on Main Memory and DMA Transfer>[0106]
  • To DMA-transfer image data to the [0107] main memory 100, the LDMAC_A (105 a) generates address information for the main memory 100 and controls DMA transfer in accordance with the address information. FIG. 11A is a view showing a state in which the main memory 100 is divided into predetermined rectangular areas (blocks). FIG. 11B is a view showing a case wherein the main memory 100 is used as a ring buffer. To process the image data stored in the main memory 100 for each rectangular area, address information to define a rectangular area is set in accordance with the image processing mode (copy mode or scanner mode). Referring to FIG. 11A, SA indicates the start address of DMA. An area in the main scanning direction (X-axis direction) is divided by a predetermined byte length (XA or XB). An area in the sub-scanning direction (Y-axis direction) is divided by a predetermined number of lines (YA or YB). When the main memory 100 is used as a ring buffer (FIG. 11B), hatched areas 101 a and 101 b are the same memory area.
  • DMA for a rectangular area (0,0) starts from the start address SA. When data corresponding to XA in the main scanning direction is transferred, an address represented by offset data (OFF[0108] 1A) is set as a transfer address shifted in the sub-scanning direction by one line. In a similar manner, transfer in the main scanning direction and address shift by offset data (OFF1A) are controlled. When DMA for the rectangular area (0,0) is ended, processing advances to DMA for the next rectangular area (1,0).
  • In DMA for the rectangular area (1,0), the address jumps to an address represented by an offset address (OFF[0109] 2A). Transfer in the main scanning direction and address shift by offset data (OFF1A) are controlled, as in the rectangular area (0,0). When DMA for the rectangular area (1,0) is ended, processing advances to the next rectangular area (2,0). In this way, DMA for YA lines is executed until an area (n,0). Then, the address jumps to an address represented by offset data (OFF3). Processing advances to processing for a rectangular area (0,1). DMA for areas (1,1), (2,1), . . . is controlled in the same way as described above. For example, if there are rectangular areas having different sizes (defined by XB and YB) because of the memory capacity, offset data (OFF1B and OFF2B) corresponding to the area sizes are further set to control DMA.
  • For the above-described rectangular area, as an area (overlap area) between rectangular areas, the number of pixels in the main scanning direction and the number of lines in the sub-scanning direction are set in accordance with the resolution in the main scanning direction and the pixel area to be referred to in accordance with the set image processing mode so that assignment (segmentation) of image data bitmapped on the memory is controlled. [0110]
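  • The read-side traversal of one rectangle can be pictured as the following C sketch, assuming that OFF1A acts as a relative skip appended after each XA-pixel line; the callback and the names stand in for the DMA burst performed by the LDMAC_B and are illustrative only.

```c
#include <stddef.h>

typedef void (*dma_line_fn)(size_t start_addr, size_t length);

typedef struct {
    size_t sa;      /* start address of the rectangle                         */
    size_t xa;      /* pixels per rectangle line, overlap included            */
    size_t ya;      /* lines per rectangle, overlap included                  */
    size_t off1a;   /* skip from the end of one line to the start of the next */
} rect_read_t;

/* Transfer one rectangular area of FIG. 11A: XA pixels per line, then shift
 * one line in the sub-scanning direction, repeated for YA lines. */
static void read_rectangle(const rect_read_t *r, dma_line_fn dma_line)
{
    size_t addr = r->sa;
    for (size_t line = 0; line < r->ya; line++) {
        dma_line(addr, r->xa);        /* transfer XA pixels of one line         */
        addr += r->xa + r->off1a;     /* move to the next line of the rectangle */
    }
    /* When the rectangle is finished, OFF2A (next rectangle in the same row)
     * or OFF3 (first rectangle of the next row) selects the next start address. */
}
```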
  • DETAILED EXAMPLE
  • FIGS. 12A to [0111] 12C are views showing capacities necessary for the main memory in the respective image processing modes. The capacity in each processing mode is set in the following way.
  • (a) Color Copy Mode [0112]
  • Effective pixels: 600 dpi in the main scanning direction [0113]
  • Character determination processing: 11 lines on each of the upper and lower sides, 12 pixels on the left side, and 13 pixels on the right side [0114]
  • Color determination filtering processing: two lines on each of the upper and lower sides, and two pixels on each of the left and right sides [0115]
  • Magnification processing: n lines on the lower side and m pixels on the right side (m and n depend on the magnification factor) [0116]
  • (b) Monochrome Copy Mode [0117]
  • Effective pixels: 600 dpi in the main scanning direction [0118]
  • Color determination filtering processing: two lines on each of the upper and lower sides, and two pixels on each of the left and right sides [0119]
  • Magnification processing: one line on the lower side [0120]
  • (c) Color Scan Mode [0121]
  • Effective pixels: 1,200 dpi in the main scanning direction [0122]
  • Magnification processing: one line on the lower side [0123]
  • Setting of the overlap width affects not only the memory resource but also the transfer efficiency between the [0124] main memory 100 and the scanner image processing section 20. The transfer efficiency is defined as the area ratio of the effective pixel area to the image area including the overlap area. As described above, in the copy mode, ensuring the overlap area is essential. Hence, the transfer efficiency is low. In the scanner mode, however, no overlap width is necessary except for magnification processing. Hence, the transfer efficiency is high.
  • For example, in the color copy mode shown in FIG. 12A, when a rectangular area including an overlap area is defined as 281 pixels×46 lines, an effective area obtained by subtracting the maximum overlap area (an area for character determination processing) is defined as 256 pixels×24 lines. The transfer efficiency is (256 pixels×24 lines)/(281 pixels×46 lines)=48%. On the other hand, in the scanner mode shown in FIG. 12C, no overlap area is necessary unless magnification processing is executed. Hence, the transfer efficiency is 100%. [0125]
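  • The 48% figure follows directly from the definition of the transfer efficiency, as the short program below reproduces; the scanner-mode value is included only to illustrate the no-overlap case.

```c
#include <stdio.h>

/* Transfer efficiency = effective pixel area / transferred rectangle area. */
int main(void)
{
    double color_copy = (256.0 * 24.0) / (281.0 * 46.0);  /* about 0.48      */
    double scanner    = 1.0;                               /* no overlap area */
    printf("color copy: %.0f%%, scanner: %.0f%%\n",
           color_copy * 100.0, scanner * 100.0);
    return 0;
}
```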
  • The contents of image processing change between the scanner mode and the copy mode. A necessary memory area is appropriately set in accordance with the processing contents. For example, as shown in FIG. 12A, in the color copy mode, character determination processing or color determination processing is necessary. Hence, the overlap area (this area is indicated as an area ensured around the effective pixels in FIG. 12A) to extract the effective pixel area becomes large. However, since a resolution of about 600 dpi must be ensured as the number of effective pixels in the main scanning direction, the overlap area is decided by tradeoff with the memory area. [0126]
  • On the other hand, in the scanner mode, the overlap width need not be ensured except for magnification processing. However, about 1,200 dpi must be ensured as the number of effective pixels in the main scanning direction. When the memory assignment amount in the scanner mode should be almost the same as in the copy mode, the number of lines in the sub-scanning direction is set to, e.g., 24 lines. With this setting, the assignment amount of the main memory in the color copy mode can be almost the same as that in the scanner mode. The flows of storage processing in the respective image processing modes will be described below with reference to FIGS. 13 and 14. [0127]
  • <Storage Processing in Copy Mode>[0128]
  • FIG. 13 is a flow chart for explaining the flow of data storage processing in the copy mode. First, in step S10, it is determined whether the copy mode is the color copy mode. In the color copy mode (YES in step S10), the processing advances to step S20 to set address information for DMA transfer in the color copy mode as follows. [0129]
  • (a) When Data Containing Overlap Width is to be Written [0130]
  • Effective pixels (e.g., a resolution of 600 dpi in the main scanning direction) are ensured from the start of the buffer. In addition, an overlap width (11 lines on each of the upper and lower sides, 12 pixels on the left side, and 13 pixels on the right side) is set around the effective pixels (FIG. 12A). [0131]
  • (b) When Only Effective Pixels are to be Written [0132]
  • Start address SA=start address (BUFTOP) of memory+number of pixels (TOTALWIDTH) in main scanning direction of 1-page image containing overlap width×11 (overlap width (upper) in sub-scanning direction)+12 (overlap width (left) in main scanning direction) (TOTALWIDTH=number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width) [0133]
  • UA=end address (BUFFBOTTOM) of memory+1=loop-back address of ring buffer [0134]
  • OFF1A=TOTALWIDTH−IMAGEWIDTH [0135]
  • In the monochrome copy mode (step S30), address information for DMA transfer is set in the following way. [0136]
  • (a) When Data Containing Overlap Width is to be Written [0137]
  • Effective pixels (e.g., a resolution of 600 dpi in the main scanning direction) are ensured from the start of the buffer. In addition, an overlap width (two lines on each of the upper and lower sides, and two pixels on each of the left and right sides) is set around the effective pixels (FIG. 12B). [0138]
  • (b) When Only Effective Pixels are to be Written [0139]
  • Start address SA=start address (BUFTOP) of memory+number of pixels (TOTALWIDTH) in main scanning direction of 1-page image containing overlap width×2 (overlap width (upper) in sub-scanning direction)+2 (overlap width (left) in main scanning direction) [0140]
  • UA=end address (BUFFBOTTOM) of memory+1 [0141]
  • OFF1A=TOTALWIDTH−IMAGEWIDTH [0142]
  • When the address information for DMA transfer is set in step S20 or S30, the processing advances to step S40 to start DMA transfer. Data stored in the channels in the LDMAC_A 105 a are sequentially read and DMA-transferred in accordance with the predetermined address information (S50 and S60). When the read of data stored in the channels (ch0 to ch2) is ended (S70), DMA transfer is ended (S80). [0143]
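  • The start-address arithmetic of steps S20 and S30 above can be made concrete with a minimal sketch. It only mirrors the formulas given for the case in which only effective pixels are written; the variable names follow the text (BUFTOP, TOTALWIDTH, IMAGEWIDTH), the example page width is hypothetical, and treating addresses as pixel offsets is an assumption.

```python
def copy_mode_write_addresses(buftop, imagewidth, overlap_left, overlap_right,
                              overlap_upper):
    """Address information for writing only the effective pixels (sketch).

    Mirrors: SA = BUFTOP + TOTALWIDTH * overlap_upper + overlap_left
             OFF1A = TOTALWIDTH - IMAGEWIDTH
    Addresses are treated as pixel offsets, which is an assumption.
    """
    totalwidth = imagewidth + overlap_left + overlap_right
    sa = buftop + totalwidth * overlap_upper + overlap_left
    off1a = totalwidth - imagewidth
    return sa, off1a

# Color copy mode: overlap of 11 lines above, 12 pixels left, 13 pixels right
print(copy_mode_write_addresses(buftop=0x1000, imagewidth=4960,
                                overlap_left=12, overlap_right=13,
                                overlap_upper=11))
# Monochrome copy mode: overlap of 2 lines above, 2 pixels left, 2 pixels right
print(copy_mode_write_addresses(buftop=0x1000, imagewidth=4960,
                                overlap_left=2, overlap_right=2,
                                overlap_upper=2))
```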
  • <Storage Processing in Scanner Mode>[0144]
  • FIG. 14 is a flow chart for explaining the flow of data storage processing in the scanner mode. First, in step S100, address information for DMA transfer in the scanner mode is set as follows. [0145]
  • (a) When Data Containing Overlap Width is to be Written [0146]
  • Effective pixels (e.g., a resolution of 1,200 dpi in the main scanning direction) are ensured from the start of the buffer. In addition, an overlap width of one line on the lower side in the sub-scanning direction is ensured. [0147]
  • (b) When Only Effective Pixels are to be Written [0148]
  • Start address SA=start address (BUFTOP) of memory [0149]
  • UA=end address (BUFFBOTTOM) of memory+1=loop-back address of ring buffer [0150]
  • OFF1A=0 (TOTALWIDTH=IMAGEWIDTH) [0151]
  • When the address information is set in step S100, the processing advances to step S110 to start DMA transfer. Data stored in the channels in the LDMAC_A 105 a are sequentially read out and DMA-transferred in accordance with the predetermined address information (S120 and S130). When the read of data stored in the channels (ch0 to ch2) is ended (S140), DMA transfer is ended (S150). [0152]
  • With the processing shown in FIG. 13 or 14, image data is bitmapped on the main memory 100 in accordance with the set processing mode. The overlap width shown in FIG. 13 or 14 is an arbitrarily settable parameter, and the scope of the present invention is not limited by this condition. For example, in color transmission of a photo, the character determination processing may be omitted. An arbitrary overlap width can be set in accordance with the necessary number of reference pixels in image processing such that only filter processing is to be executed. [0153]
  • <Data Read>[0154]
  • The image data bitmapped on the main memory 100 is loaded to the scanner image processing section 20 as corresponding R, G, and B data or monochrome image data for each predetermined rectangular area. Image processing is executed for each rectangular area. To execute image processing for each rectangular area, the CPU 180 prepares, in the main memory 100, shading (SHD) correction data that corrects a variation in sensitivity of the light-receiving element of the image reading device (CCD 17/CIS 18) or a variation in light amount of the LED 19. The shading data of the rectangular area and image data of the rectangular area are DMA-transferred to the scanner image processing section 20 by the LDMAC_B (105 b) (to be described later). [0155]
  • FIGS. 15A and 15B are views for explaining a data read when image data in a rectangular area is to be transferred to a block buffer RAM 210 (FIG. 16) of the scanner image processing section 20. An overlap area AB1CD1 is set for the effective pixel area (abcd) of an area (0,0) (FIG. 15A). In reading image data, corresponding data is read from the start address A to the address B1 in the main scanning direction. When the read of the data in the main scanning direction is ended, the address of data to be read next is shifted to the address A2 in FIG. 15A by one line in the sub-scanning direction. The data is read until the pixel at the address B3 in the main scanning direction. The data is read in a similar manner. Data from the address C to the address D1, which correspond to the last line of the overlap area, is read in the main scanning direction. The read of the data in the area (0,0) is thus ended. [0156]
  • An overlap area B2ED2F is set for the effective pixel area (bedf) of an area (0,1) (FIG. 15B). In reading image data, corresponding data is read from the start address B2 to the address E in the main scanning direction. When the read of the data in the main scanning direction is ended, the address of data to be read next is shifted to the address B4 in FIG. 15B by one line in the sub-scanning direction. The data is read until the pixel at the address B5 in the main scanning direction. Data from the address D2 to the address F, which correspond to the last line of the overlap area, is read out in the main scanning direction. The read of the data in the second area is thus ended. With the above processing, the data of the rectangular area including the overlap area is read. The same processing as described above is executed for each rectangular area. [0157]
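  • The line-by-line sweep just described (start address, read one line's worth of pixels, step down one line, repeat) can be sketched as follows; the linear page-buffer layout and the helper name are illustrative assumptions, not the disclosed hardware.

```python
def read_rectangle(memory, totalwidth, start, xa, ya):
    """Read a rectangular area of xa pixels x ya lines from a linearly
    addressed page buffer whose lines are totalwidth pixels long (sketch)."""
    block = []
    for line in range(ya):
        row_start = start + line * totalwidth   # shift one line in the sub-scanning direction
        block.append(memory[row_start:row_start + xa])
    return block

# Hypothetical page of 8 x 6 pixels; read the 4 x 3 rectangle starting at (1,1)
page = list(range(48))
print(read_rectangle(page, totalwidth=8, start=1 * 8 + 1, xa=4, ya=3))
```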
  • <Composition of LDMAC_B (105 b)>[0158]
  • The read of data stored in the main memory 100 is controlled by the LDMAC_B (105 b) shown in FIG. 7. A read data I/F section 72 a is connected to the main memory 100 through a fourth bus 74 for the data read. The read data I/F section 72 a can read predetermined image data from the main memory 100 by referring to address information generated by the LDMAC_B (105 b). [0159]
  • The read data are set to a plurality of predetermined channels (ch3 to ch6) by a data setting unit 72 b. For example, image data for shading correction is set to channel 3 (ch3). Plane-sequential R data is set to channel 4 (ch4). Plane-sequential G data is set to channel 5 (ch5). Plane-sequential B data is set to channel 6 (ch6). [0160]
  • The data set to the channels (ch3 to ch6) are sequentially DMA-transferred through an interface section 72 c under the control of the LDMAC_B (105 b) and loaded to the block buffer RAM 210 (FIG. 16) of the scanner image processing section 20. [0161]
  • Channel 7 (ch7) in the LDMAC_B (105 b) is a channel which holds dot-sequential image data output from the scanner image processing section 20 so that data that has undergone predetermined image processing can be stored in the main memory 100. The scanner image processing section 20 outputs address information (block end signal and line end signal) in accordance with the output of dot-sequential image data. On the basis of the address information, a second write data I/F 72 d stores the image data held in channel 7 in the main memory 100. The contents of this processing will be described later in detail. [0162]
  • <Image Processing>[0163]
  • FIG. 16 is a block diagram for explaining the schematic composition of the scanner image processing section 20. Processing corresponding to each image processing mode is executed for the data loaded to the block buffer RAM 210. FIGS. 19A to 19D are views schematically showing the sizes of rectangular areas necessary for the respective image processing modes. The scanner image processing section 20 executes processing while switching the rectangular pixel area to be referred to for the rectangular area in accordance with the set image processing mode. The contents of the image processing will be described below with reference to FIG. 16. The sizes of the rectangular areas to be referred to in that processing will be described with reference to FIGS. 19A to 19D. [0164]
  • Referring to FIG. 16, a shading correction block (SHD) 22 is a processing block which corrects a variation in light amount distribution of the light source (LED 19) in the main scanning direction, a variation between light-receiving elements of the image reading device, and the offset of the dark output. As shading data, correction data corresponding to one pixel is plane-sequentially stored in an order of bright R, bright G, bright B, dark R, dark G, and dark B on the main memory 100. Pixels (XA pixels in the main scanning direction and YA pixels in the sub-scanning direction (FIG. 19A)) corresponding to the rectangular area are input. The input plane-sequential correction data is converted into dot-sequential data by an input data processing section 21 and stored in the block buffer RAM 210 of the scanner image processing section 20. When reception of the shading data of the rectangular area is ended, the processing shifts to image data transfer. [0165]
  • The input data processing section 21 is a processing section which executes processing for reconstructing plane-sequential data separated into R, G, and B data into dot-sequential data. Data of one pixel is stored on the main memory 100 as plane-sequential data for each of the R, G, and B colors. When these data are loaded to the block buffer RAM 210, the input data processing section 21 extracts 1-pixel data for each color data and reconstructs the data as R, G, or B data of one pixel. The reconstruction processing is executed for each pixel, thereby converting the plane-sequential image data into dot-sequential image data. The reconstruction processing is executed for all pixels (XA pixels×YA pixels) in the rectangle. [0166]
  • FIG. 17 is a view schematically showing an area to be subjected to image processing and a reference area (ABCD) where filter processing and the like for the processing are to be executed. Referring to FIG. 17, “Na” and “Nb” pixels are set as overlap widths in the main scanning direction (X direction), and “Nc” and “Nd” pixels are set as overlap widths in the sub-scanning direction (Y direction) for the effective pixel area (abcd). [0167]
  • FIGS. 18A and 18B are views showing overlap widths in the respective image processing modes (color copy mode, monochrome copy mode, and scanner mode). In the magnification mode, the size of the reference area becomes larger by m pixels and n lines than that in the 1×mode because of the necessity of magnification processing. In the color copy mode, the largest reference area in all the image processing modes is necessary because black characters should be determined. To detect black characters, halftone dots and black characters must properly be determined. To determine the period of halftone dots, a reference area having (24+m) pixels in the main scanning direction and (21+n) pixels in the sub-scanning direction (in the magnification mode) is set. In the monochrome copy mode, to execute edge enhancement for a character portion, a reference area having (4+m) pixels in the main scanning direction and (4+n) pixels in the sub-scanning direction (in the magnification mode) is set. In the scanner mode, no reference area is necessary in the 1×mode because necessary image processing is executed by the scanner driver or application on the host computer. In the magnification mode, a reference area having m pixels in the main scanning direction and n pixels in the sub-scanning direction is set in accordance with the magnification factor. The present invention is not limited to the overlap widths described here. The overlap width can arbitrarily be set. [0168]
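  • The mode-dependent reference (overlap) widths summarized above can be captured in a small table; the numbers are those stated in the text for FIGS. 18A and 18B, while the table structure and function name are only illustrative assumptions.

```python
# Reference-area widths (main scanning x sub-scanning) per image processing mode.
# m and n are the extra pixels/lines required by magnification processing.
def reference_area(mode, m=0, n=0):
    base = {
        "color_copy": (24, 21),   # halftone-dot period must be observable
        "mono_copy": (4, 4),      # 5x5 edge-enhancement filter
        "scanner": (0, 0),        # host-side processing, no reference area
    }
    bx, by = base[mode]
    return bx + m, by + n

print(reference_area("color_copy", m=2, n=3))   # (26, 24)
print(reference_area("scanner"))                # (0, 0)
```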
  • In a processing block 23, an averaging processing section (SUBS) is a processing block which executes sub-sampling (simple thinning) for decreasing the read resolution in the main scanning direction or averaging processing. An input masking processing section (INPMSK) is a processing block which calculates color correction of input R, G, and B data. A correction processing section (LUT) is a processing block which applies a predetermined gray level characteristic to input data. [0169]
  • A character determination processing block 24 is a processing block which determines black characters and the pixels of a line drawing contour in input image data. In black character determination processing, an area more than the period of halftone dots must be referred to, as described above. Hence, an overlap area corresponding to (24+m) pixels in the main scanning direction and (21+n) pixels (lines) in the sub-scanning direction (m and n are defined by the magnification factor) is preferably referred to. For the input data to the character determination processing block 24, data corresponding to XA pixels in the main scanning direction (effective pixels+overlap width)×YA pixels in the sub-scanning direction (effective pixels+overlap width) (FIG. 19A) is referred to, like the input to the shading correction block (SHD) 22. That is, all pixels (XA pixels×YA pixels) in the rectangle are referred to. [0170]
  • In a processing block 25, an MTF correction processing section is a processing section which executes MTF difference correction when the image reading device is changed and filter processing in the main scanning direction to reduce moiré when the image is reduced. This block executes multiplication/addition processing of coefficients for predetermined pixels in the main scanning direction in an area of interest. Referring to FIG. 19B, two pixels of a left hatching portion (b1) and three pixels of a right hatching portion (b2) are ensured for an area G1 of interest, and processing for the area G1 is executed. That is, the area G1 of interest in the rectangle and the areas of the hatching portions b1 and b2 are read to obtain the area G1 of interest. [0171]
  • An (RGB (L, Ca, Cb)) conversion processing section (CTRN) executes conversion processing of multilevel image data of each of R, G, and B colors in filtering (brightness enhancement, saturation enhancement, and color determination) executed by a filter processing block 26 on the output side. [0172]
  • A background density adjustment processing section (ABC) executes processing for automatically recognizing the background density of an original and correcting the background density value to the white side to obtain binary data suitable for facsimile communication or the like. [0173]
  • The filter processing block 26 executes edge enhancement processing of the brightness component (L) of the image and enhancement processing of saturation (Ca, Cb) as processing for executing color determination and filtering for the data obtained in the preceding CTRN processing. The filter processing block 26 also determines the chromatism of the input image and outputs the result. The filter processing block 26 can also change the parameter of the enhancement amount on the basis of the character or line drawing contour portion determination signal generated by the character determination processing block 24. The data that has undergone the filter processing is converted from L, Ca, and Cb to R, G, and B data and output. When monochrome image data is to be processed, this processing block functions as an edge enhancement filter for 5×5 pixels. [0174]
  • Referring to FIG. 19C, for an area G2 of interest, the above-described filter processing is executed by using an area (hatched area) corresponding to two pixels (lines) on each of the upper and lower sides and two pixels on each of the left and right sides as reference data. That is, for the area G1 processed by the MTF correction processing section, the area G2 after filter processing is obtained. [0175]
  • A magnification processing (LIP) block 27 is a processing block which executes linear interpolation magnification processing in the main and sub-scanning directions. Referring to FIG. 19D, an area G3 is obtained as a result of linear interpolation magnification processing. The area G3 is decided by magnifying the hatched area of image data (d: (X−(Na+Nb) pixels)×(Y−(Nc+Nd) pixels)) in the main and sub-scanning directions in accordance with a predetermined magnification factor (main scanning direction (+m pixels) and sub-scanning direction (+n lines)). That is, the area G2 after filter processing is input, thereby obtaining the area G3 after magnification. [0176]
  • Referring to FIGS. 19B to 19D, “Na” and “Nb” indicate the numbers of pixels which are set as overlap widths in the main scanning direction (X direction), and “Nc” and “Nd” indicate the numbers of pixels which are set as overlap widths in the sub-scanning direction (Y direction), as in FIG. 17. [0177]
  • The above image processing is executed for image data of each rectangular area in accordance with the set image processing mode (copy mode or scanner mode). When a rectangular area corresponding to an image processing mode is set on the memory, and the unit of the rectangular area is switched, a resolution and high-resolution processing corresponding to the image processing mode can be implemented. Each rectangular area contains an overlap width necessary for image processing of each processing block. Hence, the image data of an adjacent area need not be read for each rectangular area to process the end portion of the rectangular image data to be processed. The work memory can further be reduced as compared to the method which simply segments an image into rectangular areas and executes image processing. In this way, image data corresponding to the maximum rectangle necessary for each image processing section is loaded to the block buffer RAM 210 in advance. Of the image data on the RAM 210, a necessary image data amount is transferred between the image processing sections. With only this operation, a series of image processing operations necessary for each mode, such as the color copy, monochrome copy, or scanner mode, can be implemented. Hence, a line buffer dedicated to an image processing block can be omitted. In addition, since all image processing operations can be executed by using the image data of each rectangle, which is loaded to the block buffer RAM 210, image processing can be executed independently of the main scanning width or resolution. For this reason, the capacity of the line buffer of each image processing section need not be increased in accordance with the main scanning width or resolution, unlike the prior art. Furthermore, an apparatus such as a copying machine or scanner which executes necessary image processing at an appropriate time can be provided with a very simple arrangement. [0178]
  • FIG. 20 is a view for explaining the start point in the DMA main scanning direction when image data of the next rectangle is DMA-transferred after the end of processing of one rectangle. When DMA of the first rectangular area ABCD is ended, and transfer up to the pixel at a point D is ended, the start point in the main scanning direction is set at a position (point S1 in FIG. 20) that is returned by Na+Nb pixels in the main scanning direction. When DMA of the rectangles corresponding to one line is sequentially ended, and DMA of a point E corresponding to the final data of the first line is transferred, the start point shifted in the sub-scanning direction to transfer the rectangles of the next line is set at a position (S2 in FIG. 20) that is returned by Nc+Nd pixels. [0179]
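  • A sketch of the start-point bookkeeping in FIG. 20: after a rectangle is transferred, the next start point steps back by the overlap widths so that adjacent rectangles share their reference pixels. The coordinate representation and function name are assumptions for illustration only.

```python
def next_start_points(x, y, xa, ya, na, nb, nc, nd):
    """Return the start point of the next rectangle in the main scanning
    direction and of the first rectangle of the next line (sketch).

    na, nb: left/right overlap widths; nc, nd: upper/lower overlap widths.
    """
    next_in_main = (x + xa - (na + nb), y)   # point S1: step back by Na+Nb pixels
    next_in_sub = (0, y + ya - (nc + nd))    # point S2: step back by Nc+Nd lines
    return next_in_main, next_in_sub

print(next_start_points(x=0, y=0, xa=281, ya=46, na=12, nb=13, nc=11, nd=11))
# -> ((256, 0), (0, 24)): the 256 x 24 effective areas tile seamlessly
```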
  • FIGS. 21 and 22 are flow charts for explaining the flow of DMA transfer processing and image processing in the respective image processing modes. Referring to FIGS. 21 and 22, detailed numerical values are used as address information. However, the present invention is not limited to these numerical values, and the address information can arbitrarily be set. [0180]
  • <Processing in Copy Mode>[0181]
  • FIG. 21 is a flow chart for explaining the flow of a data read and image processing in the copy mode. First, in step S200, it is determined whether the copy mode is the color mode. In the color mode (YES in step S200), the processing advances to step S210. In the monochrome mode (NO in step S200), the processing advances to step S220. [0182]
  • In step S210, address information for the read in the color copy mode is set as follows. This address information is generated by the LDMAC_B (105 b) (this also applies to step S220). On the basis of the address information, the LDMAC_B (105 b) controls DMA. [0183]
  • Start address (SA)=start address (BUFTOP) of memory
  • UA=end address (BUFFBOTTOM) of memory+1
  • XA=rectangular effective main scanning pixels+overlap width (number of pixels of left overlap width (12 pixels) and number of pixels of right overlap width (13 pixels))
  • YA=rectangular effective sub-scanning pixels (lines)+overlap width (number of pixels of upper overlap width (11 pixels (lines)) and number of pixels of lower overlap width (11 pixels (lines))) [0184]
  • OFF1A=TOTALWIDTH−XA [0185]
  • OFF2A=−(TOTALWIDTH×YA+overlap width (12 pixels on left side and 13 pixels on right side)) [0186]
  • OFF3A=−(TOTALWIDTH×(overlap width (11 pixels on upper side and 11 pixels on lower side)+effective main scanning pixels+overlap width (12 pixels on left side and 13 pixels on right side))) [0187]
  • (TOTALWIDTH=number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width) [0188]
  • XANUM=effective main scanning pixels/rectangular effective main scanning pixels [0189]
  • In step S220, address information for the read in the monochrome copy mode is set as follows (a sketch illustrating both parameter sets appears after this list). [0190]
  • Start address (SA)=start address (BUFTOP) of memory [0191]
  • UA=end address (BUFFBOTTOM) of memory+1 [0192]
  • XA=rectangular effective main scanning pixels+overlap width (number of pixels of left overlap width (2 pixels) and number of pixels of right overlap width (2 pixels)) [0193]
  • YA=rectangular effective sub-scanning pixels (lines)+overlap width (number of pixels of upper overlap width (2 pixels (lines)) and number of pixels of lower overlap width (2 pixels (lines))) [0194]
  • OFF1A=TOTALWIDTH−XA [0195]
  • OFF2A=−(TOTALWIDTH×YA+overlap width (2 pixels on left side and 2 pixels on right side)) [0196]
  • OFF3A=−(TOTALWIDTH×(overlap width (2 pixels on upper side and 2 pixels on lower side)+effective main scanning pixels+overlap width (2 pixels on left side and 2 pixels on right side))) [0197]
  • (TOTALWIDTH=number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width) [0198]
  • XANUM=effective main scanning pixels/rectangular effective main scanning pixels [0199]
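  • As a non-authoritative illustration, the read parameters of steps S210 and S220 above can be collected into a single descriptor; only the relations stated in the text are encoded, the page size is hypothetical, and the helper itself is not taken from the disclosure.

```python
def copy_read_descriptor(buftop, bufbottom, eff_x, rect_x, rect_y,
                         left, right, upper, lower):
    """Build the read-DMA parameters for the copy modes (sketch).

    eff_x: effective main scanning pixels of the page,
    rect_x/rect_y: effective pixels/lines of one rectangle,
    left/right/upper/lower: overlap widths.
    """
    totalwidth = eff_x + left + right
    xa = rect_x + left + right
    ya = rect_y + upper + lower
    return {
        "SA": buftop,
        "UA": bufbottom + 1,
        "XA": xa,
        "YA": ya,
        "OFF1A": totalwidth - xa,
        "XANUM": eff_x // rect_x,
    }

# Color copy mode: 12/13 pixel and 11/11 line overlaps (hypothetical page width)
print(copy_read_descriptor(0x0000, 0xFFFF, eff_x=4864,
                           rect_x=256, rect_y=24,
                           left=12, right=13, upper=11, lower=11))
```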
  • When the address information is set in the read data I/F section 72 a in step S210 or S220, the processing advances to step S230 to determine whether the LDMAC_B (105 b) is in a data readable state. For example, when the buffer controller 75 inhibits a buffer read, the processing waits until the state is canceled (NO in step S230). If a buffer read can be executed (YES in step S230), the processing advances to step S240. [0200]
  • In step S240, the read data I/F section 72 a reads data in accordance with the set address information. The data setting unit 72 b sets the data in predetermined channels (ch3 to ch6). The LDMAC_B (105 b) DMA-transfers the data set in the respective channels to the buffer RAM 210 of the scanner image processing section 20. The DMA-transferred data is loaded to the buffer of the scanner image processing section 20 and subjected to image processing corresponding to each image processing mode. The contents of each image processing have already been described above, and a detailed description thereof will be omitted here. [0201]
  • The loaded shading correction data and image data are converted by the above-described input data processing section 21 from plane-sequential data to dot-sequential data and subjected to the following image processing. [0202]
  • In step S250, it is determined whether the copy mode is the color copy mode. In the color copy mode (YES in step S250), the processing advances to step S260 to execute character determination processing. In the monochrome copy mode (NO in step S250), step S260 (character determination processing) is skipped. Filter processing is executed in step S270, and magnification processing is executed in step S280. [0203]
  • The above processing is executed for each rectangular area data. In step S290, dot-sequential image data that has undergone the image processing is further DMA-transferred to and stored in a predetermined memory area where data that has undergone image processing is to be stored. This storage processing will be described later in detail. [0204]
  • In step S300, it is determined whether the image processing of the rectangular area and data storage processing are ended. If NO in step S300, the processing returns to step S250 to execute the same processing as described above. If the processing of the rectangular area is ended (YES in step S300), the processing advances to step S310 to determine whether the processing of rectangular areas that construct the entire page is ended (S310). If the processing of the entire page is not ended (NO in step S310), the processing returns to step S230 to read out the subsequent image data from the main memory 100 and execute image processing (steps from S230) for the data. [0205]
  • On the other hand, when the page processing is ended (YES in step S310), the processing advances to step S320 to end DMA transfer to the scanner image processing section 20 (S320) and data write processing to the buffer by the scanner image processing section 20 (S330). Thus, the image processing by the scanner image processing section 20 is ended (S340). [0206]
  • With the above processing, the image processing for the data read in the copy mode is completed. [0207]
  • <Processing in Scanner Mode>[0208]
  • FIG. 22 is a flow chart for explaining the flow of a data read and image processing in the scanner mode. First, in step S400, address information to read out data from the main memory 100 is set as follows. This address information is generated by the LDMAC_B (105 b). On the basis of the address information, the LDMAC_B (105 b) controls DMA. [0209]
  • Start address (SA)=start address (BUFTOP) of memory [0210]
  • UA=end address (BUFFBOTTOM) of memory+1 [0211]
  • XA=rectangular effective main scanning pixels [0212]
  • YA=rectangular effective sub-scanning pixels (lines)+overlap width (number of pixels of lower overlap width (1 pixel (line)) [0213]
  • OFF1A=TOTALWIDTH−XA [0214]
  • OFF2A=−(TOTALWIDTH×YA) [0215]
  • OFF3A=−(TOTALWIDTH×(overlap width (number of pixels of lower overlap width (1 pixel))+effective main scanning pixels)) [0216]
  • (TOTALWIDTH=number (IMAGEWIDTH) of main scanning effective pixels of 1-page image+number of pixels of left overlap width+number of pixels of right overlap width) [0217]
  • XANUM=effective main scanning pixels/rectangular effective main scanning pixels [0218]
  • When the address information is set in the read data I/F section 72 a in step S400, the processing advances to step S410 to determine whether the LDMAC_B (105 b) is in a data readable state. For example, when the buffer controller 75 inhibits a buffer read, the processing waits until the state is canceled (NO in step S410). If a buffer read can be executed (YES in step S410), the processing advances to step S420. [0219]
  • In step S420, the read data I/F section 72 a reads data in accordance with the set address information. The data setting unit 72 b sets the data in predetermined channels (ch3 to ch6). The LDMAC_B (105 b) DMA-transfers the data set in the respective channels to the buffer of the scanner image processing section 20. The DMA-transferred data is loaded to the buffer of the scanner image processing section 20 and subjected to image processing corresponding to each image processing mode. The contents of the image processing have already been described above, and a detailed description thereof will be omitted. [0220]
  • The loaded image data is converted by the above-described input data processing section 21 from plane-sequential data to dot-sequential data and subjected to magnification processing in step S430. [0221]
  • In step S440, dot-sequential image data that has undergone the image processing is further DMA-transferred to and stored in a predetermined memory area where data that has undergone image processing is to be stored. This storage processing will be described later in detail. [0222]
  • In step S450, it is determined whether the image processing of the rectangular area and data storage processing are ended. If NO in step S450, the processing returns to step S430 to execute the same processing as described above. If the processing of the rectangular area is ended (YES in step S450), the processing advances to step S460 to determine whether the processing of the entire page is ended (S460). If the processing of the entire page is not ended (NO in step S460), the processing returns to step S410 to read out the subsequent image data from the main memory 100 and execute image processing for the data. [0223]
  • On the other hand, when the page processing is ended (YES in step S460), the processing advances to step S470 to end DMA transfer to the scanner image processing section 20 (S470) and data write processing to the buffer by the scanner image processing section 20 (S480). Thus, the image processing by the scanner image processing section 20 is ended (S490). [0224]
  • With the above processing, the image processing for the image data read in the scanner mode is completed. [0225]
  • When a rectangular area containing a predetermined overlap width is set in accordance with the image processing mode, and image processing is executed for each rectangular area, predetermined image processing can be executed without intervention of individual line buffers of each image processing section. [0226]
  • <Storage of Data that has Undergone Image Processing>[0227]
  • FIG. 23 is a view for explaining processing for transferring magnified rectangular data from the magnification processing block (LIP) 27 to the main memory 100. For example, when rectangular data having 256×256 pixels is to be reduced to 70%, 256 pixels×0.7=179.2 pixels. Rectangular data having 179.2×179.2 pixels would have to be created on the basis of this numerical calculation. However, pixel data cannot reflect a fractional pixel count. Hence, the appearance probability of 180-pixel blocks and 179-pixel blocks is controlled to 2:8 in the main scanning direction and in the sub-scanning direction so that, over the whole image, 179.2 pixels corresponding to the reduction magnification factor of 70% are generated on average. Referring to FIG. 23, the rectangular areas, i.e., an area B11 (XA1=180 pixels) and an area B12 (XA2=179 pixels), have different sizes in the main scanning direction. The rectangular areas, i.e., the area B11 (YA1=180 pixels) and an area B21 (YA2=179 pixels), have different sizes in the sub-scanning direction. The appearance probability of rectangular areas having different sizes is controlled in accordance with the result of magnification processing so that predetermined magnified image data can be obtained. To return the data that has undergone the image processing to the main memory 100, the signals that control DMA are sent from the magnification processing block (LIP) 27 to the LDMAC_B (105 b). [0228]
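  • The 2:8 appearance-probability example can be checked numerically: two 180-pixel blocks and eight 179-pixel blocks average exactly 179.2 pixels, i.e. 70% of 256. The error-accumulator scheme below is only one of several ways such a distribution could be generated and is not taken from the disclosure.

```python
def block_sizes(base, factor, count):
    """Distribute a fractional scaled block size over `count` blocks using
    an error accumulator so the average equals base * factor (sketch)."""
    target = base * factor          # 256 * 0.7 = 179.2
    sizes, acc = [], 0.0
    for _ in range(count):
        acc += target
        size = round(acc) - sum(sizes)
        sizes.append(size)
    return sizes

sizes = block_sizes(256, 0.7, 10)
print(sizes)                        # mixture of 180- and 179-pixel blocks
print(sizes.count(180), sizes.count(179), sum(sizes) / len(sizes))  # 2 8 179.2
```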
  • The processing in step S290 in FIG. 21 or step S440 in FIG. 22 will be described next. FIG. 25 is a timing chart showing the relationship between data and signals sent from the magnification processing block (LIP) 27 to the LDMAC_B (105 b) to DMA-transfer the data that has undergone image processing to the main memory 100. [0229]
  • When the data that has undergone the image processing is to be stored in the main memory 100, the LDMAC_B (105 b) starts DMA transfer without knowing the main scanning length and sub-scanning length of a rectangular area. When the magnification processing block 27 transfers the final data (XA1 and XA2) with the main scanning width in one rectangle, a line end signal is output. With the line end signal, the LDMAC_B (105 b) is notified of the main scanning length of the rectangle by the magnification processing block 27. [0230]
  • When the magnification processing block 27 transfers the final data in one rectangle, a block end signal is output to the LDMAC_B (105 b). With this signal, the sub-scanning length can be recognized. When all data in the sub-scanning direction YA1 are processed, DMA transfer is shifted to the areas B21 and B22 (FIG. 23). In a similar manner, data XA in the main scanning direction is sent. DMA is controlled by the line end signal and block end signal. Accordingly, the rectangular area of DMA can dynamically be switched in accordance with the calculation result of the magnification processing block 27. [0231]
  • The above-described line end signal, block end signal, and dot-sequential image data are input to the interface section 72 c of the LDMAC_B (105 b). Of these data, the image data is stored in channel (ch) 7. The line end signal and block end signal are used as address information in bitmapping the data stored in channel (ch) 7 on the main memory 100. On the basis of these pieces of address information, the second write data I/F section 72 d reads out the data in ch7 and stores it on the main memory 100. [0232]
  • FIG. 26 is a view for explaining a state in which the data is bitmapped on the main memory 100 in accordance with the line end signal and block end signal. Referring to FIG. 26, SA represents the start address of DMA transfer. Dot-sequential R, G, and B data are stored from this address in the main scanning direction. On the basis of the line end signal, the address of DMA transfer is switched by offset information (OFF1A). In a similar manner, data is stored in the main scanning direction from an address shifted in the sub-scanning direction by one pixel (line). On the basis of the block end signal of the rectangular area (0,0), processing shifts to data storage for the next rectangular area (1,0). The address of DMA transfer is switched by offset information (OFF2A). In this case, OFF2A is switched as an address shifted in the main scanning direction by one pixel with respect to the area (0,0) and jumped to the first line in the sub-scanning direction. [0233]
  • In a similar manner, data is stored in the areas (2,0), . . . , (n−1,0), and (n,0). When n blocks are stored, the address of DMA transfer is switched by offset information (OFF3). In this case, OFF3 is switched as an address shifted in the sub-scanning direction by one pixel (line) with respect to the pixel of the final line of the area (0,0) and jumped to the first pixel in the main scanning direction. [0234]
  • As described above, when the offset information (OFF1A, OFF2A, or OFF3) is dynamically switched by the line end signal and block end signal, the data that has undergone the image processing can be DMA-transferred to a predetermined area of the main memory 100 and stored. [0235]
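  • The way the line end and block end signals drive the write-back offsets (OFF1A within a rectangle, OFF2A between horizontally adjacent rectangles, OFF3 at the end of a rectangle row) can be sketched as nested loops; the concrete offset arithmetic below is a simplification for equal-sized rectangles, not the disclosed hardware behaviour.

```python
def store_rectangles(dst, page_width, rect_w, rect_h, rects):
    """Write processed rectangles back into a line-major page buffer (sketch).

    `rects` is a row-major list of rectangles, each a list of rect_h rows of
    rect_w pixels. The effect corresponds to switching the DMA address with
    OFF1A (next line), OFF2A (next rectangle) and OFF3 (next rectangle row).
    """
    n_cols = page_width // rect_w
    for idx, rect in enumerate(rects):
        row_block, col_block = divmod(idx, n_cols)
        base = row_block * rect_h * page_width + col_block * rect_w
        for line, pixels in enumerate(rect):
            start = base + line * page_width
            dst[start:start + rect_w] = pixels
    return dst

# Four 4 x 2 rectangles tiled into a hypothetical 8 x 4 page buffer
page = [0] * (8 * 4)
rects = [[[r * 10 + c for c in range(4)] for _ in range(2)] for r in range(4)]
print(store_rectangles(page, page_width=8, rect_w=4, rect_h=2, rects=rects))
```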
  • <Other Embodiment>[0236]
  • In the above embodiment, the present invention has been described as a composite image processing apparatus having various image input/output functions. However, the present invention is not limited to this and can also be applied to a scanner apparatus or printer apparatus having a single function, or to an optical card connected as an extension to another apparatus. In addition, the unit composition of the apparatus according to the present invention is not limited to the above description. For example, the apparatus or system according to the present invention may be constituted such that it is achieved by a plurality of apparatuses connected through a network. [0237]
  • The object of the present invention can also be achieved by supplying a storage medium which stores software program codes for implementing the functions of the above-described embodiment to a system or apparatus and causing the computer (or a CPU or MPU) of the system or apparatus to read out and execute the program codes stored in the storage medium. In this case, the program codes read out from the storage medium implement the functions of the above-described embodiment by themselves, and the storage medium which stores the program codes constitutes the present invention. [0238]
  • As the storage medium for supplying the program codes, for example, a floppy (trademark) disk, hard disk, optical disk, magnetooptical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like can be used. [0239]
  • The functions of the above-described embodiment are implemented not only when the readout program codes are executed by the computer but also when the operating system (OS) running on the computer performs part or all of actual processing on the basis of the instructions of the program codes. [0240]
  • The functions of the above-described embodiment are also implemented when the program codes read out from the storage medium are written in the memory of a function expansion board inserted into the computer or a function expansion unit connected to the computer, and the CPU of the function expansion board or function expansion unit performs part or all of actual processing on the basis of the instructions of the program codes. [0241]
  • As has been described above, according to the embodiment of the present invention, an image processing apparatus which is compatible with various image reading devices can be provided. More specifically, image data read by an image reading device is distributed to channels that control DMA transfer in accordance with the output format. Address information and offset information, which control DMA for the distributed data, are generated. With this composition, the image processing apparatus can be compatible with various image reading devices. [0242]
  • In this embodiment, R, G, and B data are separated, and the image data are stored on the [0243] main memory 100 independently of the output format of the image reading device (CCD 17 or CIS 18). For this reason, DMA transfer corresponding to the output format of the image reading device (CCD 17 or CIS 18) need not be executed for the image processing section (to be described later) on the output side. Only DMA transfer corresponding to necessary image processing needs to be executed. Hence, an image processing apparatus that can be compatible with the output format of the image reading device (CCD 17 or CIS 18) with a very simple arrangement and control can be provided.
  • When a rectangular area corresponding to an image processing mode is set on the memory, and the unit of the rectangular area is switched, a resolution and high-resolution processing corresponding to the image processing mode can be implemented. [0244]
  • When a rectangular area containing a predetermined overlap width is set in accordance with the image processing mode, and image processing is executed for each rectangular area, predetermined image processing can be executed without intervention of individual line buffers of each image processing section. Since intervention of a line buffer is unnecessary, the apparatus can be compatible with any flexible change in its main scanning width or resolution with a very simple arrangement. [0245]
  • As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims. [0246]

Claims (31)

What is claimed is:
1. An image processing apparatus comprising:
memory area control means for setting, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction;
address generation means for generating address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area;
memory control means for reading out the image data corresponding to the rectangular area and DMA-transferring the image data to a second memory in accordance with the generated address information; and
image processing means for executing image processing for each rectangular area of the DMA-transferred data by using the second memory.
2. The apparatus according to claim 1, wherein said memory control means distributes the image data read out from the first memory to channels capable of DMA transfer.
3. The apparatus according to claim 1, wherein to cause said memory control means to read out the data from the first memory, said address generation means generates
a start address for the read,
first offset information which controls DMA in the set rectangular area,
second offset information which controls DMA for a second rectangular area that is adjacent to a first rectangular area in the main scanning direction, and
third offset information which controls DMA for a third rectangular area that is shifted in the sub-scanning direction and adjacent to the first rectangular area.
4. The apparatus according to claim 1, wherein said image processing means generates a control signal to DMA-transfer image data of a processed rectangular area to the first memory.
5. The apparatus according to claim 4, wherein the control signal includes a signal to specify a main scanning length and a sub-scanning length of the rectangular area.
6. The apparatus according to claim 1, wherein said memory control means DMA-transfers the image data processed by said image processing means to the first memory and stores the image data on the basis of a control signal.
7. The apparatus according to claim 1, wherein the first memory is a ring buffer.
8. The apparatus according to claim 1, further comprising buffer control means for controlling a data write and a data read to/from the first memory.
9. The apparatus according to claim 1, further comprising mode setting means for setting an image processing mode,
wherein said memory area control means sets, for the image data, a rectangular area corresponding to the set image processing mode.
10. The apparatus according to claim 1, wherein said image processing means comprises a plurality of image processing sections each having a predetermined image processing function, each image processing section using the second memory as a shared memory and executing processing for the DMA-transferred data.
11. The apparatus according to claim 1, wherein said memory area control means sets a rectangular area added with adjacent image areas necessary for said image processing means to process the image to be processed.
12. An image processing method comprising:
a memory area control step of setting, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction;
an address generation step of generating address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area;
a memory control step of reading out the image data corresponding to the rectangular area and DMA-transferring the image data to a second memory in accordance with the generated address information; and
an image processing step of executing image processing for each rectangular area of the DMA-transferred data by using the second memory.
13. A program which causes a computer to execute an image processing method, comprising:
a memory area control module for setting, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction;
an address generation module for generating address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area;
a memory control module for reading out the image data corresponding to the rectangular area and DMA-transferring the image data to a second memory in accordance with the generated address information; and
an image processing module for executing image processing for each rectangular area of the DMA-transferred data by using the second memory.
14. A computer-readable storage medium which stores an image processing program code, the image processing program code comprising:
a memory area control code for setting, for image data bitmapped on a first memory, a rectangular area divided in a main scanning direction and sub-scanning direction;
an address generation code for generating address information to read out image data corresponding to the rectangular area in correspondence with the set rectangular area;
a memory control code for reading out the image data corresponding to the rectangular area and DMA-transferring the image data to a second memory in accordance with the generated address information; and
an image processing code for executing image processing for each rectangular area of the DMA-transferred data by using the second memory.
15. An image processing apparatus which segments image data bitmapped on a first memory into rectangular areas and processes the image data for each rectangular area, comprising:
mode setting means for setting an image processing mode;
rectangular area setting means for setting, for the image data, a rectangular area corresponding to the set image processing mode;
bitmap means for reading out the image data corresponding to the set rectangular area and bitmapping the image data of the rectangular area on a second memory; and
image processing means for executing image processing for the image data of the rectangular area bitmapped on the second memory in accordance with the set image processing mode.
16. The apparatus according to claim 15, wherein said rectangular area setting means sets rectangular areas that overlap between adjacent areas.
17. The apparatus according to claim 15, wherein the rectangular area set by said rectangular area setting means includes
an effective pixel area to be processed by said memory control means, and
a pixel area to be referred to in order to process the effective pixel area.
18. The apparatus according to claim 15, wherein said image processing means switches a pixel area to be referred to in the rectangular area in accordance with the set image processing mode.
19. The apparatus according to claim 15, wherein said rectangular area setting means controls division on the first memory in a main scanning direction and sub-scanning direction on the basis of a pixel area to be referred to and a main scanning resolution corresponding to the set image processing mode.
20. An image processing method of segmenting image data bitmapped on a first memory into rectangular areas and processing the image data for each rectangular area, comprising:
a mode setting step of setting an image processing mode;
a rectangular area setting step of setting, for the image data, a rectangular area corresponding to the set image processing mode;
a bitmap step of reading out the image data corresponding to the set rectangular area and bitmapping the image data on a second memory that can be shared; and
an image processing step of executing image processing for the image data of the rectangular area bitmapped on the second memory in accordance with the set image processing mode.
21. A program which causes a computer to execute an image processing method of segmenting image data bitmapped on a first memory into rectangular areas and processing the image data for each rectangular area, comprising:
a mode setting module for setting an image processing mode;
a rectangular area setting module for setting, for the image data, a rectangular area corresponding to the set image processing mode;
a bitmap module for reading out the image data corresponding to the set rectangular area and bitmapping the image data on a second memory that can be shared; and
an image processing module for executing image processing for the image data of the rectangular area bitmapped on the second memory in accordance with the set image processing mode.
22. A computer-readable storage medium which stores an image processing program code, the image processing program code comprising:
a mode setting code for setting an image processing mode;
a rectangular area setting code for setting, for image data, a rectangular area corresponding to the set image processing mode;
a bitmap code for reading out the image data corresponding to the set rectangular area and bitmapping the image data on a second memory that can be shared; and
an image processing code for executing image processing for the image data of the rectangular area bitmapped on the second memory in accordance with the set image processing mode.
23. An image processing apparatus comprising:
address generation means for generating, for image data bitmapped on a first memory, address information to read out image data corresponding to a rectangular area divided in a main scanning direction and sub-scanning direction;
memory control means for reading out the image data corresponding to the rectangular area in accordance with the generated address information and transferring the image data to a second memory; and
a plurality of image processing means for executing, for the image data transferred to the second memory, image processing as a series of image processing operations,
wherein the rectangular area has at least a maximum rectangular size necessary for executing the series of image processing operations.
24. The apparatus according to claim 23, wherein the rectangular area is a rectangular area added with adjacent image areas necessary for executing the series of image processing operations.
25. The apparatus according to claim 23, further comprising mode setting means for setting an image processing mode, and rectangular area setting means for setting, for the image data, a rectangular area corresponding to the set image processing mode.
26. An image processing method comprising:
an address generation step of generating, for image data bitmapped on a first memory, address information to read out image data corresponding to a rectangular area divided in a main scanning direction and sub-scanning direction;
a memory control step of reading out the image data corresponding to the rectangular area in accordance with the generated address information and transferring the image data to a second memory; and
an image processing step of executing, for the image data transferred to the second memory, a plurality of image processing operations as a series of image processing operations,
wherein the rectangular area has at least a maximum rectangular size necessary for executing the series of image processing operations.
27. The method according to claim 26, wherein the rectangular area is a rectangular area added with adjacent image areas necessary for executing the series of image processing operations.
28. The method according to claim 26, further comprising a mode setting step of setting an image processing mode, and a rectangular area setting step of setting, for the image data, a rectangular area corresponding to the set image processing mode.
29. A program which causes a computer to execute an image processing method of segmenting image data bitmapped on a first memory into rectangular areas and processing the image data for each rectangular area, comprising:
an address generation module for generating, for the image data bitmapped on the first memory, address information to read out image data corresponding to a rectangular area divided in a main scanning direction and sub-scanning direction;
a memory control module for reading out the image data corresponding to the rectangular area in accordance with the generated address information and transferring the image data to a second memory; and
an image processing module for executing, for the image data transferred to the second memory, a plurality of image processing operations as a series of image processing operations,
wherein the rectangular area has at least a maximum rectangular size necessary for executing the series of image processing operations.
30. The program according to claim 29, wherein the rectangular area is a rectangular area added with adjacent image areas necessary for executing the series of image processing operations.
31. The program according to claim 29, further comprising a mode setting module for setting an image processing mode, and a rectangular area setting module for setting, for the image data, a rectangular area corresponding to the set image processing mode.
US10/739,344 1999-12-23 2003-12-19 Image processing apparatus and image processing method Expired - Fee Related US7495669B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/956,129 US7043134B2 (en) 1999-12-23 2004-10-04 Thermo-optic plasmon-polariton devices
US11/487,370 US7675523B2 (en) 2002-12-26 2006-07-17 Image processing apparatus and image processing method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2002-378690 2002-12-26
JP2002-378689 2002-12-26
JP2002378690 2002-12-26
JP2002378689 2002-12-26

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US10/956,129 Continuation-In-Part US7043134B2 (en) 1999-12-23 2004-10-04 Thermo-optic plasmon-polariton devices
US11/487,370 Division US7675523B2 (en) 2002-12-26 2006-07-17 Image processing apparatus and image processing method

Publications (2)

Publication Number Publication Date
US20040130553A1 true US20040130553A1 (en) 2004-07-08
US7495669B2 US7495669B2 (en) 2009-02-24

Family

ID=32684268

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/739,344 Expired - Fee Related US7495669B2 (en) 1999-12-23 2003-12-19 Image processing apparatus and image processing method
US11/487,370 Expired - Fee Related US7675523B2 (en) 2002-12-26 2006-07-17 Image processing apparatus and image processing method

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/487,370 Expired - Fee Related US7675523B2 (en) 2002-12-26 2006-07-17 Image processing apparatus and image processing method

Country Status (1)

Country Link
US (2) US7495669B2 (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040130750A1 (en) * 2002-12-26 2004-07-08 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20050168470A1 (en) * 2004-01-30 2005-08-04 Ram Prabhakar Variable-length coding data transfer interface
US20080007795A1 (en) * 2006-07-07 2008-01-10 Canon Kabushiki Kaisha Multifunction printer and image processing method
US20080317138A1 (en) * 2007-06-20 2008-12-25 Wei Jia Uniform video decoding and display
US20090074314A1 (en) * 2007-09-17 2009-03-19 Wei Jia Decoding variable lenght codes in JPEG applications
US20090073007A1 (en) * 2007-09-17 2009-03-19 Wei Jia Decoding variable length codes in media applications
EP2054794A1 (en) * 2006-08-25 2009-05-06 Intel Corporation Display processing line buffers incorporating pipeline overlap
US20090141032A1 (en) * 2007-12-03 2009-06-04 Dat Nguyen Synchronization of video input data streams and video output data streams
US20090141996A1 (en) * 2007-12-03 2009-06-04 Wei Jia Comparator based acceleration for media quantization
US20090141797A1 (en) * 2007-12-03 2009-06-04 Wei Jia Vector processor acceleration for media quantization
US20090238478A1 (en) * 2008-03-18 2009-09-24 Masahiko Banno Image processing apparatus
US20100150244A1 (en) * 2008-12-11 2010-06-17 Nvidia Corporation Techniques for Scalable Dynamic Data Encoding and Decoding
US20100157396A1 (en) * 2008-12-24 2010-06-24 Samsung Electronics Co., Ltd. Image processing apparatus and method of controlling the same
US20100253694A1 (en) * 2009-04-01 2010-10-07 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium storing control program therefor
US20140112393A1 (en) * 2012-10-18 2014-04-24 Megachips Corporation Image processing device
US8726125B1 (en) 2007-06-06 2014-05-13 Nvidia Corporation Reducing interpolation error
US8725504B1 (en) 2007-06-06 2014-05-13 Nvidia Corporation Inverse quantization in audio decoding
US20150002708A1 (en) * 2013-07-01 2015-01-01 Kohei MARUMOTO Imaging device, image reading device, image forming apparatus, and method of driving imaging device
US20160321774A1 (en) * 2015-04-29 2016-11-03 Qualcomm Incorporated Adaptive memory address scanning based on surface format for graphics processing
US10810960B2 (en) 2016-12-08 2020-10-20 Sharp Kabushiki Kaisha Display device

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004127093A (en) * 2002-10-04 2004-04-22 Sony Corp Image processor and image processing method
KR100594242B1 (en) * 2004-01-29 2006-06-30 삼성전자주식회사 Source driver and source line driving method for flat panel display
DE102005045812A1 (en) * 2005-09-27 2007-05-16 Siemens Ag Method for the cache-optimized processing of a digital image data record
JP5065307B2 (en) * 2009-01-07 2012-10-31 キヤノン株式会社 Image processing apparatus and control method thereof
JP2011002910A (en) * 2009-06-16 2011-01-06 Canon Inc Apparatus and method for processing search
US8665283B1 (en) 2010-03-29 2014-03-04 Ambarella, Inc. Method to transfer image data between arbitrarily overlapping areas of memory
US8514235B2 (en) * 2010-04-21 2013-08-20 Via Technologies, Inc. System and method for managing the computation of graphics shading operations
US8731278B2 (en) * 2011-08-15 2014-05-20 Molecular Devices, Inc. System and method for sectioning a microscopy image for parallel processing
JP2013091222A (en) * 2011-10-25 2013-05-16 Canon Inc Image formation processing apparatus and image processing method
KR101969965B1 (en) 2012-12-24 2019-08-13 휴렛-팩커드 디벨롭먼트 컴퍼니, 엘.피. Image scanning apparatus, method for image compensation and computer-readable recording medium
KR20140109128A (en) 2013-03-05 2014-09-15 삼성전자주식회사 Method for reading data and apparatuses performing the same
JP6905195B2 (en) * 2017-11-16 2021-07-21 富士通株式会社 Data transfer device, arithmetic processing device and data transfer method
JP2020065193A (en) * 2018-10-18 2020-04-23 シャープ株式会社 Image forming apparatus, image processing method, and image processing program

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2004A (en) * 1841-03-12 Improvement in the manner of constructing and propelling steam-vessels
US5425135A (en) * 1990-10-31 1995-06-13 Ricoh Company, Ltd. Parallel interface for printer
US5566253A (en) * 1988-09-20 1996-10-15 Hitachi, Ltd. Method, a device and apparatus for processing values corresponding to measurements of pixels, and a facsimile system and a data processing system
US5642208A (en) * 1993-11-22 1997-06-24 Canon Kabushiki Kaisha Image forming system
US5923339A (en) * 1993-11-29 1999-07-13 Canon Kabushiki Kaisha Higher-speed parallel processing
US5937152A (en) * 1996-04-16 1999-08-10 Brother Kogyo Kabushiki Kaisha Printer with buffer memory
US6023281A (en) * 1998-03-02 2000-02-08 Ati Technologies, Inc. Method and apparatus for memory allocation
US6084686A (en) * 1996-11-26 2000-07-04 Canon Kabushiki Kaisha Buffer memory control device and control method, and an image processing device and method using the same
US6084813A (en) * 1998-06-04 2000-07-04 Canon Kabushiki Kaisha Apparatus and method for controlling memory backup using main power supply and backup power supply
US6130758A (en) * 1996-10-07 2000-10-10 Fuji Photo Film Co., Ltd. Printer system and method of controlling operation of the same
US20020159656A1 (en) * 2001-04-26 2002-10-31 Hiroyuki Matsuki Image processing apparatus, image processing method and portable imaging apparatus
US6595612B1 (en) * 2000-02-23 2003-07-22 Mutoh Industries Ltd. Inkjet printer capable of minimizing chromatic variation in adjacent print swaths when printing color images in bidirectional model
US6633975B1 (en) * 1998-11-13 2003-10-14 Minolta Co., Ltd. Data processing system having plurality of processors and executing series of processings in prescribed order
US6912638B2 (en) * 2001-06-28 2005-06-28 Zoran Corporation System-on-a-chip controller
US6944358B2 (en) * 2001-02-26 2005-09-13 Mega Chips Corporation Image processor
US7116449B1 (en) * 1999-11-29 2006-10-03 Minolta Co., Ltd. Image processing apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62192867A (en) 1986-02-20 1987-08-24 Mitsubishi Electric Corp Work station handling image data
JPS63307587A (en) 1987-06-09 1988-12-15 Fuji Photo Film Co Ltd Image data converter
JP2862242B2 (en) 1988-04-28 1999-03-03 キヤノン株式会社 Image reading device
JP2723970B2 (en) 1989-05-26 1998-03-09 株式会社日立製作所 Data transfer control device
US5530901A (en) 1991-11-28 1996-06-25 Ricoh Company, Ltd. Data Transmission processing system having DMA channels running cyclically to execute data transmission from host to memory and from memory to processing unit successively
JP2658897B2 (en) 1994-09-19 1997-09-30 株式会社日立製作所 Image reading apparatus and facsimile apparatus using the same
US6025875A (en) 1995-10-23 2000-02-15 National Semiconductor Corporation Analog signal sampler for imaging systems
JPH09247474A (en) 1996-03-04 1997-09-19 Fuji Photo Film Co Ltd Picture processor
JP2000032258A (en) 1998-07-09 2000-01-28 Canon Inc Image processing unit and image processing method
JP2000322374A (en) 1999-05-12 2000-11-24 Canon Inc Device and method for converting data
JP2001144920A (en) 1999-11-10 2001-05-25 Ricoh Co Ltd Image processor, image processing method and computer- readable recording medium for recording program to allow computer to execute the method
EP1193610B1 (en) 2000-09-29 2006-11-15 Ricoh Company, Ltd. Data processing apparatus and DMA data transfer method
JP2002359721A (en) 2001-06-01 2002-12-13 Sharp Corp Image read method, image reader and image forming device provided with the reader
DE60327736D1 (en) 2002-12-26 2009-07-09 Canon Kk Image processing apparatus and image processing method

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2004A (en) * 1841-03-12 Improvement in the manner of constructing and propelling steam-vessels
US5566253A (en) * 1988-09-20 1996-10-15 Hitachi, Ltd. Method, a device and apparatus for processing values corresponding to measurements of pixels, and a facsimile system and a data processing system
US5425135A (en) * 1990-10-31 1995-06-13 Ricoh Company, Ltd. Parallel interface for printer
US5642208A (en) * 1993-11-22 1997-06-24 Canon Kabushiki Kaisha Image forming system
US5923339A (en) * 1993-11-29 1999-07-13 Canon Kabushiki Kaisha Higher-speed parallel processing
US5937152A (en) * 1996-04-16 1999-08-10 Brother Kogyo Kabushiki Kaisha Printer with buffer memory
US6130758A (en) * 1996-10-07 2000-10-10 Fuji Photo Film Co., Ltd. Printer system and method of controlling operation of the same
US6084686A (en) * 1996-11-26 2000-07-04 Canon Kabushiki Kaisha Buffer memory control device and control method, and an image processing device and method using the same
US6023281A (en) * 1998-03-02 2000-02-08 Ati Technologies, Inc. Method and apparatus for memory allocation
US6084813A (en) * 1998-06-04 2000-07-04 Canon Kabushiki Kaisha Apparatus and method for controlling memory backup using main power supply and backup power supply
US6633975B1 (en) * 1998-11-13 2003-10-14 Minolta Co., Ltd. Data processing system having plurality of processors and executing series of processings in prescribed order
US7116449B1 (en) * 1999-11-29 2006-10-03 Minolta Co., Ltd. Image processing apparatus
US6595612B1 (en) * 2000-02-23 2003-07-22 Mutoh Industries Ltd. Inkjet printer capable of minimizing chromatic variation in adjacent print swaths when printing color images in bidirectional model
US6944358B2 (en) * 2001-02-26 2005-09-13 Mega Chips Corporation Image processor
US20020159656A1 (en) * 2001-04-26 2002-10-31 Hiroyuki Matsuki Image processing apparatus, image processing method and portable imaging apparatus
US7170553B2 (en) * 2001-04-26 2007-01-30 Sharp Kabushiki Kaisha Image processing apparatus, image processing method and portable imaging apparatus
US6912638B2 (en) * 2001-06-28 2005-06-28 Zoran Corporation System-on-a-chip controller

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040130750A1 (en) * 2002-12-26 2004-07-08 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US7817297B2 (en) 2002-12-26 2010-10-19 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20050168470A1 (en) * 2004-01-30 2005-08-04 Ram Prabhakar Variable-length coding data transfer interface
US8427494B2 (en) * 2004-01-30 2013-04-23 Nvidia Corporation Variable-length coding data transfer interface
US8339406B2 (en) * 2004-01-30 2012-12-25 Nvidia Corporation Variable-length coding data transfer interface
US20100106918A1 (en) * 2004-01-30 2010-04-29 Nvidia Corporation Variable-length coding data transfer interface
US20080007795A1 (en) * 2006-07-07 2008-01-10 Canon Kabushiki Kaisha Multifunction printer and image processing method
EP1892947A2 (en) * 2006-07-07 2008-02-27 Canon Kabushiki Kaisha Multifunction printer and image processing method
US7933049B2 (en) 2006-07-07 2011-04-26 Canon Kabushiki Kaisha Multifunction printer and image processing method
EP1892947B1 (en) * 2006-07-07 2016-04-27 Canon Kabushiki Kaisha Multifunction printer and image processing method
TWI381364B (en) * 2006-08-25 2013-01-01 Intel Corp Method for image processing and article comprising a machine-accessible medium having stored thereon instructions
EP2054794A1 (en) * 2006-08-25 2009-05-06 Intel Corporation Display processing line buffers incorporating pipeline overlap
EP2054794A4 (en) * 2006-08-25 2011-09-07 Intel Corp Display processing line buffers incorporating pipeline overlap
US20110037771A1 (en) * 2006-08-25 2011-02-17 Sreenath Kurupati Display Processing Line Buffers Incorporating Pipeline Overlap
US8726125B1 (en) 2007-06-06 2014-05-13 Nvidia Corporation Reducing interpolation error
US8725504B1 (en) 2007-06-06 2014-05-13 Nvidia Corporation Inverse quantization in audio decoding
US8477852B2 (en) 2007-06-20 2013-07-02 Nvidia Corporation Uniform video decoding and display
US20080317138A1 (en) * 2007-06-20 2008-12-25 Wei Jia Uniform video decoding and display
US20090073007A1 (en) * 2007-09-17 2009-03-19 Wei Jia Decoding variable length codes in media applications
US20090074314A1 (en) * 2007-09-17 2009-03-19 Wei Jia Decoding variable length codes in JPEG applications
US8849051B2 (en) 2007-09-17 2014-09-30 Nvidia Corporation Decoding variable length codes in JPEG applications
US8502709B2 (en) 2007-09-17 2013-08-06 Nvidia Corporation Decoding variable length codes in media applications
US20090141996A1 (en) * 2007-12-03 2009-06-04 Wei Jia Comparator based acceleration for media quantization
US8687875B2 (en) 2007-12-03 2014-04-01 Nvidia Corporation Comparator based acceleration for media quantization
US20090141797A1 (en) * 2007-12-03 2009-06-04 Wei Jia Vector processor acceleration for media quantization
US20090141032A1 (en) * 2007-12-03 2009-06-04 Dat Nguyen Synchronization of video input data streams and video output data streams
US8704834B2 (en) 2007-12-03 2014-04-22 Nvidia Corporation Synchronization of video input data streams and video output data streams
US8934539B2 (en) 2007-12-03 2015-01-13 Nvidia Corporation Vector processor acceleration for media quantization
US20090238478A1 (en) * 2008-03-18 2009-09-24 Masahiko Banno Image processing apparatus
US9307267B2 (en) 2008-12-11 2016-04-05 Nvidia Corporation Techniques for scalable dynamic data encoding and decoding
US20100150244A1 (en) * 2008-12-11 2010-06-17 Nvidia Corporation Techniques for Scalable Dynamic Data Encoding and Decoding
US8363294B2 (en) * 2008-12-24 2013-01-29 Samsung Electronics Co., Ltd. Image processing apparatus and method of controlling the same
US20130120812A1 (en) * 2008-12-24 2013-05-16 Samsung Electronics Co., Ltd. Image processing apparatus and method of controlling the same
US20100157396A1 (en) * 2008-12-24 2010-06-24 Samsung Electronics Co., Ltd. Image processing apparatus and method of controlling the same
US8717649B2 (en) * 2008-12-24 2014-05-06 Samsung Electronics Co., Ltd. Image processing apparatus and method of controlling the same
US20100253694A1 (en) * 2009-04-01 2010-10-07 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium storing control program therefor
US8368708B2 (en) 2009-04-01 2013-02-05 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium storing control program therefor
US10475158B2 (en) * 2012-10-18 2019-11-12 Megachips Corporation Image processing device
US20140112393A1 (en) * 2012-10-18 2014-04-24 Megachips Corporation Image processing device
US20150002708A1 (en) * 2013-07-01 2015-01-01 Kohei MARUMOTO Imaging device, image reading device, image forming apparatus, and method of driving imaging device
US9516287B2 (en) * 2013-07-01 2016-12-06 Ricoh Company, Ltd. Imaging device, image reading device, image forming apparatus, and method of driving imaging device
WO2016175918A1 (en) * 2015-04-29 2016-11-03 Qualcomm Incorporated Adaptive memory address scanning based on surface format for graphics processing
CN107533752A (en) * 2015-04-29 2018-01-02 高通股份有限公司 The adaptive memory address scan based on surface format for graphics process
US10163180B2 (en) * 2015-04-29 2018-12-25 Qualcomm Incorporated Adaptive memory address scanning based on surface format for graphics processing
US20160321774A1 (en) * 2015-04-29 2016-11-03 Qualcomm Incorporated Adaptive memory address scanning based on surface format for graphics processing
US10810960B2 (en) 2016-12-08 2020-10-20 Sharp Kabushiki Kaisha Display device

Also Published As

Publication number Publication date
US20060256120A1 (en) 2006-11-16
US7495669B2 (en) 2009-02-24
US7675523B2 (en) 2010-03-09

Similar Documents

Publication Publication Date Title
US7675523B2 (en) Image processing apparatus and image processing method
US7817297B2 (en) Image processing apparatus and image processing method
JP3732702B2 (en) Image processing device
US20010019429A1 (en) Image processing apparatus
US20080013133A1 (en) Contact-type color scanning unit, image scanning device, image scanning method, and computer program product
GB2330972A (en) Multiple image scanner
JP4384124B2 (en) Image processing apparatus and image processing method
US8174732B2 (en) Apparatus, method, and computer program product for processing image
JP4528843B2 (en) Line buffer circuit, image processing apparatus, and image forming apparatus
JPH08172532A (en) Image reader and read method
JP3259975B2 (en) Image reading device
JP2004220584A (en) Image processing device and image processing method
JP3870190B2 (en) Image processing device
JP2005173926A (en) Image processing apparatus, method, program, and storage medium
JP4328608B2 (en) Image processing apparatus, method, program, and storage medium
JPH11355575A (en) Image processor
JP2005124071A (en) Image processing apparatus and image processing method, computer readable storage medium with program stored thereon, and program
JP4328609B2 (en) Image processing apparatus, method, program, and storage medium
JP3234650B2 (en) Red / black data reader
JP2000032258A (en) Image processing unit and image processing method
JPH0131344B2 (en)
JPS61198872A (en) Picture input device
JP2003037739A (en) Data transfer controller, control method therefor, and control program
JP2002152511A (en) Image processor, image processing method and computer readable medium recording program for executing that method in computer
JP2001184502A (en) Device and method for processing image and computer readable recording medium with recorded program for computer to execute the same method

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:USHIDA, KA'TSUTOSHI;NAOI, YUICHI;KATAHIRA, YOSHIAKI;AND OTHERS;REEL/FRAME:014829/0917;SIGNING DATES FROM 20031215 TO 20031216

AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:USHIDA, KATSUTOSHI;NAOI, YUICHI;KATAHIRA, YOSHIAKI;AND OTHERS;REEL/FRAME:015688/0666;SIGNING DATES FROM 20031215 TO 20031216

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20210224