US9451125B2 - Image processing apparatus, method therefor, and image reading apparatus - Google Patents


Info

Publication number
US9451125B2
Authority
US
United States
Prior art keywords
original
unit
pixel
sensor unit
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/469,731
Other languages
English (en)
Other versions
US20150070734A1 (en)
Inventor
Katsuyuki Hagiwara
Takayuki Tsutsumi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAGIWARA, KATSUYUKI, TSUTSUMI, TAKAYUKI
Publication of US20150070734A1
Priority to US15/236,712 (issued as US9614996B2)
Application granted
Publication of US9451125B2

Classifications

All classifications fall under H ELECTRICITY → H04 ELECTRIC COMMUNICATION TECHNIQUE → H04N PICTORIAL COMMUNICATION, e.g. TELEVISION:

    • H04N 1/387 Composing, repositioning or otherwise geometrically modifying originals → H04N 1/3877 Image rotation → H04N 1/3878 Skew detection or correction
    • H04N 1/00681 Detecting the presence, position or size of a sheet or correcting its position before scanning → H04N 1/00684 Object of the detection → H04N 1/00718 Skew
    • H04N 1/00681 → H04N 1/00684 Object of the detection → H04N 1/00721 Orientation
    • H04N 1/00681 → H04N 1/00763 Action taken as a result of detection → H04N 1/00774 Adjusting or controlling
    • H04N 1/04 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa → H04N 1/047 Detection, control or error compensation of scanning velocity or position
    • H04N 1/387 → H04N 1/3876 Recombination of partial images to recreate the original image
    • H04N 1/40 Picture signal circuits → H04N 1/409 Edge or detail enhancement; Noise or error suppression
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof → H04N 2201/0077 Types of the still picture apparatus → H04N 2201/0082 Image hardcopy reproducer

Definitions

  • the present invention relates to image processing when reading an original image by a reading unit to which a plurality of sensor units are attached.
  • this technique detects the tilt angle of the original, and performs tilt correction processing according to the detected tilt angle.
  • an image processing apparatus comprising: a storage unit configured to store attachment position information indicating attachment positions of a plurality of sensor units, each of which reads an image in each area obtained by dividing an area of an original along a sub-scanning direction; a joint correction unit configured to perform, based on the attachment position information of each sensor unit, attachment error correction of correcting an attachment error of the sensor unit for image data output from the sensor unit, and perform joint processing for the image data having undergone the attachment error correction so as to generate image data of the original; and a tilt correction unit configured to perform, based on placement information of the original, tilt correction of correcting a tilt of the original for the image data having undergone the joint processing.
  • FIG. 1 is a view for explaining the positional relationship between an original and the sensor of a reading apparatus.
  • FIGS. 2A and 2B are views for explaining the arrangement of the sensor.
  • FIG. 3 is a block diagram for explaining the arrangement of an image processing unit in the reading apparatus according to the first embodiment.
  • FIG. 4 is a view for explaining the third coordinate system different for each sensor unit.
  • FIG. 5 is a block diagram for explaining the arrangement of an image processing unit in a reading apparatus according to the second embodiment.
  • FIGS. 6A and 6B are flowcharts for explaining the processing of a geometric correction unit.
  • FIG. 7 is a graph for explaining combining of pixel values corresponding to adjacent sensor units.
  • FIG. 8 is a block diagram for explaining the arrangement of an image processing unit in a reading apparatus according to the third embodiment.
  • FIGS. 9A and 9B are views for explaining reading of an image by an image reading unit.
  • FIGS. 10A and 10B are views each for explaining a data format when a memory controller transfers a pixel value read out from a memory.
  • FIG. 11 is a view for explaining reading of pixel values for each band.
  • FIG. 12 is a flowchart for explaining the processing of a shading correction unit.
  • FIG. 13 is a view showing an example of a tilt corrected image.
  • FIG. 14 is a block diagram for explaining the arrangement of a missing pixel insertion unit.
  • FIG. 15 is a flowchart for explaining the operation of the missing pixel insertion unit.
  • FIGS. 16A and 16B are views each showing an example of a pixel command.
  • FIGS. 17A to 17C are views each for explaining a format when a memory controller transfers a pixel command according to the fourth embodiment.
  • a conveyance mechanism (not shown) conveys the original 104 in a direction (original conveyance direction) indicated by an arrow in FIG. 1 .
  • the sensor 101 fixed to the reading apparatus reads an image on the conveyed original 104 .
  • the present invention is also applicable to an arrangement in which the sensor 101 is moved and reads the image on the original 104 placed on an original table. That is, the image on the original 104 is read by changing the relative position in a sub-scanning direction Y between the original 104 and the sensor 101.
  • FIG. 1 shows a first coordinate system xy having one vertex of the original 104 as a reference (origin), and a second coordinate system XY having, for example, one end of the sensor 101 of the reading apparatus as a reference (origin).
  • the original 104 is desirably placed so that one of the sides of the original 104 is along the sub-scanning direction Y.
  • the original 104 may be placed at a position and angle different from ideal ones, such as a shift position (x0, y0) of the original and a tilt angle θ of the original.
  • the first coordinate system and the second coordinate system do not coincide with each other. If the coordinate systems do not coincide with each other, even if the tilt angle θ is small, a shift between the leading end and trailing end of the original may be large. This shift is noticeable especially for an original of a large size.
  • the sensor 101 is formed by almost linearly arranging a plurality of sensor units 102 a to 102 d .
  • Each of original sensors 103 a to 103 c is a sensor for acquiring original placement information.
  • the sensor units 102 a to 102 d are ideally arranged in a direction parallel to a main scanning direction X with reference to the second coordinate system XY to have a predetermined offset in the sub-scanning direction Y.
  • the ideal arrangement of the sensor units 102 a to 102 d is indicated by broken lines.
  • the arrangement of each sensor unit includes an error in attachment position and an error in attachment angle, which occur during assembly of a product or replacement of parts of the product.
  • the error in attachment position corresponds to deviation from the design attachment position
  • the error in attachment angle corresponds to deviation from the design attachment angle.
  • FIGS. 2A and 2B show only an XY plane. However, there may also be errors in attachment position and attachment angle in the Z-axis direction in FIGS. 2A and 2B, caused by, for example, a lift at the time of attachment of the sensor unit, or a bend due to the housing of the reading apparatus not being an ideal rigid body.
  • Each of the original sensors 103a to 103c is a sensor for detecting the edge of the original 104. It is possible to obtain original placement information by calculating the tilt angle θ of the original 104 based on the time difference between timings at which the respective original sensors detect the edge of the original. Note that an error in attachment position of each of the original sensors 103a to 103c is corrected, as needed, by a method of additionally measuring the error or the like. When the original 104 is placed on the original table, it is possible to detect the position and size of the original 104 by performing a pre-scan prior to reading of the original image.
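As a sketch (not taken from the patent), the tilt angle θ can be recovered from the edge-detection timings described above: a later detection time means that part of the leading edge arrives later, i.e. sits further back in the sub-scanning direction. The function name, units, and the least-squares fit over the three original sensors are illustrative assumptions:

```python
import math

def tilt_from_edge_timings(sensor_x_mm, detect_t_s, speed_mm_s):
    """Estimate the original's tilt angle from the times at which each
    original sensor (at known X positions) detects the leading edge."""
    # Convert detection times to Y offsets of the edge at each sensor.
    y_mm = [t * speed_mm_s for t in detect_t_s]
    # Least-squares slope of y over x gives tan(theta) of the edge line.
    n = len(sensor_x_mm)
    mean_x = sum(sensor_x_mm) / n
    mean_y = sum(y_mm) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(sensor_x_mm, y_mm))
    den = sum((x - mean_x) ** 2 for x in sensor_x_mm)
    return math.atan2(num, den)  # tilt angle in radians

# Example: the edge reaches three sensors 2 ms apart at 500 mm/s.
theta = tilt_from_edge_timings([0.0, 100.0, 200.0], [0.000, 0.002, 0.004], 500.0)
```

Using all three sensors in a least-squares fit, rather than just the first and last, averages out detection jitter at any single sensor.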
  • the sensor units 102 a to 102 d are arranged so as to obtain image data having no gap in the main-scanning direction X even if there exist a shift of the attachment position and that of the attachment angle to some extent. That is, the sensor units 102 a to 102 d are arranged so that sensor units adjacent to each other in the main scanning direction X partially overlap each other, and have different offsets in the sub-scanning direction Y.
  • the sensor units 102 a to 102 d are used to obtain four divided image data by dividing the area of the original 104 along the sub-scanning direction. It is then possible to obtain one image data corresponding to the original 104 from the four divided image data by performing joint processing by a joint correction unit 204 (to be described later) in consideration of the overlapped portions of the sensor units.
  • the third coordinate system is set in addition to the first and second coordinate systems.
  • in the third coordinate system, the array direction of the light-receiving elements of the sensor unit (that is, the direction of the attachment angle) is set as the first axis, the sub-scanning direction Y is set as the second axis, and a position including the shift of the attachment position of the sensor unit and the offset of the placement is set as the origin.
  • the attachment position information of each sensor unit need only be obtained in advance, for example by reading a predetermined adjustment pattern or by directly measuring the attachment position with an additional measuring apparatus, and stored in a memory, so that the attachment position information of each sensor unit can be acquired from the memory as needed.
  • the arrangement of an image processing unit 201 in the reading apparatus according to the first embodiment will be described with reference to FIG. 3 .
  • the image processing unit 201 generates image data of the first coordinate system xy with reference to the original by obtaining a pixel value at each pixel position on the first coordinate system xy with reference to the original.
  • the image processing unit 201 then executes image processing for the generated image data.
  • Sensor characteristic correction units 202 respectively perform correction of characteristics different for each sensor unit and each light-receiving element of each sensor unit, such as shading correction, for image data output from the sensor units 102 a to 102 d , thereby normalizing the image data.
  • each sensor characteristic correction unit 202 is provided for each sensor unit.
  • the sensor characteristic correction units 202 are arranged before the joint correction unit 204 (to be described later). This is because if the sensor characteristic correction units 202 are arranged after the joint correction unit 204 , the correspondence between each pixel value having undergone attachment error correction and a sensor unit which has read the pixel value becomes uncertain.
  • Read data buffers 203 temporarily store the image data output from the sensor characteristic correction units 202 , respectively.
  • Each read data buffer 203 is provided for each sensor unit, similarly to the sensor characteristic correction units 202 .
  • the order of pixel values output from the respective sensor units may be different from that of pixel values read out by the joint correction unit 204 , and thus the read data buffers 203 are configured to be randomly accessed.
  • the joint correction unit 204 acquires the pieces of attachment position information (attachment position and attachment angle including errors) of the sensor units from a nonvolatile memory 209 .
  • the joint correction unit 204 outputs pixel values (to be referred to as "attachment-error corrected pixel values" hereinafter) obtained by performing attachment error correction for the image data output from the plurality of sensor units 102a to 102d according to the pieces of attachment position information. That is, the joint correction unit 204 performs coordinate transformation of the image data from the third coordinate system with reference to each sensor unit into the second coordinate system with reference to the reading apparatus.
  • the joint correction unit 204 preferentially selects one of the sensor units for an area which is readable overlappingly by a plurality of sensor units, thereby performing the joint processing of the image data.
  • An intermediate data buffer 205 temporarily stores, for tilt correction, the image data which has been joined by the joint correction unit 204 and is formed from attachment-error corrected pixel values. Since the order of the attachment-error corrected pixel values obtained by the joint correction unit 204 is different from the order in which a tilt correction unit 206 reads out the attachment-error corrected pixel values, the intermediate data buffer 205 is configured to be randomly accessed.
  • the tilt correction unit 206 calculates original placement information from signals of the original sensors 103 a to 103 c , and corrects the image data read out from the intermediate data buffer 205 in accordance with the original placement information, thereby correcting the tilt of the original image. That is, the tilt correction unit 206 performs coordinate transformation of the image data from the second coordinate system with reference to the reading apparatus into the first coordinate system with reference to the original.
  • the tilt correction unit 206 can obtain image data corresponding to the respective pixel positions on the first coordinate system. Although the operation of the tilt correction unit 206 will be described in detail later, the tilt correction unit 206 is configured to operate in the same order as the processing order of an image treatment unit 207 to reduce a buffer between the tilt correction unit 206 and the image treatment unit 207 .
  • the image treatment unit 207 performs, for the image data obtained by the tilt correction unit 206 , one or both of image processing such as density correction, contrast correction, or noise removal, and data compression processing. Note that if neither image processing nor data compression processing is necessary, the image treatment unit 207 can be omitted.
  • when the image processing of the image treatment unit 207 includes spatial filter processing such as edge enhancement, it is desirable to process the image data in the order of rectangular blocks having a predetermined width and height so as to perform processing with a small internal memory capacity. Furthermore, when image data compression including orthogonal transformation for each block of a predetermined size, like JPEG (Joint Photographic Experts Group), is applied, processing for each block is preferable.
  • An output data buffer 208 temporarily stores the image data having undergone the image processing by the image treatment unit 207 , or the tilt corrected image data obtained by the tilt correction unit 206 when the image treatment unit 207 is omitted.
  • the sensor characteristic correction unit 202 , joint correction unit 204 , tilt correction unit 206 , and image treatment unit 207 can also be implemented by supplying programs for implementing the functions of the units to a single or a plurality of microprocessors (CPUs) of a computer device through a recording medium.
  • the read data buffer 203 , intermediate data buffer 205 , and output data buffer 208 may be allocated to a random access memory (RAM) serving as a work memory for the CPU.
  • the third coordinate system different for each sensor unit will be described with reference to FIG. 4 .
  • the sensor units 102 a and 102 b can read divided areas 301 a and 301 b within a predetermined time, respectively. That is, the third coordinate system of the sensor unit 102 a is a coordinate system XaYa, and that of the sensor unit 102 b is a coordinate system XbYb.
  • the third coordinate system is a coordinate system with reference to the corresponding sensor unit, and serves as the reference coordinate system of the image data read by the sensor unit.
  • the coordinate system XbYb is not a rectangular coordinate system. This is because an Xb-axis includes an error in attachment angle of the sensor unit 102 b but a Yb-axis coincides with the sub-scanning direction determined based on the original conveyance direction and is thus not influenced by the error in attachment angle.
  • $$\begin{bmatrix} X_n \\ Y_n \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta_n & -\sin\theta_n & -x_n \\ 0 & 1 & -y_n \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \quad (1)$$
  • $$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & x_0 \\ \sin\theta & \cos\theta & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (2)$$
  • the first coordinate transformation formula indicated by equation (1) is used to transform coordinate values (X, Y) on the second coordinate system with reference to the reading apparatus into coordinate values (Xn, Yn) on the third coordinate system with reference to the sensor unit.
  • the parameters of the first coordinate transformation formula are an attachment angle θn formed by the attachment direction of a sensor unit 102n with the X-axis of the second coordinate system, and an attachment position (xn, yn) of the sensor unit 102n on the second coordinate system.
  • the parameters θn and (xn, yn) of the first coordinate transformation formula are therefore also different for each sensor unit.
  • the second coordinate transformation formula indicated by equation (2) is used to transform the coordinate values (x, y) on the first coordinate system with reference to the original into coordinate values (X, Y) on the second coordinate system with reference to the reading apparatus.
  • the parameters of the second coordinate transformation formula are the original shift position (x0, y0) and the original tilt angle θ.
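The two transformation formulas can be sketched directly from equations (1) and (2); the function names and argument order are illustrative, not from the patent:

```python
import math

def to_sensor_coords(X, Y, theta_n, xn, yn):
    """First transformation (1): apparatus coords (X, Y) -> sensor-unit
    coords (Xn, Yn). Only the first axis is rotated by the attachment
    angle theta_n; Yn stays aligned with the sub-scanning direction."""
    Xn = math.cos(theta_n) * X - math.sin(theta_n) * Y - xn
    Yn = Y - yn
    return Xn, Yn

def to_apparatus_coords(x, y, theta, x0, y0):
    """Second transformation (2): original coords (x, y) -> apparatus
    coords (X, Y) using the original's shift (x0, y0) and tilt theta."""
    X = math.cos(theta) * x - math.sin(theta) * y + x0
    Y = math.sin(theta) * x + math.cos(theta) * y + y0
    return X, Y
```

With all error parameters zero, both functions reduce to the identity, matching the ideal case in which the coordinate systems coincide.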
  • the joint correction unit 204 acquires an attachment-error corrected pixel value in a predetermined pixel order for each pixel corresponding to the coordinate values (X, Y) on the second coordinate system with reference to the reading apparatus. For example, a pixel order which makes write addresses in the intermediate data buffer 205 continuous is efficient.
  • the joint correction unit 204 performs correction of the image data output from each of the plurality of sensor units 102 a to 102 d in accordance with the attachment position information of the sensor unit. That is, the joint correction unit 204 obtains image data of the second coordinate system with reference to the reading apparatus from the image data of the third coordinate system with reference to the sensor unit, and joins the obtained image data with image data of an adjacent sensor unit, as needed.
  • the joint correction unit 204 transforms the coordinate values (X, Y) on the second coordinate system of a processing target pixel into the coordinate values (Xn, Yn) on the third coordinate system according to the first coordinate transformation formula indicated by equation (1).
  • the coordinate values (Xn, Yn) correspond to coordinate values on the third coordinate system with reference to each sensor unit, such as (Xa, Ya) or (Xb, Yb) shown in FIG. 4 .
  • the joint correction unit 204 determines whether the coordinate values (Xn, Yn) fall within the readable range of each sensor unit (for example, the divided area 301 a or 301 b shown in FIG. 4 ). For example, by comparing Xn with the effective width of the sensor unit 102 a and Yn with an original feed amount, the joint correction unit 204 can determine whether the sensor unit 102 a can read the pixel of the coordinate values (Xn, Yn).
  • the above-described coordinate transformation and determination are sequentially performed for the respective processing target pixels in a predetermined order for each sensor unit. For example, a sensor unit that has been first determined to have a readable range within which the coordinate values fall is preferentially selected.
  • the joint correction unit 204 selects a sensor unit having a readable range within which the coordinate values (Xn, Yn) fall.
  • the joint correction unit 204 then reads out, as the attachment-error corrected pixel value of the coordinate values (X, Y) on the second coordinate system, a pixel value at an address corresponding to the coordinate values (Xn, Yn) in the read data buffer 203 corresponding to the sensor unit.
  • the joint correction unit 204 assigns a pixel value corresponding to a predetermined background color as the attachment-error corrected pixel value of the coordinate values (X, Y).
  • the attachment-error corrected pixel value thus obtained is stored at an address corresponding to the coordinate values (X, Y) in the intermediate data buffer 205 . That is, the joint correction unit 204 stores the attachment-error corrected pixel value in the intermediate data buffer 205 in association with the coordinate values (X, Y).
  • the read data buffer 203 stores pixel values acquired at a predetermined sampling interval, whereas the coordinate values (Xn, Yn) obtained by the first coordinate transformation are not necessarily integer values.
  • a simple method of obtaining the attachment-error corrected pixel value is a nearest neighbor algorithm, and a pixel value at a sample point corresponding to integer coordinate values obtained by rounding the coordinate values (Xn, Yn) to the nearest integers may be set as the attachment-error corrected pixel value. That is, the joint correction unit 204 reads out, as the attachment-error corrected pixel value, a pixel value at an address corresponding to coordinate values obtained by rounding the coordinate values (Xn, Yn) to integer values in the read data buffer 203 corresponding to the selected sensor unit.
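A minimal sketch of the nearest-neighbor fetch described above, assuming the read data buffer is exposed as a row-major 2-D array (an assumption; the patent addresses it by buffer address):

```python
def nearest_neighbor_value(buffer, Xn, Yn):
    """Nearest-neighbor fetch: round the non-integer coordinates (Xn, Yn)
    to the closest sample point and return the stored pixel value.
    `buffer` is indexed as buffer[row][column], i.e. buffer[Yn][Xn]."""
    col = int(round(Xn))
    row = int(round(Yn))
    return buffer[row][col]
```

Note that Python's `round` uses banker's rounding at exact .5 values; a hardware implementation would typically round half up instead.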
  • the attachment-error corrected pixel value may be acquired by interpolation processing.
  • the interpolation processing is to calculate the attachment-error corrected pixel value by referring to a plurality of neighboring sample points, and performing weighting calculation according to the difference between the coordinate values (Xn, Yn) and the coordinate values of each sample point.
  • Bilinear interpolation and bicubic interpolation are widely used.
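For reference, a bilinear variant of the same lookup; the buffer layout is the same assumed row-major array, and the coordinates must lie strictly inside the sampled grid:

```python
def bilinear_value(buffer, Xn, Yn):
    """Bilinear interpolation: weight the four sample points surrounding
    (Xn, Yn) by how close the fractional part of each coordinate is to
    them. Requires 0 <= Xn < cols - 1 and 0 <= Yn < rows - 1."""
    x0, y0 = int(Xn), int(Yn)
    fx, fy = Xn - x0, Yn - y0
    p00 = buffer[y0][x0]
    p01 = buffer[y0][x0 + 1]
    p10 = buffer[y0 + 1][x0]
    p11 = buffer[y0 + 1][x0 + 1]
    top = p00 * (1 - fx) + p01 * fx        # interpolate along the row y0
    bottom = p10 * (1 - fx) + p11 * fx     # interpolate along the row y0 + 1
    return top * (1 - fy) + bottom * fy    # then between the two rows
```

Because the four neighboring sample points are referred to per output pixel (and shared between adjacent output pixels), this is where the cache function mentioned below for the read data buffer pays off.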
  • the joint correction unit 204 need only determine whether the coordinate values (X, Y) of a processing target pixel fall within the projected range.
  • the tilt correction unit 206 acquires a pixel value (to be referred to as a “tilt corrected pixel value” hereinafter) having undergone tilt correction for each pixel corresponding to the coordinate values (x, y) on the first coordinate system with reference to the original.
  • the tilt correction unit 206 operates based on a processing request added with the pixel position information of the first coordinate system xy, which has been received from the image treatment unit 207 .
  • the tilt correction unit 206 and image treatment unit 207 may be configured to respectively operate according to a predetermined processing order commonly set between them.
  • the tilt correction unit 206 transforms the coordinate values (x, y) on the first coordinate system of the processing target pixel into the coordinate values (X, Y) on the second coordinate system with reference to the reading apparatus by the second coordinate transformation formula indicated by equation (2).
  • the tilt correction unit 206 then reads out the attachment-error corrected pixel value at the address corresponding to the coordinate values (X, Y) in the intermediate data buffer 205 as the tilt corrected pixel value of the coordinate values (x, y) on the first coordinate system, and supplies the tilt corrected pixel value to the image treatment unit 207 .
  • the tilt correction unit 206 may perform the same interpolation processing as that of the joint correction unit 204 , as needed.
  • the read data buffer 203 or intermediate data buffer 205 is desirably configured to have a cache function, as needed, by considering that a plurality of pixels are referred to or the same pixel is referred to a plurality of times.
  • the joint correction unit 204 joins the image data, which includes correction according to the attachment position information.
  • the tilt correction unit 206 performs tilt correction of the image data according to the original placement information.
  • the tilt correction unit 206 can acquire a tilt corrected pixel value in an arbitrary pixel order, independently of the joint correction unit 204 .
  • image enlargement/reduction processing can be implemented by setting a coefficient corresponding to an enlargement ratio or reduction ratio in the second coordinate transformation by the tilt correction unit 206 .
  • when a sensor unit has an attachment angle error, the projection on the XY plane of the effective length of the sensor is shorter than in a case in which the sensor unit has no attachment angle error. That is, the sensor unit reads a section shorter than that in the ideal case, and outputs enlarged image data as compared with a case in which the sensor unit has no attachment angle error. Therefore, it is only necessary to perform correction to reduce the image data in the attachment direction of the sensor unit in the first coordinate transformation by the joint correction unit 204.
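A small numeric sketch of this foreshortening argument (illustrative only): a sensor attached at angle θn spans only cos θn of its nominal width along the X-axis, so its output is enlarged by 1/cos θn, and the reduction factor applied in the sensor's attachment direction is the reciprocal:

```python
import math

def enlargement_from_attachment_angle(theta_n):
    """A sensor tilted by theta_n projects to cos(theta_n) of its nominal
    width on the X-axis, so its output is enlarged by 1/cos(theta_n);
    the reciprocal cos(theta_n) is the reduction factor the joint
    correction folds into the first coordinate transformation."""
    enlargement = 1.0 / math.cos(theta_n)
    reduction = math.cos(theta_n)
    return enlargement, reduction
```

For realistic attachment errors θn is tiny, so the factor is very close to 1, but over a long sensor the accumulated positional shift is still visible without this correction.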
  • with the image reading apparatus for reading an image on an original of a large size by the plurality of sensor units, even if the placement of the original is tilted or offset, it is possible to obtain satisfactory image data by correcting the attachment error of each sensor unit and the tilt of the original.
  • the image processing unit 201 of the second embodiment includes a geometric correction unit 210 for implementing both the functions of a joint correction unit 204 and a tilt correction unit 206 .
  • the geometric correction unit 210 uses the above-described second coordinate transformation and the third coordinate transformation obtained by combining the above-described first and second coordinate transformations.
  • the third coordinate transformation formula obtained by combining equations (1) and (2) is given by:
  • $$\begin{bmatrix} X_n \\ Y_n \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta_n & -\sin\theta_n & -x_n \\ 0 & 1 & -y_n \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & x_0 \\ \sin\theta & \cos\theta & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (3)$$
  • FIGS. 6A and 6B show processing for one pixel.
  • the geometric correction unit 210 determines the coordinate values (x, y) on the first coordinate system of a processing target pixel according to a request of an image treatment unit 207 (S 501 ).
  • the pixel requested next by the image treatment unit 207 is set as the processing target pixel.
  • a processing target pixel may be determined according to a predetermined processing order commonly set between the geometric correction unit 210 and the image treatment unit 207 .
  • the geometric correction unit 210 transforms the coordinate values (x, y) on the first coordinate system with reference to the original into coordinate values (X,Y) on the second coordinate system with reference to the reading apparatus using the second coordinate transformation formula (S 502 ).
  • the geometric correction unit 210 determines a sensor unit capable of reading the processing target pixel (S 503 ).
  • in an area readable by two sensor units, an attachment-error corrected pixel value P1 is calculated by setting weight coefficients w1 for a pixel value Pa corresponding to a sensor unit a and w2 for a pixel value Pb corresponding to a sensor unit b, with w1 + w2 = 1:
  • $$P_1 = w_1 \cdot P_a + w_2 \cdot P_b \quad (4)$$
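A sketch of the overlap blending of FIG. 7, assuming the weight of one unit falls linearly from 1 to 0 as the position moves through the overlap range (the linear ramp and the parameter names are assumptions, not from the patent):

```python
def combine_overlap(Pa, Pb, X, overlap_start, overlap_end):
    """Blend pixel values from two adjacent sensor units across their
    overlap: the weight of unit a falls linearly from 1 to 0 across
    [overlap_start, overlap_end], with w1 + w2 = 1 as in expression (4):
        P1 = w1 * Pa + w2 * Pb
    """
    t = (X - overlap_start) / (overlap_end - overlap_start)
    t = min(max(t, 0.0), 1.0)   # clamp outside the overlap range
    w1, w2 = 1.0 - t, t
    return w1 * Pa + w2 * Pb
```

Outside the overlap the clamp makes the result equal to the single readable unit's value, so the seam appears only where both units contribute.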
  • the geometric correction unit 210 branches the process according to determination of the sensor unit (S 504 ). If the coordinate value X falls outside the readable ranges of all the sensor units, and there is no sensor unit capable of reading the coordinate value X, the geometric correction unit 210 sets a predetermined background color as a tilt corrected pixel value (S 505 ).
  • the geometric correction unit 210 transforms the coordinate values (x, y) on the first coordinate system with reference to the original into coordinate values (Xn, Yn) on the third coordinate system with reference to the sensor unit using the third coordinate transformation formula indicated by equation (3) (S506). If, for example, a sensor unit 102a is selected in step S503, coordinate values (Xa, Ya) are obtained by the third coordinate transformation formula having the attachment position (xa, ya) and attachment angle θa of the sensor unit as parameters. The geometric correction unit 210 acquires a pixel value corresponding to the coordinate values (Xn, Yn) as a tilt corrected pixel value (S507). At this time, interpolation processing may be performed, as needed, similarly to the joint correction unit 204 in the first embodiment.
  • If the pixel falls within the overlapping readable ranges of two sensor units, the geometric correction unit 210 transforms the coordinate values (x, y) into coordinate values (Xn1, Yn1) with reference to one of the sensor units using the third coordinate transformation formula indicated by equation (3) (S 508).
  • The geometric correction unit 210 acquires a pixel value Pa corresponding to the coordinate values (Xn1, Yn1) (S 509).
  • The geometric correction unit 210 then transforms the coordinate values (x, y) into coordinate values (Xn2, Yn2) with reference to the other sensor unit using the third coordinate transformation formula indicated by equation (3) (S 510).
  • The geometric correction unit 210 acquires a pixel value Pb corresponding to the coordinate values (Xn2, Yn2) (S 511).
  • The geometric correction unit 210 performs weighting processing of combining the pixel values Pa and Pb into a tilt corrected pixel value by the same processing as that indicated by expression (4) (S 512). After that, the geometric correction unit 210 outputs the tilt corrected pixel value to the image treatment unit 207 (S 513).
  • Note that the weighted average may degrade an edge.
  • To mitigate this, the weight may be changed to form an S-shaped curve instead of linearly changing the weight, or one of the pixel values may be selected at a probability corresponding to the weight instead of calculating the average.
  • Alternatively, a combined tilt corrected pixel value can be directly obtained by three-dimensional interpolation processing by adding the axis of the combined weight to the two-dimensional coordinate values.
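The linear weighting of expression (4) and the S-shaped alternative above can be sketched as follows; the smoothstep curve used for the S-shape is one possible choice, not the one prescribed by the embodiment.

```python
def blend_pixels(pa, pb, t):
    """Combine pixel values Pa and Pb read by two overlapping sensor
    units.  t in [0, 1] is the normalized position across the overlap
    region; w1 = 1 - t and w2 = t give the linear weighting."""
    w2 = t
    w1 = 1.0 - t
    return w1 * pa + w2 * pb

def blend_pixels_smooth(pa, pb, t):
    """Variant with an S-shaped weight curve (smoothstep), which keeps
    the weights near 0 or 1 longer and can reduce edge degradation."""
    s = t * t * (3.0 - 2.0 * t)
    return (1.0 - s) * pa + s * pb
```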
  • As described above, the processing of the geometric correction unit 210 is formed by the second coordinate transformation, which specifies a sensor unit capable of reading each pixel position of corrected image data without reading a pixel value, and the third coordinate transformation, which obtains a pixel value (tilt corrected pixel value) including tilt correction. It is, therefore, possible to efficiently obtain satisfactory image data which can be processed in an arbitrary pixel order and for which the attachment error of each sensor unit and the tilt of the original have been corrected.
  • In other words, the image reading apparatus according to the second embodiment, in which the plurality of sensor units are arranged so that an original of a larger size can be read, can efficiently execute geometric correction processing including correction of the attachment error of each sensor unit and correction of the tilt of the original.
  • In the second embodiment, the plurality of sensor units are arranged so as to obtain image data having no gap in the main scanning direction X. That is, as shown in FIG. 2B , the sensor units 102 a to 102 d are arranged so that sensor units adjacent to each other in the main scanning direction X partially overlap each other, and have different offsets in the sub-scanning direction Y.
  • In the third embodiment, by contrast, the plurality of sensor units are arrayed in a line in the main scanning direction X so as not to intentionally change the offsets in the sub-scanning direction Y.
  • As a result, image data is obtained in which the image data corresponding to the gaps between adjacent sensor units is missing. Note that pixels corresponding to the missing portion of the image data will be referred to as "missing pixels" hereinafter.
  • The sensor characteristic correction unit 202 performs shading correction of the image data output from the sensor units 102 a to 102 d .
  • Correction values to be used for shading correction are stored in the memory in advance.
  • The memory need only have a memory capacity capable of storing correction values for the light-receiving elements of the sensor units 102 a to 102 d to correct the difference between the characteristics of the respective light-receiving elements of the sensor units 102 a to 102 d , or the shift of the characteristics in the main scanning direction X. If, however, the number of pixels in the main scanning direction X is increased by increasing the number of sensor units to be implemented, the memory capacity of the memory also increases.
  • In the third embodiment, image processing of reducing the memory capacity of a memory for shading correction and improving the image quality by tilt correction in an image reading apparatus in which a plurality of sensor units are arrayed in a line in the main scanning direction X will be described.
  • The image processing according to the third embodiment works especially effectively when a line extending in the sub-scanning direction is positioned near the gap between the sensor units, and is read by the plurality of sensor units due to the tilt of an original.
  • An image reading unit 501 reads an original image using a plurality of sensor units. As described above, an image on a tilted original may be read. For the sake of simplicity, assume that the image reading unit 501 includes two sensor units 402 a and 402 b , and there is a gap between the sensor units.
  • FIG. 9A shows an example of an image on an original 404 on which two black lines orthogonal to each other and having a width of two pixels are drawn on a white ground.
  • The image reading unit 501 reads the image from the tilted original 404 .
  • As shown in FIG. 9B, assume that each of the sensor units 402 a and 402 b generates image data of 11 pixels per line, there is a gap of one pixel between the sensor units 402 a and 402 b , and one missing pixel occurs on each line of the image.
  • A memory controller (MEMC) 502 writes the image data output from the image reading unit 501 in a memory 503 .
  • The MEMC 502 reads out pixel values in the sub-scanning direction Y from the memory 503 .
  • The reason why the MEMC 502 reads out the pixel values in the sub-scanning direction Y is to reduce the memory amount of a shading correction unit 504 (to be described later).
  • A data format used when the MEMC 502 transfers the pixel values read out from the memory 503 will be described with reference to FIGS. 10A and 10B.
  • As shown in FIG. 10A, each pixel value is transferred as a pixel command to which a header area is added.
  • The header area includes band information indicating the position of a pixel in each band.
  • The band information includes at least a band start (BS) bit, a band end (BE) bit, a column start (CS) bit, and a column end (CE) bit.
  • For each of BS, BE, CS, and CE, "1" indicates "applicable" and "0" indicates "not applicable".
  • The header area can also include 1-bit attribute information indicating whether the pixel is a missing pixel. For the attribute information, "1" indicates a missing pixel, and "0" indicates a non-missing pixel.
  • For the pixel A 0 , for example, the pixel command indicates that the pixel is positioned at the start point of a band and a column in a transfer operation for each band, and is not a missing pixel.
  • The header area may also store, for example, information for identifying the format of the pixel command in addition to the above-described information; since this is not directly related to this embodiment, a description thereof will be omitted. Transferring a pixel value as such a pixel command enables an image processing unit of the succeeding stage to readily calculate a pixel position.
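A pixel command as described above could be modeled as a header plus a pixel value. The bit positions chosen below for the BS, BE, CS, CE, and attribute bits are hypothetical; the embodiment specifies which flags exist, but not their layout within the header.

```python
# Hypothetical header layout, assumed for illustration only:
# bit 4: BS, bit 3: BE, bit 2: CS, bit 1: CE, bit 0: missing-pixel attribute.
def make_pixel_command(value, bs=0, be=0, cs=0, ce=0, missing=0):
    """Pack band information and the attribute bit into a header byte
    and pair it with the pixel value."""
    header = (bs << 4) | (be << 3) | (cs << 2) | (ce << 1) | missing
    return (header, value)

def is_missing(command):
    """True if the command's attribute bit marks a missing pixel."""
    header, _ = command
    return header & 1 == 1
```

A succeeding-stage unit can then track band/column boundaries by testing the BS/BE/CS/CE bits of each incoming command.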
  • In FIG. 11 , a circle drawn by a solid line represents a pixel (to be referred to as an "effective pixel" hereinafter) read by the image reading unit 501 .
  • Column numbers are given in the main scanning direction X,
  • and line numbers are given in the sub-scanning direction Y.
  • The pixel A 0 corresponds to a pixel at a position having a line number A and a column number 0. Pixels at positions having column numbers 0 to 10 and column numbers 12 to 22 are effective pixels.
  • FIG. 11 also shows a column corresponding to the column number 11 .
  • A circle drawn by a broken line on the column having the column number 11 corresponds to a missing pixel.
  • Although FIG. 11 virtually shows the two lines of the original 404 shown in FIG. 9A , the two lines do not actually exist.
  • Arrows shown in FIG. 11 represent the reading order (output order) of the pixel values.
  • A pixel represented by a hatched circle corresponds to a column start (CS) or column end (CE) pixel,
  • and a pixel represented by a crosshatched circle corresponds to a band start (BS) or band end (BE) pixel.
  • Pixels from the BS pixel to the BE pixel are a unit of band processing,
  • and pixels from the CS pixel to the CE pixel are a unit of column processing within a band.
  • The shading correction unit 504 performs shading correction of the pixel value included in the pixel command read out by the MEMC 502 from the memory 503 to correct variations in luminance in the main scanning direction X and the like.
  • A correction value for shading correction is different for each light-receiving element of the sensor units 402 a and 402 b . In other words, a correction value for shading correction exists for each column.
  • FIG. 12 shows shading correction processing executed for each band.
  • The shading correction unit 504 sets the correction value of the column 0 in the memory (S 601).
  • The shading correction unit 504 loads a pixel command (S 602), performs shading correction of a pixel value included in the pixel command using the correction value set in the memory (S 603), and outputs the pixel command having undergone shading correction (S 604).
  • Since the pixel values read out by the MEMC 502 in the sub-scanning direction Y are input, the shading correction unit 504 does not require a memory amount for setting the correction values of all the light-receiving elements of the sensor units 402 a and 402 b . In other words, it is only necessary to provide, in the shading correction unit 504 , a memory having a memory capacity capable of setting a correction value for at least one column (one light-receiving element), and to replace the correction value for each unit of column processing.
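The per-band flow of steps S 601 to S 604 can be sketched as follows. Because pixels arrive in column order, only one column's correction value needs to be resident at a time. The gain/offset correction model here is an assumed stand-in; the embodiment does not fix the shading-correction formula.

```python
def shading_correct_by_column(image_cols, gains, offsets):
    """Apply shading correction to image data transferred column by
    column.  image_cols[c] holds the pixel values of column c for one
    band, read in the sub-scanning direction.  Every pixel of a column
    shares one light-receiving element, so only a single correction
    value pair must be held while that column is processed."""
    corrected = []
    for c, column in enumerate(image_cols):
        gain, offset = gains[c], offsets[c]  # set the correction value for this column (cf. S601)
        # correct and output every pixel of the column (cf. S602-S604)
        corrected.append([gain * p + offset for p in column])
    return corrected
```

In hardware this loop corresponds to replacing the single-column correction memory at each CS boundary rather than storing all columns' values at once.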
  • A missing pixel insertion unit 505 receives the pixel commands having undergone shading correction, and inserts the pixel command of a missing pixel at a position corresponding to the gap between the sensor units 402 a and 402 b in the image data formed by a set of pixel commands, based on missing pixel information (to be described later).
  • The missing pixel insertion unit 505 generates attribute information based on the missing pixel information, and adds the attribute information to the header area of the inserted pixel command.
  • Note that the missing pixel insertion unit 505 only inserts the missing pixel without correcting the pixel value of the missing pixel.
  • A missing pixel correction unit 509 (to be described later) corrects the pixel value of the missing pixel. That is, the processing (to be referred to as "missing pixel correction" hereinafter) of inserting the missing pixel and correcting its pixel value is divided into processing executed by the missing pixel insertion unit 505 and processing executed by the missing pixel correction unit 509 . Dividing missing pixel correction in this way makes it possible to perform missing pixel correction on image data having undergone tilt correction.
  • An MEMC 506 having the same arrangement as that of the MEMC 502 writes the pixel commands output from the missing pixel insertion unit 505 in a memory 507 shared with a skew correction unit 508 .
  • The MEMC 506 reads out the pixel commands from the memory 507 in the writing order.
  • The skew correction unit 508 receives each pixel command read out by the MEMC 506 from the memory 507 , and calculates an address in the memory 507 based on the band information in the header area of the pixel command and the tilt angle θ of the original acquired from the above-described original placement information.
  • The skew correction unit 508 reads out a pixel command corresponding to a position rotated by an angle of −θ by reading out the pixel command stored in the memory 507 according to the calculated address.
  • The skew correction unit 508 interpolates the pixel value and the attribute information of the pixel command read out from the memory 507 by referring to the pixel values and the pieces of attribute information of the neighboring pixel commands, thereby generating an image (to be referred to as a "tilt corrected image" hereinafter) having undergone tilt correction. Note that a method such as the nearest neighbor method or bilinear interpolation suffices for this interpolation.
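The rotated readout performed by the skew correction unit can be sketched with nearest neighbor interpolation as follows. Rotating about the image center and using a white background value for out-of-range addresses are illustrative assumptions; the unit's actual address calculation operates on pixel commands in the memory 507.

```python
import math

def rotate_nearest(image, theta):
    """Generate a tilt corrected image by reading, for each output
    pixel, the input pixel at the position rotated by theta about the
    image center (nearest neighbor).  Out-of-range source positions
    are filled with a white background value (255)."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # source position that maps onto output pixel (x, y)
            sx = cos_t * (x - cx) + sin_t * (y - cy) + cx
            sy = -sin_t * (x - cx) + cos_t * (y - cy) + cy
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = image[iy][ix]
    return out
```

In the embodiment the same lookup is applied to the attribute information carried by each pixel command, so the missing-pixel flags are rotated together with the pixel values.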
  • FIG. 13 shows an example of the tilt corrected image.
  • A circle drawn by a broken line shown in FIG. 13 corresponds to a missing pixel having attribute information "1".
  • FIG. 13 virtually shows the two lines of the original 404 , but the two lines do not actually exist. Note that even if tilt correction is performed, it may be impossible to obtain lines as satisfactory as those shown in FIG. 13 . In this case, the image quality is improved by performing filter processing such as edge enhancement processing or thickening processing on the tilt corrected image.
  • The skew correction unit 508 outputs the pixel commands of the tilt corrected image in the sub-scanning direction Y. At this time, the skew correction unit 508 resets the band information of each pixel command to be output in accordance with the transfer order shown in FIG. 13 .
  • In each pixel command of the tilt corrected image to be output, not only the pixel value but also the added attribute information has undergone tilt correction.
  • The missing pixel correction unit 509 of the succeeding stage, therefore, can determine whether a pixel of interest is a missing pixel by referring to the attribute information of the tilt corrected image.
  • The missing pixel correction unit 509 receives the pixel commands of the tilt corrected image, determines a missing pixel by referring to the attribute information, and corrects the pixel value of the missing pixel. For example, the missing pixel correction unit 509 need only calculate, as the value of the missing pixel, the average of the values of the non-missing pixels (to be referred to as "right and left effective pixels" hereinafter) adjacent to the missing pixel in the main scanning direction.
  • Alternatively, the missing pixel correction unit 509 may calculate, as the value of the missing pixel, the weighted average of the values of two right effective pixels and two left effective pixels, or the average or weighted average of the values of the right and left effective pixels, the value of an effective pixel obliquely above the missing pixel, and the value of an effective pixel obliquely below the missing pixel.
  • The missing pixel correction unit 509 sets the calculated value as the pixel value of the pixel command of the missing pixel, thereby completing missing pixel correction. Note that upon completion of missing pixel correction, a significant value is set as the pixel value of the pixel command of the missing pixel; the pixel value of the pixel command of the missing pixel before the completion of missing pixel correction is arbitrary.
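The simplest option mentioned above, averaging the right and left effective pixels, can be sketched per line as follows. Falling back to a single neighbor when a missing pixel sits at a line end is an assumption made for this sketch.

```python
def correct_missing_pixels(values, missing):
    """Replace each missing pixel (missing[i] is True) with the average
    of the nearest non-missing pixels to its left and right on the same
    line; if only one such neighbor exists, use it directly."""
    out = list(values)
    for i, m in enumerate(missing):
        if not m:
            continue
        # nearest effective pixel to the left (already-corrected values allowed)
        left = next((out[j] for j in range(i - 1, -1, -1) if not missing[j]), None)
        # nearest effective pixel to the right (original values)
        right = next((values[j] for j in range(i + 1, len(values)) if not missing[j]), None)
        neighbors = [v for v in (left, right) if v is not None]
        if neighbors:
            out[i] = sum(neighbors) / len(neighbors)
    return out
```

The weighted and oblique-neighbor variants mentioned in the text would only change how `neighbors` is gathered and combined.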
  • An image treatment unit 510 receives the image data having undergone missing pixel correction from the missing pixel correction unit 509 , and performs necessary image processing of the image data.
  • The image processing executed by the image treatment unit 510 includes, for example, color conversion processing, magnification processing, and filter processing.
  • An MEMC 511 having the same arrangement as that of the MEMC 502 writes the image data output from the image treatment unit 510 in a memory 512 , and outputs the image data stored in the memory 512 according to a request of a processing unit connected to the succeeding stage of the image processing unit 500 .
  • Examples of the processing unit of the succeeding stage of the image processing unit 500 are a printing unit for executing print processing, and a communication unit for transmitting the image data to a client apparatus.
  • FIG. 15 shows processing for one band.
  • FIGS. 16A and 16B each show an example of a pixel command.
  • The missing pixel insertion unit 505 inputs missing pixel information acquired in advance (S 701).
  • The missing pixel information indicates a missing pixel position corresponding to the position of the gap between the plurality of sensor units, and a missing pixel width corresponding to the width of the gap.
  • The missing pixel information can be acquired from the above-described attachment position information indicating the specification at the time of attaching the sensor units 402 a and 402 b , or acquired by performing a pre-scan.
  • In the example of FIG. 11 , the missing pixel information indicates a missing pixel position "11" and a missing pixel width "1".
  • The missing pixel insertion unit 505 initializes the count value of a column counter 706 to 0 (S 702).
  • The column counter 706 is an internal counter of the missing pixel insertion unit 505 , and is used to detect a column position.
  • The pixel command generation unit 703 receives a pixel command having undergone shading correction (S 704), and stores the received pixel command in a column memory 704 (S 705).
  • The column memory 704 is an internal memory of the missing pixel insertion unit 505 , and can store pixel commands for at least one column. Note that the pixel command stored in the column memory 704 is used to generate the pixel command of the missing pixel (to be described later).
  • The attribute information adding unit 705 outputs the pixel command of the effective pixel obtained by adding attribute information "0" to the pixel command received from the pixel command generation unit 703 (S 706). For example, a pixel command including attribute information "0" shown in FIG. 16A is output for the pixel A 10 shown in FIG. 11 .
  • The missing pixel insertion unit 505 initializes a count value Lcnt of a line counter 707 to 0 (S 710).
  • The line counter 707 is an internal counter of the missing pixel insertion unit 505 , and is used to generate an address in the column memory 704 by counting lines.
  • The pixel command generation unit 703 reads out a pixel command from the address in the column memory 704 corresponding to the count value Lcnt of the line counter 707 (S 711).
  • The attribute information adding unit 705 outputs the pixel command of the missing pixel obtained by adding attribute information "1" to the pixel command received from the pixel command generation unit 703 (S 712).
  • For example, the pixel command shown in FIG. 16B is obtained by copying the pixel command of the pixel A 10 shown in FIG. 16A as the pixel command of the pixel A 11 shown in FIG. 11 , and changing the attribute information of the pixel command to "1".
  • Since the pixel value of the missing pixel is arbitrary, the pixel command generation unit 703 or the attribute information adding unit 705 may rewrite the pixel value to a predetermined value, for example, 0 or FF in the case of 8 bits.
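The insertion flow above, which copies the pixel command held in the column memory and tags the copy with attribute "1", can be sketched on a per-column stream as follows. Representing the stream as a list of columns and copying the previously output column are simplifications of the column-memory mechanism, and the sketch assumes the gap does not sit at column 0.

```python
def insert_missing_columns(columns, missing_pos, missing_width):
    """Insert missing-pixel columns at the gap position.  Each inserted
    column copies the pixel values of the previously output column
    (standing in for the column memory 704) and carries attribute 1;
    effective columns carry attribute 0.  Returns a list of
    (column_values, attribute) pairs in output column order."""
    out = []
    for c, col in enumerate(columns):
        if c == missing_pos:
            for _ in range(missing_width):
                # copy the last effective column's values, tag as missing
                out.append((list(out[-1][0]), 1))
        out.append((list(col), 0))
    return out
```

With the FIG. 11 example (missing pixel position "11", width "1"), column 11 of the output would be a tagged copy of column 10, later overwritten by the missing pixel correction unit.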
  • The MEMC 506 writes an image before tilt correction in the memory 507 , and then the skew correction unit 508 reads out, from the memory 507 , an image rotated by an angle of −θ to correct the tilt angle θ of the original.
  • Conversely, the skew correction unit 508 may write a tilt corrected image in the memory 507 , and the MEMC 506 may read out the tilt corrected image from the memory 507 .
  • In that case, however, the skew correction unit 508 performs a write operation by randomly accessing the memory 507 , which makes it difficult to control the timing at which the MEMC 506 reads out the tilt corrected image from the memory 507 .
  • The third embodiment assumes that it is possible to assign 1-bit attribute information to a header area like the format of the pixel command shown in FIG. 10A .
  • A format used when an MEMC 502 transfers a pixel command according to the fourth embodiment will be described with reference to FIGS. 17A to 17C . Unlike the format shown in FIG. 10A , no attribute information is assigned to the header area of the format of the pixel command according to the fourth embodiment shown in FIG. 17A . Attribute information is instead represented using lower bits of the data area, as described below.
  • A missing pixel insertion unit 505 inserts a missing pixel by the same method as that in the third embodiment. Note that since no attribute information is assigned to the header area, the 0th bit of the pixel value of an effective pixel is set to the same value as that of the first bit. FIG. 17B shows the pixel command of an effective pixel. If the first bit of the pixel value is "1", the 0th bit is set to "1". If the first bit is "0", the 0th bit is set to "0".
  • For a missing pixel, the missing pixel insertion unit 505 sets the 0th bit of the pixel value to a value different from that of the first bit.
  • FIG. 17C shows the pixel command of the missing pixel. If the first bit of the pixel value is "1", the 0th bit is set to "0". If the first bit is "0", the 0th bit is set to "1".
  • In other words, the lower two bits of a pixel value are used as attribute information for determining whether the corresponding pixel is an effective pixel or a missing pixel.
  • The reason why attribute information is represented by the lower two bits is to minimize variations in pixel value.
  • The lower two bits of the pixel value of the effective pixel before and after attribute information is added are as follows:
  • If the lower two bits of the pixel value of the effective pixel before the bit operation are "01" or "10", the pixel value varies. In either case, the pixel value varies at a minimum level of one. Thus, there is almost no visual influence.
  • The pixel value of the missing pixel before correction is not a significant value, as described above, except that the lower two bits indicate a missing pixel. Even if, therefore, the pixel value varies, there is no influence.
  • Alternatively, if only the lowest bit of the pixel value is used, "0" may indicate an effective pixel and "1" may indicate a missing pixel.
  • Conversely, "0" may indicate a missing pixel and "1" may indicate an effective pixel. That is, it is only necessary to indicate an effective pixel or a missing pixel by operating the lower bit of the pixel value.
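The lower-two-bit encoding can be sketched as follows: an effective pixel's 0th bit is forced equal to its first bit, a missing pixel's 0th bit is forced different, and the decoder compares the two bits. The helper names are illustrative, not from the patent.

```python
def tag_pixel(value, missing):
    """Encode the effective/missing attribute in the lower two bits of
    an 8-bit pixel value: make the 0th bit equal to the first bit for
    an effective pixel, different for a missing pixel.  The value
    changes by at most one."""
    bit1 = (value >> 1) & 1
    bit0 = bit1 ^ (1 if missing else 0)
    return (value & ~1) | bit0

def is_missing_pixel(value):
    """True if the lower two bits differ, i.e. the pixel is missing."""
    return ((value >> 1) & 1) != (value & 1)
```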
  • A skew correction unit 508 performs the same tilt correction processing as that in the third embodiment.
  • During this correction, the lower bits of the pixel value are maintained to preserve the attribute information. That is, if the lower bits before correction of a pixel of interest indicate an effective pixel, the value of the 0th bit is changed, as needed, so that the lower bits after correction also indicate an effective pixel. Similarly, if the lower bits before correction indicate a missing pixel, the value of the 0th bit is changed, as needed, so that the lower bits after correction also indicate a missing pixel.
  • A missing pixel correction unit 509 determines a missing pixel by referring to the lower bits of each pixel command of the tilt corrected image output from the skew correction unit 508 , and corrects the pixel value of the missing pixel, similarly to the third embodiment.
  • The image processing apparatus according to the third or fourth embodiment is applicable to an image reading apparatus using four sensor units, as described in the first and second embodiments, or an image reading apparatus using more than four sensor units. That is, in each of the aforementioned embodiments, the number of sensor units is not limited as long as the image reading apparatus uses two or more sensor units.
  • Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Facsimile Scanning Arrangements (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Input (AREA)
US14/469,731 2013-09-06 2014-08-27 Image processing apparatus, method therefor, and image reading apparatus Active US9451125B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/236,712 US9614996B2 (en) 2013-09-06 2016-08-15 Image processing apparatus, method therefor, and image reading apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2013-185705 2013-09-06
JP2013185705 2013-09-06
JP2014118135A JP6335663B2 (ja) 2013-09-06 2014-06-06 Image processing apparatus and method therefor, and image reading apparatus
JP2014-118135 2014-06-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/236,712 Division US9614996B2 (en) 2013-09-06 2016-08-15 Image processing apparatus, method therefor, and image reading apparatus

Publications (2)

Publication Number Publication Date
US20150070734A1 US20150070734A1 (en) 2015-03-12
US9451125B2 true US9451125B2 (en) 2016-09-20

Family

ID=52625334

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/469,731 Active US9451125B2 (en) 2013-09-06 2014-08-27 Image processing apparatus, method therefor, and image reading apparatus
US15/236,712 Active US9614996B2 (en) 2013-09-06 2016-08-15 Image processing apparatus, method therefor, and image reading apparatus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/236,712 Active US9614996B2 (en) 2013-09-06 2016-08-15 Image processing apparatus, method therefor, and image reading apparatus

Country Status (2)

Country Link
US (2) US9451125B2 (fr)
JP (1) JP6335663B2 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6403393B2 (ja) * 2014-02-12 2018-10-10 Sumitomo Heavy Industries, Ltd. Image generation apparatus
JP6247242B2 (ja) * 2015-03-16 2017-12-13 FUJIFILM Corporation Reading apparatus, correction value calculation method and program therefor, and inkjet recording apparatus
JP6344862B2 (ja) * 2015-09-02 2018-06-20 FUJIFILM Corporation Inspection apparatus, inspection method and program, and image recording apparatus
KR20180019976A (ko) * 2016-08-17 2018-02-27 S-Printing Solution Co., Ltd. Image forming apparatus, method of correcting a scanned image thereof, and non-transitory computer-readable recording medium
JP2020030350A (ja) * 2018-08-23 2020-02-27 Konica Minolta, Inc. Image inspection apparatus and program
WO2020246713A1 (fr) * 2019-06-03 2020-12-10 주식회사 바딧 Method and system for correcting sensor data based on a user-specific operation, and non-transitory computer-readable recording medium
KR102176769B1 (ko) * 2019-06-03 2020-11-09 주식회사 바딧 Method, system, and non-transitory computer-readable recording medium for correcting sensor data based on a user's behavioral characteristics
JP2023086020A (ja) * 2021-12-09 2023-06-21 Canon Inc. Image forming apparatus and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07221943A (ja) 1995-02-01 1995-08-18 Olympus Optical Co Ltd Image handling apparatus
JP2003319160A (ja) 2002-04-19 2003-11-07 Kyocera Mita Corp Image reading apparatus and image forming apparatus
US20100165420A1 * 2008-12-26 2010-07-01 Canon Kabushiki Kaisha Image processing appratus, image processing method and computer program
JP2012023564A (ja) 2010-07-14 2012-02-02 Fuji Xerox Co Ltd Image processing apparatus
US20120127536A1 (en) * 2010-11-23 2012-05-24 Kinpo Electronics, Inc. Method for image correction and scanner using the same

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06339007A (ja) * 1993-05-27 1994-12-06 Hitachi Ltd Image reading apparatus
JP5429035B2 (ja) * 2010-05-11 2014-02-26 Mitsubishi Electric Corp Contact image sensor


Also Published As

Publication number Publication date
US20150070734A1 (en) 2015-03-12
US9614996B2 (en) 2017-04-04
US20160352965A1 (en) 2016-12-01
JP6335663B2 (ja) 2018-05-30
JP2015073263A (ja) 2015-04-16

Similar Documents

Publication Publication Date Title
US9614996B2 (en) Image processing apparatus, method therefor, and image reading apparatus
JP6115781B2 (ja) Image processing apparatus and image processing method
JP5250847B2 (ja) Image processing apparatus, information processing system, image processing method, and program
JP2012044530A (ja) Image correction apparatus and image correction method
US9633280B2 (en) Image processing apparatus, method, and storage medium for determining pixel similarities
US11076092B2 (en) Image processing apparatus, image processing method, and image processing program
US20130242126A1 (en) Image processing device
JP4124096B2 (ja) Image processing method, image processing apparatus, and program
JP2019179342A (ja) Image processing apparatus and image processing method
US9082186B2 (en) Image processing apparatus and image processing method
US10158777B2 (en) Image processing apparatus including a correction circuit configured to stop formation of an inclination-corrected line in a main scanning direction, image forming apparatus, image processing method and non-transitory computer readable medium
JP2002232654A (ja) Image processing apparatus, image processing method, and computer-readable recording medium recording a program for causing a computer to execute the method
US20160191748A1 (en) Image forming apparatus, image forming method, and storage medium
JP2017108323A (ja) Image processing apparatus, image processing method, image reading apparatus, and program
JP5955003B2 (ja) Image processing apparatus, image processing method, and program
US9270900B2 (en) Movie processing apparatus and control method therefor
JP5340021B2 (ja) Image processing apparatus, image processing method, and program
JP6058115B2 (ja) Image processing apparatus, image processing method, image reading apparatus, and program
JP2015201677A (ja) Image processing apparatus and image processing method
US10298808B2 (en) Image processing apparatus, image processing method, and recording medium
JP6859781B2 (ja) Image processing apparatus, image processing method, and program
WO2021075314A1 (fr) Image processing device, image processing method, and computer-readable recording medium
CN109819137B (zh) Image acquisition and output method
US20230306566A1 (en) Image processing apparatus, image processing method, and storage medium
JP2012070168A (ja) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAGIWARA, KATSUYUKI;TSUTSUMI, TAKAYUKI;REEL/FRAME:034901/0976

Effective date: 20140822

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8