WO2007093241A1 - Verfahren und vorrichtung zum scannen von bildern - Google Patents

Verfahren und Vorrichtung zum Scannen von Bildern (Method and device for scanning images)

Info

Publication number
WO2007093241A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
image
source image
filter
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2006/069197
Other languages
German (de)
English (en)
French (fr)
Inventor
Bernhard Frei
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Production Printing Germany GmbH and Co KG
Original Assignee
Oce Printing Systems GmbH and Co KG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oce Printing Systems GmbH and Co KG filed Critical Oce Printing Systems GmbH and Co KG
Priority to JP2008554614A priority Critical patent/JP4773532B2/ja
Priority to AT06841280T priority patent/ATE522083T1/de
Priority to US12/279,360 priority patent/US7746519B2/en
Priority to EP06841280A priority patent/EP1985105B8/de
Publication of WO2007093241A1 publication Critical patent/WO2007093241A1/de
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/48Picture signal generators
    • H04N1/486Picture signal generators with separate detectors, each detector being used for one specific colour component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/19Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
    • H04N1/191Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a one-dimensional array, or a combination of one-dimensional arrays, or a substantially one-dimensional array, e.g. an array of staggered elements
    • H04N1/192Simultaneously or substantially simultaneously scanning picture elements on one main scanning line
    • H04N1/193Simultaneously or substantially simultaneously scanning picture elements on one main scanning line using electrically scanned linear arrays, e.g. linear CCD arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/19Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
    • H04N1/191Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a one-dimensional array, or a combination of one-dimensional arrays, or a substantially one-dimensional array, e.g. an array of staggered elements
    • H04N1/192Simultaneously or substantially simultaneously scanning picture elements on one main scanning line
    • H04N1/193Simultaneously or substantially simultaneously scanning picture elements on one main scanning line using electrically scanned linear arrays, e.g. linear CCD arrays
    • H04N1/1932Simultaneously or substantially simultaneously scanning picture elements on one main scanning line using electrically scanned linear arrays, e.g. linear CCD arrays using an array of elements displaced from one another in the sub scan direction, e.g. a diagonally arranged array
    • H04N1/1933Staggered element arrays, e.g. arrays with elements arranged in a zigzag

Definitions

  • The present invention relates to a method and an apparatus for scanning images, in which a source image is scanned by means of a plurality of color sensor elements.
  • The source image can be either a black-and-white image or a color image.
  • The target image generated by scanning is a black-and-white image.
  • Black-and-white images can be scanned with a color scanner having a plurality of color sensor elements.
  • The source image is usually scanned in the same way as a color image, and the pseudo-color image thus obtained is corrected so that the gray levels in the target image match the corresponding brightness values in the source image as closely as possible.
  • Black-and-white images are thus scanned at the same speed and resolution as color images.
  • In the method according to the invention, each pixel is scanned not by all color sensor elements but only by a single color sensor element, which makes it possible for the plurality of color sensor elements to simultaneously scan a plurality of pixels. This increases the scanning speed, or achieves a much higher resolution at the same scanning speed as in conventional scanning devices.
  • The signals are corrected accordingly.
  • Colored regions of the source image are detected to different degrees by the different color sensor elements.
  • A red, a green and a blue color sensor element are provided.
  • A green color area in the source image is detected well by the green color sensor element, whereas the blue and red color sensor elements hardly output any signal for a green color area.
  • Such frequency artifacts can be detected and corrected with appropriate filters.
  • In known methods, the pixels obtained with the different color sensor elements are first sorted so that they are arranged as they are in the source image. This is necessary because the different sensor elements are spaced apart from one another, so that each sensor element produces its own data stream, in which the values of the individual pixels are offset from one data stream to the next. An illustration of this sorting step is sketched below.
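  • The following sketch is purely illustrative and not part of the patent: it merges three hypothetical per-color data streams into one sequence ordered by source-image position, which is exactly the sorting step the invention avoids. The positions, pixel values and the strict r-g-b ordering are assumptions made only for this example.

```python
# Illustrative sketch (hypothetical data): conventional pre-sorting of the three
# per-color data streams into one sequence ordered by source-image position.
# Each entry is (position in the source image, color, pixel value).
stream_r = [(0, "r", 17), (3, "r", 40), (6, "r", 52)]
stream_g = [(1, "g", 23), (4, "g", 44), (7, "g", 58)]
stream_b = [(2, "b", 31), (5, "b", 47), (8, "b", 61)]

# Conventional approach: merge all pixels and sort them by source position
# before a single large filter can be applied.
merged = sorted(stream_r + stream_g + stream_b, key=lambda px: px[0])
print([f"{color}{pos}" for pos, color, _ in merged])
# -> ['r0', 'g1', 'b2', 'r3', 'g4', 'b5', 'r6', 'g7', 'b8']
```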
  • The invention is based on the object of providing a method and an apparatus for scanning an image to produce a target image in black-and-white representation, with which a high sampling rate at a high resolution is possible, the target image can be generated quickly, and essentially the same hardware can be used as when generating a target image in color representation.
  • The method according to the invention for scanning an image comprises the following steps: a source image is scanned with a sensor comprising a plurality of line arrays, each of which detects a particular color, so that each line array generates a color separation representing the source image in the form of pixels; a partial FIR filter is then applied to the color separations in order to map the pixels of the plurality of color separations onto a single target image.
  • The sub-filters are applied directly to the individual color separations, without the color separations having to be merged into a single data unit. This saves the corresponding sorting process, because applying the sub-filters selects the correct pixels from the separations.
  • Using three small sub-filters requires significantly less processing power than using a single, much larger overall filter. As a result, the automatic execution of the method according to the invention is significantly accelerated.
  • An interpolation from the pixels of the source image to the pixels of the target image is preferably also carried out.
  • The device according to the invention for scanning an image comprises
  • a sensor comprising a plurality of line arrays for scanning a source image, each adapted to detect a particular color, so that each line array generates a color separation of the source image to be scanned, representing the source image in the form of pixels, and a control unit which is designed to apply a partial FIR filter to the color separations in order to map the pixels of the plurality of color separations onto a single target image.
  • The mapping of the pixels of the plurality of color separations onto a target image is preferably carried out as an interpolation over a length range corresponding to two adjacent pixels in a color separation. Due to the offset of the color separations, one pixel from each of the different color separations lies in this length range, at equal distances from one another, when the color separations are superimposed. The interpolation is thus carried out across these color separations, and an additional interpolation is possible without an additional process step. With this additional interpolation it is possible to interpolate to pixels that lie between the pixels of the color separations.
  • Figure 1 shows the parts of a copier including a scanner;
  • Figure 2 shows an optical sensor of the scanner;
  • Figure 3 shows a sensor unit with three color sensor elements;
  • Figure 4 shows a block diagram of an evaluation device for carrying out the method according to the invention.
  • Figure 1 shows a copier 1 with a scanner 2 and a printer 3.
  • A control unit 4 controls the scanner 2 and the printer 3 and is connected to an operation unit 5 and an external network 6.
  • The signals generated during scanning are converted into digital data and stored in a memory 7 in the control unit 4.
  • The image data stored in the memory 7 can be printed by the control unit 4 directly on the printer 3 or sent via the external network 6.
  • The scanner 2 has an optical sensor 8 (Figure 2).
  • The sensor 8 is provided with three parallel CCD line arrays, each forming a color sensor element for the colors red R, green G and blue B.
  • The CCD line arrays are each provided with a red, green or blue color filter.
  • In Figure 2 the CCD line arrays are shown arranged directly adjacent to one another.
  • In practice, however, the individual CCD line arrays are spaced a certain distance apart from each other for manufacturing reasons.
  • The individual CCD line arrays are driven by means of clock signals, which are supplied via corresponding clock signal lines 9, one pixel being read out on each clock.
  • The signals read out from the CCD line arrays are output via a respective signal line 10.
  • This sensor 8 thus comprises three CCD line arrays, each representing a color sensor element and each generating a separate data stream of red, green or blue pixels.
  • After scanning a source image, three color separations for the colors red, green and blue are thus present in this embodiment.
  • The separations each represent a coordinate system in which the individual pixels are arranged. Due to the spatial offset of the color sensor elements R, G, B, the color separations are slightly offset from each other in these coordinate systems. The offset between two color separations depends on the distance between the corresponding color sensor elements.
  • The offset between the color separations for the colors red and blue is thus 30 2/3 pixels.
  • In Figure 3 the sensor 8 is shown schematically. It comprises an objective 11, the three color sensor elements R, G, B, which are each provided with a color filter 12, and a source image 13, on which several rows of pixels 14 are shown schematically and numbered from 1 to 22. If the individual color sensor elements R, G, B are driven simultaneously to each detect a pixel, the offset between two color separations of adjacent color sensor elements in the embodiment shown in Figure 3 is five pixels. By staggered driving of the individual color sensor elements and corresponding adjustment of the relative speed (arrow A in Figure 3), non-integer offsets can also be achieved.
  • The signals detected by the sensor are corrected. This is done by means of a filter.
  • The application of a filter to the pixels detected by the sensor is generally carried out as a scalar product: the filter is a vector with a plurality of filter coefficients a1, a2, ..., which is multiplied by a vector consisting of the values of the pixels. The pixels to which the filter is applied lie next to one another in the source image. A minimal sketch of this operation follows below.
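  • The following minimal sketch, with made-up coefficients and pixel values, only illustrates the scalar-product form of the filter application described above:

```python
# Minimal sketch (made-up numbers): applying a filter as a scalar product of the
# coefficient vector with the values of adjacent pixels in the source image.
coeffs = [0.1, 0.2, 0.4, 0.2, 0.1]   # filter coefficients a1, a2, ...
pixels = [52, 60, 70, 66, 58]        # values of adjacent pixels

filtered = sum(a * p for a, p in zip(coeffs, pixels))
print(filtered)                      # weighted value contributed to the target pixel
```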
  • The application of a filter to pixels is described in detail in WO 2005/114573 A1, to which reference is therefore made in its entirety and which is incorporated into the present application.
  • To apply such a filter in the conventional way, the three color separations generated by the color sensor elements would have to be combined into a single image in which the corresponding pixels are sorted according to their position in the source image.
  • For the sensor 8 shown in Figures 2 and 3, this results in the following sequence of pixels:
  • The basic principle of the present invention is that the three data streams of the three color channels of the three color sensor elements are not merged prior to filtering; instead, the filter is designed so that the corresponding pixels from the three data streams, i.e. the three color separations, are only selected during the filtering process itself.
  • For easy application of the filter, first only the coordinate systems of the three color separations are aligned with one another.
  • If the color separations are offset by 15 1/3 pixels and 30 2/3 pixels, for example, the coordinate system of the green color separation is first shifted by 15 pixels towards the red color separation and the coordinate system of the blue color separation by 30 pixels towards the red color separation.
  • The parameters of the coordinate axes must be changed accordingly. This adjustment can be done very quickly.
  • This adjustment or the alignment of the coordinate systems is performed only with respect to integer pixels. Offsets by fractions of the distance between two adjacent pixels are not corrected here.
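  • As a small illustration of this integer alignment, the following sketch assumes the offsets of 15 1/3 and 30 2/3 pixels mentioned above, with the red separation taken as the reference; only the integer part is removed by shifting the coordinate axes, while the fractional remainder is left to the interpolating sub-filters:

```python
# Sketch (assumed offsets, red taken as reference): only the integer part of each
# offset is removed by shifting the coordinate system of the color separation;
# the fractional remainder is later handled by the interpolating sub-filters.
offsets = {"r": 0.0, "g": 15 + 1 / 3, "b": 30 + 2 / 3}

integer_shift = {c: int(o) for c, o in offsets.items()}        # applied to the coordinate axes
fractional_rest = {c: o - int(o) for c, o in offsets.items()}  # corrected inside the filter

print(integer_shift)     # {'r': 0, 'g': 15, 'b': 30}
print(fractional_rest)   # {'r': 0.0, 'g': 0.333..., 'b': 0.666...}
```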
  • Since the pixels of the target image are not always arranged in the same grid as the pixels of the source image, a specific pixel of the source image cannot always be mapped onto a specific pixel of the target image. Instead, a fictitious pixel of the source image must be mapped onto a pixel of the target image, where the fictitious pixel may lie between two actually captured pixels of the source image. In order to obtain this fictitious pixel, an interpolation is necessary. Therefore, in the present embodiment, multiple sets of three sub-filters each are generated, each set performing a different interpolation step.
  • A fictitious pixel which is shifted, relative to the center between the pixels g⁄ and r1 of the source image, by a distance of 1/11 in the direction of the pixel r1 is mapped onto a pixel at the corresponding position of the target image:
  • Corresponding filters are generated for the further shifts of the fictitious pixel by a respective distance of 1/11 of the distance between two adjacent pixels of the source image.
  • One filter is assumed for each interpolation shift and for each color. In the present embodiment there are a total of eleven interpolation shifts for three colors, so that a total of 33 such filters are to be created. With a different choice of interpolation steps, which is arbitrary per se, or with another sensor having a different number of color sensor elements, the number of filters changes accordingly.
  • The 33 sub-filters are denoted F1r, F1b, F1g, F2r, F2b, F2g, ..., F11r, F11b, F11g, where the number in this designation denotes the interpolation step and the letter indicates the color, i.e. the color separation to which the filter is to be applied.
  • The filters for the first interpolation shift are accordingly:
  • The sub-filters for the eleventh shift are as follows:
  • The generation of the sub-filters can be briefly summarized as follows: the scalar product of the filter vector with a vector comprising the corresponding pixels of the source image is written out, one such scalar product being formed for each application of the filter, i.e. for each interpolation shift.
  • These scalar products each extend over a complete sequence of pixels comprising several complete sets of pixels, each set of pixels containing exactly one pixel of each of the different color types (r, b, g).
  • Pixels that are not covered by any filter coefficient are multiplied by 0.
  • The sub-filters comprise as coefficients those factors of such a scalar product that are multiplied by a pixel of one specific color. These factors are adopted in the sub-filter in the same order as in the scalar product. If the corresponding sequences in the different color separations are multiplied by the corresponding sub-filters (scalar product) and the resulting values are added together, the gray tone for the corresponding pixel in the target image is obtained. This decomposition is illustrated by the sketch below.
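  • The following sketch, with made-up coefficients, pixel values and a simple r-g-b interleaving, only illustrates that such a decomposition into per-color sub-filters reproduces the result of the overall filter without any sorting step; the actual interleaving and coefficient values depend on the sensor geometry and the selected interpolation shift:

```python
# Sketch (made-up numbers): a single large filter applied to the sorted, interleaved
# sequence r0 g0 b0 r1 g1 b1 gives the same result as three small sub-filters
# applied directly to the individual color separations, with the partial sums added.
full_filter = [1, 2, 3, 4, 5, 6]                  # coefficients of the overall filter
red, green, blue = [10, 11], [20, 21], [30, 31]   # pixels of the color separations

# Reference: conventional approach with a sorted, interleaved pixel sequence.
interleaved = [red[0], green[0], blue[0], red[1], green[1], blue[1]]
direct = sum(a * p for a, p in zip(full_filter, interleaved))

# Sub-filters: all coefficients that multiply pixels of one specific color,
# kept in the same order as in the overall filter.
sub_r, sub_g, sub_b = full_filter[0::3], full_filter[1::3], full_filter[2::3]
partial = (sum(a * p for a, p in zip(sub_r, red))
           + sum(a * p for a, p in zip(sub_g, green))
           + sum(a * p for a, p in zip(sub_b, blue)))

print(direct, partial)                            # both print 475 -> identical gray value
```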
  • Figure 4 is a schematic block diagram of a device for carrying out the method according to the invention.
  • This device comprises a column counter 15 and a line counter 16.
  • The column counter 15 counts the columns in the target image.
  • The line counter 16 counts the lines in the target image. Once the column counter 15 has counted through all the columns in the target image, the line counter advances by one line. The pixels of the target image are thus counted through successively, line by line.
  • These two counters 15, 16 thus specify the pixel in the target image onto which a pixel from the source image is to be mapped.
  • The column counter 15 and the line counter 16 each have a multiplier 17, 18 connected downstream, with which the respective counter value is multiplied by a scaling factor.
  • The scaling factor represents the step size in the source image, so that the distance between two pixels in the target image, i.e. the distance between two adjacent columns or rows, is mapped into the source image in units of the distance between two adjacent pixels in the source image.
  • The multipliers 17, 18 are each followed by adders 19, 20, which in each case add a starting coordinate to the respective column or line number.
  • On the one hand, this starting coordinate determines the image section in the source image that is to be mapped onto the target image.
  • On the other hand, the starting coordinate centers the FIR filters around the respective pixel in the source image, which is explained in more detail below.
  • The column and row numbers corrected in this way are each supplied to a correction data generator 21, 22 and to an adder 23, 24.
  • The correction data generators 21, 22 use a predefined function or a look-up table to calculate, from the supplied column and row number, a correction value which, for example, compensates for a distortion error produced when the source image is captured using a specific optical system.
  • This correction value is supplied to the respective adder 23, 24 and added to the corresponding column or row number.
  • The column and row numbers obtained in this way indicate the coordinate in the source image from which the FIR filter explained above calculates the pixel of the target image having the column and row number specified by the column counter 15 and the row counter 16. A sketch of this coordinate computation follows below.
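  • The following sketch models this coordinate computation in software; the scaling factor, starting coordinate and the placeholder correction function are assumptions made only for this example:

```python
# Sketch (assumed parameters): mapping a target-image column or row number to a
# source-image coordinate via scaling factor, starting coordinate and an optional
# distortion correction, then splitting it into integer and fractional parts.

def distortion_correction(coord):
    """Placeholder for the correction data generators 21, 22 (function or look-up table)."""
    return 0.0                                     # no distortion in this toy example

def source_coordinate(counter_value, scale, start):
    coord = counter_value * scale + start          # multiplier 17/18 and adder 19/20
    coord += distortion_correction(coord)          # correction value added in adder 23/24
    integer_part = int(coord)                      # used to address the source pixels
    fractional_part = coord - integer_part         # selects the interpolation sub-filter
    return integer_part, fractional_part

print(source_coordinate(counter_value=8, scale=1.5, start=12.5))   # -> (24, 0.5)
```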
  • The adder 23 is followed by an alignment correction element 25 and, in a branch, by a subtractor 26.
  • The alignment correction element 25 and the subtractor 26 are initially disregarded in the explanation of the device according to the invention. Via the branch that leads to the FIR filter distribution tables 27a, 27b, 27c, the fractional part of the column number is fed to these tables.
  • The adder 24 is similarly followed by the FIR filter distribution tables 28a, 28b, 28c, to which the fractional part of the row number is fed.
  • The integer parts of the column number and the row number are supplied to adders 29 and 30, respectively.
  • A local column counter 31 and a local row counter 32 are also provided. While a particular column or row number is applied to the adders 29, 30, the local column counter 31 and the local row counter 32 each count from 0 to n-1, where n is the number of filter coefficients.
  • The counter value i of the local column counter 31 is passed to the FIR filter distribution tables 27a, 27b, 27c; the FIR sub-filter is selected on the basis of the fractional part (which corresponds to the interpolation shift), and the i-th filter coefficient is then read out of the respective FIR sub-filter.
  • In this way, one filter value each is read out of the three FIR sub-filter tables 27a, 27b, 27c.
  • From the FIR filter distribution tables 28a, 28b, 28c, one filter value each is likewise selected in dependence on the fractional part of the row number and the counter value of the local row counter 32.
  • The counter value of the local column counter 31 is added to the integer column number in the adder 29, thereby calculating the column number corresponding to the respective filter value.
  • The counter value of the local row counter 32 is correspondingly added to the integer row number in the adder 30.
  • The output value of the adder 29 (column number) and the output value of the adder 30 (row number) form a coordinate pair, which is supplied to an address generator 33.
  • These coordinates are converted into corresponding address data indicating the address in image memories 34a, 34b, 34c.
  • The image memories 34a, 34b, 34c contain the color separations of the source image, and the addresses generated by the address generator 33 designate in the image memories 34a, 34b, 34c the data of the color separations corresponding to the coordinate pairs supplied to the address generator 33.
  • The corresponding pixel values are then read out and multiplied, first in first multipliers 35a, 35b, 35c by the filter values from the FIR sub-filter tables 28a, 28b, 28c and then in second multipliers 36a, 36b, 36c by the filter values from the FIR sub-filter tables 27a, 27b, 27c.
  • The values determined in this way are summed into partial values in partial accumulators 37a, 37b, 37c, and the accumulated values are then passed to an accumulator 38, in which the three partial values are added up.
  • The total value represents the gray value of the pixel in the target image that is defined by the column number specified by the column counter 15 and the row number specified by the row counter 16.
  • After output (output 39) of the corresponding gray value, the column counter 15 is incremented by one and a new gray value is calculated, with the local column counter 31 and the local row counter 32 once again running through their value range.
  • If the column counter has reached the last column, it is reset to 0 and starts again at the first column, and the row counter is incremented by one. Once both the column counter and the row counter have reached their maximum values, the complete target image has been calculated.
  • In this way, the filter computations according to the invention can be performed simultaneously and superimposed in the column and row directions, by multiplying successively in the first and second multipliers 35a, 35b, 35c, 36a, 36b, 36c. A simplified software model of this filter core is sketched below.
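  • The following simplified software model of the filter core of Figure 4 uses made-up filter values and image data and fixes a single interpolation shift; the selection of the sub-filters from the fractional parts, the correction stages and the alignment handling are omitted here:

```python
# Simplified sketch (made-up data) of the filter core of Figure 4: for one target
# pixel, the local counters run over the n filter coefficients, pixels are fetched
# from the three color-separation memories, multiplied by a column and a row filter
# value, and the three partial sums are added to give the gray value.
n = 3   # number of filter coefficients per direction

# One FIR sub-filter per color separation, here for a single interpolation shift.
col_filters = {"r": [0.1, 0.2, 0.1], "g": [0.2, 0.4, 0.2], "b": [0.05, 0.1, 0.05]}
row_filters = {"r": [0.25, 0.5, 0.25], "g": [0.25, 0.5, 0.25], "b": [0.25, 0.5, 0.25]}

# Image memories 34a, 34b, 34c: one small color separation each (5x5, made up).
memories = {c: [[(10 * row + col + ord(c)) % 97 for col in range(5)] for row in range(5)]
            for c in "rgb"}

def gray_value(int_col, int_row):
    """Gray value of one target pixel whose filter window starts at (int_col, int_row)."""
    total = 0.0
    for c in "rgb":                          # the three parallel branches of Figure 4
        partial = 0.0                        # partial accumulator 37a / 37b / 37c
        for j in range(n):                   # local row counter 32
            for i in range(n):               # local column counter 31
                px = memories[c][int_row + j][int_col + i]
                partial += px * row_filters[c][j] * col_filters[c][i]
        total += partial                     # accumulator 38
    return total

print(gray_value(1, 1))                      # gray value of one pixel of the target image
```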
  • With the alignment correction element 25 mentioned above, the property of computers can be exploited that they usually work with 32-bit data words, although only 8 bits are needed to describe a pixel. By reading one data word, the values of four pixels can thus be fetched in a single access.
  • The alignment correction element takes this into account and sets the column number such that the next smaller address divisible by 4 is applied to the address generator 33. This rounding has the effect of a shift to the left. The shift could be reversed by a corresponding multiplexer, which would compensate for it again. However, it is more elegant to handle this integer offset together with the sub-pixel shift in the filter and to compensate for it with the already existing FIR sub-filter tables. For this purpose, the subtractor 26 forms the difference between the column number applied to the address generator 33 and the output of the adder 23. A small sketch of this rounding follows below.
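  • The following sketch only illustrates this rounding of the column address to a 32-bit word boundary and the difference that is handed to the filter stage instead of being undone by a multiplexer:

```python
# Sketch: round the column address down to the next address divisible by 4, so that
# a whole 32-bit word (four 8-bit pixels) can be fetched in one access; the shift
# this causes is returned as a difference and compensated in the filter stage.
def align_column(column):
    aligned = column & ~3            # next smaller address divisible by 4
    shift = column - aligned         # difference formed by the subtractor 26
    return aligned, shift

print(align_column(1027))            # -> (1024, 3): the filter is offset by 3 pixels
```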
  • The device described above with reference to Figure 4 is a hardware circuit for mapping a source image onto a target image.
  • This hardware circuit is formed from simple standard components, which is why it can be produced inexpensively. It also allows a very fast mapping of the source image onto the target image.
  • The invention may also be realized by a data processing program executed on a corresponding microprocessor.
  • The method according to the invention is intended for use in a scanner, in particular a high-performance scanner, with which the scanning speed can be increased from 800 to 2400 A4 sheets per minute, each sheet being scanned, interpolated and corrected with a corresponding transfer function.
  • The principle according to the invention can, however, also be used in other fields of application, such as in digital cameras for generating black-and-white images.
  • The invention relates to a method and a device for scanning an image, wherein a source image is detected by means of a color sensor.
  • The color sensor has a plurality of line arrays, which are each designed to detect a specific color.
  • The line arrays are used to generate color separations. According to the invention, separate FIR sub-filters are applied to the color separations without the color separations being merged beforehand into a complete color image. The pixels are automatically merged during filtering.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Color Image Communication Systems (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Facsimile Scanning Arrangements (AREA)
PCT/EP2006/069197 2006-02-14 2006-12-01 Verfahren und vorrichtung zum scannen von bildern Ceased WO2007093241A1 (de)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2008554614A JP4773532B2 (ja) 2006-02-14 2006-12-01 画像をスキャンする方法および装置
AT06841280T ATE522083T1 (de) 2006-02-14 2006-12-01 Verfahren und vorrichtung zum scannen von bildern
US12/279,360 US7746519B2 (en) 2006-02-14 2006-12-01 Method and device for scanning images
EP06841280A EP1985105B8 (de) 2006-02-14 2006-12-01 Verfahren und vorrichtung zum scannen von bildern

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102006006835.1 2006-02-14
DE102006006835A DE102006006835B4 (de) 2006-02-14 2006-02-14 Verfahren und Vorrichtung zum Scannen von Bildern

Publications (1)

Publication Number Publication Date
WO2007093241A1 true WO2007093241A1 (de) 2007-08-23

Family

ID=37808240

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2006/069197 Ceased WO2007093241A1 (de) 2006-02-14 2006-12-01 Verfahren und vorrichtung zum scannen von bildern

Country Status (6)

Country Link
US (1) US7746519B2
EP (1) EP1985105B8
JP (1) JP4773532B2
AT (1) ATE522083T1
DE (1) DE102006006835B4
WO (1) WO2007093241A1

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010007348A1 (de) 2010-02-09 2011-08-11 Chromasens GmbH, 78467 Verfahren zum Scannen eines Bildes

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4906673B2 (ja) * 2007-10-24 2012-03-28 株式会社リコー 画像処理装置、画像処理方法及び画像処理プログラム
JP5549371B2 (ja) * 2010-05-14 2014-07-16 セイコーエプソン株式会社 画像生成装置,送信装置,画像送信システム及びこれらに用いるプログラム
JP5983082B2 (ja) * 2012-06-21 2016-08-31 セイコーエプソン株式会社 表示制御回路、表示装置、及び、電子機器
US11789667B2 (en) 2019-11-05 2023-10-17 Hewlett-Packard Development Company, L.P. Printer colour deviation detection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3418787A1 (de) * 1984-05-19 1985-11-21 Robert Bosch Gmbh, 7000 Stuttgart Verfahren zur erhoehung der aufloesung von farbfernsehkameras
US5923447A (en) * 1995-12-07 1999-07-13 Brother Kogyo Kabushiki Kaisha Color image reader
DE19835348A1 (de) * 1998-08-05 2000-02-10 Heidelberger Druckmasch Ag Vorrichtung zur Abtastung von Vorlagen
WO2001099431A2 (en) * 2000-06-16 2001-12-27 University Technology Corporation Method and apparatus for increasing resolution in digital imaging system by offsetting pixels
EP1173029A2 (en) * 2000-07-14 2002-01-16 Matsushita Electric Industrial Co., Ltd. Color image pickup device
WO2005114573A1 (de) * 2004-05-19 2005-12-01 Oce Document Technologies Gmbh Verfahren und vorrichtung zur interpolation und korrektur eines bildes

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS54174147U 1978-05-30
US5085506A (en) * 1986-05-09 1992-02-04 Greyhawk Systems, Inc. Apparatus and method of forming and projecting high precision optical images
GB9024971D0 (en) * 1990-11-16 1991-01-02 Rank Cintel Ltd Continuous-motion line-array telecine
US5315412A (en) * 1990-04-06 1994-05-24 Canon Kabushiki Kaisha Multi-chip color image sensor with light-receiving windows arranged to provide sensor output signals corresponding to the gap between adjacent sensors
JP2977232B2 (ja) * 1990-05-17 1999-11-15 キヤノン株式会社 色ずれ補正装置
JP3176101B2 (ja) * 1991-11-12 2001-06-11 キヤノン株式会社 画像読取装置
JP2909788B2 (ja) 1992-02-17 1999-06-23 ローム株式会社 サーマルプリントヘッド
US5513007A (en) * 1992-05-19 1996-04-30 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US5353056A (en) * 1992-10-27 1994-10-04 Panasonic Technologies, Inc. System and method for modifying aberration and registration of images
US5452112A (en) * 1994-03-25 1995-09-19 Eastman Kodak Company Color image reproduction system field calibration method and apparatus
JP3392564B2 (ja) * 1995-02-27 2003-03-31 三洋電機株式会社 単板式カラービデオカメラ
JP3658052B2 (ja) * 1995-09-12 2005-06-08 キヤノン株式会社 画像処理装置
KR970049856A (ko) * 1995-12-30 1997-07-29 김광호 복수개의 라이센서를 갖는 화상독취 장치에서 센서간 거리 보정 방법 및 회로
JP2947186B2 (ja) * 1996-10-23 1999-09-13 日本電気株式会社 フリッカ軽減回路
JPH10173868A (ja) * 1996-12-13 1998-06-26 Fuji Xerox Co Ltd 固体撮像素子およびこれを備えた画像読取装置
JPH11331515A (ja) * 1998-05-12 1999-11-30 Canon Inc 画像読取装置及び画像読取装置における読取補正方法
US6819799B1 (en) * 1998-07-17 2004-11-16 Fuji Photo Film Co., Ltd. Image reading apparatus, original reading method and original conveyance apparatus
JP2000115472A (ja) * 1998-09-30 2000-04-21 Fuji Photo Film Co Ltd 原稿読取装置
US6753914B1 (en) * 1999-05-26 2004-06-22 Lockheed Martin Corporation Image correction arrangement
EP1227687A3 (en) * 2000-12-30 2005-05-25 Texas Instruments Incorporated System for reducing color separation artifacts in sequential color displays
US20030007686A1 (en) * 2001-06-29 2003-01-09 Roever Jens A. Combined color space matrix transformation and FIR filter
US20030222987A1 (en) * 2002-05-30 2003-12-04 Karazuba Paul M. Line scan image recording device with internal system for delaying signals from multiple photosensor arrays
JP3947847B2 (ja) * 2003-05-26 2007-07-25 セイコーエプソン株式会社 撮像装置及びその駆動方法
JP2005045404A (ja) * 2003-07-24 2005-02-17 Ricoh Co Ltd 画像処理装置、画像処理方法およびプログラム
JP2005051393A (ja) * 2003-07-31 2005-02-24 Minolta Co Ltd 撮像装置
JP4052243B2 (ja) 2003-12-24 2008-02-27 トヨタ自動車株式会社 車両用バンパ構造
US7366746B2 (en) * 2004-02-12 2008-04-29 Xerox Corporation Finite impulse response filter method and apparatus
JP4082692B2 (ja) 2004-03-18 2008-04-30 株式会社デンソー 歩行者検知装置
JP2005333590A (ja) * 2004-05-21 2005-12-02 Konica Minolta Business Technologies Inc 画像読取装置、画像読取方法及び画像形成装置
JP2006311240A (ja) * 2005-04-28 2006-11-09 Olympus Corp 撮像装置
EP1961207B1 (en) 2005-11-29 2010-07-07 Océ-Technologies B.V. Scanner and method of scanning
US7911515B2 (en) * 2007-09-20 2011-03-22 Victor Company Of Japan, Ltd. Imaging apparatus and method of processing video signal


Also Published As

Publication number Publication date
DE102006006835A1 (de) 2007-08-30
US7746519B2 (en) 2010-06-29
EP1985105B1 (de) 2011-08-24
DE102006006835B4 (de) 2008-05-08
US20090153924A1 (en) 2009-06-18
JP4773532B2 (ja) 2011-09-14
EP1985105A1 (de) 2008-10-29
JP2009527159A (ja) 2009-07-23
EP1985105B8 (de) 2012-02-29
ATE522083T1 (de) 2011-09-15

Similar Documents

Publication Publication Date Title
DE69327827T2 (de) System aus Modifizierung der Aberration und Positionierung von Bildern
DE3587155T2 (de) Verfahren und vorrichtung zum verdichten von bilddaten.
DE69019877T2 (de) Räumliche Interpolation von digitalen Videobildern.
DE69026434T2 (de) Einzelverarbeitungsverfahren zur erzeugung einer gleichmässigen verarbeitung von horizontalen und vertikalen einzelkomponenten
DE69704896T2 (de) Kamera mit digitaler und analoger Klemmschaltung
DE68920134T2 (de) Farbbildaufnahmegerät mit horizontal-farbstreifenfilter zur reduzierung des steigzeitrauschens.
EP0069325B1 (de) Verfahren zur Wandlung der Zeilenzahl
DE69733882T2 (de) Kamera mit einem einzigen bildaufnehmer
DE69025356T2 (de) Vorrichtung zur elektronischen Bildzusammensetzung
DE3319752C2
DE3853554T2 (de) Bewegungsvektorabschätzung in Fernsehbildern.
DE69031865T2 (de) Ein SIMD-Prozessor als digitales Filter
DE69420469T2 (de) Moire-Verringerungseinrichtung für Festkörperfarbbildvideoapparat
DE69124866T2 (de) Bildsignalverarbeitungsvorrichtung
DE69021164T2 (de) Programmierbarer digitaler schaltkreis zum ausführen einer matrizmultiplikation.
DE3789091T2 (de) Bildverarbeitungsverfahren und -system zur Bildrekonstruktion.
WO2008019867A2 (de) Bildverarbeitungsvorrichtung für farb-bilddaten und verfahren zur bildverarbeitung von farb-bilddaten
DE3788925T2 (de) Interpolator für Fernsehtricksystem.
DE102016112968B4 (de) Bestimmung von Farbwerten für Pixel an Zwischenpositionen
DE69029776T2 (de) Signalverarbeitungsschaltung für Festkörper-Bildaufnahmevorrichtung
DE10156040B4 (de) Verfahren, Vorrichtung und Computerprogramm-Produkt zum Entzerren einer eingescannten Abbildung
DE3304592C2
DE69827540T2 (de) Signalverarbeitungssystem
EP1985105B1 (de) Verfahren und vorrichtung zum scannen von bildern
DE10241353A1 (de) Verfahren und Vorrichtung zum Umwandeln eines Farbbildes

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2008554614

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2006841280

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 12279360

Country of ref document: US