WO2016147832A1 - Image reading apparatus and image reading method - Google Patents
Image reading apparatus and image reading method
- Publication number
- WO2016147832A1 (application PCT/JP2016/055600)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reading
- image data
- image
- magnification
- combined
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/393—Enlarging or reducing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00002—Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
- H04N1/00026—Methods therefor
- H04N1/00034—Measuring, i.e. determining a quantity by comparison with a standard
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/04—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
- H04N1/19—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays
- H04N1/191—Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa using multi-element arrays the array comprising a one-dimensional array, or a combination of one-dimensional arrays, or a substantially one-dimensional array, e.g. an array of staggered elements
- H04N1/192—Simultaneously or substantially simultaneously scanning picture elements on one main scanning line
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/401—Compensating positionally unequal response of the pick-up or reproducing head
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/3876—Recombination of partial images to recreate the original image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/0077—Types of the still picture apparatus
- H04N2201/0081—Image reader
Definitions
- the present invention relates to an image reading apparatus and an image reading method for optically scanning a document to generate image data.
- a contact image sensor that scans an original as a reading object with a one-dimensional image sensor (line sensor) and generates image data corresponding to the original is used as an image reading apparatus applied to a copying machine, a scanner, a facsimile, and the like.
- the contact image sensor has a plurality of sensor chips arranged linearly in the main scanning direction, and each of the plurality of sensor chips has a plurality of imaging elements arranged linearly in the main scanning direction at a predetermined arrangement pitch.
- Patent Document 1 describes an apparatus that uses, as interpolation data, the average value of the data of the two pixels on either side of the data missing position between adjacent sensor chips, and an apparatus that uses, as interpolation data, values calculated from a quartic approximation curve derived from the data of two pixels on each side (four pixels in total) of the data missing position.
- in the apparatus of Patent Document 1, when high-frequency image information exists at the position on the document corresponding to the data missing position between adjacent sensor chips, there is a problem that an accurate image at the data missing position cannot be reproduced.
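The limitation of the prior-art interpolation can be seen with a small numeric sketch (illustrative only; the function name and pixel values are not from the patent). For high-frequency content that alternates at the chip boundary, the two-pixel average predicts a value far from the truth, while for smooth content it works well:

```python
def interpolate_gap(left_pixel, right_pixel):
    # Patent Document 1's simplest scheme: average the two pixels
    # adjacent to the data missing position between sensor chips.
    return (left_pixel + right_pixel) / 2.0

# High-frequency content: pixels alternate 0, 255, 0, 255, ...
# The true value at the missing position is 255, but both neighbours are 0,
# so the average cannot reproduce it.
estimate = interpolate_gap(0, 0)      # far from the true 255
# Smooth content: neighbours 100 and 110, interpolation gives a good value.
smooth = interpolate_gap(100, 110)
```

This is why the invention instead removes the gap optically, via overlapping reading ranges, rather than estimating the missing data.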
- the present invention has been made to solve the above-described problem of the prior art, and an object thereof is to provide an image reading apparatus and an image reading method that eliminate the loss of data at positions corresponding to the gaps between adjacent sensor chips and improve the quality of a read image.
- an image reading apparatus according to the present invention includes N sensor chips (N is an integer of 2 or more) arranged in a first direction, each having a plurality of imaging elements arranged in the first direction; N optical systems that reduce and image N reading ranges arranged in the first direction onto the N sensor chips, respectively; and an image processing unit that, using the image data of the overlap regions, which are regions where adjacent reading ranges overlap in the image data of the N reading ranges generated by the N sensor chips, obtains the position in the first direction of the overlap region of adjacent reading ranges, obtains from that position the magnification of the read image and the combining position at which two image data are joined, performs image processing that corrects the magnification in the first direction of the image data of the N reading ranges, and combines the image data of the N reading ranges thus processed to generate combined image data.
- according to the present invention, the loss of data at positions corresponding to the gaps between adjacent sensor chips among the N sensor chips arranged in the main scanning direction is eliminated by the N optical systems, and the distortion of the N image data (combining position deviation) caused by the N optical systems can be eliminated by data processing. Therefore, it is possible to generate high-quality combined image data corresponding to the reading range of the document.
- FIG. 1 is a functional block diagram showing a schematic configuration of an image reading apparatus according to Embodiment 1 of the present invention.
- (a) is a side view showing the ranges of light directed from the document at the reference position toward the respective sensor chips of the imaging unit, and (b) is a plan view showing the ranges of light directed toward the respective sensor chips (ranges on the document) and the reading range read by each sensor chip in the case of (a).
- FIG. 2 is a plan view schematically showing a plurality of sensor chips of the imaging unit shown in FIG. 1.
- (a) is a diagram showing an example of the light ranges (ranges on the document) directed from the document at the reference position toward the respective sensor chips and the graphic pattern of the document, (b) is a diagram showing the images incident on the respective sensor chips in the case of (a), and (c) is a diagram showing the image data generated by each sensor chip.
- (a) is a side view showing the ranges of light directed from a document located closer to the imaging unit than the reference position toward the respective sensor chips, and (b) is a plan view showing the ranges of light directed toward the respective sensor chips (ranges on the document) and the reading ranges read by the respective sensor chips in the case of (a).
- (a) is a diagram showing an example of the light ranges (ranges on the document) directed from a document located closer to the imaging unit than the reference position toward the respective sensor chips and the graphic pattern of the document, (b) is a diagram showing the images incident on the respective sensor chips in the case of (a), and (c) is a diagram showing the image data generated by each sensor chip.
- (a) is a side view showing the ranges of light directed from a document located farther from the imaging unit than the reference position toward the respective sensor chips of the imaging unit, and (b) is a plan view showing the ranges of light directed toward the respective sensor chips (ranges on the document) and the reading ranges read by the respective sensor chips in the case of (a).
- (a) is a diagram showing an example of the light ranges (ranges on the document) directed from a document located farther from the imaging unit than the reference position toward the respective sensor chips and the graphic pattern of the document, (b) is a diagram showing the images incident on the respective sensor chips in the case of (a), and (c) is a diagram showing the image data generated by each sensor chip.
- (a) to (c) are diagrams for explaining the operation of the synthesizing unit in the case of FIGS. 4(a) to 4(c), where the document is at the reference position.
- (a) to (c) are diagrams for explaining the operation of the synthesizing unit in the case of FIGS. 6(a) to 6(c), where the document is located closer to the imaging unit than the reference position.
- (a) to (c) are diagrams for explaining the operation of the synthesizing unit in the case of FIGS. 8(a) to 8(c), where the document is located farther from the imaging unit than the reference position.
- FIG. 10 is a flowchart schematically illustrating an example of processing (an image reading method according to a second embodiment) executed by the arithmetic device of the image reading device according to the second embodiment.
- a block diagram illustrating a configuration example of the combining unit in the image reading apparatus according to Embodiment 1.
- FIG. 3 is a block diagram illustrating a configuration example of a synthesis magnification setting unit in the synthesis unit according to Embodiment 1.
- FIG. 10 is a block diagram illustrating a configuration example of a composition magnification setting unit in a composition unit of an image reading apparatus according to Embodiment 3.
- (a) to (c) are diagrams for explaining an operation.
- (a) to (c) are diagrams for explaining an operation.
- FIG. 1 is a functional block diagram showing a schematic configuration of an image reading apparatus 1 according to Embodiment 1 of the present invention.
- the image reading apparatus 1 includes an imaging unit 2, an A / D (analog / digital) conversion unit 3, and an image processing unit 4.
- the image processing unit 4 includes an image memory 41, a similarity calculation unit 42, a synthesis position estimation unit 43, and a synthesis unit 44.
- the image reading apparatus 1 may include a transport unit that transports a document and a control unit that controls the entire apparatus. The transport unit and the processor as the control unit will be described in a second embodiment described later.
- the imaging unit 2 has N sensor chips (N is an integer of 2 or more) arranged in a straight line on the substrate. Each of the N sensor chips has a plurality of imaging elements arranged in a straight line.
- the arrangement direction (first direction) of the plurality of image sensors is referred to as a main scanning direction.
- Each of the N sensor chips is arranged in the main scanning direction so that the image sensors of these N sensor chips are arranged in a straight line.
- the imaging unit 2 has N optical systems (cells) that respectively reduce and form images of a plurality of reading ranges on a document on N sensor chips.
- Each of the N optical systems includes, for example, a lens and a diaphragm.
- by the N optical systems, data loss at positions corresponding to the gaps between adjacent sensor chips among the N linearly arranged sensor chips is eliminated. That is, the N sensor chips and the N optical systems are arranged so that, in two adjacent reading ranges (also referred to as reading areas) read by adjacent sensor chips, partial areas at the ends of the adjacent reading ranges overlap each other; this overlapping part of the reading ranges becomes the overlap region.
- the image signal SI generated by optically scanning the document with the imaging unit 2 is converted into digital image data DI by the A/D conversion unit 3, and the image data DI is input to the image processing unit 4 and stored in the image memory 41 in the image processing unit 4.
- FIG. 2(a) is a side view showing the range 28 of light directed from the document 26 at the reference position P through the lens 24, the diaphragm 23 having the opening 23a, and the lens 22 toward each sensor chip 21 of the imaging unit 2.
- FIG. 2(b) is a plan view showing the light range (range on the document 26) 29 directed toward each sensor chip 21 and the reading range 2A read by each sensor chip 21 in the case of FIG. 2(a).
- the main scanning direction is indicated by the x-axis
- the sub-scanning direction orthogonal thereto is indicated by the y-axis.
- Each of the N optical systems of the imaging unit 2 includes a lens 22, a diaphragm 23 having an opening 23a, and a lens 24.
- k is an integer from 1 to N.
- the k-th sensor chip 21 is also expressed as 21 (k).
- the k-th lens among the N lenses 22 is also expressed as 22 (k).
- the k-th lens among the N lenses 24 is also expressed as 24 (k).
- numbers indicating the order of arrangement are shown in parentheses.
- the image reading apparatus 1 may include an illumination 27 that irradiates the original 26 with light.
- the illumination 27 can be composed of, for example, an LED (Light Emitting Diode) as a light source and a light guide such as a resin for converting light emitted from the LED into illumination light of the document 26.
- This light guide is, for example, a cylindrical light guide having a length equivalent to the width of the document 26.
- the light reflected by the document 26 is collected by the lens 24.
- of the light condensed by the lens 24, unnecessary light is blocked by the diaphragm 23, and the necessary light passes through the opening 23a of the diaphragm 23.
- the light that has passed through the opening 23a of the diaphragm 23 passes through the lens 22 and reaches the plurality of imaging elements of the sensor chip 21.
- a range 28(k−1) indicated by a broken line in FIG. 2(a) and a range 29(k−1) indicated by a broken line in FIG. 2(b) indicate the range of light reaching the sensor chip 21(k−1).
- a range 28 (k) indicated by a broken line in FIG. 2A and a range 29 (k) indicated by a broken line in FIG. 2B indicate a range of light reaching the sensor chip 21 (k).
- a range 28 (k + 1) indicated by a broken line in FIG. 2A and a range 29 (k + 1) indicated by a broken line in FIG. 2B indicate a range of light reaching the sensor chip 21 (k + 1).
- the document 26 is conveyed in a direction perpendicular to the paper surface on which FIG. 2(a) is drawn, that is, from the back toward the front (+y-axis direction) or from the front toward the back (−y-axis direction).
- alternatively, the document 26 may remain stationary while the imaging unit 2 moves from the back toward the front (+y-axis direction) or from the front toward the back (−y-axis direction).
- at least one of the document 26 and the imaging unit 2 moves so that the document 26 and the imaging unit 2 relatively move in the sub-scanning direction (second direction).
- FIG. 3 is a plan view schematically showing a plurality of sensor chips 21 of the imaging unit 2.
- FIG. 3 shows sensor chips 21 (k ⁇ 1) and 21 (k) among the N sensor chips 21.
- each of the N sensor chips 21 includes a plurality of red imaging elements (R imaging elements) 211 in which a red (R) optical filter is disposed on the imaging element, a plurality of green imaging elements (G imaging elements) 212 in which a green (G) optical filter is disposed on the imaging element, a plurality of blue imaging elements (B imaging elements) 213 in which a blue (B) optical filter is disposed on the imaging element, and a readout circuit 214.
- the R imaging element, G imaging element, B imaging element, and readout circuit of the k-th sensor chip 21(k) are also expressed as R imaging element 211(k), G imaging element 212(k), B imaging element 213(k), and readout circuit 214(k). Similarly, for the other sensor chips 21, the numbers indicating the arrangement order are shown in parentheses.
- the light reflected by the document 26 is condensed onto the imaging elements of each color of the sensor chip 21; the R imaging element 211 photoelectrically converts the red light of the condensed light, the G imaging element 212 photoelectrically converts the green light, and the B imaging element 213 photoelectrically converts the blue light. The electric signals obtained by the photoelectric conversion are sequentially read out by the readout circuit 214 and output as the signal SI.
- the (k−1)-th sensor chip 21(k−1) and the k-th sensor chip 21(k) are arranged so that the R imaging elements of the sensor chips 21(k−1) and 21(k) are aligned on the same straight line, the G imaging elements of the sensor chips 21(k−1) and 21(k) are aligned on the same straight line, and the B imaging elements of the sensor chips 21(k−1) and 21(k) are aligned on the same straight line.
- FIG. 3 shows an example in which the row of R imaging elements, the row of G imaging elements, and the row of B imaging elements are arranged vertically in this order on the sensor chip 21; however, the positions of the rows may be interchanged. In addition, when a color image is not acquired, a sensor chip having a plurality of imaging elements in a single-row structure without optical filters may be used.
- the reference position P is a predetermined position, for example, a position away from the glass surface 25 by a predetermined distance.
- the reference position P, which is a predetermined position, is a value set in advance by a user or the like; it is set by measuring the distance from the glass surface 25 serving as a reference before the processing is performed (the setting means is not shown).
- a light range 29 in FIG. 2B indicates a range of light collected by the respective lenses 24 on the document 26.
- the light range 29 (k-1) is a range of light collected by the lens 24 (k-1).
- the light range 29 (k) is a range of light collected by the lens 24 (k).
- the light range 29 (k + 1) is a range of light collected by the lens 24 (k + 1).
- the sensor chip 21 receives the light in the reading range 2A out of the light range 29 collected by the lens 24, and photoelectrically converts it.
- the reading range 2A (k-1) is received by the sensor chip 21 (k-1).
- the reading range 2A (k) is received by the sensor chip 21 (k).
- the reading range 2A (k + 1) is received by the sensor chip 21 (k + 1).
- the reading range 2A(k−1) and the reading range 2A(k) overlap with each other by the width L1 in the x-axis direction.
- the reading range 2A (k) and the reading range 2A (k + 1) overlap with each other by the width L1 in the x-axis direction.
- the reading range that overlaps by the width L1 in the x-axis direction is an overlap region.
- FIGS. 4A to 4C are views for explaining an image read by the sensor chip 21 when the document 26 is conveyed in the y-axis direction at the reference position (P in FIG. 2A).
- FIG. 4(a) shows the light ranges (ranges on the document 26) 29 directed from the document 26 at the reference position P through the lens 24, the diaphragm 23, and the lens 22 toward the respective sensor chips 21 of the imaging unit 2, and an example of the graphic pattern of the document 26 (for example, a zigzag pattern in which a plurality of chevron-shaped figures are repeated).
- FIG. 4(b) is a diagram showing the images incident on the respective sensor chips 21 in the case of FIG. 4(a), and FIG. 4(c) is a diagram showing the image data generated by each sensor chip 21 in the case of FIG. 4(b).
- the document 26 is conveyed in the y-axis direction with the reading surface of the document 26 facing upward.
- the reference position P is determined with reference to the glass surface 25, for example. In FIGS. 4(a) to 4(c), the document 26 is conveyed at the reference position P, and the range of the overlap region OV2 in the x-axis direction shown in FIG. 4(c) is set so that it matches the width in the x-axis direction of one chevron-shaped figure. As with the reference position P, the width of the overlap region OV2 at the reference position P is set in advance by a user or the like (the setting means is not shown).
- FIG. 5(a) is a side view showing the ranges of light directed from the document 26, located d mm closer to the imaging unit 2 than the reference position P, through the lens 24, the diaphragm 23, and the lens 22 toward the sensor chips 21 of the imaging unit 2.
- FIG. 5(b) is a plan view showing the light range (range on the document 26) 29 directed toward each sensor chip 21 and the reading range 2A read by each sensor chip 21 in the case of FIG. 5(a).
- the range of light 28 collected by the lens 24 widens with increasing distance from the lens 24 toward the glass surface 25 and the document 26. Since the document 26 in FIGS. 5(a) and 5(b) is closer to the lens 24 than in FIGS. 2(a) and 2(b), the light range 29 on the document 26 is smaller than the range shown in FIGS. 2(a) and 2(b). Accordingly, the width in the x-axis direction of the reading ranges 2A(k−1), 2A(k), and 2A(k+1) is narrower, and the width L2 in the x-axis direction of the overlap region between the reading range 2A(k−1) and the reading range 2A(k) is narrower than the width L1 in the x-axis direction in the cases of FIGS. 2(a) and 2(b) (L2 < L1).
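The dependence of the overlap width on the document distance can be illustrated with a simple pinhole-camera model (an illustration only; the patent gives no such formulas, and the half-angle and cell-pitch numbers below are made up). Each cell sees a reading range whose width grows linearly with the document distance, while the pitch between cells stays fixed, so the overlap shrinks as the document approaches the lens:

```python
import math

def reading_width(z_mm, half_angle_deg):
    # Width on the document seen by one cell at distance z (pinhole model).
    return 2.0 * z_mm * math.tan(math.radians(half_angle_deg))

def overlap_width(z_mm, half_angle_deg, cell_pitch_mm):
    # Adjacent reading ranges are cell_pitch_mm apart, so the overlap is
    # whatever part of the reading width extends past the pitch.
    return max(0.0, reading_width(z_mm, half_angle_deg) - cell_pitch_mm)

# Illustrative numbers (not from the patent): pitch 9 mm, half angle 30 deg.
L1 = overlap_width(10.0, 30.0, 9.0)   # document at the reference distance
L2 = overlap_width(8.0, 30.0, 9.0)    # document 2 mm closer: L2 < L1
L3 = overlap_width(12.0, 30.0, 9.0)   # document 2 mm farther: L3 > L1
```

This reproduces the ordering L2 < L1 < L3 described for FIGS. 5 and 7.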
- FIGS. 6A to 6C are diagrams for explaining an image read by the sensor chip 21 when the document 26 is conveyed in the y-axis direction at a position of (reference position ⁇ d) mm.
- FIG. 6(a) shows the light ranges (ranges on the document 26) 29 directed from the document 26, located closer to the imaging unit 2 than the reference position P, toward the respective sensor chips 21 of the imaging unit 2, and an example of the graphic pattern (zigzag pattern) of the document 26. FIG. 6(b) is a diagram showing the images incident on the respective sensor chips 21 in the case of FIG. 6(a), and FIG. 6(c) is a diagram showing the image data generated by each sensor chip 21.
- in FIGS. 6(a) to 6(c), a case will be described in which a pattern in which a plurality of chevron-shaped figures are arranged horizontally is printed on the document 26.
- for ease of explanation, when the document 26 is conveyed at the position of (reference position − d) mm, the horizontal width (width in the x-axis direction) of one chevron-shaped figure is wider than the width L2 in the x-axis direction of the overlap region of the reading ranges 2A read by adjacent sensor chips 21.
- the width L2 in the x-axis direction of the overlap region of the adjacent reading ranges 2A is narrower than the width L1 in the x-axis direction of the overlap region at the reference position. Therefore, the width in the x-axis direction of one chevron-shaped figure does not fit within the width in the x-axis direction of the overlap region OV1, and the figure protrudes outside the overlap region OV1 in the x-axis direction.
- on the other hand, the overlap region OV2 set on the sensor chip 21 does not depend on the positional relationship between the glass surface 25 and the document 26; therefore, a figure outside the range of the overlap region OV1 also enters the overlap region OV2 on the sensor chip 21.
- the image acquired by the sensor chip 21 is an enlarged image as compared with the case where the document is at the reference position.
- FIG. 7A is a side view showing a range of light directed from the document 26 located farther from the imaging unit 2 than the reference position P to each sensor chip 21 through the lens 24, the diaphragm 23, and the lens 22.
- FIG. 7(b) is a plan view showing the light range (range on the document 26) 29 directed toward each sensor chip 21 and the reading range 2A read by each sensor chip 21 in the case of FIG. 7(a).
- the reading ranges 2A(k−1), 2A(k), and 2A(k+1) have a wider width in the x-axis direction, and the width L3 in the x-axis direction of the overlap region between the reading range 2A(k−1) and the reading range 2A(k) is wider than the width L1 in the x-axis direction in the cases of FIGS. 2(a) and 2(b) (L3 > L1).
- FIGS. 8A to 8C are diagrams for explaining an image read by the sensor chip 21 when the document 26 is conveyed in the y-axis direction at a position of (reference position + d) mm.
- FIG. 8(a) shows the light ranges (ranges on the document 26) 29 directed from the document 26, located farther from the imaging unit 2 than the reference position P, toward the respective sensor chips 21, and the graphic pattern (zigzag pattern) of the document 26. FIG. 8(b) is a diagram showing the images incident on the respective sensor chips 21 in the case of FIG. 8(a), and FIG. 8(c) is a diagram showing the image data generated by each sensor chip 21 in the case of FIG. 8(b).
- the width L3 in the x-axis direction of the overlap region of the adjacent reading ranges 2A is wider than L1.
- one chevron-shaped figure fits within the overlap region OV1.
- on the other hand, the overlap region OV2 set on the sensor chip 21 does not depend on the positional relationship between the glass surface 25 and the document 26; therefore, not all of the overlap region OV1 enters the overlap region OV2 on the sensor chip 21.
- the image acquired by the sensor chip 21 is a reduced image as compared with the case where the document is at the reference position.
- the image reading apparatus 1, in the image processing unit 4, corrects the magnification of the image data of each reading range corresponding to each sensor chip with respect to the digital image data DI from the A/D conversion unit 3, and performs image processing for combining the image data of the reading ranges corresponding to the N sensor chips to generate synthesized image data D44.
- specifically, the image processing unit 4 compares the image data of adjacent overlap regions among the image data of the reading ranges (reading areas) corresponding to the N sensor chips in the image data DI stored in the image memory 41, obtains the position of the overlap region having the highest correlation (also referred to as the highest degree of similarity) with the same document position, obtains from this position the magnification of the read image and the combining position, which is the position where the two image data are joined, corrects the magnification of the image data of each reading range corresponding to each sensor chip, and performs image processing for combining the image data of the reading ranges corresponding to the N sensor chips to generate the combined image data D44.
- the configuration of the image processing unit 4 will be described with reference to FIG.
- the similarity calculation unit 42 compares, using the image data, the image data of the matching areas set in the overlap regions, which are the regions where adjacent reading ranges overlap in the image data of the N reading ranges on the document generated by the N sensor chips, performs matching processing between the pixels (obtains the difference between the matching areas), calculates the level of this correlation (hereinafter referred to as similarity), and outputs it as similarity data (signal D42), which is an index indicating the degree of similarity between the image data of the overlap regions.
- the composite position estimation unit 43 estimates, from the similarity data D42 calculated by the similarity calculation unit 42, the position of the overlap region having the highest correlation (highest similarity), that is, the combined position at which the two image data of adjacent reading ranges are joined, and outputs position data D43 indicating the combined position based on the estimated position.
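One common way to realize this similarity search is a sum-of-absolute-differences (SAD) match over candidate overlap widths; the lowest normalised SAD marks the highest similarity and hence the combined position. The sketch below is a one-dimensional illustration with made-up function names and data; the patent does not prescribe a particular matching metric:

```python
import numpy as np

def estimate_composite_position(left, right, nominal_overlap, search=4):
    """Slide the head of `right` over the tail of `left` and return the
    overlap width whose normalised sum of absolute differences (SAD) is
    smallest, i.e. whose similarity is highest."""
    best_ov, best_sad = nominal_overlap, np.inf
    lo = max(1, nominal_overlap - search)
    hi = min(len(left), len(right), nominal_overlap + search)
    for ov in range(lo, hi + 1):
        # Difference between the two matching areas, normalised by width.
        sad = np.abs(left[-ov:] - right[:ov]).sum() / ov
        if sad < best_sad:
            best_ov, best_sad = ov, sad
    return best_ov

# Toy strips (illustrative data) that truly overlap by 3 samples.
left = np.array([0., 1., 2., 3., 4., 5., 6.])
right = np.array([4., 5., 6., 7., 8., 9.])
ov = estimate_composite_position(left, right, nominal_overlap=2)
```

In the apparatus, this search would run per matching area in each overlap region of the two-dimensional image data rather than on single rows.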
- the synthesizing unit 44 sets, based on the synthesis position estimated by the synthesis position estimation unit 43, the magnification used for the width in the main scanning direction of each of the N reading-range image data (that is, the image data is kept at the same magnification, enlarged, or reduced using this magnification), and synthesizes the image data to generate the synthesized image data D44. By repeating the above processing, image data corresponding to the image of the document as the reading object is generated.
- the image processing unit 4 can make the distortion of the image data generated by the N optical systems and the resulting misalignment of the combined position inconspicuous.
- the combining unit 44 is configured as shown in FIG. 16, for example.
- the composition unit 44 in FIG. 16 sets the composition magnification and the composition position of the image data of each reading range from the positional deviation between the position data D43 from the composition position estimation unit 43 and the composite reference position Pr on the overlap region, converts the image by this composition magnification, and joins (also referred to as "connects" or "pastes") the images of the overlap regions according to the composition position, thereby generating and outputting the composite image data D44.
- the composite reference position Pr on the overlap region is set from the position of the overlap region OV2 at the reference position P, and is a predetermined reference position set in advance by a user or the like (the setting means is not shown).
- the composition unit 44 includes a composition magnification setting unit 45, an image conversion unit 442, and an overlap region connection unit 443.
- the position data D43 from the composition position estimation unit 43, input to the composition unit 44, is input to the composition magnification setting unit 45, which sets the composition magnification and composition position of the image data of each reading range from the positional deviation in the main scanning direction (x-axis direction) between the position data D43 and the composite reference position Pr on the overlap region, and outputs composition magnification position data D45 indicating the magnification and position used at the time of composition.
- the composition magnification setting unit 45 is configured as shown in FIG. 17, for example, and includes a reading width calculation unit 451 and a magnification / composition position setting unit 452.
- the reading width calculation unit 451 in the composition magnification setting unit 45 calculates, from the position data D43 from the composition position estimation unit 43, the reading width Wc in the main scanning direction of the image data (reading area) of the reading range of each cell.
- the reading width Wc is obtained by calculating the difference between the two position data D43 obtained in the overlap regions at both ends of each reading area (the width in the main scanning direction between the positions at both ends).
- the magnification / composite position setting unit 452 compares the reading width Wc from the reading width calculation unit 451 with the width Wr between the combined reference positions Pr at both ends of each cell's reading area (that is, it evaluates the deviation of the combined position of the reading area from the reference position), sets the composition magnification from this difference, obtains as the composition position the position to which the position data D43 from the composition position estimation unit 43 is converted (moved) by the set magnification, and outputs the result as combined magnification position data D45 indicating the combined magnification and combined position.
- the width Wr between the combined reference positions Pr at both ends of the reading area can be determined in advance from the combined reference position Pr, and is a reference width set in advance by a user or the like (not shown).
- the image conversion unit 442 of the composition unit 44 performs magnification conversion of the image in the reading area of each cell of the image data DI stored in the image memory 41 with the composition magnification based on the composition magnification position data D45 from the composition magnification setting unit 45.
- the overlap region connecting unit 443 combines the images of the overlap regions in the magnification-converted image data D442, whose magnification has been corrected by the image conversion unit 442, in accordance with the combination position based on the combination magnification position data D45, and generates and outputs the combined image data D44. For example, if the combination in the overlap region connecting unit 443 is performed by weighted addition of the image data at and around the composite position, a composite image can be obtained in which distortion of the image data and the resulting shift of the composite position are inconspicuous.
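The weighted addition of overlap images mentioned above can be illustrated as a simple linear cross-fade across the overlap columns. This is a minimal sketch under assumed one-dimensional data; the patent does not specify the weighting function, so the linear weights here are an assumption:

```python
def blend_overlap(left_strip, right_strip):
    """Cross-fade two equally long overlap strips: the weight of the
    left strip falls linearly from 1 to 0 across the overlap, so the
    seam between adjacent cells is inconspicuous."""
    n = len(left_strip)
    out = []
    for i, (l, r) in enumerate(zip(left_strip, right_strip)):
        w = (n - 1 - i) / (n - 1) if n > 1 else 0.5
        out.append(w * l + (1 - w) * r)
    return out

# Hypothetical 5-pixel overlap rows from adjacent cells:
print(blend_overlap([100, 100, 100, 100, 100],
                    [80, 80, 80, 80, 80]))
# [100.0, 95.0, 90.0, 85.0, 80.0]
```

The weighted sum transitions smoothly from the left cell's values to the right cell's values, which is the effect the overlap region connecting unit 443 aims for.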
- the imaging unit 2 outputs a signal SI obtained by photoelectrically converting the light reflected by the document 26 to the A / D conversion unit 3.
- the A / D conversion unit 3 converts the signal SI from an analog signal to a digital signal, and outputs image data DI based on the digital signal to the image processing unit 4.
- the image data DI output from the A / D conversion unit 3 is input to the image memory 41 of the image processing unit 4.
- the image memory 41 temporarily stores the image data DI and outputs the image data MO and ME of the overlap area to the similarity calculation unit 42.
- the image data MO is the image data of the overlap regions corresponding to the sensor chips of the odd-numbered cells, and the image data ME is the image data of the overlap regions corresponding to the sensor chips of the even-numbered cells.
- FIG. 9 is a diagram for explaining the operation of the similarity calculation unit 42 of the image processing unit 4 shown in FIG.
- OV2 (k ⁇ 1, R) represents an overlap region (8 ⁇ 4 pixels) on the right side in the x-axis direction of the image data generated by the k ⁇ 1th sensor chip 21 (k ⁇ 1).
- OV2 (k, L) is an overlap region (8 ⁇ 4 pixels) on the left side of the image data generated by the kth sensor chip 21 (k) adjacent to the k ⁇ 1th sensor chip 21 (k ⁇ 1).
- CD (−1) to CD (−7) indicate 2 × 4 pixel matching regions extracted from the overlap region OV2 (k − 1, R).
- the identification numbers from -1 to -7 in parentheses are the numbers of matching areas.
- CD (1) to CD (7) indicate 2 × 4 pixel matching regions extracted from the overlap region OV2 (k, L).
- the identification numbers from 1 to 7 in parentheses are the numbers of matching areas.
- that is, for the pair of adjacent cells k − 1 and k, the matching areas CD (−7) to CD (−1) are extracted from the overlap area OV2 (k − 1, R) and the matching areas CD (1) to CD (7) are extracted from the overlap area OV2 (k, L).
- likewise, for the next pair of adjacent cells, the matching regions CD (−7) to CD (−1) are extracted from one overlap region OV2 (k, R), and the matching regions CD (1) to CD (7) are extracted from the other overlap region OV2 (k + 1, L).
- the similarity calculation unit 42 performs a matching process between pixels at the same positions in the matching areas CD (−1) to CD (−7) and the matching areas CD (1) to CD (7), and calculates the degree of correlation between the areas.
- for example, the absolute value of the difference between the pixels at the same position in the matching area CD (−1) and the matching area CD (1) is calculated, and the sum of these absolute differences over the matching areas CD (−1) and CD (1) is calculated and output as data indicating similarity (hereinafter also referred to as the "sum of absolute differences" or "similarity data") D42 (1).
- for the matching area CD (−2) and the matching area CD (2), the sum of absolute differences is similarly calculated and output as data D42 (2).
- the same calculation is performed for the pairs from the matching areas CD (−3) and CD (3) up to the matching areas CD (−7) and CD (7), and the sums of absolute differences D42 (3) to D42 (7) are calculated and output.
- in the above description, the overlap region has a width of 8 pixels (x-axis direction) and a height of 4 pixels (y-axis direction), and the matching region, which is a part of the overlap region, has a width of 2 pixels and a height of 4 pixels; however, the present invention is not limited to this.
- for example, the entire overlap area having a predetermined height (width in the y-axis direction) may be used as a matching area, and the center position of the matching area may be moved sequentially with reference to the center position of the overlap area in the width (x-axis) direction; alternatively, the similarity may be obtained by fixing the matching region in one overlap region OV2 (k − 1, R) and moving the matching region in the other overlap region OV2 (k, L).
- the similarity calculation unit 42 outputs similarity data D42 including the sum of absolute differences D42 (1) to D42 (7) to the combined position estimation unit 43.
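The per-pixel matching just described amounts to a sum-of-absolute-differences (SAD) computation. A minimal sketch in pure Python, using invented 2 × 4 pixel regions (the region contents are hypothetical, not from the patent):

```python
def sad(region_a, region_b):
    """Sum of absolute differences between two equally sized pixel
    regions, given as lists of rows; smaller means more similar."""
    return sum(abs(a - b)
               for row_a, row_b in zip(region_a, region_b)
               for a, b in zip(row_a, row_b))

# Two hypothetical matching regions of width 2, height 4:
cd_left  = [[10, 12], [11, 13], [10, 12], [11, 13]]
cd_right = [[10, 12], [11, 13], [10, 12], [11, 13]]
print(sad(cd_left, cd_right))   # 0: identical regions, perfect match

cd_shifted = [[12, 14], [13, 15], [12, 14], [13, 15]]
print(sad(cd_left, cd_shifted)) # 16: every one of the 8 pixels differs by 2
```

The similarity data D42(1) to D42(7) are simply this quantity evaluated for each candidate pair of matching regions.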
- FIGS. 10A to 10F are diagrams for explaining the operation of the combined position estimation unit 43.
- FIGS. 10A and 10B are diagrams for explaining the operation when the document 26 is at the reference position P (the case of FIGS. 4A to 4C).
- FIGS. 10C and 10D are diagrams illustrating the operation when the document 26 is located closer to the sensor chip 21 than the reference position P (in the case of FIGS. 6A to 6C).
- FIGS. 10E and 10F are diagrams for explaining the operation when the document 26 is located farther from the sensor chip 21 than the reference position P (the case of FIGS. 8A to 8C).
- FIGS. 10A, 10C, and 10E show the matching regions CD (−7) to CD (−1) in the overlap region OV2 (k − 1, R) on the right side of the image data generated by the (k − 1)th sensor chip 21 (k − 1), and the matching regions CD (1) to CD (7) in the overlap region OV2 (k, L) on the left side of the image data generated by the kth sensor chip 21 (k).
- FIGS. 10B, 10D, and 10F show the similarity data (sums of absolute differences) D42 corresponding to the image data of the matching regions shown in FIGS. 10A, 10C, and 10E.
- in FIGS. 10A to 10F, the case where the position data D43 serving as the composite position is represented by an integer has been described; however, in an orthogonal coordinate system formed by the x axis and the D42 axis, an approximate curve may be fitted to the points given by the sums of absolute differences, and the x coordinate at which the value of D42 on the approximate curve reaches its minimum may be obtained with precision below one pixel (also referred to as sub-pixel precision) to determine the combined position.
- the similarity data D42 is calculated as the sum of absolute differences over the pixels in the matching areas set in the overlap areas; however, the sum of squared differences may be used as the similarity data D42 instead.
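The sub-pixel estimation described above, fitting a curve through the similarity values and locating its minimum, is commonly realised as a three-point parabola fit; the sketch below assumes that method (the patent only says "approximate curve") and uses invented SAD values:

```python
def subpixel_minimum(x_int, d_left, d_min, d_right):
    """Refine an integer SAD minimum at x_int using its two neighbours.

    Fits a parabola through (x_int - 1, d_left), (x_int, d_min),
    (x_int + 1, d_right) and returns the x coordinate of the vertex,
    i.e. the combined position with sub-pixel precision."""
    denom = d_left - 2 * d_min + d_right
    if denom == 0:          # flat neighbourhood: no refinement possible
        return float(x_int)
    return x_int + 0.5 * (d_left - d_right) / denom

# Invented SAD values around an integer minimum at x = 3:
print(subpixel_minimum(3, 10, 4, 6))  # 3.25 (true minimum right of 3)
print(subpixel_minimum(3, 6, 4, 6))   # 3.0 (symmetric neighbours)
```

Only the three values nearest the integer minimum are needed, so this refinement adds negligible cost to the D42(1)–D42(7) comparison.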
- FIGS. 11A to 11C are diagrams for explaining the operation of the combining unit 44 when the document 26 is at the reference position P (FIGS. 2A and 2B and FIGS. 4A to 4C).
- FIG. 11A shows the image data D41 read from the image memory 41
- FIG. 11B shows image data obtained by converting the magnification of the image data D41 in the x-axis direction (the arrangement direction of the imaging elements, i.e., of the plurality of sensor chips) with the composite magnification in the composite magnification position data D45 from the composite magnification setting unit 45 in the combining unit 44 (equal magnification in FIGS. 11A to 11C).
- FIG. 11C shows composite image data D44 obtained by combining the image data of FIG. 11B (combined at the composite position by the composite magnification position data D45 based on the composite position Pc).
- FIGS. 4A to 4C show the case where the document 26 is at the reference position P.
- the composite position Pc indicated by the position data D43 calculated by the composite position estimation unit 43 matches the composite reference position Pr on the overlap region.
- the composition reference position Pr is a predetermined position that does not depend on the position of the document 26.
- the reading width calculation unit 451 in the combining magnification setting unit 45 in the combining unit 44 calculates the width Wc in the x-axis direction between the combining positions Pc; since Wc equals the reference width Wr here, the magnification / composition position setting unit 452 sets the composition magnification to 1 (equal magnification).
- the image conversion unit 442 therefore converts the magnification of the image data DI at a magnification of 1 (that is, leaves it unchanged), and the images in the overlap region are combined to generate and output the composite image data D44.
- in other words, the composition unit 44 performs neither enlargement nor reduction of the image data shown in FIG. 11A; as shown in FIG. 11C, the image data is used as it is.
- in the combination, the combining unit 44 outputs the combined image data D44 obtained by joining the images of adjacent overlap areas through weighted addition.
- FIGS. 12A to 12C are diagrams for explaining the operation of the combining unit 44 when the document 26 is at the position of (reference position − d) mm (FIGS. 5A and 5B and FIGS. 6A to 6C).
- FIG. 12A shows the image data D41 read out from the image memory 41; FIG. 12B shows image data obtained by converting the magnification of the image data D41 in the x-axis direction (the arrangement direction of the imaging elements, i.e., of the plurality of sensor chips 21) with the combination magnification in the combination magnification position data D45 from the combination magnification setting unit 45 in the combination unit 44 (reduction in FIGS. 12A to 12C); and FIG. 12C shows the composite image data D44 obtained by combining the image data of FIG. 12B (combined at the composite position based on the composite position Pc in the combined magnification position data D45).
- FIGS. 6A to 6C show the case where the document 26 is at a position of (reference position − d) mm. Therefore, the composite position Pc indicated by the position data D43 calculated by the composite position estimation unit 43 does not coincide with the composite reference position Pr on the overlap region; the composite position Pc lies outside the composite reference position Pr. As shown in FIG. 12A, since the combined position Pc indicated by the calculated position data D43 is outside the combined reference position Pr, the width Wc in the x-axis direction between the two calculated combined positions Pc is longer than the width Wr in the x-axis direction between the two composite reference positions Pr (Wc > Wr).
- the reading width calculation unit 451 in the combining magnification setting unit 45 in the combining unit 44 calculates the width Wc in the x-axis direction between the combining positions Pc, and the magnification / composition position setting unit 452 sets the combining magnification from the reading width Wc to, for example, Wr / Wc (a value smaller than 1, indicating reduction), and converts the combined position Pc to the position it occupies after the Wr / Wc magnification conversion.
- the image conversion unit 442 then converts the magnification of the image data DI at the combined magnification Wr / Wc to correct the magnification of the image data, and the images in the overlap region are combined to generate and output the combined image data D44.
- in other words, the combining unit 44 reduces the image data shown in FIG. 12A so that, as shown in FIG. 12B, the combining position Pc matches the combining reference position Pr.
- as the reduction magnification, for example, Wr / Wc can be used.
- the combining unit 44 then combines the image data using the image data of FIG. 12B, as shown in FIG. 12C.
- in the combination, the combining unit 44 outputs the combined image data D44 obtained by joining the images of adjacent overlap areas through weighted addition.
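The width-based magnification rule used in FIGS. 11 to 13 reduces to a one-line ratio. A minimal sketch with invented Wr and Wc values; the sign of the deviation decides between reduction, equal magnification, and enlargement:

```python
def composite_magnification(wr, wc):
    """Combination magnification from the reference width Wr and the
    measured reading width Wc between the combining positions Pc."""
    return wr / wc

# Document at the reference position P: Wc == Wr, equal magnification.
print(composite_magnification(100, 100))  # 1.0
# Document nearer than P: Wc > Wr, reduction (magnification < 1).
print(composite_magnification(100, 125))  # 0.8
# Document farther than P: Wc < Wr, enlargement (magnification > 1).
print(composite_magnification(100, 80))   # 1.25
```

Converting each cell's image at this ratio brings the combining positions Pc onto the combining reference positions Pr regardless of the document distance.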
- FIGS. 13A to 13C are diagrams for explaining the operation of the combining unit 44 when the document 26 is at the position of (reference position + d) mm (FIGS. 7A and 7B and FIGS. 8A to 8C).
- FIG. 13A shows the image data D41 read from the image memory 41
- FIG. 13B shows image data obtained by converting the magnification of the image data D41 in the x-axis direction (the arrangement direction of the imaging elements, i.e., of the plurality of sensor chips 21) with the combination magnification in the combination magnification position data D45 from the combination magnification setting unit 45 (enlargement in FIGS. 13A to 13C), and FIG. 13C shows the composite image data D44 obtained by combining the image data of FIG. 13B. In this case, the combined position Pc is inside the combined reference position Pr, so the width Wc between the combined positions Pc is shorter than the reference width Wr (Wc < Wr).
- the reading width calculation unit 451 in the combining magnification setting unit 45 in the combining unit 44 calculates the width Wc in the x-axis direction between the combining positions Pc, and the magnification / composition position setting unit 452 sets the combining magnification from the reading width Wc to, for example, Wr / Wc (a value greater than 1, indicating enlargement), and converts the combined position Pc to the position it occupies after the Wr / Wc magnification conversion.
- the image conversion unit 442 then converts the magnification of the image data DI at the combined magnification Wr / Wc to correct the magnification of the image data, and the images in the overlap region are combined to generate and output the combined image data D44.
- in other words, the combining unit 44 enlarges the image data shown in FIG. 13A so that, as shown in FIG. 13B, the combining position Pc matches the combining reference position Pr.
- as the enlargement magnification, for example, Wr / Wc can be used.
- the combining unit 44 then combines the image data as shown in FIG. 13C using the image data of FIG. 13B. In the combination, the combining unit 44 outputs the combined image data D44 obtained by joining the images of adjacent overlap areas through weighted addition.
- (1-3) Effects of First Embodiment
- in the image reading apparatus 1 according to the first embodiment, the adjacent reading ranges 2A (k − 1), 2A (k) and 2A (k), 2A (k + 1) on the document 26 are made to overlap using an optical system such as the lens 24, so that image data without data loss between the sensor chips 21 can be acquired with the plurality of sensor chips 21 arranged in a straight line.
- furthermore, the image processing unit 4 estimates the composite position Pc, which is the position of the overlap area, obtains the magnification of the read image based on the composite position Pc, performs magnification conversion of the image data in the x-axis direction (equal magnification, enlargement, or reduction), and combines the image data after the magnification conversion; therefore, even when the distance from the reference position P to the document 26 changes, an image can be obtained in which the joints (compositing positions) of adjacent images are inconspicuous.
- Embodiment 2. A part of the functions of the image reading apparatus 1 in the first embodiment and the third embodiment described later may be realized by a hardware configuration, or may be realized by a computer program executed by a microprocessor including a CPU (Central Processing Unit). When a part of the functions of the image reading apparatus 1 is realized by a computer program, the microprocessor loads the computer program from a computer-readable storage medium and executes it, thereby realizing that part of the functions of the image reading apparatus 1.
- FIG. 14 is a block diagram showing a hardware configuration that enables a part of the functions of the image reading apparatus to be realized by a computer program.
- the image reading apparatus 1a includes an imaging unit 2, an A / D conversion unit 3, an arithmetic device 5, and a transport unit 6 for transporting the document in the y-axis direction (shown in FIGS. 2A and 2B).
- the arithmetic device 5 includes a processor 51 including a CPU, a RAM (Random Access Memory) 52, a nonvolatile memory 53, a large-capacity storage unit 54, and a bus 55.
- as the nonvolatile memory 53, for example, a flash memory can be used.
- as the large-capacity storage unit 54, for example, a hard disk (magnetic disk) device, an optical disk storage device, or a semiconductor storage device can be used.
- the transport unit 6 can also be configured as a mechanism that moves the imaging unit 2.
- the A / D conversion unit 3 has the same function as the A / D conversion unit 3 in FIG. 1: it converts the electric signal SI output from the imaging unit 2 into digital image data DI and transfers the data via the processor 51 to the RAM 52 (which functions as the image memory 41).
- the processor 51 can realize the functions of the image processing unit 4 in the first embodiment by loading a computer program from the nonvolatile memory 53 or the large-capacity storage unit 54 and executing the loaded computer program.
- FIG. 15 is a flowchart schematically illustrating an example of a processing procedure performed by the arithmetic device 5 of the image reading apparatus 1a according to the second embodiment.
- the processor 51 first executes the similarity calculation (step S11); this process has the same contents as the process of the similarity calculation unit 42 in FIG. 1.
- next, the processor 51 executes the combined position estimation process (step S12); this process has the same contents as the process of the combined position estimation unit 43 in FIG. 1.
- finally, the processor 51 performs magnification conversion on the image data stored in the RAM 52 with the magnification based on the combined position obtained in step S12 (that is, performs enlargement, reduction, or equal-magnification processing), combines the image data after the magnification conversion, and outputs the combined image data (step S13).
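The three steps of FIG. 15 can be sketched as a minimal pipeline. The helper functions below are simplified stand-ins invented for illustration; the real processing is that of the similarity calculation unit 42, the combined position estimation unit 43, and the combining unit 44 of Embodiment 1:

```python
def step_similarity(left_overlap, right_overlap):
    # S11: similarity data as a sum of absolute differences (simplified
    # here to whole 1-D strips instead of the 2x4 matching regions)
    return sum(abs(a - b) for a, b in zip(left_overlap, right_overlap))

def step_estimate_position(sad_per_offset):
    # S12: the combined position is the candidate offset whose
    # similarity data (SAD) is smallest
    return min(range(len(sad_per_offset)), key=sad_per_offset.__getitem__)

def step_convert_and_combine(row, magnification):
    # S13: nearest-neighbour magnification conversion of one row in the
    # x direction (combination of the overlap regions omitted for brevity)
    n = round(len(row) * magnification)
    return [row[min(int(i / magnification), len(row) - 1)] for i in range(n)]

print(step_similarity([10, 10], [8, 12]))           # 4
print(step_estimate_position([9, 4, 1, 5, 8]))      # 2
print(step_convert_and_combine([1, 2, 3, 4], 0.5))  # [1, 3]
```

Running S11 over all candidate offsets, feeding the resulting list to S12, and converting each cell's rows in S13 mirrors the flowchart's order of operations.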
- according to the image reading apparatus 1a of the second embodiment, it is possible to eliminate the loss of data at corresponding positions between adjacent sensor chips and to improve the quality of the read image.
- points other than the above are the same as those in the first embodiment.
- in the first embodiment, the composition magnification setting unit 45 in the composition unit 44 in the image processing unit 4 is configured as shown in FIG. 17: the reading width Wc of the reading region in the main scanning direction (the width between the positions at both ends) is calculated from the position data D43 at both ends of each cell, and the composition magnification and the composition position are set from the reading width Wc and the width Wr between the composition reference positions Pr.
- instead, using a composite magnification setting unit 45b as shown in FIG. 18, the overlap amount at the time of reading may be calculated from the position data D43, and the composition magnification and the composition position may be set based on the difference between that amount and the overlap amount corresponding to the composite reference position Pr (referred to as the reference overlap amount).
- FIG. 18 is a block diagram showing the configuration of the composition magnification setting unit 45b, which replaces the composition magnification setting unit 45 (FIG. 16) in the composition unit 44 in the image processing unit 4 of the image reading apparatus 1 according to Embodiment 3 of the present invention.
- in FIG. 18, components that are the same as or correspond to those of the first embodiment described with reference to FIGS. 1, 16, and 17 are denoted by the same reference numerals.
- the composition magnification setting unit 45b in the composition unit 44 in the image processing unit 4 according to the third embodiment includes an overlap amount calculation unit 453 and a magnification / composition position setting unit 452b.
- the other configurations and operations in the image processing unit 4 and the synthesis unit 44 are the same as those described in the first embodiment, and a detailed description thereof is omitted.
- the synthesizing unit 44 sets the composition magnification and composition position in the main scanning direction of the image data of each reading range from the deviation in the main scanning direction (the x-axis direction) between the position data D43 from the synthesizing position estimation unit 43 and the synthesizing reference position Pr on the overlap region, converts the image by this composition magnification, and combines the images of the overlap areas according to the composition position, thereby generating and outputting composite image data D44.
- the composite reference position Pr on the overlap region is set from the position of the overlap region OV2 at the reference position P, and the width OVWr of the overlap region OV2 in the main scanning direction (also referred to as the reference overlap amount) is likewise a predetermined value preset by a user or the like (not shown).
- the composition magnification setting unit 45b in the composition unit 44 calculates the width OVWc of the overlap region in the main scanning direction (the combined overlap amount) from the position data D43 from the composition position estimation unit 43, sets the composition magnification and composition position in the main scanning direction of the image data of each reading range according to the difference between OVWc and the reference overlap amount OVWr at the reference position P given by the composite reference position Pr (that is, the deviation of the composition position of the reading area from the reference position), and outputs composite magnification position data D45 indicating the magnification and the composite position.
- the overlap amount calculation unit 453 in the combination magnification setting unit 45b calculates, from the position data D43 from the combination position estimation unit 43, the width in the main scanning direction (the combined overlap amount) OVWc over which the overlap regions at both ends of each cell's reading region actually overlap.
- that is, the two combined overlap amounts OVWc1 and OVWc2 at both ends of each reading area are calculated, and the combined overlap amount OVWc refers to the pair OVWc1 and OVWc2.
- the combined overlap amount OVWc (OVWc1 and OVWc2) can be calculated from the position on the reading area of the position data D43 obtained in the overlapping area at both ends of the image data in each reading area.
- specifically, the position indicated by the position data D43 is converted into a distance (position) in the main scanning direction from the end of the reading area of each cell, and this distance is taken as the combined overlap amount.
- the magnification / composition position setting unit 452b in the composition magnification setting unit 45b sets the composition magnification from the difference between the composition overlap amount OVWc from the overlap amount calculation unit 453 and the reference overlap amount OVWr at the reference position P given by the composition reference position Pr (that is, the deviation of the composition position of the reading area from the reference position), further obtains as the composite position the position to which the position data D43 from the composition position estimation unit 43 is converted (moved) by the set magnification, and outputs the result as composite magnification position data D45 indicating the magnification at the time of composition and the composite position.
- since the two overlap amounts OVWc1 and OVWc2 at both ends of the reading area are calculated as the combined overlap amount OVWc, two conversion magnifications are obtained from the ratio of each overlap amount to the reference overlap amount OVWr, and the combined magnification of the reading area of the corresponding cell is obtained from the average value of the two conversion magnifications at both ends.
- alternatively, the magnification may be obtained for each position in the main scanning direction, or the minimum or maximum of the two conversion magnifications at both ends may be used as the magnification of the corresponding cell; as long as the magnification can be set from the two overlap amounts OVWc1 and OVWc2 and the reference overlap amount OVWr, the same effect can be obtained.
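The overlap-amount-based magnification options just listed (average of the two end ratios, or their minimum or maximum) can be sketched as follows; the amounts are invented for the example:

```python
def overlap_magnification(ovwc1, ovwc2, ovwr, mode="average"):
    """Combination magnification from the measured overlap amounts
    OVWc1 and OVWc2 at both ends of a cell's reading area and the
    reference overlap amount OVWr.

    'average' follows the averaging described above; 'min' and 'max'
    are the alternatives also mentioned in the text."""
    m1, m2 = ovwc1 / ovwr, ovwc2 / ovwr
    if mode == "average":
        return (m1 + m2) / 2
    if mode == "min":
        return min(m1, m2)
    return max(m1, m2)

# Invented amounts: reference overlap 8 px, measured 6 and 7 px
# (document nearer than the reference position P, so magnification < 1):
print(overlap_magnification(6, 7, 8))           # 0.8125
print(overlap_magnification(6, 7, 8, "min"))    # 0.75
print(overlap_magnification(6, 7, 8, "max"))    # 0.875
```

Averaging the two end ratios is the choice the text singles out for avoiding large distortion when the two ends give different magnifications.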
- the image conversion unit 442 of the combining unit 44 converts the magnification of the image of each cell's reading area in the image data DI stored in the image memory 41, based on the combined magnification position data D45 output from the magnification / composite position setting unit 452b in the combined magnification setting unit 45b, thereby correcting the magnification of the image data in each reading range corresponding to a sensor chip; the overlap area connecting unit 443 then combines the images of the overlap areas in the magnification-converted image data D442 whose magnification has been corrected.
- (3-2) Operation of Embodiment 3 <Operations of Combining Unit 44 and Combining Magnification Setting Unit 45b> The operations of the combining unit 44 and the combining magnification setting unit 45b will be described focusing on the operation of the combining magnification setting unit 45b.
- when the document 26 is at the reference position P, the overlap amount calculation unit 453 in the combination magnification setting unit 45b in the combination unit 44 calculates the combination overlap amount OVWc; since OVWc equals the reference overlap amount OVWr, the magnification / composition position setting unit 452b sets the composition magnification to 1 (equal magnification), the magnification of the image data DI is converted at this equal magnification, and the images in the overlap region are combined to generate and output the composite image data D44.
- in other words, the composition unit 44 performs neither enlargement nor reduction of the image data shown in FIG. 11A; as shown in FIG. 11C, the image data is used as it is.
- in the combination, the combining unit 44 outputs the combined image data D44 obtained by joining the images of adjacent overlap areas through weighted addition.
- FIGS. 19A to 19C are diagrams for explaining the operation of the combining unit 44 when the document 26 is at the position of (reference position − d) mm (FIGS. 5A and 5B and FIGS. 6A to 6C).
- FIGS. 19A to 19C show the same operations as FIGS. 12A to 12C, but differ in that the combination magnification and combination position are obtained not from the reading width Wc but from the combined overlap amounts OVWc (OVWc1 and OVWc2) at both ends of each cell's reading area.
- FIG. 19A shows the image data D41 read from the image memory 41; FIG. 19B shows image data obtained by converting the magnification of the image data D41 in the x-axis direction (the arrangement direction of the imaging elements, i.e., of the plurality of sensor chips 21) with the combination magnification in the combination magnification position data D45 from the combination magnification setting unit 45b in the combination unit 44 (reduction in FIGS. 19A to 19C); and FIG. 19C shows the composite image data D44 obtained by combining the image data of FIG. 19B (combined at the composite position based on the composite position Pc in the combined magnification position data D45).
- in FIG. 19A, the document 26 is at the position of (reference position − d) mm. Therefore, the composite position Pc indicated by the position data D43 calculated by the composite position estimation unit 43 does not coincide with the composite reference position Pr on the overlap region; the composite position Pc lies outside the composite reference position Pr. Since the combined position Pc indicated by the calculated position data D43 is outside the combined reference position Pr, the two overlap amounts OVWc1 and OVWc2 are shorter than the reference overlap amount OVWr (OVWc1 < OVWr, OVWc2 < OVWr).
- the overlap amount calculation unit 453 of the combination magnification setting unit 45b in the combination unit 44 calculates the combination overlap amounts OVWc (OVWc1, OVWc2), and the magnification / composition position setting unit 452b sets, from the combined overlap amounts OVWc from the overlap amount calculation unit 453 and the reference overlap amount OVWr, the two conversion magnifications at both ends to, for example, OVWc1 / OVWr and OVWc2 / OVWr (values smaller than 1, indicating reduction).
- the composite magnification OVWc / OVWr of the reading area in each cell is then set from the average value of the two conversion magnifications, and the composite position Pc is converted to the position it occupies after the magnification conversion. The image conversion unit 442 then converts the image data DI at the combined magnification OVWc / OVWr, corrects the magnification of the image data, and the images in the overlap region are combined to generate and output the combined image data D44.
- in other words, the combining unit 44 reduces the image data shown in FIG. 19A so that, as shown in FIG. 19B, the combining position Pc matches the combining reference position Pr.
- as the reduction magnification, for example, OVWc / OVWr can be used.
- the combining unit 44 then combines the image data as shown in FIG. 19C using the image data of FIG. 19B. In the combination, the combining unit 44 outputs the combined image data D44 obtained by joining the images of adjacent overlap areas through weighted addition.
- FIGS. 20A to 20C are diagrams for explaining the operation of the combining unit 44 when the document 26 is at the position of (reference position + d) mm (FIGS. 7A and 7B and FIGS. 8A to 8C).
- FIGS. 20A to 20C show the same operations as FIGS. 13A to 13C, but differ in that the combination magnification and combination position are obtained not from the reading width Wc but from the combined overlap amounts OVWc (OVWc1 and OVWc2) at both ends.
- FIG. 20A shows the image data D41 read from the image memory 41; FIG. 20B shows image data obtained by converting the magnification of the image data D41 in the x-axis direction (the arrangement direction of the imaging elements, i.e., of the plurality of sensor chips 21) with the combination magnification in the combination magnification position data D45 from the combination magnification setting unit 45b in the combination unit 44 (enlargement in FIGS. 20A to 20C); and FIG. 20C shows the composite image data D44 obtained by combining the image data of FIG. 20B (combined at the composite position based on the composite position Pc in the combined magnification position data D45).
- in FIG. 20A, the document 26 is at the position of (reference position + d) mm. Therefore, the composite position Pc indicated by the position data D43 calculated by the composite position estimation unit 43 does not coincide with the composite reference position Pr on the overlap region; the composite position Pc lies inside the composite reference position Pr. Accordingly, the two overlap amounts OVWc1 and OVWc2 are longer than the reference overlap amount OVWr (OVWc1 > OVWr, OVWc2 > OVWr).
- The overlap amount calculation unit 453 of the combining magnification setting unit 45b in the combining unit 44 calculates the combined overlap amounts OVWc (OVWc1, OVWc2), and the magnification/combining position setting unit 452b sets, from the combined overlap amounts OVWc supplied by the overlap amount calculation unit 453 and the reference overlap amount OVWr, the two conversion magnifications at the two ends to, for example, OVWc1/OVWr and OVWc2/OVWr (values greater than 1, indicating enlargement).
- The combining magnification OVWc/OVWr for the reading area of each cell is set from the average of these magnifications, and the combining position Pc is converted to the position it occupies after the magnification conversion. The image conversion unit 442 then converts the image data DI at the combining magnification OVWc/OVWr to correct the magnification of the image data, joins the images in the overlap regions, and generates and outputs the combined image data D44.
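The per-cell magnification correction described here, averaging OVWc1/OVWr and OVWc2/OVWr into one combining magnification OVWc/OVWr and resampling the cell in the x direction, can be sketched as follows (a Python/NumPy illustration; the linear interpolation is an assumption, since the publication does not prescribe a resampling method):

```python
import numpy as np

def resample_cell(cell, ovwc1, ovwc2, ovwr):
    """Rescale one cell's image in x by the averaged combining magnification OVWc/OVWr."""
    mag = ((ovwc1 / ovwr) + (ovwc2 / ovwr)) / 2.0   # average of the two end magnifications
    h, w = cell.shape
    new_w = max(1, int(round(w * mag)))
    xs = np.linspace(0, w - 1, new_w)               # sample positions in the original cell
    out = np.empty((h, new_w))
    for r in range(h):                              # 1-D linear interpolation per row
        out[r] = np.interp(xs, np.arange(w), cell[r].astype(np.float64))
    return mag, out
```

With OVWc1 = OVWc2 = OVWr the magnification is 1 and the cell passes through at unity size, matching the case where the document is at the reference position.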
- The combining unit 44 enlarges the image data shown in FIG. 20A so that, as shown in FIG. 20B, the combining position Pc coincides with the combining reference position Pr. As the enlargement magnification, for example, OVWc/OVWr can be used.
- The combining unit 44 then combines the image data as shown in FIG. 20C using the image data of FIG. 20B. In the combining, the combining unit 44 outputs combined image data D44 in which the images of adjacent overlap areas are joined by weighted addition.
- Even when the combined overlap amounts OVWc1 and OVWc2 at the two ends of the reading area shown in FIGS. 19A to 19C and FIGS. 20A to 20C yield a different conversion magnification on each side, the magnification/combining position setting unit 452b determines the combining magnification OVWc/OVWr for the reading area of each cell from the average of the two conversion magnifications, so a magnification that does not cause large distortion can be set.
- 《3-3》 Effects of Embodiment 3
- As described above, according to the image reading apparatus 1 of Embodiment 3, the combining magnification setting unit 45b calculates the combined overlap amount at the time of reading from the position data D43 and sets the combining magnification and the combining position from the difference between that amount and the reference overlap amount. The image data is therefore magnification-converted (unity magnification, enlargement, or reduction) in the x-axis direction, and by combining the magnification-converted image data, the joint (combining position) between adjacent images can be made inconspicuous.
- The image reading apparatus 1 according to Embodiment 3 may also be realized by a computer program executed by the microprocessor in the arithmetic device 5 of the image reading apparatus 1a according to Embodiment 2, in which case the same effects as described above are obtained.
- The present invention is applicable to copiers, facsimile machines, scanners, and other apparatuses having a function of scanning an object to be read, such as a document, and acquiring image information.
- 1, 1a image reading apparatus; 2 imaging unit; 3 A/D conversion unit; 4 image processing unit; 5 arithmetic device; 6 transport unit; 21, 21(k) sensor chip; 22, 22(k) lens; 23 aperture stop; 23a opening; 24, 24(k) lens; 25 glass surface; 26 document (object to be read); 27 illumination; 28, 28(k) range of light traveling from the document to the imaging unit; 29, 29(k) range on the document of the light traveling from the document to the imaging unit; 2A, 2A(k) reading range; 41 image memory; 42 similarity calculation unit; 43 combining position estimation unit; 44 combining unit; 51 processor; 52 RAM; 53 nonvolatile memory; 54 mass storage; 211, 212, 213 imaging element; L1, L2, L3 width of overlap region; OV1(k-1,R) overlap region on the right side of the (k-1)-th image data; OV1(k,L) overlap region on the left side of the k-th image data; P reference position; Pc combining position; Pr combining reference position; x main scanning direction
Abstract
Description
《1-1》 Configuration of Embodiment 1
FIG. 1 is a functional block diagram showing the schematic configuration of an image reading apparatus 1 according to Embodiment 1 of the present invention. As shown in FIG. 1, the image reading apparatus 1 includes an imaging unit 2, an A/D (analog/digital) conversion unit 3, and an image processing unit 4. The image processing unit 4 includes an image memory 41, a similarity calculation unit 42, a combining position estimation unit 43, and a combining unit 44. The image reading apparatus 1 may further include a transport unit that transports the document and a control unit that controls the entire apparatus. The transport unit and a processor serving as the control unit are described in Embodiment 2 below.
〈Operation of the Imaging Unit 2〉
The imaging unit 2 photoelectrically converts the light reflected by the document 26 and outputs the resulting signal SI to the A/D conversion unit 3. The A/D conversion unit 3 converts the signal SI from an analog signal to a digital signal and outputs image data DI based on the digital signal to the image processing unit 4.
FIG. 9 is a diagram for explaining the operation of the similarity calculation unit 42 of the image processing unit 4 shown in FIG. 1. In FIG. 9, OV2(k-1,R) denotes the right-side overlap region (8×4 pixels) in the x-axis direction of the image data generated by the (k-1)-th sensor chip 21(k-1). OV2(k,L) denotes the left-side overlap region (8×4 pixels) of the image data generated by the k-th sensor chip 21(k), which is adjacent to the (k-1)-th sensor chip 21(k-1). CD(-1) to CD(-7) denote 8×2-pixel matching regions extracted from the overlap region OV2(k-1,R); the identification numbers -1 to -7 in parentheses are the matching region numbers. Likewise, CD(1) to CD(7) denote 8×2-pixel matching regions extracted from the overlap region OV2(k,L); the identification numbers 1 to 7 in parentheses are the matching region numbers.
FIGS. 10(a) to 10(f) are diagrams for explaining the operation of the combining position estimation unit 43. FIGS. 10(a) and 10(b) explain the operation when the document 26 is at the reference position P (the case of FIGS. 4(a) to 4(c)). FIGS. 10(c) and 10(d) explain the operation when the document 26 is closer to the sensor chips 21 than the reference position P (the case of FIGS. 6(a) to 6(c)). FIGS. 10(e) and 10(f) explain the operation when the document 26 is farther from the sensor chips 21 than the reference position P (the case of FIGS. 8(a) to 8(c)). FIGS. 10(a), 10(c), and 10(e) show the positional relationship between the matching regions CD(-7) to CD(-1) in the right-side overlap region OV2(k-1,R) of the image data generated by the (k-1)-th sensor chip 21(k-1) and the matching regions CD(1) to CD(7) in the left-side overlap region OV2(k,L) of the image data generated by the k-th sensor chip 21(k). FIGS. 10(b), 10(d), and 10(f) show the similarity data (sums of absolute differences) D42 corresponding to the image data of the matching regions shown in FIGS. 10(a), 10(c), and 10(e).
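The matching-region search described for FIGS. 9 and 10 can be sketched as follows: a patch taken from one overlap region is compared with candidate patches in the adjacent overlap region, and the pair with the smallest sum of absolute differences (the highest similarity) determines the combining position. This Python/NumPy sketch is a simplified one-sided search, whereas the publication slides numbered matching regions in both overlap regions; the function name and patch handling are illustrative assumptions.

```python
import numpy as np

def best_offset(right_ov, left_ov, patch_w=2):
    """Locate the column offset in `left_ov` whose patch best matches the seam patch.

    right_ov: overlap region at the right edge of chip k-1 (H x W array)
    left_ov:  overlap region at the left edge of chip k (H x W array)
    Returns (offset with the smallest SAD, list of all SAD values).
    """
    h, w = right_ov.shape
    template = right_ov[:, -patch_w:].astype(np.int64)    # fixed patch at the seam
    sads = []
    for s in range(w - patch_w + 1):                      # slide over the other overlap
        cand = left_ov[:, s:s + patch_w].astype(np.int64)
        sads.append(int(np.abs(template - cand).sum()))   # smaller SAD = higher similarity
    return int(np.argmin(sads)), sads
```

As also recited in the claims, a sum of squared differences works the same way; only the per-pixel cost changes.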
Next, the operation of the combining unit 44 is described.
FIGS. 11(a) to 11(c) are diagrams explaining the operation of the combining unit 44 when the document 26 is at the reference position P (the case of FIGS. 2(a) and 2(b) and FIGS. 4(a) to 4(c)). FIG. 11(a) shows the image data D41 read from the image memory 41. FIG. 11(b) shows image data obtained by converting the magnification of the image data D41 in the x-axis direction (the arrangement direction of the imaging elements and of the plurality of sensor chips) using the combining magnification in the combining magnification position data D45 from the combining magnification setting unit 45 in the combining unit 44 (unity magnification in FIGS. 11(a) to 11(c)). FIG. 11(c) shows combined image data D44 obtained by combining the image data of FIG. 11(b) (joined at the combining position specified by the combining magnification position data D45 based on the combining position Pc).
As described above, according to the image reading apparatus 1 of Embodiment 1, by using optical systems such as the lenses 24 to make the adjacent reading ranges 2A(k-1), 2A(k) and the adjacent reading ranges 2A(k), 2A(k+1) on the document 26 overlap, image data free of missing data between adjacent sensor chips 21 can be acquired with a configuration in which the plurality of sensor chips 21 are arranged in a straight line.
Part of the functions of the image reading apparatus 1 in Embodiment 1 above and in Embodiment 3 described later may be realized by a hardware configuration, or by a computer program executed by a microprocessor including a CPU (Central Processing Unit). When part of the functions of the image reading apparatus 1 is realized by a computer program, the microprocessor can realize that part of the functions by loading the computer program from a computer-readable storage medium and executing it.
In the image reading apparatus 1 described in Embodiment 1 above, the combining magnification setting unit 45 in the combining unit 44 of the image processing unit 4 is configured as shown in FIG. 17: it calculates the reading width Wc of the reading area in the main scanning direction (the width in the main scanning direction between the positions at the two ends) from the position data D43 at the two ends of each cell, and sets the combining magnification and the combining position from this reading width Wc and the width Wr between the combining reference positions Pr. Alternatively, a combining magnification setting unit 45b as shown in FIG. 18 can be used to calculate the overlap amount at the time of reading from the position data D43 and to set the combining magnification and the combining position from the difference between this amount and the overlap amount corresponding to the combining reference positions Pr (called the reference overlap amount).
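The two alternatives just described can be reduced to two small formulas. Following the inequality rules recited in the claims, the width-based magnification is taken here as the ratio of the reference width Wr to the measured combined reading width Wc, and the overlap-based magnification as the averaged ratio of the combined overlap amounts to the reference overlap amount OVWr; both closed forms are inferences from the stated rules rather than explicit equations in the text.

```python
def width_based_magnification(wc, wr):
    """Embodiment 1 style: Wc == Wr -> 1, Wc > Wr -> < 1 (reduce), Wc < Wr -> > 1 (enlarge)."""
    return wr / wc

def overlap_based_magnification(ovwc1, ovwc2, ovwr):
    """Embodiment 3 style: average of the per-end ratios OVWc1/OVWr and OVWc2/OVWr."""
    return ((ovwc1 / ovwr) + (ovwc2 / ovwr)) / 2.0
```

Both functions return 1.0 when the document is at the reference position, so the unity-magnification case falls out of the same formula with no special handling.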
FIG. 18 is a block diagram showing the configuration of the combining magnification setting unit 45b used in place of the combining magnification setting unit 45 (FIG. 16) in the combining unit 44 of the image processing unit 4 in the image reading apparatus 1 according to Embodiment 3 of the present invention. In FIG. 18, components identical or corresponding to those of Embodiment 1 described with reference to FIGS. 1, 16, and 17 are given the same reference characters.
〈Operation of the Combining Unit 44 and the Combining Magnification Setting Unit 45b〉
The operation of the combining unit 44 and the combining magnification setting unit 45b is described below, focusing on the combining magnification setting unit 45b. When the document 26 is at the reference position P, as shown in FIGS. 11(a) to 11(c), the combining position Pc indicated by the position data D43 coincides with the combining reference position Pr in the overlap region, so the two combined overlap amounts OVWc1 and OVWc2 coincide with the reference overlap amount OVWr (OVWc1 = OVWc2 = OVWr). The overlap amount calculation unit 453 in the combining magnification setting unit 45b of the combining unit 44 therefore calculates the combined overlap amount OVWc, and the magnification/combining position setting unit 452b, from the combined overlap amount OVWc supplied by the overlap amount calculation unit 453 and the reference overlap amount OVWr, sets the combining magnification to, for example, OVWc/OVWr = 1 and uses the combining position Pc as the combining position without change. The image conversion unit 442 converts the image data DI at a combining magnification of 1 (unity magnification) to correct the magnification of the image data, joins the images in the overlap regions, and generates and outputs the combined image data D44.
As described above, according to the image reading apparatus 1 of Embodiment 3, the combining magnification setting unit 45b calculates the combined overlap amount at the time of reading from the position data D43 and sets the combining magnification and the combining position from the difference between that amount and the reference overlap amount. By magnification-converting the image data in the x-axis direction (unity magnification, enlargement, or reduction) and combining the magnification-converted image data, the joint (combining position) between adjacent images can be made inconspicuous even when the distance from the reference position P to the document 26 changes. In addition, missing data at positions corresponding to the gaps between adjacent sensor chips is eliminated, improving the quality of the read image.
Claims (13)
- An image reading apparatus comprising: N sensor chips (N being an integer of 2 or more) arranged in a first direction, each having a plurality of imaging elements arranged in the first direction; N optical systems that respectively form reduced images, on the N sensor chips, of N reading ranges aligned in the first direction on a document; and an image processing unit that, using the image data generated by the N sensor chips, namely the image data of overlap regions (regions where adjacent reading ranges among the image data of the N reading ranges overlap), obtains the positions in the first direction of the overlap regions of the adjacent reading ranges, obtains from those positions the magnification of the read image and combining positions (positions where two sets of image data are joined), performs image processing that corrects the magnification in the first direction of the image data of the N reading ranges, and joins the image data of the N reading ranges on which the image processing has been performed to generate combined image data.
- The image reading apparatus according to claim 1, wherein each of the N optical systems comprises: a first lens that optically reduces the light reflected by the document; an aperture stop that passes part of the light reduced by the first lens; and a second lens that focuses the light that has passed through the aperture stop onto one of the N sensor chips.
- The image reading apparatus according to claim 1, wherein the image processing unit comprises: a similarity calculation unit that, using the image data generated by the N sensor chips, namely the image data of the overlap regions (regions where adjacent reading ranges among the image data of the N reading ranges overlap), calculates similarities indicating the degree of similarity between the image data of matching regions set within the overlap regions; a combining position estimation unit that estimates, from the similarities calculated by the similarity calculation unit, the combining positions at which the two sets of image data of the adjacent reading ranges are joined; and a combining unit that, based on the combining positions estimated by the combining position estimation unit, sets a magnification and a combining position for each of the image data of the N reading ranges, converts the widths in the first direction of the image data of the N reading ranges, and combines the converted image data to generate combined image data.
- The image reading apparatus according to claim 3, wherein the combining position estimation unit repeats the process of calculating the similarity between the image data of the matching regions while changing the positions of the matching regions, thereby acquiring a plurality of similarities, and determines the combining position based on the highest of the acquired similarities.
- The image reading apparatus according to claim 3 or 4, wherein the distance of the matching region in the overlap region of the k-th reading range (k being an integer from 1 to N) among the image data of the N reading ranges from the edge of the overlap region of the k-th reading range is equal to the distance of the matching region in the overlap region of the (k+1)-th reading range from the edge of the overlap region of the (k+1)-th reading range.
- The image reading apparatus according to claim 3, wherein the similarity is a value based on the sum of per-pixel absolute differences over the matching regions set within the overlap regions, the similarity being higher as the sum of absolute differences is smaller.
- The image reading apparatus according to claim 3, wherein the similarity is a value based on the sum of per-pixel squared differences over the matching regions set within the overlap regions, the similarity being higher as the sum of squared differences is smaller.
- The image reading apparatus according to claim 3, wherein the combining unit comprises: a combining magnification setting unit that sets a combining magnification for each of the image data of the N reading ranges based on the combining positions estimated by the combining position estimation unit and on predetermined combining reference positions, and outputs the combining magnification and a combining connection position; an image conversion unit that converts the widths in the first direction of the image data of the N reading ranges using the combining magnification set by the combining magnification setting unit; and an overlap region connection unit that aligns the combining connection positions set by the combining magnification setting unit and combines the image data converted by the image conversion unit to generate combined image data.
- The image reading apparatus according to claim 8, wherein the combining magnification setting unit comprises, for each of the image data of the N reading ranges: a reading width calculation unit that obtains a combined reading width from the width in the first direction between the combining position at one end and the combining position at the other end, using the combining positions estimated by the combining position estimation unit; and a magnification/combining position setting unit that calculates a combining magnification from the width in the first direction between the two ends given by predetermined combining reference positions of the reading range and the combined reading width from the reading width calculation unit, obtains the combining connection position as the position into which the combining position estimated by the combining position estimation unit is converted by the combining magnification, and sets the combining magnification and the combining connection position.
- The image reading apparatus according to claim 9, wherein the magnification/combining position setting unit sets the combining magnification and the combining connection position from the ratio between the combined reading width from the reading width calculation unit and the width of the reading range given by the predetermined combining reference positions; sets the combining magnification to 1 when the combining positions at the two ends in the first direction in each of the image data of the N reading ranges are at the same positions as the predetermined combining reference positions and the combined reading width from the reading width calculation unit is equal to the width of the reading range given by the predetermined combining reference positions; sets the combining magnification to a value smaller than 1 when the combining positions at the two ends in the first direction are outside the combining reference positions and the combined reading width from the reading width calculation unit is larger than the width of the reading range given by the predetermined combining reference positions; and sets the combining magnification to a value larger than 1 when the combining positions at the two ends in the first direction are inside the combining reference positions and the combined reading width from the reading width calculation unit is smaller than the width of the reading range given by the predetermined combining reference positions.
- The image reading apparatus according to claim 8, wherein the combining magnification setting unit comprises: an overlap amount calculation unit that calculates, for each of the two ends in the first direction of the image data of the N reading ranges, the width of the overlap region of the reading range from the combining positions estimated by the combining position estimation unit, thereby obtaining combined overlap amounts; and a magnification/combining position setting unit that calculates the combining magnification for the N reading ranges from the combined overlap amounts at the two ends of the reading range from the overlap amount calculation unit and the predetermined overlap amounts at the combining reference positions of the reading range, obtains the combining connection position as the position into which the combining position estimated by the combining position estimation unit is converted by the combining magnification, and sets the combining magnification and the combining connection position.
- The image reading apparatus according to claim 11, wherein the magnification/combining position setting unit sets the combining magnification and the combining connection position from the ratio between the combined overlap amount from the overlap amount calculation unit and the overlap amount given by the predetermined combining reference position; sets the combining magnification to 1 when the combined overlap amount at one end in the first direction in each of the image data of the N reading ranges is equal to the overlap amount at the predetermined combining reference position; sets the combining magnification to a value smaller than 1 when the combined overlap amount at that end is smaller than the overlap amount at the predetermined combining reference position; and sets the combining magnification to a value larger than 1 when the combined overlap amount at that end is larger than the overlap amount at the predetermined combining reference position.
- An image reading method executed by an image reading apparatus comprising N sensor chips (N being an integer of 2 or more) arranged in a first direction, each having a plurality of imaging elements arranged in the first direction, and N optical systems that respectively form reduced images, on the N sensor chips, of N reading ranges aligned in the first direction on a document, the method comprising: a step of using the image data generated by the N sensor chips, namely the image data of overlap regions (regions where adjacent reading ranges among the image data of the N reading ranges overlap), to obtain the positions in the first direction of the overlap regions of the adjacent reading ranges; and a step of obtaining, from those positions, the magnification of the read image and combining positions (positions where two sets of image data are joined), performing image processing that corrects the magnification in the first direction of the image data of the N reading ranges, and joining the image data of the N reading ranges on which the image processing has been performed to generate combined image data.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016542294A JP6076552B1 (ja) | 2015-03-16 | 2016-02-25 | Image reading apparatus and image reading method |
DE112016001223.3T DE112016001223T5 (de) | 2015-03-16 | 2016-02-25 | Image reading device and image reading method |
US15/547,618 US10326908B2 (en) | 2015-03-16 | 2016-02-25 | Image reading apparatus and image reading method |
CN201680015429.5A CN107409164B (zh) | 2015-03-16 | 2016-02-25 | Image reading apparatus and image reading method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-051635 | 2015-03-16 | ||
JP2015051635 | 2015-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016147832A1 true WO2016147832A1 (ja) | 2016-09-22 |
Family
ID=56918677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/055600 WO2016147832A1 (ja) | 2015-03-16 | 2016-02-25 | Image reading apparatus and image reading method |
Country Status (5)
Country | Link |
---|---|
US (1) | US10326908B2 (ja) |
JP (1) | JP6076552B1 (ja) |
CN (1) | CN107409164B (ja) |
DE (1) | DE112016001223T5 (ja) |
WO (1) | WO2016147832A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2017183912A (ja) * | 2016-03-29 | 2017-10-05 | コニカミノルタ株式会社 | Image reading apparatus, method of correcting a read image in the apparatus, and read-image correction program |
US20180007232A1 (en) * | 2015-04-09 | 2018-01-04 | Mitsubishi Electric Corporation | Image combination device, image reading device and image combination method |
EP3352443A1 (en) * | 2017-01-23 | 2018-07-25 | Seiko Epson Corporation | Scanner, scan program, and method of producing scan data |
- CN108347542A (zh) * | 2017-01-23 | 2018-07-31 | 精工爱普生株式会社 | Scanner, scan program, and method of producing scan data |
- JP2018121102A (ja) * | 2017-01-23 | 2018-08-02 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- WO2018198680A1 (ja) * | 2017-04-27 | 2018-11-01 | 三菱電機株式会社 | Image reading device |
US10506124B2 (en) | 2016-05-13 | 2019-12-10 | Mitsubishi Electric Corporation | Image reading apparatus |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2021044573A1 (ja) * | 2019-09-05 | 2021-03-11 | 三菱電機株式会社 | Image reading device |
- KR20230000673A (ko) | 2021-06-25 | 2023-01-03 | 삼성전자주식회사 | Image processing device for noise reduction using dual conversion gain and operating method thereof |
- CN114608488B (zh) * | 2022-01-27 | 2024-04-09 | 深圳市无限动力发展有限公司 | Method, apparatus, device, and medium for measuring cleaning coverage of multiple rooms |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2010141558A (ja) * | 2008-12-11 | 2010-06-24 | Mitsubishi Electric Corp | Image reading device |
- JP2013258604A (ja) * | 2012-06-13 | 2013-12-26 | Fujitsu Ltd | Image processing device, method, and program |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2711051B2 (ja) * | 1992-09-11 | 1998-02-10 | 大日本スクリーン製造株式会社 | Reading area connection device for an image reading apparatus |
US6373995B1 (en) * | 1998-11-05 | 2002-04-16 | Agilent Technologies, Inc. | Method and apparatus for processing image data acquired by an optical scanning device |
- JP2003101724A (ja) | 2001-09-25 | 2003-04-04 | Konica Corp | Image reading device and image forming device |
- JP2004172856A (ja) * | 2002-11-19 | 2004-06-17 | Fuji Photo Film Co Ltd | Image data creation method and device |
- JP2007201660A (ja) * | 2006-01-25 | 2007-08-09 | Fuji Xerox Co Ltd | Image processing device, image forming device, image processing method, and program |
- CN101981910B (zh) * | 2008-03-31 | 2013-04-03 | 三菱电机株式会社 | Image reading device |
- JP5068236B2 (ja) * | 2008-10-28 | 2012-11-07 | 三菱電機株式会社 | Image reading device |
- JP4947072B2 (ja) | 2009-03-04 | 2012-06-06 | 三菱電機株式会社 | Image reading device |
US8174552B2 (en) * | 2009-08-19 | 2012-05-08 | Eastman Kodak Company | Merging of image pixel arrangements |
- JP5495710B2 (ja) * | 2009-10-22 | 2014-05-21 | 三菱電機株式会社 | Image combining device and image combining position calculation method |
- JP2012109737A (ja) | 2010-11-16 | 2012-06-07 | Mitsubishi Electric Corp | Image combining device, image combining method, image input/output system, program, and recording medium |
- JP2012160904A (ja) * | 2011-01-31 | 2012-08-23 | Sony Corp | Information processing device, information processing method, program, and imaging device |
- WO2014156669A1 (ja) * | 2013-03-29 | 2014-10-02 | コニカミノルタ株式会社 | Image processing device and image processing method |
US9936098B2 (en) * | 2015-04-09 | 2018-04-03 | Mitsubishi Electric Corporation | Image combination device, image reading device and image combination method |
-
2016
- 2016-02-25 US US15/547,618 patent/US10326908B2/en not_active Expired - Fee Related
- 2016-02-25 JP JP2016542294A patent/JP6076552B1/ja not_active Expired - Fee Related
- 2016-02-25 WO PCT/JP2016/055600 patent/WO2016147832A1/ja active Application Filing
- 2016-02-25 DE DE112016001223.3T patent/DE112016001223T5/de not_active Ceased
- 2016-02-25 CN CN201680015429.5A patent/CN107409164B/zh not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2010141558A (ja) * | 2008-12-11 | 2010-06-24 | Mitsubishi Electric Corp | Image reading device |
- JP2013258604A (ja) * | 2012-06-13 | 2013-12-26 | Fujitsu Ltd | Image processing device, method, and program |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180007232A1 (en) * | 2015-04-09 | 2018-01-04 | Mitsubishi Electric Corporation | Image combination device, image reading device and image combination method |
US9936098B2 (en) * | 2015-04-09 | 2018-04-03 | Mitsubishi Electric Corporation | Image combination device, image reading device and image combination method |
- JP2017183912A (ja) * | 2016-03-29 | 2017-10-05 | コニカミノルタ株式会社 | Image reading apparatus, method of correcting a read image in the apparatus, and read-image correction program |
US10506124B2 (en) | 2016-05-13 | 2019-12-10 | Mitsubishi Electric Corporation | Image reading apparatus |
US10594893B2 (en) | 2017-01-23 | 2020-03-17 | Seiko Epson Corporation | Scanner, scan program, and method of producing scan data |
- CN108347541B (zh) * | 2017-01-23 | 2019-10-15 | 精工爱普生株式会社 | Scanner, nonvolatile storage medium, and scan data generation method |
- JP2018121101A (ja) * | 2017-01-23 | 2018-08-02 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- JP2018121100A (ja) * | 2017-01-23 | 2018-08-02 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- JP2018121102A (ja) * | 2017-01-23 | 2018-08-02 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- JP7153424B2 (ja) | 2017-01-23 | 2022-10-14 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- JP7153425B2 (ja) | 2017-01-23 | 2022-10-14 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- CN108347542A (zh) * | 2017-01-23 | 2018-07-31 | 精工爱普生株式会社 | Scanner, scan program, and method of producing scan data |
- CN108347541A (zh) * | 2017-01-23 | 2018-07-31 | 精工爱普生株式会社 | Scanner, scan program, and method of producing scan data |
- JP2022105122A (ja) * | 2017-01-23 | 2022-07-12 | セイコーエプソン株式会社 | Scanner, scan program, and method of producing scan data |
- CN108347542B (zh) * | 2017-01-23 | 2020-02-28 | 精工爱普生株式会社 | Scanner, computer-readable nonvolatile storage medium, and scan data generation method |
EP3352443A1 (en) * | 2017-01-23 | 2018-07-25 | Seiko Epson Corporation | Scanner, scan program, and method of producing scan data |
US10657629B2 (en) | 2017-04-27 | 2020-05-19 | Mitsubishi Electric Corporation | Image reading device |
- DE112018002233T5 (de) | 2017-04-27 | 2020-01-09 | Image reading device |
- JP6469324B1 (ja) * | 2017-04-27 | 2019-02-13 | 三菱電機株式会社 | Image reading device |
- WO2018198680A1 (ja) * | 2017-04-27 | 2018-11-01 | 三菱電機株式会社 | Image reading device |
Also Published As
Publication number | Publication date |
---|---|
DE112016001223T5 (de) | 2017-11-30 |
US20180013919A1 (en) | 2018-01-11 |
US10326908B2 (en) | 2019-06-18 |
CN107409164B (zh) | 2019-11-08 |
CN107409164A (zh) | 2017-11-28 |
JPWO2016147832A1 (ja) | 2017-04-27 |
JP6076552B1 (ja) | 2017-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP6076552B1 (ja) | Image reading apparatus and image reading method | |
US10958892B2 (en) | System and methods for calibration of an array camera | |
US8031232B2 (en) | Image pickup apparatus including a first image formation system and a second image formation system, method for capturing image, and method for designing image pickup apparatus | |
- JP4700445B2 (ja) | Image processing device and image processing program | |
- JP5128207B2 (ja) | Image processing device, image processing method, and image processing program | |
- JP6353233B2 (ja) | Image processing device, imaging device, and image processing method | |
- JPH08205181A (ja) | Chromatic aberration correction circuit and imaging device with chromatic aberration correction function | |
- JP5821563B2 (ja) | Image reading device, image reading method, and MTF correction parameter determination method | |
- JP6422428B2 (ja) | Image processing device, image processing method, image reading device, and program | |
US8817137B2 (en) | Image processing device, storage medium storing image processing program, and electronic camera | |
- JP6246379B2 (ja) | Image processing device, image processing method, image reading device, and program | |
- JP5224976B2 (ja) | Image correction device, image correction method, program, and recording medium | |
- JP6362070B2 (ja) | Image processing device, imaging device, image processing method, program, and storage medium | |
- JP5309940B2 (ja) | Image processing device and imaging device | |
- JP5002670B2 (ja) | Image processing device and image reading device | |
- JP2011055068A (ja) | Imaging device | |
- JP5359814B2 (ja) | Image processing device | |
- WO2023084706A1 (ja) | Endoscope processor, program, and focus lens control method | |
- JP2010278950A (ja) | Imaging device with chromatic aberration correction function, chromatic aberration correction method, program, and integrated circuit | |
- JP6469324B1 (ja) | Image reading device | |
US11985293B2 (en) | System and methods for calibration of an array camera | |
- JP3035007B2 (ja) | Image reading device | |
- JP2016099836A (ja) | Imaging device, image processing device, image processing method, and program | |
- JP2005340985A (ja) | Image processing method, image processing device, and image processing program | |
- JP2010118923A (ja) | Image processing device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2016542294 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16764662 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15547618 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112016001223 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16764662 Country of ref document: EP Kind code of ref document: A1 |