EP1135925A1 - Method and device for combining partial film scan images - Google Patents

Method and device for combining partial film scan images

Info

Publication number
EP1135925A1
Authority
EP
European Patent Office
Prior art keywords
image
edge
digital image
digital
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99956255A
Other languages
German (de)
English (en)
Inventor
Michael P. Keyes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Applied Science Fiction Inc
Original Assignee
Applied Science Fiction Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Applied Science Fiction Inc filed Critical Applied Science Fiction Inc
Publication of EP1135925A1 publication Critical patent/EP1135925A1/fr
Withdrawn legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3876Recombination of partial images to recreate the original image

Definitions

  • the present invention pertains to a digital image processing device and method, and more particularly to a digital image device and method that aligns and/or combines partial film scan images prior to completion of digital film processing.
  • Numerous scanning devices are known for capturing digital images of objects. For example, there are numerous flat-bed and sheet-fed scanners currently available on the market for converting photographs, pages of text, and transparencies to digital images. There are also conventional film scanners available which scan photographic film to produce digital images. Most current film scanners use one set of three linear charge-coupled devices (CCDs) to scan photographic film. Each of the three CCDs scans in one region of the visible spectrum: typically red, green, and blue channels. In such conventional scanners, image data is captured in each color channel as the three CCDs pass over the film once, thus providing three separate color channel scans at substantially the same time.
  • a problem with scanning photographic film prior to its being fully developed is that the locations of the images on the film are generally unknown. For example, the locations of images early in the development process may not be known at all, while the locations of images may be known with low precision at an intermediate stage. It may only be late in the development process that the locations of the images become known with high precision. Consequently, if one were to process a scan area roughly the size of an image on the exposed photographic film, for example, it will usually contain fragments of two different images. In other words, the scan area will straddle two images such that it contains a partial image fragment of two different images. This results in the image fragments from two separate scan areas having to be aligned and combined together to produce a final joined output image.
  • an area of an image medium is scanned to produce a raster image of the scanned area.
  • the image medium is photographic film.
  • a second area of the photographic film is scanned to produce a second raster image of the second scanned area.
  • the second scanned area is displaced along the longitudinal direction of the photographic film with respect to the first scanned area such that the scanned areas are abutting or partially overlapping.
  • the length of each of the first and second scanned areas is selected to be approximately equal to or greater than a longitudinal length of an image recorded in the photographic film.
  • the photographic film has regularly spaced perforation holes, such as sprocket holes. Regularly spaced indentations or notches, or other similar indicia are also suitable.
  • the width of each of the first and second scanned areas is preferably of a width such that it contains at least a fraction of the sprocket holes, or other indicia, along at least one of the edges of the photographic film. More preferably, the width of each of the first and second scanned areas is approximately the width required to extend entirely across the photographic image out to at least the half-way point of the sprocket holes along each side of the photographic film.
  • the sprocket holes, notches, dents, or other indicia provide reference markers which are fixed relative to the photographic film.
  • the first raster image of the first scanned area is filtered with a high-pass spatial filter to produce an edge image corresponding to the first raster image.
  • the reference markers, which are portions of the sprocket holes in the preferred embodiment, appear in the edge image as outlines of the reference markers.
  • the sprocket holes are used to establish reference points in order to align and combine complementary portions of an image in the first and second scanned images.
  • at least one corner, and more preferably, both corners, of a sprocket hole half on each opposing edge of the photographic film determine reference points.
  • the most preferred embodiment determines four reference points in each of the two scanned images; thus, eight reference points in total. The following description refers to the reference points in the preferred embodiment as fiducial points.
  • Each fiducial point is determined by the intersection of a vertical edge line and a horizontal edge line corresponding to its respective sprocket hole portion.
  • the vertical edge line is taken to be parallel to pixel columns in the edge image.
  • the column address of the vertical edge line for each of the fiducial points is preferably determined by an averaging process which is preferably a weighted average in which each column address is multiplied by the number of sprocket hole edge pixels in that column, all products for an edge of the sprocket hole are summed, and the sum is divided by a normalization factor.
  • a similar procedure is followed to determine the horizontal edge line for each of the fiducial points, except the horizontal edge line is taken to be parallel to the pixel rows and it is preferably assigned a weighted average row number.
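The weighted-average procedure described in the two items above can be sketched as follows; the function name and the (row, col) pixel-address layout are illustrative assumptions, not taken from the patent:

```python
def weighted_edge_column(edge_pixels):
    """Weighted-average column address of a roughly vertical edge.

    edge_pixels: iterable of (row, col) addresses of edge pixels.
    Each column address is multiplied by the number of edge pixels in
    that column, the products are summed, and the sum is divided by a
    normalization factor (here, the total edge-pixel count).
    """
    counts = {}
    for _row, col in edge_pixels:
        counts[col] = counts.get(col, 0) + 1
    total = sum(counts.values())
    return sum(col * n for col, n in counts.items()) / total

# Example: an edge smeared over columns 10 and 11, mostly in column 10.
pixels = [(r, 10) for r in range(6)] + [(r, 11) for r in range(2)]
col = weighted_edge_column(pixels)   # (10*6 + 11*2) / 8 = 10.25
```

The same function applied to row addresses of a horizontal edge yields the weighted-average row number for the fiducial point.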
  • one fiducial point in each of the scanned areas is sufficient to make translational corrections.
  • a minimum of two fiducial points in each scanned image is required to make rotational corrections in addition to the translational corrections.
  • the most preferred embodiment uses four fiducial points in each of the scanned images.
  • the fiducial points are preferably established for two sprocket holes in each scanned area which are proximate to the joining regions of the image fragments and which correspond to the same sprocket hole, but as viewed in the first and second images.
  • fiducial points are determined for one sprocket hole along each transverse side of the photographic film along the joining region of the image fragments for each of the first and second scanned areas.
  • the two fiducial points proximate to each sprocket hole are averaged to provide one average fiducial point intermediate between its respective fiducial points.
  • in a translational correction, at least one of the fiducial points in the first scanned image is required to coincide with a corresponding fiducial point in the second scanned image. This provides a translational correction rule.
  • the translational correction rule is then applied to all pixels, preferably within one image, such that there is a uniform translational correction of all the pixels of the first raster image relative to the second raster image.
  • the translational correction rule is determined with respect to one of the above-mentioned average fiducial points to its corresponding average fiducial point.
  • the rotational correction can similarly be applied to fiducial points or average fiducial points.
  • the rotational correction is performed after the translational correction based on one average fiducial point. Consequently, after the translational correction, the coinciding average fiducial point establishes a first line in the first scanned image passing through the average fiducial point on the opposing side of the photographic film. Similarly, the coinciding average fiducial point establishes a second line in the second scanned image passing through the average fiducial point on the opposing side of the photographic film. In general, there will be a non-zero angle between the first and second lines.
  • a rotational correction rule is determined by rotating the first line to coincide with the second line.
  • the fiducial point in the second scanned image which is not in coincidence with the average fiducial point in the first scanned image is rotated to bring them into substantial coincidence.
  • the pivot point of the rotation is the two substantially coinciding fiducial points which were aligned in the translational correction.
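The translational and rotational correction rules described above can be sketched as a coordinate mapping; the function `alignment_map`, its argument names, and the (x, y) convention are hypothetical illustrations rather than the patent's notation:

```python
import math

def alignment_map(p1, q1, p2, q2):
    """Map second-image coordinates into the first image's frame, given
    two corresponding (average) fiducial points per image: p1, q1 in the
    first image and p2, q2 in the second.

    Step 1: translate so p2 lands on p1 (the pivot).
    Step 2: rotate about the pivot so line p2->q2 coincides with p1->q1.
    """
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]           # translational rule
    a1 = math.atan2(q1[1] - p1[1], q1[0] - p1[0])   # angle of first line
    a2 = math.atan2(q2[1] - p2[1], q2[0] - p2[0])   # angle of second line
    theta = a1 - a2                                 # rotational rule
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    def transform(x, y):
        # Translate, then rotate about the pivot p1.
        x, y = x + dx, y + dy
        rx, ry = x - p1[0], y - p1[1]
        return (p1[0] + cos_t * rx - sin_t * ry,
                p1[1] + sin_t * rx + cos_t * ry)
    return transform

# Second scan displaced by (5, 3) and rotated 90 degrees relative to the
# first: mapping its fiducials (5, 3) and (15, 3) onto (0, 0) and (0, 10).
t = alignment_map((0, 0), (0, 10), (5, 3), (15, 3))
```

Applying `t` to every pixel address of the second raster image brings the complementary image fragments into substantial coincidence.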
  • the combined image is cropped to remove the edge region that includes images of the sprocket holes, and to separate it from adjoining images.
  • FIGURE 1 is a schematic illustration of a digital image combining device according to the preferred embodiment of the invention.
  • FIGURE 2 is a schematic illustration showing a more detailed view of the digital image combining device according to the preferred embodiment of the invention.
  • FIGURE 3 is an illustration of an edge image of photographic film with the locations of images on the film shown schematically as large solid rectangles and the scan area shown schematically as a dashed rectangular box;
  • FIGURE 4 is an illustration of a portion of one of the sprocket holes illustrated in FIGURE 3 along with examples of fiducial points;
  • FIGURE 5 is an illustration of a portion of one of the sprocket holes illustrated in FIGURE 3 along with examples of fiducial points;
  • FIGURE 6 is an edge image of a portion of photographic film, like FIGURE 3, except the scan area is displaced relative to the scan area illustrated in FIGURE 3;
  • FIGURE 7 is similar to FIGURE 4, but shows a portion of a sprocket hole illustrated in FIGURE 6 along with corresponding fiducial points;
  • FIGURE 8 is similar to FIGURE 5, except that it shows a portion of a sprocket hole illustrated in FIGURE 6 along with examples of fiducial points;
  • FIGURE 9 shows a case in which the scan area illustrated in FIGURE 3 is displaced and rotated relative to the scan area illustrated in FIGURE 6;
  • FIGURE 10 illustrates the scan areas shown in FIGURE 9, but after a translational transformation between the two scan areas
  • FIGURE 11 illustrates the scan areas shown in FIGURE 10, but after a rotational transformation between the two scan areas
  • FIGURE 12 is a flow-chart illustrating the method of combining portions of a digital image according to the preferred embodiment of the invention.
  • the digital image combining device is designated generally by the reference numeral 20 in FIGURE 1.
  • the digital image combining device 20 generally has an image scanning device 22, a data processor 24, an output display device 26, and an input device 28.
  • a digital image combining device 20 may include one or more generally available peripheral devices 30.
  • the image scanner 22 is an electronic film developing apparatus, such as described in U.S. Patent Application No. 08/955,853
  • the data processor 24 is a personal computer or a work station
  • the output device 26 is a video monitor
  • the input device 28 is a keyboard.
  • the image scanner 22 is an electronic film developer in the preferred embodiment, the image scanner is not limited to being only an electronic film developer. Other scanning devices, including devices which scan media other than photographic film, are encompassed within the scope and spirit of the invention.
  • FIGURE 2 is a schematic illustration of several elements of the data processor 24.
  • FIGURE 2 also illustrates, schematically, some components interior to the electronic film developer 22.
  • the electronic film developer 22 is in communication with the data processor 24.
  • the scanning device 22 has at least two scanning stations 32 and 34.
  • although the example of the digital image scanner 22 illustrated in FIGURE 2 is a schematic illustration of a digital film scanner that has two scanning stations, it is anticipated that other digital film scanners will have three or more scanning stations.
  • the digital film scanner 22 may have a single scanning station.
  • exposed photographic film 36 is directed to move through the scanning stations in the longitudinal direction 38.
  • the photographic film 36 has reference markers 40 at one transverse edge of the photographic film 36.
  • the reference markers 40 are sprocket holes, such as sprocket hole 42, in the photographic film 36.
  • the photographic film 36 has additional reference markers 44 in the transverse direction 46 opposing the reference markers 40.
  • a portion of the photographic film 48 is scanned over a time interval at scanning station 32.
  • a portion of the photographic film 49 is scanned at scanning station 34 over a period of time.
  • the photographic film 36 is typically subjected to one film development treatment prior to scanning station 32 and another film development treatment between scanning stations 32 and 34. Consequently, the film is at one stage of development at scanning station 32 and at a different stage at scanning station 34. It is anticipated that many digital film scanners will also have at least a third scanning station; for scanners with more than two stations, the stage of film development at scanning stations 32 and 34 will differ from that at the subsequent stations.
  • Scanned image data is transferred from each scanning station 32 and 34 to the data processor 24.
  • the data processor 24 has a digital image data processor 50 that is in communication with the scanning stations 32 and 34.
  • the digital image data processor 50 is also in communication with a data storage unit 52 that stores processed image data, preferably in a conventional raster image format.
  • the data storage unit 52 is in communication with a high-pass spatial filter 54 such that it receives stored raster image data from the storage unit 52.
  • a reference mark detector 56 is in communication with a high-pass spatial filter 54 such that it receives filtered images from the high-pass spatial filter 54.
  • the reference mark detector 56 is also in communication with the data storage unit 52.
  • the partial image combiner 58 is in communication with the reference mark detector 56 and the data storage unit 52.
  • the digital image data processor 50, the high-pass spatial filter 54, the reference mark detector 56 and the partial image combiner 58 are implemented in practice by programming a personal computer or a workstation.
  • the invention includes other embodiments in which the components are implemented as dedicated hardware components.
  • the digital image data processor 50 is a conventional digital image data processor that processes scanned data from scanning stations 32 and 34 and outputs a digital raster image in a conventional format to be stored in data storage unit 52.
  • the data storage unit 52 may be either a conventional hard drive or semiconductor memory, such as random access memory (RAM), or a combination of both. It is anticipated that other data storage devices may be used without departing from the scope and spirit of the invention.
  • the high-pass spatial filter uses a conventional spatial mask such as that described in R.C.
  • a three-pixel-by-three-pixel mask is usually sufficient, although one may select larger masks.
  • the center mask element is given a weight of 8 and each neighboring pixel is given a weight of -1.
  • the mask is then applied to each pixel of the raster image. If the subject pixel is in a fairly uniform region of the image, the sum of all the neighboring pixel values multiplied by the mask value will cancel with the central value, thus leading to essentially a zero output value. However, in the region of an edge of the image, the output will be non-zero. Consequently, such a filter will provide an output image which represents the edges of the original image. These are thus referred to as edge images.
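The 3x3 mask described above (center weight 8, neighbors -1) can be sketched as follows; `edge_image` and its threshold parameter are illustrative assumptions, and the loop-based convolution is kept deliberately simple:

```python
import numpy as np

def edge_image(raster, threshold=0):
    """Apply a 3x3 high-pass mask (center weight 8, neighbors -1).

    In flat regions the neighbor products cancel the central value, so
    the response is essentially zero; near an edge it is non-zero.
    """
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    h, w = raster.shape
    out = np.zeros_like(raster, dtype=float)
    # Convolve the interior; border pixels are left at zero for simplicity.
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.sum(mask * raster[r - 1:r + 2, c - 1:c + 2])
    return np.abs(out) > threshold   # boolean edge map

# A uniform region gives zero response; a step edge gives a strong one.
img = np.zeros((6, 6))
img[:, 3:] = 100.0                   # vertical step between columns 2 and 3
edges = edge_image(img, threshold=0)
```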
  • FIGURE 3 is a schematic illustration of photographic film 60 which may be the same or similar to photographic film 36.
  • the photographic film 60 has a series of substantially uniformly spaced sprocket holes 62 proximate to one longitudinal edge of the film 60, and another series of substantially uniformly spaced sprocket holes 64 proximate to a second longitudinal edge of the film 60 opposing the first longitudinal edge.
  • the film 60 may have notches spaced at regular intervals such as notches 66, 68, 70 and 72, or notch 74. Alternatively, one could deliberately cut notches into the film 60 at regular intervals.
  • FIGURE 3 thus illustrates an example of an edge image from a section of photographic film, showing the first and second longitudinal edges of the film 60 with the notches 66, 68, 70, 72 and 74, and the sprocket holes 62 and 64.
  • the edge image in FIGURE 3 is shown with the edges in black and the background in white.
  • locations of images on the film 60 are indicated schematically as solid rectangles with reference numerals 76, 78, 80 and 82 in FIGURE 3.
  • the dashed rectangle 84 indicates a region of the photographic film that has been scanned such that it includes at least a portion of the sprocket holes within the regions labeled 4 and 5.
  • FIGURE 4 is a blown-up section of a reference sprocket hole as indicated in FIGURE 3.
  • the portion of the sprocket hole illustrated in FIGURE 4 has a first vertical edge portion 86 and a second vertical edge portion 88.
  • the portion of the sprocket hole illustrated in FIGURE 4 which is contained within the scanned region 84 has a horizontal edge portion 90.
  • a first fiducial point 92 is proximate to a corner of the portion of the sprocket hole shown in FIGURE 4.
  • the first fiducial point 92 is the intersection between the vertical edge line 94 and the horizontal edge line 96.
  • the positions of the vertical edge line 94 and horizontal edge line 96 are determined according to an averaging procedure.
  • the vertical edge line 94 is determined by a weighted average of the number of pixels corresponding to the section 86 within pixel columns of the edge image.
  • the edge line 94 is substantially parallel to the pixel columns of the edge image.
  • each pixel in the edge image has a unique coordinate address, preferably corresponding to a pixel row and column number in the conventional raster image representation.
  • the concept of the invention is not limited to the usual Cartesian representation of raster images. It is also known to represent images in other coordinate representations, such as polar coordinates, or other orthogonal or non-orthogonal coordinate systems.
  • each pixel column number in the region around the edge 86 is multiplied by the number of pixels corresponding to the edge 86 that fall within that pixel column, the products are summed and then divided by a normalization factor to provide a weighted average pixel column number that defines the vertical edge line 94.
  • a similar procedure for the horizontal edge 90, with respect to pixel rows, provides a weighted average pixel row number for the fiducial point 92. Consequently, the weighted average pixel column number and weighted average pixel row number define the fiducial point 92.
  • although in the preferred embodiment the fiducial point 92 is determined by finding the weighted-average vertical edge line 94 from the vertical edge 86, the invention is not limited to establishing a reference marker in only this way.
  • the invention anticipates generally establishing reference markers in the edge image, as long as the reference markers are fixed relative to the image medium. For example, if the edge lines of the sprocket holes are significantly misaligned with rows and columns of the raster image, then it is preferable to determine the edges by a linear regression analysis.
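The linear-regression alternative mentioned above can be sketched as an ordinary least-squares fit of an edge's pixel addresses; `fit_edge_line` and the (row, col) layout are illustrative assumptions:

```python
def fit_edge_line(edge_pixels):
    """Least-squares line col = m*row + b through edge pixel addresses.

    Useful when a sprocket-hole edge is noticeably tilted relative to the
    pixel grid, so a single weighted-average column address no longer
    describes it well. edge_pixels: list of (row, col) pairs.
    """
    n = len(edge_pixels)
    sr = sum(r for r, _ in edge_pixels)
    sc = sum(c for _, c in edge_pixels)
    srr = sum(r * r for r, _ in edge_pixels)
    src = sum(r * c for r, c in edge_pixels)
    m = (n * src - sr * sc) / (n * srr - sr * sr)
    b = (sc - m * sr) / n
    return m, b   # slope and intercept of the fitted edge line

# A slightly tilted edge: the column drifts by 0.5 pixel per row.
pixels = [(r, 10 + 0.5 * r) for r in range(8)]
m, b = fit_edge_line(pixels)   # m = 0.5, b = 10 for this noiseless edge
```

Two fitted lines intersected in the usual way then yield a fiducial point even for a rotated edge image.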
  • a second fiducial point 98 is determined by a similar procedure as that used to determine the fiducial point 92.
  • the vertical edge line 100 is determined as a weighted average vertical edge line corresponding to the edge 88.
  • the fiducial point 98 is the intersection of the vertical edge line 100 and the horizontal edge line 96.
  • a blow-up view of section 5 illustrated in FIGURE 3 is shown in more detail in FIGURE 5.
  • the sprocket hole is at the opposing edge of the photographic film 60 relative to the sprocket hole illustrated in FIGURE 4.
  • the fiducial points 102 and 104 are determined by weighted average vertical edge line 106 and horizontal edge line 108, and vertical edge line 110 and horizontal edge line 108, respectively.
  • FIGURE 6 is an illustration of the edge image of the same section of photographic film 60 as in FIGURE 3, but with a different scan region 112.
  • the sprocket holes in the regions labeled 7 and 8 correspond to the sprocket holes in the regions labeled 4 and 5 in FIGURE 3.
  • the sprocket holes in regions 7 and 8 are part of a second image of the same sprocket holes in regions 4 and 5 of FIGURE 3.
  • the scan region 84 in FIGURE 3 contains a portion of the image 80, while the scan region 112 in FIGURE 6 contains a complementary portion of the same image 80.
  • FIGURE 7 shows a portion of the sprocket hole that is also illustrated in the region 7 in FIGURE 6. This corresponds to the sprocket hole in the region 4 illustrated in FIGURE 3. Since the sprocket hole in the regions 7 and 4 is fixed relative to the film 60, it can be used to line up the image portions of the image 80 and join them together. Similar to FIGURE 4, the portion of the sprocket hole in the region 7 has fiducial points 114 and 120.
  • the fiducial point 114 is determined as the intersection of the vertical edge line 116 and the horizontal edge line 118, each preferably determined by a weighted averaging procedure.
  • the fiducial point 120 is determined by the intersection of the vertical edge line 122 and the horizontal edge line 118.
  • FIGURE 8 contains a second edge image of the same sprocket hole that is illustrated in the first edge image in the region 5 of FIGURE 3.
  • FIGURE 8 shows an enlarged view of the portion of the sprocket hole in the region 8.
  • the sprocket hole in the region 8 also determines two fiducial points, labeled 124 and 126 in FIGURE 8.
  • the intersection of the vertical edge line 128 with the horizontal edge line 130 determines the fiducial point 124.
  • the intersection of the vertical edge line 132 and the horizontal edge line 130 determines the fiducial point 126.
  • the vertical edge lines 128 and 132 and the horizontal edge line 130 are determined by a weighted averaging procedure.
  • FIGURE 9 shows an example of the scanned region 84 illustrated in FIGURE 3 displaced and rotated relative to the scanned region 112 illustrated in FIGURE 6.
  • the displacement and rotation are greatly exaggerated to facilitate the explanation.
  • the first scan region 84 has the four fiducial points 92, 98, 102 and 104 as reference markers for aligning and combining the second scan region 112 with the first scan region 84.
  • the second scan region 112 has four fiducial points 114, 120, 124 and 126 in the preferred embodiment.
  • the invention is not limited specifically to four reference points in each scanned image.
  • As few as one reference marker in each image will be sufficient to make at least translational transformations to bring one portion of an image, such as image 80 into alignment for combining, or joining, with the complementary portion of the image in another scan region.
  • As few as two reference markers in each scan region permit one to make both translational and rotational corrections to combine the image portions.
  • each of the pairs of fiducial points are averaged to provide one average point.
  • FIGURE 10 shows an example in which the average of the fiducial points 114 and 120 are translated to coincide with the average point of the fiducial points 92 and 98. Alternatively, one could have translated the average of the fiducial points 124 and 126 to coincide with the average of the fiducial points 102 and 104.
  • FIGURE 10 shows the regions of the photographic film outside of the scan areas 84, 112, 84' and 112' for illustration purposes only. In actual practice of the invention, it is the areas within the scan regions 84, 112, 84' and 112' that will contain the edge image data. In other embodiments, one could calculate edge images for wider regions up to and including one or both edges of the photographic film 60, or, alternatively, use narrower regions than those shown for the preferred embodiment.
  • the reference numerals for the scan areas 84 and 112 and features within the scan areas are shown with primes to indicate that the relative coordinates of the pixels have been transformed. However, the film 60 and the notch 70 are not shown with primes to indicate that they are representing the underlying film itself, and not image data.
  • FIGURE 11 illustrates the result of a rotation after the translation illustrated in FIGURE 10.
  • the pivot point 134 of the rotation is the coinciding point of the average of fiducial points 92 and 98 and the point which is the average of point 114 and 120.
  • the line 136 shown in FIGURE 10 is defined by the pivot point 134 and the average point of the fiducial points 102' and 104'.
  • the line 138 is defined by the pivot point 134 and the average of the fiducial points 124' and 126'.
  • a first region 84 of photographic film 60 is scanned with a digital image scanner 22 which is preferably a digital film scanner.
  • the scanned image will have a plurality of image channels, such as the front reflection, back reflection and through (transmission front to back and/or back to front) channels discussed in U.S. Patent Application No. 08/955,853.
  • the scanned image data is then processed by the digital image data processor 50 and stored in the data storage device 52.
  • the processed and stored image data is then processed by the high-pass spatial filter 54 to produce a first edge image.
  • Reference marks are detected by the reference mark detector 56.
  • the reference mark data are stored in data storage unit 52.
  • a second region 112 of the photographic film 60 is scanned by the digital film scanner 22.
  • both the first and second scanned areas are sufficiently wide to include at least one-half of the sprocket holes 62 and 64 spaced along the opposing edges of the film 60.
  • the scan regions are preferably approximately equal to or wider than a single image on the photographic film.
  • the second scanned image is similarly processed by the digital image data processor 50 and stored in the data storage unit 52.
  • the second image is processed by the high-pass spatial filter 54 to produce a second edge image.
  • the reference mark detector 56 detects reference marks in the second edge image.
  • the partial image combiner determines a mapping rule to align corresponding and complementary partial images from the first and second scans, transforms the partial images such that they are properly aligned, and combines the first and second partial images into a single combined image.
  • FIGURE 12 is a flowchart that schematically illustrates the method of combining portions of a digital image according to the present invention.
  • First image data from the first scan is processed by the digital image data processor and filtered by a high-pass spatial filter to produce a first edge image. At least one reference location is determined in the first edge image.
  • Second image data from the second scan are processed by the image data processor and high-pass spatial filtered to produce a second edge image. At least one reference location is determined in the second edge image.
  • a mapping rule is determined to bring the reference locations substantially into coincidence such that a partial image in the first scan is properly aligned with a corresponding, complementary portion in the second image scan. The mapping is applied to align the portions of the digital image and the portions are combined into a single joined image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a digital image combining device (20) having a digital image scanner (22), a digital image data processor (50), a data storage device (52), a high-pass spatial filter (54), a reference mark detector (56) that detects reference marks (40) in the photographic film (36), and a partial image combiner (58) that aligns and joins portions of a single photographic image that are split between two image scan regions. The method for combining portions of a digital image (80) forms a first digital image (84), generates an edge image of the first digital image (84), determines at least one reference point in this first digital image (84), forms a second digital image (112), generates a second edge image of this second digital image (112), determines a second image reference point and a transformation rule, transforms the first and/or second image using the transformation rule, and combines a portion of the first digital image (84) with a corresponding, complementary portion of the second digital image (112).
EP99956255A 1998-11-12 1999-11-12 Procede et dispositif pour combiner des images partielles de balayage d'un film Withdrawn EP1135925A1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US191459 1994-01-31
US19145998A 1998-11-12 1998-11-12
PCT/IB1999/001828 WO2000030340A1 (fr) 1998-11-12 1999-11-12 Procede et dispositif pour combiner des images partielles de balayage d'un film

Publications (1)

Publication Number Publication Date
EP1135925A1 (fr) 2001-09-26

Family

ID=22705584

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99956255A Withdrawn EP1135925A1 (fr) 1998-11-12 1999-11-12 Procede et dispositif pour combiner des images partielles de balayage d'un film

Country Status (4)

Country Link
EP (1) EP1135925A1 (fr)
AU (1) AU1289700A (fr)
TW (1) TW496077B (fr)
WO (1) WO2000030340A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183918A1 (en) * 2003-03-20 2004-09-23 Eastman Kodak Company Producing enhanced photographic products from images captured at known picture sites
GB201118012D0 (en) * 2011-10-19 2011-11-30 Windense Ltd Motion picture scanning system
TWI475883B (zh) * 2012-03-01 2015-03-01 Altek Corp 攝像裝置及其分隔式攝像方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2720944A1 (de) * 1977-05-10 1978-11-16 Hell Rudolf Dr Ing Gmbh Verfahren und einrichtung zur herstellung von bildkombinationen
DE68929193T2 (de) * 1988-01-08 2000-09-28 Fuji Photo Film Co Ltd Gerät zur Farbfilmanalyse
JP3142428B2 (ja) * 1993-12-13 2001-03-07 株式会社東芝 画像形成装置
DE69733220T2 (de) * 1996-03-04 2006-01-19 Fuji Photo Film Co., Ltd., Minami-Ashigara Filmabtaster

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0030340A1 *

Also Published As

Publication number Publication date
TW496077B (en) 2002-07-21
AU1289700A (en) 2000-06-05
WO2000030340A1 (fr) 2000-05-25

Similar Documents

Publication Publication Date Title
US6570612B1 (en) System and method for color normalization of board images
US6535650B1 (en) Creating high resolution images
RU2421814C2 (ru) Способ формирования составного изображения
US7463293B2 (en) Method and system for correcting chromatic aberrations of a color image produced by an optical system
US20020008715A1 (en) Image resolution improvement using a color mosaic sensor
US20100231929A1 (en) Image processing apparatus and image processing method
JPH01170169A (ja) 多色印刷のための画像図形修正処理方法
WO1998012866A1 (fr) Dispositif et procede de synthese d'images
JP2006050627A (ja) 色の歪みを補正するための方法およびシステム
EP0351062A1 (fr) Procédé et appareil pour la génération d'images composites
US7742658B2 (en) System and method for boundary artifact elimination in parallel processing of large format images
US6728425B1 (en) Image processing device
JPH0879529A (ja) 画像処理装置
JP3059205B2 (ja) 画像処理装置
EP1135925A1 (fr) Procede et dispositif pour combiner des images partielles de balayage d'un film
US6118478A (en) Telecine systems for high definition use
JP2002507848A (ja) ランダムにシフトされるピクセルを有する、画像データの座標変換方法
US7847987B2 (en) Method of compensating a zipper image by a K-value and a method of calculating a K-value
JP3260891B2 (ja) エッジ抽出方法
JP4206294B2 (ja) 画像処理装置
WO2000011862A1 (fr) Procede et dispositif permettant d'aligner des images numeriques
EP1096783A1 (fr) Système de lecture de document
JP2634399B2 (ja) カラー画像処理方法
JP2000078390A (ja) 画像処理方法及び装置
JP2001016428A (ja) 画像形成装置及び転写画像歪み補正方法

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010525

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 20020307

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20020718