EP1135925A1 - A method and device for combining partial film scan images

A method and device for combining partial film scan images

Info

Publication number
EP1135925A1
EP1135925A1 (application EP99956255A)
Authority
EP
European Patent Office
Prior art keywords
image
edge
digital image
digital
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99956255A
Other languages
German (de)
French (fr)
Inventor
Michael P. Keyes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Applied Science Fiction Inc
Original Assignee
Applied Science Fiction Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Applied Science Fiction Inc filed Critical Applied Science Fiction Inc
Publication of EP1135925A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 — Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/387 — Composing, repositioning or otherwise geometrically modifying originals
    • H04N 1/3876 — Recombination of partial images to recreate the original image

Definitions

  • the present invention pertains to a digital image processing device and method, and more particularly to a digital image device and method that aligns and/or combines partial film scan images prior to completion of digital film processing.
  • Numerous scanning devices are known for capturing digital images of objects. For example, there are numerous flat-bed and sheet-fed scanners currently available on the market for converting photographs, pages of text, and transparencies to digital images. There are also conventional film scanners available which scan photographic film to produce digital images. Most current film scanners use one set of three linear charge-coupled devices (CCDs) to scan photographic film. Each of the three CCDs scans in one region of the visible spectrum: typically red, green and blue channels. In such conventional scanners, image data is captured in each color channel (e.g., red, green and blue) at substantially the same time after the film has been fully developed; the three CCDs pass over the film once, thus providing three separate color channel scans at substantially the same time.
  • a problem with scanning photographic film prior to its being fully developed is that the locations of the images on the film are generally unknown. For example, the locations of images early in the development process may not be known at all, while the locations of images may be known only with low precision at an intermediate stage. It may only be late in the development process that the locations of the images become known with high precision. Consequently, if one were to process a scan area roughly the size of an image on the exposed photographic film, it would usually contain fragments of two different images. In other words, the scan area will straddle two images such that it contains a partial image fragment of two different images. As a result, the image fragments from two separate scan areas must be aligned and combined to produce a final joined output image.
  • an area of an image medium is scanned to produce a raster image of the scanned area.
  • the image medium is photographic film.
  • a second area of the photographic film is scanned to produce a second raster image of the second scanned area.
  • the second scanned area is displaced along the longitudinal direction of the photographic film with respect to the first scanned area such that the scanned areas are abutting or partially overlapping.
  • the length of each of the first and second scanned areas is selected to be approximately equal to or greater than a longitudinal length of an image recorded in the photographic film.
  • the photographic film has regularly spaced perforation holes, such as sprocket holes. Regularly spaced indentations or notches, or other similar indicia are also suitable.
  • the width of each of the first and second scanned areas is preferably of a width such that it contains at least a fraction of the sprocket holes, or other indicia, along at least one of the edges of the photographic film. More preferably, the width of each of the first and second scanned areas is approximately the width required to extend entirely across the photographic image out to at least the half-way point of the sprocket holes along each side of the photographic film.
  • the sprocket holes, notches, dents, or other indicia provide reference markers which are fixed relative to the photographic film.
  • the first raster image of the first scanned area is filtered with a high-pass spatial filter to produce an edge image corresponding to the first raster image.
  • the reference markers, which are portions of the sprocket holes in the preferred embodiment, appear in the edge image as outlines.
  • the sprocket holes are used to establish reference points in order to align and combine complementary portions of an image in the first and second scanned images.
  • at least one corner, and more preferably, both corners, of a sprocket hole half on each opposing edge of the photographic film determine reference points.
  • the most preferred embodiment determines four reference points in each of the two scanned images; thus, eight reference points in total. The following description refers to the reference points in the preferred embodiment as fiducial points.
  • Each fiducial point is determined by the intersection of a vertical edge line and a horizontal edge line corresponding to its respective sprocket hole portion.
  • the vertical edge line is taken to be parallel to pixel columns in the edge image.
  • the column address of the vertical edge line for each of the fiducial points is preferably determined by a weighted average in which each column address is multiplied by the number of sprocket hole edge pixels in that column, all products for an edge of the sprocket hole are summed, and the sum is divided by a normalization factor.
  • a similar procedure is followed to determine the horizontal edge line for each of the fiducial points, except the horizontal edge line is taken to be parallel to the pixel rows and it is preferably assigned a weighted average row number.
  • one fiducial point in each of the scanned areas is sufficient to make translational corrections.
  • a minimum of two fiducial points in each scanned image is required to make rotational corrections in addition to the translational corrections.
  • the most preferred embodiment uses four fiducial points in each of the scanned images.
  • the fiducial points are preferably established for two sprocket holes in each scanned area which are proximate to the joining regions of the image fragments and which correspond to the same sprocket hole, but as viewed in the first and second images.
  • fiducial points are determined for one sprocket hole along each transverse side of the photographic film along the joining region of the image fragments for each of the first and second scanned areas.
  • the two fiducial points proximate to each sprocket hole are averaged to provide one average fiducial point intermediate between its respective fiducial points.
  • in a translational correction, at least one of the fiducial points in the first scanned image is required to coincide with a corresponding fiducial point in the second scanned image. This provides a translational correction rule.
  • the translational correction rule is then applied to all pixels, preferably within one image, such that there is a uniform translational correction of all the pixels of the first raster image relative to the second raster image.
  • the translational correction rule is determined by mapping one of the above-mentioned average fiducial points to its corresponding average fiducial point.
  • the rotational correction can similarly be applied to fiducial points or average fiducial points.
  • the rotational correction is performed after the translational correction based on one average fiducial point. Consequently, after the translational correction, the coinciding average fiducial point establishes a first line in the first scanned image passing through the average fiducial point on the opposing side of the photographic film. Similarly, the coinciding average fiducial point establishes a second line in the second scanned image passing through the average fiducial point on the opposing side of the photographic film. In general, there will be a non-zero angle between the first and second lines.
  • a rotational correction rule is determined by rotating the first line to coincide with the second line.
  • the average fiducial point in the second scanned image which is not in coincidence with the corresponding average fiducial point in the first scanned image is rotated to bring the two into substantial coincidence.
  • the pivot point of the rotation is the point at which the two substantially coinciding fiducial points were aligned in the translational correction.
  • the combined image is cropped to remove the edge region that includes images of the sprocket holes, and to separate it from adjoining images.
  • FIGURE 1 is a schematic illustration of a digital image combining device according to the preferred embodiment of the invention.
  • FIGURE 2 is a schematic illustration showing a more detailed view of the digital image combining device according to the preferred embodiment of the invention.
  • FIGURE 3 is an illustration of an edge image of photographic film with the locations of images on the film shown schematically as large solid rectangles and the scan area shown schematically as a dashed rectangular box;
  • FIGURE 4 is an illustration of a portion of one of the sprocket holes illustrated in FIGURE 3 along with examples of fiducial points;
  • FIGURE 5 is an illustration of a portion of one of the sprocket holes illustrated in FIGURE 3 along with examples of fiducial points;
  • FIGURE 6 is an edge image of a portion of photographic film, like FIGURE 3, except the scan area is displaced relative to the scan area illustrated in FIGURE 3;
  • FIGURE 7 is similar to FIGURE 4, but shows a portion of a sprocket hole illustrated in FIGURE 6 along with corresponding fiducial points;
  • FIGURE 8 is similar to FIGURE 5, except that it shows a portion of a sprocket hole illustrated in FIGURE 6 along with examples of fiducial points;
  • FIGURE 9 shows a case in which the scan area illustrated in FIGURE 3 is displaced and rotated relative to the scan area illustrated in FIGURE 6;
  • FIGURE 10 illustrates the scan areas shown in FIGURE 9, but after a translational transformation between the two scan areas
  • FIGURE 11 illustrates the scan areas shown in FIGURE 10, but after a rotational transformation between the two scan areas
  • FIGURE 12 is a flow-chart illustrating the method of combining portions of a digital image according to the preferred embodiment of the invention.
  • the digital image combining device is designated generally by the reference numeral 20 in FIGURE 1.
  • the digital image combining device 20 generally has an image scanning device 22, a data processor 24, an output display device 26, and an input device 28.
  • a digital image combining device 20 may include one or more generally available peripheral devices 30.
  • the image scanner 22 is an electronic film developing apparatus, such as described in U.S. Patent Application No. 08/955,853
  • the data processor 24 is a personal computer or a work station
  • the output device 26 is a video monitor
  • the input device 28 is a keyboard.
  • although the image scanner 22 is an electronic film developer in the preferred embodiment, the image scanner is not limited to being only an electronic film developer. Other scanning devices, including devices which scan media other than photographic film, are encompassed within the scope and spirit of the invention.
  • FIGURE 2 is a schematic illustration of several elements of the data processor 24.
  • FIGURE 2 also illustrates, schematically, some components interior to the electronic film developer 22.
  • the electronic film developer 22 is in communication with the data processor 24.
  • the scanning device 22 has at least two scanning stations 32 and 34.
  • although the example of the digital image scanner 22 illustrated in FIGURE 2 is a schematic illustration of a digital film scanner that has two scanning stations, it is anticipated that other digital film scanners will have three or more scanning stations.
  • the digital film scanner 22 may have a single scanning station.
  • exposed photographic film 36 is directed to move through the scanning stations in the longitudinal direction 38.
  • the photographic film 36 has reference markers 40 at one transverse edge of the photographic film 36.
  • the reference markers 40 are sprocket holes, such as sprocket hole 42, in the photographic film 36.
  • the photographic film 36 has additional reference markers 44 in the transverse direction 46 opposing the reference markers 40.
  • a portion of the photographic film 48 is scanned over a time interval at scanning station 32.
  • a portion of the photographic film 49 is scanned at scanning station 34 over a period of time.
  • the photographic film 36 is typically subjected to one film development treatment prior to the scanning station 32 and another film development treatment between scanning stations 32 and 34. Consequently, the film 36 is at one stage of development at scanning station 32 and at a different stage of development at scanning station 34. It is anticipated that many digital film scanners will also have at least a third scanning station; for digital film scanners that have more than two scanning stations, the stage of film development at scanning stations 32 and 34 will differ from that at the subsequent scanning stations.
  • Scanned image data is transferred from each scanning station 32 and 34 to the data processor 24.
  • the data processor 24 has a digital image data processor 50 that is in communication with the scanning stations 32 and 34.
  • the digital image data processor 50 is also in communication with a data storage unit 52 that stores processed image data, preferably in a conventional raster image format.
  • the data storage unit 52 is in communication with a high-pass spatial filter 54 such that the filter 54 receives stored raster image data from the storage unit 52.
  • a reference mark detector 56 is in communication with the high-pass spatial filter 54 such that it receives filtered images from the high-pass spatial filter 54.
  • the reference mark detector 56 is also in communication with the data storage unit 52.
  • the partial image combiner 58 is in communication with the reference mark detector 56 and the data storage unit 52.
  • the digital image data processor 50, the high-pass spatial filter 54, the reference mark detector 56 and the partial image combiner 58 are implemented in practice by programming a personal computer or a workstation.
  • the invention includes other embodiments in which the components are implemented as dedicated hardware components.
  • the digital image data processor 50 is a conventional digital image data processor that processes scanned data from scanning stations 32 and 34 and outputs a digital raster image in a conventional format to be stored in data storage unit 52.
  • the data storage unit 52 may be either a conventional hard drive or semiconductor memory, such as random access memory (RAM), or a combination of both. It is anticipated that other data storage devices may be used without departing from the scope and spirit of the invention.
  • the high-pass spatial filter uses a conventional spatial mask such as that described in R.C. Gonzalez and R.E. Woods, Digital Image Processing.
  • a three-pixel-by-three-pixel mask is usually sufficient, although one may select larger masks.
  • the center mask element is given a weight of 8 and each neighboring pixel is given a weight of -1.
  • the mask is then applied to each pixel of the raster image. If the subject pixel is in a fairly uniform region of the image, the sum of the neighboring pixel values multiplied by the mask weights will cancel with the central value, leading to essentially a zero output value. However, in the region of an edge of the image, the output will be non-zero. Consequently, such a filter provides an output image which represents the edges of the original image. These outputs are thus referred to as edge images.
  • FIGURE 3 is a schematic illustration of photographic film 60 which may be the same or similar to photographic film 36.
  • the photographic film 60 has a series of substantially uniformly spaced sprocket holes 62 proximate to one longitudinal edge of the film 60, and another series of substantially uniformly spaced sprocket holes 64 proximate to a second longitudinal edge of the film 60 opposing the first longitudinal edge.
  • the film 60 may have notches spaced at regular intervals such as notches 66, 68, 70 and 72, or notch 74. Alternatively, one could deliberately cut notches into the film 60 at regular intervals.
  • the first and second longitudinal edges of the film 60, the notches 66, 68, 70, 72 and 74, and the sprocket holes 62 and 64 all appear in FIGURE 3, which illustrates an example of an edge image from a section of photographic film.
  • the edge image in FIGURE 3 is shown with the edges in black and the background in white.
  • locations of images on the film 60 are indicated schematically as solid rectangles with reference numerals 76, 78, 80 and 82 in FIGURE 3.
  • the dashed rectangle 84 indicates a region of the photographic film that has been scanned such that it includes at least a portion of the sprocket holes within the regions labeled 4 and 5.
  • FIGURE 4 is a blown-up section of a reference sprocket hole as indicated in FIGURE 3.
  • the portion of the sprocket hole illustrated in FIGURE 4 has a first vertical edge portion 86 and a second vertical edge portion 88.
  • the portion of the sprocket hole illustrated in FIGURE 4 which is contained within the scanned region 84 has a horizontal edge portion 90.
  • a first fiducial point 92 is proximate to a corner of the portion of the sprocket hole shown in FIGURE 4.
  • the first fiducial point 92 is the intersection between the vertical edge line 94 and the horizontal edge line 96.
  • the positions of the vertical edge line 94 and horizontal edge line 96 are determined according to an averaging procedure.
  • the vertical edge line 94 is determined by a weighted average of the number of pixels corresponding to the section 86 within pixel columns of the edge image.
  • the edge line 94 is substantially parallel to the pixel columns of the edge image.
  • each pixel in the edge image has a unique coordinate address, preferably corresponding to a pixel row and column number in the conventional raster image representation.
  • the concept of the invention is not limited to the usual Cartesian representation of raster images. It is also known to represent images in other coordinate representations, such as polar coordinates, or other orthogonal or non-orthogonal coordinate systems.
  • each pixel column number in the region around the edge 86 is multiplied by the number of pixels corresponding to the edge 86 that fall within that pixel column, the products are summed and then divided by a normalization factor to provide a weighted average pixel column number that defines the vertical edge line 94.
  • a similar procedure for the horizontal edge 90, with respect to pixel rows, provides a weighted average pixel row number for the fiducial point 92. Consequently, the weighted average pixel column number and weighted average pixel row number define the fiducial point 92.
  • although the fiducial point 92 is determined by finding the weighted average vertical edge line 94 from the vertical edge portion 86, the invention is not limited to establishing a reference marker in only this way.
  • the invention anticipates generally establishing reference markers in the edge image, as long as the reference markers are fixed relative to the image medium. For example, if the edge lines of the sprocket holes are significantly misaligned with rows and columns of the raster image, then it is preferable to determine the edges by a linear regression analysis.
  • a second fiducial point 98 is determined by a similar procedure as that used to determine the fiducial point 92.
  • the vertical edge line 100 is determined as a weighted average vertical edge line corresponding to the edge 88.
  • the fiducial point 98 is the intersection of the vertical edge line 100 and the horizontal edge line 96.
  • a blow-up view of section 5 illustrated in FIGURE 3 is shown in more detail in FIGURE 5.
  • the sprocket hole is at the opposing edge of the photographic film 60 relative to the sprocket hole illustrated in FIGURE 4.
  • the fiducial points 102 and 104 are determined by weighted average vertical edge line 106 and horizontal edge line 108, and vertical edge line 110 and horizontal edge line 108, respectively.
  • FIGURE 6 is an illustration of the edge image of the same section of photographic film 60 as in FIGURE 3, but with a different scan region 112.
  • the sprocket holes in the regions labeled 7 and 8 correspond to the sprocket holes in the regions labeled 4 and 5 in FIGURE 3.
  • the sprocket holes in regions 7 and 8 are part of a second image of the same sprocket holes in regions 4 and 5 of FIGURE 3.
  • the scan region 84 in FIGURE 3 contains a portion of the image 80 while the scan region 112 in FIGURE 6 contains a complementary portion of the same image 80.
  • FIGURE 7 shows a portion of the sprocket hole that is also illustrated in the region 7 in FIGURE 6. This corresponds to the sprocket hole in the region 4 illustrated in FIGURE 3. Since the sprocket hole in the regions 7 and 4 is fixed relative to the film 60, it can be used to line up the image portions of the image 80 and join them together. Similar to FIGURE 4, the portion of the sprocket hole in the region 7 has fiducial points 114 and 120.
  • the fiducial point 114 is determined as the intersection of the vertical edge line 116 and the horizontal edge line 118, each preferably determined by a weighted averaging procedure.
  • the fiducial point 120 is determined by the intersection of the vertical edge line 122 and the horizontal edge line 118.
  • FIGURE 8 contains a second edge image of the same sprocket hole that is illustrated in the first edge image in the region 5 of FIGURE 3.
  • FIGURE 8 shows an enlarged view of the portion of the sprocket hole in the region 8.
  • the sprocket hole in the region 8 also determines two fiducial points, labeled 124 and 126 in FIGURE 8.
  • the intersection of the vertical edge line 128 with the horizontal edge line 130 determines the fiducial point 124.
  • the intersection of the vertical edge line 132 and the horizontal edge line 130 determines the fiducial point 126.
  • the vertical edge lines 128 and 132 and the horizontal edge line 130 are determined by a weighted averaging procedure.
  • FIGURE 9 shows an example of the scanned region 84 illustrated in FIGURE 3 displaced and rotated relative to the scanned region 112 illustrated in FIGURE 6.
  • the displacement and rotation are greatly exaggerated to facilitate the explanation.
  • the first scan region 84 has the four fiducial points 92, 98, 102 and 104 as reference markers for aligning and combining the second scan region 112 with the first scan region 84.
  • the second scan region 112 has four fiducial points 114, 120, 124 and 126 in the preferred embodiment.
  • the invention is not limited specifically to four reference points in each scanned image.
  • As few as one reference marker in each image will be sufficient to make at least translational transformations to bring one portion of an image, such as image 80, into alignment for combining, or joining, with the complementary portion of the image in another scan region.
  • As few as two reference markers in each scan region permit one to make both translational and rotational corrections to combine the image portions.
  • each pair of fiducial points is averaged to provide one average point.
  • FIGURE 10 shows an example in which the average of the fiducial points 114 and 120 is translated to coincide with the average point of the fiducial points 92 and 98. Alternatively, one could have translated the average of the fiducial points 124 and 126 to coincide with the average of the fiducial points 102 and 104.
  • FIGURES 9 and 10 show the regions of the photographic film outside of the scan areas 84, 112, 84' and 112' for illustration purposes only. In actual practice of the invention, it is the areas within the scan regions 84, 112, 84' and 112' that will contain the edge image data. In other embodiments, one could calculate edge images for wider regions up to and including one or both edges of the photographic film 60, or, alternatively, use narrower regions than those shown for the preferred embodiment.
  • the reference numerals for the scan areas 84 and 112 and features within the scan areas are shown with primes to indicate that the relative coordinates of the pixels have been transformed. However, the film 60 and the notch 70 are not shown with primes to indicate that they are representing the underlying film itself, and not image data.
  • FIGURE 11 illustrates the result of a rotation after the translation illustrated in FIGURE 10.
  • the pivot point 134 of the rotation is the coinciding point of the average of fiducial points 92 and 98 and the point which is the average of point 114 and 120.
  • the line 136 shown in FIGURE 10 is defined by the pivot point 134 and the average point of the fiducial points 102' and 104'.
  • the line 138 is defined by the pivot point 134 and the average of the fiducial points 124' and 126'.
  • a first region 84 of photographic film 60 is scanned with a digital image scanner 22 which is preferably a digital film scanner.
  • the scanned image will have a plurality of image channels, such as the front reflection, back reflection and through (transmission front to back and/or back to front) channels discussed in U.S. Patent Application No. 08/955,853.
  • the scanned image data is then processed by the digital image data processor 50 and stored in the data storage device 52.
  • the processed and stored image data is then processed by the high-pass spatial filter 54 to produce a first edge image.
  • Reference marks are detected by the reference mark detector 56.
  • the reference mark data are stored in data storage unit 52.
  • a second region 112 of the photographic film 60 is scanned by the digital film scanner 22.
  • both the first and second scanned areas are sufficiently wide to include at least one-half of the sprocket holes 62 and 64 spaced along the opposing edges of the film 60.
  • the scan regions are preferably approximately equal to or wider than a single image on the photographic film.
  • the second scanned image is similarly processed by the digital image data processor 50 and stored in the data storage unit 52.
  • the second image is processed by the high-pass spatial filter 54 to produce a second edge image.
  • the reference mark detector 56 detects reference marks in the second edge image.
  • the partial image combiner determines a mapping rule to align corresponding and complementary partial images from the first and second scans, transforms the partial images such that they are properly aligned, and combines the first and second partial images into a single combined image.
  • FIGURE 12 is a flowchart that schematically illustrates the method of combining portions of a digital image according to the present invention.
  • First image data, from the first scan, is processed by the image data processor and filtered by the high-pass spatial filter to produce a first edge image. At least one reference location is determined in the first edge image.
  • Second image data, from the second scan, is processed by the image data processor and filtered by the high-pass spatial filter to produce a second edge image. At least one reference location is determined in the second edge image.
  • a mapping rule is determined to bring the reference locations substantially into coincidence such that a partial image in the first scan is properly aligned with a corresponding, complementary portion in the second scan. The mapping is applied to align the portions of the digital image, and the portions are combined into a single joined image, as sketched in the code below.
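
The overall flow above can be summarized in code. The sketch below is an editorial illustration only, written in Python: every callable name is a hypothetical placeholder for a component described in the text (the image scanner 22, high-pass spatial filter 54, reference mark detector 56, and partial image combiner 58); the patent defines no such API.

```python
def combine_partial_scans(scan, high_pass_filter, detect_marks,
                          determine_mapping, apply_mapping, join, crop):
    """Orchestration of the method of FIGURE 12. Every stage is passed
    in as a callable because the names here are illustrative, not an
    interface defined by the patent."""
    first = scan()                                 # first scan region
    marks_first = detect_marks(high_pass_filter(first))
    second = scan()                                # abutting or overlapping region
    marks_second = detect_marks(high_pass_filter(second))
    # Mapping rule that brings corresponding reference locations
    # substantially into coincidence: a translation, plus a rotation
    # when two or more fiducial points are available per scan.
    rule = determine_mapping(marks_first, marks_second)
    aligned_second = apply_mapping(second, rule)
    # Join the complementary image fragments and crop away the border
    # region containing the sprocket-hole images.
    return crop(join(first, aligned_second))
```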

Abstract

The digital image combining device (20) has a digital image scanner (22), a digital image data processor (50), a data storage device (52), a high-pass spatial filter (54), a reference mark detector (56) to detect reference marks (40) in the photographic film (36), and a partial image combiner (58) which aligns and joins image portions of a single photographic image that are split between two image scan regions. The method of combining portions of a digital image (80) forms a first digital image (84), generates an edge image of the first digital image (84), determines at least one reference mark in the first digital image (84), forms a second digital image (112), generates a second edge image of the second digital image (112), determines a second image reference location, determines a transformation rule, transforms the first and/or second image using the transformation rule, and combines an image portion from the first digital image (84) with a corresponding and complementary image portion in the second digital image (112).

Description

A METHOD AND DEVICE FOR COMBINING PARTIAL FILM SCAN IMAGES
BACKGROUND OF THE INVENTION
The entire contents of the co-pending application entitled "A Method and Device for the Alignment of Digital Images" by the same inventor as this application (Michael Keyes), filed September 30, 1998 with attorney docket number 12715/252803, are hereby incorporated by reference.
1. Field of the Invention
The present invention pertains to a digital image processing device and method, and more particularly to a digital image device and method that aligns and/or combines partial film scan images prior to completion of digital film processing.
2. Description of the Related Art
Numerous scanning devices are known for capturing digital images of objects. For example, there are numerous flat-bed and sheet-fed scanners currently available on the market for converting photographs, pages of text, and transparencies to digital images. There are also conventional film scanners available which scan photographic film to produce digital images. Most current film scanners use one set of three linear charge-coupled devices (CCDs) to scan photographic film. Each of the three CCDs scans in one region of the visible spectrum: typically red, green and blue channels. In such conventional scanners, image data is captured in each color channel (e.g., red, green and blue) at substantially the same time after the film has been fully developed. The three CCDs pass over the film once, thus providing three separate color channel scans at substantially the same time.
Another type of film scanner is described by Edgar in U.S. Patent Nos. 5,519,510 and 5,155,596, and U.S. Patent Application No. 08/955,853, the entire contents of which are incorporated herein by reference. Edgar teaches a device and method for chemically processing and scanning photographic film. Scanning the photographic film in separate color channels with different sets of CCDs positioned on either side of the photographic film produces images of superior quality according to the methods and device of Edgar. Edgar also teaches that it is advantageous to perform multiple scans of the film with each of the separate color channels using different sets of CCDs. For example, it may be advantageous to capture data for the image highlights at an early stage of development of the photographic film, to capture the mid-tones at an intermediate stage of development, and to capture the shadow image data at a later stage of photographic film development.
A problem with scanning photographic film prior to its being fully developed is that the locations of the images on the film are generally unknown. For example, the locations of images early in the development process may not be known at all, while the locations of images may be known only with low precision at an intermediate stage. It may only be late in the development process that the locations of the images become known with high precision. Consequently, if one were to process a scan area roughly the size of an image on the exposed photographic film, it would usually contain fragments of two different images. In other words, the scan area will straddle two images such that it contains a partial image fragment of two different images. As a result, the image fragments from two separate scan areas must be aligned and combined to produce a final joined output image. Alternatively, if one were to collect the image data and wait until the film is completely developed before beginning the image processing, considerable computer memory would be necessary and the image processing would be completed later than if the digital processing were conducted during the photographic film processing.
SUMMARY OF THE INVENTION
Accordingly, it is an object of the present invention to provide a digital image combining device and method to combine image fragments into a single joined image. It is another object of the present invention to provide a digital image combining device and method that corrects for misalignments between a portion of an image in a scanned area and a remaining portion of the same image in a second scanned area.
It is another object of the present invention to provide a digital image combining device and method that corrects for translational misalignments between a portion of an image in a scanned area of photographic film and a remaining portion of the same image in a second scanned area of the photographic film.
It is another object of the present invention to provide a digital image combining device and method that corrects for rotational misalignments between a portion of an image in a first scanned area of photographic film and a remaining portion of the same image in a second scanned area of the photographic film.
It is another object of the present invention to provide a digital image combining device and method that corrects for translational and rotational errors between a portion of an image in a scanned area of photographic film and a remaining portion of the image in a second scanned area of the photographic film.
The above and related objects of the present invention are realized by providing a digital image combining device and method in which two image fragments are aligned and joined to form a single combined image. According to the present invention, an area of an image medium is scanned to produce a raster image of the scanned area. Preferably, the image medium is photographic film. A second area of the photographic film is scanned to produce a second raster image of the second scanned area. The second scanned area is displaced along the longitudinal direction of the photographic film with respect to the first scanned area such that the scanned areas are abutting or partially overlapping. The length of each of the first and second scanned areas is selected to be approximately equal to or greater than a longitudinal length of an image recorded in the photographic film. Preferably, the photographic film has regularly spaced perforation holes, such as sprocket holes. Regularly spaced indentations or notches, or other similar indicia are also suitable. The width of each of the first and second scanned areas is preferably of a width such that it contains at least a fraction of the sprocket holes, or other indicia, along at least one of the edges of the photographic film. More preferably, the width of each of the first and second scanned areas is approximately the width required to extend entirely across the photographic image out to at least the half-way point of the sprocket holes along each side of the photographic film.
The sprocket holes, notches, dents, or other indicia provide reference markers which are fixed relative to the photographic film. The first raster image of the first scanned area is filtered with a high-pass spatial filter to produce an edge image corresponding to the first raster image. The reference markers, which are portions of the sprocket holes in the preferred embodiment, appear in the edge image as outlines. In the preferred embodiment, the sprocket holes are used to establish reference points in order to align and combine complementary portions of an image in the first and second scanned images. In the preferred embodiment, at least one corner, and more preferably, both corners, of a sprocket hole half on each opposing edge of the photographic film determine reference points. The most preferred embodiment determines four reference points in each of the two scanned images; thus, eight reference points in total. The following description refers to the reference points in the preferred embodiment as fiducial points.
Each fiducial point is determined by the intersection of a vertical edge line and a horizontal edge line corresponding to its respective sprocket hole portion. The vertical edge line is taken to be parallel to pixel columns in the edge image. The column address of the vertical edge line for each of the fiducial points is preferably determined by a weighted average in which each column address is multiplied by the number of sprocket hole edge pixels in that column, all products for an edge of the sprocket hole are summed, and the sum is divided by a normalization factor. A similar procedure is followed to determine the horizontal edge line for each of the fiducial points, except that the horizontal edge line is taken to be parallel to the pixel rows and is preferably assigned a weighted average row number.
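
As an editorial illustration, this weighted-average computation can be sketched as follows in Python with NumPy. The binary edge image (edge pixels set to 1) and the caller-supplied row/column search windows around each sprocket-hole edge are assumptions; the patent does not specify how such windows are chosen.

```python
import numpy as np

def weighted_edge_column(edge: np.ndarray, rows: slice, cols: slice) -> float:
    """Weighted average column address of a near-vertical edge: each
    column address is multiplied by its count of edge pixels, the
    products are summed, and the sum is divided by the normalization
    factor (here, the total number of edge pixels in the window)."""
    counts = edge[rows, cols].sum(axis=0)        # edge pixels per column
    addresses = np.arange(cols.start, cols.stop)
    return float((addresses * counts).sum() / counts.sum())

def weighted_edge_row(edge: np.ndarray, rows: slice, cols: slice) -> float:
    """The same procedure with respect to pixel rows, giving the
    weighted average row number of a near-horizontal edge."""
    counts = edge[rows, cols].sum(axis=1)        # edge pixels per row
    addresses = np.arange(rows.start, rows.stop)
    return float((addresses * counts).sum() / counts.sum())

# A fiducial point is then the intersection of the two edge lines:
# fiducial = (weighted_edge_row(edge, r_win_h, c_win_h),
#             weighted_edge_column(edge, r_win_v, c_win_v))
```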
One fiducial point in each of the scanned areas is sufficient to make translational corrections. A minimum of two fiducial points in each scanned image is required to make rotational corrections in addition to the translational corrections. However, the most preferred embodiment uses four fiducial points in each of the scanned images. The fiducial points are preferably established for two sprocket holes in each scanned area which are proximate to the joining regions of the image fragments and which correspond to the same sprocket hole, but as viewed in the first and second images. Preferably, fiducial points are determined for one sprocket hole along each transverse side of the photographic film along the joining region of the image fragments for each of the first and second scanned areas. In the preferred embodiment, the two fiducial points proximate to each sprocket hole are averaged to provide one average fiducial point intermediate between its respective fiducial points. Thus, there is one average fiducial point for each of the four sprocket hole images used in the alignment and combining process. In a translational correction, at least one of the fiducial points in the first scanned image is required to coincide with a corresponding fiducial point in the second scanned image. This provides a translational correction rule. The translational correction rule is then applied to all pixels, preferably within one image, such that there is a uniform translational correction of all the pixels of the first raster image relative to the second raster image.
In the preferred embodiment, the translational correction rule is determined by mapping one of the above-mentioned average fiducial points to its corresponding average fiducial point.
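
A minimal sketch of the translational correction rule follows, assuming fiducial points are given as (row, column) pairs and that the second raster image is shifted onto the first; which image is shifted is an arbitrary choice, since the text only requires a uniform correction of one image relative to the other.

```python
def average_fiducial(p, q):
    """Average fiducial point midway between the two corner fiducial
    points of one sprocket-hole image; points are (row, column) pairs."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def translation_rule(target, source):
    """Uniform shift that brings `source` (an average fiducial point in
    the second scanned image) into coincidence with `target` (the
    corresponding point in the first scanned image)."""
    return (target[0] - source[0], target[1] - source[1])

def translate(points, shift):
    """Apply the same translational correction rule to every pixel address."""
    return [(r + shift[0], c + shift[1]) for (r, c) in points]
```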
The rotational correction can similarly be applied to fiducial points or average fiducial points. In the preferred embodiment, the rotational correction is performed after the translational correction based on one average fiducial point. Consequently, after the translational correction, the coinciding average fiducial point establishes a first line in the first scanned image passing through the average fiducial point on the opposing side of the photographic film. Similarly, the coinciding average fiducial point establishes a second line in the second scanned image passing through the average fiducial point on the opposing side of the photographic film. In general, there will be a non-zero angle between the first and second lines.
A rotational correction rule is determined by rotating the first line to coincide with the second line. In other words, the average fiducial point in the second scanned image which is not in coincidence with the corresponding average fiducial point in the first scanned image is rotated to bring the two into substantial coincidence. The pivot point of the rotation is the point at which the two substantially coinciding average fiducial points were aligned in the translational correction.
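
The rotational correction can be sketched in the same (row, column) convention. The `atan2`-based angle and the rotation of pixel addresses about the pivot are straightforward geometry; resampling the rotated raster onto the pixel grid is omitted, since the patent does not prescribe an interpolation method.

```python
import math

def rotation_rule(pivot, point_first, point_second):
    """Angle (radians) that rotates the second image's line,
    pivot -> point_second, onto the first image's line,
    pivot -> point_first; angles are measured as atan2(d_row, d_col)."""
    a_first = math.atan2(point_first[0] - pivot[0], point_first[1] - pivot[1])
    a_second = math.atan2(point_second[0] - pivot[0], point_second[1] - pivot[1])
    return a_first - a_second

def rotate_about(pivot, point, angle):
    """Rotate one pixel address about the pivot point by `angle`,
    written in (row, column) coordinates so that it is consistent with
    the atan2(d_row, d_col) angle convention above."""
    dr, dc = point[0] - pivot[0], point[1] - pivot[1]
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    return (pivot[0] + dc * sin_a + dr * cos_a,
            pivot[1] + dc * cos_a - dr * sin_a)
```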
After an image portion in the first raster image is brought into alignment with a corresponding, and complementary, image portion in the second raster image, there will generally be a region of overlap between the two images. Consequently, since there would be two image values corresponding to each pixel in the overlap region, one must determine a method of assigning a unique pixel image value in the overlap region. In the preferred embodiment, the image values of all of the pixels in the overlap region are taken either entirely from the first raster image or entirely from the second. However, one skilled in the art would recognize that this invention includes other procedures for establishing a unique pixel value. For example, one could randomly select one or the other pixel value for each pixel, or perhaps establish some other feathering technique at the interface between the first and second raster image portions of the photographic image. Finally, the combined image is cropped to remove the edge region that includes images of the sprocket holes, and to separate it from adjoining images.
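
The feathering alternative mentioned above can be sketched as follows, assuming the two strips are already aligned and share a band of `overlap` pixel columns along the seam; the linear ramp is one simple choice, not something the patent prescribes.

```python
import numpy as np

def join_with_feather(first: np.ndarray, second: np.ndarray,
                      overlap: int) -> np.ndarray:
    """Join two aligned image strips sharing `overlap` columns. The
    preferred embodiment takes every overlap pixel from one image;
    this sketches the feathering alternative instead."""
    h, w1 = first.shape
    w2 = second.shape[1]
    out = np.empty((h, w1 + w2 - overlap), dtype=float)
    out[:, :w1 - overlap] = first[:, :w1 - overlap]
    out[:, w1:] = second[:, overlap:]
    # Blend weights ramp linearly from all-first to all-second
    # across the seam, avoiding a visible join line.
    ramp = np.linspace(1.0, 0.0, overlap)
    out[:, w1 - overlap:w1] = (ramp * first[:, w1 - overlap:]
                               + (1.0 - ramp) * second[:, :overlap])
    return out
```

BRIEF DESCRIPTION OF THE DRAWINGS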
These and other objects and advantages of the present invention will become apparent and more readily appreciated from the following detailed description of the presently preferred exemplary embodiment of the invention taken in conjunction with the accompanying drawings, of which:
FIGURE 1 is a schematic illustration of a digital image combining device according to the preferred embodiment of the invention;
FIGURE 2 is a schematic illustration showing a more detailed view of the digital image combining device according to the preferred embodiment of the invention;
FIGURE 3 is an illustration of an edge image of photographic film with the locations of images on the film shown schematically as large solid rectangles and the scan area shown schematically as a dashed rectangular box;
FIGURE 4 is an illustration of a portion of one of the sprocket holes illustrated in FIGURE 3 along with examples of fiducial points;
FIGURE 5 is an illustration of a portion of one of the sprocket holes illustrated in FIGURE 3 along with examples of fiducial points;
FIGURE 6 is an edge image of a portion of photographic film, like FIGURE 3, except the scan area is displaced relative to the scan area illustrated in FIGURE 3;
FIGURE 7 is similar to FIGURE 4, but shows a portion of a sprocket hole illustrated in FIGURE 6 along with corresponding fiducial points;
FIGURE 8 is similar to FIGURE 5, except that it shows a portion of a sprocket hole illustrated in FIGURE 6 along with examples of fiducial points;
FIGURE 9 shows a case in which the scan area illustrated in FIGURE 3 is displaced and rotated relative to the scan area illustrated in FIGURE 6;
FIGURE 10 illustrates the scan areas shown in FIGURE 9, but after a translational transformation between the two scan areas;
FIGURE 11 illustrates the scan areas shown in FIGURE 10, but after a rotational transformation between the two scan areas;
FIGURE 12 is a flow-chart illustrating the method of combining portions of a digital image according to the preferred embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The digital image combining device according to the present invention is designated generally by the reference numeral 20 in FIGURE 1. In the preferred embodiment, the digital image combining device 20 generally has an image scanning device 22, a data processor 24, an output display device 26, and an input device 28. In addition, the digital image combining device 20 may include one or more generally available peripheral devices 30. In the preferred embodiment, the image scanner 22 is an electronic film developing apparatus, such as described in U.S. Patent Application No. 08/955,853, the data processor 24 is a personal computer or a work station, the output device 26 is a video monitor, and the input device 28 is a keyboard. Although the image scanner 22 is an electronic film developer in the preferred embodiment, the image scanner is not limited to being only an electronic film developer. Other scanning devices, including devices which scan media other than photographic film, are encompassed within the scope and spirit of the invention.
FIGURE 2 is a schematic illustration of several elements of the data processor 24. FIGURE 2 also illustrates, schematically, some components interior to the electronic film developer 22. The electronic film developer 22 is in communication with the data processor 24. The scanning device 22 has at least two scanning stations 32 and 34. Although the example of the digital image scanner 22 illustrated in FIGURE 2 is a schematic illustration of a digital film scanner that has two scanning stations, it is anticipated that other digital film scanners will have three or more scanning stations. Furthermore, in a less preferred embodiment, the digital film scanner 22 may have a single scanning station. In the preferred embodiment, exposed photographic film 36 is directed to move through the scanning stations in the longitudinal direction 38. The photographic film 36 has reference markers 40 at one transverse edge of the photographic film 36. Preferably, the reference markers 40 are sprocket holes, such as sprocket hole 42, in the photographic film 36. The photographic film 36 has additional reference markers 44 in the transverse direction 46 opposing the reference markers 40. A portion of the photographic film 48 is scanned over a time interval at scanning station 32. Similarly, a portion of the photographic film 49 is scanned at scanning station 34 over a period of time. Scanning stations for electronic film development are also described in U.S. Patent No. 5,519,510 and U.S. Patent No. 5,155,596.
In electronic film development, the photographic film 36 is typically subjected to a film development treatment prior to the scanning station 32 and another film development treatment between scanning stations 32 and 34. Consequently, the film 36 is at one stage of development at scanning station 32 and at a different stage of film development at scanning station 34. It is anticipated that many digital film scanners will also have at least a third scanning station; for digital film scanners that have more than two scanning stations, the stage of film development at scanning stations 32 and 34 will differ from that at the subsequent scanning stations.
Scanned image data is transferred from each scanning station 32 and 34 to the data processor 24. The data processor 24 has a digital image data processor 50 that is in communication with the scanning stations 32 and 34. The digital image data processor 50 is also in communication with a data storage unit 52 that stores processed image data, preferably in a conventional raster image format. The data storage unit 52 is in communication with a high-pass spatial filter 54 such that the filter 54 receives stored raster image data from the storage unit 52. A reference mark detector 56 is in communication with the high-pass spatial filter 54 such that it receives filtered images from the high-pass spatial filter 54. The reference mark detector 56 is also in communication with the data storage unit 52. The partial image combiner 58 is in communication with the reference mark detector 56 and the data storage unit 52. In the preferred embodiment, the digital image data processor 50, the high-pass spatial filter 54, the reference mark detector 56 and the partial image combiner 58 are implemented in practice by programming a personal computer or a workstation. However, the invention includes other embodiments in which the components are implemented as dedicated hardware components.
Preferably, the digital image data processor 50 is a conventional digital image data processor that processes scanned data from scanning stations 32 and 34 and outputs a digital raster image in a conventional format to be stored in data storage unit
52. In the preferred embodiment, the data storage unit 52 may be either a conventional hard drive or semiconductor memory, such as random access memory (RAM), or a combination of both. It is anticipated that other data storage devices may be used without departing from the scope and spirit of the invention. Preferably, the high-pass spatial filter uses a conventional spatial mask such as that described in R.C.
Gonzalez and R.E. Woods, Digital Image Processing, pages 189-249, the entire contents of which are incorporated herein by reference. In such a high-pass spatial filter, a three-pixel-by-three-pixel mask is usually sufficient, although one may select larger masks. For a three-pixel-by-three-pixel mask, the center mask element is given a weight of 8 and each neighboring pixel is given a weight of -1. The mask is then applied to each pixel of the raster image. If the subject pixel is in a fairly uniform region of the image, the sum of the neighboring pixel values multiplied by the mask weights will cancel with the central value, thus leading to essentially a zero output value. However, in the region of an edge of the image, the output will be non-zero. Consequently, such a filter will provide an output image which represents the edges of the original image. These are thus referred to as edge images.
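
As an editorial sketch, the three-pixel-by-three-pixel mask described above can be applied with NumPy as follows. The binarization threshold is an assumption; the text only requires essentially zero output in uniform regions and non-zero output at edges.

```python
import numpy as np

def edge_image(raster: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Apply the 3x3 high-pass mask (center weight 8, neighbors -1)
    to every pixel and binarize the response into an edge image."""
    padded = np.pad(raster.astype(float), 1, mode="edge")
    response = 8.0 * padded[1:-1, 1:-1]
    # Subtract each of the eight neighboring pixels (weight -1 each).
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            response -= padded[1 + dr:padded.shape[0] - 1 + dr,
                               1 + dc:padded.shape[1] - 1 + dc]
    # Uniform regions cancel to essentially zero; edges do not.
    return (np.abs(response) > threshold).astype(np.uint8)
```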
FIGURE 3 is a schematic illustration of photographic film 60 which may be the same or similar to photographic film 36. The photographic film 60 has a series of substantially uniformly spaced sprocket holes 62 proximate to one longitudinal edge of the film 60, and another series of substantially uniformly spaced sprocket holes 64 proximate to a second longitudinal edge of the film 60 opposing the first longitudinal edge. In addition, the film 60 may have notches spaced at regular intervals such as notches 66, 68, 70 and 72, or notch 74. Alternatively, one could deliberately cut notches into the film 60 at regular intervals. The first and second longitudinal edges of the film 60, the notches 66, 68, 70, 72 and 74, and the sprocket holes 62 and 64 all appear in FIGURE 3, which illustrates an example of an edge image from a section of photographic film. The edge image in FIGURE 3 is shown with the edges in black and the background in white. In addition, locations of images on the film 60 are indicated schematically as solid rectangles with reference numerals 76, 78, 80 and 82 in FIGURE 3. The dashed rectangle 84 indicates a region of the photographic film that has been scanned such that it includes at least a portion of the sprocket holes within the regions labeled 4 and 5.
FIGURE 4 is a blown-up view of a reference sprocket hole as indicated in FIGURE 3. The portion of the sprocket hole illustrated in FIGURE 4 has a first vertical edge portion 86 and a second vertical edge portion 88. The portion of the sprocket hole illustrated in FIGURE 4 which is contained within the scanned region 84 has a horizontal edge portion 90. A first fiducial point 92 is proximate to a corner of the portion of the sprocket hole shown in FIGURE 4. The first fiducial point 92 is the intersection between the vertical edge line 94 and the horizontal edge line 96. Preferably, the positions of the vertical edge line 94 and horizontal edge line 96 are determined according to an averaging procedure. In the preferred embodiment, the vertical edge line 94 is determined by a weighted average of the number of pixels corresponding to the section 86 within pixel columns of the edge image. Preferably, the edge line 94 is substantially parallel to the pixel columns of the edge image. In the preferred embodiment, each pixel in the edge image has a unique coordinate address, preferably corresponding to a pixel row and column number in the conventional raster image representation. However, the concept of the invention is not limited to the usual Cartesian representation of raster images. It is also known to represent images in other coordinate representations, such as polar coordinates, or other orthogonal or non-orthogonal coordinate systems. In the preferred averaging procedure, each pixel column number in the region around the edge 86 is multiplied by the number of pixels corresponding to the edge 86 that fall within that pixel column, the products are summed and then divided by a normalization factor to provide a weighted average pixel column number that defines the vertical edge line 94. A similar procedure for the horizontal edge 90, with respect to pixel rows, provides a weighted average pixel row number for the fiducial point 92. Consequently, the weighted average pixel column number and weighted average pixel row number define the fiducial point 92. Although the fiducial point 92 is determined by finding the weighted average vertical edge line 94 from the vertical edge portion 86, the invention is not limited to establishing a reference marker in only this way. The invention anticipates generally establishing reference markers in the edge image, as long as the reference markers are fixed relative to the image medium. For example, if the edge lines of the sprocket holes are significantly misaligned with rows and columns of the raster image, then it is preferable to determine the edges by a linear regression analysis.
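
Where the linear regression variant is preferred, a least-squares fit of each edge can replace the weighted row/column averages. The sketch below is an assumption-laden illustration: it fits lines to the edge pixels inside caller-supplied windows and intersects them to obtain a fiducial point.

```python
import numpy as np

def regress_vertical_edge(edge: np.ndarray, rows: slice, cols: slice):
    """Least-squares line col = a * row + b through the edge pixels in
    the window; suits a near-vertical sprocket-hole edge that is
    tilted relative to the pixel grid."""
    r, c = np.nonzero(edge[rows, cols])
    a, b = np.polyfit(r + rows.start, c + cols.start, 1)
    return a, b

def regress_horizontal_edge(edge: np.ndarray, rows: slice, cols: slice):
    """Least-squares line row = c * col + d for a near-horizontal edge."""
    r, c = np.nonzero(edge[rows, cols])
    slope, intercept = np.polyfit(c + cols.start, r + rows.start, 1)
    return slope, intercept

def fiducial_from_lines(v_line, h_line):
    """Intersect col = a*row + b with row = c*col + d to obtain the
    (row, column) address of the fiducial point."""
    a, b = v_line
    c, d = h_line
    row = (c * b + d) / (1.0 - a * c)
    return row, a * row + b
```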
A second fiducial point 98 is determined by a similar procedure as that used to determine the fiducial point 92. In this case, the vertical edge line 100 is determined as a weighted average vertical edge line corresponding to the edge 88. The fiducial point 98 is the intersection of the vertical edge line 100 and the horizontal edge line 96. Similarly, one may determine fiducial points relative to other sprocket holes in the edge image. A blow-up view of section 5 illustrated in FIGURE 3 is shown in more detail in FIGURE 5. In FIGURE 5, the sprocket hole is at the opposing edge of the photographic film 60 relative to the sprocket hole illustrated in FIGURE 4. The fiducial points 102 and 104 are determined by weighted average vertical edge line 106 and horizontal edge line 108, and vertical edge line 110 and horizontal edge line 108, respectively.
FIGURE 6 is an illustration of the edge image of the same section of photographic film 60 as in FIGURE 3, but with a different scan region 112. The sprocket holes in the regions labeled 7 and 8 correspond to the sprocket holes in the regions labeled 4 and 5 in FIGURE 3. In other words, the sprocket holes in regions 7 and 8 are part of a second image of the same sprocket holes in regions 4 and 5 of FIGURE 3. One can see from comparing FIGURES 3 and 6 that the scan region 84 in FIGURE 3 contains a portion of the image 80 while the scan region 112 in
FIGURE 6 contains a complementary portion of the same image 80. One can also see from this comparison that there will also generally be a region of overlap between the image portion 80 illustrated in FIGURE 3 once it is joined with the image portion of the same image 80 illustrated in FIGURE 6. FIGURE 7 shows a portion of the sprocket hole that is also illustrated in the region 7 in FIGURE 6. This corresponds to the sprocket hole in the region 4 illustrated in FIGURE 3. Since the sprocket hole in the regions 7 and 4 is fixed relative to the film 60, it can be used to line up the image portions of the image 80 and join them together. Similar to FIGURE 4, the portion of the sprocket hole in the region 7 has fiducial points 114 and 120. The fiducial point 114 is determined as the intersection of the vertical edge line 116 and the horizontal edge line 118, each preferably determined by a weighted averaging procedure. The fiducial point 120 is determined by the intersection of the vertical edge line 122 and the horizontal edge line 118.
FIGURE 8 contains a second edge image of the same sprocket hole that is illustrated in the first edge image in the region 5 of FIGURE 3. FIGURE 8 shows an enlarged view of the portion of the sprocket hole in the region 8. The sprocket hole in the region 8 also determines two fiducial points, labeled 124 and 126 in FIGURE 8. The intersection of the vertical edge line 128 with the horizontal edge line 130 determines the fiducial point 124. The intersection of the vertical edge line 132 and the horizontal edge line 130 determines the fiducial point 126. As in the previous cases, the vertical edge lines 128 and 132 and the horizontal edge line 130 are determined by a weighted averaging procedure.
FIGURE 9 shows an example of the scanned region 84 illustrated in FIGURE 3 displaced and rotated relative to the scanned region 112 illustrated in FIGURE 6. The displacement and rotation are greatly exaggerated to facilitate the explanation. In order to obtain a full image 80 from the first scan 84 and the second scan 112, one must align and combine the left portion (as viewed in the figure) of the image 80 in the first scan 84 with the right portion of the image 80 in the second scan 112. In the preferred embodiment, the first scan region 84 has the four fiducial points 92, 98, 102 and 104 as reference markers for aligning and combining the second scan region 112 with the first scan region 84. The second scan region 112 has four fiducial points 114, 120, 124 and 126 in the preferred embodiment. However, the invention is not limited specifically to four reference points in each scanned image. One may select a greater number, or a smaller number, of reference markers. As few as one reference marker in each image is sufficient to make at least translational transformations to bring one portion of an image, such as the image 80, into alignment for combining, or joining, with the complementary portion of the image in another scan region. As few as two reference markers in each scan region permit one to make both translational and rotational corrections to combine the image portions, as sketched below.
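A minimal sketch of that two-marker case, assuming (row, col) coordinates and treating the column axis as x and the row axis as y; `rigid_from_two_points` is a hypothetical helper, not the patent's own formulation.

```python
import numpy as np

def _angle(v):
    """Angle of a (row, col) vector, treating col as x and row as y."""
    return np.arctan2(v[0], v[1])

def rigid_from_two_points(p1, p2, q1, q2):
    """Rotation matrix R and translation t mapping (p1, p2) onto (q1, q2).

    Two reference markers per scan pin down both corrections: the angle
    between the marker-to-marker vectors gives the rotation, and moving
    the rotated first marker onto its counterpart gives the translation.
    """
    p1, p2, q1, q2 = (np.asarray(p, dtype=float) for p in (p1, p2, q1, q2))
    theta = _angle(q2 - q1) - _angle(p2 - p1)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])    # rotation acting on (row, col) vectors
    t = q1 - R @ p1
    return R, t                        # so that q = R @ p + t
```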
In the preferred embodiment, the locations of each of the pairs of fiducial points are averaged to provide one average point. In other words, the fiducial points 92 and 98 are averaged to provide an average position intermediate between the points 92 and 98 along the horizontal edge line 96. Similarly, the points 102 and 104 are averaged, the points 114 and 120 are averaged, and the points 124 and 126 are averaged. FIGURE 10 shows an example in which the average of the fiducial points 114 and 120 is translated to coincide with the average point of the fiducial points 92 and 98. Alternatively, one could have translated the average of the fiducial points 124 and 126 to coincide with the average of the fiducial points 102 and 104. FIGURES 9 and 10 show the regions of the photographic film outside of the scan areas 84, 112, 84' and 112' for illustration purposes only. In actual practice of the invention, it is the areas within the scan regions 84, 112, 84' and 112' that will contain the edge image data. In other embodiments, one could calculate edge images for wider regions, up to and including one or both edges of the photographic film 60, or, alternatively, use narrower regions than those shown for the preferred embodiment. In FIGURE 10, the reference numerals for the scan areas 84 and 112 and features within the scan areas are shown with primes to indicate that the relative coordinates of the pixels have been transformed. However, the film 60 and the notch 70 are not shown with primes, to indicate that they represent the underlying film itself, and not image data.
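A sketch of this averaging-and-translation step, again with hypothetical names; points are (row, col) pairs.

```python
import numpy as np

def translation_between_pairs(dst_pair, src_pair):
    """Translation moving the mean of src_pair onto the mean of dst_pair.

    Mirrors the step above: the average of fiducial points 114 and 120 in
    the second scan is moved onto the average of points 92 and 98 in the
    first scan.  The returned offset is added to every second-scan pixel
    address.
    """
    dst = np.mean(np.asarray(dst_pair, dtype=float), axis=0)
    src = np.mean(np.asarray(src_pair, dtype=float), axis=0)
    return dst - src

# e.g. shift = translation_between_pairs([pt_92, pt_98], [pt_114, pt_120])
```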
FIGURE 11 illustrates the result of a rotation after the translation illustrated in FIGURE 10. The pivot point 134 of the rotation is the coinciding point of the average of the fiducial points 92 and 98 and the average of the points 114 and 120. The line 136 shown in FIGURE 10 is defined by the pivot point 134 and the average point of the fiducial points 102' and 104'. The line 138 is defined by the pivot point 134 and the average of the fiducial points 124' and 126'. After the rotational transformation, the lines 136 and 138 substantially coincide, as illustrated in FIGURE 11 as the substantially coinciding lines 136' and 138'. In general, after the rotational and translational transformations, there will be a region of the aligned image 80" that will overlap, as indicated by the region 140. In the preferred embodiment, all pixels within the region 140 in the combined image 80" will be assigned image values corresponding to either the first scan or the second scan. However, other approaches may be taken to remove the redundancy of the image information in the overlap region 140.
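The corresponding rotational correction might look as follows; the pivot is the coinciding averaged point 134, and the angle is the one that brings line 138 onto line 136. The names and bookkeeping are illustrative assumptions.

```python
import numpy as np

def rotation_angle(pivot, line_end_a, line_end_b):
    """Angle (radians) rotating ray pivot->line_end_a onto pivot->line_end_b."""
    va = np.asarray(line_end_a, dtype=float) - pivot
    vb = np.asarray(line_end_b, dtype=float) - pivot
    return np.arctan2(vb[0], vb[1]) - np.arctan2(va[0], va[1])

def rotate_about_pivot(points, pivot, theta):
    """Rotate an array of (row, col) points about `pivot` by `theta`."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])    # rotation for (row, col) vectors
    pts = np.asarray(points, dtype=float)
    return (pts - pivot) @ R.T + pivot

# e.g. theta = rotation_angle(pivot_134, avg_124_126, avg_102_104)
#      moved = rotate_about_pivot(second_scan_points, pivot_134, theta)
```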
In operation, a first region 84 of photographic film 60 is scanned with a digital image scanner 22, which is preferably a digital film scanner. In general, the scanned image will have a plurality of image channels, such as the front reflection, back reflection and through (transmission front to back and/or back to front) channels discussed in U.S. Patent Application No. 08/955,853. The scanned image data are then processed by the digital image data processor 50 and stored in the data storage device 52. The processed and stored image data are then processed by the high-pass spatial filter 54 to produce a first edge image. Reference marks are detected by the reference mark detector 56, and the reference mark data are stored in the data storage unit 52. A second region 112 of the photographic film 60 is scanned by the digital film scanner 22. Preferably, both the first and second scanned areas are sufficiently wide to include at least one-half of the sprocket holes 62 and 64 spaced along the opposing edges of the film 60. In addition, the scan regions are preferably approximately equal to or wider than a single image on the photographic film.
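As one illustration of the filtering performed by the high-pass spatial filter 54, a discrete Laplacian is a common choice; the specification does not commit to a particular kernel, so the one below is an assumption for this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

# A 3x3 discrete Laplacian -- one possible high-pass spatial filter.
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def make_edge_image(scan_channel):
    """High-pass filter one scanned channel to produce an edge image."""
    return np.abs(convolve(scan_channel.astype(float), LAPLACIAN,
                           mode="nearest"))
```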
The second scanned image is similarly processed by the digital image data processor 50 and stored in the data storage unit 52. The second image is processed by the high-pass spatial filter 54 to produce a second edge image. The reference mark detector 56 detects reference marks in the second edge image. The partial image combiner determines a mapping rule to align corresponding and complementary partial images from the first and second scans, transforms the partial images such that they are properly aligned, and combines the first and second partial images into a single combined image.
FIGURE 12 is a flowchart that schematically illustrates the method of combining portions of a digital image according to the present invention. Image data from the first scan are processed by the image data processor and filtered by the high-pass spatial filter to produce a first edge image. At least one reference location is determined in the first edge image. Image data from the second scan are similarly processed and high-pass spatial filtered to produce a second edge image. At least one reference location is determined in the second edge image. A mapping rule is determined to bring the reference locations substantially into coincidence, such that a partial image in the first scan is properly aligned with a corresponding, complementary portion in the second image scan. The mapping is applied to align the portions of the digital image, and the portions are combined into a single joined image. Although only a single embodiment has been described in detail above, those skilled in the art will readily appreciate that many modifications of the exemplary embodiment are possible without materially departing from the novel teachings and advantages of this invention.
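Putting the pieces together, a minimal end-to-end sketch of the flow charted in FIGURE 12 might look as follows. It assumes the helpers sketched earlier, a caller-supplied find_fiducials routine (for example, a sprocket-hole corner finder built from the weighted-average functions), and that the first scan has already been placed on the output canvas with NaN marking pixels it does not cover; all of these are assumptions for illustration, not requirements of the patent.

```python
import numpy as np
from scipy.ndimage import affine_transform

def combine_partial_scans(canvas1, scan2, find_fiducials):
    """Align scan2 to canvas1 via fiducial points and join the two.

    canvas1: first scan placed on the output canvas (NaN where no data).
    scan2:   raw second scan.
    """
    p1, p2 = find_fiducials(make_edge_image(np.nan_to_num(canvas1)))
    q1, q2 = find_fiducials(make_edge_image(scan2))
    # Mapping rule: transform second-scan coordinates into canvas coordinates.
    R, t = rigid_from_two_points(q1, q2, p1, p2)
    # affine_transform maps *output* coordinates back to input coordinates,
    # hence the inverse transform (R.T, -R.T @ t).
    aligned2 = affine_transform(scan2.astype(float), R.T, offset=-R.T @ t,
                                output_shape=canvas1.shape, order=1,
                                cval=np.nan)
    # In the overlap, keep the first scan's pixel values, as in the
    # preferred embodiment; elsewhere take whichever scan has data.
    return np.where(np.isnan(canvas1), aligned2, canvas1)
```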

Claims

WHAT IS CLAIMED IS:
1. A method of combining portions of a digital image, comprising:
forming a first digital image of an image medium, said first digital image being a first raster image having a first plurality of pixels;
generating a first edge image by performing a high-pass spatial filtering of said first digital image;
determining a first-image reference location of a reference marker of said image medium relative to said first edge image, said reference marker being substantially fixed relative to said image medium;
forming a second digital image of said image medium, said second digital image being a second raster image having a second plurality of pixels;
generating a second edge image by performing a high-pass spatial filtering of said second digital image;
determining a second-image reference location of said reference marker of said image medium relative to said second edge image;
determining a transformation rule derived from transforming said first-image reference location of said reference marker to said second-image reference location of said reference marker;
transforming said first plurality of pixels relative to said second plurality of pixels based on said transformation rule; and
forming a combined image from portions of said first digital image and said second digital image.
2. A method of combining portions of a digital image according to claim 1, wherein said first digital image includes a first portion of an image recorded in said image medium, and said second digital image includes the remaining portion of said image recorded in said image medium.
3. A method of combining portions of a digital image according to claim 2, wherein said image medium is photographic film and said reference marker is a sprocket hole defined by said photographic film.
4. A method of combining portions of a digital image according to claim 3, wherein said first and second digital images of said image medium each have widths that are approximately equal to each other and are sufficiently wide to include at least a portion of said sprocket hole in addition to the entire width of said digital image.
5. A method of combining portions of a digital image according to claim 2, wherein said first digital image of said image medium has a length that is at least substantially equal to a length of said image recorded in said image medium and is less than twice said length of said image recorded in said image medium, and said second digital image of said image medium has a length that is at least substantially equal to a length of said image recorded in said image medium and is less than twice said length of said image recorded in said image medium.
6. A method of combining portions of a digital image according to claim 3, wherein said first-image reference location of said reference marker is a first fiducial point proximate to an image of a corner of said sprocket hole in said first edge image, and said second-image reference location of said reference marker is a second fiducial point proximate to an image of said corner of said sprocket hole in said second edge image.
7. A method of combining portions of a digital image according to claim 6, wherein said first fiducial point is determined to be the point of intersection of a vertical edge line of said sprocket hole and a horizontal edge line of said sprocket hole in said first edge image, and said second fiducial point is determined to be the point of intersection of a vertical edge line of said sprocket hole and a horizontal edge line of said sprocket hole in said second edge image.
8. A method of combining portions of a digital image according to claim 7, wherein said vertical edge line of said sprocket hole in said first edge image is determined by assigning a weighted average column number thereto and requiring that said vertical edge line be substantially parallel to columns of said first raster image, said horizontal edge line of said sprocket hole in said first edge image is determined by assigning a weighted average row number thereto and requiring that said horizontal edge line be substantially parallel to rows of said first raster image, said vertical edge line of said sprocket hole in said second edge image is determined by assigning a weighted average column number thereto and requiring that said vertical edge line be substantially parallel to columns of said second raster image, and said horizontal edge line of said sprocket hole in said second edge image is determined by assigning a weighted average row number thereto and requiring that said horizontal edge line be substantially parallel to rows of said second raster image.
9. A method of combining portions of a digital image according to claim 1, wherein said transformation rule is a translation of said first plurality of pixels relative to said second plurality of pixels.
10. A method of combining portions of a digital image according to claim 8, wherein said transformation rule is a translation of said first plurality of pixels relative to said second plurality of pixels, said translation adjusting a pixel address for each of said plurality of pixels by a row difference and a column difference substantially equal to a derived row difference and a derived column difference obtained by requiring that said first and second fiducial points substantially coincide with each other.
11. A method of combining portions of a digital image according to claim 1, wherein said forming a combined image assigns pixel values to overlapping regions of the transformed images such that all overlapping pixels correspond to pixels of the first raster image, or said forming a combined image assigns pixel values to overlapping regions of the transformed images such that all overlapping pixels correspond to pixels of the second raster image.
12. A method of combining portions of a digital image, comprising:
forming a first digital image of an image medium, wherein said first digital image is a first raster image having a first plurality of pixels;
generating a first edge image by performing a high-pass spatial filtering of said first digital image;
determining a first-image reference location of a first reference marker of said image medium relative to said first edge image, said first reference marker being substantially fixed proximate to a first longitudinal edge of said image medium;
determining a first-image reference location of a second reference marker of said image medium relative to said first edge image, said second reference marker being substantially fixed proximate to a second longitudinal edge of said image medium opposing said first longitudinal edge;
forming a second digital image of said image medium, wherein said second digital image is a second raster image having a second plurality of pixels;
generating a second edge image by performing a high-pass spatial filtering of said second digital image;
determining a second-image reference location of said first reference marker of said image medium relative to said second edge image;
determining a second-image reference location of said second reference marker of said image medium relative to said second edge image;
determining a transformation rule derived from transforming said first-image reference location of said first reference marker to said second-image reference location of said first reference marker and transforming said first-image reference location of said second reference marker to said second-image reference location of said second reference marker;
transforming said first plurality of pixels relative to said second plurality of pixels based on said transformation rule; and
forming a combined image from portions of said first digital image and said second digital image.
13. A method of combining portions of a digital image according to claim 12, wherein said first digital image includes a first portion of an image recorded in said image medium, and said second digital image includes the remaining portion of said image recorded in said image medium.
14. A method of combining portions of a digital image according to claim 13, wherein said image medium is photographic film, and said first and second reference markers are first and second sprocket holes, respectively, defined by said photographic film.
15. A method of combining portions of a digital image according to claim 14, wherein said first and second digital images of said image medium each have widths that are approximately equal to each other and are sufficiently wide to include at least a portion of said first and second sprocket holes in addition to an entire width of said digital image.
16. A method of combining portions of a digital image according to claim 13, wherein said first digital image of said image medium has a length that is at least substantially equal to a length of said image recorded in said image medium and is less than twice said length of said image recorded in said image medium, and said second digital image of said image medium has a length that is at least substantially equal to a length of said image recorded in said image medium and is less than twice said length of said image recorded in said image medium.
17. A method of combining portions of a digital image according to claim 14, wherein said first-image reference location of said first reference marker is a first fiducial point proximate to an image of a corner of said first sprocket hole in said first edge image, said second-image reference location of said first reference marker is a second fiducial point proximate to an image of said corner of said first sprocket hole in said second edge image, said first-image reference location of said second reference marker is a third fiducial point proximate to an image of a corner of said second sprocket hole in said first edge image, and said second-image reference location of said second reference marker is a fourth fiducial point proximate to an image of said corner of said second sprocket hole in said second edge image.
18. A method of combining portions of a digital image according to claim 17, wherein said first fiducial point is determined to be the point of intersection of a vertical edge line and a horizontal edge line of said first sprocket hole in said first edge image, said second fiducial point is determined to be the point of intersection of a vertical edge line and a horizontal edge line of said first sprocket hole in said second edge image; said third fiducial point is determined to be the point of intersection of a vertical edge line and a horizontal edge line of said second sprocket hole in said first edge image, said fourth fiducial point is determined to be the point of intersection of a vertical edge line and a horizontal edge line of said second sprocket hole in said second edge image.
19. A method of combining portions of a digital image according to claim 18, wherein said vertical edge line of said first sprocket hole in said first edge image is determined by assigning a weighted average first-image column number thereto and requiring that said vertical edge line be substantially parallel to columns of said first raster image, said horizontal edge line of said first sprocket hole in said first edge image is determined by assigning a weighted average first-image row number thereto and requiring that said horizontal edge line be substantially parallel to rows of said first raster image, said vertical edge line of said first sprocket hole in said second edge image is determined by assigning a weighted average second-image column number thereto and requiring that said vertical edge line be substantially parallel to columns of said second raster image, said horizontal edge line of said first sprocket hole in said second edge image is determined by assigning a weighted average second-image row number thereto and requiring that said horizontal edge line be substantially parallel to rows of said second raster image, said vertical edge line of said second sprocket hole in said first edge image is determined by assigning a weighted average first-image column number thereto and requiring that said vertical edge line be substantially parallel to columns of said first raster image, said horizontal edge line of said second sprocket hole in said first edge image is determined by assigning a weighted average first-image row number thereto and requiring that said horizontal edge line be substantially parallel to rows of said first raster image, said vertical edge line of said second sprocket hole in said second edge image is determined by assigning a weighted average second-image column number thereto and requiring that said vertical edge line be substantially parallel to columns of said second raster image, said horizontal edge line of said second sprocket hole in said second edge image is determined by assigning a weighted average second-image row number thereto and requiring that said horizontal edge line be substantially parallel to rows of said second raster image.
20. A method of combining portions of a digital image according to claim 12, wherein said transformation rule is a translation of said first plurality of pixels relative to said second plurality of pixels.
21. A method of combining portions of a digital image according to claim 19, wherein said transformation rule is a translation of said first plurality of pixels relative to said second plurality of pixels, said translation adjusting a pixel address for each of said first plurality of pixels by a row difference and a column difference that are substantially equal to respective row and column differences obtained by requiring that said first and second fiducial points substantially coincide with each other.
22. A method of combining portions of a digital image according to claim 19, wherein said transformation rule is a translation of said first plurality of pixels relative to said second plurality of pixels, said translation adjusting a pixel address for each of said first plurality of pixels by a row difference and a column difference that are substantially equal to respective row and column differences obtained by requiring that said third and fourth fiducial points substantially coincide with each other.
23. A method of combining portions of a digital image according to claim 19, wherein said transformation rule is a translation of said first plurality of pixels relative to said second plurality of pixels, said translation adjusting a pixel address for each of said first plurality of pixels by a row difference and a column difference that are substantially equal to respective row and column differences obtained by requiring that said first and second fiducial points substantially coincide with each other, and by requiring that said third and fourth fiducial points substantially coincide with each other.
24. A method of combining portions of a digital image according to claim 12, wherein said transformation rule is a rotation of said first plurality of pixels relative to said second plurality of pixels.
25. A method of combining portions of a digital image according to claim 19, wherein said transformation rule is a rotation rule determined by requiring that a first line defined by said first and third fiducial points in said first edge image coincide with a second line defined by said second and fourth fiducial points in said second edge image, each pixel of said first plurality of pixels being rotated about a pivot point by an angle substantially equal to the angle required to make said first and second lines coincide, said angle being measured for rotations about substantially said pivot point.
26. A method of combining portions of a digital image according to claim 12, wherein said transformation rule is a translation and a rotation of said first plurality of pixels relative to said second plurality of pixels.
27. A method of combining portions of a digital image according to claim 19, wherein said transformation rule is a translation and a rotation of said first plurality of pixels relative to said second plurality of pixels, said translation adjusting a pixel address for each of said first plurality of pixels by a row difference and a column difference that are substantially equal to respective row and column differences obtained by requiring that said first and second fiducial points substantially coincide with each other, and said rotation is determined by requiring that a first line defined by said first and third fiducial points in said first edge image coincide with a second line defined by said second and fourth fiducial points in said second edge image, each pixel of said first plurality of pixels being rotated about a pivot point by an angle substantially equal to the angle required to make said first and second lines coincide, said angle being measured for rotations about the substantially coinciding first and second fiducial points.
28. A method of combining portions of a digital image according to claim 12, wherein said forming a combined image assigns pixel values to overlapping regions of the transformed images such that all overlapping pixels correspond to pixels of said first raster image, or said forming a combined image assigns pixel values to overlapping regions of the transformed images such that all overlapping pixels correspond to pixels of said second raster image.
29. A method of combining portions of a digital image, comprising:
forming a first digital image of an image medium, wherein said first digital image is a first raster image having a first plurality of pixels;
generating a first edge image by performing a high-pass spatial filtering of said first digital image;
determining a first-image reference location of a first reference marker of said image medium relative to said first edge image, said first reference marker being substantially fixed proximate to a longitudinal edge of said image medium;
determining a first-image reference location of a second reference marker of said image medium relative to said first edge image, said second reference marker being substantially fixed proximate to a longitudinal edge of said image medium;
determining a first-image reference location of a third reference marker of said image medium relative to said first edge image, said third reference marker being substantially fixed proximate to a longitudinal edge of said image medium;
determining a first-image reference location of a fourth reference marker of said image medium relative to said first edge image, said fourth reference marker being substantially fixed proximate to a longitudinal edge of said image medium;
forming a second digital image of said image medium, wherein said second digital image is a second raster image having a second plurality of pixels;
generating a second edge image by performing a high-pass spatial filtering of said second digital image;
determining a second-image reference location of said first reference marker of said image medium relative to said second edge image;
determining a second-image reference location of said second reference marker of said image medium relative to said second edge image;
determining a second-image reference location of said third reference marker of said image medium relative to said second edge image;
determining a second-image reference location of said fourth reference marker of said image medium relative to said second edge image;
determining a transformation rule derived from transforming at least one of said first-image reference locations of said first, second, third and fourth reference markers to said second-image reference locations of said first, second, third and fourth reference markers, respectively;
transforming said first plurality of pixels relative to said second plurality of pixels based on said transformation rule; and
forming a combined image from portions of said first digital image and said second digital image.
30. A method of combining portions of a digital image according to claim 29, wherein said image medium is photographic film that defines a plurality of sprocket holes, said first-image reference location of said first reference marker is a first fiducial point proximate to an image of a first corner of a first sprocket hole in said first edge image, said second-image reference location of said first reference marker is a second fiducial point proximate to an image of said first corner of said first sprocket hole in said second edge image, said first-image reference location of said second reference marker is a third fiducial point proximate to an image of a second corner of said first sprocket hole in said first edge image, said second-image reference location of said second reference marker is a fourth fiducial point proximate to an image of said second corner of said first sprocket hole in said second edge image, said first-image reference location of said third reference marker is a fifth fiducial point proximate to an image of a first corner of a second sprocket hole in said first edge image, said second-image reference location of said third reference marker is a sixth fiducial point proximate to an image of said first corner of said second sprocket hole in said second edge image, said first-image reference location of said fourth reference marker is a seventh fiducial point proximate to an image of a second corner of said second sprocket hole in said first edge image, said second-image reference location of said fourth reference marker is an eighth fiducial point proximate to an image of said second corner of said second sprocket hole in said second edge image.
31. A method of combining portions of a digital image according to claim 30, wherein said first fiducial point is determined to be the point of intersection of a first vertical edge line and a horizontal edge line of said first sprocket hole in said first edge image, said second fiducial point is determined to be the point of intersection of a first vertical edge line and a horizontal edge line of said first sprocket hole in said second edge image, said third fiducial point is determined to be the point of intersection of a second vertical edge line and said horizontal edge line of said first sprocket hole in said first edge image, said fourth fiducial point is determined to be the point of intersection of a second vertical edge line and said horizontal edge line of said first sprocket hole in said second edge image, said fifth fiducial point is determined to be the point of intersection of a first vertical edge line and a horizontal edge line of said second sprocket hole in said first edge image, said sixth fiducial point is determined to be the point of intersection of a first vertical edge line and a horizontal edge line of said second sprocket hole in said second edge image, said seventh fiducial point is determined to be the point of intersection of a second vertical edge line and said horizontal edge line of said second sprocket hole in said first edge image, and said eighth fiducial point is determined to be the point of intersection of a second vertical edge line and said horizontal edge line of said second sprocket hole in said second edge image.
32. A method of combining portions of a digital image according to claim 31, wherein said transformation rule is a translational transformation rule derived from transforming an average position of said first and third fiducial points to an average position of said second and fourth fiducial points.
33. A method of combining portions of a digital image according to claim 31, wherein said first sprocket hole is proximate to one edge of said photographic film and said second sprocket hole is proximate to an opposing edge of said photographic film, said transformation rule being a transformation rule for a translation followed by a rotation, said translation being derived from transforming an average position of said first and third fiducial points to an average position of said second and fourth fiducial points, and said rotation being derived from rotating, after said translation, a line defined by said average position of said second and fourth fiducial points and an average position of said fifth and seventh fiducial points about said average position of said second and fourth fiducial points as a pivot point to substantially coincide with a line defined by said average position of said second and fourth fiducial points and an average position of said sixth and eighth fiducial points.
34. A digital image combining device, comprising:
a digital image scanner;
a digital image data processor in communication with said digital image scanner;
a data storage device in communication with said digital image data processor;
a high-pass spatial filter in communication with said data storage device;
a reference mark detector in communication with said high-pass spatial filter and said data storage device; and
a partial image combiner in communication with said reference mark detector and said data storage device,
wherein said digital image scanner scans at least two overlapping regions of an image medium, one of said at least two overlapping regions contains a portion of an image, and the second one of said at least two overlapping regions contains a remaining portion of said image, and said partial image combiner substantially aligns and joins said portion of said image in said one of said at least two overlapping regions with said remaining portion of said image in said second one of said at least two overlapping regions.
35. A digital image combining device according to claim 34, wherein said digital image scanner is a photographic film scanner and said image medium is photographic film.
36. A method of combining portions of a digital image according to claim 3, wherein said first and second digital images are formed at a stage of partial film development.
37. A method of combining portions of a digital image according to claim 36, wherein each of said first and second digital images comprises a plurality of image channels, each of said plurality of image channels having a spectral distribution that is different from the respective remaining plurality of image channels.
38. A method of combining portions of a digital image according to claim 37, wherein said plurality of image channels includes a front image channel of light reflected from a front surface of said photographic film, a back image channel of light reflected from a back surface of said photographic film opposing said front surface, and a through image channel of light transmitted through said photographic film.
39. A method of combining portions of an image during digital film developing, comprising:
forming a first digital image of a first region of a partially developed photographic film, said first digital image including a first portion of an image recorded in said photographic film;
forming a second digital image of a second region of said partially developed photographic film, said second region abutting or partially overlapping said first region of said partially developed photographic film, wherein said second digital image includes a second portion of said image recorded in said photographic film; and
combining said first and second portions of said image recorded in said partially developed photographic film into an aligned and combined image,
wherein forming said first digital image of said first region of said partially developed photographic film and forming said second digital image of said second region of said partially developed photographic film are performed at least partially concurrently with additional developing of said partially developed photographic film.
40. A method of combining portions of a digital image according to claim 39, wherein said combining said first and second portions of said image recorded in said partially developed photographic film is performed at least partially concurrently with additional developing of said partially developed photographic film.
EP99956255A 1998-11-12 1999-11-12 A method and device for combining partial film scan images Withdrawn EP1135925A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US191459 1994-01-31
US19145998A 1998-11-12 1998-11-12
PCT/IB1999/001828 WO2000030340A1 (en) 1998-11-12 1999-11-12 A method and device for combining partial film scan images

Publications (1)

Publication Number Publication Date
EP1135925A1 (en) 2001-09-26

Family

ID=22705584

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99956255A Withdrawn EP1135925A1 (en) 1998-11-12 1999-11-12 A method and device for combining partial film scan images

Country Status (4)

Country Link
EP (1) EP1135925A1 (en)
AU (1) AU1289700A (en)
TW (1) TW496077B (en)
WO (1) WO2000030340A1 (en)

Also Published As

Publication number Publication date
AU1289700A (en) 2000-06-05
TW496077B (en) 2002-07-21
WO2000030340A1 (en) 2000-05-25

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010525

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 20020307

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20020718