US8280170B2 - Intermediate image generating apparatus and method of controlling operation of same - Google Patents

Intermediate image generating apparatus and method of controlling operation of same

Info

Publication number
US8280170B2
US12/764,847 · US76484710A · US8280170B2
Authority
US
United States
Prior art keywords
image
decided
images
points
moving subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/764,847
Other versions
US20100278433A1 (en)
Inventor
Makoto Ooishi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Assigned to FUJIFILM CORPORATION. Assignment of assignors interest (see document for details). Assignor: Ooishi, Makoto
Publication of US20100278433A1 publication Critical patent/US20100278433A1/en
Application granted granted Critical
Publication of US8280170B2 publication Critical patent/US8280170B2/en
Legal status: Active; expiration date adjusted

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • This invention relates to an apparatus for generating an intermediate image and to a method of controlling the operation of this apparatus.
  • an object of the present invention is to generate an intermediate image with regard to the image of a moving subject.
  • an intermediate image generating apparatus comprising: a first feature point deciding device (first feature point deciding means) for deciding a plurality of feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting; a first corresponding point deciding device (first corresponding point deciding means) for deciding corresponding points, which correspond to the feature points decided by the first feature point deciding device, from within the second image; a moving subject image detecting device (moving subject image detecting means) for detecting moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image; a transformation target area setting device (transformation target area setting means) for setting transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of feature points, which are present in the moving subject image detected by the moving subject image detecting device, from among the feature points decided by the first feature point deciding device, and the positions of corresponding points, which are present in the moving subject image detected by the moving subject image detecting device, from among the corresponding points decided by the first corresponding point deciding device; together with the second feature point deciding, second corresponding point deciding, intermediate moving subject image generating and intermediate image generating devices set out in full in the Summary of the Invention below.
  • a method of controlling an apparatus for generating an intermediate image comprises the steps of: deciding a plurality of feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting; deciding corresponding points, which correspond to the feature points that have been decided, from within the second image; detecting moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image; setting transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of feature points, which are present in the moving subject image that has been detected, from among the feature points that have been decided, and the positions of corresponding points, which are present in the moving subject image that has been detected, from among the corresponding points that have been decided; deciding a plurality of feature points of an image within the set transformation target area of the first image; deciding corresponding points, which correspond to the feature points that have been decided, in an image within the set transformation target area of the second image; generating an intermediate moving subject image by transforming the image within the transformation target area of the first image or second image in such a manner that this image is positioned at intermediate points present at positions that are intermediate the feature points that have been decided and the corresponding points that have been decided; and generating an image that is intermediate the first and second images using the intermediate moving subject image that has been generated.
  • the present invention also provides a recording medium storing a computer-readable program suited to implementation of the above-described method of controlling the operation of an intermediate image generating apparatus.
  • the present invention may provide the program per se.
  • first and second images are obtained by continuous shooting.
  • a plurality of feature points are decided from within the first image.
  • the feature points are points on the contours of a plurality of subject images contained in an image, points at which the shape of a subject image changes, etc.
  • Corresponding points that correspond to the decided feature points are decided from within the second image.
  • Moving subject images which are subject images contained in the first and second images and represent an object moving from capture of the first image to capture of the second image, are detected in respective ones of the first and second images.
  • a transformation target area that encloses the positions of feature points present in the detected moving subject image and the positions of corresponding points present in the detected moving subject image is decided in each of the first and second images.
  • Feature points of the image within the transformation target area in the first image are decided.
  • the corresponding points that correspond to the feature points of the image within the transformation target area of the first image are decided with regard to the image within the transformation target area in the second image.
  • the image within the transformation target area of the first image or second image is transformed so as to be positioned at intermediate points present at positions that are intermediate the feature points of the image within the transformation target area of the first image and the corresponding points of the image within the transformation target area of the second image.
  • An intermediate moving subject image is generated so as to be positioned at intermediate points present at positions that are intermediate the detected feature points of the moving subject image of the first image and corresponding points of the moving subject image of the second image.
  • the intermediate moving subject image may be generated by transforming the moving subject image contained in the first image or transforming the moving subject image contained in the second image.
  • the apparatus further comprises an external-area image portion transforming device for transforming the second image in such a manner that the corresponding points decided by the first corresponding point deciding device will coincide with the feature points decided by the first feature point deciding device with regard to an external-area image portion that is external to the transformation target area.
  • the intermediate image generating device would generate the image intermediate the first and second images using the intermediate moving subject image generated by the intermediate moving subject image generating device, and with regard to the image portion exterior to the transformation target area, the intermediate image generating device would generate the image intermediate the first and second images using the first image or the image portion, external to the transformation target area, generated by the external-area image portion transforming device.
  • the moving subject image detecting device detects the moving subject images based upon motion that is different from motion of the total of the plurality of feature points decided by the first feature point deciding device and corresponding points decided by the first corresponding point deciding device, with the proviso that the number of corresponding points that undergo this different motion is greater than a prescribed number.
  • FIG. 1 is a perspective view of a digital movie/still camera as seen from the front;
  • FIG. 2 is a perspective view of the digital movie/still camera as seen from the back;
  • FIG. 3 is a block diagram illustrating the electrical configuration of the digital movie/still camera
  • FIG. 4 illustrates the manner in which an intermediate image is generated
  • FIGS. 7 to 11 are examples of subject images
  • FIG. 12 illustrates camera shake and shift of a moving object
  • FIGS. 13 to 16 are examples of subject images
  • FIGS. 17 to 20 illustrate images within transformation target areas
  • FIG. 21 illustrates an intermediate image
  • FIG. 1, which illustrates a preferred embodiment of the present invention, is a perspective view of a digital movie/still camera 1 as seen from the front.
  • FIG. 2 is a perspective view of the digital movie/still camera 1 as seen from the back.
  • FIG. 3 is a block diagram illustrating the electrical configuration of the digital movie/still camera 1 .
  • the overall operation of the digital movie/still camera 1 is controlled by a CPU 22 .
  • the digital movie/still camera 1 includes a memory 43 in which an operation program and other data, described later, have been stored.
  • the operation program may be written to a memory card 48 or the like, read out of the memory card 48 and installed in the digital movie/still camera 1 , or the operation program may be pre-installed in the camera.
  • the digital movie/still camera 1 includes a shutter-release button 2 and a mode switch 13 .
  • a signal indicating pressing of the shutter-release button 2 is input to the CPU 22 .
  • the mode switch 13, which selects the movie shooting mode, still shooting mode or playback mode, is capable of turning on a switch S1, S2 or S3 selectively.
  • the movie shooting mode is set by turning on the switch S1, the still shooting mode by turning on the switch S2 and the playback mode by turning on the switch S3.
  • the digital movie/still camera 1 is capable of flash photography.
  • the electronic flash 5 is provided to achieve this, as described above.
  • a light emission from the electronic flash 5 is turned on and off under the control of a flash control circuit 16 .
  • An iris 25 and a zoom lens 7 are provided in front of a solid-state electronic image sensing device 27 such as a CCD.
  • the iris 25 has its f-stop decided by a motor 33 controlled by a motor driver 30 .
  • the zoom lens 7 has its zoom position decided by a motor 34 controlled by a motor driver 31 .
  • if the movie or still shooting mode is set by the mode switch 13, light representing the image of a subject that has passed through the iris 25 is formed into an image on the photoreceptor surface of the image sensing device 27 by the zoom lens 7.
  • the image sensing device 27 is controlled by a timing generator 32 and the image of the subject is captured at a fixed period (a period of 1/30 of a second, by way of example).
  • a video signal representing the image of the subject is output from the image sensing device 27 at a fixed period and is input to a CDS (correlated double sampling) circuit 28 .
  • the video signal that has been subjected to correlated double sampling in the CDS circuit 28 is converted to digital image data in an analog/digital converting circuit 29 .
  • the digital image data is input to an image signal processing circuit 36 by an image input controller 35 and is subjected to prescribed signal processing such as a gamma correction.
  • the digital image data is written to a VRAM (video random-access memory) 41 , after which this data is read out and applied to an image display unit 47 , whereby the image data is displayed as a moving image on the display screen of the image display unit 47 .
  • the digital movie/still camera 1 is provided with a feature point/corresponding point detecting circuit 38 , a motion analyzing circuit 39 and an image transforming/synthesizing circuit 45 .
  • the digital image data obtained by image capture is input to an AF (autofocus) detecting circuit 41 .
  • the zoom position of the zoom lens 7 is controlled in the AF detecting circuit 41 so as to bring the image into focus.
  • the digital image data obtained by image capture is input also to an AE (automatic exposure)/AWB (automatic white balance) detecting circuit 42 .
  • the AE/AWB detecting circuit 42 decides the aperture of the iris 25 in such a manner that detected brightness will become an appropriate brightness.
  • a white-balance adjustment is also carried out in the AE/AWB detecting circuit 42 .
  • if the shutter-release button 2 is pressed in the still shooting mode, one frame of image data obtained by image capture at that moment is input to a compression processing circuit 37.
  • the image data that has been subjected to prescribed compression processing in the compression processing circuit 37 is input to a video encoder 40 and is encoded thereby.
  • the encoded image data is recorded on the memory card 48 under the control of a memory controller 46 .
  • in the movie shooting mode, image data that has been obtained by image capture during depression of the shutter-release button 2 is recorded on the memory card 48 as moving image data.
  • if the playback mode is set by the mode switch 13, the image data that has been recorded on the memory card 48 is read.
  • the image represented by the read image data is displayed on the display screen of the image display unit 47 .
  • FIG. 4 illustrates the manner in which an intermediate image is generated.
  • an image 90 that is intermediate a first image 70 and a second image 80 is generated from the first image 70 and the second image 80 , as mentioned earlier.
  • although the first image 70 and the second image 80 are images the shooting interval of which is a fixed length of time, in the manner of two successive image frames in a moving image, they need not necessarily be two successive image frames.
  • if the intermediate image 90 has been generated in a case where the first image 70 contains a subject image 73 that moves in the manner of an automobile and the second image 80 also contains an identical moving subject image 83, then a moving subject image 93 is positioned at the position where it should be present in the intermediate image 90.
  • the moving subject image will be seen to move without appearing unnatural if the first image 70 , intermediate image 90 and second image 80 are viewed successively.
  • Feature points 78 are decided in the moving subject image 73 and corresponding points 88 are decided in the moving subject image 83 in a manner described later in greater detail.
  • Corresponding points 98, which define the moving subject image 93 of the intermediate image 90, are decided from the feature points 78 and corresponding points 88 that have been decided.
  • the moving subject image 93 is positioned at the positions defined by the corresponding points 98 that have been decided.
  • a shake correction is carried out in such a manner that the portion of the second image 80 that does not include the moving subject image 83 will coincide with the portion of the first image 70 that does not include the moving subject image 73 . Since the portions of the first image 70 and second image 80 in which the subject is not moving will coincide, blur due to image shake can be prevented. With regard also to the portion of the intermediate image 90 that does not include the moving subject image 93 , it goes without saying that this portion is generated in such a manner that it will coincide with the portion of the first image 70 or second image 80 that is devoid of motion.
  • FIGS. 5 and 6 are flowcharts illustrating processing executed by the digital movie/still camera 1 .
  • First and second images are obtained from the memory card 48 (step 51 in FIG. 5).
  • the image 73 of an automobile is displayed toward the front of the first image 70 and images 71 and 72 of trees are displayed in back.
  • the second image 80 has been captured after the first image 70 . Since the first image 70 and second image 80 have not been captured simultaneously, a shift ascribable to camera shake occurs between the first image 70 and second image 80 and the second image 80 is tilted in comparison with the first image 70 .
  • the image 83 of the automobile is displayed toward the front of the second image 80 as well and images 81 and 82 of trees are displayed in back.
  • the images 71 and 72 of the trees in the first image 70 are the same as the images 81 and 82 , respectively, of the trees in the second image 80 , and the image 73 of the automobile in the first image 70 and the image 83 of the automobile in the second image 80 are images of the same automobile.
  • a reference image and a corresponding image are decided (step 52 in FIG. 5 ).
  • the first image 70 is adopted as the reference image and the second image 80 is adopted as the corresponding image.
  • the reverse may just as well be adopted.
  • the feature points of the reference image are decided in a manner described next (step 53 in FIG. 5). Further, the corresponding points of the corresponding image are decided (step 54 in FIG. 5).
  • the reference image 70 contains the images 71 and 72 of the trees and the image 73 of the automobile.
  • a feature point defines one point on the contour of the subject image contained in the reference image 70 and indicates the characteristic shape of the subject image.
  • a plurality of feature points 74 are decided with regard to each of the images 71 and 72 of the trees and image 73 of the automobile.
  • FIG. 10 illustrates the corresponding image (second image) 80 .
  • Corresponding points indicate the points of pixels that correspond to the feature points decided in the reference image 70 .
  • Corresponding points 84 corresponding to the feature points 74 are decided with regard to each of the images 81 and 82 of the trees and image 83 of the automobile contained in the corresponding image 80 .
  • overall-image shake between the reference image 70 and corresponding image 80 is calculated (step 55 in FIG. 5).
  • the calculation of shake of the overall image (calculation of amount and direction of shake) will be described in detail later.
  • the corresponding image 80 is corrected in accordance with the calculated image shake.
  • the shake correction may be performed when the reference image 70 and corresponding image 80 are superimposed.
  • FIG. 11 is an example of a corresponding image 80A that has undergone a correction for shake. The shift between this image and the reference image 70 has been corrected by the shake correction.
  • FIG. 12 illustrates an optical flow graph.
  • This optical flow graph is such that the amount and direction of shift of the corresponding points, which correspond to the feature points, from these feature points is indicated by marks placed at every feature point (corresponding point).
  • Three groups of marks are indicated in this optical flow graph. Assume that a first group G1 has the largest number of marks, a second group G2 has the next largest number of marks and a third group G3 has a very small number of marks. In this embodiment, it is determined that what is represented by the first group G1, which has the largest number of marks, is the overall-image shift between the reference image 70 and corresponding image 80.
  • what is represented by the second group G2, in which the number of marks is greater than a prescribed number, is determined to be the amount and direction of movement of the subject image of the moving object that has moved between capture of the reference image 70 and capture of the corresponding image 80.
  • what is represented by the third group G3, in which the number of marks is less than a prescribed number, is treated as garbage and is determined not to represent a moving subject image.
  • if overall-image shake exists (“YES” at step 56 in FIG. 5), it is determined whether a subject image (moving subject image) representing an object that has moved from capture of the reference image 70 to capture of the corresponding image 80 exists in the corresponding image 80 (step 57 in FIG. 6).
  • the image 73 of the automobile is contained in the reference image 70
  • the image 83 of the automobile is contained also in the corresponding image 80
  • the automobile is moving from capture of the reference image 70 to capture of the corresponding image 80 . Accordingly, a determination is made that a moving subject image exists (step 57 in FIG. 6 ).
  • a transformation target area that encloses both of the moving-portion areas 76 and 86 is set (step 59 in FIG. 6).
  • FIG. 15 illustrates a transformation target area 77 that has been set in the reference image 70 . Further, in order to make it easy to understand that the transformation target area 77 encloses the moving-portion area 76 in the reference image 70 and the moving-portion area 86 in the corresponding image 80 , the moving-portion area 86 in the corresponding image 80 is also shown in FIG. 15 in addition to the moving-portion area 76 in the reference image 70 .
  • FIG. 16 illustrates a transformation target area 87 that has been set in the corresponding image 80 . Further, in order to make it easy to understand that the transformation target area 87 encloses the moving-portion area 76 in the reference image 70 and the moving-portion area 86 in the corresponding image 80 , the moving-portion area 76 in the reference image 70 is also shown in FIG. 16 in addition to the moving-portion area 86 in the corresponding image 80 .
  • feature points of the image within the transformation target area 77 of the reference image 70 are decided anew (step 60 in FIG. 6). Further, corresponding points of the image within the transformation target area 87 of the corresponding image 80, which points correspond to the feature points within the transformation target area 77 of the reference image 70, are decided anew (step 61 in FIG. 6).
  • Corresponding points 88, which correspond to the feature points 78 that have been decided with regard to the image 79 within the transformation target area 77, are decided with regard to the image 89 within the transformation target area 87.
  • with regard to the image 79 within the transformation target area 77 that has undergone triangulation, image transformation processing (triangular transformation processing) is executed, in accordance with the amount of movement of the moving subject image, for each triangle obtained by triangulation, in such a manner that the feature points 78 and the decided intermediate points will coincide (step 64 in FIG. 6).
  • FIG. 20 illustrates the image (intermediate moving subject image) 93 of the automobile constituting the intermediate image obtained by the image transformation processing.
  • by subjecting the image 79 within the transformation target area 77 that has undergone triangulation to image transformation processing, in accordance with the amount of movement of the moving subject image, for each triangle obtained by triangulation, in such a manner that the feature points 78 and the decided intermediate points coincide, as described above, the image 93 of the automobile constituting the intermediate image is obtained within an area 97 corresponding to the transformation target area 77.
  • the intermediate image 90 is obtained, as illustrated in FIG. 21 (step 65 in FIG. 6).
  • the intermediate image 90 is such that the automobile is positioned between its position in the reference image 70 and its position in the corresponding image 80.
  • the intermediate image 90 is such that the background is the same as in the reference image 70. If the reference image, intermediate image and corresponding image are viewed in the order mentioned, the motion of the moving vehicle will be smooth and shake will be diminished with regard to the background, etc., which is not moving.
  • if a moving subject image does not exist (“NO” at step 57 in FIG. 6), or if, even though it has been determined that a moving subject image exists, a large number of corresponding points do not exist in the moving subject image (i.e., if it has been determined that a moving subject image exists owing to the effects of garbage) (“NO” at step 58 in FIG. 6), then an intermediate image is generated from the reference image 70 without executing the above-described image transformation processing (step 65 in FIG. 6). Naturally, it may be so arranged that the intermediate image is generated from the corresponding image once it has undergone an overall shake correction.
  • in the embodiment described above, the moving subject image 93 of the intermediate image 90 is generated by transforming the image 79 within the transformation target area 77 of the reference image 70.
  • alternatively, it may be so arranged that the moving subject image 93 of the intermediate image 90 is generated by transforming the image 89 within the transformation target area 87 of the corresponding image 80.
  • an image that is intermediate two image frames is generated from the two image frames.
  • data representing a moving image having smooth motion can be generated.
  • the shake can be calculated utilizing an affine transformation or a projective transformation; standard forms of the two relations are sketched at the end of this list.
  • Equation (1) illustrates the affine transformation relation.
  • Equation (2) illustrates the projective transformation relation.
  • the position of a corresponding point is indicated by (X,Y) in Equations (1), (2), and the position of a feature point by (x,y).
  • Equation (13) holds from the Cauchy-Schwarz inequality, and the relation of Equation (14) holds.
  • Equation (16) below will hold.
  • Equation (17) is obtained, where the relation of Equation (20) holds.
  • since (x0, y0) is only the amount of shift, it moves to (s, t). Accordingly, the inverse matrix of Equation (19) becomes Equation (22) below, where the relation of Equation (23) holds.
  • the coefficients a, b, c, d, s, t are as indicated by Equations (24) to (29) below.
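  • as a sketch only, assuming the conventional parameterization with (X, Y) a corresponding point and (x, y) a feature point as stated above (the patent's own equations are not reproduced in this extract), the affine relation (1) and projective relation (2) take the standard forms:

```latex
% Assumed standard forms, not copied from the patent; coefficient names
% a, b, c, d match the text above, with s and t the shift components.
\begin{align}
X &= a\,x + b\,y + s, & Y &= c\,x + d\,y + t \tag{1} \\
X &= \frac{a\,x + b\,y + s}{p\,x + q\,y + 1}, &
Y &= \frac{c\,x + d\,y + t}{p\,x + q\,y + 1} \tag{2}
\end{align}
```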

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

An intermediate image is generated between a reference image and a corresponding image. To achieve this, moving subject images are detected in respective ones of a first image and second image captured at a fixed interval. A moving subject image of an intermediate image is positioned at a position that is intermediate the moving subject images. The intermediate image is generated utilizing the reference image in the portion of the image other than that occupied by the moving subject image. A correction is applied in such a manner that the second image will coincide with the first image with the exception of the portion of the second image occupied by the moving subject image.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to an apparatus for generating an intermediate image and to a method of controlling the operation of this apparatus.
2. Description of the Related Art
When a moving image is captured, the subject is captured at a fixed period. Although a moving image featuring fine motion is obtained if the capture period is shortened, there are instances where it is difficult to shorten the capture period. For this reason, there are occasions where an intermediate image is generated between two adjacent image frames obtained by image capture and the intermediate image is inserted between these two image frames.
Various methods of generating such an intermediate image have been considered (see the specifications of Japanese Patent Nos. 3548071 and 3873017). However, there is still room for improvement in terms of generating an intermediate image with regard to the image of a moving subject.
SUMMARY OF THE INVENTION
Accordingly, an object of the present invention is to generate an intermediate image with regard to the image of a moving subject.
According to the present invention, the foregoing object is attained by providing an intermediate image generating apparatus comprising: a first feature point deciding device (first feature point deciding means) for deciding a plurality of feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting; a first corresponding point deciding device (first corresponding point deciding means) for deciding corresponding points, which correspond to the feature points decided by the first feature point deciding device, from within the second image; a moving subject image detecting device (moving subject image detecting means) for detecting moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image; a transformation target area setting device (transformation target area setting means) for setting transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of feature points, which are present in the moving subject image detected by the moving subject image detecting device, from among the feature points decided by the first feature point deciding device, and the positions of corresponding points, which are present in the moving subject image detected by the moving subject image detecting device, from among the corresponding points decided by the first corresponding point deciding device; a second feature point deciding device (second feature point deciding means) for deciding a plurality of feature points of an image within the transformation target area of the first image set by the transformation target area setting device; a second corresponding point deciding device (second corresponding point deciding means) for deciding corresponding points, which correspond to the feature points decided by the second feature point deciding device, in an image within the transformation target area of the second image set by the transformation target area setting device; an intermediate moving subject image generating device (intermediate moving subject image generating means) for generating an intermediate moving subject image by transforming the image within the transformation target area of the first image or second image in such a manner that this image is positioned at intermediate points present at positions that are intermediate the feature points decided by the second feature point deciding device and the corresponding points decided by the second corresponding point deciding device; and an intermediate image generating device (intermediate image generating means) for generating an image that is intermediate the first and second images using the intermediate moving subject image generated by the intermediate moving subject image generating device.
The present invention also provides an operation control method suited to the intermediate image generating apparatus described above. Specifically, a method of controlling an apparatus for generating an intermediate image comprises the steps of: deciding a plurality of feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting; deciding corresponding points, which correspond to the feature points that have been decided, from within the second image; detecting moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image; setting transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of feature points, which are present in the moving subject image that has been detected, from among the feature points that have been decided, and the positions of corresponding points, which are present in the moving subject image that has been detected, from among the corresponding points that have been decided; deciding a plurality of feature points of an image within the set transformation target area of the first image; deciding corresponding points, which correspond to the feature points that have been decided, in an image within the set transformation target area of the second image; generating an intermediate moving subject image by transforming the image within the transformation target area of the first image or second image in such a manner that this image is positioned at intermediate points present at positions that are intermediate the feature points that have been decided and the corresponding points that have been decided; and generating an image that is intermediate the first and second images using the intermediate moving subject image that has been generated.
The present invention also provides a recording medium storing a computer-readable program suited to implementation of the above-described method of controlling the operation of an intermediate image generating apparatus. The present invention may provide the program per se.
In accordance with the present invention, first and second images are obtained by continuous shooting. A plurality of feature points are decided from within the first image. (The feature points are points on the contours of a plurality of subject images contained in an image, points at which the shape of a subject image changes, etc.) Corresponding points that correspond to the decided feature points are decided from within the second image.
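By way of illustration only, and not as the patent's prescribed method, these two deciding steps map naturally onto corner detection plus sparse optical flow. The following is a minimal sketch; the choice of Shi-Tomasi corners and pyramidal Lucas-Kanade tracking, and all parameter values, are assumptions.

```python
import cv2
import numpy as np

def decide_points(first_gray: np.ndarray, second_gray: np.ndarray):
    """Decide feature points in the first image and their corresponding
    points in the second image (illustrative sketch only)."""
    # Feature points: corner-like points on subject contours and shape changes.
    feature_pts = cv2.goodFeaturesToTrack(
        first_gray, maxCorners=500, qualityLevel=0.01, minDistance=8)
    # Corresponding points: where each feature point lies in the second image.
    corresp_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        first_gray, second_gray, feature_pts, None)
    ok = status.ravel() == 1  # keep only pairs that were tracked successfully
    return feature_pts[ok].reshape(-1, 2), corresp_pts[ok].reshape(-1, 2)
```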
Moving subject images, which are subject images contained in the first and second images and represent an object moving from capture of the first image to capture of the second image, are detected in respective ones of the first and second images. A transformation target area that encloses the positions of feature points present in the detected moving subject image and the positions of corresponding points present in the detected moving subject image is decided in each of the first and second images.
Feature points of the image within the transformation target area in the first image are decided. The corresponding points that correspond to the feature points of the image within the transformation target area of the first image are decided with regard to the image within the transformation target area in the second image. The image within the transformation target area of the first image or second image is transformed so as to be positioned at intermediate points present at positions that are intermediate the feature points of the image within the transformation target area of the first image and the corresponding points of the image within the transformation target area of the second image.
When moving subject images are detected in respective ones of the first and second images, feature points of the moving subject image in the first image and the corresponding points of the moving subject image in the second image are detected. An intermediate moving subject image is generated so as to be positioned at intermediate points present at positions that are intermediate the detected feature points of the moving subject image of the first image and corresponding points of the moving subject image of the second image. The intermediate moving subject image may be generated by transforming the moving subject image contained in the first image or transforming the moving subject image contained in the second image. Thus it is possible to generate a moving subject image contained in an image that is intermediate the moving subject image contained in the first image and the moving subject image contained in the second image. An image intermediate the first and second images is generated using the intermediate moving subject image thus generated.
The apparatus further comprises an external-area image portion transforming device for transforming the second image in such a manner that the corresponding points decided by the first corresponding point deciding device will coincide with the feature points decided by the first feature point deciding device with regard to an external-area image portion that is external to the transformation target area.
In this case, by way of example, with regard to the interior of the transformation target area, the intermediate image generating device would generate the image intermediate the first and second images using the intermediate moving subject image generated by the intermediate moving subject image generating device, and with regard to the image portion exterior to the transformation target area, the intermediate image generating device would generate the image intermediate the first and second images using the first image or the image portion, external to the transformation target area, generated by the external-area image portion transforming device.
The moving subject image detecting device detects the moving subject images based upon motion that is different from motion of the total of the plurality of feature points decided by the first feature point deciding device and corresponding points decided by the first corresponding point deciding device, by way of example.
The moving subject image detecting device detects the moving subject images based upon motion that is different from motion of the total of the plurality of feature points decided by the first feature point deciding device and corresponding points decided by the first corresponding point deciding device, with the proviso that the number of corresponding points that undergo this different motion is greater than a prescribed number.
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a perspective view of a digital movie/still camera as seen from the front;
FIG. 2 is a perspective view of the digital movie/still camera as seen from the back;
FIG. 3 is a block diagram illustrating the electrical configuration of the digital movie/still camera;
FIG. 4 illustrates the manner in which an intermediate image is generated;
FIGS. 5 to 6 are flowcharts illustrating processing executed by the digital movie/still camera;
FIGS. 7 to 11 are examples of subject images;
FIG. 12 illustrates camera shake and shift of a moving object;
FIGS. 13 to 16 are examples of subject images;
FIGS. 17 to 20 illustrate images within transformation target areas; and
FIG. 21 illustrates an intermediate image.
DESCRIPTION OF THE PREFERRED EMBODIMENT
A preferred embodiment of the present invention will now be described in detail with reference to the drawings.
FIG. 1, which illustrates a preferred embodiment of the present invention, is a perspective view of a digital movie/still camera 1 as seen from the front.
The top of the digital movie/still camera 1 is formed to have a shutter-release button 2. The right side face of the digital movie/still camera 1 is formed to have a terminal 3 for connecting a USB (Universal Serial Bus) cable.
The front side of the digital movie/still camera 1 is formed to have a zoom lens 7, an electronic flash 5 is formed at the upper left of the zoom lens 7, and an optical viewfinder 6 is formed at the upper right of the zoom lens 7.
FIG. 2 is a perspective view of the digital movie/still camera 1 as seen from the back.
A liquid crystal display screen 10 is formed on the back side of the digital movie/still camera 1 at the lower left thereof, a power switch 12 is formed on the right side of the optical viewfinder 6, and a mode switch 13 is formed on the right side of the power switch 12. A shooting mode (movie shooting mode or still shooting mode) and a playback mode are set by the mode switch 13. Provided below the mode switch 13 is a circular button 14 on which arrows are formed on up, down, left and right portions thereof. Other modes can be set using the button 14.
FIG. 3 is a block diagram illustrating the electrical configuration of the digital movie/still camera 1.
The overall operation of the digital movie/still camera 1 is controlled by a CPU 22.
The digital movie/still camera 1 includes a memory 43 in which an operation program and other data, described later, have been stored. The operation program may be written to a memory card 48 or the like, read out of the memory card 48 and installed in the digital movie/still camera 1, or the operation program may be pre-installed in the camera.
The digital movie/still camera 1 includes a shutter-release button 2 and a mode switch 13. A signal indicating pressing of the shutter-release button 2 is input to the CPU 22. The mode switch 13, which selects the movie shooting mode, still shooting mode or playback mode, is capable of turning on a switch S1, S2 or S3 selectively. The movie shooting mode is set by turning on the switch S1, the still shooting mode by turning on the switch S2 and the playback mode by turning on the switch S3.
The digital movie/still camera 1 is capable of flash photography. The electronic flash 5 is provided to achieve this, as described above. A light emission from the electronic flash 5 is turned on and off under the control of a flash control circuit 16.
An iris 25 and a zoom lens 7 are provided in front of a solid-state electronic image sensing device 27 such as a CCD. The iris 25 has its f-stop decided by a motor 33 controlled by a motor driver 30. The zoom lens 7 has its zoom position decided by a motor 34 controlled by a motor driver 31.
If the movie or still shooting mode is set by the mode switch 13, light representing the image of a subject that has passed through the iris 25 is formed into an image on the photoreceptor surface of the image sensing device 27 by the zoom lens 7. The image sensing device 27 is controlled by a timing generator 32 and the image of the subject is captured at a fixed period (a period of 1/30 of a second, by way of example). A video signal representing the image of the subject is output from the image sensing device 27 at a fixed period and is input to a CDS (correlated double sampling) circuit 28. The video signal that has been subjected to correlated double sampling in the CDS circuit 28 is converted to digital image data in an analog/digital converting circuit 29.
The digital image data is input to an image signal processing circuit 36 by an image input controller 35 and is subjected to prescribed signal processing such as a gamma correction. The digital image data is written to a VRAM (video random-access memory) 41, after which this data is read out and applied to an image display unit 47, whereby the image data is displayed as a moving image on the display screen of the image display unit 47.
In the digital movie/still camera 1 according to this embodiment, an intermediate image (interpolated image), namely an image that is intermediate first and second images, can be generated from the first and second images. In the generating of the intermediate image, feature points on the contour, etc., of a subject image contained in the first image are decided and corresponding points corresponding to these feature points are decided in the second image. The second image is positioned in such a manner that the corresponding points will coincide with the feature points. As a result, camera shake between the capture of the first image and the capture of the second image is corrected.
Further, in a case where an object moving from capture of the first image to capture of the second image exists, the motion of the image of the object is analyzed and the position of a moving subject image in the intermediate image is decided midway between the moving subject image in the first image and the moving subject image in the second image.
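A minimal sketch of the positioning step follows, assuming the camera-shake motion can be modeled as a single affine transform fitted robustly to the point pairs; the RANSAC inliers then play the role of the non-moving background points. The function and parameter choices are illustrative, not the patent's stated method.

```python
import cv2
import numpy as np

def correct_shake(second_img, feature_pts, corresp_pts):
    # Fit the dominant motion that maps corresponding points onto feature points.
    matrix, inliers = cv2.estimateAffine2D(
        corresp_pts, feature_pts, method=cv2.RANSAC,
        ransacReprojThreshold=3.0)
    h, w = second_img.shape[:2]
    # Warping the second image aligns its background with the first image.
    aligned = cv2.warpAffine(second_img, matrix, (w, h))
    # Outlier point pairs are candidates for a moving subject image.
    return aligned, inliers.ravel().astype(bool)
```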
In order to generate such an intermediate image, the digital movie/still camera 1 is provided with a feature point/corresponding point detecting circuit 38, a motion analyzing circuit 39 and an image transforming/synthesizing circuit 45.
The digital image data obtained by image capture is input to an AF (autofocus) detecting circuit 41. The zoom position of the zoom lens 7 is controlled in the AF detecting circuit 41 so as to bring the image into focus. Further, the digital image data obtained by image capture is input also to an AE (automatic exposure)/AWB (automatic white balance) detecting circuit 42. The AE/AWB detecting circuit 42 decides the aperture of the iris 25 in such a manner that detected brightness will become an appropriate brightness. A white-balance adjustment is also carried out in the AE/AWB detecting circuit 42.
If the shutter-release button 2 is pressed in the still shooting mode, one frame of image data obtained by image capture at that moment is input to a compression processing circuit 37. The image data that has been subjected to prescribed compression processing in the compression processing circuit 37 is input to a video encoder 40 and is encoded thereby. The encoded image data is recorded on the memory card 48 under the control of a memory controller 46. In the movie shooting mode, image data that has been obtained by image capture during depression of the shutter-release button 2 is recorded on the memory card 48 as moving image data.
If the playback mode is set by the mode switch 13, the image data that has been recorded on the memory card 48 is read. The image represented by the read image data is displayed on the display screen of the image display unit 47.
FIG. 4 illustrates the manner in which an intermediate image is generated.
In this embodiment, an image 90 that is intermediate a first image 70 and a second image 80 is generated from the first image 70 and the second image 80, as mentioned earlier. Although the first image 70 and the second image 80 are images the shooting interval of which is a fixed length of time in the manner of two successive image frames in a moving image, they need not necessarily be two successive image frames.
In particular, in this embodiment, if the intermediate image 90 has been generated in a case where the first image 70 contains a subject image 73 that moves in the manner of an automobile and the second image 80 also contains an identical moving subject image 83, then a moving subject image 93 is positioned at the position where it should be present in the intermediate image 90. The moving subject image will be seen to move without appearing unnatural if the first image 70, intermediate image 90 and second image 80 are viewed successively. Feature points 78 are decided in the moving subject image 73 and corresponding points 88 are decided in the moving subject image 83 in a manner described later in greater detail. Corresponding points 98, which define the moving subject image 93 of the intermediate image 90, are decided from the feature points 78 and corresponding points 88 that have been decided. The moving subject image 93 is positioned at the positions defined by the corresponding points 98 that have been decided.
Furthermore, in this embodiment, in order that an image portion such as the background or foreground of a moving subject image will be corrected for a shift in position ascribable to camera shake or the like, a shake correction is carried out in such a manner that the portion of the second image 80 that does not include the moving subject image 83 will coincide with the portion of the first image 70 that does not include the moving subject image 73. Since the portions of the first image 70 and second image 80 in which the subject is not moving will coincide, blur due to image shake can be prevented. With regard also to the portion of the intermediate image 90 that does not include the moving subject image 93, it goes without saying that this portion is generated in such a manner that it will coincide with the portion of the first image 70 or second image 80 that is devoid of motion.
FIGS. 5 and 6 are flowcharts illustrating processing executed by the digital movie/still camera 1.
First and second images are obtained from the memory card 48 (step 51 in FIG. 5).
FIG. 7 is an example of the first image and FIG. 8 an example of the second image.
As shown in FIG. 7, the image 73 of an automobile is displayed toward the front of the first image 70 and images 71 and 72 of trees are displayed in back.
In FIG. 8, the second image 80 has been captured after the first image 70. Since the first image 70 and second image 80 have not been captured simultaneously, a shift ascribable to camera shake occurs between the first image 70 and second image 80 and the second image 80 is tilted in comparison with the first image 70. The image 83 of the automobile is displayed toward the front of the second image 80 as well and images 81 and 82 of trees are displayed in back.
The images 71 and 72 of the trees in the first image 70 are the same as the images 81 and 82, respectively, of the trees in the second image 80, and the image 73 of the automobile in the first image 70 and the image 83 of the automobile in the second image 80 are images of the same automobile.
When the first image 70 and second image 80 are obtained, a reference image and a corresponding image are decided (step 52 in FIG. 5). In this embodiment, the first image 70 is adopted as the reference image and the second image 80 is adopted as the corresponding image. However, the reverse may just as well be adopted.
When the reference image and corresponding image are decided, the feature points of the reference image are decided in a manner described next (step 53 in FIG. 5). Further, the corresponding points of the corresponding image are decided (step 54 in FIG. 5).
FIG. 9 illustrates the reference image (first image) 70.
The reference image 70 contains the images 71 and 72 of the trees and the image 73 of the automobile. A feature point defines one point on the contour of a subject image contained in the reference image 70 and indicates the characteristic shape of the subject image. A plurality of feature points 74 are decided with regard to each of the images 71 and 72 of the trees and the image 73 of the automobile.
FIG. 10 illustrates the corresponding image (second image) 80.
Corresponding points indicate the points of pixels that correspond to the feature points decided in the reference image 70. Corresponding points 84 corresponding to the feature points 74 are decided with regard to each of the images 81 and 82 of the trees and image 83 of the automobile contained in the corresponding image 80.
Next, overall-image shake between the reference image 70 and corresponding image 80 is calculated (step 55 in FIG. 5). The calculation of shake of the overall image (calculation of amount and direction of shake) will be described in detail later. The corresponding image 80 is corrected in accordance with the calculated image shake. The shake correction may be performed when the reference image 70 and corresponding image 80 are superimposed.
FIG. 11 is an example of a corresponding image 80A that has undergone a correction for shake. The shift between this image and the reference image 70 has been corrected by the shake correction.
FIG. 12 illustrates an optical flow graph.
This optical flow graph is such that the amount and direction of shift of the corresponding points, which correspond to the feature points, from these feature points is indicated by marks placed at every feature point (corresponding point). Three groups of marks are indicated in this optical flow graph. Assume that a first group G1 has the largest number of marks, a second group G2 has the next largest number of marks and a third group G3 has a very small number of marks. In this embodiment, it is determined that what is represented by the first group G1, which has the largest number of marks, is the overall-image shift between the reference image 70 and corresponding image 80. Further, it is determined that what is represented by the second group G2, in which the number of marks is greater than a prescribed number, is the amount and direction of movement of the subject image of the moving object that has moved between capture of the reference image 70 and capture of the corresponding image 80. What is represented by the third group G3, in which the number of marks is less than a prescribed number, is treated as garbage and is determined not to represent a moving subject image.
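The grouping of the optical-flow marks can be pictured as a vote over quantized displacement vectors. The sketch below returns the dominant shift (group G1) and any displacement groups large enough to count as moving subjects (group G2); the bin size and the prescribed group-size threshold are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def group_motions(feature_pts, corresp_pts, bin_size=4.0, min_group=10):
    disp = corresp_pts - feature_pts              # one motion mark per point
    keys = [tuple(np.round(d / bin_size).astype(int)) for d in disp]
    ordered = Counter(keys).most_common()         # largest group first
    # G1, the largest group, is taken as the overall-image shift.
    global_shift = np.array(ordered[0][0]) * bin_size
    # G2: groups above the prescribed size are moving subject images.
    movers = [np.array(k) * bin_size for k, n in ordered[1:] if n > min_group]
    # Remaining small groups (G3) are treated as garbage and ignored.
    return global_shift, movers
```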
If overall-image shake exists (“YES” at step 56 in FIG. 5), then it is determined whether a subject image (moving subject image) representing an object that has moved from capture of the reference image 70 to capture of the corresponding image 80 exists in the corresponding image 80 (step 57 in FIG. 6). In this embodiment, the image 73 of the automobile is contained in the reference image 70, the image 83 of the automobile is contained also in the corresponding image 80, and the automobile is moving from capture of the reference image 70 to capture of the corresponding image 80. Accordingly, a determination is made that a moving subject image exists (step 57 in FIG. 6).
Next, whether the corresponding points of the moving subject image are large in number (greater than a prescribed number) is checked (step 58 in FIG. 6). If a large number of corresponding points do not exist, then the subject image portion represented by these corresponding points is considered to be a moving subject image owing to the effects of garbage and therefore is eliminated.
FIG. 13 illustrates a moving-portion area 76, which encloses the moving subject image (the image of the automobile) 73 determined as described above, in the reference image 70. FIG. 14 illustrates a moving-portion area 86 that encloses the moving subject image 83 in the corresponding image 80.
Since the automobile is moving, there is a shift between the position of the moving-portion area 76 shown in FIG. 13 and the position of the moving-portion area 86 shown in FIG. 14, and the shift depends upon the amount and direction of movement of the automobile. In this embodiment, a transformation target area that encloses both of the moving-portion areas 76 and 86 is set (step 59 in FIG. 6).
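Setting the transformation target area then reduces to taking the smallest box that encloses both moving-portion areas. A small sketch, with each area given as (x, y, width, height):

```python
def union_area(area_a, area_b):
    # Smallest axis-aligned box enclosing both moving-portion areas 76 and 86.
    ax, ay, aw, ah = area_a
    bx, by, bw, bh = area_b
    x0, y0 = min(ax, bx), min(ay, by)
    x1, y1 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
    return x0, y0, x1 - x0, y1 - y0
```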
FIG. 15 illustrates a transformation target area 77 that has been set in the reference image 70. Further, in order to make it easy to understand that the transformation target area 77 encloses the moving-portion area 76 in the reference image 70 and the moving-portion area 86 in the corresponding image 80, the moving-portion area 86 in the corresponding image 80 is also shown in FIG. 15 in addition to the moving-portion area 76 in the reference image 70.
FIG. 16 illustrates a transformation target area 87 that has been set in the corresponding image 80. Further, in order to make it easy to understand that the transformation target area 87 encloses the moving-portion area 76 in the reference image 70 and the moving-portion area 86 in the corresponding image 80, the moving-portion area 76 in the reference image 70 is also shown in FIG. 16 in addition to the moving-portion area 86 in the corresponding image 80.
Next, feature points of the image within the transformation target area 77 of the reference image 70 are decided anew (step 60 in FIG. 6). Further, corresponding points of the image within the transformation target area 87 of the corresponding image 80, which points correspond to the feature points within the transformation target area 77 of the reference image 70, are decided anew (step 61 in FIG. 6).
FIG. 17 illustrates an image 79 within the transformation target area 77 of the reference image 70.
The image 79 within the transformation target area 77 contains the image 73 of the automobile. A plurality of feature points 78 are decided with regard to the image of the automobile. Preferably, the feature points 78 are taken at intervals smaller than the intervals of the feature points that were decided with regard to the overall reference image 70 in the manner shown in FIG. 9. The reason is that finer feature-point intervals allow a more detailed image transformation to be achieved when the image transformation is performed, as will be described later.
FIG. 18 illustrates an image 89 within the transformation target area 87 of the corresponding image 80.
Corresponding points 88, which correspond to the feature points 78 that have been decided with regard to the image 79 within the transformation target area 77, are decided with regard to the image 89 within the transformation target area 87.
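The patent does not prescribe a particular detection or matching algorithm for these new points. Purely as a sketch, steps 60 and 61 could be realized with OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracking; the function name and the parameter values here are assumptions:

import cv2
import numpy as np

def decide_points_in_area(ref_img, cor_img, area):
    l, t, r, b = area
    ref_gray = cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY)
    cor_gray = cv2.cvtColor(cor_img, cv2.COLOR_BGR2GRAY)
    mask = np.zeros_like(ref_gray)
    mask[t:b, l:r] = 255                  # detect only inside the target area
    # A small minDistance yields the finer feature-point spacing that the
    # embodiment recommends inside the transformation target area.
    pts = cv2.goodFeaturesToTrack(ref_gray, 200, 0.01, 5, mask=mask)
    cor_pts, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cor_gray, pts, None)
    ok = status.ravel() == 1              # keep successfully tracked points
    return pts[ok].reshape(-1, 2), cor_pts[ok].reshape(-1, 2)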
Next, triangulation (polygonization) is carried out, as illustrated in FIG. 19, utilizing the feature points 78 within the reference image 70 and the vertices of the transformation target area 77 (step 62 in FIG. 6).
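As a sketch, the triangulation of step 62 can be performed over the feature points plus the four vertices of the transformation target area; scipy's Delaunay triangulation is used here purely for illustration, and is not necessarily the method of the embodiment:

import numpy as np
from scipy.spatial import Delaunay

def triangulate_area(feature_pts, area):
    l, t, r, b = area
    # Add the area's vertices so the triangulation covers the whole rectangle.
    corners = np.array([[l, t], [r, t], [r, b], [l, b]], dtype=float)
    pts = np.vstack([np.asarray(feature_pts, float), corners])
    tri = Delaunay(pts)
    return pts, tri.simplices             # vertex coordinates, triangle indices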
Furthermore, intermediate points are decided. These are points present at positions that are intermediate the feature points 78 within the transformation target area 77 in the reference image 70 shown in FIG. 17 and their counterpart corresponding points 88 within the transformation target area 87 in the corresponding image 80 shown in FIG. 18 (step 63 in FIG. 6).
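The intermediate points of step 63 lie part-way along each feature-point-to-corresponding-point displacement. A one-line sketch, assuming numpy arrays; a fraction of 0.5 gives the temporal midpoint, and other fractions (an assumption beyond the illustrated embodiment) would yield other in-between frames:

def intermediate_points(feature_pts, corresponding_pts, fraction=0.5):
    # Move each feature point the given fraction of the way toward its
    # counterpart corresponding point.
    return feature_pts + fraction * (corresponding_pts - feature_pts)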
Next, with regard to the image 79 within the transformation target area 77 that has undergone triangulation, image transformation processing (triangular transformation processing) is executed, in accordance with the amount of movement of the moving subject image, for each triangle obtained by triangulation, in such a manner that the feature points 78 and the decided intermediate points will coincide (step 64 in FIG. 6).
FIG. 20 illustrates the image (intermediate moving subject image) 93 of the automobile constituting the intermediate image obtained by the image transformation processing. By subjecting the image 79 within the transformation target area 77 that has undergone triangulation to image transformation processing, in accordance with the amount of movement of the moving subject image, for each triangle obtained by triangulation, in such a manner that the feature points 78 and the decided intermediate points coincide, as described above, the image 93 of the automobile constituting the intermediate image is obtained within an area 97 corresponding to the transformation target area 77.
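A sketch of this per-triangle transformation (step 64) using OpenCV: each triangle of the triangulated image 79 is warped by the affine map that carries its feature-point vertices onto the corresponding intermediate points. The area vertices are assumed to be held fixed, and the function and variable names are illustrative only:

import cv2
import numpy as np

def warp_triangles(src_img, src_pts, mid_pts, triangles):
    # src_pts: triangulation vertices in the source image (features + corners);
    # mid_pts: the same vertices moved to their intermediate positions.
    out = np.zeros_like(src_img)
    h, w = src_img.shape[:2]
    for tri in triangles:
        src = np.float32(src_pts[tri])
        dst = np.float32(mid_pts[tri])
        m = cv2.getAffineTransform(src, dst)        # Eqs. (24)-(29) in effect
        warped = cv2.warpAffine(src_img, m, (w, h))
        mask = np.zeros((h, w), np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst), 1)  # confine to this triangle
        out[mask == 1] = warped[mask == 1]
    return out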
By using an image 99, which has been generated as shown in FIG. 20, with regard to the interior of the area 97 and using the reference image 70 with regard to the exterior of the area 97, the intermediate image 90 is obtained, as illustrated in FIG. 21 (step 65 in FIG. 6). Specifically, with regard to the automobile, the intermediate image 90 is such that the automobile is positioned between its positions in the reference image 70 and the corresponding image 80. With regard to the background, etc., of the automobile, the intermediate image 90 is such that the background is the same as in the reference image 70. If the reference image, intermediate image and corresponding image are viewed in the order mentioned, the motion of the moving automobile will be smooth and shake will be diminished with regard to the background, etc., which is not moving.
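A sketch of this compositing step, assuming the warped result and the reference image are numpy arrays of the same size and the area 97 is a rectangle (left, top, right, bottom); the function name is hypothetical:

def compose_intermediate(ref_img, warped_img, area_97):
    l, t, r, b = area_97
    out = ref_img.copy()                  # background taken from the reference
    out[t:b, l:r] = warped_img[t:b, l:r]  # moving subject from the warp result
    return out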
If a moving subject image does not exist (“NO” at step 57 in FIG. 6), or if it has been determined that a moving subject image exists but a large number of corresponding points do not exist in it (i.e., if the apparent moving subject image has been ascribed to the effects of garbage) (“NO” at step 58 in FIG. 6), then the intermediate image is generated from the reference image 70 without executing the above-described image transformation processing (step 65 in FIG. 6). Naturally, it may be so arranged that the intermediate image is generated from the corresponding image after the corresponding image has undergone an overall shake correction.
In the above-described embodiment, the moving subject image 93 of the intermediate image 90 is generated by transforming the image 79 within the transformation target area 77 of the reference image 70. However, it may be so arranged that the moving subject image 93 of the intermediate image 90 is generated by transforming the image 89 within the transformation target area 87 of the corresponding image 80.
In the above-described embodiment, an image that is intermediate two image frames is generated from the two image frames. However, by applying similar processing to a number of frames of a moving image, data representing a moving image having smooth motion can be generated.
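For instance, given any routine make_intermediate(prev, nxt) implementing the processing above (a hypothetical name), intermediate frames could be interleaved into a frame sequence as follows, doubling the effective frame rate:

def smooth_sequence(frames, make_intermediate):
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)                          # original frame
        out.append(make_intermediate(prev, nxt))  # generated in-between frame
    out.append(frames[-1])
    return out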
Next, a method of calculating overall-image shake (the processing of step 55 in FIG. 5) between the reference image and corresponding image will be described.
In a case where image shake has occurred from feature points to corresponding points, as described above, the shake can be calculated utilizing an affine transformation and a projective transformation.
Equation (1) illustrates the affine transformation relation, and Equation (2) illustrates the projective transformation relation. The position of a corresponding point is indicated by (X,Y) in Equations (1), (2), and the position of a feature point by (x,y).
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} X \\ Y \end{bmatrix} + \begin{bmatrix} s \\ t \end{bmatrix} \qquad \text{Eq. (1)}$$

$$x = \frac{a\,X + b\,Y + s}{p\,X + q\,Y + 1}, \qquad y = \frac{c\,X + d\,Y + t}{p\,X + q\,Y + 1} \qquad \text{Eq. (2)}$$
If shake is calculated using the method of least squares, we proceed as follows:
A Gauss-Markov (GM) estimation based upon the method of least squares is Equation (3) below, where the matrix T represents the corresponding points and is indicated by Equation (4). Further, the matrix A is the affine parameter vector and is indicated by Equation (5). The matrix F is constructed from the feature points and is indicated by Equation (6). Furthermore, the transposed matrix of the matrix F is Equation (7).
$$T \approx F \cdot A \qquad \text{Eq. (3)}$$

where

$$A = (F^{T}F)^{-1} \cdot F^{T}T$$

$$T = \begin{pmatrix} X_1 \\ Y_1 \\ X_2 \\ Y_2 \\ \vdots \end{pmatrix} \ \text{Eq. (4)} \qquad A = \begin{pmatrix} a \\ b \\ c \\ d \\ s \\ t \end{pmatrix} \ \text{Eq. (5)} \qquad F = \begin{pmatrix} x_1 & y_1 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & y_1 & 0 & 1 \\ x_2 & y_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_2 & y_2 & 0 & 1 \\ & & \vdots & & & \end{pmatrix} \ \text{Eq. (6)}$$

$$F^{T} = \begin{pmatrix} x_1 & 0 & x_2 & 0 & \cdots \\ y_1 & 0 & y_2 & 0 & \cdots \\ 0 & x_1 & 0 & x_2 & \cdots \\ 0 & y_1 & 0 & y_2 & \cdots \\ 1 & 0 & 1 & 0 & \cdots \\ 0 & 1 & 0 & 1 & \cdots \end{pmatrix} \ \text{Eq. (7)}$$
In the course of finding the affine parameter vector, the product F^T·F is computed as shown in Equation (8) below; it can be written compactly as shown in Equations (9) and (10).
$$F^{T} \cdot F = \begin{pmatrix} \sum_{k=1}^{n} x_k^2 & \sum_{k=1}^{n} x_k y_k & 0 & 0 & \sum_{k=1}^{n} x_k & 0 \\ \sum_{k=1}^{n} x_k y_k & \sum_{k=1}^{n} y_k^2 & 0 & 0 & \sum_{k=1}^{n} y_k & 0 \\ 0 & 0 & \sum_{k=1}^{n} x_k^2 & \sum_{k=1}^{n} x_k y_k & 0 & \sum_{k=1}^{n} x_k \\ 0 & 0 & \sum_{k=1}^{n} x_k y_k & \sum_{k=1}^{n} y_k^2 & 0 & \sum_{k=1}^{n} y_k \\ \sum_{k=1}^{n} x_k & \sum_{k=1}^{n} y_k & 0 & 0 & n & 0 \\ 0 & 0 & \sum_{k=1}^{n} x_k & \sum_{k=1}^{n} y_k & 0 & n \end{pmatrix} \quad \text{Eq. (8)}$$

$$F^{T} \cdot F = \begin{pmatrix} A & B & 0 & 0 & D & 0 \\ B & C & 0 & 0 & E & 0 \\ 0 & 0 & A & B & 0 & D \\ 0 & 0 & B & C & 0 & E \\ D & E & 0 & 0 & n & 0 \\ 0 & 0 & D & E & 0 & n \end{pmatrix} \quad \text{Eq. (9)}$$

$$A = \sum_{k=1}^{n} x_k^2, \quad B = \sum_{k=1}^{n} x_k y_k, \quad C = \sum_{k=1}^{n} y_k^2, \quad D = \sum_{k=1}^{n} x_k, \quad E = \sum_{k=1}^{n} y_k \quad \text{Eq. (10)}$$
Further, the inverse matrix of Equation (9) is represented by Equation (11), where the relation of Equation (12) holds.
$$(F^{T} \cdot F)^{-1} = \frac{1}{\det}\begin{pmatrix} nC - E^2 & -nB + DE & 0 & 0 & BE - CD & 0 \\ -nB + DE & nA - D^2 & 0 & 0 & BD - AE & 0 \\ 0 & 0 & nC - E^2 & -nB + DE & 0 & BE - CD \\ 0 & 0 & -nB + DE & nA - D^2 & 0 & BD - AE \\ BE - CD & BD - AE & 0 & 0 & AC - B^2 & 0 \\ 0 & 0 & BE - CD & BD - AE & 0 & AC - B^2 \end{pmatrix} \quad \text{Eq. (11)}$$

$$\det = n(AC - B^2) + D(BE - CD) + E(BD - AE) \quad \text{Eq. (12)}$$
Equation (13) below holds from the Cauchy-Schwarz inequality, and the relation of Equation (14) holds.
$$AC - B^2 = \sum_{k=1}^{n} x_k^2 \cdot \sum_{k=1}^{n} y_k^2 - \left(\sum_{k=1}^{n} x_k y_k\right)^2 \geq 0 \quad \text{Eq. (13)}$$

$$F^{T} \cdot T = \begin{pmatrix} x_1 & 0 & x_2 & 0 & \cdots \\ y_1 & 0 & y_2 & 0 & \cdots \\ 0 & x_1 & 0 & x_2 & \cdots \\ 0 & y_1 & 0 & y_2 & \cdots \\ 1 & 0 & 1 & 0 & \cdots \\ 0 & 1 & 0 & 1 & \cdots \end{pmatrix}\begin{pmatrix} X_1 \\ Y_1 \\ X_2 \\ Y_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} \sum_{k=1}^{n} x_k X_k \\ \sum_{k=1}^{n} y_k X_k \\ \sum_{k=1}^{n} x_k Y_k \\ \sum_{k=1}^{n} y_k Y_k \\ \sum_{k=1}^{n} X_k \\ \sum_{k=1}^{n} Y_k \end{pmatrix} \quad \text{Eq. (14)}$$
Since the affine parameters are calculated from Equation (15) below, the shake from the feature points to the corresponding points is obtained.
$$A = (F^{T}F)^{-1} \cdot F^{T}T \qquad \text{Eq. (15)}$$
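Numerically, Equation (15) is an ordinary linear least-squares solve. A sketch with numpy, building F and T as in Equations (4) and (6); np.linalg.lstsq is used here instead of forming (F^T F)^-1 explicitly, which is mathematically equivalent but better conditioned:

import numpy as np

def affine_shake(feature_pts, corresponding_pts):
    # feature_pts: (n, 2) array of (xk, yk);
    # corresponding_pts: (n, 2) array of (Xk, Yk).
    fp = np.asarray(feature_pts, float)
    n = len(fp)
    F = np.zeros((2 * n, 6))
    F[0::2, 0:2] = fp                    # rows (xk yk 0 0 1 0) of Eq. (6)
    F[0::2, 4] = 1.0
    F[1::2, 2:4] = fp                    # rows (0 0 xk yk 0 1) of Eq. (6)
    F[1::2, 5] = 1.0
    T = np.asarray(corresponding_pts, float).reshape(-1)   # Eq. (4)
    A, *_ = np.linalg.lstsq(F, T, rcond=None)              # Eq. (15)
    return A                             # affine parameters a, b, c, d, s, t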
Next, the image transformation processing (the processing of step 64 in FIG. 6) within the transformation target area mentioned above will be described.
If we assume that the vertices (x0, y0), (x1, y1) and (x2, y2) of a triangle generated as described above move to (X0, Y0), (X1, Y1) and (X2, Y2) (that is, if we assume movement from feature points to corresponding points within the transformation target area), then Equation (16) below will hold.
$$\begin{pmatrix} X_0 \\ Y_0 \\ X_1 \\ Y_1 \\ X_2 \\ Y_2 \end{pmatrix} = \begin{pmatrix} x_0 & y_0 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_0 & y_0 & 0 & 1 \\ x_1 & y_1 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & y_1 & 0 & 1 \\ x_2 & y_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_2 & y_2 & 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \\ d \\ s \\ t \end{pmatrix} \quad \text{Eq. (16)}$$
Since the coefficients a, b, c, d, s, t are found from the inverse of the matrix of the transformation described above, Equation (16) can be written as Equations (17) and (18) below; the inverse matrix is given by Equation (19), where the relation of Equation (20) holds.
$$\begin{pmatrix} X_0 \\ Y_0 \\ X_1 \\ Y_1 \\ X_2 \\ Y_2 \end{pmatrix} = C \cdot \begin{pmatrix} a \\ b \\ c \\ d \\ s \\ t \end{pmatrix} \quad \text{Eq. (17)} \qquad \begin{pmatrix} a \\ b \\ c \\ d \\ s \\ t \end{pmatrix} = C^{-1} \cdot \begin{pmatrix} X_0 \\ Y_0 \\ X_1 \\ Y_1 \\ X_2 \\ Y_2 \end{pmatrix} \quad \text{Eq. (18)}$$

$$C^{-1} = \frac{1}{\det C}\begin{pmatrix} y_1 - y_2 & 0 & y_2 - y_0 & 0 & y_0 - y_1 & 0 \\ x_2 - x_1 & 0 & x_0 - x_2 & 0 & x_1 - x_0 & 0 \\ 0 & y_1 - y_2 & 0 & y_2 - y_0 & 0 & y_0 - y_1 \\ 0 & x_2 - x_1 & 0 & x_0 - x_2 & 0 & x_1 - x_0 \\ x_1 y_2 - x_2 y_1 & 0 & x_2 y_0 - x_0 y_2 & 0 & x_0 y_1 - x_1 y_0 & 0 \\ 0 & x_1 y_2 - x_2 y_1 & 0 & x_2 y_0 - x_0 y_2 & 0 & x_0 y_1 - x_1 y_0 \end{pmatrix} \quad \text{Eq. (19)}$$

$$\det C = x_0 y_1 + x_2 y_0 + x_1 y_2 - x_0 y_2 - x_2 y_1 - x_1 y_0 \quad \text{Eq. (20)}$$
If it is so arranged that (x0, y0) will always be at the origin, then Equation (21) below holds.
$$\begin{pmatrix} X_0 \\ Y_0 \\ X_1 \\ Y_1 \\ X_2 \\ Y_2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ x_1 & y_1 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & y_1 & 0 & 1 \\ x_2 & y_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_2 & y_2 & 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \\ d \\ s \\ t \end{pmatrix} \quad \text{Eq. (21)}$$
Since the motion of (x0, y0) consists solely of the shift component, (x0, y0) moves to (s, t). Accordingly, the inverse matrix of Equation (19) reduces to Equation (22) below, where the relation of Equation (23) holds.
$$C^{-1} = \frac{1}{\det C}\begin{pmatrix} y_1 - y_2 & 0 & y_2 & 0 & -y_1 & 0 \\ x_2 - x_1 & 0 & -x_2 & 0 & x_1 & 0 \\ 0 & y_1 - y_2 & 0 & y_2 & 0 & -y_1 \\ 0 & x_2 - x_1 & 0 & -x_2 & 0 & x_1 \\ x_1 y_2 - x_2 y_1 & 0 & 0 & 0 & 0 & 0 \\ 0 & x_1 y_2 - x_2 y_1 & 0 & 0 & 0 & 0 \end{pmatrix} \quad \text{Eq. (22)}$$

$$\det C = x_1 y_2 - x_2 y_1 \quad \text{Eq. (23)}$$
The coefficients a, b, c, d, s, t are as indicated by Equations (24) to (29) below.
$$a = \frac{1}{x_1 y_2 - x_2 y_1}\bigl((y_1 - y_2)X_0 + y_2 X_1 - y_1 X_2\bigr) \quad \text{Eq. (24)}$$

$$b = \frac{1}{x_1 y_2 - x_2 y_1}\bigl((x_2 - x_1)X_0 - x_2 X_1 + x_1 X_2\bigr) \quad \text{Eq. (25)}$$

$$c = \frac{1}{x_1 y_2 - x_2 y_1}\bigl((y_1 - y_2)Y_0 + y_2 Y_1 - y_1 Y_2\bigr) \quad \text{Eq. (26)}$$

$$d = \frac{1}{x_1 y_2 - x_2 y_1}\bigl((x_2 - x_1)Y_0 - x_2 Y_1 + x_1 Y_2\bigr) \quad \text{Eq. (27)}$$

$$s = X_0 \quad \text{Eq. (28)}$$

$$t = Y_0 \quad \text{Eq. (29)}$$
Thus, it is possible to move from feature points to corresponding points (to execute an image transformation) within the transformation target area described above.
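For one triangle with (x0, y0) shifted to the origin, Equations (22) to (29) reduce to a few lines of arithmetic. A sketch (the function name is illustrative):

def triangle_coefficients(x1, y1, x2, y2, X0, Y0, X1, Y1, X2, Y2):
    det_c = x1 * y2 - x2 * y1                          # Eq. (23)
    a = ((y1 - y2) * X0 + y2 * X1 - y1 * X2) / det_c   # Eq. (24)
    b = ((x2 - x1) * X0 - x2 * X1 + x1 * X2) / det_c   # Eq. (25)
    c = ((y1 - y2) * Y0 + y2 * Y1 - y1 * Y2) / det_c   # Eq. (26)
    d = ((x2 - x1) * Y0 - x2 * Y1 + x1 * Y2) / det_c   # Eq. (27)
    s, t = X0, Y0                                      # Eqs. (28), (29)
    return a, b, c, d, s, t

A point (x, y) of the source triangle is then carried to (a·x + b·y + s, c·x + d·y + t), which is the transformation applied to each triangle in step 64.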
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.

Claims (6)

1. An intermediate image generating apparatus comprising:
a first feature point deciding device for deciding a plurality of feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting;
a first corresponding point deciding device for deciding corresponding points, which correspond to the feature points decided by said first feature point deciding device, from within the second image;
a moving subject image detecting device for detecting moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image;
a transformation target area setting device for setting transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of feature points, which are present in the moving subject image detected by said moving subject image detecting device, from among the feature points decided by said first feature point deciding device, and the positions of corresponding points, which are present in the moving subject image detected by said moving subject image detecting device, from among the corresponding points decided by said first corresponding point deciding device;
a second feature point deciding device for deciding a plurality of feature points of an image within the transformation target area of the first image set by said transformation target area setting device;
a second corresponding point deciding device for deciding corresponding points, which correspond to the feature points decided by said second feature point deciding device, in an image within the transformation target area of the second image set by said transformation target area setting device;
an intermediate moving subject image generating device for generating an intermediate moving subject image by transforming the image within the transformation target area of the first image or second image in such a manner that this image is positioned at intermediate points present at positions that are intermediate the feature points decided by said second feature point deciding device and the corresponding points decided by said second corresponding point deciding device; and
an intermediate image generating device for generating an image that is intermediate the first and second images using the intermediate moving subject image generated by said intermediate moving subject image generating device.
2. The apparatus according to claim 1, further comprising an external-area image portion transforming device for transforming the second image in such a manner that the corresponding points decided by said first corresponding point deciding device will coincide with the feature points decided by said first feature point deciding device with regard to an image portion that is external to the transformation target area;
wherein with regard to the interior of the transformation target area, said intermediate image generating device generates the image intermediate the first and second images using the intermediate moving subject image generated by said intermediate moving subject image generating device, and with regard to the image portion exterior to the transformation target area, said intermediate image generating device generates the image intermediate the first and second images using the first image or the image portion, external to the transformation target area, generated by said external-area image portion transforming device.
3. The apparatus according to claim 1, wherein said moving subject image detecting device detects the moving subject images based upon motion that is different from motion of the total of the plurality of feature points decided by said first feature point deciding device and corresponding points decided by said first corresponding point deciding device.
4. The apparatus according to claim 3, wherein said moving subject image detecting device detects the moving subject images based upon motion that is different from motion of the total of the plurality of feature points decided by said first feature point deciding device and corresponding points decided by said first corresponding point deciding device, with the proviso that the number of corresponding points that undergo this different motion is greater than a prescribed number.
5. A method of controlling operation of an intermediate image generation apparatus, comprising the steps of:
deciding a plurality of feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting;
deciding corresponding points, which correspond to the feature points that have been decided, from within the second image;
detecting moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image;
setting transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of feature points, which are present in the moving subject image that has been detected, from among the feature points that have been decided, and the positions of corresponding points, which are present in the moving subject image that has been detected, from among the corresponding points that have been decided;
deciding a plurality of feature points of an image within the set transformation target area of the first image;
deciding corresponding points, which correspond to the feature points that have been decided, in an image within the set transformation target area of the second image;
generating an intermediate moving subject image by transforming the image within the transformation target area of the first image or second image in such a manner that this image is positioned at intermediate points present at positions that are intermediate the feature points that have been decided and the corresponding points that have been decided; and
generating an image that is intermediate the first and second images using the intermediate moving subject image that has been generated.
6. A non-transitory recording medium storing a computer-readable program for controlling a computer of an intermediate image generating apparatus so as to:
decide a plurality of first feature points, which indicate the shape features of a subject, from within a first image, wherein the first image and a second image have been obtained by continuous shooting;
decide first corresponding points, which correspond to the feature points that have been decided, from within the second image;
detect moving subject images in respective ones of the first and second images, the moving subject images being subject images contained in the first and second images and representing an object moving from capture of the first image to capture of the second image;
set transformation target areas in respective ones of the first and second images, the transformation target areas enclosing both the positions of first feature points, which are present in the moving subject image that has been detected, from among the first feature points that have been decided, and the positions of first corresponding points, which are present in the moving subject image that has been detected, from among the first corresponding points that have been decided;
decide a plurality of second feature points of an image within the set transformation target area of the first image;
decide second corresponding points, which correspond to the second feature points that have been decided, in an image within the set transformation target area of the second image;
generate an intermediate moving subject image by transforming the image within the transformation target area of the first image or second image in such a manner that this image is positioned at intermediate points present at positions that are intermediate the second feature points and the second corresponding points that have been decided; and
generate an image that is intermediate the first and second images using the intermediate moving subject image that has been generated.
US12/764,847 2009-05-01 2010-04-21 Intermediate image generating apparatus and method of controlling operation of same Active 2031-05-07 US8280170B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009111816A JP5149861B2 (en) 2009-05-01 2009-05-01 Intermediate image generation apparatus and operation control method thereof
JP2009-111816 2009-05-01

Publications (2)

Publication Number Publication Date
US20100278433A1 US20100278433A1 (en) 2010-11-04
US8280170B2 true US8280170B2 (en) 2012-10-02

Family

ID=43030390

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/764,847 Active 2031-05-07 US8280170B2 (en) 2009-05-01 2010-04-21 Intermediate image generating apparatus and method of controlling operation of same

Country Status (2)

Country Link
US (1) US8280170B2 (en)
JP (1) JP5149861B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130182105A1 (en) * 2012-01-17 2013-07-18 National Taiwan University of Science and Technology Activity recognition method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5226600B2 (en) * 2009-04-28 2013-07-03 富士フイルム株式会社 Image deformation apparatus and operation control method thereof
CN110110189A (en) * 2018-02-01 2019-08-09 北京京东尚科信息技术有限公司 Method and apparatus for generating information
CN109788190B (en) * 2018-12-10 2021-04-06 北京奇艺世纪科技有限公司 Image processing method and device, mobile terminal and storage medium
RU2727263C1 (en) * 2020-01-10 2020-07-21 Федеральное государственное бюджетное учреждение науки Институт химии нефти Сибирского отделения Российской академии наук (ИХН СО РАН) Vibration viscometer for thixotropic liquids

Citations (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5241608A (en) * 1988-11-25 1993-08-31 Eastman Kodak Company Method for estimating velocity vector fields from a time-varying image sequence
US5495300A (en) * 1992-06-11 1996-02-27 U.S. Philips Corporation Motion-compensated picture signal interpolation
US5625417A (en) * 1995-08-08 1997-04-29 Daewoo Electronics Co. Ltd. Image processing system using a feature point-based motion estimation
US5627591A (en) * 1995-08-08 1997-05-06 Daewoo Electronics Co. Ltd. Image processing system using a pixel-by-pixel motion estimation based on feature points
US5638129A (en) * 1995-03-20 1997-06-10 Daewoo Electronics Co. Ltd. Image processing apparatus using a pixel-by-pixel motion estimation based on feature points
US5701163A (en) * 1995-01-18 1997-12-23 Sony Corporation Video processing method and apparatus
US5757668A (en) * 1995-05-24 1998-05-26 Motorola Inc. Device, method and digital video encoder of complexity scalable block-matching motion estimation utilizing adaptive threshold termination
US5825917A (en) * 1994-09-30 1998-10-20 Sanyo Electric Co., Ltd. Region-based image processing method, image processing apparatus and image communication apparatus
US6181747B1 (en) * 1996-02-01 2001-01-30 Hughes Electronics Corporation Methods and systems for high compression rate encoding and decoding of quasi-stable objects in video and film
US6414685B1 (en) * 1997-01-29 2002-07-02 Sharp Kabushiki Kaisha Method of processing animation by interpolation between key frames with small data quantity
US6526173B1 (en) * 1995-10-31 2003-02-25 Hughes Electronics Corporation Method and system for compression encoding video signals representative of image frames
US20030081679A1 (en) * 2001-11-01 2003-05-01 Martti Kesaniemi Image interpolation
US6625333B1 (en) * 1999-08-06 2003-09-23 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry Through Communications Research Centre Method for temporal interpolation of an image sequence using object-based image analysis
US20030189980A1 (en) * 2001-07-02 2003-10-09 Moonlight Cordless Ltd. Method and apparatus for motion estimation between video frames
US20030202780A1 (en) * 2002-04-25 2003-10-30 Dumm Matthew Brian Method and system for enhancing the playback of video frames
US6724915B1 (en) * 1998-03-13 2004-04-20 Siemens Corporate Research, Inc. Method for tracking a video object in a time-ordered sequence of image frames
JP3548071B2 (en) 2000-02-08 2004-07-28 三洋電機株式会社 Intermediate image synthesizing method, intermediate image synthesizing device, recording medium storing intermediate image synthesizing program
US20040252230A1 (en) * 2003-06-13 2004-12-16 Microsoft Corporation Increasing motion smoothness using frame interpolation with motion analysis
US20050093987A1 (en) * 2003-09-19 2005-05-05 Haruo Hatanaka Automatic stabilization control apparatus, automatic stabilization control method, and recording medium having automatic stabilization control program recorded thereon
US20050238101A1 (en) * 2004-04-07 2005-10-27 Markus Schu Method and device for determination of motion vectors that are coordinated with regions of an image
US20060072664A1 (en) * 2004-10-04 2006-04-06 Kwon Oh-Jae Display apparatus
US7079144B1 (en) * 1999-02-26 2006-07-18 Sony Corporation Curve generating device and method, and program recorded medium
US20060274834A1 (en) * 2005-06-03 2006-12-07 Marko Hahn Method and device for determining motion vectors
US20060285595A1 (en) * 2005-06-21 2006-12-21 Samsung Electronics Co., Ltd. Intermediate vector interpolation method and three-dimensional (3D) display apparatus performing the method
JP3873017B2 (en) 2002-09-30 2007-01-24 株式会社東芝 Frame interpolation method and apparatus
US20070189386A1 (en) * 2005-06-22 2007-08-16 Taro Imagawa Image generation apparatus and image generation method
US20070297513A1 (en) * 2006-06-27 2007-12-27 Marvell International Ltd. Systems and methods for a motion compensated picture rate converter
US20080008243A1 (en) * 2006-05-31 2008-01-10 Vestel Elektronik Sanayi Ve Ticaret A.S. Method and apparatus for frame interpolation
US20080211968A1 (en) * 2006-12-19 2008-09-04 Tomokazu Murakami Image Processor and Image Display Apparatus Comprising the Same
US20080231745A1 (en) * 2007-03-19 2008-09-25 Masahiro Ogino Video Processing Apparatus and Video Display Apparatus
US20080304568A1 (en) * 2007-06-11 2008-12-11 Himax Technologies Limited Method for motion-compensated frame rate up-conversion
US20080317127A1 (en) * 2007-06-19 2008-12-25 Samsung Electronics Co., Ltd System and method for correcting motion vectors in block matching motion estimation
US20090147132A1 (en) * 2007-12-07 2009-06-11 Fujitsu Limited Image interpolation apparatus
US20090208137A1 (en) * 2008-02-18 2009-08-20 Minoru Urushihara Image processing apparatus and image processing method
US20090274434A1 (en) * 2008-04-29 2009-11-05 Microsoft Corporation Video concept detection using multi-layer multi-instance learning
US20100008424A1 (en) * 2005-03-31 2010-01-14 Pace Charles P Computer method and apparatus for processing image data
US7676063B2 (en) * 2005-03-22 2010-03-09 Microsoft Corp. System and method for eye-tracking and blink detection
US20100079665A1 (en) * 2008-09-26 2010-04-01 Kabushiki Kaisha Toshiba Frame Interpolation Device
US20100104140A1 (en) * 2008-10-23 2010-04-29 Samsung Electronics Co., Ltd. Apparatus and method for improving frame rate using motion trajectory
US20100271501A1 (en) * 2009-04-28 2010-10-28 Fujifilm Corporation Image transforming apparatus and method of controlling operation of same

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3957816B2 (en) * 1997-06-05 2007-08-15 富士通株式会社 Inter-frame interpolation image processing device
JP4967938B2 (en) * 2007-09-06 2012-07-04 株式会社ニコン Program, image processing apparatus, and image processing method

Patent Citations (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5241608A (en) * 1988-11-25 1993-08-31 Eastman Kodak Company Method for estimating velocity vector fields from a time-varying image sequence
US5495300A (en) * 1992-06-11 1996-02-27 U.S. Philips Corporation Motion-compensated picture signal interpolation
US5825917A (en) * 1994-09-30 1998-10-20 Sanyo Electric Co., Ltd. Region-based image processing method, image processing apparatus and image communication apparatus
US5701163A (en) * 1995-01-18 1997-12-23 Sony Corporation Video processing method and apparatus
US5638129A (en) * 1995-03-20 1997-06-10 Daewoo Electronics Co. Ltd. Image processing apparatus using a pixel-by-pixel motion estimation based on feature points
US5757668A (en) * 1995-05-24 1998-05-26 Motorola Inc. Device, method and digital video encoder of complexity scalable block-matching motion estimation utilizing adaptive threshold termination
US5625417A (en) * 1995-08-08 1997-04-29 Daewoo Electronics Co. Ltd. Image processing system using a feature point-based motion estimation
US5627591A (en) * 1995-08-08 1997-05-06 Daewoo Electronics Co. Ltd. Image processing system using a pixel-by-pixel motion estimation based on feature points
US6526173B1 (en) * 1995-10-31 2003-02-25 Hughes Electronics Corporation Method and system for compression encoding video signals representative of image frames
US6181747B1 (en) * 1996-02-01 2001-01-30 Hughes Electronics Corporation Methods and systems for high compression rate encoding and decoding of quasi-stable objects in video and film
US20020130873A1 (en) * 1997-01-29 2002-09-19 Sharp Kabushiki Kaisha Method of processing animation by interpolation between key frames with small data quantity
US6414685B1 (en) * 1997-01-29 2002-07-02 Sharp Kabushiki Kaisha Method of processing animation by interpolation between key frames with small data quantity
US6724915B1 (en) * 1998-03-13 2004-04-20 Siemens Corporate Research, Inc. Method for tracking a video object in a time-ordered sequence of image frames
US7079144B1 (en) * 1999-02-26 2006-07-18 Sony Corporation Curve generating device and method, and program recorded medium
US6625333B1 (en) * 1999-08-06 2003-09-23 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry Through Communications Research Centre Method for temporal interpolation of an image sequence using object-based image analysis
JP3548071B2 (en) 2000-02-08 2004-07-28 三洋電機株式会社 Intermediate image synthesizing method, intermediate image synthesizing device, recording medium storing intermediate image synthesizing program
US20030189980A1 (en) * 2001-07-02 2003-10-09 Moonlight Cordless Ltd. Method and apparatus for motion estimation between video frames
US20030081679A1 (en) * 2001-11-01 2003-05-01 Martti Kesaniemi Image interpolation
US20030202780A1 (en) * 2002-04-25 2003-10-30 Dumm Matthew Brian Method and system for enhancing the playback of video frames
JP3873017B2 (en) 2002-09-30 2007-01-24 株式会社東芝 Frame interpolation method and apparatus
US20040252230A1 (en) * 2003-06-13 2004-12-16 Microsoft Corporation Increasing motion smoothness using frame interpolation with motion analysis
US20050093987A1 (en) * 2003-09-19 2005-05-05 Haruo Hatanaka Automatic stabilization control apparatus, automatic stabilization control method, and recording medium having automatic stabilization control program recorded thereon
US20050238101A1 (en) * 2004-04-07 2005-10-27 Markus Schu Method and device for determination of motion vectors that are coordinated with regions of an image
US7933332B2 (en) * 2004-04-07 2011-04-26 Trident Microsystems (Far East) Ltd. Method and device for determination of motion vectors that are coordinated with regions of an image
US20060072664A1 (en) * 2004-10-04 2006-04-06 Kwon Oh-Jae Display apparatus
US7676063B2 (en) * 2005-03-22 2010-03-09 Microsoft Corp. System and method for eye-tracking and blink detection
US20100008424A1 (en) * 2005-03-31 2010-01-14 Pace Charles P Computer method and apparatus for processing image data
US20060274834A1 (en) * 2005-06-03 2006-12-07 Marko Hahn Method and device for determining motion vectors
US7894529B2 (en) * 2005-06-03 2011-02-22 Trident Microsystems (Far East) Ltd. Method and device for determining motion vectors
US20060285595A1 (en) * 2005-06-21 2006-12-21 Samsung Electronics Co., Ltd. Intermediate vector interpolation method and three-dimensional (3D) display apparatus performing the method
US20070189386A1 (en) * 2005-06-22 2007-08-16 Taro Imagawa Image generation apparatus and image generation method
US20080008243A1 (en) * 2006-05-31 2008-01-10 Vestel Elektronik Sanayi Ve Ticaret A.S. Method and apparatus for frame interpolation
US20070297513A1 (en) * 2006-06-27 2007-12-27 Marvell International Ltd. Systems and methods for a motion compensated picture rate converter
US20080211968A1 (en) * 2006-12-19 2008-09-04 Tomokazu Murakami Image Processor and Image Display Apparatus Comprising the Same
US20080231745A1 (en) * 2007-03-19 2008-09-25 Masahiro Ogino Video Processing Apparatus and Video Display Apparatus
US20080304568A1 (en) * 2007-06-11 2008-12-11 Himax Technologies Limited Method for motion-compensated frame rate up-conversion
US20080317127A1 (en) * 2007-06-19 2008-12-25 Samsung Electronics Co., Ltd System and method for correcting motion vectors in block matching motion estimation
US20090147132A1 (en) * 2007-12-07 2009-06-11 Fujitsu Limited Image interpolation apparatus
US20090208137A1 (en) * 2008-02-18 2009-08-20 Minoru Urushihara Image processing apparatus and image processing method
US20090274434A1 (en) * 2008-04-29 2009-11-05 Microsoft Corporation Video concept detection using multi-layer multi-instance learning
US20100079665A1 (en) * 2008-09-26 2010-04-01 Kabushiki Kaisha Toshiba Frame Interpolation Device
US20100104140A1 (en) * 2008-10-23 2010-04-29 Samsung Electronics Co., Ltd. Apparatus and method for improving frame rate using motion trajectory
US20100271501A1 (en) * 2009-04-28 2010-10-28 Fujifilm Corporation Image transforming apparatus and method of controlling operation of same

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130182105A1 (en) * 2012-01-17 2013-07-18 National Taiwan University of Science and Technology Activity recognition method
US8928816B2 (en) * 2012-01-17 2015-01-06 National Taiwan University Of Science And Technology Activity recognition method

Also Published As

Publication number Publication date
JP5149861B2 (en) 2013-02-20
US20100278433A1 (en) 2010-11-04
JP2010262428A (en) 2010-11-18

Similar Documents

Publication Publication Date Title
US8279291B2 (en) Image transforming apparatus using plural feature points and method of controlling operation of same
JP4823179B2 (en) Imaging apparatus and imaging control method
KR101303410B1 (en) Image capture apparatus and image capturing method
CN101795355B (en) Imaging apparatus and image processing method
US8253812B2 (en) Video camera which adopts a focal-plane electronic shutter system
CN102739961B (en) Can the image processing apparatus of generating wide angle
JP2006270238A (en) Image processing apparatus, electronic camera, and image processing program
KR20090071471A (en) Imaging device and its shutter drive mode selection method
US9167173B2 (en) Image capturing apparatus configured to perform a cyclic shooting operation and method of controlling the same
US8280170B2 (en) Intermediate image generating apparatus and method of controlling operation of same
US20110211038A1 (en) Image composing apparatus
KR100819811B1 (en) Photographing apparatus, and photographing method
US8836821B2 (en) Electronic camera
US8120668B2 (en) Electronic camera for adjusting a parameter for regulating an image quality based on the image data outputted from an image sensor
US20130089270A1 (en) Image processing apparatus
US20100182460A1 (en) Image processing apparatus
US11736658B2 (en) Image pickup apparatus, image pickup method, and storage medium
US11206350B2 (en) Image processing apparatus, image pickup apparatus, image processing method, and storage medium
JP5641352B2 (en) Image processing apparatus, image processing method, and program
JP5347624B2 (en) Imaging device, image processing method of imaging device, and image processing program
JP2014023121A (en) Image processing device and program
JP2022112878A (en) Image processing apparatus, image processing method, and program
JP2010239514A (en) Imaging system, image processing method and program
JP2014145974A (en) Imaging apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJIFILM CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OOISHI, MAKOTO;REEL/FRAME:024297/0518

Effective date: 20100401

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY