WO2004093011A1 - Generation of a still image from a plurality of frame images - Google Patents
- Publication number
- WO2004093011A1 (PCT/JP2004/005514; JP2004005514W)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Definitions
- the present invention relates to a technique for generating a still image by synthesizing a plurality of frame images included in a moving image.
- one scene of a moving image captured by a digital video camera or the like is extracted to generate a still image having a higher resolution than a frame image.
- Such a still image is generated by superimposing and composing a plurality of frame images included in a moving image.
- Japanese Patent Application Laid-Open No. 2000-244845 discloses a technique in which one frame image is selected as a reference image from (n + 1) consecutive frame images, a motion vector of each of the n other frame images (target images) relative to the reference image is calculated, and a still image is generated by combining the (n + 1) frame images based on each motion vector.
- Japanese Patent Application Laid-Open No. 6-350974 discloses a technique for generating a still image from an interlaced moving image, in which one of the two fields paired in the interlaced mode is used as a reference image, the other field is used as a target image, and whether or not the target image is suitable for combination is determined for each field.
- in general, increasing the number of frame images to be synthesized improves the image quality of the still image.
- however, increasing the number of frame images to be synthesized does not always improve the image quality.
- FIG. 1 is an explanatory diagram showing a method of synthesizing a reference image and an image to be synthesized.
- the upper part of Fig. 1 shows how the reference image and the compositing target image are arranged with the amount of deviation corrected.
- the lower part of Fig. 1 shows the reference image, the compositing target image, and the positional relationship of each pixel of the composite image.
- “○” represents a pixel of the reference image.
- a second marker indicates a pixel of the image to be combined.
- the hatched circles on the broken-line grid represent pixels of the composite image.
- here, the resolutions of the reference image and the image to be synthesized are the same, and the resolution is increased 1.5 times in both the x-axis and y-axis directions relative to the frame image.
- consider the pixel g1 of the composite image. This pixel g1 coincides with the pixel t1 of the reference image.
- the tone value at the position of pixel g1 in the image to be combined is obtained by the bilinear method, and the tone value of pixel g1 is determined by averaging this with the tone value of pixel t1.
- the gradation value of the pixel g2 of the composite image is determined by the following procedure. First, the tone value at the position of pixel g2 is obtained by the bilinear method based on the tone values of the four pixels t2 to t5 of the reference image surrounding pixel g2. Next, the tone value at the position of pixel g2 is obtained by the bilinear method based on the tone values of the four pixels s4 to s7 of the synthesis target image surrounding pixel g2. Then, the gradation value of pixel g2 is determined by averaging the two.
- the gradation values of other pixels can be determined in the same manner as described above.
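The per-pixel procedure described above can be sketched in Python. This is a minimal illustration only, not the patent's implementation; the function names, the nested-list image representation, and the shift handling are assumptions for the example.

```python
def bilinear(img, x, y):
    """Bilinearly interpolate the tone value of `img` at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def composite_pixel(ref, tgt, gx, gy, scale, shift):
    """Tone value of composite pixel (gx, gy): average of the bilinear
    estimates from the reference image and the shift-corrected target."""
    x, y = gx / scale, gy / scale        # position in reference coordinates
    dx, dy = shift                       # target's displacement vs. reference
    return 0.5 * (bilinear(ref, x, y) + bilinear(tgt, x - dx, y - dy))
```

With a 1.5× resolution increase, composite pixel (gx, gy) would use `scale = 1.5`, so that it samples position (gx/1.5, gy/1.5) in frame coordinates.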
- the explanation above assumes that the resolutions of the reference image and the target image are the same. If they differ, the images may be enlarged or reduced as appropriate, and then similar processing may be performed.
- FIG. 2 is an explanatory diagram showing a synthesizing method when the shift amount between the reference image and the synthesis target image is 0.
- the upper part of Fig. 2 shows how the reference image and the compositing target image are arranged with the misalignment corrected. Since the shift amount is 0, the reference image and the compositing target image completely overlap.
- the lower part of Fig. 2 shows the positional relationship of each pixel of the reference image, the image to be synthesized, and the synthesized image. Since the reference image and the image to be synthesized overlap, each pixel of the reference image and of the compositing target image exists at the same position.
- the gradation value of the pixel g2 of the composite image is determined by the same procedure: first, the tone value at the position of pixel g2 is obtained by the bilinear method from the four pixels t2 to t5 of the reference image surrounding pixel g2; next, the tone value at the position of pixel g2 is obtained by the bilinear method from the four pixels s2 to s5 of the synthesis target image surrounding pixel g2; then, the tone value of pixel g2 is determined by averaging the two.
- however, when the shift amount is 0, the tone value at the position of pixel g2 obtained by the bilinear method from pixels t2 to t5 and the tone value obtained by the bilinear method from pixels s2 to s5 are the same value.
- their average is therefore also the same as the tone value obtained from pixels t2 to t5 alone (or from pixels s2 to s5 alone). In other words, combining the target image adds nothing that the reference image alone would not yield.
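The point can be checked numerically: with a shift amount of 0 the two bilinear estimates coincide, so their average equals either one alone. A self-contained sketch with made-up tone values (per the text's premise, zero shift means the target samples the same positions, hence the same values):

```python
def bilerp(p00, p10, p01, p11, fx, fy):
    # Bilinear interpolation from four surrounding tone values.
    top = p00 * (1 - fx) + p10 * fx
    bot = p01 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bot * fy

# Hypothetical tone values of pixels t2..t5 of the reference image.
t2, t3, t4, t5 = 10.0, 20.0, 30.0, 40.0
# Zero shift: pixels s2..s5 of the target sit at the same positions.
s2, s3, s4, s5 = t2, t3, t4, t5

from_ref = bilerp(t2, t3, t4, t5, 0.25, 0.75)
from_tgt = bilerp(s2, s3, s4, s5, 0.25, 0.75)
average = (from_ref + from_tgt) / 2
# average == from_ref: the target contributes no new information
```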
- the present invention has been made to solve the above-described problems, and has as its object to efficiently improve the image quality of a still image when a still image is generated from a plurality of frame images included in a moving image.
- an image generation device of the present invention is an image generation device that generates a still image from a plurality of frame images included in a moving image, comprising: a combining target setting unit that sets, as synthesis target frame image regions, at least one region included in the frame images other than a reference frame image among the plurality of frame images, based on a predetermined rule set in relation to a reference frame image region in the reference frame image; a comparison reference extraction unit that extracts one comparison reference frame image region from among the reference frame image region and the synthesis target frame image regions; a target extraction unit that extracts one target frame image region from the synthesis target frame image regions other than the comparison reference frame image region; a comparison unit that compares the comparison reference frame image region with the target frame image region to obtain a predetermined parameter; an exclusion unit that excludes the target frame image region from the synthesis target frame image regions if the parameter does not satisfy a predetermined criterion; and a composite image generation unit that combines the reference frame image region and the synthesis target frame image regions to generate a composite image region.
- a frame image is a still image that can be displayed in a progressive system (also referred to as a non-interlaced system). Therefore, in the case of the interlace method, an image composed of a plurality of field images (odd field and even field) having different rasters corresponds to the frame image of the present invention.
- the area of a frame image that does not satisfy a predetermined criterion is excluded from the target of synthesis, and the number of frame image areas to be synthesized is secured to a predetermined number or more, so that the image quality of the synthesized image can be efficiently improved.
- the predetermined criterion is that, for example, if the parameter obtained by the comparison unit is an image shift amount, the image shift amount is equal to or larger than a threshold. Details will be described later.
- the predetermined number can be set arbitrarily, but is preferably two or more.
- the compositing target setting unit may set, as the synthesis target frame image regions, the regions of several frame images chronologically continuous with the reference frame image in the moving image, or may set them from every few frames.
- for the composite processing performed by the composite image generation unit, various well-known image interpolation methods such as the nearest neighbor method, the bilinear method, and the bicubic method can be applied.
- a method that allows high-speed processing has a simplified procedure, and therefore has lower interpolation accuracy and lower image quality than a method with a more complicated procedure.
- the nearest neighbor method, the bilinear method, and the bicubic method become more complicated in this order, and the processing time increases accordingly.
- in return, the interpolation accuracy increases and the image quality improves in the same order.
- therefore, when the total number of the synthesis target frame image regions and the reference frame image region is large, the synthesized image generation unit can perform high-speed synthesis processing using the nearest neighbor method or the like; when the total number is small, it can perform high-quality synthesis processing using the bicubic method or the like.
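One way to read this trade-off in code. The switching thresholds below are purely hypothetical; the text only says that speed-oriented methods suit many regions and quality-oriented methods suit few:

```python
def choose_interpolation(num_regions):
    """Pick an interpolation method by the total number of image regions:
    many regions favor speed, few regions favor interpolation accuracy."""
    if num_regions >= 8:            # hypothetical cutoff for "large"
        return "nearest_neighbor"   # fastest, lowest accuracy
    if num_regions >= 4:
        return "bilinear"           # middle ground
    return "bicubic"                # slowest, highest accuracy
```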
- the image generating apparatus may further include: a setting unit configured to set, in the reference frame image, an area serving as a reference for synthesis as the reference frame image area; and a frame number control unit that repeats the processing of the combination target setting unit, the comparison reference extraction unit, the target extraction unit, the comparison unit, and the exclusion unit until the total number of the reference frame image area and the synthesis target frame image areas satisfying the criterion becomes equal to or more than a predetermined number.
- the image generation device may further include a designation receiving unit that receives the designation of the reference frame image, and the setting unit may use the designated frame image as the reference frame image.
- the user can select a frame image to be a still image from a moving image and use it as a reference frame image.
- the comparison reference extraction unit may set the reference frame image area as the comparison reference frame image area.
- since the reference frame image area is the area of the image serving as the reference for synthesis, it is desirable that the reference frame image area is first set as the comparison reference frame image area. Using the reference frame image area as the comparison reference frame image area, it is determined whether or not each synthesis target frame image area is worth combining. One of the remaining areas can then be used as the next comparison reference frame image area.
- consider, for example, a case where frame image area 1 is the reference frame image area and frame image areas 2 and 3 are the synthesis target frame image areas.
- first, frame image area 1 is used as the comparison reference frame image area, and the shift amount from frame image area 2 and the shift amount from frame image area 3 are obtained. If the obtained shift amounts are each equal to or larger than a predetermined value, frame image area 2 is next set as the comparison reference frame image area.
- this is because, if the displacement between frame image area 2 and frame image area 3 is 0, it is not necessary to combine all of frame image areas 1, 2, and 3; it is sufficient to combine frame image areas 1 and 2, or frame image areas 1 and 3.
- by using frame image area 2 as the comparison reference frame image area and frame image area 3 as the target frame image area, it becomes possible to exclude frame image area 3 from the compositing target.
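The chained selection in this example can be sketched as a greedy loop. `shift_amount` stands in for whatever displacement detector is used, and the threshold is hypothetical:

```python
def select_for_synthesis(regions, shift_amount, threshold=0.1):
    """Greedy selection: the first region is the reference; each kept region
    becomes, in turn, a comparison reference, and a candidate is excluded
    when it is too close to any region already kept."""
    kept = [regions[0]]                  # reference frame image area
    for candidate in regions[1:]:
        if all(shift_amount(r, candidate) >= threshold for r in kept):
            kept.append(candidate)
    return kept

# Toy model: regions are 1-D positions, the shift is their distance.
# Area 3 coincides with area 2, so it is excluded from the compositing target.
areas = [0.0, 0.5, 0.5]                  # areas 1, 2, 3
chosen = select_for_synthesis(areas, lambda a, b: abs(a - b))
```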
- the image generating apparatus may further include an elimination unit that removes, from the compositing target frame image area, an area of a frame image whose characteristic as an area of one frame image satisfies a predetermined condition.
- the predetermined conditions include, for example, large noise, out of focus, and abnormal color gradation due to a hand covering the lens.
- the elimination unit can remove such a frame image region from the composition target in advance.
- the parameter may be an image shift amount.
- the image shift amount occurs, for example, due to camera shake or camera turn. If the amount of image shift is too small, the frame image area to be synthesized cannot substantially improve the image quality of the synthesized image area. According to the present invention, it is possible to exclude a frame image region to be synthesized that is not very useful for improving the image quality of the synthesized image region from the synthesis target.
- the image shift amount may include at least one of a translational shift amount and a rotational shift amount.
- the amount of translational deviation can be detected by various methods such as a block matching method, a gradient method, and a method combining these.
- the amount of rotational shift can also be detected by geometric calculation. If the parameter is the image shift amount, the predetermined criterion mentioned above is that the image shift amount is equal to or larger than a threshold value.
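A minimal block-matching sketch for the translational shift amount (exhaustive search over small integer displacements, minimizing the mean squared difference). This illustrates the general technique the text names, not the patent's concrete detector:

```python
def block_match(ref, tgt, max_shift=2):
    """Return the integer (dx, dy) minimizing the mean squared difference
    between ref[y][x] and tgt[y + dy][x + dx] over the overlapping area."""
    h, w = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    ty, tx = y + dy, x + dx
                    if 0 <= ty < h and 0 <= tx < w:
                        err += (ref[y][x] - tgt[ty][tx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

# Toy target: the reference pattern shifted one pixel to the right.
ref = [[4 * y + x for x in range(4)] for y in range(4)]
tgt = [[99] + row[:-1] for row in ref]   # 99 fills the uncovered column
```

A real implementation could refine this with the gradient method for sub-pixel accuracy, as the text suggests combining the two.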
- the comparison unit may include: a frame shift amount calculation unit that obtains the image shift amount of the target frame image including the target frame image region relative to the comparison reference frame image including the comparison reference frame image region; and an area shift amount calculation unit that obtains the image shift amount of the target frame image area relative to the comparison reference frame image area based on the image shift amount obtained by the frame shift amount calculation unit.
- the shift amount of the area can be easily obtained based on the image shift amount of the frame image. Even if the image shift amount of the frame image includes the rotational shift amount, the shift amount of each area can be approximated by the translational shift amount in some cases. In some cases, an image that is not worth combining as a whole frame image can be used for combining by dividing it into regions. The shift amount of the area may be directly calculated without obtaining the image shift amount of the frame image.
- the parameter may be an image difference obtained by comparing a feature amount of a pixel at the same position in the target frame image region and the comparison reference frame image region.
- the feature quantity may be a color gradation or luminance.
- a frame image region having almost no image difference from the comparison reference frame image region can be excluded from the compositing target. Even if frame image areas with the same content are combined, only an image with that same content is obtained, and the image quality does not improve. Therefore, a frame image area with the same content as the comparison reference frame image area is excluded from the frame image areas to be combined.
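The image-difference test can be sketched as follows. The mean absolute difference and the near-zero tolerance are assumptions; the text only requires comparing feature amounts of pixels at the same positions:

```python
def image_difference(a, b):
    """Mean absolute difference of tone values at identical positions."""
    total = count = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def worth_combining(target, comparison_ref, eps=1e-9):
    # Criterion: the image difference is not (effectively) zero.
    return image_difference(target, comparison_ref) > eps
```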
- the present invention is particularly effective in excluding frame image regions having the same content from the combination target when frame images with identical content are consecutive, such as when the frame rate of a moving image has been converted.
- in this case, the predetermined criterion mentioned above is that the image difference is not zero.
- the parameter may be a comparison of an average value of pixel feature amounts in the target frame image region and the comparison reference frame image region.
- a frame image region whose characteristics differ from those of the comparison reference frame image region can be excluded from the synthesis target. If a frame image area that is clearly abnormal compared to the comparison reference frame image area is used for synthesis, the image quality of the synthesized image will also be abnormal, so such an obviously abnormal frame image area is excluded from the frame image areas to be synthesized.
- in this case, the predetermined criterion described above is that the difference between the average values of the pixel feature amounts is not excessively large.
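The average-feature test can be sketched similarly; the luminance-like averages and the gap separating "clearly abnormal" from acceptable are hypothetical:

```python
def mean_feature(region):
    """Average pixel feature amount (e.g. luminance) over a region."""
    values = [p for row in region for p in row]
    return sum(values) / len(values)

def not_abnormal(target, comparison_ref, max_gap=64):
    # Keep the target only if its average feature is close to the
    # comparison reference's; a large gap (e.g. a hand over the lens,
    # driving the region dark) marks the region abnormal.
    return abs(mean_feature(target) - mean_feature(comparison_ref)) <= max_gap
```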
- the reference frame image area and the compositing target frame image area are areas in which the frame image is divided in the same format, and the target extraction unit is configured to output the comparison reference frame image area The target frame image area at the same position corresponding to the above may be extracted.
- according to the present invention, it is possible to determine whether or not a frame image is to be combined for each region divided in the same format. By discriminating for each area, even a frame image that would be uniformly excluded from the synthesis target as a whole can still serve as a compositing target in certain areas. As a result, the quality of the composite image can be improved.
- the present invention can be configured as an invention of an image generating method in addition to the configuration as the image generating apparatus described above. Further, the present invention can be realized in various forms such as a computer program for realizing the above, a recording medium on which the program is recorded, and a data signal including the program and embodied in a carrier wave. In each embodiment, the various additional elements described above can be applied.
- the present invention is configured as a computer program or a recording medium on which the program is recorded, the present invention may be configured as an entire program for controlling the operation of the image generating apparatus, or may be configured only as a portion that performs the functions of the present invention. You may do it.
- As the recording medium, various computer-readable media can be used, such as flexible disks, CD-ROMs, DVD-ROMs, magneto-optical disks, IC cards, ROM cartridges, punched cards, printed materials on which codes such as barcodes are printed, internal storage devices of a computer (RAM, ROM, and the like), and external storage devices.
- FIG. 1 is an explanatory diagram showing a method of synthesizing a reference image and a target image.
- FIG. 2 is an explanatory diagram showing a synthesizing method when the shift amount between the reference image and the target image is zero.
- FIG. 3 is an explanatory diagram showing a schematic configuration of the image generating apparatus 100 as the first embodiment of the present invention.
- FIG. 4 is an explanatory diagram conceptually showing how a still image is generated by combining a plurality of frame images in the first embodiment.
- FIG. 5 is an explanatory diagram showing a shift amount between the comparison reference image and the target image.
- FIG. 6 is an explanatory diagram showing a method of calculating the amount of translational deviation by the gradient method.
- FIG. 7 is an explanatory diagram showing a method of calculating a rotation shift.
- FIG. 8 is a flowchart showing a flow of a still image generation process in the first embodiment.
- FIG. 9 is a flowchart showing a frame image input process.
- FIG. 10 is an explanatory diagram illustrating a schematic configuration of an image generating apparatus 100 # according to a second embodiment of the present invention.
- FIG. 11 is an explanatory diagram illustrating a shift amount of a block between the comparison reference image and the target image.
- FIG. 12 is an explanatory diagram showing a frame image divided into blocks.
- FIG. 13 is a flowchart showing the flow of the still image generation process in the second embodiment.
- FIG. 14 is an explanatory diagram showing how a panoramic image is generated.
- A1. Configuration of the image generation device:
- FIG. 3 is an explanatory diagram showing a schematic configuration of the image generating apparatus 100 as the first embodiment of the present invention.
- the image generation device 100 is a device that combines a plurality of frame images included in a moving image to generate a still image having a higher resolution than the frame image.
- the image generating apparatus 100 is configured by installing predetermined application software on a general-purpose personal computer, and has the functional blocks shown in software.
- the personal computer includes, in addition to a CPU, a ROM, and a RAM, an interface for inputting a moving image from a recording medium such as a hard disk, a DVD-ROM, or a memory card. It also has a function to play back input moving images.
- the control unit 10 controls each unit.
- the frame image input unit 20 inputs a frame image included in a moving image.
- the frame image input unit 20 inputs four consecutive frame images in time series from the timing at which the user inputs a pause instruction during reproduction of a moving image.
- the number of frame images input here is the number of frame images used for synthesizing still images.
- the frame image input section 20 inputs the four frame images and, at the same time, inputs 20 further frame images continuing in time series, storing them separately in the frame image storage section 30.
- the 20 frame images are spare frame images that are newly synthesized candidates when the previous four frame images are not suitable for synthesizing a still image.
- the 20 frame images are referred to as spare frame images.
- the preceding four frame images are called selected frame images.
- the frame image input unit 20 also performs a process of changing the spare frame image to the selected frame image.
- the number of frame images input by the frame image input unit 20 may be set arbitrarily by the user. Also, the input frame images need not be continuous in time series.
- furthermore, the frame image at the timing at which the instruction is input need not be the first one: it may be, for example, the second or third of the frame images successively input in time series.
- the frame image storage unit 30 stores a plurality of frame images input by the frame image input unit 20.
- the elimination unit 50 removes abnormal frame images from the selected frame images stored in the frame image storage unit 30, evaluating them on a per-frame basis. For example, a frame image with large noise, a frame image that is out of focus, or a selected frame image with abnormal color gradation due to a hand covering the lens is removed.
- the frame image input unit 20 newly changes the spare frame image to the selected frame image.
- the spare frame image to be changed is a spare frame image that is chronologically continuous with the image previously selected as the selected frame image.
- the elimination unit 50 checks the image that has newly become the selected frame image, and removes the abnormal selected frame image. Until the number of selected frame images determined to be normal by the elimination unit 50 becomes four, the elimination of the selected frame image and the change from the spare frame image to the selected frame image are repeated.
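The elimination-and-replenishment loop described above can be sketched as follows; `is_abnormal` stands in for the elimination unit's checks (noise, focus, color gradation), and the names are illustrative:

```python
def fill_selected(frames, is_abnormal, needed=4):
    """Walk the frames in chronological order (the initially selected
    frames first, then the spare frames), skipping abnormal ones, until
    `needed` normal selected frames have been gathered."""
    selected = []
    for frame in frames:
        if is_abnormal(frame):
            continue        # eliminated; the next spare takes its place
        selected.append(frame)
        if len(selected) == needed:
            break
    return selected
```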
- the reference image designation receiving unit 25 displays the selected frame image on the monitor.
- the user specifies a frame image to be used as a reference image from the displayed selected frame images.
- the reference image specification receiving unit 25 receives the specification.
- the reference image setting unit 40 sets the selected frame image designated by the reference image designation receiving unit 25 as a reference image.
- the reference image designation receiving unit 25 may not be provided, and the selected frame image input first by the frame image input unit 20 among the selected frame images may be set as the reference image.
- a functional block for analyzing the feature amount (eg, edge strength) of each selected frame image is provided, and a reference image is set based on the analysis result. You may do so.
- the comparison target setting unit 45 sets a selected frame image other than the reference image among the selected frame images as a comparison target image.
- the comparison reference image setting section 90 sets either the reference image or one of the comparison target images as the comparison reference image. First, however, the reference image is set as the comparison reference image.
- the comparison target resetting unit 85 resets a comparison target image other than the comparison reference image as a comparison target image.
- the target image setting unit 65 sets one of the comparison target images as the target image as a target for detecting a shift amount from the next comparison reference image. In the present embodiment, as will be described later, the comparison target images are set as target images in the order of input (change) by the frame image input unit 20.
- the shift amount detector 60 detects a shift amount of the target image with respect to the reference image.
- in the present embodiment, the translational shift amount is detected as the shift amount.
- the detection of the shift amount will be described later.
- the exclusion unit 80 excludes the target image from the comparison target image if the shift amount detected by the shift amount detection unit 60 does not satisfy a predetermined criterion.
- the determination unit 70 determines whether the total number of the comparison reference image and the comparison target image is four.
- if the total is four, the synthesized image generation unit 75 performs resolution conversion and synthesizes the comparison reference images and the comparison target images so as to compensate for the shift amounts detected by the shift amount detection unit 60, thereby generating a composite image.
- the reference at the time of composition is the reference image, and the composition method is as described above. However, since four images are combined, the gradation value of each pixel of the composite image is determined by averaging four gradation values. If the total is not four, the frame image input unit changes spare frame images to selected frame images again.
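The four-image averaging can be sketched as a generalization of the two-image case; `sample` stands in for the chosen interpolation (e.g. bilinear), and the scalar-frame model in the usage is purely illustrative:

```python
def composite_pixel(frames, shifts, gx, gy, scale, sample):
    """Tone value of composite pixel (gx, gy): average of the estimates
    from each frame after compensating its shift against the reference.
    sample(frame, x, y) interpolates a tone value at fractional (x, y)."""
    x, y = gx / scale, gy / scale        # position in reference coordinates
    values = [sample(f, x - dx, y - dy) for f, (dx, dy) in zip(frames, shifts)]
    return sum(values) / len(values)     # average of four values for 4 frames
```

For example, with four constant-tone "frames" of values 100, 104, 98, and 102 and zero shifts, each composite pixel averages to 101.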
- FIG. 4 is an explanatory diagram conceptually showing how a still image is generated by combining a plurality of frame images in the first embodiment.
- a still image is generated using frame images that are continuous in time series.
- the first frame image 1 is the reference image set by the reference image setting unit 40
- the frame images 2 to 4 are the comparison target images set by the comparison target setting unit 45. Note that none of these frame images was removed by the elimination unit 50.
- the comparison reference image setting unit 90 first sets the frame image 1 as the reference image as the comparison reference image.
- the comparison target resetting unit 85 resets the comparison target images other than the comparison reference image, that is, the frame images 2 to 4 as comparison target images.
- the target image setting unit 65 first sets the frame image 2 as the target image among the comparison target images.
- the shift amount detector 60 detects the shift amount between the comparison reference image (frame image 1) and the target image (frame image 2).
- the exclusion unit 80 determines whether the deviation amount satisfies a predetermined criterion.
- an “x” mark is shown as not satisfying the predetermined criterion. That is, the frame image 2 is excluded from the comparison target images by the exclusion unit 80.
- the target image setting unit 65 sets the frame image 3 as a target image. Then, a shift amount between the comparison reference image (frame image 1) and the target image (frame image 3) is detected, and it is determined whether the shift amount satisfies a predetermined criterion.
- here, an “○” mark is shown, indicating that the shift amount satisfies the predetermined criterion. That is, the frame image 3 is not excluded from the comparison target images by the exclusion unit 80.
- similarly, the shift amount between the comparison reference image (frame image 1) and the target image (frame image 4) is detected. An “○” mark is illustrated, indicating that this shift amount also satisfies the predetermined criterion.
- Step 2 is a process performed after step 1.
- the comparison reference image setting unit 90 sets one of the comparison target images not excluded by the exclusion unit 80 (frame image 3, out of frame images 3 and 4) as the new comparison reference image.
- the reference image (frame image 1) is not used again, because it has already served as the comparison reference image in step 1.
- the previous frame image 1 is referred to as comparison reference image 1
- the frame image 3 is referred to as comparison reference image 2.
- the comparison target resetting unit 85 newly sets a comparison target image (frame image 4) other than the comparison reference image (frame image 3) as a comparison target image.
- the number of new comparison target images is one, but there may be a case where there are multiple images.
- the target image setting unit 65 sets one of the comparison target images as the target image.
- the frame image 4 is the target image.
- the shift amount detector 60 detects the shift amount between the comparison reference image 2 (frame image 3) and the target image (frame image 4).
- the exclusion unit 80 determines whether the deviation amount satisfies a predetermined criterion. An “x” mark is shown, indicating that the criterion is not met. That is, the frame image 4 is excluded from the comparison target images by the exclusion unit 80.
- the determination unit 70 determines whether or not the total number of the comparison reference image and the comparison target image is four.
- the determination unit 70 makes a determination when the number of images to be compared becomes one or less after the deviation amount is detected.
- the number of comparison reference images was two (frame image 1 and frame image 3) and the number of comparison target images was zero, so the total was two. Since the total is not four, the frame image input unit 20 changes spare frame images to selected frame images so that the total number of frame images becomes four; that is, the two spare frame images are changed to selected frame images. If at least one of the two is eliminated by the elimination unit 50, the frame image is changed again.
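The replenishment performed by the determination unit 70 and the frame image input unit 20 can be sketched as simple bookkeeping. The frame numbers and the required total of four follow the example above; the list handling itself is only an illustration.

```python
# Sketch of the step-2 replenishment: the total of comparison reference
# images and remaining comparison target images must reach four, so spare
# frame images are promoted to selected frame images for the shortfall.

REQUIRED_TOTAL = 4

comparison_references = [1, 3]  # frames kept as comparison references
comparison_targets = []         # frame 4 was excluded, so none remain
spares = [5, 6]

shortfall = REQUIRED_TOTAL - (len(comparison_references) + len(comparison_targets))
promoted, spares = spares[:shortfall], spares[shortfall:]
selected = comparison_references + comparison_targets + promoted
```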
- Step 3 is a process performed after step 2.
- the comparison target setting unit 45 sets the frame image 5 and the frame image 6 as comparison target images.
- the comparison reference image setting unit 90 sets the frame image 1 as the comparison reference image 1
- the comparison target resetting unit 85 sets the frame image 3, the frame image 5, and the frame image 6 as the comparison target images.
- the target image is set in the order of frame image 3, frame image 5, and frame image 6, and the shift amount is detected.
- for the frame image 3, the frame image 5, and the frame image 6, the mark “○” is illustrated, assuming that the shift amount satisfies the predetermined criterion. Note that the result of detecting the shift amount between the frame image 1 and the frame image 3 was already shown in step 1 and is therefore not repeated in step 3.
- one of the comparison target images (frame image 3, frame image 5, frame image 6), namely the frame image 3, is set as the comparison reference image 2
- the remaining comparison target images (frame image 5, frame image 6) are set as the comparison target images.
- the target image is set in the order of the frame image 5 and the frame image 6, and the shift amount is detected.
- the “○” mark is illustrated, assuming that the deviation amount satisfies the predetermined criterion.
- one of the comparison target images (frame image 5 and frame image 6), namely the frame image 5, is set as the comparison reference image 3, and the other comparison target image (frame image 6) is set as the comparison target image. Then, the frame image 6 is set as the target image, and the shift amount is detected.
- the mark “○” is illustrated, assuming that the shift amount of the frame image 6 satisfies the predetermined criterion.
- finally, the determination unit 70 makes its determination.
- detection of the shift amount between the comparison reference image and the target image will be described.
- FIG. 5 is an explanatory diagram showing the shift amount between the comparison reference image and the target image. It is assumed that the coordinates (x1, y1) of the comparison reference image are shifted from the coordinates (x2, y2) of the target image. Here, the translational deviation (u, v) and the rotational deviation δ are used as the deviations.
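The deviation model of FIG. 5 maps target-image coordinates onto comparison-reference coordinates through the translational and rotational deviations. One common form of this mapping is sketched below, assuming rotation about the image origin (the patent figure may define the rotation centre differently).

```python
import math

# Map target-image coordinates (x2, y2) onto comparison-reference
# coordinates (x1, y1) using a translational deviation (u, v) and a
# rotational deviation delta (rotation about the origin is an assumption).

def target_to_reference(x2, y2, u, v, delta):
    x1 = x2 * math.cos(delta) - y2 * math.sin(delta) + u
    y1 = x2 * math.sin(delta) + y2 * math.cos(delta) + v
    return x1, y1

# With delta = 0 the mapping reduces to a pure translation.
x1, y1 = target_to_reference(10.0, 20.0, u=0.5, v=-0.25, delta=0.0)
```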
- FIG. 6 is an explanatory diagram showing a method of calculating the amount of translational deviation by the gradient method.
- FIG. 6 (a) shows the pixels and luminance of the comparison reference image and the target image.
- (x1i, y1i) represents the coordinate value of the pixel of the comparison reference image
- B1 (x1i, y1i) represents its luminance.
- Figure 6 (b) shows the principle of the gradient method.
- the pixel at the coordinates (x2i, y2i) of the target image lies between the pixel coordinates of the comparison reference image (x1i to x1i+1, y1i to y1i+1), that is, at the inter-pixel coordinates (x1i + Δx, y1i + Δy).
- B1(x1i + Δx, y1i + Δy) = B2(x2i, y2i) ... (4)
- Δx and Δy are calculated for each pixel, and the average is taken over the whole image.
- here, the translational shift amount is calculated by the gradient method.
- other methods such as a block matching method, an iterative gradient method, and a method combining these methods may be used.
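A minimal sketch of the gradient method for the translational component: for a small shift, B2 ≈ B1 + Δx·∂B1/∂x + Δy·∂B1/∂y, so the shift can be recovered from brightness gradients. This version solves all pixels jointly by least squares rather than averaging per-pixel estimates as the text describes, but the principle is the one expressed by equation (4); the test image is synthetic.

```python
import numpy as np

# Gradient-method estimate of a small translational shift between two
# images: linearize B2(x, y) ~ B1(x, y) + dx * dB1/dx + dy * dB1/dy and
# solve for (dx, dy) by least squares over all pixels.

def estimate_translation(b1, b2):
    gy, gx = np.gradient(b1.astype(float))          # brightness gradients
    diff = (b2 - b1).astype(float).ravel()
    a = np.column_stack([gx.ravel(), gy.ravel()])
    (dx, dy), *_ = np.linalg.lstsq(a, diff, rcond=None)
    return dx, dy

# A smooth synthetic image shifted by a known sub-pixel amount.
y, x = np.mgrid[0:32, 0:32].astype(float)
b1 = np.sin(x * 0.2) + np.cos(y * 0.15)
b2 = np.sin((x + 0.3) * 0.2) + np.cos((y - 0.2) * 0.15)  # shift (0.3, -0.2)
dx, dy = estimate_translation(b1, b2)
```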
- FIG. 7 is an explanatory diagram showing a method of calculating a rotation shift. Here, it is assumed that the translational deviation between the comparison reference image and the target image has been corrected.
- FIG. 8 is a flowchart showing the flow of the still image generation process in the first embodiment. This is a process executed by the CPU of the image generating apparatus 100.
- FIG. 9 is a flowchart showing a frame image input process.
- the frame image input unit 20 first inputs four selected frame images and two spare frame images from the moving image and stores them in the frame image storage unit 30 (step S21).
- the elimination unit 50 determines whether any of the selected frame images has an abnormality such as large noise, blurred focus, or abnormal color gradation caused by a hand covering the lens (step S23). If a selected frame image is abnormal (step S23), it is deleted from the frame image storage unit 30 (step S24), and a spare frame image is changed to a selected frame image (step S25). Steps S23 to S25 are repeated until all the selected frame images are determined to be normal.
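The abnormality test itself is not specified in the text, so the sketch below substitutes two common heuristics: gradient energy as a focus measure and mean brightness as a covered-lens measure. Both measures and their thresholds are illustrative assumptions, not the patent's actual criteria.

```python
import numpy as np

# Illustrative stand-in for the elimination unit 50's abnormality check:
# low gradient energy suggests blurred focus, and a very low mean
# brightness suggests the lens was covered. Thresholds are invented.

def is_abnormal(frame, sharpness_min=0.05, brightness_min=10.0):
    gy, gx = np.gradient(frame.astype(float))
    sharpness = float(np.mean(gx ** 2 + gy ** 2))
    brightness = float(frame.mean())
    return sharpness < sharpness_min or brightness < brightness_min

rng = np.random.default_rng(0)
textured = rng.uniform(50, 200, size=(16, 16))  # normal, detailed frame
covered = np.full((16, 16), 2.0)                # dark, featureless frame
```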
- the reference image specification receiving unit 25 displays all the selected frame images on the monitor (Step S27), and receives the specification of the reference image from the user (Step S28).
- the number of reference images is one.
- the reference image setting unit 40 sets one selected frame image designated by the user as a reference image (step S29).
- the comparison target setting unit 45 sets a selected frame image other than the reference image as a comparison target image (step S30). This is the end of the description of the frame image input processing. Returning to FIG. 8, the flow of the still image generation processing will be described.
- one of the reference image or the comparison target images is set as the comparison reference image (step S35), and the comparison target images other than the comparison reference image are reset as comparison target images (step S40).
- one of the comparison target images is set as the target image, and the deviation amount of the target image from the comparison reference image is detected (step S50).
- let the deviation amount of the detection result be (u, v). The deviation amount (Δu, Δv) is obtained as the absolute value of the difference between (u, v) and the integers closest to (u, v); for example, (Δu, Δv) = (0.3, 0.2).
- if the deviation amount (Δu, Δv) is equal to or smaller than the threshold value (0.1, 0.1) (step S55), the target image is excluded from the comparison target images as unsuitable for composition (step S60). If the amount of deviation from the comparison reference image has not yet been detected for all the comparison target images (step S65), the next comparison target image is set as the target image, and steps S45 to S60 are repeated.
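The exclusion criterion of steps S55 to S60 can be written down directly from the text: the deviation is the distance of (u, v) from the nearest integers, and a target image at or below the (0.1, 0.1) threshold is dropped because it adds no sub-pixel information. The sample shift values are illustrative.

```python
# Exclusion criterion of steps S55-S60: (du, dv) is the absolute distance
# of the detected shift (u, v) from the nearest integers; a target image
# whose deviation is at or below (0.1, 0.1) is excluded from composition.

THRESHOLD = (0.1, 0.1)

def fractional_deviation(u, v):
    return abs(u - round(u)), abs(v - round(v))

def use_for_composition(u, v, threshold=THRESHOLD):
    du, dv = fractional_deviation(u, v)
    return du > threshold[0] or dv > threshold[1]

du, dv = fractional_deviation(2.7, 4.2)        # roughly (0.3, 0.2): kept
near_grid = use_for_composition(5.05, -3.02)   # almost on the pixel grid
```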
- when the amount of deviation from the comparison reference image has been detected for all the comparison target images (step S65), it is determined whether the number of comparison target images is one or less (step S70).
- steps S35 to S65 are repeated using a new comparison target image as a comparison reference image.
- next, it is determined whether or not the total number of the comparison reference images and the comparison target images is four. If the number is not four, spare frame images are changed to selected frame images to make up the shortage.
- in step S86, the comparison reference images and the comparison target images are combined to generate a composite image.
- as described above, a frame image having only a small amount of deviation from the comparison reference image is excluded from the target of synthesis, and four frame images to be synthesized are secured, so the quality of the composite image can be improved efficiently.
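The benefit of securing four frames can be seen in even the simplest synthesis, a plain average of aligned frames. The patent's synthesis unit may well resample onto a finer grid with sub-pixel interpolation; averaging is only a minimal stand-in that still shows the noise reduction.

```python
import numpy as np

# Minimal stand-in for the synthesis step: average four frames that are
# assumed to be already aligned to the comparison reference image. Noise
# shrinks roughly with the number of frames averaged.

def synthesize(frames):
    return np.stack([f.astype(float) for f in frames]).mean(axis=0)

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 255.0, 8), (8, 1))   # synthetic scene
frames = [clean + rng.normal(0.0, 20.0, clean.shape) for _ in range(4)]
composite = synthesize(frames)
err_single = float(np.abs(frames[0] - clean).mean())
err_composite = float(np.abs(composite - clean).mean())
```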
- FIG. 10 is an explanatory diagram showing a schematic configuration of an image generating device 100A as a second embodiment of the present invention.
- the configuration of the image generating apparatus 100A is almost the same as that of the image generating apparatus 100 of the first embodiment except that the image generating apparatus 100A includes a dividing unit 95.
- in the second embodiment, the target image is divided into a plurality of blocks, and the shift amount from the comparison reference image is obtained block by block. A block having only a small amount of deviation from the comparison reference image is then excluded from the target of synthesis. For this purpose, the dividing unit 95 divides all the selected frame images so that each block has 16 × 16 pixels.
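The division itself is straightforward; a minimal sketch, assuming the frame dimensions are exact multiples of 16 (the text does not say how ragged edges are handled):

```python
import numpy as np

# Divide a frame into 16 x 16 pixel blocks, as the dividing unit 95 does.
# Assumes the frame dimensions are exact multiples of the block size.

BLOCK = 16

def divide(frame):
    h, w = frame.shape
    return [frame[r:r + BLOCK, c:c + BLOCK]
            for r in range(0, h, BLOCK)
            for c in range(0, w, BLOCK)]

frame = np.zeros((80, 128))   # divides into 5 x 8 = 40 blocks
blocks = divide(frame)
```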
- FIG. 11 is an explanatory diagram showing the shift amount of a block between the reference image and the target image. It is assumed that the coordinates (x1, y1) of the comparison reference image are shifted from the coordinates (x2, y2) of the target image.
- the shift amount of the frame image is composed of three parameters: the translational shift amount (u, v) and the rotational shift amount δ.
- FIG. 12 is an explanatory diagram showing a frame image divided into blocks.
- the frame image in the figure is divided into 5 × 8 blocks. Even when the entire frame image is rotated as indicated by the arrow in the figure, the displacement of each block can be represented by a translational displacement (u, v) alone.
- the shift amount of each block may be calculated from the translation shift amount and the rotation shift amount of the frame image.
- although the shift amount of each block is calculated here based on the translational shift amount and the rotational shift amount of the frame image, the shift amount of each block may instead be detected directly.
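Computing a block's translational shift from the whole-frame shift can be sketched as follows. The rotation centre (taken here as the frame centre) is an assumption; the patent only states that the per-block shift is derivable from the frame's translational and rotational shift amounts.

```python
import math

# Per-block shift derived from the frame's shift: rotating the frame by
# delta about its centre moves a block centred at (cx, cy); for a small
# block this motion is well approximated by a pure translation, which is
# added to the frame's translational shift (u, v).

def block_shift(cx, cy, u, v, delta, frame_cx, frame_cy):
    rx, ry = cx - frame_cx, cy - frame_cy
    nx = rx * math.cos(delta) - ry * math.sin(delta) + frame_cx
    ny = rx * math.sin(delta) + ry * math.cos(delta) + frame_cy
    return nx - cx + u, ny - cy + v

# With zero rotation every block simply inherits the frame's translation.
bu, bv = block_shift(24, 40, u=1.5, v=-0.75, delta=0.0,
                     frame_cx=160, frame_cy=120)
```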
- FIG. 13 is a flowchart showing the flow of the still image generation process in the second embodiment. This is a process executed by the CPU of the image generating apparatus 100A.
- in step S92, all the selected frame images are divided so that each block has 16 × 16 pixels. Then, the same processing as in the first embodiment is performed for each block (steps S95 to S150); that is, each block is processed as if it were an independent frame image. At this time, the blocks at the same position in the selected frame images (the first block of selected frame image 1, the first block of selected frame image 2, the first block of selected frame image 3, and the first block of selected frame image 4) are processed together. The processing from step S95 to step S150 is likewise repeated for the second block, the third block, and so on, until all the blocks have been processed (step S155).
- step S155 if the spare frame image is changed to the selected frame image (step S155), exclusion processing is performed (step S30), and the selected frame image is set as the comparison target image (step S156).
- the comparison target image is also divided so that each block becomes 16 × 16 pixels (step S158).
- the block shift amount can be easily obtained based on the image shift amount of the frame image.
- an image that cannot be used for synthesis as a whole frame image can be used for synthesis by dividing it into blocks.
- the present invention is not limited to any particular embodiment, and can be implemented in various modes without departing from the scope of the invention.
- the number of frame images to be combined and the threshold value of the shift amount can be set variously.
- an image difference, which is a set of differences between the feature values of pixels at the same position in the comparison reference image and the target image, or a difference between the average values of the pixel feature values, may be used as the criterion for determining whether to use the target image for synthesis. Further, the following modifications are possible.
- the image generation device 100 of the first embodiment can also generate a panoramic image.
- FIG. 14 is an explanatory diagram showing how a panoramic image is generated.
- the five frame images 1 to 5 indicated by the solid line are synthesized, a part of them is extracted, and the panoramic image indicated by the broken line is generated.
- when the frame image 1 is used as the reference image, there is no area overlapping with the frame image 5, so that a composite image cannot be generated.
- in the image generating apparatus 100 of the first embodiment, by changing the comparison reference image in the order frame image 1 → frame image 2 → frame image 3 → frame image 4, more frame images can be combined to generate a panoramic image.
- the present invention is applicable to an apparatus that combines a plurality of frame images of a moving image or a still image.
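The panorama modification hinges on chaining pairwise shifts: frame 5 never overlaps frame 1, but its position in frame 1's coordinate system follows from accumulating the shifts between successive comparison reference images. A sketch with invented pairwise shift values (rotation omitted for brevity):

```python
# Chain pairwise shifts so every frame gets a position in frame 1's
# coordinate system, even frames with no overlap with frame 1 itself.
# pairwise[i] is the invented shift of frame i+2 relative to frame i+1.

pairwise = [(120.0, 4.0), (118.0, -2.0), (121.0, 3.0), (119.0, -1.0)]

positions = [(0.0, 0.0)]            # frame 1 anchors the panorama
for du, dv in pairwise:
    px, py = positions[-1]
    positions.append((px + du, py + dv))
# positions[i] now locates frame i+1 within the panorama canvas.
```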
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/541,479 US7672538B2 (en) | 2003-04-17 | 2004-04-16 | Generation of still image from a plurality of frame images |
EP04728040A EP1538562A4 (en) | 2003-04-17 | 2004-04-16 | GENERATING A STILL IMAGE FROM MULTIPLE INDIVIDUAL IMAGES |
JP2005505487A JP4120677B2 (ja) | 2003-04-17 | 2004-04-16 | 複数のフレーム画像からの静止画像の生成 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003112392 | 2003-04-17 | ||
JP2003-112392 | 2003-04-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004093011A1 true WO2004093011A1 (ja) | 2004-10-28 |
Family
ID=33296055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/005514 WO2004093011A1 (ja) | 2003-04-17 | 2004-04-16 | 複数のフレーム画像からの静止画像の生成 |
Country Status (5)
Country | Link |
---|---|
US (1) | US7672538B2 (ja) |
EP (1) | EP1538562A4 (ja) |
JP (1) | JP4120677B2 (ja) |
CN (1) | CN100351868C (ja) |
WO (1) | WO2004093011A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006270523A (ja) * | 2005-03-24 | 2006-10-05 | Casio Comput Co Ltd | 画像合成装置および画像合成処理プログラム |
JP2006270238A (ja) * | 2005-03-22 | 2006-10-05 | Nikon Corp | 画像処理装置、電子カメラ、および画像処理プログラム |
WO2010013471A1 (ja) * | 2008-07-30 | 2010-02-04 | シャープ株式会社 | 画像合成装置、画像合成方法及び画像合成プログラム |
JP2020154428A (ja) * | 2019-03-18 | 2020-09-24 | 株式会社リコー | 画像処理装置、画像処理方法、画像処理プログラム、電子機器及び撮影装置 |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070248240A1 (en) * | 2004-05-05 | 2007-10-25 | Koninklijke Philips Electronics, N.V. | Selective Video Blanking |
ATE396461T1 (de) * | 2005-07-13 | 2008-06-15 | Lasersoft Imaging Ag | Bereitstellung einer digitalen kopie eines quellbildes |
WO2007055334A1 (ja) * | 2005-11-10 | 2007-05-18 | Matsushita Electric Industrial Co., Ltd. | ビュワー装置、ビュワー装置におけるスライドショー表示方法、及びプログラム |
WO2007142109A1 (ja) * | 2006-05-31 | 2007-12-13 | Nec Corporation | 画像高解像度化装置及び画像高解像度化方法並びにプログラム |
US20080043259A1 (en) * | 2006-08-18 | 2008-02-21 | Roger Lee Triplett | Method and system for hardcopy output of snapshots and video |
WO2008047664A1 (fr) * | 2006-10-19 | 2008-04-24 | Panasonic Corporation | Dispositif de création d'image et procédé de création d'image |
US8717412B2 (en) * | 2007-07-18 | 2014-05-06 | Samsung Electronics Co., Ltd. | Panoramic image production |
JP4480760B2 (ja) * | 2007-12-29 | 2010-06-16 | 株式会社モルフォ | 画像データ処理方法および画像処理装置 |
US20090244301A1 (en) * | 2008-04-01 | 2009-10-01 | Border John N | Controlling multiple-image capture |
WO2010095460A1 (ja) * | 2009-02-19 | 2010-08-26 | 日本電気株式会社 | 画像処理システム、画像処理方法および画像処理プログラム |
JP2010250612A (ja) * | 2009-04-16 | 2010-11-04 | Canon Inc | 画像処理装置及び画像処理方法 |
TWI401617B (zh) * | 2009-05-19 | 2013-07-11 | Ipanel Technologies Ltd | 獲取圖像偏移位置的方法及裝置 |
EP2309452A1 (en) | 2009-09-28 | 2011-04-13 | Alcatel Lucent | Method and arrangement for distance parameter calculation between images |
US9792012B2 (en) | 2009-10-01 | 2017-10-17 | Mobile Imaging In Sweden Ab | Method relating to digital images |
US8558913B2 (en) | 2010-02-08 | 2013-10-15 | Apple Inc. | Capture condition selection from brightness and motion |
SE534551C2 (sv) | 2010-02-15 | 2011-10-04 | Scalado Ab | Digital bildmanipulation innefattande identifiering av ett målområde i en målbild och sömlös ersättning av bildinformation utifrån en källbild |
JP5424930B2 (ja) * | 2010-02-19 | 2014-02-26 | キヤノン株式会社 | 画像編集装置およびその制御方法およびプログラム |
CN102075679A (zh) * | 2010-11-18 | 2011-05-25 | 无锡中星微电子有限公司 | 一种图像采集方法和装置 |
SE1150505A1 (sv) * | 2011-05-31 | 2012-12-01 | Mobile Imaging In Sweden Ab | Metod och anordning för tagning av bilder |
EP2718896A4 (en) | 2011-07-15 | 2015-07-01 | Mobile Imaging In Sweden Ab | METHOD FOR PROVIDING ADJUSTED DIGITAL GRAPHIC REPRESENTATION OF VIEW AND APPROPRIATE APPARATUS |
AU2011253779A1 (en) * | 2011-12-01 | 2013-06-20 | Canon Kabushiki Kaisha | Estimation of shift and small image distortion |
CN104428815B (zh) * | 2012-07-13 | 2017-05-31 | 富士胶片株式会社 | 图像变形装置及其动作控制方法 |
JP6249596B2 (ja) * | 2012-12-05 | 2017-12-20 | 三星電子株式会社Samsung Electronics Co.,Ltd. | 撮像装置および撮像方法 |
US20140152765A1 (en) * | 2012-12-05 | 2014-06-05 | Samsung Electronics Co., Ltd. | Imaging device and method |
US9304089B2 (en) * | 2013-04-05 | 2016-04-05 | Mitutoyo Corporation | System and method for obtaining images with offset utilized for enhanced edge resolution |
CN103699897A (zh) * | 2013-12-10 | 2014-04-02 | 深圳先进技术研究院 | 一种鲁棒人脸配准方法和装置 |
CN105865451B (zh) * | 2016-04-19 | 2019-10-01 | 深圳市神州云海智能科技有限公司 | 用于移动机器人室内定位的方法和设备 |
CN107483839B (zh) | 2016-07-29 | 2020-08-07 | Oppo广东移动通信有限公司 | 多帧图像合成方法和装置 |
US9940695B2 (en) * | 2016-08-26 | 2018-04-10 | Multimedia Image Solution Limited | Method for ensuring perfect stitching of a subject's images in a real-site image stitching operation |
CN106303292B (zh) * | 2016-09-30 | 2019-05-03 | 努比亚技术有限公司 | 一种视频数据的生成方法和终端 |
CN108063920A (zh) * | 2017-12-26 | 2018-05-22 | 深圳开立生物医疗科技股份有限公司 | 一种图像冻结方法、装置、设备及计算机可读存储介质 |
CN111131688B (zh) * | 2018-10-31 | 2021-04-23 | Tcl科技集团股份有限公司 | 一种图像处理方法、装置及移动终端 |
JP6562492B1 (ja) * | 2019-05-16 | 2019-08-21 | 株式会社モルフォ | 画像処理装置、画像処理方法及びプログラム |
US11423224B2 (en) | 2019-06-14 | 2022-08-23 | Kyocera Document Solutions Inc. | Image-to-text recognition for a sequence of images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0210680U (ja) * | 1988-07-01 | 1990-01-23 | ||
JPH06350974A (ja) * | 1993-04-13 | 1994-12-22 | Matsushita Electric Ind Co Ltd | フレーム静止画像生成装置 |
JPH1069537A (ja) * | 1996-08-28 | 1998-03-10 | Nec Corp | 画像合成方法及び画像合成装置 |
JP2000244851A (ja) * | 1999-02-18 | 2000-09-08 | Canon Inc | 画像処理装置、方法及びコンピュータ読み取り可能な記憶媒体 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0210680A (ja) | 1988-06-28 | 1990-01-16 | Sanyo Electric Co Ltd | 電気コタツ |
US6075905A (en) | 1996-07-17 | 2000-06-13 | Sarnoff Corporation | Method and apparatus for mosaic image construction |
US5987164A (en) * | 1997-08-01 | 1999-11-16 | Microsoft Corporation | Block adjustment method and apparatus for construction of image mosaics |
JP4250237B2 (ja) | 1998-11-10 | 2009-04-08 | キヤノン株式会社 | 画像処理装置、方法及びコンピュータ読み取り可能な記憶媒体 |
EP1008956A1 (en) * | 1998-12-08 | 2000-06-14 | Synoptics Limited | Automatic image montage system |
JP3432212B2 (ja) * | 2001-03-07 | 2003-08-04 | キヤノン株式会社 | 画像処理装置及び方法 |
NL1019365C2 (nl) * | 2001-11-14 | 2003-05-15 | Tno | Bepaling van een beweging van een achtergrond in een reeks beelden. |
-
2004
- 2004-04-16 WO PCT/JP2004/005514 patent/WO2004093011A1/ja active Application Filing
- 2004-04-16 CN CNB2004800016127A patent/CN100351868C/zh not_active Expired - Fee Related
- 2004-04-16 US US10/541,479 patent/US7672538B2/en not_active Expired - Fee Related
- 2004-04-16 JP JP2005505487A patent/JP4120677B2/ja not_active Expired - Fee Related
- 2004-04-16 EP EP04728040A patent/EP1538562A4/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See also references of EP1538562A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006270238A (ja) * | 2005-03-22 | 2006-10-05 | Nikon Corp | 画像処理装置、電子カメラ、および画像処理プログラム |
JP4497001B2 (ja) * | 2005-03-22 | 2010-07-07 | 株式会社ニコン | 画像処理装置、電子カメラ、および画像処理プログラム |
JP2006270523A (ja) * | 2005-03-24 | 2006-10-05 | Casio Comput Co Ltd | 画像合成装置および画像合成処理プログラム |
JP4496537B2 (ja) * | 2005-03-24 | 2010-07-07 | カシオ計算機株式会社 | 画像合成装置および画像合成処理プログラム |
WO2010013471A1 (ja) * | 2008-07-30 | 2010-02-04 | シャープ株式会社 | 画像合成装置、画像合成方法及び画像合成プログラム |
JP2010034964A (ja) * | 2008-07-30 | 2010-02-12 | Sharp Corp | 画像合成装置、画像合成方法及び画像合成プログラム |
JP2020154428A (ja) * | 2019-03-18 | 2020-09-24 | 株式会社リコー | 画像処理装置、画像処理方法、画像処理プログラム、電子機器及び撮影装置 |
JP7247682B2 (ja) | 2019-03-18 | 2023-03-29 | 株式会社リコー | 画像処理装置、画像処理方法、画像処理プログラム、電子機器及び撮影装置 |
Also Published As
Publication number | Publication date |
---|---|
EP1538562A1 (en) | 2005-06-08 |
JPWO2004093011A1 (ja) | 2006-07-06 |
EP1538562A4 (en) | 2005-08-10 |
US20060171687A1 (en) | 2006-08-03 |
US7672538B2 (en) | 2010-03-02 |
CN1717702A (zh) | 2006-01-04 |
JP4120677B2 (ja) | 2008-07-16 |
CN100351868C (zh) | 2007-11-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2005505487 Country of ref document: JP |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004728040 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2004728040 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048016127 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 2006171687 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10541479 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 10541479 Country of ref document: US |