WO2007141863A1 - Image processing program, computer-readable recording medium on which the program is recorded, and image processing device - Google Patents

Image processing program, computer-readable recording medium on which the program is recorded, and image processing device

Info

Publication number
WO2007141863A1
WO2007141863A1 (PCT/JP2006/311547)
Authority
WO
WIPO (PCT)
Prior art keywords
image
roughness
image processing
diffusion
subject
Prior art date
Application number
PCT/JP2006/311547
Other languages
English (en)
Japanese (ja)
Inventor
Syunichi Yanagida
Original Assignee
Nippon Computer System Co., Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Computer System Co., Ltd filed Critical Nippon Computer System Co., Ltd
Priority to PCT/JP2006/311547 priority Critical patent/WO2007141863A1/fr
Priority to JP2006523460A priority patent/JP3915037B1/ja
Priority to PCT/JP2006/315679 priority patent/WO2007141886A1/fr
Priority to PCT/JP2006/315680 priority patent/WO2007141887A1/fr
Publication of WO2007141863A1 publication Critical patent/WO2007141863A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20201Motion blur correction

Definitions

  • Image processing program, computer-readable recording medium recorded with the same, and image processing apparatus
  • the present invention relates to an image processing program and an image processing apparatus that correct an out-of-focus image or a hand-blurred image into a sharp image.
  • more particularly, the present invention relates to an image processing program that can correct such images automatically and accurately, a computer-readable recording medium on which the program is recorded, and an image processing apparatus.
  • a photographic image may be blurred.
  • examples of such blur include blur caused by a shift in focal length, blur caused by camera shake of the photographer, and subject blur caused by movement of the subject.
  • a so-called unsharp mask or Laplacian filter is conventionally used as a method for correcting such a blurred image into a sharp image (see, for example, Patent Document 1 and Patent Document 2).
  • in the unsharp mask process, a further-blurred image is generated from the blurred original image, and the blur is corrected by adding the difference between the blurred image and the original image to the original image.
  • the Laplacian filter process corrects the blur by examining the second-derivative values of the original image to find the boundaries where brightness changes rapidly, and adding the result to the original image.
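  • as a concrete illustration of these two conventional techniques, the following is a minimal sketch in Python with NumPy/SciPy (the function names and the `radius`/`amount` parameters are illustrative, not taken from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def unsharp_mask(img, radius=2.0, amount=1.0):
    """Unsharp mask: blur the (already blurred) original further, then add
    the difference between the original and the blurred copy back in."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 255.0)

def laplacian_sharpen(img, amount=1.0):
    """Laplacian filter: the second derivative picks out boundaries where
    brightness changes rapidly; subtracting it steepens those boundaries."""
    img = img.astype(np.float64)
    return np.clip(img - amount * laplace(img), 0.0, 255.0)
```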
  • Patent Document 1 Japanese Patent Application Publication No. JP 2006-0331195
  • Patent Document 2 Japanese Patent Application Publication No. 2006-011619
  • the present invention has been made in view of such problems, and provides means for performing high-speed, high-accuracy sharpening processing using a common algorithm, regardless of whether the blurring in an image is due to defocus or camera shake.
  • the sharpening process can be performed based on parameters input by the user, or automatically without any parameter input.
  • FIG. 1 is a block diagram showing a configuration of an image processing apparatus 1 according to an embodiment of the present invention.
  • FIG. 2 is a flowchart showing a processing flow of an image processing program 6.
  • FIG. 3 is a diagram showing an example of a brightness difference between a target pixel 13 and pixels adjacent in the surrounding eight directions.
  • FIG. 4 is a diagram illustrating an example of a brightness difference between a target pixel 13 and pixels adjacent in the surrounding eight directions.
  • FIG. 5 is a diagram showing an example of a brightness difference between a target pixel 13 and pixels adjacent in the surrounding eight directions.
  • FIG. 6 is a diagram showing an example of a brightness difference between a target pixel 13 and pixels adjacent in the surrounding eight directions.
  • FIG. 7 is an explanatory diagram for explaining the strip 22.
  • FIG. 8 is a flowchart showing the flow of processing in the out-of-focus only mode (S3).
  • FIG. 9 is a graph for calculating the first intermediate value.
  • FIG. 10 is a graph for calculating the second intermediate value.
  • FIG. 11 is a graph for calculating the second intermediate value.
  • FIG. 12 is an explanatory diagram for explaining how a photon emitted from the subject H reaches the image sensor surface S.
  • FIG. 13 is an explanatory diagram for explaining the occurrence of camera shake.
  • FIG. 14 is an explanatory diagram for explaining occurrence of blurring.
  • FIG. 15 is an explanatory diagram for explaining the total number of photons received by the image sensor (X, y).
  • FIG. 16 is a frequency distribution table showing the frequency distribution of [difference in brightness of adjacent pixels] for all pixels included in a mesh.
  • FIG. 17 is a flowchart showing the flow of processing in the out-of-focus/camera-shake countermeasure mode (S4).
  • FIG. 18 is a diagram showing the [diffusion rate setting original data table].
  • FIG. 1 is a block diagram illustrating the configuration of the image processing apparatus 1 according to the present embodiment.
  • This image processing apparatus 1 comprises a media reader (original image input means) 2 for inputting the image data of the original image to be corrected, a computer 3 that generates a new, sharpened image based on the input original image, and a photographic printer 4 for printing out the generated new image.
  • the media reader 2 is for reading original image data stored in various storage media such as a memory card and a CD and inputting it to the computer 3.
  • the computer 3 includes an image memory 5 that stores the original image data input from the media reader 2, a hard disk 7 that stores an image processing program 6 for image sharpening, a RAM (Random Access Memory) into which the image processing program 6 read from the hard disk 7 is loaded, and a CPU (Central Processing Unit) that executes the program.
  • in addition, a display unit 10 for displaying the processed image and an operation unit 11 composed of a mouse, a keyboard, and the like are connected via a system bus 12.
  • This image processing program 6 is created using various programming languages and, based on the original image data, generates a new image that approximates the ideal image that would have been obtained without any defocus or camera shake.
  • FIG. 2 is a flowchart showing the flow of processing of the image processing program 6.
  • in this image processing program 6, a camera shake situation estimation process S1 is first executed on the input original image.
  • depending on the estimated result, either the out-of-focus only mode is executed (S3) or the out-of-focus/camera-shake countermeasure mode is executed (S4).
  • the camera shake situation estimation process (S1) is a process of estimating, by analyzing the input original image data, the direction of the camera shake that occurred during shooting [blur direction] and the number of pixels over which it occurred [number of blur pixels].
  • when camera shake occurs, a photon emitted by the subject that should have been received by a single image sensor element of the digital camera may instead be received by elements located along the direction of the camera shake relative to that element. As a result, the original image contains strip-like portions in which pixels with similar attributes, such as brightness, continue in a given direction.
  • in this embodiment this threshold value is set to 0.05, but the setting can be changed as appropriate.
  • as shown in Fig. 3(a), when the target pixel 13 has almost the same brightness as the pixels 14 and 15 adjacent on both the left and right sides and its brightness differs to some extent from the other neighboring pixels, or, as shown in Fig. 3(b) and Fig. 3(c), when the target pixel 13 has almost the same brightness as either the left or the right neighboring pixel 14, 15 and its brightness differs to some extent from the other neighboring pixels, it is determined that the target pixel 13 is flat in the horizontal direction.
  • similarly, as shown in Fig. 4(a), when the target pixel 13 has almost the same brightness as the pixels 16 and 17 adjacent above and below and its brightness differs to some extent from the other neighboring pixels, or, as shown in Fig. 4(b) and Fig. 4(c), when the target pixel 13 has almost the same brightness as either the upper or the lower neighboring pixel 16, 17 and its brightness differs to some extent from the other neighboring pixels, it is determined that the target pixel 13 is flat in the vertical direction.
  • FIG. 5 and FIG. 6 are diagrams showing another example of the brightness difference between the target pixel 13 and pixels adjacent in the surrounding eight directions.
  • the hatched pixels are pixels whose brightness difference from the target pixel 13 is smaller than the predetermined threshold
  • the pixels that are blacked out are pixels whose brightness difference from the target pixel 13 is larger than the predetermined threshold.
  • the pixels painted in white may have any brightness difference from the target pixel 13.
  • as shown in Fig. 5(a), when the target pixel 13 has almost the same brightness as the pixels 18 and 19 adjacent at the lower left and upper right and it cannot be said that its brightness is almost the same as that of the other adjacent pixels, or, as shown in Fig. 5(b) and Fig. 5(c), when the target pixel 13 has almost the same brightness as either the lower-left or the upper-right neighboring pixel 18, 19 and it cannot be said that its brightness is almost the same as that of the other adjacent pixels, it is determined that the target pixel 13 is flat in the lower-left/upper-right direction.
  • likewise, as shown in Fig. 6(a), when the target pixel 13 has almost the same brightness as both the upper-left and lower-right adjacent pixels 20, 21 and it cannot be said that its brightness is almost the same as that of the other adjacent pixels, or, as shown in Fig. 6(b) and Fig. 6(c), when the target pixel 13 has almost the same brightness as either the upper-left or the lower-right neighboring pixel 20, 21 and it cannot be said that its brightness is almost the same as that of the other adjacent pixels, it is determined that the target pixel 13 is flat in the upper-left/lower-right direction.
  • next, the presence and direction of strip-like portions on the original image are investigated. That is, as shown in Fig. 7(a), (b), (c), when pixels having flatness in the same direction continue, this is judged to be a strip 22, and, although the details are not shown in the figure, a frequency distribution table is created showing how many strips 22 of each pixel length are formed in each of the horizontal, vertical, lower-left/upper-right, and upper-left/lower-right directions.
  • the average length of the strips 22 is calculated for each direction based on the created frequency distribution table.
  • strips 22 shorter than 2 pixels and strips 22 longer than a predetermined upper limit are excluded from the calculation of the average length. This is because a run shorter than 2 pixels cannot practically be said to form a strip 22, and a run longer than the upper limit is highly likely to be a sequence of similar-brightness pixels that actually exists on the subject, rather than a strip 22 caused by camera shake.
  • in this embodiment the upper limit is set to 8 pixels, but the setting can be changed as appropriate.
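  • as a rough illustration of the strip analysis described above, the following sketch (Python/NumPy; simplified to the horizontal direction only, with the full eight-neighbour flatness tests of Figs. 3 to 6 reduced to a one-neighbour test) flags flat pixels and computes the average strip length under the 2-pixel and 8-pixel bounds:

```python
import numpy as np

THRESHOLD = 0.05          # brightness-difference threshold from the text
MIN_LEN, MAX_LEN = 2, 8   # strip-length bounds from the text

def horizontal_flatness(brightness):
    """Simplified flatness test: flag a pixel as 'flat in the horizontal
    direction' when its brightness is almost the same as a horizontal
    neighbour. `brightness` is assumed normalized to the range 0..1."""
    diff = np.abs(np.diff(brightness, axis=1))   # |b(x+1,y) - b(x,y)|
    flat = np.zeros(brightness.shape, dtype=bool)
    flat[:, 1:] |= diff < THRESHOLD    # close to the left neighbour
    flat[:, :-1] |= diff < THRESHOLD   # close to the right neighbour
    return flat

def average_strip_length(flat):
    """Average length of horizontal runs of flat pixels, excluding runs
    shorter than MIN_LEN or longer than MAX_LEN as the text prescribes."""
    lengths = []
    for row in flat:
        run = 0
        for v in np.append(row, False):   # sentinel flushes the last run
            if v:
                run += 1
            else:
                if MIN_LEN <= run <= MAX_LEN:
                    lengths.append(run)
                run = 0
    return float(np.mean(lengths)) if lengths else 0.0
```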
  • the [blur direction] is estimated based on the average length of the strips 22 calculated for each direction.
  • the [blur direction] is estimated by finding the direction in which the most strips 22 are formed and taking that direction as the [blur direction]. More specifically, among the average strip lengths calculated for each direction, the average strip length in the horizontal direction is compared with that in the vertical direction; when the horizontal average is equal to or greater than the vertical average, the [horizontal-vertical strip distribution ratio] is calculated by the following equation (1), and when it is smaller, by the following equation (2).
  • similarly, the average strip length in the lower-left/upper-right direction is compared with that in the upper-left/lower-right direction.
  • when the average strip length in the lower-left/upper-right direction is equal to or greater, the [oblique strip distribution ratio] is calculated by the following equation (3).
  • when it is smaller, the [oblique strip distribution ratio] is calculated by the following equation (4).
  • next, the [number of blur pixels] is estimated. That is, when the [blur direction] is horizontal, the [number of blur pixels] is set to the average strip length in the horizontal direction, and when the [blur direction] is vertical, to the average strip length in the vertical direction. Likewise, when the [blur direction] is lower-left/upper-right, the [number of blur pixels] is set to the average strip length in the lower-left/upper-right direction, and when it is upper-left/lower-right, to the average strip length in the upper-left/lower-right direction.
  • in addition, the [certainty factor based on the number of strip-formable pixels] is calculated.
  • as described above, the direction in which the most strips 22 are formed is determined to be the [blur direction].
  • however, the certainty of that determination differs between the case where many pixels capable of forming strips 22 exist and the case where the direction is chosen while only a few such pixels, and therefore only a few strips 22, are present.
  • therefore, in the present invention, as a coefficient indicating the degree of certainty in the estimation of the [blur direction], the [certainty factor based on the number of strip-formable pixels] is introduced in addition to the [certainty regarding blur direction].
  • to calculate it, the [majority-formation pixel count ratio] is first calculated by the following equation (6).
  • the [majority-formation pixel count] in equation (6) is: when the [blur direction] is horizontal, the number of pixels in the original image having flatness in the horizontal direction; when it is vertical, the number of pixels having flatness in the vertical direction; when it is lower-left/upper-right, the number of pixels having flatness in the lower-left/upper-right direction; and when it is upper-left/lower-right, the number of pixels having flatness in the upper-left/lower-right direction.
  • FIG. 8 is a flowchart showing the flow of processing in the out-of-focus only mode (S3).
  • this image processing program 6 performs sharpening processing on the original image on the assumption that photons emitted from the subject diffuse according to a two-dimensional normal distribution.
  • in this mode, the standard deviation σ, which is a parameter of the two-dimensional normal distribution, is set to an optimum value, and then the sharpening process is performed on the original image.
  • the standard deviation ⁇ is set to the upper limit value, and the sharpening process is executed on the original image (S5).
  • a practical upper limit value and a lower limit value are predetermined as standard deviation ⁇ , which is a parameter of a two-dimensional normal distribution.
  • the upper limit value is the value beyond which, although increasing the standard deviation σ would further increase the sharpness of the image, the roughness of the image becomes unacceptable.
  • the lower limit value is the value below which no sharpening effect is obtained or the image becomes abnormal.
  • the upper limit value is set to 0.7 and the lower limit value is set to 0.3.
  • the setting is not limited to this embodiment and can be changed as appropriate. The details of the sharpening process will be described later and are omitted here.
  • the roughness measurement process is executed on the image after the sharpening process (S6).
  • the measured roughness is defined as [roughness (σ = upper limit)], and this is compared with the reference roughness (threshold) (S7).
  • this reference roughness is a guideline value of the roughness at which a good sharpening effect is obtained. In this embodiment the reference roughness is set to 30; of course, it can be changed as appropriate. If [roughness (σ = upper limit)] is smaller than the reference roughness (S7: Yes), the standard deviation σ is set to the upper limit value, the original image is sharpened (S8), the sharpened image is output as the final result, and the process ends.
  • in other words, when even the strongest sharpening, with the standard deviation σ set to the upper limit value, does not raise the roughness of the processed image to the reference roughness, the upper limit is adopted as the optimum value of the standard deviation σ. Note that the details of the roughness measurement process will be described later and are omitted here.
  • otherwise, the standard deviation σ is set to the lower limit value and the sharpening process is executed on the original image (S9). Then, the roughness measurement process is executed on the image after the sharpening process (S10).
  • the measured roughness is defined as [roughness (σ = lower limit)], and this is compared with the reference roughness (S11). If [roughness (σ = lower limit)] is larger than the reference roughness (S11: Yes), the standard deviation σ is set to the lower limit value, the original image is sharpened (S12), and, after the sharpened image is output as the final result, the process ends.
  • in other words, when the roughness exceeds the reference roughness even with the weakest sharpening, the lower limit is adopted as the optimum value of the standard deviation σ. Even in this case, the lower limit value is set at a level where there is almost no difference between the images before and after the sharpening process, so the image never becomes abnormal.
  • otherwise, the first intermediate value of the standard deviation σ is calculated (S13). This first intermediate value is tentatively set as a value reasonably close to the optimum value, between the upper limit value and the lower limit value.
  • to calculate this first intermediate value, a graph is first created with the standard deviation σ on the X axis and the roughness on the Y axis, as shown in Fig. 9, and the points P1 = (upper limit, [roughness (σ = upper limit)]) and P2 = (lower limit, [roughness (σ = lower limit)]) are plotted. The first intermediate value is then taken as the σ value at which the line through P1 and P2 reaches the reference roughness.
  • the standard deviation ⁇ is set to the first intermediate value, and the sharpening process is executed on the original image (S14).
  • the roughness measurement process is executed on the image after the sharpening process (S15).
  • the measured roughness is defined as [roughness (σ = first intermediate value)].
  • a second intermediate value of the standard deviation ⁇ is calculated (S16).
  • the second intermediate value is a value closer to the optimum value than the first intermediate value, and is set either between the lower limit value and the first intermediate value or between the first intermediate value and the upper limit value.
  • the calculation method of the second intermediate value falls into two cases according to whether [roughness (σ = first intermediate value)] is larger than the reference roughness. When it is larger, a graph is created with the standard deviation σ on the X axis and the roughness on the Y axis, as shown in Fig. 10; when it is smaller, the graph of Fig. 11 is used instead.
  • in either case, the σ value at which the roughness becomes the reference roughness is calculated as the next intermediate value; the standard deviation σ is then set to that intermediate value, the sharpening process is performed, and the roughness of the resulting image is measured, so that the roughness comes closer to the reference roughness.
  • by repeating this procedure of calculating intermediate values, the optimum value of the standard deviation σ is approached.
  • in this embodiment, the second intermediate value is adopted as the optimum value of the standard deviation σ, but it is also possible to further calculate a third intermediate value between the lower limit value and the second intermediate value, or between the second intermediate value and the upper limit value, and adopt this as the optimum value.
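  • the S5 to S16 search just described can be summarized in the following sketch (Python; `sharpen` and `roughness` are hypothetical stand-ins for the patent's sharpening and roughness-measurement routines, and the linear-interpolation step is one plausible reading of the graphs in Figs. 9 to 11):

```python
def find_optimal_sigma(original, sharpen, roughness,
                       lower=0.3, upper=0.7, reference=30.0, steps=2):
    """Sketch of the optimum-sigma search of steps S5 to S16."""
    r_upper = roughness(sharpen(original, upper))
    if r_upper < reference:        # S7: Yes - even the strongest
        return upper               # sharpening stays smooth enough (S8)
    r_lower = roughness(sharpen(original, lower))
    if r_lower > reference:        # S11: Yes - even the weakest
        return lower               # sharpening is already too rough (S12)
    lo, r_lo = lower, r_lower
    hi, r_hi = upper, r_upper
    sigma = lo
    for _ in range(steps):         # first and second intermediate values
        if r_hi == r_lo:
            break
        # sigma where the line through P1 and P2 crosses the reference roughness
        sigma = lo + (hi - lo) * (reference - r_lo) / (r_hi - r_lo)
        r = roughness(sharpen(original, sigma))
        if r > reference:          # too rough: continue below sigma
            hi, r_hi = sigma, r
        else:                      # too smooth: continue above sigma
            lo, r_lo = sigma, r
    return sigma
```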
  • in the sharpening process, the surface of the subject H is divided into a grid, the portion of the subject H located in each cell is denoted subject (x, y), and the number of photons, among all the photons emitted from this portion, that reach the image sensor surface S of the digital camera, that is, the light quantity, is denoted output light quantity–subject (x, y).
  • the image sensor surface S of the digital camera is formed by arranging the same number of image sensor elements as the number of divisions of the subject; a given element is denoted image sensor (x, y), and the number of photons it receives, that is, the light quantity, is denoted input light quantity–image sensor (x, y).
  • Table 1 is a table showing an example of the photon diffusion rate.
  • according to Table 1, about 40% of the photons that reach the image sensor surface S from the subject (x, y) are received by the image sensor (x, y); about 10% each are received by the image sensors (x, y−1), (x, y+1), (x−1, y), and (x+1, y); and about 5% each are received by the image sensors (x−1, y−1), (x−1, y+1), (x+1, y−1), and (x+1, y+1).
  • in other words, the range over which photons emitted from the subject (x, y) diffuse is limited to the nine image sensor elements centered on the image sensor (x, y), and the photons do not diffuse outside this range.
  • Table 1: Diffusion rates of photons
         5%   10%    5%
        10%   40%   10%
         5%   10%    5%
  • the input light quantity–image sensor (x, y) in Expression (10) is a known value, obtained for all x and y by referring to the original image data input from the media reader 2. Therefore, based on this known input light quantity–image sensor (x, y), by obtaining for all x and y the output light quantity–subject (x, y), that is, the total number of photons emitted from the subject (x, y) that reached the image sensor surface S of the digital camera, an accurate image of the photographed subject H can be obtained. That is, an accurate, blur-free image of the subject H can be known from the original image in which the blur occurred.
  • here, the output light quantity of each neighboring portion of the subject is approximated by the input light quantity of the corresponding image sensor:
    output light quantity–subject (x−1, y) ≈ input light quantity–image sensor (x−1, y)
    output light quantity–subject (x+1, y) ≈ input light quantity–image sensor (x+1, y)
    output light quantity–subject (x, y−1) ≈ input light quantity–image sensor (x, y−1)
    output light quantity–subject (x, y+1) ≈ input light quantity–image sensor (x, y+1)
    output light quantity–subject (x−1, y−1) ≈ input light quantity–image sensor (x−1, y−1)
    output light quantity–subject (x−1, y+1) ≈ input light quantity–image sensor (x−1, y+1)
    output light quantity–subject (x+1, y−1) ≈ input light quantity–image sensor (x+1, y−1)
    output light quantity–subject (x+1, y+1) ≈ input light quantity–image sensor (x+1, y+1)
  • formula (11) can be rewritten as the following formula (13) by newly introducing the value [total light quantity–surroundings] defined by the following formula (12).
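  • a minimal sketch of this computation (Python/SciPy), assuming the Table 1 rates and the neighbour approximations listed above; the patent's exact equations (10) to (14) are not reproduced in this text, so this illustrates the idea rather than the patented formula:

```python
import numpy as np
from scipy.ndimage import convolve

# Table 1 rates: centre 40%, edge neighbours 10% each, corners 5% each
KERNEL = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])

def estimate_output(input_img):
    """Estimate output light quantity-subject (x, y) for every cell:
    subtract the neighbours' contributions (approximated by their input
    light quantities) from input light quantity-image sensor (x, y) and
    divide by the 40% centre rate."""
    img = input_img.astype(np.float64)
    neighbours = KERNEL.copy()
    neighbours[1, 1] = 0.0                       # neighbour rates only
    spill = convolve(img, neighbours, mode='nearest')
    return (img - spill) / KERNEL[1, 1]
```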
  • the diffusion rates of photons are not limited to those in Table 1 and can be changed to any setting; by changing the diffusion rate setting, the degree of blur correction can be increased or decreased.
  • in Table 1, the diffusion rates are set so that the photons emitted from the subject (x, y) diffuse only within the range of the nine image sensor elements centered on the image sensor (x, y) and do not diffuse outside that range; however, the setting is not limited to this.
  • for example, the diffusion rates may be set over a wider range of elements centered on the image sensor (x, y) than the elements immediately adjacent to it.
  • Table 2: Diffusion rates of photons (degree of out-of-focus correction set to a higher level)
  • Table 3: Diffusion rates of photons (degree of out-of-focus correction set to a lower level)
  • Table 1, Table 2, and Table 3 have been described as examples of tables indicating the photon diffusion rates, but the tables are not limited to these; the diffusion rates can be set and changed arbitrarily.
  • in one preferred setting, the table of photon diffusion rates is set on the assumption that the photons emitted from the subject diffuse according to a two-dimensional normal distribution. This has the advantage that the degree of defocus correction can be increased or decreased easily by controlling only one parameter, the standard deviation σ.
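  • a sketch of such a table, assuming a sampled and normalized two-dimensional normal distribution (Python/NumPy; the patent does not give the exact discretization):

```python
import numpy as np

def gaussian_diffusion_table(sigma, size=3):
    """Sample a two-dimensional normal distribution on a size x size grid
    and normalize it so the rates sum to 1, like Table 1. A single
    parameter, sigma, then controls the degree of correction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    table = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return table / table.sum()

# smaller sigma concentrates weight on the centre (weaker correction)
print(gaussian_diffusion_table(0.3))
print(gaussian_diffusion_table(0.7))
```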
  • the image processing algorithm in this case is the same as that described above in that it is performed using Expression (14), and the description thereof is omitted here.
  • the present image processing program 6 can also correct camera shake blur by setting the photon diffusion rates differently from those used for out-of-focus correction.
  • Table 4 shows examples of photon diffusivity settings when correcting camera shake blur.
  • in Table 4, the diffusion rates are set so that the photons are received not only by the image sensor (x, y) but also by the image sensor (x−1, y) adjacent in the direction opposite to the camera shake direction and by the image sensor (x−2, y) adjacent to that.
  • Table 4: Diffusion rates of photons (for camera shake correction)
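  • the values of Table 4 are not reproduced in this text; as a purely hypothetical illustration, a one-dimensional table spreading the photons evenly over the [number of blur pixels] sensors along the shake direction could look like this (Python/NumPy):

```python
import numpy as np

def shake_diffusion_table(blur_pixels):
    """Purely hypothetical horizontal-shake table: spread the photons
    evenly over `blur_pixels` sensors along the shake direction."""
    return np.full((1, blur_pixels), 1.0 / blur_pixels)

print(shake_diffusion_table(3))   # [[0.333... 0.333... 0.333...]]
```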
  • [average in mesh–input light quantity–image sensor (m, n)] means that, for the mesh (m, n) located at the m-th position in the horizontal direction and the n-th position in the vertical direction of the original image, the [input light quantity–image sensor (x, y)] is examined for all image sensors (x, y) within that mesh and the average value is calculated.
  • here, "adjacent pixel" means a pixel determined as follows, rather than the pixel stored adjacently in the image data.
  • with the number of pixels stored in the vertical direction denoted V and the relevant pixel count denoted K, the pixel adjacent to a pixel (x, y) can be expressed as the pixel (x−I, y), using the value I calculated by the following equations (18) and (19). This means the pixel stored at a position I pixels to the left as seen from the pixel (x, y).
  • the adjacent pixel is not limited to a pixel adjacent in the left direction; a pixel adjacent in an arbitrary direction can be selected.
  • a frequency distribution table is created with the brightness difference, in intervals of width 0.1, on the horizontal axis and the frequency on the vertical axis.
  • the sum of the frequencies in the brightness-difference range from −0.1 to +0.1 is defined as [central region total frequency], while the sum of the frequencies in the region outside this central region, that is, the brightness-difference ranges from −0.3 to −0.1 and from +0.1 to +0.3, is defined as [near-center region total frequency].
  • the roughness of the mesh can be expressed by the following equation (20).
  • the horizontal axis of the frequency distribution table is not limited to intervals of width 0.1; any interval width can be used.
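  • a sketch of the roughness measurement for one mesh (Python/NumPy; the brightness values are assumed normalized to the range 0..1, and since equation (20) itself is not reproduced in this text, the final combination of the two totals as a percentage ratio is an assumption):

```python
import numpy as np

def mesh_roughness(brightness, shift=1, bin_width=0.1):
    """Histogram the brightness differences between each pixel and its
    'adjacent' pixel `shift` positions to the left, then compare the
    central region (|d| <= 0.1) with the near-center region
    (0.1 < |d| <= 0.3)."""
    d = (brightness[:, shift:] - brightness[:, :-shift]).ravel()
    central = np.count_nonzero(np.abs(d) <= bin_width)
    near_center = np.count_nonzero((np.abs(d) > bin_width) &
                                   (np.abs(d) <= 3 * bin_width))
    # assumed combination: near-center frequency as a percentage of central
    return 100.0 * near_center / central if central else float('inf')
```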
  • FIG. 17 is a flowchart showing the flow of processing in the out-of-focus/camera-shake countermeasure mode (S4).
  • in this mode as well, this image processing program 6 executes sharpening processing on the original image on the assumption that photons emitted from the subject diffuse according to a two-dimensional normal distribution.
  • the out-of-focus/camera-shake countermeasure mode (S4) is characterized in that the photon diffusion rates are set differently from the out-of-focus only mode (S3), so that camera shake can also be handled.
  • first, flattening parameter values are calculated based on the output results of the camera shake situation estimation process (S1).
  • the flattening parameter values consist of three numerical values: the [long-axis direction angle of the flattening process], the [long-axis direction magnification], and the [short-axis direction magnification]. They are calculated based on the three types of information output from the camera shake situation estimation process (S1): the [blur direction], the [number of blur pixels], and the [overall judgment certainty]. The [long-axis direction angle of the flattening process] is determined by the [blur direction].
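  • the flattening itself can be pictured as deforming the two-dimensional normal distribution into an ellipse; the following sketch (Python/NumPy) is a hypothetical illustration of that idea, since the patent's equations (24) to (30) are not reproduced in this text:

```python
import numpy as np

def flattened_gaussian_table(sigma, angle_deg, long_mag, short_mag, size=5):
    """Stretch the two-dimensional normal distribution by `long_mag` along
    the blur direction (at `angle_deg`) and by `short_mag` across it,
    then normalize the rates to sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(angle_deg)
    u = x * np.cos(t) + y * np.sin(t)      # coordinate along the long axis
    v = -x * np.sin(t) + y * np.cos(t)     # coordinate along the short axis
    table = np.exp(-((u / long_mag) ** 2 + (v / short_mag) ** 2)
                   / (2.0 * sigma ** 2))
    return table / table.sum()
```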
  • FIG. 18 shows the [diffusion rate setting original data table]. The coordinates shown in the figure are assigned to each square constituting the [diffusion rate setting original data table].
  • the diffusion rates are then set using the [value set in square (x, y)] calculated by the following equations (24) to (29).
  • equation (30) means the sum of the [value set in square (x, y)] over all squares of the [diffusion rate setting original data table].
  • as described above, the present invention is applicable to an image processing program that corrects out-of-focus images and camera shake images into sharp images, to a computer-readable recording medium on which the program is recorded, and to an image processing apparatus.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The present invention relates to an image processing program that causes a computer to perform sharpening processing of a blurred original image on the assumption that photons emitted from a subject diffuse according to a prescribed characteristic. Concretely, the image processing program causes a computer to execute sharpening processing of the original image according to a photon diffusion characteristic, processing for measuring the degree of apparent roughness of the image after the sharpening processing, and processing for determining, as the optimum value, the largest parameter value governing the diffusion characteristic within the range in which the roughness does not exceed a prescribed value.
PCT/JP2006/311547 2006-06-08 2006-06-08 Programme de traitement d'images et support d'enregistrement lisible par ordinateur sur lequel est enregistré le programme et dispositif de traitement d'images WO2007141863A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/JP2006/311547 WO2007141863A1 (fr) 2006-06-08 2006-06-08 Programme de traitement d'images et support d'enregistrement lisible par ordinateur sur lequel est enregistré le programme et dispositif de traitement d'images
JP2006523460A JP3915037B1 (ja) 2006-06-08 2006-06-08 画像処理プログラム及びこれが記録されたコンピュータ読み取り可能な記録媒体、ならびに画像処理装置
PCT/JP2006/315679 WO2007141886A1 (fr) 2006-06-08 2006-08-08 programme d'estimation d'état de tremblement de caméra, support lisible par un ordinateur SUR LEQUEL EST enregistré le programme et dispositif d'estimation d'état de tremblement de caméra
PCT/JP2006/315680 WO2007141887A1 (fr) 2006-06-08 2006-08-08 programme de mesure de rugosité, support d'enregistrement lisible par un ordinateur SUR LEQUEL EST enregistré le programme, et dispositif de mesure de rugosité

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2006/311547 WO2007141863A1 (fr) 2006-06-08 2006-06-08 Programme de traitement d'images et support d'enregistrement lisible par ordinateur sur lequel est enregistré le programme et dispositif de traitement d'images

Publications (1)

Publication Number Publication Date
WO2007141863A1 true WO2007141863A1 (fr) 2007-12-13

Family

ID=38170214

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/JP2006/311547 WO2007141863A1 (fr) 2006-06-08 2006-06-08 Programme de traitement d'images et support d'enregistrement lisible par ordinateur sur lequel est enregistré le programme et dispositif de traitement d'images
PCT/JP2006/315679 WO2007141886A1 (fr) 2006-06-08 2006-08-08 programme d'estimation d'état de tremblement de caméra, support lisible par un ordinateur SUR LEQUEL EST enregistré le programme et dispositif d'estimation d'état de tremblement de caméra
PCT/JP2006/315680 WO2007141887A1 (fr) 2006-06-08 2006-08-08 programme de mesure de rugosité, support d'enregistrement lisible par un ordinateur SUR LEQUEL EST enregistré le programme, et dispositif de mesure de rugosité

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/JP2006/315679 WO2007141886A1 (fr) 2006-06-08 2006-08-08 programme d'estimation d'état de tremblement de caméra, support lisible par un ordinateur SUR LEQUEL EST enregistré le programme et dispositif d'estimation d'état de tremblement de caméra
PCT/JP2006/315680 WO2007141887A1 (fr) 2006-06-08 2006-08-08 programme de mesure de rugosité, support d'enregistrement lisible par un ordinateur SUR LEQUEL EST enregistré le programme, et dispositif de mesure de rugosité

Country Status (2)

Country Link
JP (1) JP3915037B1 (fr)
WO (3) WO2007141863A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07121703A (ja) * 1993-10-22 1995-05-12 Canon Inc 画像処理方法
JP2000004363A (ja) * 1998-06-17 2000-01-07 Olympus Optical Co Ltd 画像復元方法
JP2000298300A (ja) * 1999-04-13 2000-10-24 Ricoh Co Ltd 手ぶれ画像補正方法、記録媒体及び撮像装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61109182A (ja) * 1984-11-01 1986-05-27 Fujitsu Ltd 画像処理装置
JPH04157353A (ja) * 1990-10-19 1992-05-29 Komatsu Ltd ひび割れ画像データ処理方法
JPH096962A (ja) * 1995-06-21 1997-01-10 Dainippon Screen Mfg Co Ltd 鮮鋭度の評価方法
JPH1139479A (ja) * 1997-07-16 1999-02-12 Dainippon Screen Mfg Co Ltd 鮮鋭度の評価方法
US7397953B2 (en) * 2001-07-24 2008-07-08 Hewlett-Packard Development Company, L.P. Image block classification based on entropy of differences
JP4515208B2 (ja) * 2003-09-25 2010-07-28 富士フイルム株式会社 画像処理方法および装置並びにプログラム
JP2006019874A (ja) * 2004-06-30 2006-01-19 Fuji Photo Film Co Ltd 手ぶれ・ピンボケレベル報知方法および撮像装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07121703A (ja) * 1993-10-22 1995-05-12 Canon Inc 画像処理方法
JP2000004363A (ja) * 1998-06-17 2000-01-07 Olympus Optical Co Ltd 画像復元方法
JP2000298300A (ja) * 1999-04-13 2000-10-24 Ricoh Co Ltd 手ぶれ画像補正方法、記録媒体及び撮像装置

Also Published As

Publication number Publication date
WO2007141887A1 (fr) 2007-12-13
JPWO2007141863A1 (ja) 2009-10-15
JP3915037B1 (ja) 2007-05-16
WO2007141886A1 (fr) 2007-12-13

Similar Documents

Publication Publication Date Title
US7599568B2 (en) Image processing method, apparatus, and program
US7356254B2 (en) Image processing method, apparatus, and program
US9361680B2 (en) Image processing apparatus, image processing method, and imaging apparatus
Boracchi et al. Modeling the performance of image restoration from motion blur
JP4351911B2 (ja) デジタルスチルカメラにおける取り込み画像の写真品質を評価する方法及び装置
US8224116B2 (en) Image processing method and imaging apparatus
JP5389903B2 (ja) 最適映像選択
US10477128B2 (en) Neighborhood haze density estimation for single-image dehaze
KR101662846B1 (ko) 아웃 포커싱 촬영에서 빛망울 효과를 생성하기 위한 장치 및 방법
JP4454657B2 (ja) ぶれ補正装置及び方法、並びに撮像装置
TWI462054B (zh) Estimation Method of Image Vagueness and Evaluation Method of Image Quality
JP5725194B2 (ja) 夜景画像ボケ検出システム
JP6293374B2 (ja) 画像処理装置、画像処理方法、プログラム、これを記録した記録媒体、映像撮影装置、及び映像記録再生装置
US20190253609A1 (en) Image processing apparatus, image processing method, and non-transitory computer readable storage medium
JP5158202B2 (ja) 画像補正装置および画像補正方法
JP4515208B2 (ja) 画像処理方法および装置並びにプログラム
JP2018195084A (ja) 画像処理装置及び画像処理方法、プログラム、記憶媒体
Choi et al. A method for fast multi-exposure image fusion
JP2009088935A (ja) 画像記録装置、画像補正装置及び撮像装置
US8849066B2 (en) Image processing apparatus, image processing method, and storage medium
CN109727193B (zh) 图像虚化方法、装置及电子设备
RU2338252C1 (ru) Способ предотвращения печати размытых фотографий
Simpkins et al. Robust grid registration for non-blind PSF estimation
WO2007141863A1 (fr) Programme de traitement d'images et support d'enregistrement lisible par ordinateur sur lequel est enregistré le programme et dispositif de traitement d'images
CN115239569A (zh) 图像暗角移除方法、全景图像生成方法和相关设备

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2006523460

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06766509

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 06766509

Country of ref document: EP

Kind code of ref document: A1