WO2007083621A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2007083621A1
WO2007083621A1 PCT/JP2007/050486 JP2007050486W
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
image data
pixel
restored
Prior art date
Application number
PCT/JP2007/050486
Other languages
English (en)
Japanese (ja)
Inventor
Fuminori Takahashi
Original Assignee
Nittoh Kogaku K.K
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nittoh Kogaku K.K filed Critical Nittoh Kogaku K.K
Priority to JP2007554892A priority Critical patent/JP4885150B2/ja
Publication of WO2007083621A1 publication Critical patent/WO2007083621A1/fr


Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B5/00Adjustment of optical system relative to image or object surface other than for focusing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2101/00Still video cameras

Definitions

  • the present invention relates to an image processing apparatus.
  • as methods of correcting camera shake, a method of moving a lens and a method of circuit processing are known.
  • as the method of moving a lens, a method is known in which camera shake is detected and correction is performed by moving a predetermined lens in accordance with the detected camera shake (see Patent Document 1).
  • as the method of circuit processing, a change in the optical axis of the camera is detected by an angular acceleration sensor, and a transfer function representing the blurring state at the time of photographing is obtained for the photographed image.
  • a method is known in which the transfer function is inversely transformed to restore the image (see Patent Document 2).
  • Patent Document 1 Japanese Patent Laid-Open No. 6-317824 (see abstract)
  • Patent Document 2 JP-A-11-24122 (see abstract)
  • a camera adopting the camera shake correction described in Patent Document 1 requires space for hardware, such as a motor, for driving the lens, and therefore becomes large.
  • such hardware itself and a drive circuit for operating the hardware are required, which increases costs.
  • the camera shake correction described in Patent Document 2 has the following problems although the above-described problems are eliminated.
  • image restoration is difficult for the following two reasons.
  • first, the transfer function is very vulnerable to noise and to errors in the information, so the value of the transfer function to be obtained fluctuates greatly due to such slight variations.
  • the restored image obtained by the inverse transformation is far from an image taken with no camera shake, and cannot be used in practice.
  • second, a method of estimating the solution of the simultaneous equations by singular value decomposition or the like can be adopted, but the amount of calculation required for the estimation becomes astronomically large, so there is a high risk that it cannot be solved in practice.
  • an object of the present invention is to provide an image processing apparatus having a realistic circuit processing method while preventing an increase in size of the apparatus when restoring an image.
  • an image processing apparatus according to the present invention includes a processing unit that processes an image, and the processing unit uses data of change factor information that causes an image change.
  • image data of a comparison image is generated from the image data of an arbitrary image, and the image data of the original image to be processed is then compared with the image data of the comparison image; difference data is obtained as a result of this comparison.
  • the difference data is distributed to the image data of the arbitrary image using the data of the change factor information to generate the image data of the restored image, and it is judged whether the image data of the restored image is within a limit value; if the image data of the restored image exceeds the range of the limit value, the restored image is corrected based on the excess data.
  • the limit value is a limit value of the amount of energy that can be stored in a pixel.
  • the correction amount of the correction of the original restored image performed based on the excess data is obtained using the data of the change factor information.
  • the amount of distribution to the original restored image data is determined using the data of the change factor information, so that the restored image data can be efficiently accommodated within the limit value.
  • in the distribution of the difference data to the image data of the arbitrary image performed to generate the image data of the restored image, the distribution amount is calculated using a value obtained by dividing the difference data by the data of the change factor information.
  • the processing unit further performs a process of removing, from unprocessed pixels, the influence of the distribution amounts of pixels that have already been subjected to the correction and restoration processing.
  • the amount of distribution to the original restored image data is determined using the data of the change factor information, so that the restored image data can be efficiently accommodated within the limit value.
  • according to the present invention, when restoring a deteriorated image, it is possible to prevent an increase in the size of the apparatus and to provide an image processing apparatus having a realistic circuit processing method.
  • FIG. 1 is a block diagram showing a main configuration of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is an external perspective view showing an outline of the image processing apparatus shown in FIG. 1, and is a view for explaining an arrangement position of angular velocity sensors.
  • FIG. 3 is a processing flow diagram (processing routine) performed by the processing unit of the image processing apparatus shown in FIG. 1 for explaining the basic concept.
  • FIG. 4 is a diagram for explaining the concept of the processing method shown in FIG.
  • FIG. 5 is a diagram for specifically explaining the processing method shown in FIG. 3 using hand shake as an example, and a table showing energy concentration when there is no hand shake.
  • FIG. 6 is a diagram for specifically explaining the processing method shown in FIG. 3 using camera shake as an example, and is a diagram showing image data when there is no camera shake.
  • FIG. 7 is a diagram for specifically explaining the processing method shown in FIG. 3 with an example of camera shake, and is a diagram showing energy dispersion when camera shake occurs.
  • FIG. 8 is a diagram for specifically explaining the processing method shown in FIG. 3 by taking an example of camera shake, and is a diagram for explaining a situation in which image data for comparison is generated from an arbitrary image.
  • FIG. 9 is a diagram for specifically explaining the processing method shown in FIG. 3 using camera shake as an example, and is a diagram for explaining a situation in which comparison image data is compared with the blurred original image to be processed to produce difference data.
  • FIG. 10 is a diagram for specifically explaining the processing method shown in FIG. 3 by taking an example of camera shake, and explains a situation in which restored image data is generated by allocating difference data and adding it to an arbitrary image.
  • FIG. 11 is a diagram for specifically explaining the processing method shown in FIG. 3 by taking an example of camera shake, and is a diagram for explaining a situation in which new comparison image data is generated from the generated restored image data and new difference data is produced from that data and the original image to be processed.
  • FIG. 12 is a diagram for specifically explaining the processing method shown in FIG. 3 by taking an example of camera shake, and is a diagram for explaining a situation in which the newly generated difference data is allocated and new restored image data is generated.
  • FIG. 13 is a processing flow for explaining the distribution of difference data.
  • FIG. 14 is a diagram illustrating the contents of each pixel image data in the image data handled by the image processing apparatus shown in FIG. 1.
  • FIG. 15 is a diagram showing a relationship between corrected pixel image data handled by the image processing apparatus shown in FIG. 1 and pixel image data of comparison image data.
  • FIG. 16 is a diagram for explaining the distribution amount of difference data for pixels after pixel n handled by the image processing apparatus shown in FIG. 1.
  • FIG. 17 is a diagram for explaining another processing method using the processing method shown in FIG. 3 or FIG. 13, in which (A) shows data of an original image to be processed, and (B) shows data obtained by thinning out the data of (A).
  • FIG. 18 is a diagram for explaining another processing method using the processing method shown in FIG. 3 or FIG. 13, in which (A) shows data of an original image to be processed, and (B) shows data obtained by extracting a part of the data of (A).
  • FIG. 19 is a diagram for explaining a modification of the processing method shown in FIG. 18, and is a diagram showing that the data of the original image is divided into four and a part of each divided area is extracted for iterative processing.
  • the image processing apparatus 1 is a so-called digital camera for consumer use that uses a CCD for the image pickup unit.
  • the image processing apparatus 1 can also be applied to surveillance cameras, TV cameras, endoscopes, and cameras for other uses that use an imaging element such as a CCD for the imaging unit, as well as to devices other than cameras, such as microscopes, binoculars, and diagnostic imaging devices such as NMR imaging.
  • the image processing apparatus 1 includes an imaging unit 2 that captures an image of a person or the like, a control system unit 3 that drives the imaging unit 2, and a processing unit 4 that processes the image captured by the imaging unit 2.
  • the image processing apparatus 1 according to this embodiment further includes a recording unit 5 that records the image processed by the processing unit 4, a detection unit 6 having angular velocity sensors that detects change factor information causing changes such as image degradation, and a factor information storage unit 7 for storing known change factor information that causes image degradation and the like.
  • the imaging unit 2 is a part that includes an imaging optical system having a lens and an imaging element such as a CCD or C-MOS that converts light that has passed through the lens into an electrical signal.
  • the control system unit 3 controls each unit in the image processing apparatus 1 such as the imaging unit 2, the processing unit 4, the recording unit 5, the detection unit 6, and the factor information storage unit 7.
  • the processing unit 4 is composed of an image processing processor and is configured as an ASIC (Application Specific Integrated Circuit).
  • the processing unit 4 may store an image serving as a base when generating image data for comparison images (hereinafter referred to as comparison image data), which will be described later.
  • the processing unit 4 may be configured to process with software rather than configured as hardware such as an ASIC.
  • the recording unit 5 may employ semiconductor memory, magnetic recording means such as a hard disk drive, or optical recording means using a DVD or the like.
  • the detection unit 6 includes two angular velocity sensors that detect the angular velocities around the X axis and the Y axis, which are perpendicular to the Z axis serving as the optical axis of the image processing apparatus 1. Camera shake when shooting with the camera includes movement in the X, Y, and Z directions and rotation around the Z axis, but rotation around the X axis and rotation around the Y axis blur the captured image greatly even when the variation itself is slight. Therefore, in this embodiment, only the two angular velocity sensors around the X axis and the Y axis shown in FIG. 2 are arranged. For greater accuracy and completeness, an additional angular velocity sensor around the Z axis or sensors that detect movement in the X or Y direction may be added. An angular acceleration sensor may also be used instead of an angular velocity sensor.
  • the factor information storage unit 7 is a recording unit that stores change factor information such as known change factor information, such as aberrations of the optical system.
  • the factor information storage unit 7 stores information on aberrations of the optical system and lens distortion. The information is used when restoring blurring of camera shake described later.
  • I0 is the image data of an arbitrary image (hereinafter, arbitrary image data).
  • Img' refers to the captured image, that is, the data of the changed image that has changed due to the change factor, and is the image data of the original image to be processed in this processing (hereinafter referred to as the original image data).
  • D is difference data between the original image data Img ′ and the comparison image data.
  • k is the distribution ratio based on the data G of the change factor information.
  • I0+n is the restored image data.
  • the aim of the processing is for the restored image data to approximate the image data before the change (hereinafter referred to as the base image data Img) that is the basis of the original image data Img'.
  • the relationship between Img and Img ' is expressed by the following equation (1).
  • the difference data D may be a simple difference between corresponding pixels, but in general, it differs depending on the data G of the change factor information and is expressed by the following equation (2).
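The equations themselves are not reproduced in this text. A plausible reconstruction, consistent with the surrounding description (the changed image is produced from image data through the change factor data G, and the difference data D depends in general on G), might read as follows; the exact formulas in the published patent may differ.

```latex
% Hypothetical reconstruction of equations (1) and (2) -- an assumption,
% not a quotation from the patent text.
\begin{align}
  \mathrm{Img}' &= \mathrm{Img} * G \tag{1}\\
  D &= f\!\left(\mathrm{Img}',\, I_0',\, G\right)
     \quad\bigl(\approx \mathrm{Img}' - I_0' \text{ in the simplest, pixel-wise case}\bigr) \tag{2}
\end{align}
```

Here $*$ denotes the distribution (convolution-like) operation by the change factor data G, and $I_0'$ is the comparison image data generated from the arbitrary image data $I_0$.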
  • the processing routine of the processing unit 4 first sets the arbitrary image data I0 as the initial image data I0 (step S101).
  • this initial image data I0, that is, the arbitrary image data, may be the original image data Img', which is the image data of the captured image.
  • in step S102, the initial image data I0 is inserted in place of Img in equation (1), and comparison image data I0', which is the image data of a changed image, is obtained.
  • next, the difference data D between the original image data Img' and the comparison image data I0' is calculated (step S103).
  • if the difference data D is equal to or larger than a predetermined value in step S104, the difference data D is allocated to the initial image data I0 using the data G of the change factor information to generate restored image data (step S105), and steps S102, S103, and S104 are repeated using the restored image data as the new initial image data.
  • if the difference data D is smaller than the predetermined value in step S104, the process is terminated (step S106), and the restored image data at the end of processing is treated as the corrected, restored image.
  • the recording unit 5 records initial image data I and change factor information data G.
  • that is, at the end of the processing the comparison image data I0' is approximately equal to the original image data Img'.
  • the angular velocity detection sensor detects the angular velocity every 5 seconds.
  • the value used as a determination criterion for the difference data D is “6” in this embodiment when each data is represented by 8 bits (0 to 255). That is, when it is less than 6, that is, 5 or less, the processing is finished.
  • the shake data detected by the angular velocity detection sensor does not correspond to actual shake when the sensor itself is not calibrated. Therefore, in order to cope with actual blurring, when the sensor is not calibrated, a correction is required to multiply the raw data detected by the sensor by a predetermined magnification.
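As a concrete illustration of the processing routine S101 to S106 described above, here is a minimal sketch in Python. The function blur_with_G is a stand-in for "generate a comparison image from image data using the change factor data G" (assumed here to be a simple 1-D blur), the allocation in step S105 is shown in its simplest form, and the termination threshold of 6 and the 8-bit range follow the embodiment described above; none of this is the patent's exact implementation.

```python
import numpy as np

def blur_with_G(image, G):
    """Distribute each pixel's energy forward according to the change factor
    data G (a 1-D list of ratios, e.g. [0.5, 0.3, 0.2]); this stands in for
    the operation of generating a comparison image from image data."""
    out = np.zeros_like(image, dtype=float)
    for offset, ratio in enumerate(G):
        out[offset:] += ratio * image[:len(image) - offset]
    return out

def restore(img_prime, G, threshold=6, max_iters=50):
    """Iterative restoration following steps S101-S106: start from the
    captured image Img' as the initial data I0, generate a comparison image,
    and feed the difference back until every pixel differs by less than the
    threshold (6 for 8-bit data in the embodiment) or max_iters is reached."""
    I = img_prime.astype(float)                   # S101: I0 = Img'
    for _ in range(max_iters):
        comparison = blur_with_G(I, G)            # S102: comparison data I0'
        D = img_prime - comparison                # S103: difference data D
        if np.all(np.abs(D) < threshold):         # S104: small enough everywhere?
            break                                 # S106: done
        for offset, ratio in enumerate(G):        # S105: allocate D back to I
            I[:len(I) - offset] += ratio * D[offset:]
    return np.clip(I, 0, 255)                     # keep within the 8-bit limits
```

For example, with G = [0.5, 0.3, 0.2], restore(blur_with_G(a, G), G) moves the estimate back toward a over the iterations; the refinement of the allocation step shown in FIG. 13 (division by the change factor and the limit-value correction) is omitted here.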
  • next, the details of the processing method shown in FIGS. 3 and 4 will be described with reference to FIGS. 5, 6, 7, 8, 9, 10, 11 and 12.
  • when there is no camera shake, the light energy corresponding to a given pixel is concentrated on that pixel during the exposure time.
  • when camera shake occurs, the light energy is dispersed over the blurred pixels during the exposure time.
  • if the way the light energy is dispersed during the exposure time is known, it is possible to produce a blur-free image.
  • the data "120" of the pixel "n−3" is distributed according to the distribution ratios "0.5", "0.3", "0.2" of the data G of the change factor information, which is the blur information. Accordingly, "60" is distributed to the "n−3" pixel, "36" is distributed to the "n−2" pixel, and "24" is distributed to the "n−1" pixel. Similarly, "60", which is the data of the pixel "n−2", is distributed as "30" in "n−2", "18" in "n−1", and "12" in "n".
  • from this original image data Img' and the data G of the change factor information shown in FIG. 7, an ideal image with no blur (the base image data Img) is to be calculated.
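The following minimal Python sketch reproduces this distribution for the two pixel values quoted above (120 at pixel n−3 and 60 at pixel n−2); contributions from any other pixels, and the full tables of FIGS. 6 and 7, are omitted, so the result only reflects the terms shown in the comments.

```python
# Minimal sketch of the energy distribution described above.  Only the two
# values quoted in the text (120 at pixel n-3 and 60 at pixel n-2) are used;
# energy arriving from other pixels is not modelled.
ratios = [0.5, 0.3, 0.2]        # change factor data G (blur ratios)
ideal = {-3: 120, -2: 60}       # blur-free values, offsets relative to pixel n

blurred = {}
for pos, value in ideal.items():
    for shift, r in enumerate(ratios):      # energy spreads to pos, pos+1, pos+2
        blurred[pos + shift] = blurred.get(pos + shift, 0) + value * r

print(blurred)
# pixel n-3: 120*0.5 = 60;  n-2: 120*0.3 + 60*0.5 = 66;
# n-1:       120*0.2 + 60*0.3 = 42;  n: 60*0.2 = 12
```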
  • any initial image data I0 shown in step S101 can be used; here, the original image data Img' is used as the initial image data I0.
  • in step S102, comparison image data is generated from this initial image data I0, that is, from the original image data Img', using the data G of the change factor information. That is, for example, the "n−3" pixel of the initial image data I0 is distributed according to the ratios 0.5, 0.3 and 0.2.
  • the difference data D obtained in step S103 is as shown in the bottom row of FIG. 9.
  • the size of the difference data D is judged in step S104. Specifically, the process is terminated when all the difference data D are 5 or less in absolute value, but the difference data D shown in FIG. 9 does not meet this condition, so the process proceeds to step S105. That is, the difference data D is allocated to the initial image data I0 using the data G of the change factor information, and the restored image data shown as "next input" is generated. In this case, the restored image data is generated for the first time.
  • the restored image data corresponds to "next input" in the table of FIG. 10.
  • this restored image data is then used as the input image data in step S102, and step S102 is executed again.
  • as shown in the table of FIG. 11, new comparison image data is generated from the input restored image data, and new difference data D is obtained by comparing it with the original image data Img' (step S103).
  • the size of the new difference data D is judged in step S104. If it is larger than the predetermined value, the new difference data D is allocated to the previous restored image data in step S105 to generate new restored image data.
  • then the process goes to step S104 again, and depending on the judgment, the process goes to step S105 or proceeds to step S106. Such processing is repeated.
  • either or both of the number of processes and the determination reference value of the difference data D can be set in advance.
  • the number of processings can be set to any number such as 20 or 50 times.
  • This set value can be set arbitrarily. If both the number of processing times and the criterion value are entered, the processing is stopped when either one is satisfied. When both settings are possible, the judgment reference value may be prioritized, and if the predetermined number of processes does not fall within the determination reference value, the predetermined number of processes may be repeated.
  • in the above description, the information stored in the factor information storage unit 7 was not used, but known change factor data stored there, such as optical aberration and lens distortion, may also be used.
  • the blur information and the optical aberration information are combined and regarded as one change factor.
  • the factor information storage unit 7 may not be installed, and the image may be corrected or restored only by dynamic factors during shooting, for example, only blurring.
  • this image processing apparatus 1 adopts the following processing method based on the above-described concept of the processing method. This method will be described with reference to FIGS. 13 to 16. This method is intended to optimize the amount of distribution of the difference data D to the initial image data I0.
  • processing flow S201 to S204 corresponds to processing flow S101 to S104 in FIG. 3. That is, comparison image data I0' is generated from the input initial image data I0 (S201, S202), and the difference data D for each pixel is calculated (S203). Then, the size of the difference data D is judged for each pixel (S204). If the size is appropriate for all the pixels, it is determined that the image has been restored, and the process is terminated (S205).
  • the initial image data I0 for pixels n−3, n−2, n−1, n, n+1, n+2, … is b_{n-3}, b_{n-2}, b_{n-1}, b_n, b_{n+1}, b_{n+2}, …, respectively.
  • the light energy of each pixel is distributed to the pixels in the blur direction at the ratios α, β, γ.
  • of the pixel image data b_n of pixel n, α·b_n remains dispersed in the pixel n itself,
  • β·b_n is dispersed in the pixel n+1,
  • γ·b_n is dispersed in the pixel n+2.
  • in this way, the energy is distributed to the pixels indicated by the arrows in the figure according to the data G of the change factor information.
  • b_{n-3}, b_{n-2}, b_{n-1}, b_n, b_{n+1}, b_{n+2}, …, which are the initial image data of the pixels (n−3, n−2, n−1, n, n+1, n+2, …), are each distributed according to the data G of the change factor information, and b'_{n-3}, b'_{n-2}, b'_{n-1}, b'_n, b'_{n+1}, b'_{n+2}, … are obtained as the comparison image data of each pixel.
  • for example, the comparison pixel image data b'_n of pixel n is affected by the dispersion amount α·b_n of its own pixel image data b_n, the dispersion amount β·b_{n-1} of the pixel image data b_{n-1} of pixel n−1, and the dispersion amount γ·b_{n-2} of the pixel image data b_{n-2} of pixel n−2, that is, b'_n = α·b_n + β·b_{n-1} + γ·b_{n-2}.
  • the pixel image data of the other pixels of the comparison image data I0' are likewise determined according to the data G of the change factor information; in FIG. 14, each is the content affected by the dispersion amounts of its own pixel, the pixel one to the left, and the pixel two to the left.
  • FIG. 9 shows the process by which the difference data is generated from the comparison image data and the original image data Img'.
  • each pixel of the original image data Img' is likewise the content affected by the dispersion amounts of the base image data Img of its own pixel, the pixel one to the left, and the pixel two to the left in the figure; for example, the pixel image data a'_n of pixel n is affected by the proportion α of the pixel image data a_n of its own pixel n, the proportion β of a_{n-1}, and the proportion γ of a_{n-2}, that is, a'_n = α·a_n + β·a_{n-1} + γ·a_{n-2}.
  • the difference data D, which is the difference between the pixel image data of the original image data Img' and that of the comparison image data for each pixel, becomes δ_{n-3}, δ_{n-2}, δ_{n-1}, δ_n, δ_{n+1}, … (S203).
  • next, a distribution amount that is considered to be appropriate is calculated (S206).
  • this distribution amount is calculated as follows. For example, regarding pixel n, the comparison pixel image data b'_n is the sum of the dispersion amounts from b_{n-2}, b_{n-1} and b_n, and the difference data δ_n likewise includes the dispersion amounts from the pixel image data a_{n-2}, a_{n-1}, a_n of the base image data Img for pixels n−2, n−1 and n.
  • in particular, the difference data δ_n includes α·a_n, the dispersion amount of the pixel image data a_n of the base image data Img at pixel n, and α·b_n, the dispersion amount from the pixel image data b_n of the initial image data I0; that is, the term α·a_n − α·b_n is included. It is then considered that the difference data δ_n contains this difference between the dispersion amounts of the pixel image data a_n and the pixel image data b_n at a ratio K (0 < K ≤ 1). In other words, this can be expressed as the following equation (3).
  • the distribution amount h_n allocated from the difference data δ_n to the initial image data b_n is therefore expressed as K·δ_n/α.
  • that is, a value obtained by dividing the difference data δ_n by the ratio α of the data of the change factor information is used for the distribution amount h_n.
  • this distribution amount h_n = K·δ_n/α is distributed to the pixel image data b_n of pixel n of the initial image data I0, and b_n + K·δ_n/α becomes the pixel image data of the restored image for pixel n (S207).
  • the pixel image data b_n + K·δ_n/α becomes the pixel image data for pixel n of the new restored image data, and this data is processed as the new initial image data (S105). Thereafter, the process returns to S102, and it is judged whether the restored image data approximates the base image data Img.
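Expressed as a short sketch (using the reconstruction above, in which the distribution amount is h_n = K·δ_n/α, with K a coefficient between 0 and 1 and α the change factor ratio of the pixel itself; the variable values are illustrative):

```python
def distribution_amount(delta_n, alpha, K=1.0):
    """Distribution amount h_n allocated from the difference data delta_n:
    the difference is divided by the change factor ratio alpha and scaled
    by the coefficient K (0 < K <= 1)."""
    return K * delta_n / alpha

# Example update of the restored pixel image data for pixel n
b_n = 100.0        # current pixel image data of the initial/restored data
delta_n = 12.0     # difference data for pixel n (illustrative value)
alpha = 0.5        # ratio of the change factor data G for the pixel itself
b_n_new = b_n + distribution_amount(delta_n, alpha)   # 100 + 12/0.5 = 124.0
```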
  • in S207 to S212, it is judged whether K·δ_n/α is appropriate as the distribution amount h_n; if it is not appropriate, it is corrected so that an appropriate distribution amount is obtained.
  • the determination as to whether the distribution amount K·δ_n/α is appropriate is made by judging whether the pixel image data b_n + K·δ_n/α for pixel n obtained in S207 is appropriate as pixel image data of the restored image data (S208). This judgment is made as follows.
  • the pixel image data b_n + K·δ_n/α corresponds to an image that approximates the pixel image data a_n. Therefore, it should fall within the range of limit values that the pixel image data a_n can take (min ≤ a_n ≤ max).
  • the range of the limit values is determined by the amount of light energy that can be accumulated in a pixel. For example, if the pixel image data is represented by 8 bits (0 to 255) as the intensity of light energy, the pixel image data a_n should be within the range 0 ≤ a_n ≤ 255.
  • pixel image data b_n + K·δ_n/α that exceeds the limit value range (min ≤ a_n ≤ max) of the pixel image data a_n is inappropriate as pixel image data. That is, when b_n + K·δ_n/α falls below the lower limit or exceeds the upper limit of that range, it is determined to be inappropriate as pixel image data, and therefore the distribution amount K·δ_n/α can be determined to be inappropriate.
  • the pixel image data b'_n is generated under the influence of the distribution amounts from the pixel image data b_{n-1} and b_{n-2} for pixels n−1 and n−2 of the original restored image data; by reducing the data b_{n-1} and b_{n-2}, the amount distributed from b_{n-1} and b_{n-2} to pixel n is reduced.
  • specifically, the amount by which the pixel image data b_n + K·δ_n/α exceeds the range (min ≤ b_n + K·δ_n/α ≤ max) is calculated as the excess data e_n.
  • the excess data e_n is considered to be included in K·δ_n/α, and is considered to originate from the dispersion amounts of the pixel image data b_{n-1} and b_{n-2}. That is, such a state is considered to occur because the pixel image data b_{n-1} and b_{n-2}, from which the pixel image data b_n + K·δ_n/α is restored, are inappropriate.
  • therefore, the pixel image data b_{n-1} and b_{n-2} are corrected so that the pixel image data b_n + K·δ_n/α does not exceed the range (min ≤ b_n + K·δ_n/α ≤ max).
  • this correction amount is determined based on the following concept. As shown in FIG. 15, assuming that the correction amounts for the pixel image data b_{n-1} and b_{n-2} are the correction data e_{n-1} and e_{n-2}, respectively, the pixel image data b'_n of the comparison image data at pixel n is expressed by equation (5).
  • the restored image data at pixel n can be set to an appropriate value by setting the correction data e_{n-1} and e_{n-2} so that equation (6) is satisfied. That is, the distribution amount K·δ_n/α based on the difference data δ_n can be set to an appropriate value.
  • the specific setting of the correction data e_{n-1} and e_{n-2} based on equation (6) can be considered, for example, as follows.
  • correction is performed by adding the correction data e_{n-2} and e_{n-1} set as described above to the pixel image data of the pixels n−2 and n−1, and corrected pixel image data for the pixels n−2 and n−1 is generated (S210).
  • further, the influence of the corrected distribution amounts of the pixels n−2 and n−1 on the unprocessed pixels after pixel n is removed (see FIG. 16).
  • the difference data D is distributed to the initial image data I of each pixel.
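A minimal sketch of this limit check and excess-data correction follows. The clamp range, the pixel values, and in particular the way the excess e_n is split between pixels n−1 and n−2 (here simply in proportion to their dispersion ratios β and γ) are illustrative assumptions; the patent derives the correction data e_{n-1} and e_{n-2} from its equations (5) and (6), which are not reproduced in this text.

```python
def apply_with_limit(b, n, h_n, beta=0.3, gamma=0.2, lo=0.0, hi=255.0):
    """Add the distribution amount h_n to pixel n of the restored data b.
    If the result leaves the limit range [lo, hi], compute the excess data e_n
    and push a correction back onto the preceding pixels n-1 and n-2, whose
    dispersion (at ratios beta and gamma) contributed to this pixel."""
    candidate = b[n] + h_n
    clamped = min(max(candidate, lo), hi)
    e_n = candidate - clamped            # excess data (0 when within the range)
    b[n] = clamped
    if e_n != 0.0:
        # Illustrative split of the excess in proportion to the dispersion
        # ratios of pixels n-1 and n-2 (an assumption, not equation (6)).
        total = beta + gamma
        b[n - 1] -= e_n * (beta / total)
        b[n - 2] -= e_n * (gamma / total)
    return b
```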
  • although the image processing apparatus 1 has been described above, various modifications can be made without departing from the gist of the present invention.
  • the processing performed by the processing unit 4 may be configured by software, or may be configured by hardware composed of components that each share a part of the processing.
  • the original image to be processed may be data that has been subjected to processing such as color correction or a Fourier transform.
  • similarly, the comparison image data may be data that has been subjected to processing such as color correction or a Fourier transform after being generated using the change factor information data G.
  • the change factor information data includes information that simply changes the image, and information that improves the image, as opposed to deterioration.
  • the set number of times may be changed by the data G of the change factor information. For example, when the data of a certain pixel is distributed over many pixels due to blurring, the number of iterations may be increased, and when the variance is small, the number of iterations may be decreased.
  • when the difference data D diverges, the process may be stopped. For example, whether or not it is diverging can be judged by looking at the average value of the difference data D and determining that it is diverging when the value is greater than the previous one.
  • the process may also be stopped when a value goes out of range; for example, in the case of 8 bits, if a value to be changed exceeds 255, the processing is stopped.
  • alternatively, the limit value may be used in place of the out-of-range value. For example, if a value exceeding 255 is obtained for input data in the 8-bit range of 0 to 255, it is processed as 255, which is the maximum value.
  • depending on the change factor information data G, data may be generated that goes outside the area of the image to be restored.
  • in that case, data that protrudes outside the area is input to the opposite side, and conversely, data is brought in from the opposite side.
  • the difference data D may also be allocated by directly adding it to the previous restored image data at the corresponding pixels, or by scaling it.
  • each processing method described above can be automatically selected according to the content of the data G of the change factor information.
  • for example, a program that can execute three methods, (1) a method of allocating the difference data D using an allocation ratio k (allocation ratio method), (2) a method of using the difference of corresponding pixels, possibly scaled (corresponding pixel method), and (3) a method of detecting the center of gravity of the change factor and using the data of the center of gravity (centroid method), may be stored in the processing unit 4; the status of the change factors is then analyzed, and one of the three methods is selected based on the analysis results. Alternatively, any of the three methods may be selected and used alternately in each routine, or one method may be used for the first several iterations and another method thereafter.
  • as a method of using reduced data, the first method is to reduce the data by thinning it out.
  • for example, as shown in FIG. 17, when the original image data Img' consists of the pixels 11-16, 21-26, 31-36, 41-46, 51-56, 61-66, there is a method of thinning out every other pixel to generate a quarter-size reduced Img' consisting of the pixels 11, 13, 15, 31, 33, 35, 51, 53, 55.
  • the transfer function of the original image data Img' is not the transfer function used in the iterative processing of the reduced data. Therefore, a transfer function is also calculated from the reduced, restored image data and the reduced Img', which is the reduced original image data.
  • the calculated transfer function is enlarged, and the enlarged transfer function is interpolated.
  • the corrected transfer function is used as the transfer function for the original image data Img', which is the original data. Then, using the corrected transfer function, a deconvolution calculation is performed in frequency space (a calculation that removes the blur from the image data containing blur), and full-size restored image data is obtained.
  • the original image data Img ' may be used for further processing.
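A rough Python sketch of this thinning approach follows, assuming a 2-D image thinned by taking every other pixel in each direction. The functions iterative_restore and deconvolve are stand-ins for the iterative processing of FIG. 3 and for the frequency-space deconvolution mentioned above, the transfer-function estimate and the nearest-neighbour enlargement are illustrative choices, and none of this reproduces the patent's exact procedure.

```python
import numpy as np

def restore_via_thinning(img_prime, iterative_restore, deconvolve):
    """Thin out the original data Img', restore the small image iteratively,
    estimate a transfer function from the reduced pair, enlarge and crop it
    to full size, and deconvolve the full-size Img' with it."""
    reduced = img_prime[::2, ::2]                   # thin out every other pixel
    reduced_restored = iterative_restore(reduced)   # iterative processing (FIG. 3)

    # Transfer function of the reduced pair in frequency space:
    # blurred = restored (*) h  =>  H ~ FFT(blurred) / FFT(restored)
    eps = 1e-8                                      # avoid division by zero
    H_small = np.fft.fft2(reduced) / (np.fft.fft2(reduced_restored) + eps)

    # Enlarge the transfer function by simple repetition (a stand-in for the
    # enlargement and interpolation described above), then crop to full size.
    H_full = np.kron(H_small, np.ones((2, 2)))
    H_full = H_full[:img_prime.shape[0], :img_prime.shape[1]]

    # Deconvolve the full-size original data with the corrected transfer function.
    return deconvolve(img_prime, H_full)
```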
  • the second method of using reduced data is a method of obtaining reduced data by extracting data of a partial area of original image data Img '.
  • for example, as shown in FIG. 18, the original image data Img' consists of the pixels 11 to 16, 21 to 26, 31 to 36, 41 to 46, 51 to 56, 61 to 66, and data of a partial area is extracted from it.
  • in this method, the entire image area is not restored by iterative processing; instead, a part of the area is iteratively processed to obtain a good restored image, which is used to obtain a transfer function for that part, and the entire image is then restored using that transfer function itself or a modified (enlarged) version of it.
  • the area to be extracted must be sufficiently larger than the fluctuation area. In the previous example shown in Fig. 5 etc., it fluctuates over 3 pixels, so it is necessary to extract an area of 3 pixels or more.
  • the original image data Img' may also be divided into four parts as shown in FIG. 19, for example. It is possible to reconstruct the original whole image by iteratively processing each of the four reduced areas, restoring each of the four divided areas, and combining the four restored divided images into one.
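As a sketch of this four-way split (assuming a 2-D array; restore_region stands in for the per-region processing described above and is an assumption):

```python
import numpy as np

def restore_in_quarters(img_prime, restore_region):
    """Split the original data Img' into four regions, restore each region
    separately, and reassemble the restored quarters into one whole image."""
    h, w = img_prime.shape
    top, bottom = img_prime[:h // 2], img_prime[h // 2:]
    quarters = [top[:, :w // 2], top[:, w // 2:],
                bottom[:, :w // 2], bottom[:, w // 2:]]
    restored = [restore_region(q) for q in quarters]
    return np.block([[restored[0], restored[1]],
                     [restored[2], restored[3]]])
```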
  • another processing method is as follows.
  • for a subject with an abrupt contrast change, if an iterative restoration process using the processing methods shown in FIG. 3 and FIG. 13 is used to obtain an image close to the original, the number of iterations becomes extremely large, and restored image data approximating the original subject is obtained only after many processing cycles.
  • therefore, to the data of the original (blurred) image Img', data obtained by changing the data B of a known image with the change factor information at the time of shooting is added, and superimposed blurred image data is generated.
  • the superimposed image is restored by the process shown in FIG. 3, and the known added image is removed from the resulting data C to obtain the restored image data.
  • the base image data Img includes a rapid contrast change, but by adding the data B of a known image, this rapid contrast change can be reduced, and the number of iterations of the restoration process can be reduced.
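A minimal sketch of this overlay approach (blur_with_G and iterative_restore stand in for the operations sketched earlier, and the known image B is arbitrary; the exact way the known image is combined and removed in the patent may differ):

```python
def restore_with_overlay(img_prime, B, G, blur_with_G, iterative_restore):
    """Superimpose a version of the known image B, changed with the same
    factor G, onto the blurred original Img' to soften abrupt contrast
    changes; restore the combined image iteratively; then remove B again."""
    B_changed = blur_with_G(B, G)            # change B with the factor data G
    combined = img_prime + B_changed         # superimposed blurred image data
    restored_combined = iterative_restore(combined)
    return restored_combined - B             # remove the known image B
```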
  • in addition to the three methods (1) to (3) above, the following methods can be used: (4) a method of optimizing the distribution of the difference data D (distribution optimization method), (5) a method of thinning out the data and combining it with the inverse problem (inverse problem thinning method), (6) a method of extracting a reduced area and combining it with the inverse problem (inverse problem extraction method), (7) a method of superimposing a predetermined image, performing the iterative process, and then removing the predetermined image (bad image countermeasure overlay method), and (8) a method of removing a calculated error from the restored image (error extraction method).
  • the processing method program may be stored in the processing unit 4 so that the processing method can be automatically selected according to the user's selection or the type of image.
  • any one of these (1) to (8) is stored in the processing unit 4 and selected by the user.
  • the processing method may be automatically selected according to the type of image.
  • the image processing apparatus 1 may have a different processing method in addition to any one or a plurality of (1) to (8) described above.
  • each processing method mentioned above may be programmed.
  • a program may be stored in a storage medium such as a CD, a DVD, or a USB memory so that it can be read by a computer.
  • the image processing apparatus 1 has a reading means for reading a program in the storage medium.
  • the program may be stored in an external server of the image processing apparatus 1, downloaded as necessary, and used.
  • in this case, the image processing apparatus 1 has communication means for downloading the program.

Abstract

The invention concerns an image-restoration circuit processing method that avoids an increase in the size of the device. An image processing device compares image data of a comparison image, generated from image data I0 of an arbitrary image, with image data Img of an original image to be processed. The difference data D obtained by the comparison is appropriately distributed to the image data of the arbitrary image so as to generate image data I0+n of the restored image.
PCT/JP2007/050486 2006-01-18 2007-01-16 Dispositif de traitement d’image WO2007083621A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007554892A JP4885150B2 (ja) 2006-01-18 2007-01-16 画像処理装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006009613 2006-01-18
JP2006-009613 2006-01-18

Publications (1)

Publication Number Publication Date
WO2007083621A1 true WO2007083621A1 (fr) 2007-07-26

Family

ID=38287574

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/050486 WO2007083621A1 (fr) 2006-01-18 2007-01-16 Dispositif de traitement d’image

Country Status (3)

Country Link
JP (1) JP4885150B2 (fr)
CN (1) CN101375591A (fr)
WO (1) WO2007083621A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009060329A (ja) * 2007-08-31 2009-03-19 Acutelogic Corp 画像処理装置及び画像処理方法、画像処理プログラム
JP2009164883A (ja) * 2008-01-07 2009-07-23 Nitto Kogaku Kk 信号処理装置
WO2009110183A1 (fr) * 2008-03-04 2009-09-11 日東光学株式会社 Procédé de génération de données d'information de facteur de changement et dispositif de traitement de signal
WO2009110184A1 (fr) * 2008-03-04 2009-09-11 日東光学株式会社 Procédé de création de données se rapportant à des informations de facteur de changement et processeur de signal
CN102156968A (zh) * 2011-04-11 2011-08-17 合肥工业大学 一种基于颜色立方先验的单一图像能见度复原方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1124122A (ja) * 1997-07-03 1999-01-29 Ricoh Co Ltd 手ぶれ画像補正方法および手ぶれ画像補正装置並びにその方法をコンピュータに実行させるためのプログラムを記録したコンピュータ読み取り可能な記録媒体
JP2002300459A (ja) * 2001-03-30 2002-10-11 Minolta Co Ltd 反復法による画像復元装置、画像復元方法、プログラム及び記録媒体
JP2004235700A (ja) * 2003-01-28 2004-08-19 Fuji Xerox Co Ltd 画像処理装置、画像処理方法、およびそのプログラム

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003060916A (ja) * 2001-08-16 2003-02-28 Minolta Co Ltd 画像処理装置、画像処理方法、プログラム及び記録媒体

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1124122A (ja) * 1997-07-03 1999-01-29 Ricoh Co Ltd 手ぶれ画像補正方法および手ぶれ画像補正装置並びにその方法をコンピュータに実行させるためのプログラムを記録したコンピュータ読み取り可能な記録媒体
JP2002300459A (ja) * 2001-03-30 2002-10-11 Minolta Co Ltd 反復法による画像復元装置、画像復元方法、プログラム及び記録媒体
JP2004235700A (ja) * 2003-01-28 2004-08-19 Fuji Xerox Co Ltd 画像処理装置、画像処理方法、およびそのプログラム

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009060329A (ja) * 2007-08-31 2009-03-19 Acutelogic Corp 画像処理装置及び画像処理方法、画像処理プログラム
US9516285B2 (en) 2007-08-31 2016-12-06 Intel Corporation Image processing device, image processing method, and image processing program
JP2009164883A (ja) * 2008-01-07 2009-07-23 Nitto Kogaku Kk 信号処理装置
WO2009110183A1 (fr) * 2008-03-04 2009-09-11 日東光学株式会社 Procédé de génération de données d'information de facteur de changement et dispositif de traitement de signal
WO2009110184A1 (fr) * 2008-03-04 2009-09-11 日東光学株式会社 Procédé de création de données se rapportant à des informations de facteur de changement et processeur de signal
JP2009211337A (ja) * 2008-03-04 2009-09-17 Nittoh Kogaku Kk 変化要因情報のデータの生成法および信号処理装置
CN102156968A (zh) * 2011-04-11 2011-08-17 合肥工业大学 一种基于颜色立方先验的单一图像能见度复原方法

Also Published As

Publication number Publication date
JP4885150B2 (ja) 2012-02-29
CN101375591A (zh) 2009-02-25
JPWO2007083621A1 (ja) 2009-06-11

Similar Documents

Publication Publication Date Title
JP5007241B2 (ja) 画像処理装置
WO2007083621A1 (fr) Dispositif de traitement d’image
JP3895357B2 (ja) 信号処理装置
JP4965179B2 (ja) 画像処理装置
JP4602860B2 (ja) 画像処理装置
JP4598623B2 (ja) 画像処理装置
JP2007129354A (ja) 画像処理装置
JP4606976B2 (ja) 画像処理装置
JP4763419B2 (ja) 画像処理装置
JP5247169B2 (ja) 画像処理装置、画像処理方法および画像処理プログラム
JP4629537B2 (ja) 画像処理装置
JP5005319B2 (ja) 信号処理装置および信号処理方法
WO2007032148A1 (fr) Processeur d'image
JP4629622B2 (ja) 画像処理装置
JP4763415B2 (ja) 画像処理装置
JP5007234B2 (ja) 画像処理装置
JP4718618B2 (ja) 信号処理装置
JPWO2007077733A1 (ja) 画像処理装置
JP5495500B2 (ja) 変化要因情報のデータの生成法および信号処理装置
JP5005553B2 (ja) 信号処理装置
JP5057665B2 (ja) 画像処理装置
JP5007245B2 (ja) 信号処理装置
WO2006137309A1 (fr) Appareil de traitement d’image
JPWO2008090858A1 (ja) 画像処理装置および画像処理方法
JP2008199305A (ja) 画像処理装置および画像処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007554892

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 200780003289.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07706813

Country of ref document: EP

Kind code of ref document: A1