JP5065099B2 - Method for generating data of change factor information and signal processing apparatus - Google Patents



Publication number
JP5065099B2
Authority
JP
Japan
Prior art keywords
data
change factor
factor information
signal
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2008052808A
Other languages
Japanese (ja)
Other versions
JP2009212740A (en)
Inventor
史紀 高橋 (Fuminori Takahashi)
Original Assignee
日東光学株式会社 (Nittoh Kogaku K.K.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日東光学株式会社
Priority to JP2008052808A
Publication of JP2009212740A
Application granted
Publication of JP5065099B2



Description

  The present invention relates to a method of generating change factor information data and a signal processing apparatus.

  Conventionally, it has been known that a signal captured by a signal processing device such as a camera sometimes deteriorates. Factors that cause signal degradation include camera shake during shooting, various aberrations of the optical system, lens distortion, and the like.

As a method of restoring an image (signal) degraded by camera shake at the time of shooting, a technique has been proposed that estimates the original subject image by estimating, from the degraded image alone, the point spread function (PSF) that caused the camera shake (see Non-Patent Document 1). This technique is premised on the assumption that the captured degraded image data A′ is obtained by applying the PSF G0 to the image data A0 of the subject before the change, that is, that "A0 · G0 = A′" holds. An arbitrary value is given to A0, and "G0 = A′ / A0" is calculated. G0 is then given a constraint condition, such as not taking negative values, to obtain G1. Next, "A′ / G1 = A1" is computed and A0 is replaced with A1; A1 is given the same non-negativity constraint to obtain A2. Then "A′ / A2 = G2" is computed and G1 is replaced with G2; G2 is constrained in the same way to obtain G3. Repeating this calculation many times yields an appropriate PSF and restored image data.
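The alternating estimation described above can be sketched as follows. This is a minimal illustration rather than the exact method of Non-Patent Document 1: the divisions are carried out in the frequency domain, the small stabilizing constant eps is an assumption to avoid division by near-zero values, and the only constraint applied is non-negativity.

```python
import numpy as np

def blind_deconvolve(a_prime, iterations=50):
    """Alternately estimate the PSF G and the image A from the
    degraded image A' alone (sketch of the A0*G0 = A' scheme)."""
    A_prime = np.fft.fft2(a_prime)
    a = a_prime.copy()                    # arbitrary initial image A0
    eps = 1e-8                            # stabilizer (assumption)
    for _ in range(iterations):
        # G = A' / A in the frequency domain, then clip negatives
        g = np.real(np.fft.ifft2(A_prime / (np.fft.fft2(a) + eps)))
        g = np.clip(g, 0, None)           # non-negativity constraint on G
        g /= g.sum() + eps                # keep total energy normalized
        # A = A' / G, again with a non-negativity constraint
        a = np.real(np.fft.ifft2(A_prime / (np.fft.fft2(g) + eps)))
        a = np.clip(a, 0, None)
    return a, g
```

In practice such an unregularized iteration is fragile, which is exactly the difficulty with constraint conditions that the present invention addresses.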

  In addition to general captured images, it is known that various images and signals such as X-ray photographs and microscopic images are deteriorated or changed due to blurring or other causes.

Takeda and Komatsu, "Examination of non-negative constraint conditions in the Fourier iterative algorithm for blind deconvolution," Kogaku (Japanese Journal of Optics), Optical Society of Japan, 1996, Vol. 25, No. 5, pp. 274-281

  When the technique described in Non-Patent Document 1 is adopted, it is difficult to set constraint conditions that suit an arbitrary image, and the long processing time makes the technique difficult to adopt in general-purpose devices.

  SUMMARY OF THE INVENTION An object of the present invention is to provide a change factor information data generation method and a signal processing apparatus that enable practical signal restoration.

  In order to achieve the above object, the change factor information data generation method of the present invention generates change factor information data representing the cause of a change, such as deterioration, in original signal data. In this method, arbitrary change factor information data is set as first change factor information data, and a generation process is performed to generate new change factor information data different from the first change factor information data; during the generation process, reduced data, obtained by reduction processing that reduces the original signal, is used as the processed form of the original signal data.

  According to the present invention, since the magnitude of the change is also reduced by the reduction processing, the change factor information data is easy to estimate, and change factor information data that enables practical signal restoration can be generated.

  In addition to the above, in the change factor information generation method according to another aspect of the invention, after new change factor information data is obtained from the reduced data in the generation process, enlargement processing is performed to enlarge the new change factor information data, a process similar to the generation process is performed at least once using the enlarged change factor information data in place of the new change factor information data, and the processing from the enlargement process onward is repeated as necessary. By adopting this method, large change factor information data can be estimated based on the good-quality change factor information data obtained through the reduction process.

  In addition to the above, in the change factor information generation method according to another aspect of the invention, the origin position of the obtained change factor information data is set to a position where the movement energy, defined as the sum of the energy required to move the data of each signal element, is at least the minimum value Min and at most Min × 1.2.

  In addition to the above, the change factor information generation method according to another aspect of the invention uses data having a Gaussian intensity distribution as the first change factor information data. By adopting this method, good change factor information data can be estimated regardless of the kind of blur or defocus involved in the change.

  In order to achieve the above object, the signal processing apparatus of the present invention comprises a processing unit that restores, from original signal data that has undergone a change such as deterioration, the signal before the change, the signal that should originally have been acquired, or an approximation thereof (hereinafter, the original signal). The processing unit sets arbitrary change factor information data as first change factor information data, performs a generation process that generates new change factor information data different from the first change factor information data, uses reduced data obtained by reduction processing of the original signal as the processed form of the original signal data during the generation process, and restores the original signal based on the generated change factor information data or on data obtained by processing it.

  According to the present invention, since the magnitude of the change is also reduced by the reduction processing, the change factor information data is easy to estimate, and practical signal restoration is possible.

  In addition to the above, the signal processing apparatus according to another aspect of the invention obtains new change factor information data from the reduced data during the generation process, then performs enlargement processing to enlarge the new change factor information data, and performs the generation process using the enlarged change factor information data in place of the new change factor information data. With this configuration, large change factor information data can be estimated based on the good-quality change factor information data obtained through the reduction process, and practical signal restoration based on that data becomes possible.

  According to the present invention, it is possible to provide a change factor information data generation method and a signal processing apparatus that enable practical signal restoration.

(Configuration of signal processing device)
Hereinafter, a signal processing apparatus according to an embodiment of the present invention will be described with reference to the drawings. Although this signal processing apparatus is described as a consumer camera, it may be a camera for other uses, such as a surveillance camera, a television camera, a handheld video camera, an endoscope camera, a microscope, or binoculars, and the invention can also be applied to devices other than cameras, such as diagnostic imaging apparatuses including NMR imaging, printers that print images, and scanners that read images.

  FIG. 1 shows an outline of the configuration of the signal processing apparatus 1. The signal processing apparatus 1 includes an imaging unit 2 that captures an image of a subject such as a person, a control system unit 3 that drives the imaging unit 2, and a processing unit 4 that processes the image captured by the imaging unit 2. The signal processing apparatus 1 according to this embodiment further includes a recording unit 5 that records the image processed by the processing unit 4, and a factor information storage unit 7 that stores data of the change factor information that causes image degradation and the like.

  The imaging unit 2 includes a photographing optical system having a lens and an imaging element, such as a CCD or CMOS sensor, that converts light passing through the lens into an electric signal. The control system unit 3 controls each unit in the signal processing apparatus, such as the imaging unit 2, the processing unit 4, the recording unit 5, and the factor information storage unit 7.

  The processing unit 4 is an image processor implemented in hardware such as an ASIC (Application Specific Integrated Circuit). The processing unit 4 executes the change factor information data generation process, described later, and the image restoration process. For example, the processing unit 4 performs the reduction process that reduces the original signal in the course of generating the change factor information data.

  The processing unit 4 may also store the image data from which the comparison data described later is generated. Further, the processing unit 4 need not be implemented in hardware such as an ASIC; it may be configured to perform the processing in software. The recording unit 5 is composed of a semiconductor memory, but magnetic recording means such as a hard disk drive or optical recording means such as a DVD may also be employed.

  The factor information storage unit 7 is a recording unit that stores the change factor information data generated by the change factor information data generation method according to the embodiment of the present invention, or data obtained by processing it. The factor information storage unit 7 stores the blur locus (history) that forms the core of the data, and the time (weight) spent at each point on the locus. The data recorded in the factor information storage unit 7 is used by the processing unit 4 when restoring the captured image (the image changed by deterioration or the like) to the original image (the image before the change, the image that should originally have been taken, or an approximation thereof). Data can therefore be exchanged between the processing unit 4 and the factor information storage unit 7.

(Generation of change factor information data)
FIG. 2 shows a flowchart of an example of a method for generating data of change factor information according to the embodiment of the present invention performed by the processing unit 4.

First, the processing unit 4 sets an arbitrary initial value P0 of the first change factor information data (PSF) (step S101). Here, the initial value is data whose intensity distribution follows a Gaussian distribution.

  Next, in parallel with the processing in step S101, or before or after it, the processing unit 4 reduces the captured original image data (original signal data) Img′ to 1/4. That is, a reduction process is performed by thinning out the original signal data Img′ to 25% of its data volume (step S102). The reduced data obtained by this reduction process is denoted SImg′.
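The thinning-out reduction of step S102 can be sketched as follows: keeping every second pixel along each axis retains 25% of the data volume. The decimation factor is an illustrative assumption; the patent only specifies the 1/4 result.

```python
import numpy as np

def reduce_by_decimation(img, factor=2):
    """Thin out pixels to shrink the image; factor=2 along each
    axis keeps 25% of the data, matching the reduction of S102."""
    return img[::factor, ::factor].copy()
```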

Next, the processing unit 4 takes arbitrary signal data I0 of the same size as the reduced data SImg′, and superimposes P0 on I0 to obtain blurred comparison data I0′ (step S103). The arbitrary signal data I0 is preferably the reduced original image data SImg′ itself.

Next, the processing unit 4 compares SImg′ with I0′ and obtains difference data Δ (step S104). It is then determined whether the difference data Δ is within a predetermined value (step S105). If it is within the predetermined value, the process proceeds to step S120; if not, it proceeds to step S106. In step S106, data obtained by multiplying the difference data Δ by a coefficient k (a coefficient based on the change factor information (PSF)) is distributed to I0 to obtain first restored data I0+n (at this point, I0+1). When distributing to I0 in step S106, processing for determining the validity of the distributed value (update amount) is performed (step S107), and I0+n is corrected according to that validity (step S108). The process then proceeds to step S109, where it is determined whether this processing has been performed a predetermined number of times. If the predetermined number (for example, 10) has been reached, the process proceeds to step S120; otherwise, it proceeds to step S110.

The processing unit 4 then performs the processing of step S103 using I0+1 in place of I0 (step S110), and repeats steps S103, S104, S105, S106, S107, S108, S109, and S110 the predetermined number of times (here, 10 times); in step S109 it is determined whether that number has been reached.
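The loop of steps S103 to S110 can be sketched as follows, with the validity check of steps S107 and S108 omitted. The circular FFT convolution used for the superimposition and the scalar coefficient k are simplifying assumptions.

```python
import numpy as np

def conv_same(img, psf):
    """Circular convolution via FFT: a stand-in for superimposing
    the image estimate with the PSF in step S103."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, img.shape)))

def restore_iteration(simg_prime, p0, k=1.0, max_iters=10, tol=1e-6):
    """Sketch of steps S103-S110: repeatedly blur the estimate,
    take the difference from the reduced captured image SImg',
    and feed k times the difference back into the estimate."""
    i_n = simg_prime.copy()               # I0: start from SImg' itself
    for _ in range(max_iters):            # S109: predetermined count
        i_blurred = conv_same(i_n, p0)    # S103: comparison data I0'
        delta = simg_prime - i_blurred    # S104: difference data
        if np.abs(delta).mean() < tol:    # S105: within predetermined value?
            break
        i_n = i_n + k * delta             # S106: distribute k * delta
    return i_n
```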

The processing of step S106 moves (distributes) part or all of the pixel data constituting the reduced original image data SImg′ and the restored data I0+n. Details of steps S107 and S108 will be described later.

Next, when it is determined in step S109 that the predetermined number of times has been reached, the processing unit 4 calculates a PSF (step S120). This process obtains a new PSF (= P) from the restored data I0+n and the reduced original image data SImg′. Specifically, the restored data I0+n and the reduced data SImg′ of the original image are Fourier transformed, the frequency characteristic of the PSF is calculated by division in frequency space, and the frequency characteristic is inverse Fourier transformed to obtain the PSF (= P).
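The Fourier-division of step S120 can be sketched as follows; the stabilizing constant eps is an assumption, added to avoid division by near-zero spectral values.

```python
import numpy as np

def estimate_psf(restored, simg_prime, eps=1e-8):
    """Step S120 sketch: divide the spectrum of the reduced captured
    image SImg' by that of the restored data to get the PSF frequency
    response, then inverse-transform back to the spatial domain."""
    H = np.fft.fft2(simg_prime) / (np.fft.fft2(restored) + eps)
    return np.real(np.fft.ifft2(H))
```

When the restored data equals the captured data, the recovered PSF is (up to numerical error) a unit impulse, as expected.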

  Next, the processing unit 4 doubles the data values of the blur locus portion (skeleton portion) that forms the core of the P data, emphasizing the skeleton (step S121). FIG. 3A shows the PSF estimated at this stage on an XY plane, which corresponds to the XY plane in which the signal processing apparatus 1 shown in FIG. 6 blurs. The PSF is composed of a portion A (skeleton portion) corresponding to the blur locus that forms the core of the data, and a surrounding portion B with small data values, which is regarded as a defocus-like component. FIG. 3B shows the PSF after the process of emphasizing portion A (step S121); the skeleton of A is slightly emphasized. A thinning process is employed here, but other emphasis methods may be used. As a result of this processing, the P data obtained in step S120 becomes P′ data. Thereafter, the origin of P′ is set (details will be described later) so that the movement energy for the distribution in step S106 is minimized (step S122).
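The skeleton emphasis of step S121 might be sketched as follows. The patent locates the skeleton by a thinning process; the threshold test used here to separate part A from part B is a simplifying assumption.

```python
import numpy as np

def emphasize_skeleton(psf, threshold_ratio=0.5, gain=2.0):
    """Step S121 sketch: double the values on the blur-locus
    (skeleton) part A, leaving the faint surrounding part B as-is.
    threshold_ratio is an assumed stand-in for thinning."""
    out = psf.copy()
    skeleton = psf >= threshold_ratio * psf.max()  # crude part-A mask
    out[skeleton] *= gain                          # emphasize skeleton
    return out / out.sum()                         # renormalize energy
```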

Next, the processing unit 4 determines whether P′ is a valid PSF. Specifically, it determines whether the processing from steps S120 to S122 has been repeated three times (step S123). If the number of repetitions, that is, the number of PSF calculations, has not reached 3 (N), the P′ data is used in place of P0 (step S124), the processing from steps S103 to S110 is repeated, and steps S120 to S122 are executed again.

If the number of PSF calculations, that is, the number of repetitions, has reached 3 in the determination of step S123 (Y), the processing unit 4 enlarges the obtained P′ data and I0+n data at the same ratio (step S125). In this embodiment, the P′ and I0+n obtained last are enlarged by a factor of 4/3. It is then determined whether the enlargement has reached a predetermined reduction ratio relative to the original image (step S126). If the predetermined reduction ratio has not been reached, the enlarged I0+n data is used in place of the I0 data (step S127), and the enlarged P′ data is used in place of P0 (step S124). In addition, a new reduced image SImg′ is obtained by reducing the original image data to 1/3; the image reduced to 1/3 has the same size as the data just enlarged by 4/3. The processing unit 4 then repeats the processing from steps S103 to S110 with the new data, and repeats the processing from steps S120 to S124. If it is determined in step S126 that the predetermined reduction ratio, for example 1/3 of the original image data, has been reached, the process proceeds to step S128. In step S125, the P′ data and the I0+n data are enlarged at the same ratio, but it is also possible to enlarge only the P′ data and use arbitrary signal data, for example the SImg′ data, in place of the enlarged I0+n data. However, enlarging both the P′ data and the I0+n data at the same ratio, as in this embodiment, uses I0+n data that has been through the restoration process, and may therefore generate a better-quality PSF.

Note that the P′ data enlargement in step S125 is performed by the processing unit 4 inserting, between adjacent signal elements, new signal element data whose value is the average of the adjacent signal element values. Likewise, the I0+n data is enlarged by inserting a new pixel between adjacent pixels, the value of the new pixel being the average of the adjacent pixel values. In step S126, the processing unit 4 determines whether the predetermined reduction ratio has been reached by this enlargement. When it has (Y), the processing unit 4 enlarges the P′ obtained at this stage to the actual size (step S128). That is, since the P′ obtained at this stage is one third of the actual size, the processing unit 4 enlarges it by a factor of 3. This enlargement is also performed by inserting between adjacent pixels a new pixel whose value is the average of the adjacent pixel values. In the enlargement processes of steps S125 and S128, instead of the interpolation method described above, the processing unit 4 may adopt another interpolation method, such as inserting between adjacent pixels a pixel having the value of one of the adjacent pixels.
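The insertion of averaged elements can be sketched as follows. For simplicity this version roughly doubles each dimension by one pass per axis, whereas the embodiment enlarges by factors of 4/3 and 3; the ratio handling is an omitted detail.

```python
import numpy as np

def enlarge_by_interpolation(data):
    """Sketch of the enlargement in steps S125/S128: between every
    pair of adjacent elements, insert a new element whose value is
    the average of its two neighbours (one axis at a time)."""
    def expand_rows(a):
        mids = (a[:-1] + a[1:]) / 2.0     # averages of adjacent rows
        out = np.empty((a.shape[0] * 2 - 1, a.shape[1]), dtype=float)
        out[0::2] = a                     # keep the original elements
        out[1::2] = mids                  # insert the averaged ones
        return out
    # expand along rows, then along columns via transposition
    return expand_rows(expand_rows(data.astype(float)).T).T
```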

  With the above processing, the change factor information data generation method according to the embodiment of the present invention is completed (step S129), and a PSF has been generated. The PSF is stored in the factor information storage unit 7 by the processing unit 4 and used in the signal restoration process. The PSF obtained through the above process is denoted "G".

(Process to determine the appropriateness of the distribution value)
The processing for determining the validity of the distribution value, performed in step S107 described above, will be described with reference to the drawings.

FIGS. 4 and 5 illustrate the details of the processing in steps S107 and S108 in FIG. 2. The basic concept of this processing is as follows. The processing is performed not on the restored data I0+n as a whole but on each signal element constituting the restored data I0+n; in this respect it differs greatly from the whole-signal (whole-image) processing of steps S103, S105, and S106. When, among the plurality of pixels within the range in which one pixel (one signal element) is affected by P0, there is a pixel whose value changes greatly with the update, that range of pixels is considered to contain an edge. For the edge portion, the update amount based on the difference data Δ of the corresponding portion is unlikely to be appropriate: the difference between the values of two pixels straddling the edge is so large that, even if pixel value is distributed from one pixel across the edge to the other, it is difficult for the distribution to be appropriate. Therefore, for a pixel whose value would change unnaturally relative to its surroundings, the absolute value of the update amount is reduced, bringing the update amount near the edge closer to an appropriate value.

  To this end, first, the update amount Dc of a certain pixel is calculated from the predetermined difference data Δ (step S201). Then, referring to the one pixel and a set of pixels (some of the signal elements) within the range in which that pixel is affected by the change factor information G, the minimum value (Min), maximum value (Max), and average value (Av) of the pre-update pixel values Ib of the referenced pixels are calculated (step S202). The referenced pixels may simply be a plurality of adjacent pixels, or the pixels within a predetermined distance of the pixel in question; the predetermined distance is determined, for example, using the change factor information data G.

Next, the updated pixel value Ia of the one pixel (one signal element) among the referenced pixels is calculated (step S203). This updated pixel value Ia is the value obtained when the pixel is updated by the update amount Dc calculated from the difference data Δ corresponding to that pixel, and is given by the following equation (1).
Ia = Ib + Dc (1)

  Next, the validity of the uncorrected update amount Dc is determined. First, when the updated pixel value Ia, which is signal element data constituting the restored data, is not less than the minimum value Min and not more than the maximum value Max (determination in step S204 is "Y" (= Yes)), the pixel value balance with the surrounding pixels is natural, so the update amount Dc is used, without correction, as the corrected update amount Dp to obtain the corrected value Ia′ (steps S205 and S214). Step S214 is the step of obtaining the corrected value Ia′ from the corrected update amount Dp. The arrows in FIG. 5A schematically show the state after steps S205 and S214: the black circle of each arrow indicates the pre-update pixel value Ib, the arrowhead indicates the corrected value Ia′ after updating by Dp, and the length of the arrow indicates the update amount (Dc = Dp). The left arrow shows an example in which the uncorrected update amount Dc is positive, and the right arrow an example in which it is negative.

  When the determination in step S204 is "N" (= No), the process proceeds to step S206. Here, when the updated pixel value Ia exceeds the maximum value Max and the pre-update pixel value Ib is equal to or less than the average value Av (determination in step S206 is "Y" (= Yes)), the pixel value balance with the surrounding pixels is judged to be unnatural. The update amount Dc is then changed to the value Dp obtained by subtracting the pre-update pixel value Ib from the maximum value Max (step S207). As a result, the corrected value Ia′ becomes equal to the maximum value Max. The arrows in FIG. 5B schematically show the state after steps S207 and S214, as in FIG. 5A: the left arrow indicates Ia when updated by Dc, and the right arrow indicates the corrected value Ia′ when updated by Dp.

When the determination in step S206 is "N" (= No), the process proceeds to step S208. Here, when the updated pixel value Ia exceeds the maximum value Max and the pre-update pixel value Ib exceeds the average value Av (determination in step S208 is "Y" (= Yes)), the pixel value balance with the surrounding pixels is judged to be unnatural. In this case, the update amount Dc is changed to the value Dp obtained from the following equation (2) (step S209). As a result, the corrected value Ia′ equals the maximum value Max plus one quarter of the amount by which the updated pixel value Ia exceeds Max. The arrows in FIG. 5C schematically show the state after steps S209 and S214, as in FIG. 5B.
Dp = 0.25 (Ia−Max) + (Max−Ib) (2)

When the determination in step S208 is "N" (= No), the process proceeds to step S210. Here, when the updated pixel value Ia is less than the minimum value Min and the pre-update pixel value Ib is equal to or less than the average value Av (determination in step S210 is "Y" (= Yes)), the pixel value balance with the surrounding pixels is judged to be unnatural. The update amount Dc is then changed to the value Dp obtained from the following equation (3) (step S211). As a result, the corrected value Ia′ equals the minimum value Min minus one quarter of the amount by which the updated pixel value Ia falls below Min. The arrows in FIG. 5D schematically show the state after steps S211 and S214, as in FIG. 5B.
Dp = − (Ib−Min) −0.25 (Min−Ia) (3)

  If the determination in step S210 is "N" (= No), then the updated pixel value Ia is less than the minimum value Min and the pre-update pixel value Ib exceeds the average value Av (step S212). In that case too, the pixel value balance with the surrounding pixels is unnatural, and the update amount Dc is changed to the value Dp obtained by subtracting the pre-update pixel value Ib from the minimum value Min (step S213). As a result, the corrected value Ia′ becomes equal to the minimum value Min. The arrows in FIG. 5E schematically show the state after steps S213 and S214, as in FIG. 5B.

  The determination processing in steps S204, S206, S208, and S210 checks whether predetermined criteria are satisfied. Note that the order of the determinations in steps S204, S206, S208, and S210 and of the condition in step S212 can be changed as appropriate; even when the order is changed, four determinations are actually made. If none of the first four conditions is met, the remaining condition always holds, so the remaining condition, like step S212 in FIG. 4, need not be an explicit determination step.

In this way, the obtained Dp is used as the update amount to obtain the corrected value Ia′ (step S214), and the pixel in question is updated (step S215). It is then determined whether all pixels have been updated (step S216). If the determination is "N" (= No), the referenced pixels are changed (step S217), and the process returns to step S202 to obtain the predetermined criteria from the changed reference pixels (other signal elements) and the correction value Ia′ of the next pixel; steps S202 to S217 are repeated. In this processing, each time the update amount of one pixel has been corrected, the reference pixels are changed and the update amount of another pixel is corrected. When the determination in step S216 is "Y" (= Yes), the restored data I0+n is corrected using the correction values Ia′ of all pixels (step S218). This completes steps S107 and S108 in FIG. 2.
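The decision tree of steps S204 to S213 can be sketched as a single function returning the corrected update amount Dp. The structure and equations (2) and (3) follow the text above; only the packaging as one function is an editorial choice.

```python
def correct_update(Ib, Dc, Min, Max, Av):
    """Sketch of steps S204-S213: correct an update amount Dc for a
    pixel with pre-update value Ib against the minimum (Min),
    maximum (Max), and average (Av) of its reference pixels."""
    Ia = Ib + Dc                               # equation (1)
    if Min <= Ia <= Max:                       # S204: balance natural
        return Dc                              # S205: no correction
    if Ia > Max:
        if Ib <= Av:                           # S206 -> S207: clamp to Max
            return Max - Ib
        return 0.25 * (Ia - Max) + (Max - Ib)  # S208 -> S209: equation (2)
    if Ib <= Av:                               # S210 -> S211: equation (3)
        return -(Ib - Min) - 0.25 * (Min - Ia)
    return Min - Ib                            # S212 -> S213: clamp to Min
```

For example, with Min = 0, Max = 10, Av = 5, a pixel with Ib = 6 and Dc = 8 would overshoot to Ia = 14, and the corrected Dp = 0.25·(14−10) + (10−6) = 5 lands Ia′ at 11, a quarter of the overshoot past Max.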

(Process to set the origin of P ')
The process for setting the origin of P′, performed in step S122 of FIG. 2, will be described with reference to the drawings.

The factor information storage unit 7 stores the data as it stands at step S122 of FIG. 2, that is, data that can be regarded as an estimate of the blur of the signal processing apparatus 1 shown in FIG. 6. FIG. 6 shows the appearance of the signal processing apparatus 1 shown in FIG. 1. Blur arises from rotation of the signal processing apparatus 1 about the X, Y, and Z axes shown in the figure, and appears particularly readily about the X and Y axes. The blur data shown in FIG. 7 is time-series coordinate data on the XY plane of FIG. 6; it contains information on the blur locus on the XY plane and on how long the apparatus remained at each position on the locus. The starting point A (X1, Y1) of the locus on the XY plane in FIG. 7 is the imaging start position, and the end point B (XN, YN) is the imaging end position.

  Image degradation due to blur is a phenomenon in which light energy is not concentrated at one point but dispersed along the locus AB shown in FIG. 7. Concentrating the dispersed light energy at one point therefore restores the degraded image to the original image. The point at which the light energy is concentrated can be chosen freely; for example, it can be point A, point B, a point on the locus AB in FIG. 7, or a point off the locus AB.

Here, the point at which the dispersed light energy is concentrated is called the "origin position", and the origin position is represented by the coordinates (0x, 0y) of a point 0 on the XY plane shown in FIG. 7. The data P′, obtained by emphasizing the skeleton of the PSF calculated in step S120 through the processing of step S121, is represented by G(Xn, Yn). G(Xn, Yn) indicates the "weight", that is, how long the apparatus remained at the position (Xn, Yn), and satisfies equation (4), which expresses that the light energy is normalized to 1. Here (Xn, Yn) are coordinates on the XY plane shown in FIG. 7.
Σ G(Xn, Yn) = 1  (n = 1, 2, ..., N)  ... (4)

Further, the movement energy for concentrating the dispersed light energy at the point 0 which is the origin position is represented by E (0x, 0y). Then, the movement energy for concentrating the dispersed energy at the origin position (0x, 0y) can be expressed by a function of the movement distance and weight. For example, the following equation (5) (n = 1, 2,... N: N is the number of regions spread in a distributed manner).
E(0x, 0y) = Σ G(Xn, Yn) × √((Xn − 0x)² + (Yn − 0y)²) ... (5)

Then, the origin position (0x, 0y) that minimizes the movement energy E(0x, 0y) is set. This setting is performed by the processing unit 4. Further, the processing unit 4 stores the data in which the origin position has been newly set in place of the data previously stored in the factor information storage unit 7.
When more weight is to be placed on the moving distance, the following equation (6) can also be used. Equation (6) has the advantage that it involves no square-root calculation and is therefore easy to compute.
E(0x, 0y) = Σ G(Xn, Yn) × ((Xn − 0x)² + (Yn − 0y)²) ... (6)
Furthermore, to make the calculation even easier, the following equation (7) can be used.
E(0x, 0y) = Σ G(Xn, Yn) × (|Xn − 0x| + |Yn − 0y|) ... (7)
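Since the equation images for (5) to (7) are not reproduced in this text, the following sketch assumes plausible forms consistent with the surrounding description: a Euclidean distance with a square root for equation (5), a squared distance for equation (6), and a city-block distance for equation (7). It computes the movement energy for each candidate origin position and selects the minimum, as the processing unit 4 is described as doing:

```python
import math

def movement_energy(G, origin, metric):
    """Movement energy E(0x, 0y): the weighted sum of the distances from
    each weighted position (Xn, Yn) of G to the candidate origin position."""
    ox, oy = origin
    return sum(w * metric(x - ox, y - oy) for (x, y), w in G.items())

# Assumed distance measures for equations (5), (6), and (7).
euclid  = lambda dx, dy: math.hypot(dx, dy)    # eq. (5): with square root
squared = lambda dx, dy: dx * dx + dy * dy     # eq. (6): no square root
city    = lambda dx, dy: abs(dx) + abs(dy)     # eq. (7): simplest

# Example weights G(Xn, Yn); equation (4) requires them to sum to 1.
G = {(0, 0): 0.5, (1, 0): 0.3, (2, 0): 0.2}
assert abs(sum(G.values()) - 1.0) < 1e-12

def best_origin(G, metric):
    """Pick the candidate origin position minimizing the movement energy."""
    return min(G, key=lambda o: movement_energy(G, o, metric))

print(round(movement_energy(G, (0, 0), euclid), 2))   # 0.7
print(best_origin(G, euclid))                         # (0, 0)
print(best_origin(G, squared))                        # (1, 0)
```

With these example weights the Euclidean measure ties between (0, 0) and (1, 0) at E = 0.7, while the squared measure, which places more weight on the moving distance, selects (1, 0) uniquely.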

(Signal restoration processing)
Next, an outline of the image restoration processing method (restoration means) of the processing unit 4 of the signal processing device 1 according to the present embodiment, configured as described above, will be described with reference to FIGS. 8 and 9. Note that the image restoration processing method is substantially the same as the processing of steps S102 to S108 and S110 described above, and is an iterative process.

  The restoration process for the original image may be performed at a time delayed from when the original image was taken, for example while the imaging power is turned off, while the processing unit 4 is not operating, or while the operating rate of the processing unit 4 is low. In that case, the original image data stored in the recording unit 5 and the change factor information, such as the transfer function for the original image stored in the factor information storage unit 7, are kept in association with each other over a long period. The advantage of delaying the execution of the restoration process in this way is that the burden on the processing unit 4 at the time of shooting, which involves various processes, can be reduced.

In FIG. 8, “I0” is an arbitrary initial image, image data stored in advance in the recording unit of the processing unit 4. “I0′” is the data of a degraded image generated from the initial image data I0, and serves as comparison data. “G” is the data of the change factor information (= degradation factor information (transfer function)) estimated by the process shown in FIG. 2; the processing unit 4 reads it from the factor information storage unit 7 and stores it in the recording unit of the processing unit 4. “Img′” is the data of the photographed (degraded) original image.

“δ” is the difference data between the original image data Img′ and the comparison data I0′. “K” is an allocation ratio based on the data of the change factor information. “I0+n” is restored image data (restored data) newly generated by allocating the difference data δ to the initial image data I0 based on the change factor information data G. “Img” is the data of the correct image before degradation. The relationship between Img and Img′ is expressed by the following equation (8).
Img ′ = Img * G (8)
Here, “*” is an operator representing a superposition integral.

The difference data δ may be a simple difference between corresponding pixels, but in general it depends on the data G of the change factor information and is expressed by the following equation (9).
δ = f (Img ′, Img, G) (9)

The processing routine of the processing unit 4 first determines the origin position at which the dispersed light energy is to be concentrated (step S300). This determination has already been made in step S122 shown in FIG. 2, so its detailed description is omitted. Then, arbitrary image data I0 is prepared (step S301). As the initial image data I0, the degraded original image data Img′ may be used, or any image data such as solid black, solid white, solid gray, or a checkered pattern may be used. In step S302, the data I0 of the arbitrary image serving as the initial image is substituted for Img in equation (8) to obtain the comparison data I0′, which is a degraded image. Next, the original image data Img′ and the comparison data I0′ are compared to calculate the difference data δ (step S303).

Then, it is determined whether each absolute value of the difference data δ is less than a predetermined value (step S304). If the difference data δ is greater than or equal to the predetermined value in step S304, a process of generating new restored image data (= restored data) is performed in step S305. That is, the difference data δ obtained for each signal element is allocated to the arbitrary image data I0 based on the data G of the change factor information, and new restored data I0+n is generated.

Then, the validity of the update amount (allocation value) used in generating the restored data I0+n is determined (step S306), and the restored data I0+n is corrected (step S307). The processing in steps S306 and S307 is performed in the same manner as the processing for determining and correcting the validity of the allocation value performed in steps S106 and S107 shown in FIG. 2, so its detailed description is omitted.

Thereafter, steps S302 to S307 in FIG. 8 are repeated. The restored data I0+n obtained during this repetition is restoration in progress. In step S304, when each absolute value of the difference data δ of each pixel becomes less than the predetermined value, the iterative processing is terminated, and the restored data I0+n at that time is taken as the estimate of the original image data Img. That is, when the maximum value or average value of the absolute values of the difference data δ of each pixel becomes smaller than a predetermined value, the restored data I0+n on which the comparison data I0+n′ is based can be regarded as approximating the original image data Img, and is therefore adopted as the estimate. The initial image data I0 and the change factor information data G may be recorded in the recording unit 5 and transferred to the processing unit 4 as necessary.

  The concept of the above-described iterative processing method (restoration means) can be summarized as follows. In this processing method, the problem is not solved as an inverse problem but as an optimization problem for obtaining a rational solution. Solving it as an inverse problem is theoretically possible, but difficult in practice.

In the case of solving as an optimization problem, the present embodiment assumes the following conditions.
That is,
(1) The output corresponding to the input is uniquely determined.
(2) If the output is the same, the input is the same.
(3) The input is updated so that the outputs are the same, and the solution is converged by performing iterative processing while correcting the update amount to an appropriate value.

In other words, as shown in FIGS. 9A and 9B, if comparison data I0′ (I0+n′) approximating the original image data Img′ can be generated, then the initial image data I0 or the restored data I0+n from which that comparison data was generated approximates the original image data Img.

  In this embodiment, the value serving as the determination criterion for the difference data δ is “6” when each data element is represented by 8 bits (0 to 255). That is, when the difference is less than 6, that is, 5 or less, the processing is finished.

  Next, the camera shake restoration processing method shown in FIG. 8 (the iterative processing (restoration means) of steps S302 to S307) will be described in detail with reference to FIGS. 10 to 17.

(Image restoration algorithm)
When there is no camera shake, the light energy corresponding to a given pixel is concentrated on that pixel during the exposure time. When there is camera shake, the light energy is dispersed over the blurred pixels during the exposure time. Further, if the blur during the exposure time is known, the manner in which the energy was dispersed can be determined, so that a blur-free image can be created from the blurred image.

  Hereinafter, for simplicity, the description is given in one horizontal dimension. Let the pixels be S-1, S, S+1, S+2, S+3, ... in order from the left, and consider a certain pixel S. When there is no blur, the energy during the exposure time is concentrated on that pixel, so the energy concentration is “1.0”. This state is shown in FIG. 10, and the imaging result at this time is shown in the table of FIG. 11. FIG. 11 shows the correct image data Img when no deterioration occurs. Each data element is represented by 8 bits (0 to 255).

  Assume that there is blur during the exposure time, such that the energy stays on the Sth pixel for 50% of the exposure time, on the S+1th pixel for 30%, and on the S+2th pixel for 20%. The way the energy is dispersed is then as shown in the table of FIG. 12. This becomes the data G of the change factor information.

  The value of “N” in the above equation (5) is “3”, and the sum of the weights 50%, 30%, and 20% is “1”. Therefore, the change factor information G (here G(Xn), considered in one horizontal dimension) satisfies the above equation (4).

Based on FIG. 12 and equation (5), the movement energy E(0x, 0y) is calculated. Since the problem is considered in one horizontal dimension, the movement energy is written E(0x), and the movement distance for one pixel is taken as “1”. The movement energy for concentrating the dispersed light energy on the pixel “S” is then obtained by calculating E(0x) as follows.
(0 × 0.5) + (1 × 0.3) + (2 × 0.2) = 0.7

Similarly, E(0x) when the dispersed light energy is concentrated on the pixel “S+1” is calculated as follows.
(1 × 0.5) + (0 × 0.3) + (1 × 0.2) = 0.7

Similarly, E(0x) when the dispersed light energy is concentrated on the pixel “S+2” is calculated as follows.
(2 × 0.5) + (1 × 0.3) + (0 × 0.2) = 1.3

  From the above results, in the case of FIG. 12, the movement energy takes the minimum value “0.7” when the dispersed light energy is concentrated on the pixel “S” or “S+1”. Further, if the weights are instead “S = 0.45”, “S+1 = 0.3”, and “S+2 = 0.25”, the movement energy is smallest for the pixel “S+1”: the movement to the pixel “S” gives “0.8”, the movement to the pixel “S+1” gives “0.7”, and the movement to the pixel “S+2” gives “1.2”. Hereinafter, the details of the iterative process when the dispersed light energy is concentrated on the position with the smallest movement energy, that is, on the pixel “S” in the FIG. 12 example above, are described.
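The figures above can be reproduced mechanically. The sketch below is a one-dimensional illustration of the movement-energy calculation, with the movement distance for one pixel taken as “1” (the function name is ours, not the patent's):

```python
def movement_energy_1d(weights, target):
    """E(0x) when the dispersed light energy is concentrated on pixel
    `target`; `weights` maps pixel offset (0 = pixel S) to its weight."""
    return sum(w * abs(pos - target) for pos, w in weights.items())

g1 = {0: 0.5, 1: 0.3, 2: 0.2}     # the FIG. 12 example
g2 = {0: 0.45, 1: 0.3, 2: 0.25}   # the alternative weights in the text

for g in (g1, g2):
    print({t: round(movement_energy_1d(g, t), 2) for t in g})
# {0: 0.7, 1: 0.7, 2: 1.3}
# {0: 0.8, 1: 0.7, 2: 1.2}
```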

  The blur is uniform for all pixels and can be treated as a linear problem. If there is no vertical blur, the blur situation is as shown in the table of FIG. 13. The data shown as the “blurred image” in FIG. 13 becomes the degraded original image data Img′. Specifically, for example, the value “120” of the pixel “S-3” is dispersed according to the distribution ratios “0.5”, “0.3”, and “0.2” of the change factor data G, which is the blur information: “60” is allocated to the “S-3” pixel, “36” to the “S-2” pixel, and “24” to the “S-1” pixel. Similarly, “60”, the pixel data of “S-2”, is distributed as “30” to “S-2”, “18” to “S-1”, and “12” to “S”. The original image data Img is calculated from the degraded original image data Img′ and the change factor information data G shown in FIG. 13.
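The construction of the blurred image in FIG. 13 amounts to a one-dimensional superposition integral of the correct image with the change factor data G. The sketch below assumes zero energy entering from beyond the left edge, and the value 80 for the pixel “S-1” is inferred from the worked values quoted later (82 = 24 + 18 + 0.5 × 80); both are our assumptions, not values stated outright:

```python
def blur_1d(img, g):
    """Distribute each pixel's light energy to itself and the following
    pixels according to the change factor data g (FIG. 13)."""
    out = [0.0] * len(img)
    for p, value in enumerate(img):
        for k, ratio in enumerate(g):
            if p + k < len(out):
                out[p + k] += value * ratio
    return out

g = [0.5, 0.3, 0.2]     # data G of the change factor information
img = [120, 60, 80]     # pixels S-3, S-2, S-1 (the value 80 is inferred)
print(blur_1d(img, g))  # the degraded data Img': 60, 66, 82
```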

Although any arbitrary image data I0 shown in step S301 can be adopted, this description uses the original image data Img′; that is, the process starts with I0 = Img′. In the table of FIG. 14, the “input” corresponds to the initial image data I0. In step S302, this data I0, that is, Img′, is superposition-integrated with the change factor information data G. For example, “60” of the “S-3” pixel of the initial image data I0 is distributed as “30” to the “S-3” pixel, “18” to the “S-2” pixel, and “12” to the “S-1” pixel. The other pixels are distributed similarly, and the comparison data I0′ shown as “output I0′” is generated. The difference data δ of step S303 is therefore as shown in the bottom row of FIG. 14. It is then determined whether the maximum absolute value of the difference data δ is less than a predetermined value, for example 10 (step S304). In this example, the difference data δ of the pixel “S-3” is 30, so the determination in step S304 is No (= N) and the process proceeds to step S305.

As shown in FIG. 15, the difference data δ is allocated as follows. For example, the pixel “S-3” receives “15”, which is its own difference value “30” multiplied by the own-position distribution ratio 0.5; “4.5”, which is the difference value “15” of the pixel “S-2” multiplied by the distribution ratio 0.3; and “1.84”, which is the difference value “9.2” of the pixel “S-1” multiplied by the distribution ratio 0.2. The total amount allocated to the pixel “S-3” (the update amount Dc for this pixel) is therefore “21.34”. Adding this value to the initial image data I0, that is, to the original image data Img′, gives the updated pixel value Ia of FIG. 4, which constitutes the restored data I0+1 of FIG. 8. In this example, as shown in FIG. 15, the updated pixel value Ia is “81.34”. In this way, the difference data δ is allocated to the arbitrary image data I0 using the change factor information data G, and the restored data I0+n shown as the “next input” in FIG. 15 is generated. Since this is the first iteration, it is denoted I0+1 in FIG. 8.
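The numbers in this first iteration can be traced with a short sketch. It assumes the one-dimensional blur model of FIG. 13 with zero energy entering from the left edge and I0 = Img′; only the pixels “S-3” to “S-1” are carried, so values at the right edge are truncated:

```python
def blur_1d(img, g):
    """Superposition integral with the change factor data g (FIG. 13)."""
    out = [0.0] * len(img)
    for p, v in enumerate(img):
        for k, r in enumerate(g):
            if p + k < len(out):
                out[p + k] += v * r
    return out

g = [0.5, 0.3, 0.2]
img_prime = [60.0, 66.0, 82.0]   # degraded data Img' at pixels S-3..S-1
i0 = list(img_prime)             # step S301: start with I0 = Img'
i0_prime = blur_1d(i0, g)        # step S302: comparison data I0'
delta = [a - b for a, b in zip(img_prime, i0_prime)]        # step S303
# Step S305: pixel p receives the sum of g[k] * delta[p + k].
update = [sum(g[k] * delta[p + k]
              for k in range(len(g)) if p + k < len(delta))
          for p in range(len(delta))]
i1 = [a + d for a, d in zip(i0, update)]                    # restored data I0+1

print([round(d, 2) for d in delta])   # [30.0, 15.0, 9.2]
print(round(update[0], 2))            # 21.34 = the update amount Dc for S-3
print(round(i1[0], 2))                # 81.34 = the updated pixel value Ia
```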

Thereafter, the validity of the update amount is determined in step S306 of FIG. 8. Specifically, to correct the restored data I0+1, the updated pixel value (= Ia) is calculated for each pixel. This calculation is based on equation (1) as described above; the difference data δ is allocated to each pixel. Then, as shown in FIG. 15, the minimum value (= Min), maximum value (= Max), and average value (= Av) of the pre-update pixel values (= Ib) of the referenced pixels are calculated for each pixel (step S202 in FIG. 4). For example, the pixel “S-3” refers to the pixels “S-3”, “S-2”, and “S-1”, so the minimum, maximum, and average of these three pixels are calculated. In the illustrated example, for the pixel “S-3”, the maximum value is “82.00” at the pixel “S-1”, the minimum value is “60.00” at the pixel “S-3”, and the average value is “69.33”, obtained by dividing the sum of the values of the pixels “S-3”, “S-2”, and “S-1” by 3. The same calculation is performed for the pixels “S-2” to “S+4”.

Then, it is determined whether Ia and Ib satisfy any of the conditions of steps S204, S206, S208, S210, and S212 in FIG. 4. For example, Ia of the pixel “S-3” satisfies “Min (60.00) ≤ Ia (81.34) ≤ Max (82.00)”, and therefore satisfies the condition of step S204 in FIG. 4. Accordingly, the process of step S205 is performed as shown in FIG. 15: the update amount Dc of “21.34” is used as it is as the corrected update amount Dp, and the correction value Ia′ remains “81.34”, the value before correction. The same correction is performed for the updated pixel values Ia of “S-2” to “S”, “S+2”, and “S+3”. This correction value Ia′ becomes the corrected value of the restored data I0+n.

For example, the pixel “S+1” satisfies neither the condition of step S204 nor that of step S206, and the process proceeds to step S208. Since “Ib (121.00) > Av (113.33)” and “Ia (130.11) > Max (121.00)”, the condition of step S208 in FIG. 4 is satisfied. Accordingly, the process of step S209 is performed as shown in FIG. 15: “2.28” is used as the corrected update amount Dp, and the correction value Ia′ becomes “123.28”. The same correction is performed for Ia of the pixel “S+4”. This Ia′ becomes the corrected value of the restored data I0+n.

As shown in FIG. 16, the corrected restored data I0+1 (Ia′) becomes the new input image data in step S302 (= it replaces the initial image data I0), step S302 is executed, and the process proceeds to step S303 to obtain new difference data δ. The magnitude of the difference data δ is determined in step S304; if it is larger than the predetermined value, the new difference data δ is allocated in step S305 to the previously corrected restored data I0+1 to generate new restored data I0+2. The new restored data I0+2 is then corrected in the same manner as described for FIG. 15 (see FIG. 17). For example, the updated pixel values Ia of the pixels “S-3”, “S”, and “S+3” are corrected in the same way as the pixel “S-3” in FIG. 15; that is, the updated pixel value Ia becomes the correction value Ia′ as it is.

For example, the pixel “S+1” satisfies neither the condition of step S204 nor that of step S206, and the process proceeds to step S208. Since “Ib (121.00) > Av (116.15)” and “Ia (125.43) > Max (121.00)”, the condition of step S208 in FIG. 4 is satisfied. Accordingly, the process of step S209 is performed as shown in FIG. 17: the value “1.16” according to the above equation (2) is used as the corrected update amount Dp, and the correction value Ia′ becomes “122.11”. The same correction is performed on the updated pixel value Ia of the pixel “S+4”. This correction value Ia′ becomes the corrected value of the restored data I0+n.

For example, the pixel “S-2” satisfies none of the conditions of steps S204, S206, and S208, and the process proceeds to step S210. Since “Ib (77.30) ≤ Av (87.67)” and “Ia (76.97) < Min (77.30)”, the condition of step S210 in FIG. 4 is satisfied. Accordingly, as shown in FIG. 17, the process of step S211 is performed: the value “−0.082” according to the above equation (3) is used as the corrected update amount Dp, and the correction value Ia′ becomes “77.22”. The same correction is performed on the updated pixel values Ia of the pixels “S-1” and “S+2”. This correction value Ia′ becomes the corrected value of the restored data I0+n.
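The corrected values quoted in the worked cases (81.34 kept as is, 130.11 corrected to about 123.28, and 76.97 corrected to 77.22) are all consistent with a rule that reduces the update amount Dc to one quarter whenever the updated value leaves the [Min, Max] range of the referenced pixels in the direction away from the average. Equations (1) to (3) themselves are not reproduced in this excerpt, so the sketch below is an inference from the quoted numbers, not the patent's stated formula; the Min and Max values marked hypothetical are not given in the text:

```python
def correct_update(ib, dc, mn, mx, av):
    """Steps S306-S307 (inferred): validate the update amount Dc applied
    to the pre-update pixel value Ib, using Min/Max/Av of the referenced
    pixels; overshoots are damped to Dc / 4 (our inference)."""
    ia = ib + dc                       # updated pixel value Ia
    if ia > mx and ib > av:            # overshoot above (cf. step S208)
        return ib + dc / 4.0
    if ia < mn and ib <= av:           # overshoot below (cf. step S210)
        return ib + dc / 4.0
    return ia                          # within [Min, Max] (cf. step S204)

# Pixel "S-3": within [Min, Max], so Ia stays 81.34.
print(correct_update(60.00, 21.34, 60.00, 82.00, 69.33))
# Pixel "S+1": damped to about 123.28 (the Min of 98.0 is hypothetical).
print(correct_update(121.00, 9.11, 98.0, 121.00, 113.33))
```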

Thereafter, step S302 is performed using the corrected restored data I0+2, and new comparison data I0+2′ is generated from it. After steps S302 and S303 are executed, the process proceeds to step S304 and, depending on the determination, to step S305. This process is repeated.
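Putting steps S302 to S305 together gives the following simplified sketch. It omits the validity correction of steps S306 and S307, assumes the one-dimensional blur model of FIG. 13 with zero energy entering from the left edge, and relies on the fact that for this particular G the plain additive update already converges; it is an illustration of the repetition, not the patent's full method:

```python
def blur_1d(img, g):
    """Forward model of FIG. 13: distribute each pixel's energy forward."""
    out = [0.0] * len(img)
    for p, v in enumerate(img):
        for k, r in enumerate(g):
            if p + k < len(out):
                out[p + k] += v * r
    return out

def restore(img_prime, g, threshold=0.01, max_iter=2000):
    """Repeat steps S302-S305 until every |delta| falls below threshold."""
    i_n = list(img_prime)                          # step S301: I0 = Img'
    for _ in range(max_iter):
        i_n_prime = blur_1d(i_n, g)                # step S302: I0+n'
        delta = [a - b for a, b in zip(img_prime, i_n_prime)]  # step S303
        if max(abs(d) for d in delta) < threshold:             # step S304
            break
        n = len(delta)
        i_n = [i_n[p] + sum(g[k] * delta[p + k]                # step S305
                            for k in range(len(g)) if p + k < n)
               for p in range(n)]
    return i_n

g = [0.5, 0.3, 0.2]
true_img = [120.0, 60.0, 80.0, 100.0, 90.0]
restored = restore(blur_1d(true_img, g), g)
print([round(x, 1) for x in restored])  # approaches [120.0, 60.0, 80.0, 100.0, 90.0]
```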

(Main effects obtained by this embodiment)
By executing the change factor information generation process according to the present embodiment (FIG. 2), a valid PSF can be generated even when the change factor information data (PSF) is unknown, and practical signal restoration becomes possible. Therefore, a speed sensor or acceleration sensor for mechanically measuring blur can be omitted from the components of a signal processing device 1 such as a camera. During this process, the processing unit 4 performs the reduction process (step S102), which also reduces the magnitude of the change, making the PSF easier to estimate and yielding an appropriate PSF. The processing unit 4 also repeats the enlargement process (step S125) while gradually increasing the enlargement ratio, so that a large PSF can be estimated on the basis of the high-quality PSF obtained through the reduction process. Furthermore, the processing unit 4 determines and corrects the validity of the allocation values (steps S107 and S108), which suppresses extreme changes in data values caused by the allocation, so that an appropriate PSF is generated. The processing unit 4 also performs the process of resetting the origin of P′, which likewise suppresses extreme changes in data values caused by the allocation and yields a more appropriate PSF.

  Further, during this process, the processing unit 4 performs the process of emphasizing the skeleton part of the PSF (step S121), thereby generating a PSF that puts weight on the estimation of changes corresponding to blur. The processing unit 4 also sets the initial value of the PSF to a Gaussian disk (step S101), so that a good PSF can be estimated regardless of the kind of blur.

  The signal processing apparatus 1 according to the present embodiment can generate an appropriate PSF even when the PSF is unknown, and can perform practical signal restoration. Therefore, a speed sensor or acceleration sensor for mechanically measuring blur can be omitted from the components of the signal processing apparatus 1. In addition, since the reduction process (step S102) performed during the generation of the change factor information also reduces the magnitude of the change, the PSF is easy to estimate, and practical signal restoration based on an appropriate PSF is possible. Furthermore, during the generation of the change factor information, the processing unit 4 repeats the enlargement process (step S125) while gradually increasing the enlargement ratio, so that a large PSF can be estimated on the basis of the high-quality PSF obtained through the reduction process, and practical signal restoration based on that PSF can be performed.

  Further, the signal processing device 1 repeats steps S302 to S307 shown in FIG. 8 so that the difference data δ gradually decreases; when the difference data δ becomes smaller than the predetermined value, the unblurred original image data Img is obtained. Because the correction process (FIGS. 4 and 5) is performed at this time, the image data estimated as the original image data Img suffers less ringing and is restored in a good state. In addition, since the correction process (FIGS. 4 and 5) corrects unnatural data among the signal elements constituting the restored data and suppresses large changes in pixel values, an appropriate image can be restored even when the reliability of the change factor information data G estimated by the processing shown in FIG. 2 is low.

(Other forms)
The method of generating change factor information and the signal processing apparatus 1 of the present embodiment have been described above, but various modifications can be made without departing from the gist of the present invention. For example, in the method of generating change factor information, the processing unit 4 repeats the processing of steps S103 to S108 shown in FIG. 2 two or more times, but it may be performed only once. Further, all or part of the following can be omitted: the determination and correction of the validity of the allocation values (steps S107 and S108), the resetting of the origin of P′ (step S122), the enlargement process (step S125), the final enlargement process that makes the data the same size as the PSF data for the initial image data (step S128), and the process of emphasizing the skeleton part of the PSF (step S121). Furthermore, the initial value of the PSF can be set to an arbitrary value rather than to a Gaussian disk (step S101).

For example, although the signal restoration process in the signal processing apparatus 1 employs the process shown in FIG. 8, other processes, such as one using a Wiener filter, can be employed. Even when the process shown in FIG. 8 is adopted, either or both of the process of setting the origin position (step S300) and the process of determining and correcting the validity of the update amount (steps S306 and S307) can be omitted. Further, even when the process of determining and correcting the validity of the update amount (steps S306 and S307) is adopted, the method of correcting the restored data I0+n is not limited to the methods shown in FIGS. 4 and 5. In particular, the methods of correcting the difference data δ in steps S205, S207, S209, S211, and S213 can be changed as appropriate for each case of steps S204, S206, S208, S210, and S212 shown in FIG. 4.

  In addition, the processing unit 4 repeats the processing of steps S103 to S108 shown in FIG. 2 ten times, and in step S123 it repeats steps S103 to S122; these counts may be increased or decreased depending on the situation. The user of the signal processing apparatus 1 may also be allowed to set these counts arbitrarily. Furthermore, the number of changes of the image-size ratio in the reduction process (step S102) and the enlargement process (step S125), and the amount by which the ratio changes in the enlargement process (step S125), can be increased or decreased depending on the image quality of the restored image.

  The reduction and enlargement in the reduction process (step S102) and the enlargement process (step S125) are performed by thinning out pixels and by inserting new pixels whose values are the averages of the adjacent pixel values. However, other means may be used: for example, a reduction process that replaces a plurality of adjacent pixels with a single pixel having their average value, or an enlargement process that inserts new pixels whose values are taken directly from adjacent pixels. Further, in the process of emphasizing the skeleton part of the PSF (step S121), the value of the skeleton part data is doubled, but it may instead be multiplied by 1.5, 3, 4, 5, or the like.
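The reduction and enlargement described here can be sketched in one dimension as follows (the function names and the factor of two are illustrative choices, not the patent's):

```python
def reduce_half(pixels):
    """Reduce by thinning out every other pixel."""
    return pixels[::2]

def enlarge_double(pixels):
    """Enlarge by inserting after each pixel a new pixel whose value is
    the average of the two adjacent pixels (the last pixel is repeated)."""
    out = []
    for i, v in enumerate(pixels):
        nxt = pixels[i + 1] if i + 1 < len(pixels) else v
        out.extend([v, (v + nxt) / 2.0])
    return out

print(reduce_half([10, 20, 30, 40, 50]))   # [10, 30, 50]
print(enlarge_double([10, 30, 50]))        # [10, 20.0, 30, 40.0, 50, 50.0]
```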

  Further, the processing unit 4 performs the enlargement process (step S128) so that the data finally has the same size as the PSF data for the initial image. However, this process does not require data of exactly that size, and the data can instead be enlarged to a size different from that of the PSF data for the initial image.

  Further, the processing unit 4 performs a process of calculating the PSF (step S120). In this process, the obtained restored data I0+n and the original image data Img′ are each Fourier-transformed, the frequency characteristic of the PSF is obtained by division in frequency space, and the PSF (= P) is obtained by inverse Fourier transforming that frequency characteristic. However, other means of obtaining a new PSF (= P) from the restored data I0+n and the original image data Img′ can be adopted.

  In the process of determining and correcting the validity of the update amount (steps S306 and S307), the referenced pixels may be, for example, a range one pixel larger than the affected range, or a range within a predetermined distance centered on the pixel to be corrected. Further, as the predetermined criterion, only the maximum and minimum values of the referenced pixels may be used instead of the maximum, minimum, and average values; when the correction value exceeds the maximum or minimum value, a value of 1/4 or 1/3 may be added to the maximum or minimum value. An upper limit of 1.2 times the maximum value and a lower limit of 0.8 times the minimum value may also be set; that is, if the updated pixel value Ia lies within the range from Y times the minimum value to X times the maximum value, the pixel value Ia need not be corrected.

  In the iterative processing according to the present embodiment, in step S105 of FIG. 2 or step S304 of FIG. 8, the processing unit 4 determines for the entire image whether the absolute values of the difference data Δ and δ of all of the pixels constituting the image are less than a predetermined value, or whether the average of those absolute values is less than a predetermined value, in order to decide whether to process the image again. However, the value compared with the predetermined value may instead be the difference data of each individual pixel, with the decision to stop the iteration made pixel by pixel. The comparison value may also be the sum of the difference data δ of the pixels, the sum of the absolute values of the difference data Δ and δ of the pixels, or a combination of two or more of the above four. For example, it may be determined whether the difference data Δ and δ of the pixel farthest from zero and the sum of the difference data δ of the pixels satisfy separate criteria. By appropriately selecting the value compared with the predetermined value in this way, appropriate processing can be performed according to the type of original image, the state of the change, or the status of the restoration processing.

  In the above-described embodiment, the restoration target is image data, but the concepts and techniques of this restoration processing can be applied to the restoration of any digital data. For example, they can be applied to the restoration of digital audio data. As a result, phenomena such as ringing that produce partially inaccurate audio data can be efficiently suppressed, and a restoration process that yields reasonable results is possible even when the change factor information data is inaccurate.

  In the above-described embodiment, the signal processing device 1 is a consumer camera. However, the signal processing device 1 may instead be a printer device that executes one or more of the processes shown in FIGS. 2, 4, 8, and 9 on image data captured by a digital camera or the like and then prints the result. The signal processing apparatus 1 may also be a computer in which software for operating a printer device while performing one or more of the processes shown in FIGS. 2, 4, 8, and 9 is installed, or a computer in which software that executes one or more of those processes is installed.

  Moreover, each processing method described above may be programmed. The program may be stored in a storage medium, such as a CD, DVD, or USB memory, and read by a computer. The program may also be placed in a storage medium on a server external to the signal processing device 1 and downloaded for use as necessary; in that case, the signal processing apparatus 1 has communication means for downloading the program from the storage medium.

  In the restoration processing methods shown in FIGS. 2, 4, 8, and the like, the processing performed by the processing unit 4 is configured as software, but parts of the processing may instead be configured as hardware. Further, the change factor information data G includes not only degradation factor information but also information for simply changing an image and information for improving an image, the opposite of degradation.

  Further, when the number of iterations of the processing (step S109, step S123, step S304) is set automatically or to a fixed value on the signal processing device 1, the set number may be varied according to the PSF values (P, P′) or the change factor information data G. For example, when the data of a certain pixel is spread over many pixels due to blurring, the number of iterations may be increased; when the spread is small, the number of iterations may be decreased.
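One way to realize this spread-dependent setting is to measure the spatial variance of the PSF and scale the iteration count with it. The sketch below is an illustrative assumption — the function name, the base count, and the scaling rule are not taken from the embodiment:

```python
import numpy as np

def choose_iterations(psf, base_iters=10, max_iters=100):
    """Pick an iteration count from the spatial spread of a PSF.

    The wider the PSF (more blur), the more iterations are allowed.
    The scaling rule here is a hypothetical heuristic.
    """
    psf = psf / psf.sum()                      # normalize to unit energy
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum()                      # centroid (y)
    cx = (xs * psf).sum()                      # centroid (x)
    # second moment about the centroid = spatial variance of the PSF
    var = (((ys - cy) ** 2 + (xs - cx) ** 2) * psf).sum()
    spread = np.sqrt(var)                      # RMS radius in pixels
    return int(min(max_iters, base_iters * (1.0 + spread)))

# A delta-function PSF (no blur) needs few iterations;
# a broad uniform PSF needs more.
sharp = np.zeros((9, 9)); sharp[4, 4] = 1.0
broad = np.ones((9, 9))
assert choose_iterations(sharp) < choose_iterations(broad)
```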

  Furthermore, the iterative processing may be stopped when the difference data Δ, δ diverges, or when the energy of the image data after the energy has been moved increases rather than decreases. Whether the data is diverging can be determined, for example, by monitoring the average value of the difference data Δ, δ and judging divergence when the average exceeds its previous value. In addition, during iterative processing, if an input is about to be changed to an abnormal value, the processing may be stopped; for example, with 8-bit data, the processing is stopped if the value to be set exceeds 255. Alternatively, when new input data is about to be set to an abnormal value, that value may be replaced by a normal value instead of being used directly; for example, when a value exceeding 255 appears in data with the 8-bit range 0 to 255, it is processed as the maximum value 255.
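These safeguards — halting when the mean difference grows, and clamping out-of-range 8-bit values to the nearest normal value — can be sketched as follows (the function names and the exact stopping policy are illustrative assumptions):

```python
import numpy as np

def diverging(diff, prev_mean):
    """Stop criterion: the mean absolute difference grew since the
    previous iteration (average of Δ, δ exceeds its previous value)."""
    mean = float(np.abs(diff).mean())
    return mean > prev_mean, mean

def clamp_update(value, lo=0, hi=255):
    """Replace an out-of-range update with the nearest normal value,
    e.g. anything above 255 in 8-bit data is processed as 255."""
    return max(lo, min(hi, value))

assert clamp_update(300) == 255    # abnormal high value -> maximum 255
assert clamp_update(-5) == 0       # abnormal low value  -> minimum 0
prev = 10.0
stop, prev = diverging(np.array([20.0, -20.0]), prev)
assert stop                        # mean |diff| = 20 > 10: halt iteration
```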

  In addition, when generating the restoration data that will become the output image, data may be generated that falls outside the area of the image to be restored, depending on the PSF values (P, P′) and the change factor information data G. In such a case, data that protrudes beyond one edge of the area is fed back in from the opposite edge. Likewise, when data that should come from outside the area is needed, it is preferable to bring that data in from the opposite edge.
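This opposite-side treatment is periodic (wrap-around) boundary handling, which modular indexing expresses directly; the helper below is an illustrative sketch, not the embodiment's exact implementation:

```python
import numpy as np

def wrap_add(image, y, x, value):
    """Add `value` at (y, x), wrapping coordinates that fall outside the
    image area back in from the opposite side (periodic boundary)."""
    h, w = image.shape
    image[y % h, x % w] += value

img = np.zeros((4, 4))
wrap_add(img, -1, 5, 1.0)   # protrudes top/right; lands at row 3, col 1
assert img[3, 1] == 1.0
```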

  Table 1 shows the evaluation of restored images that have undergone the processes described above. The conventional example is a photographed image (Img′) left degraded by blurring and the like. The image obtained by restoring the signal with the process shown in FIG. 8, using the PSF data obtained by the process shown in FIG. 2, is referred to as Example 1. Images obtained using PSF data from which the predetermined steps listed in Table 1 were omitted from the process of FIG. 2, and restored with the predetermined steps listed in Table 1 omitted from the process of FIG. 8, are referred to as Examples 2 to 12. Note that Example 12 is an image for which a Wiener filter was used for the image restoration instead of the process of FIG. 8. Further, as Comparative Example 1, an image restored by the process shown in FIG. 8 without performing the process shown in FIG. 2 was examined. Furthermore, as Comparative Example 2, an image for which the reduction process (step S102) and the enlargement process (steps S125 and S126) were omitted was also examined.

  The images of the conventional example, Examples 1 to 12, and the comparative examples were visually inspected for the presence or absence of blurring, ringing, and blur (so-called out-of-focus). The inspection results are also shown in Table 1. The inspectors had a visual acuity of 1.0 in both eyes and no color vision deficiency. A rating of "present" means that blurring or the like can be recognized at a glance. A rating of "slightly present" means that blurring or the like can be recognized within 5 seconds of visual comparison with the other images. A rating of "absent" means that no blurring or the like can be recognized even after more than 5 seconds of visual comparison with the other images. Two types of images were used: a landscape and a person. The images were printed with a commercially available color printer under conditions chosen to eliminate the influence of printing on the evaluation.

  Since the conventional example is a degraded image, blurring and blur can be clearly confirmed in it. In Example 1, neither blurring, ringing, nor blur was observed. The blurring disappeared because the information corresponding to the blur information that forms the center of the PSF (part A in FIG. 3) was appropriate. The blur is considered to have disappeared owing to the information other than that corresponding to the blur information forming the center of the PSF (part B in FIG. 3). In Comparative Example 1, since the PSF of part B in FIG. 3 cannot be obtained, this effect is not obtained and the blur cannot be eliminated.

  Example 2 is an image in which the processing of steps S103 to S108 in FIG. 2 was performed only once. In this image, blurring and blur were improved compared with the conventional example. There was also a tendency for blurring and blur to improve further as the number of repetitions of steps S103 to S108 in FIG. 2 increased.

  Example 3 is an image processed with steps S107, S108, and S122 in FIG. 2 omitted. In this image, blurring and blur were improved compared with the conventional example.

  Example 4 is an image processed with steps S107 and S108 in FIG. 2 omitted. Example 5 is an image processed with step S122 in FIG. 2 omitted. In these images, neither blurring nor blur was observed.

  Example 6 is an image in which steps S125 and S126 in FIG. 2 were omitted, so that the image was processed without gradual enlargement. Example 7 is an image in which step S123 in FIG. 2 was omitted and the number of repetitions was one. In these images, blurring and blur were improved compared with the conventional example.

  Example 8 is an image processed with step S121 in FIG. 2 omitted. In this image, some blurring was observed, but no blur was observed. To obtain the blur-elimination effect, the process of step S121 in FIG. 2 may be omitted or weakened (for example, multiplying the skeleton data by 1.5); to further strengthen the blurring-elimination effect, the process of step S121 in FIG. 2 may be emphasized (for example, tripling the skeleton data).
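The skeleton-emphasis adjustment described above (multiplying the skeleton data by a factor such as 1.5 or 3) can be sketched as follows; how the skeleton part is identified is an assumption here, since the embodiment defines it via FIG. 3:

```python
import numpy as np

def emphasize_skeleton(psf, skeleton_mask, factor=3.0):
    """Multiply the skeleton part of the PSF by `factor`, then renormalize
    so the total energy of the PSF stays 1."""
    out = psf.copy()
    out[skeleton_mask] *= factor
    return out / out.sum()

psf = np.array([0.1, 0.6, 0.1, 0.1, 0.1])    # central weight = skeleton part
mask = np.array([False, True, False, False, False])
out = emphasize_skeleton(psf, mask, factor=3.0)
assert abs(out.sum() - 1.0) < 1e-12           # still a normalized PSF
assert out[1] > psf[1]                        # skeleton weight increased
```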

  Example 9 is an image processed with steps S300, S306, and S307 in FIG. 8 omitted. In this image, neither blurring nor blur was observed, but ringing was observed. However, it was confirmed that when the iterative processing with steps S300, S306, and S307 of FIG. 8 omitted was performed an extremely large number of times (for example, 100 times or more), the occurrence of ringing was reduced or no longer observed.

Example 10 is an image processed with step S300 in FIG. 8 omitted, and Example 11 is an image processed with steps S306 and S307 in FIG. 8 omitted. In these images, neither blurring, blur, nor ringing was observed. That is, it was found that the occurrence of ringing can be suppressed by executing either the process of determining the origin position (step S300) or the process of determining the validity of the update amount and correcting I0+n (steps S306 and S307). Moreover, the ringing-suppression effect of steps S306 and S307 turned out to be slightly superior to that of the process of step S300.

  Example 12 is an image obtained by omitting all the processing of FIG. 8 and performing the image restoration with a Wiener filter instead. In this image, blurring and blur were improved compared with the conventional example, but ringing was observed.

  Comparative Example 2 is an image processed with the reduction process (step S102) and the enlargement process (steps S125 and S126) in FIG. 2 omitted. In this image, slight blurring and blur were observed.

  Next, processing in which steps S306 and S307 in FIG. 8 were omitted and the condition of step S300 was changed was examined. For the modified example of FIG. 12, in which the pixel "S" is 0.45, the pixel "S+1" is 0.3, and the pixel "S+2" is 0.25, the iterative processing shown in FIG. 8 was performed with the dispersed light energy concentrated, in turn, on each of the pixels "S", "S+1", and "S+2" as the origin position. The restored image in each case was visually inspected for the presence or absence of ringing. Further, the calculation range "N = 3" of Expression (5) was slightly expanded, and the presence or absence of ringing was likewise judged for the restored images obtained by performing the iterative processing of FIG. 8 with the dispersed light energy concentrated on the pixels "S−1" and "S+3" of FIG. 12. Table 2 shows the judgment results.

Note that the movement energy in the case where the dispersed light energy is concentrated on the pixel "S−1" is calculated in substantially the same manner as for the pixels "S", "S+1", and "S+2", as follows.
(0 × 0) + (1 × 0.45) + (2 × 0.3) + (3 × 0.25) + (4 × 0) = 1.80
Similarly, the movement energy when the dispersed light energy is concentrated on the pixel “S + 3” is calculated and obtained as follows.
(4 × 0) + (3 × 0.45) + (2 × 0.3) + (1 × 0.25) + (0 × 0) = 2.20
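The worked sums above can be reproduced directly. The function below is a sketch of the movement-energy computation of Expression (5) (absolute pixel distance to the origin position times the dispersed energy at each pixel, summed), using the weights of the modified example of FIG. 12:

```python
def movement_energy(weights, origin):
    """Total movement energy when the dispersed light energy `weights`
    (indexed 0..N-1) is concentrated on pixel index `origin`."""
    return sum(abs(i - origin) * w for i, w in enumerate(weights))

# Pixels S-1, S, S+1, S+2, S+3 carry energies 0, 0.45, 0.3, 0.25, 0.
w = [0.0, 0.45, 0.3, 0.25, 0.0]
assert round(movement_energy(w, 0), 2) == 1.80   # concentrate on S-1
assert round(movement_energy(w, 1), 2) == 0.80   # on S
assert round(movement_energy(w, 2), 2) == 0.70   # on S+1 (the minimum)
assert round(movement_energy(w, 3), 2) == 1.20   # on S+2
assert round(movement_energy(w, 4), 2) == 2.20   # on S+3
```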

  From the results in Table 2 and many other examples, it was found that ringing was observed when the movement energy exceeded a predetermined value, and that ringing could be suppressed by keeping the movement energy within that value. That is, when Min is the minimum value of the total movement energy derived from Expression (5), ringing could be suppressed considerably compared with the conventional method as long as the movement energy, even if it exceeded Min, remained at or below Min × 1.2.

  As shown in Table 2, the condition of step S300 in FIG. 8 can be varied in several ways. For example, instead of setting the origin position (0x, 0y) that minimizes the total sum of the movement energy E(0x, 0y), an origin position (0x, 0y) whose movement energy exceeds the minimum of E(0x, 0y) but remains below a predetermined value may be set. In the modified example of FIG. 12, where "S = 0.45", "S+1 = 0.3", and "S+2 = 0.25", not the pixel "S+1" with the smallest movement energy but the pixel "S" may be used as the origin position (0x, 0y). The movement energies here are 0.8 for the pixel "S", 0.7 for the pixel "S+1", and 1.2 for the pixel "S+2"; the value for the pixel "S" is smaller than 0.84, which is the minimum value 0.7 multiplied by 1.2.
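A sketch of this relaxed origin-selection rule (accepting any origin whose movement energy is within Min × 1.2) follows; the tie-breaking choice of the earliest admissible pixel is an illustrative assumption:

```python
def select_origin(weights, slack=1.2):
    """Return the first pixel index whose movement energy is within
    `slack` times the minimum movement energy (the Min * 1.2 rule)."""
    energies = [sum(abs(i - o) * w for i, w in enumerate(weights))
                for o in range(len(weights))]
    limit = min(energies) * slack
    for o, e in enumerate(energies):
        if e <= limit:
            return o

# FIG. 12 modified example: S=0.45, S+1=0.3, S+2=0.25 (indices 1..3).
w = [0.0, 0.45, 0.3, 0.25, 0.0]
# Min = 0.7 (pixel S+1); limit = 0.84; pixel S (energy 0.8) qualifies first.
assert select_origin(w) == 1
```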

  In the present embodiment, the origin position can be any position on the XY plane of FIG. 7. Alternatively, the origin position may be determined within a range on the trajectory AB in FIG. 7. That is, for example, it is preferable to select as the origin position whichever of the pixels "S", "S+1", "S+2", "S−1", or "S+3" in FIG. 12 lies within the range of the trajectory AB of FIG. 7 and reduces the total movement energy. Further, the blur trajectory represented on the XY plane of FIG. 7 may be projected onto the X-axis or the Y-axis, and the position giving the minimum movement energy on the projected trajectory, or a value chosen as in the above-described embodiment, may be obtained. Furthermore, the change factor information data obtained in each embodiment can be used not only for the iterative signal restoration shown in the present embodiment but also for other signal restoration methods, such as restoration by deconvolution calculation using a known blurred image and this change factor information.

Brief Description of the Drawings

FIG. 1 is a block diagram showing the main structure of the signal processing apparatus according to an embodiment of the present invention.
FIG. 2 is a processing flowchart explaining the processing routine of the method for generating change factor information data performed by the processing unit of the signal processing device shown in FIG. 1.
FIG. 3 illustrates the process of emphasizing the skeleton part of the PSF shown in FIG. 2, where (a) shows an example of the PSF estimated before the process and (b) shows the PSF after the process.
FIG. 4 is a flowchart explaining the processing routine, shown in FIG. 2, for determining the validity of the update amount and correcting the restored data.
FIG. 5 is a diagram schematically showing an example of the update amount before and after the correction related to the processing flow shown in FIG. 4.
FIG. 6 is an external perspective view showing an outline of the signal processing device shown in FIG. 1, explaining which plane of the signal processing device 1 is indicated by the XY plane of FIG. 7.
FIG. 7 is a diagram explaining the processing of step S112 of the processing routine for generating change factor information data shown in FIG. 2, showing the locus of blur represented on the XY plane.
FIG. 8 is a processing flowchart explaining the processing routine of the image restoration processing method (iterative processing) performed by the processing unit of the signal processing device shown in FIG. 1.
FIG. 9 is a diagram explaining the concept of the processing method shown in FIG. 8.
FIG. 10 is a diagram explaining the processing method of FIG. 8 concretely, taking camera shake as an example, and is a table showing the concentration of energy when there is no camera shake.
FIG. 11 is a similar diagram showing the image data when there is no camera shake.
FIG. 12 is a similar diagram showing the dispersion of energy when camera shake occurs.
FIG. 13 is a similar diagram explaining the situation in which comparison data is generated from an arbitrary image.
FIG. 14 is a similar diagram explaining the situation in which the comparison data and the blurred original image to be processed are compared to generate difference data.
FIG. 15 is a similar diagram explaining the situation in which the difference data is allocated and added to the arbitrary image to generate restored data.
FIG. 16 is a similar diagram explaining the situation in which new comparison data is generated from the generated restored data and compared with the blurred original image to be processed to generate difference data.
FIG. 17 is a similar diagram explaining the situation in which the newly generated difference data is allocated to generate new restored data.

Explanation of symbols

1 Signal processing apparatus
4 Processing unit
S101 Process of setting the first change factor information data
Io Initial image data (arbitrary signal data)
Io' Comparison data
G Change factor information data
Img' Original image data (source signal data)
SImg' Reduced original image data
I0+n Restored data
Img Original image (original signal)
Δ, δ Difference data
P0 Reduced data

Claims (6)

  1. A method of generating change factor information data in which, from source signal data in which a change such as deterioration has occurred, change factor information data is generated whose signal-change factor uniformly affects the entire signal, wherein
    arbitrary change factor information data is set as first change factor information data, and a generation process is performed that generates new change factor information data different from the first change factor information data;
    in the generation process, as the data obtained by processing the source signal data, reduced data obtained by applying a reduction process to the source signal is used;
    the generation process comprises:
    (1) generating comparison data from arbitrary signal data using the set first change factor information data;
    (2) generating difference data by comparing the comparison data with the source signal data or data obtained by processing the source signal data;
    (3) generating first restored data by allocating the obtained difference data to the arbitrary signal data using the first change factor information data;
    (4) repeating the same processing as necessary, using the first restored data in place of the arbitrary signal data; and
    (5) generating, from the restored data obtained as a result and the source signal data or data obtained by processing the source signal data, new change factor information data different from the first change factor information data; and
    the processing of (3) is performed by a method of moving part or all of the signal element data constituting the source signal (or data obtained by processing the source signal data) and the restored data, whereby it is determined whether each signal element constituting the restored data satisfies a predetermined criterion, and if a signal element does not satisfy the predetermined criterion, the distribution value calculated using the difference data is corrected and data obtained with the corrected distribution value is used in place of the data that did not satisfy the criterion.
  2. The method of generating change factor information data according to claim 1, wherein in the generation process, after the new change factor information data is obtained from the reduced data, an enlargement process for enlarging the new change factor information data is performed, and the same generation process is performed one or more further times using the enlarged change factor information data in place of the new change factor information data, repeating the processes after the enlargement process as necessary.
  3. The method of generating change factor information data according to claim 1 or 2, wherein in the generation process, when Min is the minimum value of the total movement energy required to move the data of each signal element to the origin position of the obtained change factor information data, the origin position is set to a position at which the movement energy is not less than Min and not more than Min × 1.2.
  4. The method of generating change factor information data according to any one of claims 1 to 3, wherein the first change factor information data is data whose intensity distribution is a Gaussian distribution.
  5. A signal processing apparatus comprising a processing unit that uses change factor information data, whose signal-change factor uniformly affects the entire signal, to restore, from source signal data in which a change such as deterioration has occurred, the signal before the change, the signal that should originally have been acquired, or a signal approximating them (hereinafter, the original signal), wherein
    the processing unit sets arbitrary change factor information data as first change factor information data and performs a generation process that generates new change factor information data different from the first change factor information data;
    in the generation process, as the data obtained by processing the source signal, reduced data obtained by applying a reduction process to the source signal is used;
    the restoration of the original signal is performed based on the generated new change factor information data or data obtained by processing the change factor information data;
    the processing unit:
    (1) generates comparison data from arbitrary signal data using the set first change factor information data;
    (2) generates difference data by comparing the comparison data with the source signal data or data obtained by processing the source signal data;
    (3) generates first restored data by allocating the obtained difference data to the arbitrary signal data using the first change factor information data;
    (4) repeats the same processing as necessary, using the first restored data in place of the arbitrary signal data;
    (5) generates, from the restored data obtained as a result and the source signal data or data obtained by processing the source signal data, new change factor information data different from the first change factor information data; and
    (6) restores the original signal based on the generated new change factor information data or data obtained by processing the change factor data; and
    the processing of (3) is performed by a method of moving part or all of the signal element data constituting the source signal (or data obtained by processing the source signal data) and the restored data, whereby it is determined whether each signal element constituting the restored data satisfies a predetermined criterion, and if a signal element does not satisfy the predetermined criterion, the distribution value calculated using the difference data is corrected and data obtained with the corrected distribution value is used in place of the data that did not satisfy the criterion.
  6. The signal processing apparatus according to claim 5, wherein in the generation process, after the new change factor information data is obtained from the reduced data, an enlargement process for enlarging the new change factor information data is performed, and the generation process is performed using the enlarged change factor information data in place of the new change factor information data.
JP2008052808A 2008-03-04 2008-03-04 Method for generating data of change factor information and signal processing apparatus Expired - Fee Related JP5065099B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008052808A JP5065099B2 (en) 2008-03-04 2008-03-04 Method for generating data of change factor information and signal processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008052808A JP5065099B2 (en) 2008-03-04 2008-03-04 Method for generating data of change factor information and signal processing apparatus

Publications (2)

Publication Number Publication Date
JP2009212740A JP2009212740A (en) 2009-09-17
JP5065099B2 true JP5065099B2 (en) 2012-10-31

Family

ID=41185477

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008052808A Expired - Fee Related JP5065099B2 (en) 2008-03-04 2008-03-04 Method for generating data of change factor information and signal processing apparatus

Country Status (1)

Country Link
JP (1) JP5065099B2 (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0628469A (en) * 1992-07-06 1994-02-04 Olympus Optical Co Ltd Deteriorated image restoring system
JPH11272866A (en) * 1998-03-23 1999-10-08 Olympus Optical Co Ltd Image restoring method
JP2000004363A (en) * 1998-06-17 2000-01-07 Olympus Optical Co Ltd Image restoring method
JP2001242414A (en) * 2000-03-01 2001-09-07 Minolta Co Ltd Design method for beam-shaping element
JP2005017136A (en) * 2003-06-26 2005-01-20 Jeneshia:Kk Optical system evaluator
WO2006082979A1 (en) * 2005-02-07 2006-08-10 Matsushita Electric Industrial Co., Ltd. Image processing device and image processing method
JP4606976B2 (en) * 2005-09-07 2011-01-05 日東光学株式会社 Image processing device
JP4763419B2 (en) * 2005-10-27 2011-08-31 日東光学株式会社 Image processing device
JP4926450B2 (en) * 2005-11-01 2012-05-09 日東光学株式会社 Image processing device
JP4787959B2 (en) * 2006-01-10 2011-10-05 国立大学法人東京工業大学 Image restoration filter and image restoration method using the same
JP5222472B2 (en) * 2006-07-14 2013-06-26 イーストマン コダック カンパニー Image processing apparatus, image restoration method, and program

Also Published As

Publication number Publication date
JP2009212740A (en) 2009-09-17


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20110302

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20111110

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20111115

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120113

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120515

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120710

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120731


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120809

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150817

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees