WO2015152424A1 - Device, program, and method for assisting with focus evaluation - Google Patents
- Publication number
- WO2015152424A1 (PCT/JP2015/060736)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- value
- edge component
- elements
- function
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
Definitions
- The present invention relates to a technique for assisting a human in confirming whether an image is in focus (hereinafter referred to as "focus evaluation") when shooting a moving image or a still image with a camera or the like.
- Conventionally, focus evaluation when a moving image or a still image is taken with a camera or the like has generally been performed by focusing on pixels that represent edges of the image (for example, pixels that represent the contour of an object).
- For example, there is a technique for enhancing pixels representing edges by using edge enhancement processing.
- Japanese Patent Laid-Open No. 2010-114556 normalizes luminance values constituting image data, and performs a filter operation using HPF on the normalized image data to obtain a filter operation value for each pixel.
- a technique for coloring a pixel having a filter calculation value larger than an edge threshold with a specific color is described (see paragraphs [0080] to [0092] and FIGS. 6 to 11 of the document).
- the pixel representing the edge generally has a large filter calculation value after the filter calculation using HPF.
- A technique is also described in which a predetermined limit value is provided for the edge enhancement level; when the edge amount increases due to gain multiplication and the edge enhancement level reaches the limit value, the edge enhancement level is clipped to a predetermined value and saturated.
- JP 2010-114556 A; Japanese Patent No. 5320538; Japanese Patent No. 5396626; JP 2010-16783 A
- However, the contour enhancement processing does not sufficiently assist focus evaluation for an image that has few edges and few pixels whose luminance changes sharply relative to neighboring pixels, that is, an image with small high-frequency spatial frequency components.
- the edge enhancement processing may not provide sufficient assistance for focus evaluation.
- Furthermore, a single pixel to which a specific color is assigned may be lost when the image is reduced, and thus may not be displayed.
- In recent years, images are sometimes captured at 4K (4096 × 2160 pixels), and when such an image is confirmed on the small, low-resolution monitor attached to a camera, the image has to be reduced before being displayed.
- For such images, the conventional contour enhancement processing does not provide sufficient assistance for focus evaluation. For this reason, when shooting video at 4K, it is in practice common to bring in a large 4K monitor to evaluate the focus of the captured image.
- Pixels representing edges can be one index for evaluating focus, but focus is not evaluated by them alone.
- Information useful for focus evaluation is also contained in pixels that do not represent edges. For example, the edge atmosphere of the entire screen can be confirmed from pixels that do not represent edges. However, conventional contour enhancement processing has not utilized this information in assisting focus evaluation.
- the present invention has been made in view of the above problems.
- The apparatus comprises edge component generation means for generating an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after at least the direct current component has been removed from the spatial frequency components of the first image;
- image generation means for generating a second image from the edge component, wherein the second image includes a plurality of pixels and the means includes means for determining the color of each pixel of the second image as a function of the value of each element of the edge component;
- and the apparatus is configured to output an image based on the newly generated second image to the display means in response to each new input of the first image to the edge component generation means.
- the edge component generation means may include means for performing nonlinear processing on the value of each element of the edge component before applying the limiter.
- the value of the element of the edge component after the nonlinear processing is the output of the first nonlinear function that receives the value of the element of the edge component before the nonlinear processing.
- the edge component generation means may include means for performing nonlinear processing including limiter application on the value of each element of the edge component.
- the value of the element of the edge component after the nonlinear processing is the output of a second nonlinear function that receives the value of the element before the nonlinear processing, and the second nonlinear function is a composite of at least the first nonlinear function and at least a function for limiter application.
- the first nonlinear function may be a function that is defined in the first and third quadrants and passes through the origin.
- the apparatus may further include means for switching the first nonlinear function between a plurality of nonlinear functions when the first image is successively input.
- the edge component generation means may include means for applying at least a two-dimensional high-pass filter to the first image to remove at least a direct current component from the spatial frequency component in the first image.
- the edge component generation means may include first-direction one-dimensional high-pass filter means for the first image, second-direction one-dimensional high-pass filter means for the first image, and means for synthesizing the outputs of the first-direction and second-direction one-dimensional high-pass filter means.
- the edge component generation means may include one or both of amplification means for changing the value of each element of the edge component to the value obtained by multiplying it by a predetermined coefficient, and limiter application means for changing the value of an element to a predetermined threshold when the value is larger (or smaller) than that threshold.
- the edge component generation means may include one or both of isolated point removal filter means for replacing the value of an element whose absolute value is larger than the absolute values of the surrounding elements with a value obtained from the values of the surrounding elements, and spreading filter means for replacing the value of an element whose absolute value is smaller than the absolute values of the surrounding elements with a value obtained from the values of the surrounding elements.
- the isolated point removal filter means can be configured to specify a continuous region of 3 × 3 elements in the edge component, obtain the number of elements, among the elements excluding the central element of the region, whose absolute value is less than or equal to a threshold given by the product of the maximum value an element can take and a first ratio, and, when the obtained number of elements is equal to or greater than a first number, replace the value of the central element of the region with the median of the values of the elements excluding the central element.
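The isolated point removal filter just described can be sketched as follows. The algorithm (3 × 3 region, ratio threshold, count test, median replacement) follows the text; the specific values of the first ratio and first number are tunable parameters, and the ones used here are illustrative assumptions only.

```python
import numpy as np

def remove_isolated_points(edge, ratio=0.1, min_count=7, max_val=255):
    """Isolated point removal filter: for each 3x3 region, if at least
    `min_count` of the 8 surrounding elements have an absolute value not
    exceeding max_val * ratio, the central element (a lone strong
    response, i.e. likely noise) is replaced with the median of the
    surrounding values. `ratio` (first ratio) and `min_count` (first
    number) are illustrative assumptions."""
    thresh = max_val * ratio
    h, w = edge.shape
    out = edge.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 8 surrounding elements (drop the central element, index 4)
            surround = np.delete(edge[y-1:y+2, x-1:x+2].flatten(), 4)
            if np.count_nonzero(np.abs(surround) <= thresh) >= min_count:
                out[y, x] = np.median(surround)
    return out
```

A lone spike surrounded by near-zero elements is suppressed, while an element inside a genuine edge (strong neighbors) is left untouched.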
- the spreading filter means can be configured to specify a continuous region of 3 × 3 elements in the edge component, obtain the number of elements, among the elements excluding the central element of the region, whose absolute value is greater than or equal to a threshold given by the product of the maximum value an element can take and a second ratio, and, when the obtained number of elements is equal to or greater than a second number, replace the value of the central element of the region with the median of the values of the elements excluding the central element.
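The spreading filter is the mirror image of the isolated point removal filter and can be sketched the same way; the second ratio and second number below are illustrative assumptions.

```python
import numpy as np

def spread_edges(edge, ratio=0.3, min_count=5, max_val=255):
    """Spreading filter: for each 3x3 region, if at least `min_count` of
    the 8 surrounding elements have an absolute value of at least
    max_val * ratio, the central element is replaced with the median of
    the surrounding values, filling holes in and thickening strong edge
    responses. `ratio` (second ratio) and `min_count` (second number)
    are illustrative assumptions."""
    thresh = max_val * ratio
    h, w = edge.shape
    out = edge.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 8 surrounding elements (drop the central element, index 4)
            surround = np.delete(edge[y-1:y+2, x-1:x+2].flatten(), 4)
            if np.count_nonzero(np.abs(surround) >= thresh) >= min_count:
                out[y, x] = np.median(surround)
    return out
```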
- the first image is generated from the original image
- the original image includes a plurality of pixels
- the value of each pixel of the first image is derived from the color of the corresponding pixel of the original image.
- the apparatus may further comprise further image generation means for generating the image to be output to the display means by replacing at least a part of the original image with the second image and/or superimposing the second image on the original image.
- in the superimposition, C out = αC in1 + βC in2 can be used (α and β are real numbers satisfying 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1), where C out is a value that specifies the color of a certain pixel in the superimposed image
- C in1 and C in2 are values that specify the color of the corresponding pixel in the original image and the second image, respectively.
- the further image generating means may include means for adjusting the color of the original image.
- Another embodiment of the present invention is a program that causes a computer to function as a focus evaluation auxiliary device.
- Another embodiment of the present invention is a focus evaluation assisting method.
- This method includes an edge component generation step of generating an edge component from a first image by edge component generation means, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after at least the direct current component has been removed from the spatial frequency components of the first image;
- an image generation step of generating a second image from the edge component, wherein the second image includes a plurality of pixels and the step includes determining the color of each pixel of the second image as a function of the value of each element of the edge component;
- and an image output step of outputting an image based on the second image to the display means by image output means; these steps are repeated in response to each new input of the first image.
- the edge component generation step may include a step of performing nonlinear processing on the value of each element of the edge component before applying the limiter.
- the value of the element of the edge component after the nonlinear processing is the output of the first nonlinear function that receives the value of the element of the edge component before the nonlinear processing.
- the edge component generation step may include a step of performing non-linear processing including limiter application on the value of each element of the edge component.
- the value of the element of the edge component after the nonlinear processing is the output of a second nonlinear function that receives the value of the element before the nonlinear processing, and the second nonlinear function is a composite of at least the first nonlinear function and at least a function for limiter application.
- the first nonlinear function may be a function that is defined in the first and third quadrants and passes through the origin.
- the method may further include a step of switching the first nonlinear function between a plurality of nonlinear functions when the first image is sequentially input.
- the edge component generation step may include a two-dimensional high-pass filter application step for the first image.
- the edge component generation step may include a first-direction one-dimensional high-pass filter application step for the first image, a second-direction one-dimensional high-pass filter application step for the first image, and a step of synthesizing the outputs obtained by the first-direction and second-direction one-dimensional high-pass filter application steps.
- the edge component generation step may include one or both of an amplification step of changing the value of each element of the edge component to the value obtained by multiplying it by a predetermined coefficient, and a limiter application step of changing the value of an element to a predetermined threshold when the value is larger (or smaller) than that threshold.
- the edge component generation step may include one or both of an isolated point removal filter application step of replacing the value of an element whose absolute value is larger than the absolute values of the surrounding elements with a value obtained from the values of the surrounding elements, and a spreading filter application step of replacing the value of an element whose absolute value is smaller than the absolute values of the surrounding elements with a value obtained from the values of the surrounding elements.
- the isolated point removal filter application step may include a step of specifying a continuous region of 3 × 3 elements in the edge component, a step of obtaining the number of elements, among the elements excluding the central element of the region, whose absolute value is less than or equal to a threshold given by the product of the maximum value an element can take and a first ratio, and a step of replacing the value of the central element of the region with the median of the values of the elements excluding the central element when the obtained number of elements is equal to or greater than a first number.
- the spreading filter application step may include a step of specifying a continuous region of 3 × 3 elements in the edge component, a step of obtaining the number of elements, among the elements excluding the central element of the region, whose absolute value is greater than or equal to a threshold given by the product of the maximum value an element can take and a second ratio, and a step of replacing the value of the central element of the region with the median of the values of the elements excluding the central element when the obtained number of elements is equal to or greater than a second number.
- the first image is generated from the original image
- the original image includes a plurality of pixels
- the value of each pixel of the first image is derived from the color of the corresponding pixel of the original image.
- the method may further comprise a further image generation step of generating, by further image generation means, the image to be output to the display means by replacing at least a part of the original image with the second image and/or superimposing the second image on the original image.
- the further image generation step can include a step of obtaining the color of a certain pixel in the superimposed image by C out = αC in1 + βC in2 (α is a real number satisfying 0 ≤ α ≤ 1, β is a real number satisfying 0 ≤ β ≤ 1).
- C out is a value that specifies the color of a certain pixel in the superimposed image
- C in1 and C in2 are values that specify the color of the corresponding pixel in the original image and the second image, respectively.
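The superimposition above is ordinary alpha blending and can be sketched as follows; the particular values of α and β are illustrative choices, not prescribed by the text.

```python
import numpy as np

def superimpose(original, second, alpha=0.6, beta=0.4):
    """Superimpose the second image on the original image per
    C_out = alpha*C_in1 + beta*C_in2, with 0 <= alpha, beta <= 1.
    Inputs are float arrays of identical shape (H, W, 3) with
    channel values in 0..255; the result is clipped to that range."""
    out = alpha * original + beta * second
    return np.clip(out, 0, 255)
```

With alpha near 1 and beta small, the original image dominates and the colorized edge component appears as a faint overlay; the reverse emphasizes the edge atmosphere.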
- the further image generation step may further include a step of adjusting the color of the original image.
- According to the present invention, the luminance change between pixels in an image is displayed continuously using colors corresponding to the luminance change. Therefore, even on a small, low-resolution screen, focus evaluation is relatively easy based on the edge atmosphere grasped from the entire image.
- Here, the edge atmosphere means the distribution of luminance change between pixels over the entire image.
- desired enhancement can be performed on the edge component because the nonlinear function generates spatial frequencies that did not exist in the edge component before the nonlinear processing.
- removal of at least a direct current component from a frequency component in the first image can be realized by a one-dimensional high-pass filter or a two-dimensional high-pass filter. Since the operation speed of the one-dimensional high-pass filter is higher, when the one-dimensional high-pass filter is used, the removal of at least the direct current component from the frequency component in the first image can be realized at a higher speed.
- the value of the edge component can be converted into a manageable range by amplifying or limiting the value of the edge component.
- noise can be reduced by smoothing edge components.
- the original image or the second image can be relatively emphasized by adjusting the color of the original image.
- FIG. 1A shows a flowchart of a focus evaluation assisting method according to an embodiment of the present invention.
- An alternative configuration of the edge component generation step 120 in FIG. 1A is shown.
- a first image is shown.
- a two-dimensional high-pass filter is shown.
- Edge components are shown. The effect of nonlinear processing when E³ is used as the nonlinear function f(E) is shown.
- a one-dimensional high-pass filter is shown.
- Non-linear processing is described.
- An example of a nonlinear function is shown. The effect of the nonlinear function shown in FIG. 4A is shown.
- a generalized function for amplification / limiter application is shown.
- An example of a function for amplification/limiter application is shown.
- Another example of a function for amplification/limiter application is shown.
- A function for assigning the colors of the pixels of the second image is shown.
- A figure corresponding to the edge component before execution of the smoothing step 131 in FIG. 1A is shown.
- A figure corresponding to the edge component after execution of the smoothing step 131 in FIG. 1A is shown.
- A graph showing the isolated point 411 and the dropout ("moth-eaten" element) 412 in FIG. 7A is shown.
- the flowchart of the isolated point removal filter application step 132 in FIG. 1A is shown.
- the flowchart of the spreading filter application step 133 in FIG. 1A is shown.
- the region specified in the edge component is shown.
- An original image (first image) is shown.
- a second image is shown.
- FIG. 9B shows an alternative configuration of the edge component generation means 620 in FIG. 9A.
- FIG. 1A is a flowchart of a method for assisting focus evaluation. Note that the configuration shown in this flowchart is merely an example, and some steps are not essential as described below.
- the method starts from step 110 in which a first image is input.
- The first image may be derived from, for example, one frame of a moving image currently being captured by a camera (a moving image consists of a plurality of frames; this includes the moving image displayed on the finder or monitor of a camera used for capturing still images), one frame of a moving image already recorded by a camera, or the like (the source image is hereinafter referred to as the "original image").
- the first image includes a plurality of pixels.
- The value of each pixel of the first image is derived from the color of the corresponding pixel of the original image. In the following description, it is assumed that this value is the luminance value of the pixel of the original image. Note that the value of each pixel of the first image may be any value that can be derived from the color of the corresponding pixel of the original image, such as the R, G, or B value of the pixel's color, or its chroma (saturation) value.
- When the luminance value is used as the pixel value of the first image, the first image can be regarded as an image whose colors are expressed in grayscale.
- The derivation of the value of each pixel of the first image may simply use the color (value) of the corresponding pixel of the original image.
- the method continues with step 120 of generating edge components from the first image.
- the edge component is a component obtained by removing at least a direct current component from a spatial frequency component in the first image.
- the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after removing at least the direct current component from the spatial frequency component in the first image.
- the edge component is represented by an array including the same number of elements as the number of pixels of the first image.
- The method for removing at least the DC component from the spatial frequency components of the first image is arbitrary; in the present embodiment, it is realized by executing step 121 of applying a two-dimensional high-pass filter (HPF) to the first image. Alternatively, it can be realized by two one-dimensional high-pass filter application steps 121a and 121b, as described below with respect to FIG. 1B. At least the direct current component can also be removed from the first image by subtracting a low-pass-filtered copy of the first image from the first image, instead of applying a high-pass filter.
- the two-dimensional high-pass filter will be described with reference to FIGS. 2A to 2C.
- FIG. 2A is an example of the first image.
- one square represents one pixel
- the value I xy in the square represents the value of the pixel, for example the luminance value of the corresponding pixel of the original image. Here, x and y are integers satisfying 0 ≤ x < w and 0 ≤ y < h, where w and h are the numbers of pixels in the horizontal and vertical directions of the first image, respectively.
- FIG. 2B is an example of a two-dimensional high-pass filter.
- Here, a filter having a size of 3 × 3 elements is used, but the size may be as desired.
- The coefficients f ij (i and j are integers satisfying 0 ≤ i < 3 and 0 ≤ j < 3) of the example high-pass filter can take the following values.
- coefficient values are merely examples, and it is needless to say that any coefficient that functions as a high-pass filter can be used.
- FIG. 2C is an example of edge components. This edge component is obtained by applying the two-dimensional high-pass filter illustrated in FIG. 2B to the first image, and the value of each element is calculated by the following equation.
- I xy in the region outside the image (x < 0, y < 0, w ≤ x, or h ≤ y) may be treated as an arbitrary value, for example 0, in the calculation.
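The 3 × 3 two-dimensional high-pass filtering described above can be sketched as follows. Since the patent's coefficient values are not reproduced in this text, a common Laplacian-style high-pass kernel is assumed for illustration, and out-of-image pixels are treated as 0 as the text permits.

```python
import numpy as np

def edge_component_2d(first_image, kernel=None):
    """Apply a 3x3 high-pass filter to a grayscale image (cf. step 121).
    The kernel below is an assumed Laplacian-style high-pass, not the
    patent's own coefficients; out-of-image pixels are taken as 0."""
    if kernel is None:
        kernel = np.array([[-1.0, -1.0, -1.0],
                           [-1.0,  8.0, -1.0],
                           [-1.0, -1.0, -1.0]])
    h, w = first_image.shape
    padded = np.pad(first_image.astype(float), 1, constant_values=0)
    edge = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # E_xy = sum over the 3x3 window of I * f (correlation)
            edge[y, x] = np.sum(padded[y:y+3, x:x+3] * kernel)
    return edge
```

The kernel's coefficients sum to zero, so any constant (direct current) region maps to zero and only local luminance changes survive.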
- The value E xy of an element of the edge component calculated by the above equation can be either positive or negative.
- Here, E1 xy and E2 xy are the values of an element of the edge component before and after the nonlinear processing, respectively, and f(E) is an arbitrary nonlinear function.
- f(E) is, for example, a nonlinear function that (1) is defined in the first and third quadrants and (2) passes through the origin.
- If the possible values of E exceed the domain of the f(E) to be used, f(E) can be calculated with E brought into its domain by multiplying E by a rational number, adding a rational offset to E, or taking the absolute value of E.
- Examples of nonlinear functions include, but are not limited to, E^p, sin^p(E), and log^p(E) (p is a rational number; E^1 is excluded because it is linear), as well as functions obtained by one or more of multiplying these functions by a rational number, adding a rational offset to them, and composing them.
- f (E) can be a function as follows.
- The nonlinear function can also be realized using a table. For example, all values that E xy can take and a predetermined output corresponding to each value are stored in a table, and when the nonlinear processing is performed, the output is obtained by using this table as the function.
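The table-based realization can be sketched as follows, using f(E) = E³ (the cubic function mentioned among the examples) rescaled to keep the output within the input range; the input range of −255..255 and the rescaling are illustrative assumptions.

```python
import numpy as np

def make_cubic_lut(max_abs=255):
    """Precompute a lookup table for the nonlinear function f(E) = E^3,
    rescaled by max_abs^2 so the output stays in -max_abs..max_abs.
    The table covers every integer value E can take (an assumed range),
    so the nonlinear processing step becomes a single array lookup."""
    e = np.arange(-max_abs, max_abs + 1, dtype=float)
    return np.round((e ** 3) / (max_abs ** 2)).astype(int)

def apply_lut(edge, lut, max_abs=255):
    """Nonlinear processing via table lookup: index = E + max_abs."""
    idx = np.clip(edge, -max_abs, max_abs).astype(int) + max_abs
    return lut[idx]
```

Because E³ preserves sign, passes through the origin, and grows faster than linearly, small edge values are suppressed relative to large ones, which sharpens the peaks of the edge component as described for FIG. 2D.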
- FIG. 3 shows the frequency components included in the input image and the output image when the output image is obtained by applying a nonlinear function to the luminance value of each pixel of the input image using the circuit shown.
- HPF is a high-pass filter
- NLF is a nonlinear function
- LMT is a limiter
- ADD is an adder
- MSB is the most significant bit, that is, the sign bit.
- FIG. 2D is a graph showing the effect of nonlinear processing when Equation (4) is used as f(E). The graph includes a curve 211 connecting plots of the element values of the edge component before the nonlinear processing and a curve 212 connecting plots of the element values after the nonlinear processing.
- The horizontal axis of the graph represents consecutive elements of the edge component, and the vertical axis represents the values of the elements.
- Equation (4) is a nonlinear function in which the output / input ratio increases as the input increases.
- the curve 212 after the nonlinear process has a sharper peak 213 than the curve 211 before the nonlinear process.
- This effect is generally brought about when a nonlinear function that becomes a convex function when an absolute value is taken is used in the nonlinear processing described above.
- The function shown in FIG. 4A includes a so-called dead band; as shown in FIG. 4B, it forces a signal whose absolute value is below a certain level to zero, so that noise can be removed or at least reduced.
- Such a function can also be regarded as a combination with a function for applying a limiter described later.
- nonlinear function may be asymmetric with respect to the origin.
- functions having different properties may be combined to form a nonlinear function in the above-described nonlinear processing.
- Such a nonlinear function may be partially linear; some have constant-value sections, some have discontinuities, and others include quadratic or cubic sections.
- The image from a video camera changes every moment, and the appropriate nonlinear function may differ depending on the image.
- Therefore, the nonlinear function used in the nonlinear processing described above can be switched among a plurality of nonlinear functions while images are successively input to and processed by this method. Such switching can be performed by any means.
- the edge component generation step 120 may further include an amplification / limiter application step 123.
- In this step, one or both of the following are performed on the value of each element of the edge component after the nonlinear processing step 122 (or after the two-dimensional high-pass filter application step 121 if the nonlinear processing step 122 is not included): changing the value of the element to the value obtained by multiplying it by a predetermined coefficient (amplification), and changing the value to a predetermined threshold when the value is greater (or less) than that threshold (limiter application).
- the amplification / limiter application step 123 or the application of the limiter among the steps may be realized by a function as generally shown in FIG. 5A.
- the horizontal axis and the vertical axis represent input and output, respectively
- T HN, T LN, T LP, and T HP are thresholds on the input, and C HN, C LN, C LP, and C HP are the output values corresponding to those respective thresholds.
- The absolute values of T LN and T LP, of T HN and T HP, of C LP and C LN, and of C HP and C HN may each be equal or different.
- By this function, the value of an edge component greater than (or not less than) T LN and less than (or not greater than) T LP becomes zero, the value of an edge component less than (or not greater than) T HN becomes C HN, and the value of an edge component greater than (or not less than) T HP becomes C HP.
- Here, the function is depicted on the assumption that the slope of the function in the section from T HN to T LN and in the section from T LP to T HP is 1, but the slope in at least one of these sections need not be 1. By making the slope greater than 1, this function allows limiter application and amplification to be performed simultaneously.
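The amplification/limiter function of the general shape in FIG. 5A can be sketched as below. For simplicity this sketch assumes the function is symmetric about the origin (T LN = −T LP, C HN = −C HP) and that the output at the dead-band edge is zero; the threshold and gain values are illustrative.

```python
def amp_limit(e, t_lp=8.0, t_hp=100.0, c_hp=100.0, gain=1.0):
    """Amplification/limiter function (cf. FIG. 5A), assumed symmetric:
    values inside the dead band (|e| <= t_lp) become 0, values beyond
    the upper threshold are clipped to c_hp, and values in between are
    scaled by `gain` (gain > 1 amplifies and limits at the same time).
    All parameter values are illustrative assumptions."""
    sign = 1.0 if e >= 0 else -1.0
    mag = abs(e)
    if mag <= t_lp:
        return 0.0          # dead band: suppress small values (noise)
    out = gain * (mag - t_lp)
    return sign * min(out, c_hp)  # clip to the limiter ceiling
```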
- the non-linear processing step 122 and the amplification / limiter applying step 123 may be executed as the same step by a function obtained by synthesizing the functions used in the former and the latter.
- instead of the step 121 of applying a two-dimensional high-pass filter, the edge component generation step 120 can include a step 121a of applying a one-dimensional high-pass filter in the direction perpendicular to the line (the vertical direction of the first image) and a step 121b of applying a one-dimensional high-pass filter in the sample direction parallel to the line (the horizontal direction).
- the final edge component E xy can be obtained by step 124 of combining the first edge component E Lxy and the second edge component E Sxy obtained by the two one-dimensional high-pass filters, respectively.
- FIG. 2E is an example of a one-dimensional high-pass filter.
- a filter having a size of 3 elements is used, but the size may be as desired.
- the coefficients of the example high-pass filter can take the following values.
- the value of the coefficient is an example, and an arbitrary coefficient functioning as a high-pass filter can be used. It goes without saying that the coefficient may be different in the direction perpendicular to the line and the sample direction parallel to the line.
- I xy in the region outside the image (x < 0, y < 0, w ≤ x, or h ≤ y) may be treated as an arbitrary value, for example 0.
- one or both of the non-linear processing steps 122a and 122b and the amplification / limiter application steps 123a and 123b may be applied to the first edge component E Lxy and the second edge component E Sxy , respectively.
- these steps are similar to the non-linear processing step 122 and the amplification / limiter application step 123 described above.
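The two one-dimensional high-pass filter steps 121a and 121b and the combining step 124 can be sketched as follows. The 3-element kernel and the additive combination in step 124 are assumptions for illustration (the specification leaves the coefficients and the combining function open), and out-of-image pixels are treated as 0 as suggested above.

```python
import numpy as np

def edge_component_1d(img, k=(-1, 2, -1)):
    """Sketch of steps 121a/121b/124: 1-D high-pass filtering in the
    direction perpendicular to the line and in the sample direction,
    then combining. Kernel k and additive combining are assumptions."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1)              # I_xy = 0 outside the image
    h, w = img.shape
    e_l = np.zeros_like(img)             # first edge component E_Lxy (vertical)
    e_s = np.zeros_like(img)             # second edge component E_Sxy (horizontal)
    for y in range(h):
        for x in range(w):
            e_l[y, x] = sum(c * padded[y + dy, x + 1] for dy, c in enumerate(k))
            e_s[y, x] = sum(c * padded[y + 1, x + dx] for dx, c in enumerate(k))
    return e_l + e_s                     # step 124: simple additive combination
```

A flat region yields zero edge values, while an isolated bright pixel produces a strong response in both directions.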
- in step 130, a second image is generated.
- This step includes step 134 of imaging the edge component by assigning a color according to the value of the element of the edge component.
- FIG. 6 shows three graphs with the edge component value E xy on the horizontal axis and the R, G and B component values J Rxy , J Gxy and J Bxy of the pixels of the second image on the vertical axis, respectively.
- the second image can be generated by assigning the value of the second pixel by the following function.
- J Rxy = E xy + offset
- J Gxy = E xy + offset (11)
- J Bxy = E xy + offset
- offset is an arbitrary offset value.
- the second image is represented by shading from white to black.
- the second image can be generated by any technique that determines the color of each pixel of the second image as a function of the value of each element of the edge component.
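A minimal sketch of the imaging step 134 using equation (11), assuming 8-bit channels, an offset of 128, and clamping to the displayable range (all three are illustrative assumptions not fixed by the specification):

```python
def image_edge_component(edges, offset=128):
    """Sketch of imaging step 134 per equation (11): each edge element is
    mapped to equal R, G, B values (shades from white to black), clamped
    to the 8-bit range. The offset and clamping are assumptions."""
    second_image = []
    for row in edges:
        pixels = []
        for e in row:
            v = max(0, min(255, e + offset))   # J = E_xy + offset, clamped
            pixels.append((v, v, v))           # J_Rxy = J_Gxy = J_Bxy
        second_image.append(pixels)
    return second_image
```

Any other mapping from element value to pixel color can be substituted here, as the text states.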
- the step 130 for generating the second image may include a smoothing step 131 before the imaging step 134.
- the smoothing step 131 includes one or both of an isolated point removing filter applying step 132 and a spreading filter applying step 133.
- the isolated point removal filter application step 132 and the spread filter application step 133 will be described with reference to FIGS. 7A to 7F.
- for purposes of explanation, FIG. 7A shows an image 410, containing an isolated point 411 and a worm-eaten point 412, obtained by imaging the edge component after binarizing the absolute value of each element with an appropriate threshold.
- a white pixel indicates that the absolute value of the corresponding element is greater than or equal to the threshold, and a black pixel indicates that it is less than the threshold.
- the isolated point 411 is an element of the edge component whose absolute value is larger, preferably significantly larger, than the absolute values of the surrounding elements.
- the worm-eaten point 412 is an element whose absolute value is smaller, preferably significantly smaller, than the absolute values of the surrounding elements.
- isolated points 411 and worm-eaten points 412 are also shown in a different representation in FIG. 7C, using a graph whose horizontal axis is consecutive elements of the edge component and whose vertical axis is the element value. Such isolated points 411 and worm-eaten points 412 may occur due to the nature of the first image.
- the application of the isolated point removal filter removes the isolated point 411, and the application of the spread filter removes the worm-eating 412.
- in either case, the purpose is to replace the value of such an element with a value derived from the values of the surrounding elements, such as their median, average, maximum, or minimum.
- An example of an isolated point removal filter and a spread filter will be described below.
- needless to say, any filter that can achieve the above-described purpose, for example a median filter capable of removing outlying values, can be used.
- FIG. 7D is a flowchart of an exemplary isolated point removal filter.
- the filter first executes step 431 for specifying a continuous region 450 (see FIG. 7F) of 3 elements ⁇ 3 elements in the edge component.
- the value of the central element 451 is E XY
- the value of the peripheral element 452 is E xy
- the maximum value that the element can take is E max
- for the peripheral elements 452 excluding the central element 451 of the specified region 450, step 432 is performed to determine the number of elements whose value E xy has an absolute value equal to or less than a threshold, the threshold being the product of E max and the first ratio.
- the first ratio may be 10%.
- step 433 is executed to determine whether or not the obtained number of elements is greater than or equal to the first number.
- the first number may be three. If the determination is true, step 434 is executed to replace the value E XY of the central element 451 with the median value of the values E xy of the surrounding elements 452.
- as a specific example, assume that the value E XY of the central element 451 is 100, that the values E xy of the eight surrounding elements 452 are {33, 10, 15, −20, −5, 5, −42, 12}, and that the maximum value E max is 255.
- the absolute values of E xy are {33, 10, 15, 20, 5, 5, 42, 12}, and the threshold is 255 × 10% = 25.5. Since the values of E xy whose absolute values are equal to or less than the threshold are {10, 15, −20, −5, 5, 12}, the number of such elements is 6. This is greater than or equal to the first number (3), so the value E XY of the central element 451 is replaced with the median of the values E xy of the surrounding elements 452.
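The flowchart of FIG. 7D can be sketched as follows, using the example parameter values from the text (first ratio 10%, first number 3, E max = 255); skipping the image border rather than padding it is an assumption.

```python
import statistics

def isolated_point_filter(edges, first_ratio=0.10, first_number=3, e_max=255):
    """Sketch of the isolated point removal filter (FIG. 7D). For each
    3x3 region, if enough surrounding elements are near zero, the central
    value is replaced by the median of the surrounding values."""
    h, w = len(edges), len(edges[0])
    out = [row[:] for row in edges]
    threshold = e_max * first_ratio                       # step 432 threshold
    for y in range(1, h - 1):
        for x in range(1, w - 1):                         # step 431: 3x3 region
            surround = [edges[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0)]
            small = sum(1 for v in surround if abs(v) <= threshold)
            if small >= first_number:                     # step 433
                out[y][x] = statistics.median(surround)   # step 434
    return out
```

With the worked example from the text (central value 100, surround {33, 10, 15, −20, −5, 5, −42, 12}), six surrounding elements fall below the threshold 25.5, so the central value is replaced by the median of the surround.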
- FIG. 7E is a flowchart of an exemplary spreading filter.
- the filter first executes step 441 of specifying a continuous region 450 (see FIG. 7F) of 3 elements ⁇ 3 elements in the edge component.
- the value of the central element 451 is E XY
- the value of the peripheral element 452 is E xy
- the maximum value that the element can take is E max
- for the peripheral elements 452 excluding the central element 451 of the specified region 450, step 442 is performed in which the product of E max and the second ratio is used as a threshold to determine the number of elements whose value E xy has an absolute value equal to or greater than the threshold.
- the second ratio may be 10%.
- step 443 is performed for determining whether the number of elements obtained is equal to or greater than the second number.
- the second number may be five. If the determination is true, step 444 is executed in which the value E XY of the central element 451 is replaced with the median value of the values E xy of the surrounding elements 452. Since a specific example for the spreading filter is similar to that for the isolated point removal filter, a description thereof will be omitted.
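The spreading filter of FIG. 7E is the mirror image of the isolated point removal filter and can be sketched analogously (second ratio 10% and second number 5 as in the text; skipping the image border is again an assumption):

```python
import statistics

def spreading_filter(edges, second_ratio=0.10, second_number=5, e_max=255):
    """Sketch of the spreading filter (FIG. 7E): if enough surrounding
    elements are large in absolute value, the central value is replaced
    by their median, filling in worm-eaten voids."""
    h, w = len(edges), len(edges[0])
    out = [row[:] for row in edges]
    threshold = e_max * second_ratio                      # step 442 threshold
    for y in range(1, h - 1):
        for x in range(1, w - 1):                         # step 441: 3x3 region
            surround = [edges[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0)]
            large = sum(1 for v in surround if abs(v) >= threshold)
            if large >= second_number:                    # step 443
                out[y][x] = statistics.median(surround)   # step 444
    return out
```

Only the direction of the comparison (≥ threshold instead of ≤ threshold) and the parameter names differ from the isolated point removal filter.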
- when the smoothing step 131 includes both the isolated point removal filter application step 132 and the spread filter application step 133, either step may be performed first, but empirically a better result seems to be obtained when the isolated point removal filter application step 132 is executed first. Note that the first ratio and the second ratio described above may be different, and the first number and the second number may be the same.
- the method can include a display image generation step 140.
- the display image generation step 140 includes an image composition step 142.
- FIGS. 8A and 8B show, for comparison with the combined display image, an original image (substantially equivalent to the first image because it is expressed in gray scale) and a second image, respectively.
- FIG. 8C shows a display image generated by combining the original image and the second image so as to overlap each other.
- FIGS. 8G and 8H show an original image and another example of display images in which the original image and the second image are superimposed, respectively.
- the lower image in FIG. 8H is the display image obtained for the in-focus original image shown in FIG. 8G.
- the other display image in FIG. 8H is obtained from an out-of-focus original image (not shown).
- the color of a certain pixel in the superimposed image can be obtained by calculating a value identifying that color (for example, the R, G and B values of the color, or a single value such as a 24-bit or 32-bit color value in which the R, G and B values are packed together) by the following equation.
- C out = αC in1 + βC in2 (α is a real number satisfying 0 ≤ α ≤ 1, β is a real number satisfying 0 ≤ β ≤ 1) (12)
- C out is the value specifying the color of a certain pixel in the superimposed image
- C in1 and C in2 are the values specifying the color of the corresponding pixel in the original image and in the second image, respectively.
- here, the value specifying the color of each pixel in the original image is assumed to be positive, whereas the value specifying the color of each pixel in the second image may take both positive and negative values.
- where the value specifying the color of the corresponding pixel in the second image is negative, the value of the pixel after superimposition is reduced by that amount, so that the pixels representing edges can be confirmed in the superimposed image.
- the equation (12) represents a general alpha blending equation.
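Equation (12) can be sketched per pixel as follows, assuming a single value per channel and clamping to an 8-bit range (the clamping is an assumption, not part of the equation):

```python
def superimpose(original, second, alpha=0.7, beta=0.3):
    """Sketch of image composition step 142 via equation (12):
    C_out = alpha * C_in1 + beta * C_in2, applied per pixel. The alpha
    and beta values and the clamping are illustrative assumptions.
    Negative second-image values darken the corresponding output pixel."""
    out = []
    for row_o, row_s in zip(original, second):
        out_row = []
        for c1, c2 in zip(row_o, row_s):
            v = alpha * c1 + beta * c2
            out_row.append(max(0, min(255, round(v))))  # clamp to 8-bit range
        out.append(out_row)
    return out
```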
- the display image can also be generated by combining so that a part of the original image is replaced with the second image.
- the position and size of the portion to be replaced can be arbitrarily designated by the user. It is also possible to combine images so that partial replacement of images and superposition are performed simultaneously.
- FIG. 8D shows a display image 510 generated by performing partial replacement and superposition of images at the same time and superimposing a part 511 of the original image and the second image.
- FIG. 8E shows a display image generated by combining the original image shown in FIG. 8A with a second image so as to overlap each other.
- the second image used in FIG. 8E is formed from the original image shown in FIG. 8A using a function different from the function that generated the second image shown in FIG. 8B.
- the function used for generating the second image in FIG. 8E adds a further offset to each of the R, G and B values relative to the function used for generating the second image used in FIG. 8C, so that a brighter second image can be generated.
- FIG. 8F corresponds to FIG. 8E in the case where the partial replacement of the image is further performed.
- the display image generation step 140 may include a step 141 for adjusting the color of the original image.
- the color adjustment of the original image can be performed before the composition. For example, by reducing the contrast of the original image by the color adjustment, a display image in which pixels representing edges are more conspicuous may be generated.
- in step 150, an image based on the second image is output to a display means, such as a camera finder or a monitor.
- the image based on the second image is a display image generated when the present method includes the display image generation step 140, and is the second image itself when not including the display image generation step 140.
- focus is generally evaluated while moving the focus ring of a camera or the like during shooting.
- the camera focus changes and the original image also changes. Therefore, since the input first image changes, the second image used for display must be updated. Accordingly, the steps involved in the method are repeated in response to a new input of the first image.
- FIG. 9A is a block diagram of an apparatus for assisting focus evaluation. Note that the configuration shown in this block diagram is merely an example, and some means described below are not essential.
- the graphs shown in relation to the first image, the edge component, the second image, the original image before color adjustment, and the original image after color adjustment illustrate, for explanation, what kind of output each means produces.
- for the first image, the second image, and the original images before and after color adjustment, the horizontal and vertical axes represent consecutive pixels and the luminance values obtained from those pixels; for the edge component, they represent consecutive elements and the element values.
- This apparatus includes first image generation means 610.
- This means is a means for generating the first image from the original image.
- This apparatus is provided with edge component generation means 620.
- This means executes the edge component generation step 120. Therefore, the edge generation unit 620 is only required to remove at least a direct current component from the spatial frequency component included in the first image.
- this can be realized by providing a two-dimensional high-pass filter means 621 that executes the two-dimensional high-pass filter application step. Alternatively, it can be realized by providing two one-dimensional high-pass filter means 621a and 621b, as will be described later with reference to FIG. 9B. It can also be realized by providing means for subtracting, from the first image, the result of applying a low-pass filter to the first image.
- the edge component generation unit 620 may further include one or both of a nonlinear processing unit 622 and an amplification / limiter unit 623 that execute the nonlinear processing step 122 and the amplification / limiter application step 123, respectively.
- the edge generation unit 620 includes a filter buffer unit 624.
- the filter buffer means 624 is a means for temporarily storing at least the data used at one time by the two-dimensional high-pass filter means 621, for example data representing 3 pixels × 3 pixels of the first image (corresponding to the size of the filter in the two-dimensional high-pass filter means 621), and outputting that data to the two-dimensional high-pass filter means 621. If the number of pixels in the horizontal direction of the first image is w, the filter buffer means 624 may, for processing efficiency, temporarily store at one time data corresponding to w consecutive pixels × 3 pixels of the first image (3 pixels corresponding to the vertical size of the filter in the two-dimensional high-pass filter means 621).
- the edge component generation means 620 can instead include a one-dimensional high-pass filter means 621a for the direction perpendicular to the line and a one-dimensional high-pass filter means 621b for the sample direction parallel to the line.
- the one-dimensional high-pass filter means 621a and 621b execute the one-dimensional high-pass filter application steps 121a and 121b, respectively.
- the edge component generation means 620 can include, for the direction perpendicular to the line and for the sample direction parallel to the line, at least one of nonlinear processing means 622a and 622b and amplification / limiter application means 623a and 623b.
- the edge generation means 620 includes a line buffer means 624a, a sample buffer means 624b, and an edge synthesis means 626.
- the line buffer means 624a temporarily stores data of the first image corresponding to the size of the filter in the one-dimensional high-pass filter means 621a; for example, when the number of pixels in the horizontal direction of the first image is w and the size of the filter is 3, it stores data representing w × 3 consecutive pixels.
- the line buffer means 624a sequentially outputs data representing consecutive pixels in the direction perpendicular to the line to the one-dimensional high-pass filter means 621a, and outputs the data used at one time by the one-dimensional high-pass filter means 621b to the sample buffer means 624b.
- the sample buffer unit 624b temporarily stores data used at one time by the one-dimensional high-pass filter unit 621b and outputs the data to the one-dimensional high-pass filter unit 621b.
- the edge component synthesis means 626 temporarily stores the first edge component E Lxy and the second edge component E Sxy and executes the edge component synthesis step 124.
- the apparatus includes second image generation means 630 that executes the second image generation step 130.
- the second image generating unit 630 includes an imaging unit 634 that executes the imaging step 134.
- the second image generation unit 630 may further include a smoothing unit 631 that executes the smoothing step 131.
- the smoothing means 631 includes one or both of an isolated point removal filter means 632 and a spread filter means 633 that execute the isolated point removal filter application step 132 and the spread filter application step 133, respectively.
- the present apparatus can include display image generation means 640 for executing the display image generation step 140.
- the display image generation means 640 includes image composition means 642 that executes the image composition step 142.
- the display image generating unit 640 may further include a color adjusting unit 641 that executes Step 141 for adjusting the color of the original image.
- the original image and the second image are input to the display image generation means 640. When the input timings of the original image and the second image differ significantly, the display image generation means 640 may therefore include a delay adjusting means 643 that delays the timing at which the original image is passed to the image composition means 642 (and to the color adjustment means 641, if present).
- This apparatus is configured to output an image based on the second image to a display means (not shown).
- the image based on the second image is a display image generated when the present apparatus includes the display image generating means 640, and is the second image itself when not including the display image generating means 640.
- the device for assisting the focus evaluation has been described.
- the input first image changes, so the second image used for display must be updated.
- the apparatus is configured to output at least an image based on the newly generated second image to the display means in response to a new input of the first image to the edge component generation means 620.
- the present invention can also be implemented as a program for assisting focus evaluation.
- a program for assisting focus evaluation causes a computer to function as an apparatus for evaluating the focus described above.
- the “computer” is a system provided with one or more pieces of hardware, namely arithmetic / control devices, storage devices, input devices, and output devices.
- the arithmetic / control device includes a CPU and an MPU.
- the storage device includes a memory, a hard disk, an SSD, and the like.
- the input device includes a chip pin, a mouse, a keyboard, a touch panel, a network interface, and the like.
- Output devices include chip pins, network interfaces, displays, printers, speakers, and the like.
- Two or more of the arithmetic / control device, the storage device, the input device, and the output device can be physically integrated by using an FPGA or a microcomputer. It will be clear to those skilled in the art that the present invention can be implemented by the cooperation of one or more of these hardware resources (arithmetic / control devices, storage devices, input devices, and output devices) and a program, which is software.
Abstract
The purpose of the present invention is to provide a focus evaluation assistance device that facilitates focus evaluation even when verifying a captured image on a small, low-resolution monitor. The focus evaluation assistance device is provided with: an edge component generation means that generates an edge component from a first image, where the first image comprises a plurality of pixels, the edge component comprises a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after at least a direct-current component is removed from the spatial frequency components of the first image; and an image generation means that generates a second image from the edge component, where the second image comprises a plurality of pixels and the image generation means comprises a means that determines the color of each pixel of the second image as a function of the value of each element of the edge component. In addition, the device is configured to output to a display means an image based on a second image that is newly generated in response to a new input of a first image into the edge component generation means. The edge component generation means may be provided with a means that performs non-linear processing on the value of each element of the edge component. The value of an element of the edge component after non-linear processing is the output of a non-linear function that takes the value of the element before non-linear processing as input.
Description
The present invention relates to a technique for assisting in confirming whether a human is in focus (hereinafter referred to as “focus evaluation”) when shooting a moving image or a still image with a camera or the like.
Conventionally, focus evaluation when a moving image or a still image is taken with a camera or the like has been generally evaluated by focusing on pixels that represent the edge of an image (for example, pixels that represent the contour of an object). In order to assist such focus evaluation, there is a technique for enhancing a pixel representing an edge using an edge enhancement process.
For example, Japanese Patent Laid-Open No. 2010-114556 normalizes luminance values constituting image data, and performs a filter operation using HPF on the normalized image data to obtain a filter operation value for each pixel. A technique for coloring a pixel having a filter calculation value larger than an edge threshold with a specific color is described (see paragraphs [0080] to [0092] and FIGS. 6 to 11 of the document). Here, the pixel representing the edge generally has a large filter calculation value after the filter calculation using HPF.
In addition, irrespective of focus evaluation, there is a technique for enhancing an output image by using nonlinear processing to generate frequency components that are not included in the input image signal (see Japanese Patent Nos. 5320538 and 5396626).
Further, Japanese Patent Application Laid-Open No. 2010-16783 describes a technique in which a predetermined limit value is provided for the edge enhancement level, and when the edge amount increases due to gain multiplication and the edge enhancement level reaches the limit value, the edge enhancement level is clipped to a predetermined value and saturated.
In the edge enhancement process, pixels representing edges are emphasized. Therefore, for an image with few edges and few pixels whose luminance changes rapidly relative to neighboring pixels, that is, an image with few high-frequency components among its spatial frequency components, edge enhancement processing does not sufficiently assist focus evaluation.
In order to deal with such a problem, it is conceivable to increase the HPF gain or widen the HPF band when performing edge enhancement processing on an image with few high-frequency components. However, simply increasing the gain also increases the noise of the image. Also, when the band is widened, it becomes rather difficult to discriminate between pixels that are edges and pixels that are not. This is because, for example, according to the technique described in Japanese Patent Application Laid-Open No. 2010-114556, the difference between the filter operation value of a pixel that is a high-frequency component and that of a pixel that is not becomes small, making it difficult to set an appropriate edge threshold.
In addition to the case where the image does not contain sufficient high-frequency components, edge enhancement processing may also fail to provide sufficient assistance for focus evaluation when the image is displayed at reduced size depending on the shooting environment. For example, according to the technique described in Japanese Patent Application Laid-Open No. 2010-114556, a single pixel to which a specific color is assigned may be lost by the reduction and no longer displayed. In recent years, images are sometimes captured at 4K (4096 pixels × 2160 pixels), and when such an image is checked on the small, low-resolution monitor mounted on a camera, the image must be displayed at reduced size; in particular, the conventional edge enhancement processing then does not provide sufficient assistance for focus evaluation. For this reason, when shooting video in 4K, the actual practice is to bring in a large 4K-compatible monitor to evaluate the focus of the captured images.
The applicant noticed that, although pixels representing edges may be one index for evaluating focus, focus is not evaluated by them alone. Pixels that do not represent edges also contain information useful for focus evaluation. For example, the overall impression of the edges across the screen can be confirmed from pixels that are not edge pixels. However, conventional edge enhancement processing has not utilized this information in assisting focus evaluation.
The present invention has been made in view of the above problems.
One embodiment of the present invention is a focus evaluation assisting device. The device comprises: an edge component generation means for generating an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after at least a direct-current component has been removed from the spatial frequency components of the first image; and an image generation means for generating a second image from the edge component, wherein the second image includes a plurality of pixels and the image generation means includes a means for determining the color of each pixel of the second image as a function of the value of each element of the edge component. The device is configured to output an image based on the newly generated second image to a display means in response to a new input of the first image to the edge component generation means.
The edge component generation means may include means for performing nonlinear processing on the value of each element of the edge component before applying the limiter. The value of the element of the edge component after the nonlinear processing is the output of the first nonlinear function that receives the value of the element of the edge component before the nonlinear processing.
Note that the edge component generation means may include means for performing nonlinear processing including limiter application on the value of each element of the edge component. The value of an element of the edge component after the nonlinear processing is the output of a second nonlinear function that takes the value of the element before the nonlinear processing as input, the second nonlinear function being a composite function of the first nonlinear function and at least a function for limiter application.
The first nonlinear function may be a function defined in the first and third quadrants and passing through the origin.
The apparatus may further include means for switching the first nonlinear function between a plurality of nonlinear functions when the first image is successively input.
The edge component generation means may include means for applying a two-dimensional high-pass filter to the first image to remove at least a direct-current component from the spatial frequency components of the first image. Alternatively, the edge component generation means may include a first-direction one-dimensional high-pass filter means for the first image, a second-direction one-dimensional high-pass filter means for the first image, and a means for synthesizing the output of the first-direction one-dimensional high-pass filter means with the output of the second-direction one-dimensional high-pass filter means.
The edge component generation means may include one or both of amplification means for changing the value of each element of the edge component to that value multiplied by a predetermined coefficient, and limiter application means for changing the value of an element to a predetermined threshold when the value is greater than or less than that threshold.
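As a minimal sketch of the amplification means and limiter application means just described (function names and the symmetric threshold are illustrative choices, not taken from the patent), each element can be multiplied by a coefficient and then clamped:

```python
def amplify(value: float, coefficient: float) -> float:
    """Amplification means: multiply the element value by a predetermined coefficient."""
    return value * coefficient

def apply_limiter(value: float, threshold: float) -> float:
    """Limiter application means: values beyond +/-threshold are changed to the threshold.

    A symmetric threshold is assumed here for simplicity; the patent allows
    separate upper and lower thresholds.
    """
    if value > threshold:
        return threshold
    if value < -threshold:
        return -threshold
    return value

# Element-wise processing of an edge component (a flat list here for brevity).
edge = [-300.0, -10.0, 0.0, 40.0, 500.0]
processed = [apply_limiter(amplify(e, 2.0), 255.0) for e in edge]
print(processed)  # [-255.0, -20.0, 0.0, 80.0, 255.0]
```

Both operations are element-wise, so either can be used alone, as the text permits.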
The edge component generation means may include one or both of isolated-point removal filter means, which replaces the value of an element of the edge component whose absolute value is larger than the absolute values of the surrounding elements with a value derived from the values of those surrounding elements, and spreading filter means, which replaces the value of an element whose absolute value is smaller than the absolute values of the surrounding elements with a value derived from the values of those surrounding elements. Here, the isolated-point removal filter means can be configured to identify a contiguous 3-element-by-3-element region of the edge component, count, among the elements other than the central element of the region, the number of elements whose absolute value is less than or equal to a threshold given by the product of the maximum value an element can take and a first ratio, and, when that count is greater than or equal to a first number, replace the value of the central element of the region with the median of the values of the elements other than the central element. The spreading filter means can be configured to identify a contiguous 3-element-by-3-element region of the edge component, count, among the elements other than the central element of the region, the number of elements whose absolute value is greater than or equal to a threshold given by the product of the maximum value an element can take and a second ratio, and, when that count is greater than or equal to a second number, replace the value of the central element of the region with the median of the values of the elements other than the central element.
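The isolated-point removal rule above can be sketched as follows. This is a hedged illustration: parameter names (`max_value`, `first_ratio`, `first_number`) are invented for readability, and border handling is ignored by only examining interior elements.

```python
from statistics import median

def isolated_point_filter(edge, x, y, max_value, first_ratio, first_number):
    """Sketch of the isolated-point removal filter described above.

    Examines the 3x3 region centred on interior element (x, y); if at least
    `first_number` of the 8 surrounding elements have an absolute value at or
    below max_value * first_ratio, the centre is replaced by the median of
    the surrounding values.
    """
    threshold = max_value * first_ratio
    neighbours = [edge[y + dy][x + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    small = sum(1 for v in neighbours if abs(v) <= threshold)
    if small >= first_number:
        return median(neighbours)
    return edge[y][x]

# A lone large value surrounded by near-zero noise is pulled down.
edge = [[0, 1, 0],
        [2, 200, 1],
        [0, 1, 2]]
print(isolated_point_filter(edge, 1, 1, max_value=255,
                            first_ratio=0.05, first_number=6))  # → 1.0
```

The spreading filter is the mirror image: it counts neighbours whose absolute value is *at or above* the threshold and otherwise performs the same median replacement.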
In the apparatus, the first image is generated from an original image, the original image includes a plurality of pixels, and the value of each pixel of the first image is derived from the color of the corresponding pixel of the original image. The apparatus may further include further image generation means for generating the image to be output to the display means by performing one or both of replacing at least a part of the original image with the second image and superimposing the original image and the second image. When superimposing the original image and the second image, the image generation means can be configured to obtain the color of a given pixel of the superimposed image by

Cout = αCin1 + βCin2

(where α and β are real numbers satisfying 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1). Here, Cout is a value specifying the color of the given pixel of the superimposed image, and Cin1 and Cin2 are values specifying the colors of the corresponding pixels of the original image and the second image, respectively. The further image generation means may also include means for adjusting the color of the original image.
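The superposition formula Cout = αCin1 + βCin2 can be sketched per color channel (the RGB tuple representation and the range check are illustrative choices, not mandated by the text):

```python
def superimpose_pixel(c_in1, c_in2, alpha, beta):
    """Compute Cout = alpha*Cin1 + beta*Cin2 for one pixel, channel by channel.

    c_in1 / c_in2 are (R, G, B) tuples for the original image and the second
    image; alpha and beta are real numbers in [0, 1], as required above.
    """
    if not (0.0 <= alpha <= 1.0 and 0.0 <= beta <= 1.0):
        raise ValueError("alpha and beta must lie in [0, 1]")
    return tuple(alpha * a + beta * b for a, b in zip(c_in1, c_in2))

# Equal-weight superposition of an original pixel and an edge-image pixel.
print(superimpose_pixel((200, 100, 50), (0, 255, 0), alpha=0.5, beta=0.5))
# → (100.0, 177.5, 25.0)
```

Choosing α + β = 1 keeps the result within the original value range; the text does not require this, so unequal weights can be used to emphasize one image over the other.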
Another embodiment of the present invention is a program that causes a computer to function as the focus evaluation assisting apparatus.
Yet another embodiment of the present invention is a focus evaluation assisting method. The method includes: an edge component generation step of generating, by edge component generation means, an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after at least the direct-current component has been removed from the spatial frequency components of the first image; an image generation step of generating, by image generation means, a second image from the edge component, wherein the second image includes a plurality of pixels and the step includes determining the color of each pixel of the second image as a function of the value of each element of the edge component; and an image output step of outputting, by image output means, an image based on the second image to display means. In this method, the edge component generation step, the image generation step, and the image output step are repeated in response to each new input of a first image.
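The three steps and their per-frame repetition can be sketched as below. The step bodies are deliberately minimal stand-ins (mean subtraction as a crude DC removal, grayscale mapping as the color function); they illustrate the control flow only, not the patent's actual filters.

```python
def generate_edge_component(first_image):
    """Edge component generation step (placeholder: subtract the mean so the
    zero-frequency, i.e. DC, term vanishes)."""
    mean = sum(first_image) / len(first_image)
    return [v - mean for v in first_image]

def generate_second_image(edge_component):
    """Image generation step: map each element value to a color (here grey,
    as a function of the element value)."""
    return [(abs(e), abs(e), abs(e)) for e in edge_component]

def output_image(second_image):
    """Image output step: hand the image to the display means (stub)."""
    return second_image

def process_frames(frames):
    """Repeat the three steps in response to each newly input first image."""
    for first_image in frames:  # frames arrive one after another
        edge = generate_edge_component(first_image)
        second = generate_second_image(edge)
        yield output_image(second)

# Two 1-D "frames" for brevity; real first images are 2-D pixel arrays.
for out in process_frames([[10, 10, 40], [5, 5, 5]]):
    print(out)
```

The generator structure mirrors the claim: each new first image triggers one pass through all three steps.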
The edge component generation step may include a step of performing nonlinear processing on the value of each element of the edge component before limiter application. The value of each element of the edge component after this nonlinear processing is the output of a first nonlinear function whose input is the value of the element before the nonlinear processing.
The edge component generation step may also include a step of performing nonlinear processing, including limiter application, on the value of each element of the edge component. The value of each element of the edge component after this nonlinear processing is the output of a second nonlinear function whose input is the value of the element before the nonlinear processing, and the second nonlinear function is a composite of the first nonlinear function and at least a function for limiter application.
As described above, the first nonlinear function may be a function that is defined in the first and third quadrants and passes through the origin.
The method may further include a step of switching the first nonlinear function among a plurality of nonlinear functions as first images are successively input.
The edge component generation step may include a step of applying a two-dimensional high-pass filter to the first image. Alternatively, the edge component generation step may include a step of applying a first-direction one-dimensional high-pass filter to the first image, a step of applying a second-direction one-dimensional high-pass filter to the first image, and a step of combining the outputs obtained by the first-direction and second-direction one-dimensional high-pass filter application steps.
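The two-filter alternative can be sketched as below. The text does not fix the synthesis method, so summing the absolute responses of the horizontal and vertical passes is an assumed common choice, and the first-difference kernel is likewise only illustrative.

```python
def hpf_1d(row):
    """Simple 1-D high-pass filter: first difference (illustrative kernel [-1, 1])."""
    return [row[i] - row[i - 1] if i > 0 else 0 for i in range(len(row))]

def edge_component_separable(image):
    """Apply a horizontal and a vertical 1-D HPF, then combine the outputs.

    Summing the absolute responses, as done here, is one common synthesis
    choice (an assumption, not the patent's prescription).
    """
    h, w = len(image), len(image[0])
    horiz = [hpf_1d(row) for row in image]                       # first direction
    cols = [hpf_1d([image[y][x] for y in range(h)]) for x in range(w)]  # second direction
    return [[abs(horiz[y][x]) + abs(cols[x][y]) for x in range(w)]
            for y in range(h)]

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(edge_component_separable(flat))  # flat region → all zeros
```

Running two 1-D passes touches each pixel 2k times for a length-k kernel instead of k² times for a full 2-D kernel, which is the speed advantage noted later in the text.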
The edge component generation step may include one or both of an amplification step of changing the value of each element of the edge component to that value multiplied by a predetermined coefficient, and a limiter application step of changing the value of an element to a predetermined threshold when the value is greater than or less than that threshold.
The edge component generation step may include one or both of an isolated-point removal filter application step, which replaces the value of an element of the edge component whose absolute value is larger than the absolute values of the surrounding elements with a value derived from the values of those surrounding elements, and a spreading filter application step, which replaces the value of an element whose absolute value is smaller than the absolute values of the surrounding elements with a value derived from the values of those surrounding elements. Here, the isolated-point removal filter application step may include: a step of identifying a contiguous 3-element-by-3-element region of the edge component; a step of counting, among the elements other than the central element of the region, the number of elements whose absolute value is less than or equal to a threshold given by the product of the maximum value an element can take and a first ratio; and a step of replacing, when that count is greater than or equal to a first number, the value of the central element of the region with the median of the values of the elements other than the central element. Likewise, the spreading filter application step may include: a step of identifying a contiguous 3-element-by-3-element region of the edge component; a step of counting, among the elements other than the central element of the region, the number of elements whose absolute value is greater than or equal to a threshold given by the product of the maximum value an element can take and a second ratio; and a step of replacing, when that count is greater than or equal to a second number, the value of the central element of the region with the median of the values of the elements other than the central element.
In the method, the first image is generated from an original image, the original image includes a plurality of pixels, and the value of each pixel of the first image is derived from the color of the corresponding pixel of the original image. The method may further include a further image generation step of generating, by further image generation means, the image to be output to the display means by performing one or both of replacing at least a part of the original image with the second image and superimposing the original image and the second image. When superimposing the original image and the second image, the further image generation step can include a step of obtaining, by the further image generation means, the color of a given pixel of the superimposed image by

Cout = αCin1 + βCin2

(where α and β are real numbers satisfying 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1). Here, Cout is a value specifying the color of the given pixel of the superimposed image, and Cin1 and Cin2 are values specifying the colors of the corresponding pixels of the original image and the second image, respectively. The further image generation step may also include a step of adjusting the color of the original image.
According to the present invention, luminance transitions between pixels in an image are displayed in a continuous manner using colors corresponding to those transitions. Therefore, even on a small, low-resolution screen, focus evaluation becomes relatively easy based on the overall impression of the edges grasped from the image as a whole. Here, the overall impression of the edges means, for example, the distribution of luminance transitions between pixels across the entire image.
Further, according to one embodiment, desired enhancement can be applied to the edge component by using a nonlinear function to generate spatial frequencies that are not present in the edge component before the nonlinear processing.
According to one embodiment, removal of at least the direct-current component from the frequency components of the first image can be realized with either one-dimensional high-pass filters or a two-dimensional high-pass filter. Since a one-dimensional high-pass filter operates faster, using one-dimensional high-pass filters allows this removal to be realized at higher speed.
According to one embodiment, the values of the edge component can be converted into an easily handled range by amplifying or limiting them.
According to one embodiment, noise can be reduced by smoothing the edge component.
Furthermore, according to one embodiment, by displaying the second image in association with the original image, features of the original image that cannot be recognized from the second image alone can be grasped at the same time. In doing so, adjusting the color of the original image makes it possible to relatively emphasize either the original image or the second image.
First, a method for assisting focus evaluation, which is an embodiment of the present invention, will be described.
FIG. 1A is a flowchart of the method for assisting focus evaluation. The configuration shown in this flowchart is merely an example, and, as described below, some of the steps are not essential.
The method begins with step 110 of inputting a first image. The first image may be derived from, for example, one frame of a moving image currently being captured by a camera (such a moving image consists of a plurality of frames, and includes the moving image displayed on a camera viewfinder or monitor when capturing still images), one frame of a moving image already recorded by a camera, or the like (hereinafter referred to as the "original image").
The first image includes a plurality of pixels. The value of each pixel of the first image is derived from the color of the corresponding pixel of the original image; in the following description, it is assumed to be the luminance value of that pixel. Note that the value of each pixel of the first image may be any value derivable from the color of the corresponding pixel of the original image, such as the R, G, or B value or the chroma (saturation) value of that color. When luminance values are used as the pixel values of the first image, the first image can be regarded as an image whose colors are expressed in grayscale; accordingly, when the colors of the original image are themselves expressed in grayscale, deriving the values of the first image may consist of simply using the colors (values) of the pixels of the original image.
Next, the method continues with step 120 of generating an edge component from the first image. Here, the edge component is, at minimum, the result of removing at least the direct-current component from the spatial frequency components of the first image. The edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of each pixel of the first image after at least the direct-current component has been removed from its spatial frequency components. In the present embodiment, the edge component is represented by an array containing the same number of elements as the number of pixels of the first image.
Any method may be used to remove at least the direct-current component from the spatial frequency components of the first image; in the present embodiment, this is realized by executing step 121 of applying a two-dimensional high-pass filter (HPF) to the first image. Alternatively, as described later with reference to FIG. 1B, it can be realized by two one-dimensional high-pass filter application steps 121a and 121b. Instead of applying a high-pass filter, at least the direct-current component can also be removed from the first image by subtracting from the first image the result of applying a low-pass filter to it. The two-dimensional high-pass filter is described below with reference to FIGS. 2A to 2C.
FIG. 2A is an example of the first image. Here, each square represents one pixel, and the value Ixy in a square represents the value of that pixel, which in the following description is the luminance value of the corresponding pixel of the original image. Here, x and y are integers satisfying 0 ≤ x < w and 0 ≤ y < h, respectively, where w and h are the numbers of pixels of the first image in the horizontal and vertical directions, respectively.
FIG. 2B is an example of a two-dimensional high-pass filter. In this example a filter of size 3 elements × 3 elements is used, but the size may be chosen as desired. The coefficients fij (where i and j are integers satisfying 0 ≤ i < 3 and 0 ≤ j < 3) of the example high-pass filter can take the following values.
The above coefficient values are merely examples; needless to say, any coefficients that function as a high-pass filter can be used.
FIG. 2C is an example of the edge component. This edge component is obtained by applying the two-dimensional high-pass filter illustrated in FIG. 2B to the first image, and the value of each element is calculated by the following equation.
When using the above equation, Ixy for regions outside the image (x < 0, y < 0, w ≤ x, or h ≤ y) may be treated as an arbitrary value, for example 0. Note that the element values Exy of the edge component calculated by the above equation can be both positive and negative.
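The equation for Exy is reproduced in the original only as a figure. The following sketch is consistent with the surrounding description: a 3×3 kernel fij convolved over Ixy, with out-of-image samples treated as 0 as permitted above. Since FIG. 2B's actual coefficients are not reproduced either, a common Laplacian-style high-pass kernel is used as an assumed stand-in.

```python
def apply_2d_hpf(image, kernel):
    """Compute E_xy = sum over i,j of f_ij * I_(x+i-1),(y+j-1), treating
    out-of-image samples as 0 (an arbitrary choice the text allows)."""
    h, w = len(image), len(image[0])

    def sample(x, y):
        return image[y][x] if 0 <= x < w and 0 <= y < h else 0

    edge = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            edge[y][x] = sum(kernel[j][i] * sample(x + i - 1, y + j - 1)
                             for j in range(3) for i in range(3))
    return edge

# Assumed stand-in for FIG. 2B: coefficients sum to zero, so the DC
# (zero-frequency) component is removed, as the text requires.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]
flat = [[7] * 4 for _ in range(4)]
edge = apply_2d_hpf(flat, kernel)
print(edge[1][1], edge[2][2])  # interior of a flat image → 0 0
```

Note that the resulting Exy values can indeed be both positive and negative, e.g. on either side of a luminance step.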
The edge component generation step 120 may further include a nonlinear processing step 122. Note that the nonlinear processing performed in this step is not the mere clipping found in the prior art, and that it is performed before the limiter application described later. In this processing, the following calculation is performed:

E2xy = f(E1xy)   (3)
Here, E1xy and E2xy are the values of an element of the edge component before and after the nonlinear processing, respectively, and f(E) is an arbitrary nonlinear function, preferably a nonlinear function subject to the following conditions:
1. It is defined in the first and third quadrants.
2. It passes through the origin.
If the values that E can take exceed the domain of the f(E) being used, f(E) can be computed with E replaced by a value within the domain of f(E), obtained, for example, by multiplying E by a rational number, adding a rational offset to E, or taking the absolute value of E.
Examples of the nonlinear function include, without limitation, the nonlinear functions E^p, sin^p(E), and log_p(E) (where p is a rational number; the linear case E^1 is excluded), as well as functions obtained by one or more of multiplying these functions by a rational number, adding a rational offset to them, and combining them. Specifically, f(E) can be a function such as the following.
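Equations (4) to (7) appear in the original only as figures. The sketch below gives plausible instances of the named families, made odd (sign-preserving) so that each is defined in the first and third quadrants and passes through the origin; these are assumed stand-ins, not the patent's exact equations.

```python
import math

# Assumed stand-ins for the families E**p, sin**p(E), log_p(E),
# extended to negative inputs via copysign so each satisfies the
# two conditions stated above.
def f_square(e):        # convex in |e|: emphasises large edge values
    return math.copysign(e * e, e)

def f_sqrt(e):          # concave in |e|: lifts low-level signals
    return math.copysign(math.sqrt(abs(e)), e)

def f_log(e):           # concave in |e|, log-based, still passes through 0
    return math.copysign(math.log1p(abs(e)), e)

for f in (f_square, f_sqrt, f_log):
    assert f(0) == 0                 # condition 2: passes through the origin
    assert f(2) > 0 and f(-2) < 0    # condition 1: first and third quadrants
print(f_square(3), f_square(-3))  # 9.0 -9.0
```

Which family is appropriate depends on the desired effect, as discussed below for convex and concave cases.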
The nonlinear function can also be realized using a table. For example, all values that Exy can take and a predetermined output corresponding to each value can be stored in a table, and when performing the nonlinear processing, the output can be computed using the table as the function.
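The table-based realization can be sketched as follows. The 8-bit signed value range is an assumption for illustration; in practice the table covers whatever range Exy can take.

```python
def build_nlf_table(f, lo=-128, hi=127):
    """Precompute f over every value an element can take (8-bit signed here,
    an illustrative assumption), so the nonlinear processing becomes a
    single table lookup per element."""
    return {e: f(e) for e in range(lo, hi + 1)}

# Sign-preserving square as an example entry-generating function.
table = build_nlf_table(lambda e: e * abs(e))

def nonlinear_process(edge_elements):
    """Apply the tabulated nonlinear function element-wise."""
    return [table[e] for e in edge_elements]

print(nonlinear_process([-3, 0, 5]))  # [-9, 0, 25]
```

A lookup table makes the per-element cost independent of how expensive f is to evaluate, which matters when every element of every frame must be processed.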
When nonlinear processing is applied to an image, frequency components not contained in the spatial frequency components of the original image are generated. FIG. 3 shows the frequency components contained in an input image and in the corresponding output image when the output image is obtained by applying a nonlinear function to the luminance value of each pixel of the input image using the circuit shown. In the figure, HPF denotes a high-pass filter, NLF a nonlinear function, LMT a limiter, ADD an adder, and MSB the most significant bit, that is, the sign bit.
Since the frequency components generated differ depending on the nonlinear function, the nonlinear function used in the above nonlinear processing can be selected as appropriate according to the desired effect. For example, FIG. 2D is a graph showing the effect of the nonlinear processing when equation (4) is used as f(E); the graph contains a curve 211 connecting plots of the element values of the edge component before the nonlinear processing and a curve 212 connecting plots of the element values after the nonlinear processing. Here, the horizontal axis of the graph represents consecutive elements of the edge component, and the vertical axis represents their values. Equation (4) is a nonlinear function whose output/input ratio increases as the input increases. As a result, the curve 212 after the nonlinear processing has a sharper peak 213 than the curve 211 before the nonlinear processing. This means that, compared with simply raising the gain, elements of the edge component with larger values become larger still, and their difference from elements with smaller values is emphasized. This effect is generally obtained when the nonlinear processing uses a nonlinear function that is convex when its absolute value is taken.
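The contrast with simply raising the gain can be checked numerically. The sign-preserving square used here is an assumed instance of a function that is convex in the absolute value, not equation (4) itself.

```python
# Edge profile with one strong peak and low-level surroundings,
# analogous to curve 211 in FIG. 2D.
profile = [1, 2, 10, 2, 1]

gain = [4 * e for e in profile]          # simply raising the gain
convex = [e * abs(e) for e in profile]   # convex-in-|e| nonlinear function

# Ratio of the peak to a small neighbour: a plain gain leaves the ratio
# unchanged, while the convex function widens it (the sharper peak 213).
print(max(gain) / gain[1], max(convex) / convex[1])  # 5.0 25.0
```

The gain multiplies every element equally (the peak-to-neighbor ratio stays at 10/2 = 5), whereas the convex function stretches large values disproportionately.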
Conversely, when the nonlinear function used in the above nonlinear processing is one that is concave when its absolute value is taken, such as one based on a logarithm or a square root (for example, equations (6) and (7)), noise in the elements of the edge component generally increases, but low-level signals become discernible.
Furthermore, when a function with a step partway along it, or a function whose slope becomes negative, is used as the nonlinear function in the above nonlinear processing, an effect such as reduced sensitivity to signals of a certain level is obtained. An example of such a nonlinear function is shown in FIG. 4A.
The function shown in FIG. 4A also includes a so-called dead band; as shown in FIG. 4B, signals whose absolute value is at or below a certain level are set to zero, so noise can be removed or at least reduced. Such a function can also be regarded as a composite with the function for limiter application described later.
Note that the nonlinear function may be asymmetric with respect to the origin. In addition, functions with different properties may be combined into the nonlinear function used in the above nonlinear processing so that different effects are obtained depending on the range of the input edge component values; such nonlinear functions include functions that are partially linear, functions that are partially constant, functions containing discontinuities, and functions that are quadratic over one part and cubic over another.
When evaluating focus on the output images of a video camera, the image from the camera changes from moment to moment, and the appropriate nonlinear function may differ depending on the image. It is therefore convenient if, for example while the video camera is in use and images are input one after another for processing by this method, the nonlinear function used in the above nonlinear processing can be switched among a plurality of nonlinear functions. Such switching can be performed by any means.
The edge component generation step 120 may further include an amplification/limiter application step 123. In this step, one or both of the following are performed on the values of the elements of the edge component obtained after the nonlinear processing step 122 (or after the two-dimensional high-pass filter application step 121 when the nonlinear processing step 122 is not included): changing an element's value to that value multiplied by a predetermined coefficient, and changing an element's value to a predetermined threshold when the value is greater or smaller than that threshold.
The amplification/limiter application step 123, or the limiter application within that step, may be realized by a function of the general form shown in FIG. 5A. In this figure, the horizontal and vertical axes represent the input and output, respectively; THN, TLN, TLP and THP are thresholds on the input, and CHN, CLN, CLP and CHP are the output values corresponding to those thresholds. The absolute values of TLN and TLP, of THN and THP, of CLP and CLN, and of CHP and CHN may each be equal or different. According to this function, an edge component value greater than (or not less than) TLN and smaller than (or not greater than) TLP becomes zero, a value smaller than (or not greater than) THN becomes CHN, and a value greater than (or not less than) THP becomes CHP. In FIG. 5A the function is drawn with a slope of 1 over the interval from THN to TLN and over the interval from TLP to THP, but the slope over at least one of these intervals need not be 1; by making the slope greater than 1, this function can apply the limiter and perform amplification at the same time.
In the function shown in FIG. 5A, CLP = TLP and CLN = TLN may hold, in which case the function is as shown in FIG. 5B. Further, CLP = CLN = 0 may hold, in which case the function is as shown in FIG. 5C.
Needless to say, such a function may be realized using a lookup table instead of numerical calculation.
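Purely as an illustration of the general shape of FIG. 5B (with CLP = TLP and CLN = TLN), and not as the disclosed implementation, the limiter might be sketched as follows; the threshold and ceiling values are assumed examples, not values taken from the figures.

```python
def apply_limiter(e, t_hn=-96.0, t_ln=-16.0, t_lp=16.0, t_hp=96.0,
                  c_hn=-96.0, c_hp=96.0, slope=1.0):
    """Illustrative limiter of the general shape of FIG. 5B.

    Values strictly between t_ln and t_lp are zeroed; values at or
    beyond t_hp (resp. t_hn) are clipped to c_hp (resp. c_hn); values
    in between pass through a linear region. All parameter values are
    assumptions for the example.
    """
    if t_ln < e < t_lp:
        return 0.0        # small edge values become zero
    if e <= t_hn:
        return c_hn       # negative limiter ceiling
    if e >= t_hp:
        return c_hp       # positive limiter ceiling
    # Linear region; a slope greater than 1 applies the limiter and
    # amplification at the same time.
    return slope * e
```

In practice such a function could equally be precomputed into a lookup table indexed by the (quantized) edge component value, as noted above.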
The nonlinear processing step 122 and the amplification/limiter application step 123 may be executed as a single step by using a function obtained by composing the functions used in the two.
As shown in FIG. 1B, instead of the step 121 of applying a two-dimensional high-pass filter, the edge component generation step 120 can include a step 121a of applying a one-dimensional high-pass filter in the direction perpendicular to the lines (the vertical direction of the first image) and a step 121b of applying a one-dimensional high-pass filter in the sample direction parallel to the lines (the horizontal direction). In this case, the final edge component Exy can be obtained by a step 124 of combining the first edge component ELxy and the second edge component ESxy obtained by the two one-dimensional high-pass filters.
FIG. 2E is an example of a one-dimensional high-pass filter. In this example a filter with a size of 3 elements is used, but the size may be whatever is desired. Filters of different sizes may also be used for the direction perpendicular to the lines and the sample direction parallel to the lines. The coefficients of the example high-pass filter can take the following values.
The coefficient values above are examples; any coefficients that function as a high-pass filter can be used, and needless to say the coefficients may differ between the direction perpendicular to the lines and the sample direction parallel to the lines.
Application of the high-pass filter in the direction perpendicular to the lines is expressed by the following equation.
Application of the high-pass filter in the sample direction parallel to the lines is expressed by the following equation.
When using the above equations, Ixy in the region outside the image (x < 0, y < 0, w ≤ x, h ≤ y) may be computed with an arbitrary value, for example 0.
Any combining method may be used in the edge component combining step 124; for example, the components can be combined simply by computing Exy = ELxy + ESxy.
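The two one-dimensional filtering steps and the combining step can be sketched as follows. This is only an illustration under stated assumptions: the 3-tap kernel (-1, 2, -1) stands in for the unspecified example coefficients, out-of-image pixels are treated as 0 as permitted above, and the same kernel is used in both directions, although the text allows different sizes and coefficients per direction.

```python
def highpass_1d_pair(img, kernel=(-1.0, 2.0, -1.0)):
    """Apply an assumed 3-tap 1D high-pass filter in the direction
    perpendicular to the lines (vertical) and in the sample direction
    parallel to the lines (horizontal), then combine the two results by
    simple addition: Exy = ELxy + ESxy (combining step 124).

    `img` is a list of rows of first-image values; pixels outside the
    image are treated as 0.
    """
    h, w = len(img), len(img[0])

    def pix(x, y):
        return img[y][x] if 0 <= x < w and 0 <= y < h else 0.0

    edge = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # First edge component ELxy: vertical (perpendicular to lines)
            e_l = sum(k * pix(x, y + d) for k, d in zip(kernel, (-1, 0, 1)))
            # Second edge component ESxy: horizontal (sample direction)
            e_s = sum(k * pix(x + d, y) for k, d in zip(kernel, (-1, 0, 1)))
            edge[y][x] = e_l + e_s
    return edge
```

On a uniform image the interior of the result is zero, as expected of a high-pass filter, while border elements respond to the assumed zero padding outside the image.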
As shown in FIG. 1B, at least one of nonlinear processing steps 122a and 122b and amplification/limiter application steps 123a and 123b can be applied to the first edge component ELxy and the second edge component ESxy, respectively; these steps are the same as the nonlinear processing step 122 and the amplification/limiter application step 123 described above.
The method then continues to a step 130 of generating the second image. This step includes a step 134 of converting the edge component into an image by assigning colors according to the values of its elements.
A method of generating the second image will now be described with reference to FIG. 6. FIG. 6 shows three graphs whose horizontal axis is the edge component element value Exy and whose vertical axes are, respectively, the R, G and B component values JRxy, JGxy and JBxy of the pixels of the second image.
The second image can be generated by assigning colors to its pixels from the values of the elements of the edge component according to the relationships plotted in these graphs. That is, denoting the plots shown in FIG. 6, from the top graph down, by the functions r(Exy), g(Exy) and b(Exy), the RGB values (JRxy, JGxy, JBxy) of the pixels of the second image can be expressed as follows.
JRxy = r(Exy)
JGxy = g(Exy)   (10)
JBxy = b(Exy)
Needless to say, the range over which these functions must be defined varies with the values the edge component elements can take.
As an alternative, the second image can be generated by assigning the pixel values with the following functions.
JRxy = Exy + offset
JGxy = Exy + offset   (11)
JBxy = Exy + offset
Here, offset is an arbitrary offset value. With this method, the second image is rendered in shades ranging from white to black.
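A minimal sketch of the grayscale assignment of equation (11) follows; the offset value of 128 is an assumed example, and the clamp to the 8-bit display range is an added assumption not stated in the equation.

```python
def edge_to_pixel(e_xy, offset=128.0):
    """Assign an RGB value to one second-image pixel from one edge
    component element per equation (11): each channel is Exy + offset.

    Clamping to [0, 255] is an added assumption for display purposes;
    with equal channels, the result is a shade from black to white.
    """
    v = max(0.0, min(255.0, e_xy + offset))
    return (v, v, v)
```

Under equation (10), the three identical channel assignments would simply be replaced by three independent functions r, g and b of Exy, such as the plots of FIG. 6.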
The methods described above are merely examples. The second image can be generated by any method that determines the color of each of its pixels as a function of the value of each element of the edge component.
The step 130 of generating the second image may include a smoothing step 131 before the imaging step 134. The smoothing step 131 includes one or both of an isolated point removal filter application step 132 and a spreading filter application step 133. These steps are described below with reference to FIGS. 7A to 7F.
FIG. 7A shows, for purposes of explanation, an image 410 containing an isolated point 411 and a hole ("worm-eaten" element) 412, obtained by imaging the edge component by binarizing the absolute value of each element with a suitable threshold. In FIG. 7A (and FIG. 7B described later), a white pixel indicates that the absolute value of the corresponding element is at or above a certain threshold, and a black pixel indicates that it is below that threshold. Strictly speaking, an isolated point 411 is an element of the edge component whose absolute value is larger than, and preferably markedly larger than, the absolute values of the surrounding elements, while a hole 412 is an element whose absolute value is smaller than, and preferably markedly smaller than, the absolute values of the surrounding elements. The isolated point 411 and the hole 412 are also shown in another representation in FIG. 7C, a graph whose horizontal axis is a run of consecutive elements of the edge component and whose vertical axis is the element value. Such isolated points 411 and holes 412 can arise depending on the nature of the first image. Applying the isolated point removal filter removes isolated points 411, and applying the spreading filter removes holes 412; in other words, the value of an element that is an isolated point 411 or a hole 412 is replaced with a value derived from the values of the surrounding elements, for example their median, mean, maximum or minimum. An example of an isolated point removal filter and of a spreading filter is described below, but needless to say any filter that achieves the above purpose can be used, for example a median filter, which can remove outlying values.
FIG. 7D is a flowchart of an example isolated point removal filter. This filter first executes a step 431 of identifying a contiguous 3-element × 3-element region 450 (see FIG. 7F) in the edge component. Letting EXY be the value of the central element 451, Exy the values of the surrounding elements 452, and Emax the maximum value an element can take, the filter next executes a step 432 of counting, among the surrounding elements 452 of the identified region 450 (that is, excluding the central element 451), the elements whose value Exy has an absolute value at or below a threshold equal to the product of Emax and a first ratio. Here, the first ratio may be 10%. The filter then executes a step 433 of determining whether the count is at least a first number, which may be 3. If the determination is true, it executes a step 434 of replacing the value EXY of the central element 451 with the median of the values Exy of the surrounding elements 452.
For example, suppose the value EXY of the central element 451 is 100, the values Exy of the eight surrounding elements 452 are {33, 10, 15, -20, -5, 5, -42, 12}, and the maximum value Emax = 255. The threshold is then 255 × 10% = 25.5, the absolute values of Exy are {33, 10, 15, 20, 5, 5, 42, 12}, and the absolute values at or below the threshold 25.5 are {10, 15, 20, 5, 5, 12}, so the number of elements whose value Exy has an absolute value at or below the threshold is 6. Since this count of 6 is at least the first number 3, the central element 451 is determined to be an isolated point, and its value EXY = 100 is replaced with 7.5, the median of the surrounding elements 452 (since there are eight surrounding elements, the median is computed as the arithmetic mean of the two middle values Exy = {5, 10}).
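The flowchart of FIG. 7D applied to a single 3 × 3 region can be sketched as below; this is an illustration only, with the region passed as a flat list of nine values whose central element is at index 4, and with the example parameter values (first ratio 10%, first number 3) named above.

```python
def remove_isolated_point(region, e_max=255.0, first_ratio=0.10,
                          first_number=3):
    """Illustrative isolated point removal filter (FIG. 7D) on one
    3x3 region, given as a flat list of 9 values, central element at
    index 4.

    Step 432: count surrounding elements whose absolute value is at or
    below e_max * first_ratio. Step 433: if that count is at least
    first_number, step 434 replaces the central value with the median
    of the eight surrounding values (the mean of the two middle values).
    """
    center = region[4]
    around = region[:4] + region[5:]          # the eight surrounding elements
    threshold = e_max * first_ratio
    count = sum(1 for v in around if abs(v) <= threshold)
    if count >= first_number:
        s = sorted(around)
        return (s[3] + s[4]) / 2.0            # median of eight values
    return center
```

Running this on the worked example above, `remove_isolated_point([33, 10, 15, -20, 100, -5, 5, -42, 12])`, reproduces the replacement value 7.5.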
FIG. 7E is a flowchart of an example spreading filter. This filter first executes a step 441 of identifying a contiguous 3-element × 3-element region 450 (see FIG. 7F) in the edge component. Letting EXY be the value of the central element 451, Exy the values of the surrounding elements 452, and Emax the maximum value an element can take, the filter next executes a step 442 of counting, among the surrounding elements 452 of the identified region 450 (excluding the central element 451), the elements whose value Exy has an absolute value at or above a threshold equal to the product of Emax and a second ratio. Here, the second ratio may be 10%. The filter then executes a step 443 of determining whether the count is at least a second number, which may be 5. If the determination is true, it executes a step 444 of replacing the value EXY of the central element 451 with the median of the values Exy of the surrounding elements 452. A concrete example for the spreading filter is omitted because it would be similar to the one given for the isolated point removal filter.
When the smoothing step 131 includes both the isolated point removal filter application step 132 and the spreading filter application step 133, either may be executed first, but empirically, executing the isolated point removal filter application step 132 first appears to give better results. Note that the first ratio and the second ratio may differ from each other, and the first number and the second number may be the same.
The method can include a display image generation step 140, which includes an image combining step 142. For comparison with the combined image, FIGS. 8A and 8B show, respectively, an original image (shown in grayscale, and therefore substantially corresponding to the first image) and a second image. FIG. 8C shows a display image generated by combining the original image and the second image so that they are superimposed.
Another example of an original image, and of a display image obtained by superimposing the original image and the second image, is shown in FIGS. 8G and 8H. The lower image in FIG. 8H is the display image obtained for the in-focus original image shown in FIG. 8G; from an out-of-focus original image (not shown), a display image like the upper one in FIG. 8H would be obtained.
The color of a given pixel in the superimposed image can be obtained by computing the value that specifies that color (for example, each of the R, G and B values of the color, or, when the R, G and B values are stored as a single value such as a 24-bit or 32-bit color value, that single value) with the following equation.
Cout = αCin1 + βCin2
(where α is a real number with 0 ≤ α ≤ 1 and β is a real number with 0 ≤ β ≤ 1)   (12)
Here, Cout is the value specifying the color of a given pixel in the superimposed image, and Cin1 and Cin2 are the values specifying the colors of the corresponding pixels of the original image and the second image, respectively.
The weights α and β can be specified arbitrarily by the user. For example, when α = 0.5 and β = 1, the second image appears emphasized relative to the original image in the superimposed image. Also, while the value specifying the color of each pixel of the original image is generally assumed to be positive, the value specifying the color of each pixel of the second image can take both positive and negative values, depending on the function described above that is used to generate it. In such a case, consider an original image containing blown-out highlights, that is, pixels whose color-specifying value has reached its maximum because something brighter than the maximum representable brightness was captured: when this original image and the second image are superimposed with α = 1 and β = 1, the color-specifying value of a blown-out pixel in the original image is reduced by the negative color-specifying value of the corresponding pixel in the second image, which can make the pixels representing edges visible in the superimposed image. Note that when the relationship β = 1 - α holds, equation (12) expresses ordinary alpha blending.
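The per-pixel superimposition of equation (12) can be sketched as follows, applied to one channel of one pixel pair; the clamp to the 8-bit display range is an added assumption, and the example weights are illustrative.

```python
def blend(c_in1, c_in2, alpha=0.5, beta=1.0):
    """Superimpose corresponding pixel values of the original image
    (c_in1) and the second image (c_in2) per equation (12):
    Cout = alpha * Cin1 + beta * Cin2, with 0 <= alpha, beta <= 1.

    c_in2 may be negative, as the text notes; the final clamp to
    [0, 255] is an added assumption for display purposes.
    """
    out = alpha * c_in1 + beta * c_in2
    return max(0.0, min(255.0, out))
```

For instance, `blend(255.0, -40.0, 1.0, 1.0)` yields 215.0, illustrating how a negative second-image value pulls a blown-out original pixel back into a visible range; with `beta = 1 - alpha` the call performs ordinary alpha blending.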
The display image can also be generated by combining the images so that part of the original image is replaced with the second image. The position and size of the part to be replaced can be specified arbitrarily by the user. Partial replacement and superimposition can also be performed simultaneously. FIG. 8D shows a display image 510 generated by performing partial replacement and superimposition at the same time, superimposing the second image on a part 511 of the original image.
The appearance of the combined image can be adjusted by changing the function used to convert the second image into an image. FIG. 8E shows a display image generated by superimposing the original image shown in FIG. 8A and a second image, but the second image in FIG. 8E was produced from the original image of FIG. 8A using a function different from the one that generated the second image in FIG. 8C (the second image shown in FIG. 8B). The function used to generate the second image in FIG. 8E is the function used for FIG. 8C modified to add a further offset to each of the R, G and B values, and can produce a brighter second image. FIG. 8F corresponds to FIG. 8E with partial image replacement additionally performed.
The display image generation step 140 can include a step 141 of adjusting the color of the original image. This allows the color of the original image to be adjusted before combining; for example, lowering the contrast of the original image through color adjustment may yield a display image in which the pixels representing edges stand out more.
The method then proceeds to a step 150 of outputting an image based on the second image to a display means, for example a camera viewfinder or a monitor. Here, the image based on the second image is the generated display image when the method includes the display image generation step 140, and the second image itself when it does not.
The method for assisting focus evaluation has now been described. Focus is generally evaluated while moving the focus ring of a camera or the like during shooting. Moving the focus ring changes the camera's focus, and the original image changes with it. Since the input first image therefore changes, the second image used for display must be updated. Accordingly, the steps of the method are repeated in response to each new input of a first image.
Next, an apparatus for assisting focus evaluation, which is an embodiment of the present invention, will be described.
FIG. 9A is a block diagram of an apparatus for assisting focus evaluation. The configuration shown in this block diagram is merely an example, and some of the means described below are not essential. In this figure, the graphs shown alongside the first image, the edge component, the second image, and the original images before and after color adjustment illustrate, for explanatory purposes, what the output of each means looks like. In these graphs, the horizontal and vertical axes represent, for the first image, the second image, and the original images before and after color adjustment, a run of consecutive pixels and the luminance values obtained from those pixels, and, for the edge component, a run of consecutive elements and the element values.
The apparatus includes first image generation means 610, which is a means for generating the first image from the original image.
The apparatus includes edge component generation means 620, which executes the edge component generation step 120. The edge component generation means 620 therefore need only be able to remove at least the direct-current component from the spatial frequency components contained in the first image; in this embodiment it is realized by providing two-dimensional high-pass filter means 621, which executes the two-dimensional high-pass filter application step. Alternatively, it can be realized by providing two one-dimensional high-pass filter means 621a and 621b, as described later with reference to FIG. 9B, or by providing a means that subtracts from the first image the result of applying a low-pass filter to the first image. The edge component generation means 620 may further include one or both of nonlinear processing means 622 and amplification/limiter means 623, which execute the nonlinear processing step 122 and the amplification/limiter application step 123, respectively.
The edge component generation means 620 includes filter buffer means 624. The filter buffer means 624 temporarily stores at least the data used at one time by the two-dimensional high-pass filter means 621, for example data representing a contiguous 3-pixel × 3-pixel block of the first image (corresponding to the size of the filter in the two-dimensional high-pass filter means 621), and outputs it to the two-dimensional high-pass filter means 621. If w is the number of pixels in the horizontal direction of the first image, the filter buffer means 624 may, for processing efficiency, temporarily store data representing a contiguous w-pixel × 3-pixel block of the first image at a time (3 corresponding to the vertical size of the filter in the two-dimensional high-pass filter means 621).
As shown in FIG. 9B, instead of the two-dimensional high-pass filter means 621, the edge component generation means 620 can include one-dimensional high-pass filter means 621a for the direction perpendicular to the lines and one-dimensional high-pass filter means 621b for the sample direction parallel to the lines; these execute the one-dimensional high-pass filter application steps 121a and 121b, respectively. As shown in the figure, the edge component generation means 620 can include at least one of nonlinear processing means 622a and 622b and amplification/limiter application means 623a and 623b for the direction perpendicular to the lines and the sample direction parallel to the lines. These means execute the nonlinear processing step 122 and the amplification/limiter application step 123 on the first edge component ELxy and the second edge component ESxy (that is, they execute steps 122a and 122b and steps 123a and 123b, respectively).
When two one-dimensional high-pass filter means are used, the edge component generation means 620 includes line buffer means 624a, sample buffer means 624b, and edge component synthesis means 626. The line buffer means 624a temporarily stores data corresponding to one line of the first image times the filter size in the one-dimensional high-pass filter means 621a; for example, if the number of pixels in the horizontal direction of the first image is w and the filter size is 3, it stores data representing w × 3 contiguous pixels. From the stored data, it sequentially outputs data representing contiguous pixels in the direction perpendicular to the lines to the one-dimensional high-pass filter means 621a, and sequentially outputs to the sample buffer means 624b the data used at one time by the one-dimensional high-pass filter means 621b, for example three contiguous pixels in the sample direction parallel to the lines (corresponding to the filter size in the one-dimensional high-pass filter means 621b). The sample buffer means 624b temporarily stores the data used at one time by the one-dimensional high-pass filter means 621b and outputs it to the one-dimensional high-pass filter means 621b. The edge component synthesis means 626 temporarily stores the first edge component E_Lxy and the second edge component E_Sxy and executes the edge component synthesis step 124.
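A minimal sketch of the separable alternative: two one-dimensional high-pass filters, one perpendicular to the lines and one along the sample direction, whose outputs (E_Lxy and E_Sxy) are then combined in step 124. The [-1, 2, -1] kernel and the additive combination are assumptions, since the patent fixes neither.

```python
import numpy as np

K = np.array([-1.0, 2.0, -1.0])  # simple 1-D high-pass kernel (an assumption)

def hpf_vertical(img):
    """1-D high-pass in the direction perpendicular to the lines (step 121a)."""
    out = np.zeros_like(img)
    out[1:-1, :] = K[0] * img[:-2, :] + K[1] * img[1:-1, :] + K[2] * img[2:, :]
    return out

def hpf_horizontal(img):
    """1-D high-pass in the sample direction parallel to the lines (step 121b)."""
    out = np.zeros_like(img)
    out[:, 1:-1] = K[0] * img[:, :-2] + K[1] * img[:, 1:-1] + K[2] * img[:, 2:]
    return out

def synthesize(e_l, e_s):
    """Edge component synthesis (step 124); the combining rule is not fixed by
    the text, so plain addition is used here as a placeholder."""
    return e_l + e_s

img = np.zeros((5, 5))
img[:, 2] = 1.0  # a vertical line: only the horizontal filter responds
edge = synthesize(hpf_vertical(img), hpf_horizontal(img))
```

On the vertical line, the perpendicular filter sees constant columns and outputs zero, while the sample-direction filter produces a strong response centred on the line, illustrating why both directions are needed.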
The apparatus includes second image generation means 630 that executes the second image generation step 130. The second image generation means 630 includes imaging means 634 that executes the imaging step 134. The second image generation means 630 may further include smoothing means 631 that executes the smoothing step 131. The smoothing means 631 includes one or both of isolated point removal filter means 632 and spreading filter means 633, which execute the isolated point removal filter application step 132 and the spreading filter application step 133, respectively.
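The isolated point removal filter of step 132 is specified in detail in the claims: within each 3 × 3 region, if enough of the eight surrounding elements are near zero, the centre is replaced by their median. The sketch below follows that rule; the maximum value, ratio, and count threshold are assumed parameters, and the spreading filter of step 133 is its dual (counting neighbours at or above the threshold instead).

```python
import numpy as np

def isolated_point_removal(edge, max_val=255, ratio=0.25, min_count=6):
    """Sketch of the isolated-point-removal filter (step 132): if at least
    `min_count` of a pixel's 3x3 neighbours have absolute values at or below
    max_val*ratio, the (isolated) centre is replaced by the neighbours' median.
    The ratio and count values here are illustrative assumptions."""
    thr = max_val * ratio
    out = edge.copy()
    h, w = edge.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nbrs = np.delete(edge[y-1:y+2, x-1:x+2].ravel(), 4)  # 8 neighbours
            if np.sum(np.abs(nbrs) <= thr) >= min_count:
                out[y, x] = np.median(nbrs)
    return out

e = np.zeros((5, 5))
e[2, 2] = 200.0  # an isolated spike surrounded by zeros
cleaned = isolated_point_removal(e)
```

The lone spike, which would otherwise flicker as spurious peaking, is suppressed, while a genuine edge (many strong neighbours) would pass through unchanged.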
The apparatus can include display image generation means 640 that executes the display image generation step 140. The display image generation means 640 includes image synthesis means 642 that executes the image synthesis step 142. The display image generation means 640 may further include color adjustment means 641 that executes step 141 of adjusting the color of the original image.
The original image and the second image are input to the display image generation means 640. Therefore, when the input timings of the original image and the second image differ significantly, the display image generation means 640 includes delay adjustment means 643 for delaying the timing at which the original image is passed to the image synthesis means 642 (and to the color adjustment means 641, if present).
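After any delay adjustment, the superimposition performed by the image synthesis means 642 is the weighted sum stated in the claims, C_out = α·C_in1 + β·C_in2 with 0 ≤ α, β ≤ 1. A minimal sketch, where the particular weights are example values:

```python
import numpy as np

def overlay(original, second, alpha=0.6, beta=0.4):
    """Superimpose the original image and the second (edge) image per
    C_out = alpha*C_in1 + beta*C_in2, 0 <= alpha, beta <= 1.
    alpha=0.6 and beta=0.4 are illustrative choices only."""
    return alpha * original + beta * second

orig = np.full((2, 2), 100.0)                    # flat original image
peak = np.array([[0.0, 255.0], [0.0, 0.0]])      # one highlighted edge pixel
disp = overlay(orig, peak)
```

Pixels without edge content keep a dimmed version of the original, while edge pixels are brightened by the second image, which is the intended peaking overlay.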
The apparatus is configured to output an image based on the second image to display means (not shown). Here, the image based on the second image is the generated display image when the apparatus includes the display image generation means 640, and is the second image itself when it does not.
The apparatus for assisting focus evaluation has been described above. As noted with respect to the method for assisting focus evaluation, the input first image changes during focus evaluation, so the second image used for display must be updated. Accordingly, the apparatus is configured so that, in response to a new input of a first image to the edge component generation means 620, it outputs at least an image based on the newly generated second image to the display means.
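This update behaviour, regenerating the displayed image whenever a new first image arrives, reduces to a per-frame loop. The edge pipeline is stubbed out here with a simple horizontal gradient, purely as a placeholder for the chain described above:

```python
import numpy as np

def second_image(first):
    """Stand-in for the edge-generation plus second-image pipeline; a plain
    horizontal gradient magnitude is used only as a placeholder."""
    e = np.zeros_like(first)
    e[:, 1:] = np.abs(np.diff(first, axis=1))
    return e

# Two successive "first images": a blank frame, then one with edges.
frames = [np.zeros((2, 3)), np.array([[0.0, 5.0, 5.0], [0.0, 0.0, 5.0]])]
displayed = []
for first in frames:                       # each new input first image...
    displayed.append(second_image(first))  # ...triggers a fresh second image
```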
The present invention can also be implemented as a program for assisting focus evaluation. Such a program causes a computer to function as the above-described apparatus for evaluating focus.
Here, a "computer" is a system including one or more of the following hardware resources: an arithmetic/control device, a storage device, an input device, and an output device. Arithmetic/control devices include CPUs, MPUs, and the like. Storage devices include memory, hard disks, SSDs, and the like. Input devices include chip pins, mice, keyboards, touch panels, network interfaces, and the like. Output devices include chip pins, network interfaces, displays, printers, speakers, and the like. Using an FPGA, a microcomputer, or the like, two or more of the arithmetic/control device, the storage device, the input device, and the output device can be physically combined into one. It will be clear to those skilled in the art that the present invention can be implemented by one or more of these hardware resources working in cooperation with a program, which is software.
A plurality of embodiments of the present invention have been described above. Note, however, that these embodiments are merely illustrative of the invention. The present invention is defined only by the claims, and also encompasses embodiments in which various changes, modifications, deletions, and substitutions have been made to the embodiments described above.
Claims (25)
- 1. A device comprising:
edge component generation means for generating an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of a pixel of the first image after at least the DC component has been removed from the spatial frequency components of the first image; and
image generation means for generating a second image from the edge component, wherein the second image includes a plurality of pixels, the image generation means including means for determining the color of each pixel of the second image as a function of the value of each element of the edge component,
the device being configured to output, to display means, an image based on a newly generated second image in response to a new input of a first image, wherein the edge component generation means includes means for performing nonlinear processing on the value of each element of the edge component before a limiter is applied, and the value of each element of the edge component after the nonlinear processing is the output of a first nonlinear function whose input is the value of that element of the edge component before the nonlinear processing.
- 2. A device comprising:
edge component generation means for generating an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of a pixel of the first image after at least the DC component has been removed from the spatial frequency components of the first image; and
image generation means for generating a second image from the edge component, wherein the second image includes a plurality of pixels, the image generation means including means for determining the color of each pixel of the second image as a function of the value of each element of the edge component,
the device being configured to output, to display means, an image based on a newly generated second image in response to a new input of a first image, wherein the edge component generation means includes means for performing nonlinear processing on the value of each element of the edge component, the value of each element of the edge component after the nonlinear processing is the output of a second nonlinear function whose input is the value of that element of the edge component before the nonlinear processing, and the second nonlinear function is a composite of a first nonlinear function and at least a function for applying a limiter.
- 3. The device according to claim 1 or 2, wherein the first nonlinear function is a function that is defined in the first and third quadrants and passes through the origin.
- 4. The device according to any one of claims 1 to 3, further comprising means for switching the first nonlinear function among a plurality of nonlinear functions as first images are input one after another.
- 5. The device according to any one of claims 1 to 4, wherein the edge component generation means includes means for applying a two-dimensional high-pass filter to the first image to remove at least the DC component from the spatial frequency components of the first image.
- 6. The device according to any one of claims 1 to 4, wherein the edge component generation means includes first-direction one-dimensional high-pass filter means for the first image, second-direction one-dimensional high-pass filter means for the first image, and means for combining the output of the first-direction one-dimensional high-pass filter means with the output of the second-direction one-dimensional high-pass filter means.
- 7. The device according to any one of claims 1 to 6, wherein the edge component generation means includes one or both of amplification means for changing the value of an element of the edge component to that value multiplied by a predetermined coefficient, and limiter application means for changing the value of an element of the edge component to a predetermined threshold when that value is greater than or less than the threshold.
- 8. The device according to any one of claims 1 to 7, wherein the image generation means includes one or both of isolated point removal filter means for replacing, in the edge component, the value of an element whose absolute value is greater than the absolute values of the values of the surrounding elements with a value determined from the values of those surrounding elements, and spreading filter means for replacing, in the edge component, the value of an element whose absolute value is smaller than the absolute values of the values of the surrounding elements with a value determined from the values of those surrounding elements.
- 9. The device according to claim 8, wherein the isolated point removal filter means is configured to:
identify a contiguous 3-element × 3-element region in the edge component;
among the elements of the region other than the central element, determine the number of elements whose absolute values are less than or equal to a threshold, the threshold being the product of the maximum value an element can take and a first ratio; and
when the determined number of elements is greater than or equal to a first number, replace the value of the central element of the region with the median of the values of the elements other than the central element;
and wherein the spreading filter means is configured to:
identify a contiguous 3-element × 3-element region in the edge component;
among the elements of the region other than the central element, determine the number of elements whose absolute values are greater than or equal to a threshold, the threshold being the product of the maximum value an element can take and a second ratio; and
when the determined number of elements is greater than or equal to a second number, replace the value of the central element of the region with the median of the values of the elements other than the central element.
- 10. The device according to any one of claims 1 to 9, wherein the first image is generated from an original image, the original image includes a plurality of pixels, and the value of each pixel of the first image is derived from the color of the corresponding pixel of the original image, the device further comprising further image generation means for generating the image to be output to the display means by one or both of replacing at least part of the original image with the second image and superimposing the original image and the second image.
- 11. The device according to claim 10, wherein, when superimposing the original image and the second image, the further image generation means is configured to determine the color of a given pixel of the superimposed image by
C_out = α·C_in1 + β·C_in2
(where α is a real number with 0 ≤ α ≤ 1 and β is a real number with 0 ≤ β ≤ 1),
where C_out is a value specifying the color of the pixel of the superimposed image, and C_in1 and C_in2 are values specifying the colors of the corresponding pixels of the original image and the second image, respectively.
- 12. The device according to claim 10 or 11, wherein the further image generation means includes means for adjusting the color of the original image.
- 13. A program causing a computer to function as the device according to any one of claims 1 to 12.
- 14. A method comprising:
an edge component generation step of generating, by edge component generation means, an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of a pixel of the first image after at least the DC component has been removed from the spatial frequency components of the first image;
an image generation step of generating, by image generation means, a second image from the edge component, wherein the second image includes a plurality of pixels and the step includes determining the color of each pixel of the second image as a function of the value of each element of the edge component; and
an image output step of outputting, by image output means, an image based on the second image to display means,
wherein the edge component generation step, the image generation step, and the image output step are repeated in response to a new input of a first image, and the edge component generation step includes performing nonlinear processing on the value of each element of the edge component before a limiter is applied, the value of each element of the edge component after the nonlinear processing being the output of a first nonlinear function whose input is the value of that element of the edge component before the nonlinear processing.
- 15. A method comprising:
an edge component generation step of generating, by edge component generation means, an edge component from a first image, wherein the first image includes a plurality of pixels, the edge component includes a plurality of elements, and the value of each element of the edge component corresponds to the value of a pixel of the first image after at least the DC component has been removed from the spatial frequency components of the first image;
an image generation step of generating, by image generation means, a second image from the edge component, wherein the second image includes a plurality of pixels and the step includes determining the color of each pixel of the second image as a function of the value of each element of the edge component; and
an image output step of outputting, by image output means, an image based on the second image to display means,
wherein the edge component generation step, the image generation step, and the image output step are repeated in response to a new input of a first image, and the edge component generation step includes performing nonlinear processing on the value of each element of the edge component, the value of each element of the edge component after the nonlinear processing being the output of a second nonlinear function whose input is the value of that element of the edge component before the nonlinear processing, the second nonlinear function being a composite of a first nonlinear function and at least a function for applying a limiter.
- 16. The method according to claim 14 or 15, wherein the first nonlinear function is a function that is defined in the first and third quadrants and passes through the origin.
- 17. The method according to any one of claims 14 to 16, further comprising switching the first nonlinear function among a plurality of nonlinear functions as first images are input one after another.
- 18. The method according to any one of claims 14 to 17, wherein the edge component generation step includes applying a two-dimensional high-pass filter to the first image.
- 19. The method according to any one of claims 14 to 17, wherein the edge component generation step includes applying a first-direction one-dimensional high-pass filter to the first image, applying a second-direction one-dimensional high-pass filter to the first image, and combining the outputs obtained by the two one-dimensional high-pass filter application steps.
- 20. The method according to any one of claims 14 to 19, wherein the edge component generation step includes one or both of an amplification step of changing the value of an element of the edge component to that value multiplied by a predetermined coefficient, and a limiter application step of changing the value of an element of the edge component to a predetermined threshold when that value is greater than or less than the threshold.
- 21. The method according to any one of claims 14 to 20, wherein the image generation step includes one or both of an isolated point removal filter application step of replacing, in the edge component, the value of an element whose absolute value is greater than the absolute values of the values of the surrounding elements with a value determined from the values of those surrounding elements, and a spreading filter application step of replacing, in the edge component, the value of an element whose absolute value is smaller than the absolute values of the values of the surrounding elements with a value determined from the values of those surrounding elements.
- 22. The method according to claim 21, wherein the isolated point removal filter application step includes:
identifying a contiguous 3-element × 3-element region in the edge component;
among the elements of the region other than the central element, determining the number of elements whose absolute values are less than or equal to a threshold, the threshold being the product of the maximum value an element can take and a first ratio; and
when the determined number of elements is greater than or equal to a first number, replacing the value of the central element of the region with the median of the values of the elements other than the central element;
and wherein the spreading filter application step includes:
identifying a contiguous 3-element × 3-element region in the edge component;
among the elements of the region other than the central element, determining the number of elements whose absolute values are greater than or equal to a threshold, the threshold being the product of the maximum value an element can take and a second ratio; and
when the determined number of elements is greater than or equal to a second number, replacing the value of the central element of the region with the median of the values of the elements other than the central element.
- 23. The method according to any one of claims 14 to 22, wherein the first image is generated from an original image, the original image includes a plurality of pixels, and the value of each pixel of the first image is derived from the color of the corresponding pixel of the original image, the method further comprising a further image generation step of generating, by further image generation means, the image to be output to the display means by one or both of replacing at least part of the original image with the second image and superimposing the original image and the second image.
- 24. The method according to claim 23, wherein, when superimposing the original image and the second image, the further image generation step includes determining, by the further image generation means, the color of a given pixel of the superimposed image by
C_out = α·C_in1 + β·C_in2
(where α is a real number with 0 ≤ α ≤ 1 and β is a real number with 0 ≤ β ≤ 1),
where C_out is a value specifying the color of the pixel of the superimposed image, and C_in1 and C_in2 are values specifying the colors of the corresponding pixels of the original image and the second image, respectively.
- 25. The method according to claim 23 or 24, wherein the further image generation step further includes adjusting the color of the original image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016511656A JP6325656B2 (en) | 2014-04-04 | 2015-04-06 | Apparatus, program and method for assisting focus evaluation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JPPCT/JP2014/059983 | 2014-04-04 | ||
PCT/JP2014/059983 WO2015151279A1 (en) | 2014-04-04 | 2014-04-04 | Device, program, and method for assisting with focus evaluation |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015152424A1 true WO2015152424A1 (en) | 2015-10-08 |
Family
ID=54239639
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/059983 WO2015151279A1 (en) | 2014-04-04 | 2014-04-04 | Device, program, and method for assisting with focus evaluation |
PCT/JP2015/060736 WO2015152424A1 (en) | 2014-04-04 | 2015-04-06 | Device, program, and method for assisting with focus evaluation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/059983 WO2015151279A1 (en) | 2014-04-04 | 2014-04-04 | Device, program, and method for assisting with focus evaluation |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6325656B2 (en) |
WO (2) | WO2015151279A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106686453A (en) * | 2016-12-05 | 2017-05-17 | 广州视源电子科技股份有限公司 | Image display method and device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109410864A (en) * | 2018-12-04 | 2019-03-01 | 惠科股份有限公司 | Driving method and driving module of display panel and display device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010016783A (en) * | 2008-07-07 | 2010-01-21 | Ricoh Co Ltd | Imaging device |
JP2010114556A (en) * | 2008-11-05 | 2010-05-20 | Sony Corp | Imaging device, image processing device, and image processing method |
JP2011029870A (en) * | 2009-07-24 | 2011-02-10 | Sony Corp | Display signal processing device, display signal processing method, display device, and electronic equipment |
JP2013074395A (en) * | 2011-09-27 | 2013-04-22 | Ricoh Co Ltd | Imaging apparatus |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6285710B1 (en) * | 1993-10-13 | 2001-09-04 | Thomson Licensing S.A. | Noise estimation and reduction apparatus for video signal processing |
JP2003134352A (en) * | 2001-10-26 | 2003-05-09 | Konica Corp | Image processing method and apparatus, and program therefor |
- 2014-04-04: WO PCT/JP2014/059983 patent/WO2015151279A1/en, active Application Filing
- 2015-04-06: JP JP2016511656A patent/JP6325656B2/en, active Active
- 2015-04-06: WO PCT/JP2015/060736 patent/WO2015152424A1/en, active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2015151279A1 (en) | 2015-10-08 |
JPWO2015152424A1 (en) | 2017-04-13 |
JP6325656B2 (en) | 2018-05-16 |
Similar Documents
Publication | Title
---|---
JP4290193B2 (en) | Image processing device
EP2833317B1 (en) | Image display device and/or method therefor
JP6635799B2 (en) | Image processing apparatus, image processing method, and program
JP4847591B2 (en) | Image processing apparatus, image processing method, and image processing program
JP2008244591A (en) | Picture processing apparatus and its method
US8488899B2 (en) | Image processing apparatus, method and recording medium
JP2008511048A (en) | Image processing method and computer software for image processing
KR20140035521A (en) | Local area contrast enhancement
Kang et al. | Adaptive height-modified histogram equalization and chroma correction in YCbCr color space for fast backlight image compensation
JP6624061B2 (en) | Image processing method, image processing device, and recording medium for storing image processing program
JP2009025862A (en) | Image processor, image processing method, image processing program and image display device
KR101668829B1 (en) | Texture enhancement method and apparatus reflected human visual characteristic on spatial frequency
JP2012023455A (en) | Image processing device, image processing method, and program
JP6325656B2 (en) | Apparatus, program and method for assisting focus evaluation
JP5614550B2 (en) | Image processing method, image processing apparatus, and program
JP5410378B2 (en) | Video signal correction apparatus and video signal correction program
JP4246178B2 (en) | Image processing apparatus and image processing method
KR101514152B1 (en) | Method and apparatus for improving image quality using singular value decomposition
JPWO2006117919A1 (en) | Image processing method, image processing apparatus, and image processing program
JP7437921B2 (en) | Image processing device, image processing method, and program
JP5247628B2 (en) | Image processing apparatus and method, and image display apparatus and method
JP7365206B2 (en) | Image processing device, image processing method, and program
JP5247633B2 (en) | Image processing apparatus and method, and image display apparatus and method
JP5349204B2 (en) | Image processing apparatus and method, and image display apparatus and method
JP5247634B2 (en) | Image processing apparatus and method, and image display apparatus and method
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15773519; Country of ref document: EP; Kind code of ref document: A1
ENP | Entry into the national phase | Ref document number: 2016511656; Country of ref document: JP; Kind code of ref document: A
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 15773519; Country of ref document: EP; Kind code of ref document: A1