CN101404733B - Video signal processing apparatus, video signal processing method and video display apparatus - Google Patents


Info

Publication number
CN101404733B
CN101404733B · CN101404733A · CN2008101682464A · CN200810168246A
Authority
CN
China
Prior art keywords
image
unit
phase difference
frame
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008101682464A
Other languages
Chinese (zh)
Other versions
CN101404733A (en)
Inventor
影山昌广
浜田宏一
米司健一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maxell Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2007260488A (JP5250233B2)
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of CN101404733A
Application granted
Publication of CN101404733B


Landscapes

  • Television Systems (AREA)
  • Liquid Crystal Display Device Control (AREA)
  • Picture Signal Circuits (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to a video signal processing apparatus, a video signal processing method, and a video display device for achieving high resolution of an input video signal using a small number of frames. The video signal processing apparatus comprises: an input unit into which video frames are input; and a resolution converter unit for obtaining an output video frame by increasing the number of pixels constituting an input video frame. The resolution converter unit has a same-brightness direction estimation unit configured to produce a sampling phase difference for each piece of video data by estimating a same-brightness direction for each piece of video data on the input video frame, and the resolution converter unit conducts high-resolution processing of the video using the sampling phase difference produced by the same-brightness direction estimation unit.

Description

Image signal processing device, image signal processing method, and image display device
Technical Field
The present invention relates to a technique for increasing the resolution of an image signal, and more particularly to a technique for increasing the resolution by combining a plurality of image frames, thereby increasing the number of pixels constituting an image frame and removing unnecessary aliasing (folded) components.
Background
In recent years, television receivers have been increasing in size, and image signals input from broadcasting, communication, storage media, and the like are generally not displayed as they are, but are displayed after the number of pixels in the horizontal and vertical directions has been increased by digital signal processing. In this case, the resolution cannot be improved simply by increasing the number of pixels with a generally known interpolation low-pass filter based on a sinc function, spline function, or the like.
Accordingly, patent document 1, patent document 2, and non-patent document 1 disclose techniques (hereinafter referred to as conventional techniques) in which a plurality of input image frames (hereinafter simply referred to as frames) are combined into one frame, thereby increasing the number of pixels and the resolution.
[ patent document 1] Japanese patent application laid-open No. 8-336046
[ patent document 2] Japanese patent application laid-open No. 9-69755
[Non-patent document 1] Shin Aoki, "Super-resolution processing by a plurality of digital image data", Ricoh Technical Report No. 24, pp. 19-25, November 1998
Disclosure of Invention
In these conventional techniques, the resolution is increased by three processes: (1) position estimation, (2) wide-band interpolation, and (3) weighted sum. Here, (1) position estimation means estimating the difference in sampling phase (sampling position) of each piece of image data using the image data of the plurality of input frames. (2) Wide-band interpolation increases the density of the image data by interpolating pixels (sampling points) with a wide-band low-pass filter that passes all high-frequency components of the original signal, including the folded component. (3) The weighted sum calculates a weighted sum using weighting coefficients corresponding to the sampling phase of each piece of high-density data, thereby canceling the folded components generated at pixel sampling and restoring the high-frequency components of the original signal.
Fig. 2 shows an outline of these high-resolution techniques. As shown in fig. 2(a), it is assumed that frame #1(201), frame #2(202), and frame #3(203) at different points on the time axis are input and combined to obtain an output frame (206). For simplicity, consider a case where the subject moves (204) in the horizontal direction and the resolution is increased by one-dimensional signal processing on a horizontal line (205). At this time, as shown in figs. 2(b) and 2(d), a positional deviation of the signal waveform occurs between frame #2(202) and frame #1(201) in accordance with the amount of movement (204) of the subject. This amount of positional deviation is obtained by the above (1) position estimation; as shown in fig. 2(c), frame #2(202) is motion-compensated (207) so that the positional deviation disappears, and the phase difference θ (211) between the sampling phases (209)(210) of the pixels (208) of each frame is obtained. By performing the above (2) wide-band interpolation and (3) weighted sum based on this phase difference θ (211), a new pixel (212) is generated midway between the original pixels (208) (at phase difference θ = π), as shown in fig. 2(e), thereby achieving high resolution. The weighted sum of (3) will be described later.
In practice, the motion of the subject is not limited to parallel translation; motion accompanied by rotation, enlargement, reduction, and the like must also be considered. However, if the time interval between frames is small or the motion of the subject is slow, the motion can be regarded as locally approximating parallel translation.
When the resolution is doubled in the one-dimensional direction by the conventional techniques described in patent document 1, patent document 2, and non-patent document 1, the signals of at least three frame images must be used for the weighted sum of (3), as shown in fig. 3. Here, fig. 3 shows the spectrum of each component in the one-dimensional frequency domain. In the figure, the distance from the frequency axis represents the signal intensity, and the rotation angle around the frequency axis represents the phase. The weighted sum of (3) is explained in detail below.
In the wide-band interpolation of (2), if pixel interpolation is performed with a wide-band low-pass filter that passes a band twice the Nyquist frequency (frequencies from 0 to fs), the result is the sum of a component identical to the original signal (hereinafter, the original component) and folded components whose phases correspond to the sampling phases. If this (2) wide-band interpolation is applied to the signals of three frame images, then as shown in fig. 3(a), the phases of the original components (301)(302)(303) of the frames all match, while the phases of the folded components (304)(305)(306) are rotated in accordance with the sampling phase difference of each frame. To make these phase relationships easy to understand, fig. 3(b) extracts and shows the phase relationship of the original component of each frame, and fig. 3(c) that of the folded component of each frame.
Here, by appropriately choosing the coefficients by which the signals of the three frame images are multiplied and taking the weighted sum of (3), the folded components (304)(305)(306) of the respective frames can be canceled, and only the original components extracted. For the vector sum of the folded components (304)(305)(306) to be 0, that is, for both the Re-axis (real-axis) and Im-axis (imaginary-axis) components to be 0, at least three folded components are required. Therefore, to double the resolution, that is, to remove one aliasing component, the signals of at least three frame images must be used.
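The three-frame requirement can be illustrated numerically (an illustrative example, not taken from the cited documents): if three frames are offset by exactly 1/3 pixel each (phases 0, 2π/3, 4π/3), equal weights of 1/3 preserve the original component while the folded-component vectors, which sum like the cube roots of unity, cancel:

```python
import cmath

# Sampling-phase differences of three frames (an illustrative choice:
# the frames are offset by exactly 1/3 pixel, i.e. 2*pi/3 in phase).
phases = [0.0, 2 * cmath.pi / 3, 4 * cmath.pi / 3]
weights = [1 / 3, 1 / 3, 1 / 3]

# After wide-band interpolation, each frame contributes its original
# component (phase-aligned, represented by 1) plus a folded component
# whose phase is rotated by that frame's sampling-phase difference.
original_sum = sum(w * 1 for w in weights)
folded_sum = sum(w * cmath.exp(-1j * p) for w, p in zip(weights, phases))

print(abs(original_sum))  # 1.0: original component preserved at unity gain
print(abs(folded_sum))    # ~0: the three folded components cancel
```

With only two real weights, two folded vectors at arbitrary phases generally cannot be driven to zero while keeping the original component nonzero, which is why the conventional techniques need a third frame.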
Similarly, as described in patent document 1, patent document 2, and non-patent document 1, when the resolution of a two-dimensional (horizontal and vertical) input signal is increased, aliasing occurs in both the vertical and horizontal directions. To double the band of the original signal in both directions, where three folded components overlap one another, 2 × 3 + 1 = 7 pieces of digital data (the signals of 7 frame images) are necessary to eliminate the folded components.
The conventional techniques therefore require large-scale frame memories and signal processing circuits, and are uneconomical. Furthermore, since the positions of a plurality of temporally separated frame images must be estimated accurately, the configuration becomes complicated. That is, with the conventional techniques it is difficult to increase the resolution of moving image frames such as television broadcast signals.
In addition, although interlaced scanning is the mainstream in current television broadcast signals, patent document 1, patent document 2, and non-patent document 1 neither disclose nor suggest increasing the resolution of interlaced signals or interlace-progressive conversion (I-P conversion).
In addition, in current digital television broadcasting using terrestrial waves and satellites (BS, CS), programs are broadcast using HD (High Definition) image signals in addition to SD (Standard Definition) image signals. However, not every program is produced with image signals captured by an HD camera; image signals captured by an SD camera are converted (up-converted) by an SD→HD converter into signals having the same number of pixels as HD, and the broadcast switches between the two for each program or each scene.
In a conventional receiver, an image with high resolution is reproduced when the received signal is an image signal captured by an HD camera, and an image with low resolution is reproduced when the received signal is an image signal after SD→HD conversion (up-conversion). There is therefore a problem that the resolution switches frequently for each program or scene, making viewing difficult.
In the above-described conventional techniques, since the high-resolution processing relies on a difference in sampling phase (sampling position), there is a problem that no resolution-enhancing effect is obtained for a signal in which no sampling phase difference occurs, that is, when the subject is stationary or when the amount of movement of the subject is exactly an integral multiple of the pixel interval.
An object of the present invention is to more suitably increase the resolution of an image signal.
According to the present invention, it is possible to more appropriately increase the resolution of an image signal.
Drawings
These and other features, objects and advantages of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings. Wherein:
fig. 1 is an explanatory view of embodiment 1 of the present invention.
Fig. 2 is a diagram illustrating an example of an operation of a general high resolution video signal processing.
Fig. 3 is a diagram illustrating an operation of the related art.
Fig. 4 is a diagram for explaining the operation of example 1 of the present invention.
Fig. 5 is an explanatory view of embodiment 1 of the present invention.
Fig. 6 is an explanatory view of embodiment 1 of the present invention.
Fig. 7 is an explanatory view of embodiment 1 of the present invention.
Fig. 8 is an explanatory view of embodiment 1 of the present invention.
Fig. 9 is an explanatory view of embodiment 1 of the present invention.
Fig. 10 is an explanatory view of embodiment 3 of the present invention.
Fig. 11 is an explanatory view of embodiment 5 of the present invention.
Fig. 12 is an explanatory view of embodiment 5 of the present invention.
Fig. 13 is an explanatory view of embodiment 5 of the present invention.
Fig. 14 is an explanatory view of embodiment 2 of the present invention.
Fig. 15 is an explanatory view of embodiment 4 of the present invention.
Fig. 16 is an explanatory view of embodiment 6 of the present invention.
Fig. 17 is a diagram for explaining an operation of the embodiment of the present invention and a conventional technique.
Fig. 18 is an explanatory view of embodiment 2 of the present invention.
Fig. 19 is an explanatory view of embodiment 12 of the present invention.
Fig. 20 is an explanatory view of embodiment 7 of the present invention.
Fig. 21 is an explanatory view of embodiment 8 of the present invention.
Fig. 22 is an explanatory view of embodiment 7 of the present invention.
Fig. 23 is an explanatory view of embodiment 7 of the present invention.
Fig. 24 is an explanatory view of embodiment 7 of the present invention.
Fig. 25 is an explanatory view of embodiment 7 of the present invention.
Fig. 26 is an explanatory view of embodiment 7 of the present invention.
Fig. 27 is an explanatory view of embodiment 9 of the present invention.
Fig. 28 is an explanatory view of embodiment 9 of the present invention.
Fig. 29 is an explanatory view of embodiment 9 of the present invention.
Fig. 30 is an explanatory view of embodiment 9 of the present invention.
Fig. 31 is an explanatory view of embodiment 9 of the present invention.
Fig. 32 is an explanatory view of embodiment 9 of the present invention.
Fig. 33 is an explanatory view of embodiment 10 of the present invention.
Fig. 34 is a diagram for explaining an operation of the embodiment of the present invention and a conventional technique.
Fig. 35 is an explanatory view of embodiment 13 of the present invention.
Fig. 36 is an explanatory view of embodiment 17 of the present invention.
Fig. 37 is a diagram for explaining an operation of the related art.
Fig. 38 is an explanatory view of embodiment 11 of the present invention.
Fig. 39 is an explanatory view of embodiment 11 of the present invention.
Fig. 40 is an explanatory view of embodiment 11 of the present invention.
Fig. 41 is an explanatory view of embodiment 11 of the present invention.
Fig. 42 is an explanatory view of embodiment 11 of the present invention.
Fig. 43 is an explanatory view of embodiment 12 of the present invention.
Fig. 44 is a diagram for explaining a state of the embodiment of the present invention and a difference in operation of the related art.
Fig. 45 is an explanatory view of embodiment 21 of the invention.
Fig. 46 is an explanatory view of embodiment 21 of the invention.
Fig. 47 is an explanatory view of embodiment 21 of the present invention.
Fig. 48 is an explanatory view of embodiment 21 of the invention.
Fig. 49 is an explanatory view of embodiment 21 of the invention.
Fig. 50 is an explanatory view of embodiment 21 of the invention.
Fig. 51 is an explanatory view of embodiment 22 of the present invention.
Fig. 52 is an explanatory view of embodiment 25 of the present invention.
Fig. 53 is an explanatory view of embodiment 25 of the present invention.
Fig. 54 is an explanatory view of embodiment 27 of the invention.
Fig. 55 is an explanatory view of embodiment 26 of the invention.
Fig. 56 is an explanatory view of embodiment 30 of the invention.
Fig. 57 is an explanatory view of embodiment 30 of the invention.
Fig. 58 is an explanatory view of embodiment 30 of the invention.
Detailed Description
Embodiments of the present invention are described below. It is to be understood that the embodiments below admit numerous changes and modifications without departing from the scope of the present invention; the present invention therefore includes not only the following embodiments but also various modifications and alterations within the scope of the claims.
Embodiments of the present invention are described below with reference to the drawings.
In the drawings, the components denoted by the same reference numerals have the same functions.
The expression "phase" in the description and drawings of the present specification, when used for a two-dimensional image, includes the meaning of "position" in the two-dimensional image space. This position is expressed with fractional-pixel accuracy.
Note that the expressions "up-rate" and "up-conversion" in each description of the present specification and each drawing include the meanings of "up-rate processing" and "up-conversion processing", respectively. Both denote conversion processing for increasing the number of pixels of an image (pixel number increasing processing) or conversion processing for enlarging an image (image enlargement conversion processing).
Note that the expressions "down-rate" and "down-conversion" in each description of the present specification and each drawing include the meanings of "down-rate processing" and "down-conversion processing", respectively. Both denote conversion processing for reducing the number of pixels of an image (pixel number reduction processing) or conversion processing for reducing an image (image reduction conversion processing).
The expression "motion compensation" in each description of the present specification and each drawing includes the meaning of calculating a phase difference or a sampling phase difference, that is, a difference in spatial position, and performing position alignment.
In the following description of the respective embodiments, the method described in reference 1 or reference 2 may be used for the above (1) position estimation. For the above (2) wide-band interpolation, a general low-pass filter having a passband twice the Nyquist frequency, as described in non-patent document 1, may be used.
[Reference 1] Shigeru Ando, "A velocity vector distribution measurement system for images using the spatio-temporal differentiation algorithm", Transactions of the Society of Instrument and Control Engineers, Vol. 22, No. 12, pp. 1330-1336, 1986
[Reference 2] Kobayashi et al., "A phase-only correlation method for images based on the DCT", IEICE Technical Report, ITS2005-92, IE2005-299 (2006-02), pp. 73-78
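As a loose illustration of the gradient-based approach of reference 1 (a hypothetical 1-D sketch, not the patent's or the reference's actual implementation): for a small shift d, f2(x) ≈ f1(x) − d·f1′(x), so d can be estimated by least squares from the frame difference and the spatial gradient. The signal and shift below are made up for the demo:

```python
import math

N = 200
true_shift = 0.3  # sub-pixel shift, in pixels (illustrative value)
f1 = [math.sin(0.2 * n) for n in range(N)]
f2 = [math.sin(0.2 * (n - true_shift)) for n in range(N)]

num = den = 0.0
for n in range(1, N - 1):
    grad = (f1[n + 1] - f1[n - 1]) / 2.0  # central-difference spatial gradient
    diff = f1[n] - f2[n]                  # inter-frame (temporal) difference
    num += diff * grad
    den += grad * grad

estimated_shift = num / den
print(estimated_shift)  # close to 0.3
```

The least-squares form keeps the estimate stable where the gradient is small; real implementations also handle large displacements (e.g. by coarse block matching first) and two dimensions.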
In the following embodiments, the expression "SR signal" is an abbreviation of "Super Resolution signal (Super Resolution signal)".
[ example 1]
Fig. 1 shows an image signal processing apparatus according to embodiment 1 of the present invention; its features are described below. The image signal processing apparatus of the present embodiment is suitable for an image display apparatus such as a television receiver. In the following description of the present embodiment, an image display device is taken as an example of the image signal processing device.
In fig. 1, the image signal processing apparatus of the present embodiment includes: an input unit (1) into which a frame sequence of a moving image, such as a television broadcast signal, is input; a resolution conversion unit (2) for increasing the resolution of the frames input from the input unit (1); and a display unit (3) for displaying an image on the basis of the frames whose resolution has been increased by the resolution conversion unit (2). The display unit (3) is, for example, a plasma display panel, a liquid crystal display panel, or an electron-emission type display panel. The details of the resolution conversion unit (2) are described below.
In fig. 1, first, a position estimation unit (101) estimates, with reference to the sampling phase (sampling position) of the pixel to be processed in frame #1 input from the input unit (1), the position of the corresponding pixel in frame #2, and obtains the sampling phase difference θ (102) for each pixel.
Then, the up-raters (103)(104) of a motion compensation/up-rate unit (115) perform motion compensation on frame #2 using the information of the phase difference θ (102) to align it with frame #1, and double the number of pixels of frame #1 and frame #2, respectively, to increase their density. A phase shift unit (116) shifts the phase of the densified data by a predetermined amount. Here, π/2 phase shifters (106)(108) can be used as the means for shifting the phase of the data by a fixed amount. In addition, to compensate for the delay caused by the π/2 phase shifters (106)(108), the densified signals of frames #1 and #2 are delayed by delay units (105)(107).
In a folded-component removing unit (117), the output signals of the delay units (105)(107) and the π/2 phase shifters (106)(108) are multiplied by the coefficients C0, C2, C1, C3 generated by a coefficient determiner (109) on the basis of the phase difference θ (102), using multipliers (110)(112)(111)(113), and the signals are added by an adder (114) to obtain the output. The output is supplied to the display unit (3). The position estimation unit (101) can be realized by using the conventional technique as it is. The up-raters (103)(104), the π/2 phase shifters (106)(108), and the folded-component removing unit (117) are described in detail later.
Fig. 4 shows the operation of embodiment 1 of the present invention. The figure shows the outputs of the delay units (105)(107) and π/2 phase shifters (106)(108) of fig. 1 in the one-dimensional frequency domain. In fig. 4(a), the up-rated signals of frame #1 and frame #2 output from the delay units (105)(107) are each the sum of the original components (401)(402) and the folded components (405)(406) folded back from the original sampling frequency (fs). Here, the folded component (406) is rotated by the phase difference θ (102).
On the other hand, the up-rated signals of frame #1 and frame #2 output from the π/2 phase shifters (106)(108) are each the sum of the π/2-phase-shifted original components (403)(404) and the π/2-phase-shifted folded components (407)(408). To make the phase relationships of the components shown in fig. 4(a) easy to understand, figs. 4(b) and 4(c) extract and show the original components and the folded components, respectively.
If the coefficients by which these components are multiplied are determined so that the vector sum of the four components shown in fig. 4(b) has a Re-axis component of 1 and an Im-axis component of 0, while the vector sum of the four components shown in fig. 4(c) has both Re-axis and Im-axis components of 0, then taking the weighted sum cancels the folded components and extracts only the original components. That is, an image signal processing device can be realized that doubles the resolution in the one-dimensional direction using only two frame images. The details of the coefficient determination method are described later.
Fig. 5 shows the operation of the up-raters (103)(104) used in embodiment 1 of the present invention. In the figure, the horizontal axis represents frequency and the vertical axis represents gain (the ratio of the amplitude of the output signal to that of the input signal), showing the "frequency-gain" characteristic of the up-raters (103)(104). The up-raters (103)(104) use a frequency (2fs) twice the sampling frequency (fs) of the original signal as the new sampling frequency, insert sampling points of new pixels (zero points) exactly midway between the original pixels to double the number of pixels and increase the density, and apply a filter whose passband sets all frequencies between -fs and +fs to a gain of 2.0. Since the signal is digital, the characteristic repeats at frequencies that are integral multiples of 2fs, as shown in the figure.
Fig. 6 shows a specific example of the up-raters (103)(104) used in embodiment 1 of the present invention. The figure shows the tap coefficients of a filter obtained by inverse Fourier transform of the frequency characteristic shown in fig. 5. Each tap coefficient Ck (where k is an integer) follows the generally known sinc function, and to compensate for the sampling phase difference θ (102) of each pixel, Ck may be set to 2sin(πk + θ)/(πk + θ), i.e., the sinc function shifted by (-θ). In the up-rater (103), where the phase difference θ (102) is 0, Ck may be 2sin(πk)/(πk). If the phase difference θ (102) is decomposed into a phase difference in integer-pixel units (multiples of 2π) and a phase difference in fractional-pixel units, the integer-pixel part can be compensated by simple pixel shifting, and the filters of the up-raters (103)(104) are used to compensate the fractional-pixel part.
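As a rough illustration (not from the patent itself), the phase-compensating tap values above can be computed as follows. The function name and tap count are arbitrary, and the window-function truncation discussed later is omitted for brevity:

```python
import math

def uprate_taps(theta, num_taps=9):
    """Finite-length approximation of the up-rate filter taps
    Ck = 2*sin(pi*k + theta) / (pi*k + theta) described in the text.
    theta is the fractional-pixel sampling-phase difference in radians."""
    half = num_taps // 2
    taps = []
    for k in range(-half, half + 1):
        x = math.pi * k + theta
        # At x = 0 the ratio tends to its limit 2 (sinc peak).
        taps.append(2.0 if abs(x) < 1e-12 else 2.0 * math.sin(x) / x)
    return taps

# With theta = 0 the taps reduce to 2*sinc(k): 2 at k = 0, ~0 elsewhere.
print(uprate_taps(0.0, 5))  # approximately [0, 0, 2, 0, 0]
```

A nonzero theta shifts the sinc peak by the fractional-pixel amount, which is exactly how the up-rater realizes the fractional-pixel motion compensation.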
Fig. 7 shows an example of the operation of the π/2 phase shifters (106)(108) used in embodiment 1 of the present invention. A commonly known Hilbert transformer can be used as the π/2 phase shifters (106)(108).
In fig. 7(a), the horizontal axis represents frequency and the vertical axis represents gain (the ratio of the amplitude of the output signal to that of the input signal), showing the "frequency-gain" characteristic of the Hilbert transformer. The Hilbert transformer takes a frequency (2fs) twice the sampling frequency (fs) of the original signal as the new sampling frequency, and sets all frequency components between -fs and +fs, except 0, to a passband with a gain of 1.0.
In fig. 7(b), the horizontal axis represents frequency and the vertical axis represents phase difference (the difference between the phase of the output signal and that of the input signal), showing the "frequency-phase difference" characteristic of the Hilbert transformer. Here, frequency components between 0 and fs are delayed in phase by π/2, and frequency components between 0 and -fs are advanced in phase by π/2. Since the signal is digital, the characteristic repeats at frequencies that are integral multiples of 2fs, as shown in the figure.
Fig. 8 shows an example in which a Hilbert transformer constitutes the π/2 phase shifters (106)(108) used in embodiment 1 of the present invention. The figure shows the tap coefficients of a filter obtained by inverse Fourier transform of the frequency characteristics shown in fig. 7. Each tap coefficient Ck may be 0 when k = 2m (where m is an integer), and -2/(πk) when k = 2m + 1.
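A minimal sketch of the fig. 8 tap values (the function name and tap count are arbitrary; the sign convention follows the text):

```python
import math

def hilbert_taps(num_taps=9):
    """Truncated Hilbert-transformer taps per the text:
    Ck = 0 for even k, -2/(pi*k) for odd k."""
    half = num_taps // 2
    return [0.0 if k % 2 == 0 else -2.0 / (math.pi * k)
            for k in range(-half, half + 1)]

taps = hilbert_taps(9)
# The taps are odd-symmetric, C(-k) = -Ck, which is what preserves the
# constant pi/2 phase shift even after truncation.
print(taps)
```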
Alternatively, a differentiator can be used as the π/2 phase shifters (106)(108). In this case, differentiating the general sine-wave expression cos(ωt + α) with respect to t and multiplying by 1/ω gives d(cos(ωt + α))/dt × (1/ω) = -sin(ωt + α) = cos(ωt + α + π/2), realizing a π/2 phase shift. That is, the π/2 phase shift can be realized by taking the difference between the value of the target pixel and the value of an adjacent pixel and then applying a filter having a "frequency-amplitude" characteristic of 1/ω.
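The identity above can be checked numerically with a central-difference approximation of the derivative (an illustrative sketch; the constants are arbitrary, and the approximation is accurate only for frequencies well below the sampling rate):

```python
import math

w, a = 0.1, 0.5  # angular frequency (rad/sample) and initial phase
x = [math.cos(w * n + a) for n in range(100)]

n = 50
derivative = (x[n + 1] - x[n - 1]) / 2.0      # central-difference derivative
shifted = derivative / w                       # apply the 1/w gain correction
expected = math.cos(w * n + a + math.pi / 2)   # = -sin(w*n + a)

print(shifted, expected)  # nearly equal for small w
```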
Fig. 9 shows a specific example of the operation of the coefficient determiner (109) used in embodiment 1 of the present invention. As described with reference to fig. 4, the coefficients by which the components are multiplied are determined so that the vector sum of the four components shown in fig. 4(b) has a Re-axis component of 1 and an Im-axis component of 0, while the vector sum of the four components shown in fig. 4(c) has both Re-axis and Im-axis components of 0. In this way, an image signal processing apparatus can be realized that uses two frame images and doubles the resolution in the one-dimensional direction.
Here, as shown in fig. 1, let C0 be the coefficient for the output of the delay unit (105) (the sum of the original component and the folded component of the up-rated frame #1); C1 the coefficient for the output of the π/2 phase shifter (106) (the sum of the π/2-phase-shifted original and folded components of the up-rated frame #1); C2 the coefficient for the output of the delay unit (107) (the sum of the original component and the folded component of the up-rated frame #2); and C3 the coefficient for the output of the π/2 phase shifter (108) (the sum of the π/2-phase-shifted original and folded components of the up-rated frame #2).
Then, if the conditions shown in fig. 9(a) are satisfied, the simultaneous equations shown in fig. 9(b) are obtained from the phase relationships of the components shown in figs. 4(b) and 4(c), and solving these equations yields the result shown in fig. 9(c).
The coefficient determiner (109) of the present embodiment outputs coefficients C0, C1, C2, and C3 that satisfy the conditions of fig. 9(a), equivalently the equations of fig. 9(b) and the solution of fig. 9(c).
As an example, fig. 9(d) shows the values of the coefficients C0, C1, C2, and C3 when the phase difference θ (102) is varied in steps of π/8 from 0 to 2π. This corresponds to estimating the position with 1/16-pixel accuracy and motion-compensating the signal of the original frame #2 with respect to frame #1. When the value of the phase difference θ (102) is less than 0 or not less than 2π, it can be brought into the range of 0 to 2π by adding or subtracting an integral multiple of 2π, using the periodicity of the sin and cos functions.
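The cancellation conditions of fig. 9(a) can be checked numerically. The closed-form coefficients below are one consistent solution derived under an assumed sign convention for the folded-component phases; they are a sketch, and the patent's own fig. 9(c) table is authoritative:

```python
import cmath, math

def coefficients(theta):
    """One solution of the fig. 9(a) conditions (C0 + C2 = 1, C1 + C3 = 0,
    folded-component vector sum = 0) under the sign convention used below:
    C0 = C2 = 1/2, C1 = -C3 = sin(theta) / (2*(1 - cos(theta))).
    theta = 0 (no sampling-phase difference) is singular, matching the
    limitation noted earlier for stationary subjects."""
    c1 = math.sin(theta) / (2.0 * (1.0 - math.cos(theta)))
    return 0.5, c1, 0.5, -c1

theta = math.pi / 2
C0, C1, C2, C3 = coefficients(theta)

# Original components: phase-aligned (1) on the delay paths, -pi/2 shifted
# (-j) on the Hilbert paths. Folded components: frame #2's is rotated by
# -theta, and the Hilbert paths advance the folded component by +pi/2 (+j).
original = C0 * 1 + C1 * (-1j) + C2 * 1 + C3 * (-1j)
folded = C0 * 1 + C1 * 1j + C2 * cmath.exp(-1j * theta) + C3 * 1j * cmath.exp(-1j * theta)

print(original)     # (1+0j): original component extracted at unity gain
print(abs(folded))  # ~0: folded component cancelled
```

At θ = π the coefficients reduce to a plain average of the two half-pixel-offset frames (C1 = C3 = 0), consistent with the fig. 2(e) example.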
The up-samplers (103, 104) and the π/2 phase shifters (106, 108) would require an infinite number of taps to attain ideal characteristics, but limiting the number of taps and simplifying them poses no problem in practical use. In this case, a general window function (for example, a Hanning window function or a Hamming window function) may be applied to the tap coefficients. As long as the tap coefficients of the simplified Hilbert transformer are point-symmetric about C0, that is, C(−k) = −C(k) (k an integer), the phase can still be shifted by the fixed amount.
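As a sketch of such a simplified, finite-tap π/2 phase shifter, the following builds truncated ideal Hilbert-transformer taps tapered by a Hanning window; the tap count and the window choice are illustrative assumptions. The taps come out point-symmetric, C(−k) = −C(k), by construction:

```python
import math

def hilbert_taps(n_side):
    """Tap coefficients C(-n_side)..C(n_side) of a truncated Hilbert
    transformer: ideal taps 2/(pi*k) for odd k (0 for even k and k = 0),
    tapered by a Hanning window.  Tap count and window are illustrative."""
    taps = []
    for k in range(-n_side, n_side + 1):
        h = 0.0 if k % 2 == 0 else 2.0 / (math.pi * k)
        h *= 0.5 * (1.0 + math.cos(math.pi * k / (n_side + 1)))  # Hanning taper
        taps.append(h)
    return taps

# Because C(-k) = -C(k), the frequency response is purely imaginary:
# a fixed -pi/2 phase shift for positive frequencies (and +pi/2 for the
# mirrored aliasing band), with gain close to 1 over the passband.
```

Checking the discrete-time Fourier transform of the taps at, say, one quarter of the sampling rate shows a purely imaginary response with negative imaginary part, i.e. the intended −π/2 phase shift.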
Next, the difference between the operation of the image signal processing apparatus of embodiment 1 and that of the conventional technique will be described with reference to fig. 17. In fig. 17(a), an input image is prepared in which the subject moves in the right direction from frame #1 (1701) to frame #5 (1705). Here, observing the sampling phase of each frame as shown in fig. 17(b), the subject is intentionally moved so that the position of the corresponding pixel, relative to frame #1 (1701), is shifted by 1/4 pixel (= π/2) in frame #2 (1702), by 1 pixel (= 2π) in frame #3 (1703), by 5/4 pixels (= 5π/2) in frame #4 (1704), and by 2 pixels (= 4π) in frame #5 (1705). In this case, taking the phase of the aliasing component contained in the signal of frame #1 (1701) as the reference, the phase of the aliasing component contained in the signal of each frame can be represented as shown in fig. 17(c). When the resolution is to be doubled relative to the input image of fig. 17(a), the above-described conventional technique cannot make the vector sum of the aliasing components 0 with any three of the frames #1 (1701) to #5 (1705), and therefore cannot increase the resolution. With the present embodiment, on the other hand, the vector sum of the aliasing components can be made 0 using only two adjacent frames (for example, frame #1 (1701) and frame #2 (1702)), so high resolution can be achieved. That is, the operation of the present embodiment can be confirmed by using the input image of fig. 17(a) as a test pattern.
In the above description of embodiment 1, resolution enhancement in the horizontal direction was described as an example, but the embodiments of the present invention are not limited thereto and can also be applied to resolution enhancement in the vertical and oblique directions.
According to the image signal processing apparatus of embodiment 1 described above, the phase of the image signal of each of only two input image frames, fewer than in the conventional example, is shifted by a predetermined amount, generating two signals from each image signal. Thereby, four signals can be generated from the image signals of the two input image frames. Based on the phase difference between the two input image frames, a coefficient for combining the four signals so that their aliasing components cancel is calculated for each of the four signals, pixel by pixel. For each pixel of the generated image, the pixel values of the corresponding pixels of the four signals are multiplied by their respective coefficients and the products are summed, producing a new pixel value of the high-resolution image. By performing this processing for each pixel of the generated image, a new high-resolution image can be generated.
Thus, the image signal processing apparatus of embodiment 1 uses only two input image frames, fewer than in the conventional example, to generate from the input image a high-resolution image with reduced aliasing components.
Further, because the image signal processing apparatus of embodiment 1 uses only two input image frames, fewer than in the conventional example, the amount of required image processing can be reduced compared with the conventional example. Thus, an image signal processing apparatus that generates from the input image a high-resolution image with reduced aliasing components can be realized at lower cost than in the conventional example.
[Embodiment 2]
Next, embodiment 2 of the present invention will be described with reference to fig. 18 and 14.
Embodiment 2 shows an image signal processing method in which a control unit cooperating with software realizes processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 1.
First, an image processing apparatus for implementing the image signal processing method of the present embodiment will be described with reference to fig. 18. The image signal processing apparatus shown in fig. 18 includes: an input unit (1) to which an image signal such as a television broadcast signal is input; a storage unit (11) that stores software for processing the signal input from the input unit (1); a control unit (10) that processes the image signal input from the input unit (1) in cooperation with the software stored in the storage unit (11); a frame buffer #1 (21) and a frame buffer #2 (22) used by the control unit (10) for buffering data during the image signal processing; and a frame buffer #3 (23) for buffering the processed signal output from the control unit (10) to the display unit (3).
Here, the image signal processing apparatus shown in fig. 18 may have two input units (1), as many as the number of frames used for the image processing, or may have only one input unit (1) into which two frames are input in succession.
Here, the frame buffer #1 (21) and the frame buffer #2 (22) used for buffering data and the storage unit (11) storing the software may be configured from separate memory chips, or may be configured from one or more memory chips whose data addresses are divided among them.
In this embodiment, the control unit (10) performs image signal processing on an image signal input from the input unit (1) in cooperation with software stored in the storage unit (11), and outputs the image signal to the display unit (3). Details of this image signal processing will be described with reference to fig. 14.
Fig. 14 shows an example of a flowchart of the image signal processing method of the present embodiment. The flowchart of fig. 14 starts at step (1401), and in step (1418) the image data of each frame is up-sampled. That is, the image data of frame #1 is up-sampled and written into frame buffer #1 in step (1402), and the image data of frame #2 is up-sampled and written into frame buffer #2 in step (1403). Here, the up-sampling can be realized by first clearing the value of each frame buffer to 0 and then writing the data of each pixel.
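The "clear the buffer to 0, then write each pixel" up-sampling described above is plain zero insertion; a minimal sketch for one line of pixels:

```python
def upsample_x2(line):
    """Up-sample one line of pixels to twice the pixel count by zero
    insertion: clear the destination buffer to 0, then write each
    source pixel at every second position (a sketch of steps 1402/1403)."""
    out = [0] * (2 * len(line))      # buffer cleared to 0 once
    for i, value in enumerate(line):
        out[2 * i] = value           # write data for each pixel
    return out
```

The inserted zeros double the sampling rate without adding information; the spectrum of the result contains the original component plus the aliasing component that the subsequent processing removes.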
Next, in step (1404), the first pixel (for example, the upper-left pixel) of frame buffer #1 is set as the processing target, and the subsequent processing is looped until all the pixel data of frame buffer #1 has been processed.
In step (1405), the position of the corresponding pixel in the frame buffer #2 is estimated based on the target pixel in the frame buffer #1, and the phase difference θ is output. In this case, the above-described conventional technique can be used as it is as a method of estimating the position of the corresponding pixel.
In step (1406), based on the phase difference θ obtained in step (1405), the pixels in the vicinity of the corresponding pixel in frame buffer #2 are motion-compensated. In this case, it suffices to motion-compensate, as the nearby pixels, only the pixel data used in the π/2 phase shift of step (1408), that is, the pixel data within the range of action of the limited number of taps. The motion compensation operates in accordance with the operations described using figs. 5 and 6.
Next, in step (1419), the phase of the data in frame buffer #1 and in the motion-compensated frame buffer #2 is shifted by the predetermined amount. That is, in steps (1407) and (1408), the pixel data in each frame buffer is phase-shifted by π/2.
Next, in step (1420), the output data of step (1419) is multiplied by the coefficients C0, C1, C2, and C3, set based on the phase difference θ so as to satisfy the conditions of figs. 9(a), (b), and (c), and the products are added, thereby removing the aliasing components from the pixel data of frame buffers #1 and #2; the result is output to frame buffer #3. That is, in step (1409) the coefficients C0, C1, C2, and C3 are determined based on the phase difference θ; in steps (1410), (1411), (1412), and (1413) these coefficients are multiplied with the pixel data of frame buffers #1 and #2 and with the π/2 phase-shifted data, respectively; and in step (1414) all the products are added and output to frame buffer #3. The aliasing-removal operation corresponds to the operation described with reference to fig. 9.
Next, in step (1415), it is determined whether or not all the pixels in frame buffer #1 have been processed. If not, the next pixel (for example, the pixel adjacent on the right) is set as the processing target in step (1416) and the flow returns to step (1405); if so, the processing ends at step (1417).
After the image signal processing in the flowchart shown in fig. 14, the signal buffered in the frame buffer #3 as shown in fig. 18 can be output to the display unit (3) in units of frames or pixels.
By performing the above-described processing, a signal with increased resolution can be output to frame buffer #3 using the pixel data of frame buffer #1 and frame buffer #2. For application to moving images, the processing from step (1401) to step (1417) may simply be repeated for each frame.
As for the image signal processing method of embodiment 2, the difference in operation from the above-described conventional technique can be confirmed in the same manner as in the description of fig. 17; the result is the same as for embodiment 1, and the description thereof is therefore omitted.
According to the image signal processing method of embodiment 2 described above, the phase of the image signal of each of only two input image frames, fewer than in the conventional example, is shifted by a predetermined amount, generating two signals from each image signal. Thereby, four signals can be generated from the image signals of the two input image frames. Based on the phase difference between the two input image frames, a coefficient for combining the four signals so that their aliasing components cancel is calculated for each of the four signals, pixel by pixel. For each pixel of the generated image, the pixel values of the corresponding pixels of the four signals are multiplied by their respective coefficients and the products are summed, producing a new pixel value of the high-resolution image. By performing this processing for each pixel of the generated image, a new high-resolution image can be generated.
Thus, the image signal processing method of embodiment 2 uses only two input image frames, fewer than in the conventional example, and can generate from the input image a high-resolution image with reduced aliasing components.
Further, because the image signal processing method of embodiment 2 uses only two input image frames, fewer than in the conventional example, the amount of required image processing can be reduced compared with the conventional example.
[Embodiment 3]
Fig. 10 shows embodiment 3 of the present invention. The configuration shown in this figure simplifies the configuration of fig. 1 by using the relationships among the coefficients C0, C1, C2, and C3 shown in fig. 9(c). That is, since C0 = C2 = 1/2 and C1 = −C3 = −(1 + cos θ)/(2 sin θ), a sum signal and a difference signal are generated by an adder (1001) and a subtractor (1004) from the signals of the up-sampled frame #1 and the motion-compensated, up-sampled frame #2. The sum signal passes through the fs cut-off filter (1002), is multiplied by C0 (= 0.5) by a multiplier (1003), and is input to an adder (1008). The fs cut-off filter (1002) is a filter whose gain is zero at the sampling frequency (fs) before up-sampling, and can be realized, for example, with the tap coefficients shown at (1011) in the figure. In the "frequency-gain" characteristic of the Hilbert transformer (1005) shown in fig. 7(a), the gain at the frequency fs is zero, so the aliasing component at fs cannot be removed there; the fs cut-off filter (1002) is intended to prevent an unnecessary component at the frequency fs from remaining. Therefore, if a mechanism capable of performing the π/2 phase shift including the component at the frequency fs is used instead of the Hilbert transformer (1005), the fs cut-off filter (1002) is unnecessary.
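The saving can be checked numerically: with C2 = C0 and C3 = −C1, the four-multiplier combination of fig. 1 and the two-multiplier sum/difference combination of fig. 10 give identical outputs. In the sketch below, s1 and s2 stand for the up-sampled signals and h1 and h2 for their π/2 phase-shifted versions (by linearity, the Hilbert transform of s1 − s2 equals h1 − h2); the sample values are arbitrary:

```python
import math
import random

def combine_four(c0, c1, s1, h1, s2, h2):
    """Four-multiplier form of fig. 1, with C2 = C0 and C3 = -C1."""
    return c0 * s1 + c1 * h1 + c0 * s2 + (-c1) * h2

def combine_sum_diff(c0, c1, s1, h1, s2, h2):
    """Two-multiplier sum/difference form of fig. 10."""
    return c0 * (s1 + s2) + c1 * (h1 - h2)

random.seed(1)
theta = 3 * math.pi / 8                        # arbitrary nonzero phase difference
c0 = 0.5
c1 = -(1 + math.cos(theta)) / (2 * math.sin(theta))
samples = [tuple(random.uniform(-1.0, 1.0) for _ in range(4)) for _ in range(100)]
max_err = max(abs(combine_four(c0, c1, *s) - combine_sum_diff(c0, c1, *s))
              for s in samples)
```

Both forms agree to rounding error, which is why the circuit of fig. 10 needs only half the multipliers of fig. 1.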
On the other hand, the difference signal is phase-shifted by the predetermined amount (= π/2) by a Hilbert transformer (1005), then multiplied by a multiplier (1006) with the coefficient C1 determined by a coefficient determining unit (1007) based on the phase difference θ (102), and added by the adder (1008) to obtain the output. Here, the phase shifter (1009) comprising the fs cut-off filter (1002) and the Hilbert transformer (1005) can be realized with about half the circuit scale of the phase shifters (116) shown in fig. 1. The coefficient determining unit (1007) only has to output the coefficient C1 shown in fig. 9(c), and the aliasing component removing unit (1010), comprising the adder (1001), the subtractor (1004), the multipliers (1003) and (1006), the adder (1008), and the coefficient determining unit (1007), reduces the number of multipliers and can therefore be realized with a smaller circuit scale than the aliasing component removing unit (117) shown in fig. 1.
As for the image signal processing apparatus of embodiment 3, the difference in operation from the above-described conventional technique can be confirmed in the same manner as in the description of fig. 17; the result is the same as for embodiment 1, and the description thereof is therefore omitted.
The image signal processing apparatus and the image signal processing method of embodiment 3 can also be applied to resolution enhancement in the vertical and oblique directions.
The image signal processing apparatus of embodiment 3 described above achieves the effects of the image signal processing apparatus of embodiment 1 while being realizable with a smaller circuit scale than the apparatus of embodiment 1, thereby achieving lower cost.
[Embodiment 4]
An image signal processing method according to embodiment 4 of the present invention will be described with reference to fig. 15.
Embodiment 4 shows an image signal processing method in which a control unit cooperating with software realizes processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 3. The image processing apparatus that performs the image signal processing method according to the present embodiment is the same as that of embodiment 2 shown in fig. 18, and therefore, the description thereof is omitted.
Fig. 15 shows an example of a flowchart of the operation of the present embodiment. The flowchart of fig. 15 starts at step (1501), and in step (1518) the image data of each frame is up-sampled. That is, the image data of frame #1 is up-sampled and written into frame buffer #1 in step (1502), and the image data of frame #2 is up-sampled and written into frame buffer #2 in step (1503). Here, the up-sampling can be realized by first clearing the value of each frame buffer to 0 and then writing the data of each pixel.
Next, in step (1504), the first pixel (for example, the upper-left pixel) of frame buffer #1 is set as the processing target, and the subsequent processing is looped until all the pixel data of frame buffer #1 has been processed.
In step (1505), the position of the corresponding pixel in the frame buffer #2 is estimated with respect to the target pixel in the frame buffer #1, and the phase difference θ is output. In this case, the above-described conventional technique can be used as it is as a method of estimating the position of the corresponding pixel.
In step (1506), based on the phase difference θ obtained in step (1505), motion compensation is performed on the pixels in the vicinity of the corresponding pixel in frame buffer #2. In this case, it suffices to motion-compensate, as the nearby pixels, only the pixel data used in the Hilbert transform of step (1510), that is, the pixel data within the range of action of the limited number of taps. The motion compensation operates in accordance with the operations described using figs. 5 and 6.
Next, in step (1520), the aliasing components are removed from the pixel data of frame buffers #1 and #2 based on the phase difference θ, and the result is output to frame buffer #3. First, in step (1507), the value of the pixel data of frame buffer #1 is added to the value of the pixel data of the motion-compensated frame buffer #2, and the component at the frequency fs is cut off in step (1509). The operation of the fs cut-off filtering in step (1509) is the same as that of the fs cut-off filter (1002) shown in fig. 10.
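A minimal sketch of such an fs cut-off filter: a 3-tap low-pass whose gain is zero at the pre-up-sampling sampling frequency fs, which is the Nyquist frequency of the doubled rate. The tap values (1/4, 1/2, 1/4) are an illustrative assumption; the actual taps are those shown at (1011) in fig. 10.

```python
def fs_cut(signal):
    """3-tap low-pass with zero gain at the Nyquist frequency of the
    doubled sampling rate (= the pre-up-sampling rate fs).  The taps
    (1/4, 1/2, 1/4) are assumed for illustration."""
    taps = (0.25, 0.5, 0.25)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            k = i + j - 1                    # neighbor index for this tap
            if 0 <= k < len(signal):         # borders: missing neighbors count as 0
                acc += t * signal[k]
        out.append(acc)
    return out

# A component at frequency fs alternates sign at the doubled rate;
# away from the borders the filter nulls it completely.
alternating = [(-1) ** n for n in range(8)]
filtered = fs_cut(alternating)
```

Every interior output sample of `filtered` is exactly 0, showing the spectral zero at fs that the Hilbert-transform path cannot provide.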
Further, in step (1508), the value of the pixel data of the motion-compensated frame buffer #2 is subtracted from the value of the pixel data of frame buffer #1. In step (1519), the result of the subtraction is phase-shifted by the predetermined amount; that is, the Hilbert transform of step (1510) is performed using the subtracted data in the vicinity. The phase shift operation corresponds to the operations described with reference to figs. 7 and 8.
Next, in step (1511) the summed data is multiplied by the coefficient C0 (= 0.5); in step (1512) the coefficient C1 is determined based on the phase difference θ; in step (1513) the Hilbert-transformed data is multiplied by C1; and in step (1514) the two products are added and the result is output to frame buffer #3. The aliasing-removal operation corresponds to the operation described with reference to fig. 10.
Next, in step (1515), it is determined whether or not all the pixels in frame buffer #1 have been processed. If not, the next pixel (for example, the pixel adjacent on the right) is set as the processing target in step (1516) and the flow returns to step (1505); if so, the processing ends at step (1517).
After the image signal processing in the flowchart shown in fig. 15, the signal buffered in the frame buffer #3 as shown in fig. 18 can be output to the display unit (3) in units of frames or pixels.
By performing the above-described processing, a signal with increased resolution can be output to frame buffer #3 using the pixel data of frame buffer #1 and frame buffer #2. For application to moving images, the processing from step (1501) to step (1517) may simply be repeated for each frame.
As for the image signal processing method of embodiment 4, the difference in operation from the above-described conventional technique can be confirmed using fig. 17; the result is the same as for embodiment 1, and the description thereof is therefore omitted.
The image signal processing method of embodiment 4 can also be applied to high resolution in the vertical direction or the oblique direction.
The image signal processing method of embodiment 4 described above has the same effect of increasing the resolution of an image signal as the image signal processing method of embodiment 2. In addition, the image signal processing method of embodiment 4 has the effect that equivalent signal processing can be realized with a smaller amount of processing (a smaller number of operations) than the image signal processing method of embodiment 2, because some of its processing steps combine what embodiment 2 performs separately.
[Embodiment 5]
Fig. 11 shows embodiment 5 of the present invention. The configuration shown in this figure is based on the configuration of fig. 10 and switches to the output of an auxiliary pixel interpolation unit (1105) when the phase difference θ is at or near 0, in order to avoid the problems shown in fig. 9(d): the coefficients C1 and C3 are indeterminate when the phase difference θ is 0, and become large, and hence sensitive to noise and the like, when θ approaches 0. That is, a general interpolation low-pass filter (1101) is prepared as a bypass path; in addition to the above-described coefficients C0 and C1, a new coefficient C4 is generated by a coefficient determining unit (1103); and the output of the interpolation low-pass filter (1101) is multiplied by the coefficient C4 by a multiplier (1102), added by an adder (1104) to the resolution-enhanced signal, and output.
The configurations other than the interpolation low-pass filter (1101), multiplier (1102), coefficient determiner (1103), adder (1104), and auxiliary pixel interpolation unit (1105) are the same as those of embodiment 3 shown in fig. 10, and therefore, the description thereof is omitted.
Fig. 12 shows a specific example of the interpolation low-pass filter (1101) used in embodiment 5 of the present invention. This figure shows the tap coefficients of a filter obtained by inverse Fourier transforming a frequency characteristic whose cutoff frequency is 1/2 of the original sampling frequency fs. In this case, each tap coefficient Ck (k an integer) is the general sinc function, Ck = sin(πk/2)/(πk/2).
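These taps can be generated directly from the formula; a short sketch (the tap count is an illustrative assumption):

```python
import math

def interp_lpf_taps(n_side):
    """Tap coefficients Ck = sin(pi*k/2)/(pi*k/2) for k = -n_side..n_side
    (with C0 = 1), the sinc function with cutoff fs/2 shown in fig. 12.
    The tap count is an illustrative assumption."""
    taps = []
    for k in range(-n_side, n_side + 1):
        if k == 0:
            taps.append(1.0)            # limit of sin(x)/x at x = 0
        else:
            x = math.pi * k / 2
            taps.append(math.sin(x) / x)
    return taps
```

Note that all even-k taps vanish, so when this filter is applied to a zero-stuffed line the original samples pass through unchanged and only the inserted positions receive interpolated values.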
Fig. 13 shows a specific example of the coefficient determiner (1103) used in embodiment 5 of the present invention. This figure shows an operation in which, based on the coefficients C0 and C1 shown in fig. 9(d), the new coefficient C4 is normally 0, but when the phase difference θ is at or near 0, the value of the coefficient C1 is forcibly set to 0 and the value of the coefficient C4 is set to 1.0. With this operation, in the configuration shown in fig. 11, the output of the adder (1104) can be automatically switched to the output of the interpolation low-pass filter (1101) when the phase difference θ (102) is 0 or near 0. Alternatively, as the phase difference θ approaches 0, the coefficients may be changed continuously and gradually rather than switched abruptly. Furthermore, when the position estimation unit ((101) in fig. 1) determines that the pixel corresponding to the pixel to be processed in frame #1 is not present in frame #2, the output of the adder (1104) may likewise be automatically switched to the output of the interpolation low-pass filter (1101) by controlling the coefficients in the same way as when the phase difference θ (102) is near 0.
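A sketch of this switching behavior (wrap-around of θ using the periodicity of sin/cos, then the near-zero fallback). The threshold eps is an illustrative assumption, as is forcing C0 to 0 in the fallback branch so that the adder output reduces to the bypass path alone; fig. 13 itself is described only as forcing C1 = 0 and C4 = 1.0:

```python
import math

def coefficients_c0_c1_c4(theta, eps=1e-3):
    """Coefficient switching of embodiment 5.  Normally C4 = 0; when the
    phase difference is at or near 0, C1 is forced to 0 and C4 to 1.0.
    eps is an assumed threshold; forcing C0 to 0 in the fallback (so
    only the bypass interpolation path contributes) is also assumed."""
    theta %= 2 * math.pi                  # periodicity of sin/cos
    if min(theta, 2 * math.pi - theta) < eps:
        return 0.0, 0.0, 1.0              # C0, C1, C4: bypass path only
    c0 = 0.5
    c1 = -(1 + math.cos(theta)) / (2 * math.sin(theta))
    return c0, c1, 0.0
```

For a gradual transition instead of a hard switch, the fallback branch could blend the two coefficient sets as a function of θ rather than return fixed values.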
As for the image signal processing apparatus of embodiment 5, the difference in operation from the above-described conventional technique can be confirmed using fig. 17; the result is the same as for embodiment 1, and the description thereof is therefore omitted.
The image signal processing apparatus according to embodiment 5 can be applied to high resolution in the vertical direction and the oblique direction.
The image signal processing apparatus of embodiment 5 described above has the following effect in addition to the effects of the image signal processing apparatus of embodiment 3: when the phase difference θ (102) is 0 or near 0 (that is, the image is stationary or substantially stationary), or when it is determined that the pixel corresponding to the pixel to be processed in frame #1 is not present in frame #2, the processing result does not become indeterminate, and a stable output image can be obtained.
[Embodiment 6]
An image signal processing method according to embodiment 6 of the present invention will be described with reference to fig. 16.
Embodiment 6 shows an image signal processing method in which a control unit cooperating with software realizes processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 5. The image processing apparatus that performs the image signal processing method according to the present embodiment is the image processing apparatus shown in fig. 18 as in embodiment 2, and therefore, the description thereof is omitted.
Fig. 16 shows an example of a flowchart of the operation of the present embodiment. The steps shown in this figure are based on the steps of fig. 15 described in embodiment 4, and are configured to output the processing result of step (1606) to frame buffer #3 when the phase difference θ is 0 or near 0, in order to avoid the problems shown in fig. 9(d): the coefficients C1 and C3 are indeterminate when the phase difference θ is 0, and become large, and hence sensitive to noise and the like, when θ approaches 0. That is, in step (1601), the coefficients C0, C1, and C4 are determined based on the phase difference θ; in step (1602), a general interpolation low-pass filtering is performed using the target pixel data and the nearby pixel data in frame buffer #1; and the result is multiplied by the coefficient C4 in step (1603), added to the outputs of steps (1511) and (1513) in step (1604), and output to frame buffer #3.
The other steps are the same as the processing steps of fig. 15 described in embodiment 4, and therefore, the description thereof is omitted. The operation of determining the coefficient in step (1601) is the same as the operation shown in fig. 13, and therefore, the description thereof is omitted. Note that the operation of the interpolation low-pass filter in step (1602) is the same as that shown in fig. 12, and therefore, the description thereof is omitted.
Further, as for the image signal processing method of embodiment 6, the difference in operation from the above-described conventional technique can be confirmed using fig. 17; the result is the same as for embodiment 1, and the description thereof is therefore omitted.
The image signal processing method of embodiment 6 can be applied to high resolution in the vertical direction and the oblique direction.
The image signal processing method of embodiment 6 described above has the following effect in addition to the effects of the image signal processing method of embodiment 4: when the phase difference θ (102) is 0 or near 0 (that is, the image is stationary or substantially stationary), or when it is determined that the pixel corresponding to the pixel to be processed in frame #1 is not present in frame #2, the processing result does not become unstable, and a stable output image can be obtained.
[Embodiment 7]
Fig. 20 shows an image signal processing apparatus according to embodiment 7 of the present invention. The image signal processing apparatus of the present embodiment includes: an input unit (1) to which a frame sequence of a moving image such as a television broadcast signal is input; a resolution conversion unit (4) that performs two-dimensional resolution enhancement on the frames input from the input unit (1) by combining processing in the horizontal and vertical directions; and a display unit (3) that displays an image based on the frames whose resolution has been enhanced by the resolution conversion unit (4).
The resolution conversion unit (4) performs resolution conversion processing in the horizontal direction and the vertical direction, and outputs components having a large resolution improvement effect among the results selectively or in a mixed manner, thereby achieving two-dimensional high resolution. The details of the resolution conversion unit (4) will be described below.
In fig. 20, based on a frame #1(2010) and a frame #2(2013) input to an input unit (1), a frame (2011) in which the number of pixels in the horizontal direction is increased and a frame (2014) in which the number of pixels in the vertical direction is increased are generated using a horizontal resolution conversion unit (2001) and a vertical resolution conversion unit (2005), respectively.
Here, each of the resolution conversion units (2001) and (2005) performs signal processing in the horizontal or vertical direction, respectively, using as it is the configuration of the resolution conversion unit (2) of the image signal processing apparatus according to embodiment 1 of the present invention shown in fig. 1. In this case, in the horizontal resolution conversion unit (2001), the up-samplers (103), (104), the delay units (105), (107), and the π/2 phase shifters (106), (108) shown in fig. 1 perform the up-sampling, the delay, and the π/2 phase shift in the horizontal direction, respectively.
Similarly, in the vertical resolution conversion unit (2005), the up-samplers (103), (104), the delay units (105), (107), and the π/2 phase shifters (106), (108) shown in fig. 1 perform the up-sampling, the delay, and the π/2 phase shift in the vertical direction, respectively. These operations can be implemented using the operations shown in figs. 5 to 8, the conventional techniques, and the like.
Further, each of the resolution conversion units (2001) and (2005) may be realized using the resolution conversion unit of the image signal processing apparatus according to embodiment 3 or embodiment 5 of the present invention, instead of that of the image signal processing apparatus according to embodiment 1. In the following, the configuration using the resolution conversion unit of the image signal processing apparatus according to embodiment 1 of the present invention will be described.
In the present embodiment, the operations shown in figs. 1 and 2 are extended to two dimensions, assuming that the subject moves two-dimensionally in the horizontal and vertical directions. That is, the position estimation unit ((101) in fig. 1) and the motion compensation/up-sampling unit ((115) in fig. 1) in the horizontal resolution conversion unit (2001) perform two-dimensional motion compensation on the subject of frame #2 with respect to the subject of frame #1, and, of the sampling phase differences of the pixels of each frame, the horizontal phase difference θH is used for the coefficient determination in the aliasing component removing unit ((117) in fig. 1).
Similarly, the position estimation unit ((101) in fig. 1) and the motion compensation/up-sampling unit ((115) in fig. 1) in the vertical resolution conversion unit (2005) perform two-dimensional motion compensation on the subject (2017) of frame #2 with respect to the subject (2016) of frame #1, and, of the sampling phase differences of the pixels of each frame, the vertical phase difference θV is used for the coefficient determination in the aliasing component removing unit ((117) in fig. 1). The operation shown in fig. 9 can be used as it is for the coefficient determination of the aliasing component removing unit ((117) in fig. 1).
If the subject moves in an oblique direction, the frame (2011) whose horizontal pixel count has been increased by the horizontal resolution conversion unit (2001) contains distortion in the oblique direction, but for components in which the vertical frequency of the original input signal is low (vertical lines and the like), the distortion is small enough to be neglected. Similarly, the frame (2014) whose vertical pixel count has been increased by the vertical resolution conversion unit (2005) contains distortion in the oblique direction, but for components in which the horizontal frequency of the original input signal is low (horizontal lines and the like), the distortion is small enough to be neglected.
By exploiting this characteristic, the frame (2011) whose horizontal pixel count has been increased by the signal processing is passed through a vertical interpolation unit (2004), consisting of a vertical booster (2002) and a pixel interpolator (2003), to generate the SR (horizontal) signal. The pixel interpolator (2003) may be a general vertical low-pass filter that outputs the average of the pixel data above and below the pixel to be interpolated. Similarly, from the frame (2014) whose vertical pixel count has been increased, a frame (2015) is generated by a horizontal interpolation unit (2008), consisting of a horizontal booster (2006) and a pixel interpolator (2007), and used as the SR (vertical) signal. The pixel interpolator (2007) may be a general horizontal low-pass filter that outputs the average of the pixel data to the left and right of the pixel to be interpolated.
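The averaging interpolators described above can be sketched as follows. This is a minimal pure-Python illustration (the function name and list-of-rows representation are assumptions, not from the patent); only the vertical case is shown, the horizontal case being its transpose.

```python
def interpolate_vertically(frame):
    """Double the row count of `frame` (a list of equal-length rows).

    Original rows are kept; each new row between two originals is the
    per-pixel average of its upper and lower neighbours (a simple
    vertical low-pass filter, as described for interpolator (2003))."""
    out = []
    for i, row in enumerate(frame):
        out.append(list(row))
        if i + 1 < len(frame):
            below = frame[i + 1]
            out.append([(a + b) / 2 for a, b in zip(row, below)])
    return out

frame = [[0, 0, 0], [4, 4, 4]]
print(interpolate_vertically(frame))  # [[0, 0, 0], [2.0, 2.0, 2.0], [4, 4, 4]]
```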
In this way, by removing the high-frequency component in the direction orthogonal to the direction of the processing target using the pixel interpolator (2003) (2007) and extracting only the low-frequency component, the influence of the distortion generated when moving in the oblique direction can be reduced to a negligible extent. The SR (horizontal) signal and the SR (vertical) signal generated by the above processing are mixed by a mixer (2009) to become an output signal, which is displayed on a display unit (3).
Here, the detailed configuration and operation of the mixer (2009) will be described. Any one of the following three configuration examples may be used for the mixer (2009).
Fig. 22 shows a first configuration example of the mixer (2009). In the figure, an adder (2201) and a multiplier (2202) are used to generate and output the average of the signals SR (horizontal) and SR (vertical) input to the mixer (2009). In this configuration the horizontal and vertical resolution-improving effects are each 1/2, but the mixer (2009) takes its simplest form and can therefore be realized at low cost.
Fig. 23 shows a second configuration example of the mixer (2009). In the figure, the signals SR (horizontal) and SR (vertical) input to the mixer (2009) are multiplied by a coefficient K (horizontal) and a coefficient K (vertical) using a multiplier (2303) and a multiplier (2304), respectively, and the products are added by an adder (2305) and output. The coefficient K (horizontal) and the coefficient K (vertical) are generated by coefficient determiners (2301) (2302), respectively. The operation of the coefficient determiners (2301) (2302) will be described below.
The aliasing component removing units (2108) (2109) shown in fig. 21 generate the coefficients C0 to C3 shown in fig. 9 by means of the coefficient determining unit (109) shown in fig. 1, based on the phase difference θH (2102) and the phase difference θV (2103) shown in the figure, and perform the calculation for removing the aliasing components. In this case, in order to avoid the coefficients C1 and C3 being indeterminate when the phase differences θH (2102) and θV (2103) are 0, and to prevent them from becoming large and sensitive to noise and the like when the phase differences are close to 0, it is preferable to introduce the coefficient C4 (0 ≤ C4 ≤ 1) shown in fig. 13 and to fall back on auxiliary pixel interpolation using the configuration shown in fig. 11. When the value of the coefficient C4 is close to 1.0, the resolution improvement effect is small; the effect becomes larger as the value of the coefficient C4 approaches 0.0.
Using this property, the coefficient K (horizontal) and the coefficient K (vertical) are determined from the values of the coefficient C4 in the horizontal and vertical directions so that SR (vertical), the vertical resolution conversion result, is strongly reflected when the horizontal phase difference θH (2102) is near 0 (i.e., the coefficient C4 (horizontal) is near 1.0), and SR (horizontal), the horizontal resolution conversion result, is strongly reflected when the vertical phase difference θV (2103) is near 0 (i.e., the coefficient C4 (vertical) is near 1.0). To realize this operation, for example, the coefficient determiner (2301) shown in fig. 23 may compute K (horizontal) = (1 − C4 (horizontal) + C4 (vertical))/2, and the coefficient determiner (2302) may compute K (vertical) = (1 − C4 (vertical) + C4 (horizontal))/2.
Fig. 24 summarizes an example of the outputs of the coefficient determiners (2301) (2302) (the coefficient K (horizontal) and the coefficient K (vertical)) as the coefficient C4 (horizontal) and the coefficient C4 (vertical) are varied. As shown in the figure, the determiners act so that as the coefficient C4 (horizontal) becomes larger, the coefficient K (horizontal) becomes smaller and the coefficient K (vertical) becomes larger; and as the coefficient C4 (vertical) becomes larger, the coefficient K (horizontal) becomes larger and the coefficient K (vertical) becomes smaller.
When the coefficient C4 (horizontal) and the coefficient C4 (vertical) are equal, the coefficient K (horizontal) and the coefficient K (vertical) are each 0.5. Thus, even though the coefficient C4 varies independently in the horizontal and vertical directions, the coefficients K are determined so that the sum of the coefficient K (horizontal) and the coefficient K (vertical) is exactly 1.0, and SR (horizontal) and SR (vertical) are mixed accordingly.
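A minimal sketch of the two-direction coefficient determination described above. The pair of formulas here is an assumption consistent with the behaviour shown for fig. 24: the direction whose C4 is smaller (larger resolution improvement effect) gets the larger weight, and the two weights always sum to 1.0.

```python
def mix_coefficients(c4_h, c4_v):
    """Determine K(horizontal) and K(vertical) from the interpolation
    weights C4 (0 <= C4 <= 1) of each direction.

    K(h) falls as C4(h) rises and grows as C4(v) rises, matching the
    tendencies described for fig. 24, and K(h) + K(v) == 1.0 always."""
    k_h = (1.0 - c4_h + c4_v) / 2.0
    k_v = (1.0 - c4_v + c4_h) / 2.0
    return k_h, k_v

k_h, k_v = mix_coefficients(1.0, 0.0)  # horizontal effect weakest: all weight to SR(vertical)
```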
A third operation and configuration example of the mixer (2009) will be described with reference to figs. 25 and 26, respectively. Fig. 25 shows the two-dimensional frequency domain with horizontal frequency μ and vertical frequency ν. If the horizontal sampling frequency of the original input image is μs and the vertical sampling frequency is νs, the output of the resolution conversion unit (4) shown in figs. 20 and 21 is a signal whose horizontal frequency μ lies in the range −μs to +μs and whose vertical frequency ν lies in the range −νs to +νs.
Horizontal and vertical resolution conversion both reproduce high-frequency components, but since the original signal level of each high-frequency component is small, the effect of horizontal resolution conversion is larger for components in the frequency region (2501) near (μ, ν) = (±μs/2, 0) (in particular, the region containing the frequency (μ, ν) = (+μs/2, 0) with μ > 0, and the region containing the frequency (μ, ν) = (−μs/2, 0) with μ < 0), while the effect of vertical resolution conversion is larger for components in the frequency region (2502) near (μ, ν) = (0, ±νs/2) (in particular, the region containing the frequency (μ, ν) = (0, +νs/2) with ν > 0, and the region containing the frequency (μ, ν) = (0, −νs/2) with ν < 0).
Thus, when these frequency components (2501) (2502) are extracted and mixed by a two-dimensional filter, a component having a large resolution improvement effect can be selectively output.
Fig. 26 shows a configuration example of a mixer (2009) for extracting components having a large effect of horizontal and vertical resolution conversion. In this figure, a two-dimensional filter (2601) is used to extract a component of a frequency region (2501) in which the resolution improvement effect of the SR (horizontal) input to the mixer (2009) is large. Similarly, a two-dimensional filter (2602) is used to extract a component of a frequency region (2502) in which the effect of improving the resolution of the SR (vertical) input to the mixer (2009) is large.
For components outside the frequency regions (2501) and (2502), an average signal of SR (horizontal) and SR (vertical) is generated using an adder (2603) and a multiplier (2604), and the components outside the pass bands of the two-dimensional filters (2601) and (2602) (i.e., the residual components) are extracted from it using a two-dimensional filter (2605). The output signals of the two-dimensional filters (2601) (2602) (2605) are added by an adder (2606) to form the output of the mixer (2009).
In the two-dimensional filters (2601) (2602) (2605) shown in the figure, each number enclosed in a circle represents an example of a tap coefficient of the filter. (For simplicity of explanation, the coefficients are written as integers; the actual coefficient values are the products of the circled numbers and the factor such as "×1/16" shown to their right. For example, each circled number in the two-dimensional filter (2601) is multiplied by 1/16 to obtain the actual coefficient value. The same applies to the coefficients of the two-dimensional filters shown in the examples below.)
The two-dimensional filter (2601) is the product of a horizontal band-pass filter whose pass band is centered at ±μs/2 and a vertical low-pass filter; the two-dimensional filter (2602) is the product of a vertical band-pass filter whose pass band is centered at ±νs/2 and a horizontal low-pass filter; and the two-dimensional filter (2605) is preferably a filter obtained by subtracting the pass-band characteristics of the two-dimensional filter (2601) and the two-dimensional filter (2602) from the entire band.
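The separable construction of these filters can be sketched as follows. The 1-D kernels here are illustrative assumptions, not the circled tap values of fig. 26; the point is the structure: each 2-D filter is an outer product of a band-pass and a low-pass kernel, and the residual filter is the unit impulse minus the two pass bands, so the three outputs reassemble the full band.

```python
def outer(col, row):
    """2-D separable filter: outer product of a vertical 1-D kernel and
    a horizontal 1-D kernel."""
    return [[c * r for r in row] for c in col]

# Illustrative 1-D kernels (assumed values, not the taps of fig. 26):
lp = [0.25, 0.5, 0.25]    # low-pass: unity at DC, zero at half the sampling rate
bp = [-0.25, 0.5, -0.25]  # band-pass centred at half the sampling rate

f_2601 = outer(lp, bp)  # horizontal band-pass x vertical low-pass
f_2602 = outer(bp, lp)  # vertical band-pass x horizontal low-pass

# Residual filter in the spirit of (2605): unit impulse minus the two
# pass bands, so that the three filter outputs sum back to the full band.
delta = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
f_2605 = [[delta[i][j] - f_2601[i][j] - f_2602[i][j]
           for j in range(3)] for i in range(3)]

# The three kernels together are all-pass: they sum to the unit impulse.
total = [[f_2601[i][j] + f_2602[i][j] + f_2605[i][j]
          for j in range(3)] for i in range(3)]
```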
Next, the difference between the processing of the image signal processing apparatus according to embodiment 7 and the operation of the conventional techniques described above will be explained with reference to fig. 34. Fig. 34(a) shows frame #1 (3401), frame #2 (3402), frame #3 (3403), frame #4 (3404), and frame #5 (3405), which are input to the resolution conversion unit (4), and fig. 34(b) shows the corresponding frames output from the resolution conversion unit (4). From frame to frame, the subject moves clockwise by 1/4 pixel at a time, deliberately returning to its starting position every four frames. The same motion continues from frame #6 onward.
In the conventional techniques described in patent documents 1 and 2 and non-patent document 1, as described above, when the resolution of a horizontal/vertical two-dimensional input signal is increased, aliasing arrives from both the vertical and horizontal directions; doubling the band of the original signal in both directions leaves M = 3 overlapping aliasing components, so 2M + 1 = 7 pieces of digital data (signals corresponding to 7 frame images) are necessary to eliminate them. Therefore, when the input repeats with a four-frame period as shown in fig. 34(a), independent data cannot be obtained no matter which 7 frames are selected, so the solution of the high-resolution processing is indeterminate and cannot be obtained.
On the other hand, according to the present embodiment, high resolution can be achieved by using only two adjacent frames (for example, frame #1 (3401) and frame #2 (3402), or frame #2 (3402) and frame #3 (3403)) and removing the aliasing components in the horizontal (or vertical) direction as shown in fig. 34(b). That is, the operating state of the present embodiment can be confirmed by using the input images of fig. 34(a) as a test pattern. If the well-known circular zone plate (CZP) is used as the test pattern, the effect of the resolution conversion can be seen directly on the display unit (3): if the circular zone plate is moved left and right from frame to frame, an image with increased horizontal resolution is displayed, and if it is moved up and down (or obliquely), an image with increased vertical (or oblique) resolution is displayed, confirming that the resolution improvement follows the movement direction of the test pattern.
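The circular zone plate mentioned above can be generated as follows. This is a standard CZP (the phase grows quadratically with radius, so spatial frequency rises linearly from the centre, sweeping all orientations and frequencies at once); the size and scale values are arbitrary choices, not from the patent.

```python
import math

def zone_plate(size, scale):
    """Circular zone plate: intensity 0.5 + 0.5*cos(pi * r^2 / scale),
    where r is the distance from the centre of a size x size grid."""
    c = (size - 1) / 2.0
    return [[0.5 + 0.5 * math.cos(math.pi * ((x - c) ** 2 + (y - c) ** 2) / scale)
             for x in range(size)] for y in range(size)]

# Shifting this pattern by 1/4 pixel per frame (as in fig. 34) exercises
# the resolution conversion in the chosen movement direction.
czp = zone_plate(65, 64.0)
```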
According to the image signal processing apparatus of embodiment 7 described above, the phase of each of the image signals of two input image frames is shifted, generating two signals from each image signal, so that four signals are obtained in total from the two input image frames. For each of the four signals, a coefficient that cancels the aliasing components when the four signals are combined is calculated per pixel, based on the phase difference between the two input image frames. For each pixel of the generated image, each coefficient is multiplied by the pixel value of the corresponding pixel of its signal and the four products are summed, yielding a new pixel value of the high-resolution image. By performing this processing on every pixel of the generated image, an image whose resolution in one dimension is higher than that of the input image frames can be generated.
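The per-pixel combination just described can be sketched as follows. The list-based layout and names are assumptions; the actual coefficients C0 to C3 come from fig. 9 and vary per pixel with the phase difference, whereas the usage example below simply uses uniform weights.

```python
def high_res_pixels(signals, coeffs):
    """Per-pixel resolution enhancement step: for each output pixel,
    multiply each of the four phase-shifted signals by its
    aliasing-cancelling coefficient and sum the four products."""
    s0, s1, s2, s3 = signals
    c0, c1, c2, c3 = coeffs
    return [c0[i] * s0[i] + c1[i] * s1[i] + c2[i] * s2[i] + c3[i] * s3[i]
            for i in range(len(s0))]

# Uniform 0.25 weights just average the four signals (illustration only).
out = high_res_pixels([[1, 1], [2, 2], [3, 3], [4, 4]], [[0.25, 0.25]] * 4)
```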
This processing is performed separately in the horizontal direction and the vertical direction, producing an image with increased horizontal resolution and an image with increased vertical resolution. These two images are then boosted in the vertical direction and the horizontal direction, respectively, and mixed.
In this way, an image with high resolution in both the vertical and horizontal directions can be generated from the image signals of only two input image frames, fewer than in the conventional example. That is, a two-dimensional high-resolution image can be generated.
Further, since the image signal processing apparatus of embodiment 7 uses only two input image frames, fewer than the conventional example, it requires less image processing than the conventional example. An image signal processing apparatus that generates, from the input image, a high-resolution image with increased resolution in both the vertical and horizontal directions and with fewer aliasing components can therefore be realized at lower cost than the conventional example.
Alternatively, the conventional techniques described in patent documents 1 and 2 and non-patent document 1 may be used, with three frames, to perform one-dimensional resolution enhancement in each of several directions such as the horizontal and vertical directions, and the results may be input to the mixer (2009) shown in fig. 20, mixed, and output as a two-dimensional resolution conversion result. In this case, the scale of the signal processing circuitry such as the frame memory and the motion estimation unit is larger than that of the configuration of fig. 20, which performs two-dimensional resolution conversion with only two frames, but it can still be made smaller than when signals of at least 7 frames are used as described in patent documents 1 and 2 and non-patent document 1.
Further, not limited to the conventional techniques described in patent document 1, patent document 2, and non-patent document 1, other conventional high resolution techniques may be applied to perform one-dimensional high resolution in a plurality of directions, such as the horizontal direction and the vertical direction, and the results may be input to a mixer (2009) shown in fig. 20 and mixed and output as a two-dimensional resolution conversion result.
In fig. 20, the case where the resolution of frame #1 is converted using a group of input signals of frame #1 and frame #2 is described as an example, but the resolution of frame #1 may be converted using a plurality of groups such as frame #1 and frame #3, frame #1 and frame #4, and the results may be mixed to obtain the final resolution conversion result of frame # 1.
The mixing method in this case may simply average the results, or may mix them based on the value of a per-frame coefficient C4 (frame) as shown in figs. 23 and 24. In this case, as the coefficient C4 (frame), the MAX value (i.e., the larger) of the coefficient C4 (horizontal) and the coefficient C4 (vertical) of each frame may be used. Alternatively, the coefficient C4 (horizontal) and the coefficient C4 (vertical) of all the groups may be compared pixel by pixel, and the resolution conversion result of the group with the smallest coefficient C4 (i.e., the group with the largest resolution improvement effect) may be selected for each pixel as the final resolution conversion result of frame #1.
Thus, for example, when, relative to frame #1, frame #2 is a past frame and frame #3 is a future frame, if the subject changes from "moving" to "still" at the time of frame #1 (motion ends), the resolution conversion processing is performed with frames #1 and #2; if the subject changes from "still" to "moving" at the time of frame #1 (motion starts), it is performed with frames #1 and #3; and the processing results are mixed pixel by pixel as described above.
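The per-pixel selection rule described above (pick, for each pixel, the frame pair whose C4 is smallest, i.e. whose resolution improvement effect is largest) can be sketched as follows; the data layout and names are assumptions.

```python
def select_per_pixel(results, c4_maps):
    """For each pixel, pick the resolution-conversion result of the
    frame pair (group) with the smallest C4 at that pixel.

    `results` and `c4_maps` are lists of same-shape 2-D arrays
    (lists of rows), one entry per frame pair."""
    rows, cols = len(results[0]), len(results[0][0])
    out = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            best = min(range(len(results)), key=lambda g: c4_maps[g][y][x])
            out[y][x] = results[best][y][x]
    return out

merged = select_per_pixel([[[10, 20]], [[30, 40]]],
                          [[[0.2, 0.9]], [[0.8, 0.1]]])  # → [[10, 40]]
```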
[Example 8]
Fig. 21 shows an image signal processing apparatus according to embodiment 8 of the present invention. The image processing apparatus of the present embodiment is a modification of the configuration of embodiment 7 described above, in which the processing order of the resolution conversion units (2001) (2005) and the interpolation units (2004) (2008) shown in fig. 20 is reversed, so that resolution conversion is performed after the interpolation processing. This allows the boosters ((103) (104) in fig. 1) in the resolution conversion units (2001) (2005) and the boosters ((2002) (2006) in fig. 20) in the interpolation units (2004) (2008) to be shared, and the position estimation units ((101) in fig. 1) of the horizontal resolution conversion unit (2001) and the vertical resolution conversion unit (2005) to be shared, so that equivalent signal processing can be realized with a smaller circuit scale and a smaller amount of computation.
In fig. 21, first, a position estimating unit (2101) estimates the position of a corresponding pixel in a frame #2 with reference to a sampling phase (sampling position) of a pixel to be processed in a frame #1 input to an input unit (1), and obtains sampling phase differences θ H (2102) and θ V (2103) in the horizontal direction and the vertical direction, respectively.
Next, frame #2 is motion-compensated by the boosters (2104) (2105) of the motion compensation/boost unit (2110) using the information of the phase differences θH (2102) and θV (2103), aligning it with frame #1, and the pixel counts of frames #1 and #2 are each doubled in the horizontal and vertical directions (quadrupled in total) to increase the density. The boosters (2104) (2105) extend the operation and structure shown in figs. 5 and 6 to two dimensions, horizontal and vertical. In the phase shift unit (2111), the phase of the densified data is shifted by a predetermined amount.
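The densification can be illustrated with a much-simplified 1-D sketch: after motion compensation, the samples of frame #2 fall between the samples of frame #1, so interleaving them doubles the sample density. The real boosters operate two-dimensionally on whole frames; the function and layout here are assumptions for illustration only.

```python
def densify_1d(f1, f2_compensated):
    """Interleave the pixels of frame #1 with the motion-compensated
    pixels of frame #2, doubling the sample density (1-D sketch of the
    2x densification performed by the boosters)."""
    out = []
    for a, b in zip(f1, f2_compensated):
        out.extend([a, b])
    return out

print(densify_1d([1, 3], [2, 4]))  # [1, 2, 3, 4]
```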
Here, the horizontal phase shifter (2106) performs the phase shift in the horizontal direction and the vertical phase shifter (2107) performs it in the vertical direction; both can use the retarders (105) (107) and the π/2 phase shifter (108) shown in fig. 1 in the same manner, with the operation and configuration shown in figs. 7 and 8, so their description is omitted.
From each phase-shifted signal, the horizontal and vertical aliasing components are removed by the horizontal aliasing component removing unit (2108) and the vertical aliasing component removing unit (2109) of the aliasing component removal section (2112), respectively. The output of the horizontal aliasing component removing unit (2108) is then pixel-interpolated by the pixel interpolator (2003) to obtain the SR (horizontal) signal, the output of the vertical aliasing component removing unit (2109) is pixel-interpolated by the pixel interpolator (2007) to obtain the SR (vertical) signal, and the two signals are mixed and output by the mixer (2009).
The structure of the aliasing component removing unit (117) shown in fig. 1 can be used as it is for the aliasing component removing units (2108) (2109). By performing the operation shown in fig. 9 with the horizontal phase difference θH (2102) as the phase difference θ (102) in the aliasing component removing unit (2108), and the vertical phase difference θV (2103) as the phase difference θ (102) in the aliasing component removing unit (2109), aliasing components in each direction can be removed.
In the above description, the phase shift unit (2111) uses the retarders (105) (107) and the π/2 phase shifter (108) shown in fig. 1, with the operation and configuration shown in figs. 7 and 8, and the aliasing component removing units (2108) (2109) directly use the configuration of the aliasing component removing unit (117) shown in fig. 1. Alternatively, the phase shift unit (1009) shown in fig. 10 may be used in the vertical and horizontal directions as the phase shift unit (2111), and the aliasing component removing unit (1010) shown in fig. 10 may be used as the aliasing component removing units (2108) (2109). In that case, the aliasing component removing units (2108) (2109) may be provided with the auxiliary pixel interpolation unit (1105) of fig. 11, as in fig. 11.
Since the mixer (2009) is the same as in example 7, the description thereof is omitted.
Note that the operation of inputting a frame shown in fig. 34 is also the same as in embodiment 7, and therefore, the description thereof is omitted.
The image signal processing apparatus according to embodiment 8 described above has the same effects as that of embodiment 7 and, because part of the processing units can be shared, additionally realizes the same signal processing with a smaller circuit scale and a smaller amount of computation than the apparatus of embodiment 7.
Alternatively, the conventional techniques described in patent documents 1 and 2 and non-patent document 1 may be used, with three frames, to perform one-dimensional resolution enhancement in each of several directions such as the horizontal and vertical directions, and the results may be input to the mixer (2009) shown in fig. 21, mixed, and output as a two-dimensional resolution conversion result. In this case, the scale of the signal processing circuitry such as the frame memory and the motion estimation unit is larger than that of the configuration of fig. 21, which performs two-dimensional resolution conversion with only two frames, but it can still be made smaller than when signals of at least 7 frames are used as described in patent documents 1 and 2 and non-patent document 1.
Further, not limited to the conventional techniques described in patent document 1, patent document 2, and non-patent document 1, other conventional high resolution techniques may be applied to perform one-dimensional high resolution in a plurality of directions, such as the horizontal direction and the vertical direction, and the results may be input to a mixer (2009) shown in fig. 21 and mixed and output as a two-dimensional resolution conversion result.
In fig. 21, the case where the resolution of frame #1 is converted using a group of input signals of frame #1 and frame #2 is described as an example, but the resolution of frame #1 may be converted using a plurality of groups such as frame #1 and frame #3, frame #1 and frame #4, and the results may be mixed to obtain the final resolution conversion result of frame # 1.
The mixing method in this case may simply average the results, or may mix them based on the value of a per-frame coefficient C4 (frame) as shown in figs. 23 and 24. In this case, as the coefficient C4 (frame), the MAX value (i.e., the larger) of the coefficient C4 (horizontal) and the coefficient C4 (vertical) of each frame may be used. Alternatively, the coefficient C4 (horizontal) and the coefficient C4 (vertical) of all the groups may be compared pixel by pixel, and the resolution conversion result of the group with the smallest coefficient C4 (i.e., the group with the largest resolution improvement effect) may be selected for each pixel as the final resolution conversion result of frame #1.
Thus, for example, when, relative to frame #1, frame #2 is a past frame and frame #3 is a future frame, if the subject changes from "moving" to "still" at the time of frame #1 (motion ends), the resolution conversion processing is performed with frames #1 and #2; if the subject changes from "still" to "moving" at the time of frame #1 (motion starts), it is performed with frames #1 and #3; and the processing results are mixed pixel by pixel as described above.
[Example 9]
Fig. 27 shows an image signal processing apparatus according to embodiment 9 of the present invention. The image processing apparatus of the present embodiment is a high-resolution conversion section in which oblique components in the lower-right and upper-right directions are further added to the configuration example shown in fig. 21. That is, an oblique (lower right) phase shift unit (2701) and an oblique (upper right) phase shift unit (2702) are added to the phase shift section (2708), aliasing component removing units (2705) (2706) are added to the aliasing component removal section (2709), and, after passing through the pixel interpolators (2710) (2711), the signals SR (horizontal), SR (vertical), SR (upper right), and SR (lower right) are mixed and output by the mixer (2707). Here, the pixel interpolators (2710) (2711) may be general two-dimensional low-pass filters that output the average of the pixel data above, below, left, and right of the pixel to be interpolated.
As the phase difference θ, phase difference information in the oblique directions is needed: the phase difference (θH + θV), obtained by adding the horizontal phase difference θH (2102) and the vertical phase difference θV (2103) with an adder (2703), is input to the aliasing component removing unit (2705), and the phase difference (−θH + θV), generated by a subtractor (2704), is input to the aliasing component removing unit (2706). The structure and operation of the aliasing component removing units (2108) (2109) (2705) (2706) are common.
Figs. 28(a) to (d) show the operation, in the two-dimensional frequency domain, of the horizontal phase shifter (2106), the vertical phase shifter (2107), the oblique (lower right) phase shifter (2701), and the oblique (upper right) phase shifter (2702), respectively. Figs. 28(a) to (d) are two-dimensional frequency domains represented by horizontal frequency μ and vertical frequency ν, as in fig. 25. These phase shifters (2106) (2107) (2701) (2702) have the same configuration as the phase shift unit (116) shown in fig. 1, with the "frequency-phase difference" characteristics of the π/2 phase shifters (106) (108) changed according to their respective directions.
That is, as shown in fig. 28(a), in the horizontal phase shift unit (2106), when the horizontal sampling frequency of the input signal is μs, the phase of the frequency components in the range −μs to 0 is shifted by π/2 and the phase of the frequency components in the range 0 to μs is shifted by −π/2, in the same manner as the operation shown in fig. 7. Similarly, in the vertical phase shift unit (2107), when the vertical sampling frequency of the input signal is νs, the phase of the frequency components in the range −νs to 0 is shifted by π/2 and the phase of the frequency components in the range 0 to νs is shifted by −π/2.
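The ideal "frequency-phase difference" characteristic just described (negative frequencies shifted by +π/2, positive frequencies by −π/2, i.e. a Hilbert-type shifter) can be sketched in one dimension as follows. A direct DFT is used for clarity, not efficiency; the function name is an assumption. With this convention a cosine comes out as the corresponding sine.

```python
import cmath
import math

def phase_shift_pi2(signal):
    """1-D sketch of the ideal pi/2 phase shifter: multiply negative
    frequencies by exp(+j*pi/2) and positive frequencies by exp(-j*pi/2)
    in the DFT domain, leaving DC and Nyquist untouched."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]
    for f in range(1, n):
        if f < n / 2:          # positive frequencies
            spec[f] *= cmath.exp(-1j * cmath.pi / 2)
        elif f > n / 2:        # negative frequencies (upper DFT bins)
            spec[f] *= cmath.exp(1j * cmath.pi / 2)
    return [sum(spec[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]

n = 16
shifted = phase_shift_pi2([math.cos(2 * math.pi * t / n) for t in range(n)])
```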
Likewise, in the oblique (lower right) phase shift unit (2701) and the oblique (upper right) phase shift unit (2702), the phase of the signal is shifted by −π/2 or π/2 as shown in figs. 28(c) and 28(d), respectively. These "frequency-phase difference" characteristics can be obtained by arranging the tap coefficients shown in fig. 8 along the two-dimensional sampling points in each of the horizontal, vertical, oblique (lower right), and oblique (upper right) directions.
Fig. 29 shows a first configuration example of the mixer (2707). In the figure, an adder (2901) and a multiplier (2902) are used to generate and output the average of the signals SR (horizontal), SR (vertical), SR (lower right), and SR (upper right) input to the mixer (2707). The configuration shown in the figure is the simplest realization of the mixer (2707); the horizontal, vertical, lower-right, and upper-right resolution-improving effects are each 1/4.
Fig. 30 shows a second configuration example of the mixer (2707). In the figure, multipliers (3005) (3006) (3007) (3008) multiply the signals SR (horizontal), SR (vertical), SR (upper right), and SR (lower right) input to the mixer (2707) by the corresponding coefficients K (horizontal), K (vertical), K (upper right), and K (lower right), respectively, and the products are added by an adder (3009) and output. The coefficients K (horizontal), K (vertical), K (lower right), and K (upper right) are generated by coefficient determiners (3001) (3002) (3003) (3004), respectively. The operation of the coefficient determiners (3001) (3002) (3003) (3004) will be described below.
The aliasing component removing units (2108) (2109) (2705) (2706) shown in fig. 27 generate the coefficients C0 to C3 shown in fig. 9 by means of the coefficient determining unit (109) shown in fig. 1, based on the phase difference θH (2102), the phase difference θV (2103), the phase difference (θH + θV), and the phase difference (−θH + θV) shown in the figure, and perform the calculation for removing the aliasing components. In this case, in order to avoid the coefficients C1 and C3 being indeterminate when the phase differences θH (2102), θV (2103), (θH + θV), and (−θH + θV) are 0, and to prevent them from becoming large and sensitive to noise when the phase differences are close to 0, it is preferable to introduce the coefficient C4 (0 ≤ C4 ≤ 1) shown in fig. 13 and to fall back on auxiliary pixel interpolation using the configuration shown in fig. 11. When the value of the coefficient C4 is close to 1.0, the resolution improvement effect is small; the effect becomes larger as the value of the coefficient C4 approaches 0.0. Using this property, the coefficient determiner (3001) determines the coefficient K (horizontal) so that SR (horizontal), the horizontal resolution conversion result, is reflected weakly when the horizontal phase difference θH (2102) is near 0 (i.e., the coefficient C4 (horizontal) is near 1.0), and strongly when the horizontal phase difference θH (2102) is not near 0 (i.e., the coefficient C4 (horizontal) is near 0.0).
As an example, the coefficient K (horizontal) = (1 − 3 × C4 (horizontal) + C4 (vertical) + C4 (lower right) + C4 (upper right))/4 may be used. Similarly, the coefficients K (vertical), K (lower right), and K (upper right) are determined by the coefficient determination units (3002), (3003), and (3004), respectively. The coefficients K are determined so that K (horizontal) + K (vertical) + K (lower right) + K (upper right) = 1.0 even though the coefficients C4 (horizontal), C4 (vertical), C4 (lower right), and C4 (upper right) vary independently of one another, so that SR (horizontal), SR (vertical), SR (upper right), and SR (lower right) are always mixed with weights summing to 1.0.
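A minimal sketch of this coefficient determination, assuming the weight formula K(d) = (1 − 3 × C4(d) + sum of the other three C4 values)/4 given above; the direction names are illustrative, not the patent's notation. Note that for extreme C4 combinations this formula can yield a negative weight while the four weights still sum to 1.0.

```python
def mixer_weights(c4):
    """c4: dict mapping direction name -> C4 value in [0, 1].
    Returns per-direction mixing weights K that always sum to 1.0;
    a small C4 (large resolution-improvement effect) yields a large K."""
    total = sum(c4.values())
    return {d: (1.0 - 3.0 * v + (total - v)) / 4.0 for d, v in c4.items()}

def mix(sr, k):
    """Weighted sum of the four SR results for one pixel (adder 3009)."""
    return sum(k[d] * sr[d] for d in sr)
```

For example, with all four C4 values equal, each direction receives the weight 0.25, reproducing a plain average.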
Fig. 31 and fig. 32 show the operation and a third configuration example of the mixer (2707), respectively. Fig. 31 is a two-dimensional frequency domain represented by a horizontal frequency μ and a vertical frequency ν, as in fig. 25. In fig. 31, if the horizontal sampling frequency of the original input image is μs and the vertical sampling frequency is νs, the output of the resolution conversion unit (4) shown in fig. 27 is a signal having a horizontal frequency μ in the range of −μs to μs and a vertical frequency ν in the range of −νs to νs.
The effect of the oblique (upper right) resolution conversion is large for the components of the frequency region (3101) in the vicinity of (μ, ν) = (+μs/2, +νs/2) and the vicinity of (μ, ν) = (−μs/2, −νs/2) shown in fig. 31 (in particular, the region that includes the frequency (μ, ν) = (+μs/2, +νs/2) with μ > 0, ν > 0, and the region that includes the frequency (μ, ν) = (−μs/2, −νs/2) with μ < 0, ν < 0).
The effect of the oblique (lower right) resolution conversion is large for the components of the frequency region (3102) in the vicinity of (μ, ν) = (+μs/2, −νs/2) and the vicinity of (μ, ν) = (−μs/2, +νs/2) shown in fig. 31 (in particular, the region that includes the frequency (μ, ν) = (+μs/2, −νs/2) with μ > 0, ν < 0, and the region that includes the frequency (μ, ν) = (−μs/2, +νs/2) with μ < 0, ν > 0).
Thus, by extracting these frequency components (3101) and (3102) with two-dimensional filters and mixing them with the frequency components (2501) and (2502) shown in fig. 25, the components having a large resolution improvement effect can be selectively output.
Fig. 32 shows a configuration example of the mixer (2707) that extracts the components for which each of the horizontal, vertical, oblique (lower right), and oblique (upper right) resolution conversions has a large effect. In this figure, the two-dimensional filter (3201) extracts the components of the frequency region (3102) in which the resolution improvement effect of the SR (lower right) input to the mixer (2707) is large. Similarly, the two-dimensional filter (3202) extracts the components of the frequency region (3101) in which the resolution improvement effect of the SR (upper right) input to the mixer (2707) is large. Further, the components of the frequency regions (2501) and (2502), in which the resolution improvement effects of SR (horizontal) and SR (vertical) are large, are extracted by the two-dimensional filters (2601) and (2602) shown in fig. 26, respectively. For the components outside the frequency regions (2501), (2502), (3101), and (3102), an average signal of SR (horizontal), SR (vertical), SR (lower right), and SR (upper right) is generated using the adder (3203) and the multiplier (3204), and the components outside the pass bands of the two-dimensional filters (2601), (2602), (3201), and (3202) are extracted from it by the two-dimensional filter (3205). The output signals of the two-dimensional filters (2601), (2602), (3201), (3202), and (3205) are added by the adder (3206) to form the output of the mixer (2707).
In the two-dimensional filters (2601) (2602) (3201) (3202) (3205) shown in the figure, the numbers surrounded by circles indicate examples of tap coefficients of the filters, respectively.
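The band-splitting mix described above can be sketched with ideal (brick-wall) frequency masks standing in for the short FIR tap sets shown in the figure. This is a hedged illustration: the region boundaries at half Nyquist and the sign convention used to separate the two diagonal regions are assumptions, not the patent's exact filters.

```python
import numpy as np

def freq_region_masks(h, w):
    """Partition the 2D frequency plane into the regions described above:
    horizontal (2501), vertical (2502), the two oblique regions (3101)(3102),
    and the remainder handled by the average path (3203)(3204)(3205)."""
    mu = np.fft.fftfreq(w)[None, :]   # horizontal frequency, cycles/sample
    nu = np.fft.fftfreq(h)[:, None]   # vertical frequency
    horiz = (np.abs(mu) >= 0.25) & (np.abs(nu) < 0.25)
    vert = (np.abs(nu) >= 0.25) & (np.abs(mu) < 0.25)
    diag = (np.abs(mu) >= 0.25) & (np.abs(nu) >= 0.25)
    upper_right = diag & (mu * nu > 0)    # stands in for region 3101
    lower_right = diag & (mu * nu <= 0)   # stands in for region 3102
    rest = ~(horiz | vert | upper_right | lower_right)
    return horiz, vert, upper_right, lower_right, rest

def mix_sr(sr_h, sr_v, sr_ur, sr_lr):
    """Take each SR result only in its strong frequency band; fill the
    remaining band with the four-way average (adder 3203, multiplier 3204)."""
    h, w = sr_h.shape
    mh, mv, mur, mlr, rest = freq_region_masks(h, w)
    avg = (sr_h + sr_v + sr_ur + sr_lr) / 4.0
    out = (np.fft.fft2(sr_h) * mh + np.fft.fft2(sr_v) * mv
           + np.fft.fft2(sr_ur) * mur + np.fft.fft2(sr_lr) * mlr
           + np.fft.fft2(avg) * rest)
    return np.real(np.fft.ifft2(out))
```

Because the five masks partition the frequency plane exactly, feeding the same image into all four inputs returns that image unchanged, which is a useful sanity check on any concrete filter bank.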
According to the image signal processing apparatus of embodiment 9 described above, it is possible to generate a high-resolution image in which the resolution is increased in the oblique direction in addition to the horizontal direction and the vertical direction.
Further, the conventional techniques described in patent document 1, patent document 2, and non-patent document 1 may be used with three frames to perform one-dimensional resolution enhancement in a plurality of directions (horizontal, vertical, oblique (lower right), and oblique (upper right)), and the results may be input to the mixer (2707) shown in fig. 27, mixed, and output as a two-dimensional resolution conversion result. In this case, the scale of the signal processing circuitry, such as the frame memory and the motion estimation unit, is larger than that of the configuration shown in fig. 27, which performs two-dimensional resolution conversion using only two frames, but it is still smaller than that of the conventional two-dimensional approach of patent document 1, patent document 2, and non-patent document 1, which requires signals of at least 7 frames.
Further, the approach is not limited to the conventional techniques described in patent document 1, patent document 2, and non-patent document 1; other conventional high-resolution techniques may be applied to perform one-dimensional resolution enhancement in a plurality of directions (horizontal, vertical, oblique (lower right), and oblique (upper right)), and the results may be input to the mixer (2707) shown in fig. 27, mixed, and output as a two-dimensional resolution conversion result.
In fig. 27, the case where the resolution of frame #1 is converted using the pair of input signals of frame #1 and frame #2 was described as an example, but the resolution of frame #1 may also be converted using a plurality of pairs of input signals, such as frame #1 and frame #2, frame #1 and frame #3, and frame #1 and frame #4, and the results mixed to obtain the final resolution conversion result of frame #1. The mixing method in this case may take the average of the results, or may mix based on the value of a per-frame coefficient C4 (frame) as shown in figs. 23 and 24. In this case, the larger (not the smaller) of the coefficient C4 (horizontal) and the coefficient C4 (vertical) of each frame may be used as the coefficient C4 (frame). Alternatively, the coefficients C4 (horizontal) and C4 (vertical) of all the pairs may be compared for each pixel, and the resolution conversion result obtained from the pair having the smallest coefficient C4 (i.e., the pair having the largest resolution improvement effect) selected for each pixel as the final resolution conversion result of frame #1.
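The per-pixel selection described last can be sketched as below, under the stated rule that each pair's C4 (frame) is the larger of its C4 (horizontal) and C4 (vertical) maps; function and variable names are illustrative.

```python
import numpy as np

def select_best_group(results, c4_h, c4_v):
    """results: list of per-pair SR images for frame #1 (paired with
    frame #2, #3, #4, ...). c4_h, c4_v: matching lists of per-pixel
    C4(horizontal)/C4(vertical) maps. For each pixel, pick the pair whose
    C4(frame) = max(C4 horizontal, C4 vertical) is smallest, i.e. the pair
    with the largest resolution-improvement effect."""
    c4_frame = np.stack([np.maximum(h, v) for h, v in zip(c4_h, c4_v)])
    best = np.argmin(c4_frame, axis=0)            # per-pixel pair index
    stacked = np.stack(results)
    return np.take_along_axis(stacked, best[None], axis=0)[0]
```

Replacing `np.argmin` with a weighted blend of the stacked results would give the averaging variant also mentioned above.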
Thus, for example, with frame #1 as the reference, frame #2 being the frame before frame #1, and frame #3 being the frame after frame #1, if the subject changes from "moving" to "still" (motion ends) at the time of frame #1, resolution conversion is performed using frames #1 and #2; if the subject changes from "still" to "moving" (motion starts) at the time of frame #1, resolution conversion is performed using frames #1 and #3; and the processing results are mixed for each pixel in the manner described above.
[ example 10]
An image signal processing method according to embodiment 10 of the present invention will be described with reference to fig. 33.
Embodiment 10 shows an image signal processing method in which a control unit cooperating with software realizes processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 9. The image processing apparatus that performs the image signal processing method according to the present embodiment is the image processing apparatus shown in fig. 18 as in embodiment 2, and therefore, the description thereof is omitted.
Fig. 33 is a flowchart showing the operation of the present embodiment. In the flow of fig. 33, starting from step (3301), the resolution is increased horizontally, vertically, obliquely (lower right), and obliquely (upper right) in steps (5-1), (5-2), (5-3), and (5-4). In each of the steps (5-1), (5-2), (5-3), and (5-4), the processing step (5) shown in figs. 14 to 16, or the processing step (5) shown in figs. 42 to 44 described later, may be performed for each of the horizontal, vertical, oblique (lower right), and oblique (upper right) directions. That is, as shown in fig. 28, the "frequency-phase" characteristics such as the π/2 phase shift (1407) (1408) and the Hilbert transform (1510) may be changed for each direction, and the phase difference θ replaced with θH, θV, (θH + θV), and (−θH + θV). As described with reference to figs. 14 to 16, the processing results of steps (5-1), (5-2), (5-3), and (5-4) are written into the respective frame buffers #3. Next, in steps (3302-1), (3302-2), (3302-3), and (3302-4), pixel interpolation in the vertical, horizontal, and oblique directions is performed to generate all the pixels of each two-dimensional frame buffer #3, so that the numbers of horizontal and vertical pixels match those of the output frame. Next, in step (3303), the data of the frame buffers #3 are mixed pixel by pixel according to the methods described with reference to figs. 29, 30, and 32, and output to the frame buffer #4 for output. When the operations of the earlier embodiments that use only the horizontal and vertical directions are realized by a software program, the steps (5-3) and (5-4) that perform the oblique processing are unnecessary, as are the steps (3302-3) and (3302-4) that perform pixel interpolation on their results; as the mixing method in step (3303), the data are then mixed according to the methods described with reference to figs. 22, 23, and 26.
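The flow above can be sketched as a hedged skeleton; `enhance`, `interpolate`, and `mix` are hypothetical placeholders for the per-direction steps (5-1) to (5-4), the interpolation steps (3302-1) to (3302-4), and the mixing step (3303), not functions defined by the patent.

```python
DIRECTIONS = ("horizontal", "vertical", "lower_right", "upper_right")

def process_frame(frame1, frame2, enhance, interpolate, mix):
    """Run the four directional enhancement passes, interpolate each result
    to full output resolution (frame buffers #3), then mix per pixel into
    the output (frame buffer #4)."""
    buffers = {}
    for d in DIRECTIONS:
        hi = enhance(frame1, frame2, d)       # steps (5-1)..(5-4)
        buffers[d] = interpolate(hi, d)       # steps (3302-1)..(3302-4)
    return mix(buffers)                       # step (3303)
```

Dropping the two oblique entries from `DIRECTIONS` gives the horizontal/vertical-only variant mentioned at the end of the paragraph.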
According to the image signal processing method of embodiment 10 described above, a high-resolution image having a high resolution in an oblique direction as well as in the horizontal direction and the vertical direction can be generated.
[ example 11]
Fig. 38 shows an image processing apparatus according to embodiment 11 of the present invention. The image processing apparatus of the present embodiment includes: an input unit (1) to which a frame sequence of a moving image such as a television broadcast signal is input; a resolution conversion unit (8) that performs a twofold resolution conversion in the horizontal direction and a twofold resolution conversion in the vertical direction using the 4 frames input from the input unit (1); and a display unit (3) that displays an image based on the frames whose resolution has been enhanced by the resolution conversion unit (8). In the resolution conversion unit (8), the input image signals of the four frames are phase-shifted in the horizontal direction, the vertical direction, and the horizontal/vertical direction, respectively, whereby the folding components in the two-dimensional frequency region are removed and two-dimensional high resolution is achieved. The details of the resolution conversion unit (8) are described below.
In fig. 38, first, the position estimating units (3806-2), (3806-3), and (3806-4) estimate the two-dimensional position of the corresponding pixel in each of frame #2, frame #3, and frame #4 with reference to the two-dimensional sampling position of the pixel to be processed in frame #1 input to the input unit (1), and obtain the horizontal phase differences θH2 (3807-2), θH3 (3807-3), and θH4 (3807-4) and the vertical phase differences θV2 (3808-2), θV3 (3808-3), and θV4 (3808-4). Next, the horizontal/vertical up-converters (3801-1), (3801-2), (3801-3), and (3801-4) of the motion compensation/up-conversion unit (3810) perform motion compensation on frames #2, #3, and #4 using the information on the phase differences θH2 (3807-2), θH3 (3807-3), θH4 (3807-4), θV2 (3808-2), θV3 (3808-3), and θV4 (3808-4) to align them with frame #1, and double the number of pixels of each frame in each of the horizontal and vertical directions, increasing the pixel density by a factor of 4 in total. In the phase shifting unit (3811), the phase of this high-density data is shifted by a predetermined amount in the horizontal direction, the vertical direction, and the horizontal/vertical direction by the horizontal phase shifters (3803-1) to (3803-4), the vertical phase shifters (3804-1) to (3804-4), and the horizontal/vertical phase shifters (3805-1) to (3805-4). Here, as the means for shifting the phase of the data by a fixed amount, a π/2 phase shifter such as the Hilbert transformer described above can be used. In the folding component removing unit (3809), the folding components in the horizontal, vertical, and horizontal/vertical directions are removed using the 16 signals in total from the phase shifting unit (3811) and the 6 phase difference signals in total from the position estimating unit (3812), and the result is output. The output is supplied to the display unit (3).
The position estimating units (3806-2), (3806-3), and (3806-4) can use the above-described conventional techniques as they are. The horizontal/vertical up-converters (3801-1), (3801-2), (3801-3), and (3801-4) extend the operation and structure shown in figs. 5 and 6 two-dimensionally, to both the horizontal and vertical directions. Details of the phase shifting unit (3811) and the folding component removing unit (3809) will be described later.
Fig. 39 shows a configuration example of the horizontal/vertical phase shifters (3805-1), (3805-2), (3805-3), and (3805-4). Since the phase in the horizontal direction and the phase in the vertical direction of an image signal are independent of each other, the horizontal/vertical phase shifter (3805) can be realized by connecting the vertical phase shifter (3804) and the horizontal phase shifter (3803) in series, as shown in the figure. The same operation is obtained even if the connection order is reversed and the horizontal phase shifter (3803) is placed before the vertical phase shifter (3804).
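This separability can be checked numerically. Below is a sketch of a π/2 phase shifter along one axis, implemented in the frequency domain rather than with the FIR Hilbert transformer of the patent (an assumption for brevity); applying the vertical then horizontal shift gives the same result as the reverse order.

```python
import numpy as np

def phase_shift_90(x, axis):
    """pi/2 phase shifter (Hilbert-transform-like) along one axis via FFT:
    positive frequencies multiplied by -j, negative by +j; DC and Nyquist
    bins are zeroed so a real input maps to a real output."""
    n = x.shape[axis]
    f = np.fft.fftfreq(n)
    rot = np.where(f > 0, -1j, 1j).astype(complex)
    rot[f == 0] = 0
    rot[np.isclose(np.abs(f), 0.5)] = 0
    shape = [1] * x.ndim
    shape[axis] = n
    X = np.fft.fft(x, axis=axis) * rot.reshape(shape)
    return np.real(np.fft.ifft(X, axis=axis))
```

Because the two one-dimensional operators act on independent axes, they commute, which is exactly the argument made for the series combination of (3803) and (3804).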
Fig. 40 shows the detailed operation of the phase shifting unit (3811) and the folding component removing unit (3809). Fig. 40(a) is a two-dimensional frequency region represented by a horizontal frequency μ and a vertical frequency ν. If the horizontal sampling frequency of the original input image is μs and the vertical sampling frequency is νs, then, taking the signal in the vicinity of the origin of fig. 40(a) (that is, the position (μ, ν) = (0, 0)) as the original component, folding components are generated at the positions (μ, ν) = (μs, 0), (μ, ν) = (0, νs), and (μ, ν) = (μs, νs). Folding components are also generated at the positions symmetric about the origin (that is, (μ, ν) = (−μs, 0), (μ, ν) = (0, −νs), and (μ, ν) = (−μs, −νs)), but because of the symmetry of frequencies they are equivalent to the components at (μ, ν) = (μs, 0), (μ, ν) = (0, νs), and (μ, ν) = (μs, νs), respectively. In the horizontal twofold and vertical twofold resolution enhancement by the resolution conversion unit (8) shown in fig. 38, the motion compensation/up-conversion unit (3810) performs twofold up-conversion (zero insertion) in each of the horizontal and vertical directions to quadruple the number of pixels, and then the folding components generated at the positions (μ, ν) = (μs, 0), (μ, ν) = (0, νs), and (μ, ν) = (μs, νs) indicated in fig. 40(a) are removed. This operation is described below.
Fig. 40(b) shows the states of the horizontal and vertical phase rotation of each component at the positions (μ, ν) = (0, 0), (μ, ν) = (μs, 0), (μ, ν) = (0, νs), and (μ, ν) = (μs, νs). As shown in fig. 4, no phase rotation of the original component occurs between frames having different sampling phases; only the folding components rotate in phase according to the sampling phase difference. Taking the phase of the original component as the reference (Re axis), the phase shifting unit (3811) generates the components on the orthogonal phase axis (Im axis) in the horizontal, vertical, and horizontal/vertical directions as shown in fig. 38. If the total of the phase-shifted signals is then made "1" only for the component on the horizontal Re axis (no phase rotation in the horizontal direction) and the vertical Re axis (no phase rotation in the vertical direction) of the original component (μ, ν) = (0, 0) (that is, #1 in fig. 40(b)), and "0" for all the other components (that is, #2 to #16), the folding components can be eliminated and only the original component extracted.
Fig. 40(c) shows the matrix equation that realizes the phase relationship shown in fig. 40(b). In the figure, M is a matrix having 16 × 16 elements and represents the horizontal, vertical, and horizontal/vertical phase rotations. The details of the matrix M will be described later. The left side of the equation contains the values of fig. 40(b), and C1ReRe to C4ImIm on the right side are the coefficients by which the folding component removing unit (3809) shown in fig. 38 multiplies the respective output signals of the phase shifting unit (3811). That is, for frame #1 shown in fig. 38, the output signal of the delayer (3802-1) is multiplied by the coefficient C1ReRe, the output signal of the horizontal phase shifter (3803-1) by the coefficient C1ImRe, the output signal of the vertical phase shifter (3804-1) by the coefficient C1ReIm, and the output signal of the horizontal/vertical phase shifter (3805-1) by the coefficient C1ImIm. Similarly, for frame #2, the output signal of the delayer (3802-2) is multiplied by the coefficient C2ReRe, the output signal of the horizontal phase shifter (3803-2) by the coefficient C2ImRe, the output signal of the vertical phase shifter (3804-2) by the coefficient C2ReIm, and the output signal of the horizontal/vertical phase shifter (3805-2) by the coefficient C2ImIm. For frame #3, the output signal of the delayer (3802-3) is multiplied by the coefficient C3ReRe, the output signal of the horizontal phase shifter (3803-3) by the coefficient C3ImRe, the output signal of the vertical phase shifter (3804-3) by the coefficient C3ReIm, and the output signal of the horizontal/vertical phase shifter (3805-3) by the coefficient C3ImIm.
For frame #4, the output signal of the delayer (3802-4) is multiplied by the coefficient C4ReRe, the output signal of the horizontal phase shifter (3803-4) by the coefficient C4ImRe, the output signal of the vertical phase shifter (3804-4) by the coefficient C4ReIm, and the output signal of the horizontal/vertical phase shifter (3805-4) by the coefficient C4ImIm. When all 16 signals multiplied by the above coefficients are added by the folding component removing unit (3809) described later, the folding components can be eliminated and only the original component extracted, provided the coefficients C1ReRe to C4ImIm are determined so that the relationship in fig. 40(c) is always satisfied.
Fig. 40(d) shows the details of the matrix M. The matrix M has 16 × 16 elements as described above and is composed of partial matrices mij each having 4 × 4 elements (where the row number i and the column number j are integers satisfying 1 ≤ i ≤ 4 and 1 ≤ j ≤ 4). The partial matrices mij are classified according to the row number i into the forms shown in figs. 40(e), (f), (g), and (h).
Fig. 40(e) shows elements of the partial matrix m1j (i.e., m11, m12, m13, and m14) when the row number i is 1. The partial matrix m1j is an element that acts on a component having a frequency (μ, ν) of (0, 0), and is a unit matrix (i.e., a matrix in which elements on a diagonal line inclined downward to the right are all 1 and the remaining elements are all 0) because no horizontal/vertical phase rotation occurs regardless of a sampling phase difference between frames.
Fig. 40(f) shows the elements of the partial matrix m2j (i.e., m21, m22, m23, and m24) for row number i = 2. The partial matrix m2j acts on the component at (μ, ν) = (μs, 0), and is a rotation matrix that rotates the phase in the horizontal direction according to the sampling horizontal phase difference θHj (where j is an integer satisfying 1 ≤ j ≤ 4). That is, #5 and #6, and #7 and #8 shown in fig. 40(b), which share the same vertical phase axis, form pairs, and each pair's phase is rotated by θHj about the horizontal frequency axis by the rotation matrix. The horizontal phase difference θH1 for j = 1 is not shown in fig. 38, but since it is the phase difference between frame #1 (the reference) and frame #1 itself (the processing target), it may be treated as θH1 = 0. Similarly, the vertical phase difference θV1 is treated as θV1 = 0 below.
Fig. 40(g) shows the elements of the partial matrix m3j (i.e., m31, m32, m33, and m34) for row number i = 3. The partial matrix m3j acts on the component at (μ, ν) = (0, νs), and is a rotation matrix that rotates the phase in the vertical direction according to the sampling vertical phase difference θVj (where j is an integer satisfying 1 ≤ j ≤ 4). That is, #9 and #11, and #10 and #12 shown in fig. 40(b), which share the same horizontal phase axis, form pairs, and each pair's phase is rotated by θVj about the vertical frequency axis by the rotation matrix.
Fig. 40(h) shows the elements of the partial matrix m4j (i.e., m41, m42, m43, and m44) for row number i = 4. The partial matrix m4j acts on the component at (μ, ν) = (μs, νs), and is a rotation matrix that rotates the phase in both the horizontal and vertical directions based on both the sampling horizontal phase difference θHj and the vertical phase difference θVj (where j is an integer satisfying 1 ≤ j ≤ 4). That is, it is the product of m2j and m3j.
In other words, taking as the general form the rotation matrix m4j, in which the phases in both the horizontal and vertical directions are rotated, m1j corresponds to setting θHj = θVj = 0, m2j to setting θVj = 0, and m3j to setting θHj = 0, giving the same partial matrices as described above.
In this way, the matrix M is determined from the sampling phase differences (θHj, θVj), and the 16 coefficients in total (C1ReRe to C4ImIm) are determined so that the equation shown in fig. 40(c) is always satisfied. For this purpose, the inverse matrix M⁻¹ of the matrix M may be obtained in advance, and the coefficients (C1ReRe to C4ImIm) determined by the calculation shown in fig. 40(i). As methods for obtaining the inverse matrix M⁻¹, the method using the cofactor matrix, Gauss-Jordan elimination, and decomposition into triangular matrices are well known, and are therefore not illustrated here.
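The construction of M and the coefficient solve can be sketched as follows. This is a hedged reconstruction under stated assumptions: each 4×4 block is modeled as the Kronecker product of the vertical and horizontal 2×2 rotations, which is consistent with the pairings described for figs. 40(e) to (h) but is not the patent's literal figure.

```python
import numpy as np

def rot(theta):
    """2x2 rotation: the phase rotation of one folded component between
    its Re and Im (quadrature) signals."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def build_m(theta_h, theta_v):
    """Assumed reconstruction of the 16x16 matrix M of fig. 40(d).
    theta_h, theta_v: length-4 per-frame phase differences (frame #1 first,
    with thetaH1 = thetaV1 = 0 as stated above). Row blocks act on the
    components at (0,0), (mu_s,0), (0,nu_s), (mu_s,nu_s); m1j is the
    identity and m4j = m2j @ m3j."""
    m = np.zeros((16, 16))
    for j in range(4):
        angles = [(0.0, 0.0), (theta_h[j], 0.0), (0.0, theta_v[j]),
                  (theta_h[j], theta_v[j])]
        for i, (th, tv) in enumerate(angles):
            m[4 * i:4 * i + 4, 4 * j:4 * j + 4] = np.kron(rot(tv), rot(th))
    return m

def coefficients(theta_h, theta_v):
    """Solve fig. 40(c): M @ c = e, where e is 1 for the original component
    (#1) and 0 for #2..#16. np.linalg.solve raises LinAlgError when M is
    singular -- the case handled by the fig. 42 fallback path."""
    e = np.zeros(16)
    e[0] = 1.0
    return np.linalg.solve(build_m(theta_h, theta_v), e)
```

For example, phase differences forming a regular quarter-pixel sub-grid, such as θH = (0, π/2, 0, π/2) and θV = (0, 0, π/2, π/2), yield an invertible M.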
FIG. 41 shows a detailed configuration example of the folding component removing unit (3809) shown in fig. 38. In this figure, the coefficient determination unit (4101) generates the coefficients (C1ReRe to C4ImIm) by the inverse matrix operation shown in fig. 40(i), based on the horizontal phase differences (θH2, θH3, θH4) and the vertical phase differences (θV2, θV3, θV4) output from the position estimating unit (3812) shown in fig. 38. These coefficients are multiplied by the signals of each frame output from the phase shifting unit (3811) in the multipliers (4102), and the products are all added in the adder (4103) to form the output signal of the folding component removing unit (3809) (i.e., the output signal of the resolution conversion unit (8)). Since the horizontal phase differences (θH2, θH3, θH4) and the vertical phase differences (θV2, θV3, θV4) generally take different values for each pixel of the input frame, the above inverse matrix operation must be performed for each pixel. Alternatively, the horizontal and vertical phase differences may be quantized to representative phase differences (for example, integer multiples of π/8 as shown in fig. 9(d)), the coefficients (C1ReRe to C4ImIm) generated in advance, and the results tabulated in a ROM (Read Only Memory) or the like. Since general table lookup methods are well known, illustration is omitted.
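The ROM tabulation can be sketched as below. For brevity this hedged sketch indexes the table by a single quantized (θH, θV) pair, whereas the apparatus actually has six phase differences; `coeff_fn` is a hypothetical stand-in for the per-pixel inverse-matrix computation.

```python
import numpy as np

STEP = np.pi / 8  # representative phase step, per fig. 9(d)

def quantize(theta):
    """Snap a phase difference to the nearest multiple of pi/8 (mod 2*pi),
    returned as an integer index 0..15."""
    return int(np.round(theta / STEP)) % 16

def build_rom_table(coeff_fn):
    """Precompute coefficient sets for every quantized (thetaH, thetaV)
    pair. Entries whose matrix is singular are skipped; the fig. 42
    fallback path would handle those at run time."""
    table = {}
    for ih in range(16):
        for iv in range(16):
            try:
                table[(ih, iv)] = coeff_fn(ih * STEP, iv * STEP)
            except np.linalg.LinAlgError:
                pass
    return table

def lookup(table, theta_h, theta_v):
    """Per-pixel table reference replacing the inverse-matrix operation."""
    return table.get((quantize(theta_h), quantize(theta_v)))
```

The trade-off is the usual one: a one-time cost of at most 16 × 16 solves replaces a per-pixel matrix inversion.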
FIG. 42 shows another configuration example of the folding component removing unit (3809) shown in fig. 38. As described above, when the 16 coefficients in total (C1ReRe to C4ImIm) are determined so that the equation shown in fig. 40(c) is always satisfied, the inverse matrix M⁻¹ of the matrix M is obtained in advance and the coefficients are determined by the calculation shown in fig. 40(i). However, depending on the values of the horizontal phase differences (θH2, θH3, θH4) and the vertical phase differences (θV2, θV3, θV4), the inverse matrix M⁻¹ may not exist, in which case the coefficients (C1ReRe to C4ImIm) cannot be determined. Whether or not the inverse matrix M⁻¹ exists can easily be determined in the coefficient determination unit (4101) in the course of arithmetic processing such as the method using the cofactor matrix, Gauss-Jordan elimination, or decomposition into triangular matrices. When the inverse matrix M⁻¹ does not exist, the output signal may be switched so that it is obtained from frame #1 and frame #2 by the resolution conversion unit (4) shown in fig. 21 and the like. That is, using the horizontal folding component removing unit (2108), the vertical folding component removing unit (2109), the pixel interpolators (2003) and (2007), and the mixer (2009) shown in fig. 42, a resolution conversion result is generated from frames #1 and #2 output from the phase shifting unit (3811) and from the horizontal phase difference θH2 (3807-2) and the vertical phase difference θV2 (3808-2) output from the position estimating unit, and the switch (4201) selects between the result of the adder (4103) and this result to form the output signal.
Instead of the two-way switching by the switch (4201), the output of the adder (4103) and the output of the mixer (2009) may be mixed continuously (i.e., by a weighted addition operation); for example, the mixing ratio of the output of the mixer (2009) may be increased in the vicinity of pixels for which the inverse matrix M⁻¹ does not exist.
By the above-described folding component removal processing, a resolution improvement effect can be achieved in the two-dimensional frequency region shown in fig. 40(a) from the center out to (μ, ν) = (μs, 0) in the horizontal direction, out to (μ, ν) = (0, νs) in the vertical direction, and out to (μ, ν) = (μs, νs) in the oblique direction.
Here, the image signal processing apparatus and method shown in embodiment 7 also increase the resolution in the oblique direction in addition to the horizontal and vertical directions, but as shown in fig. 31, their resolution improvement effect in the oblique direction does not extend as far as (μ, ν) = (μs, νs).
Thus, the image signal processing apparatus shown in fig. 38 has the effect that, compared with the image signal processing apparatus of embodiment 7, the resolution can be improved up to higher-frequency components in the oblique direction.
Next, the operation of the image signal processing apparatus according to embodiment 11 of the present invention will be described with reference to fig. 44. Fig. 44(a) shows frame #1 (4401), frame #2 (4402), frame #3 (4403), frame #4 (4404), and frame #5 (4405) input to the resolution conversion unit (8) shown in fig. 38, and fig. 44(b) shows the corresponding frames output from the resolution conversion unit (8). In each frame, the subject moves clockwise by 1/4 pixel per frame, deliberately making one full turn every four frames. This motion also continues after frame #5.
In the conventional techniques described in patent document 1, patent document 2, and non-patent document 1, as described above, when the resolution of a horizontal/vertical two-dimensional input signal is increased, the aliasing comes from both the vertical and horizontal directions; since the band of the original signal is doubled in both directions, three folding components overlap, and 2M + 1 = 7 pieces of digital data (signals corresponding to 7 frame images) are necessary to eliminate the folding components. Therefore, when the input signal repeats with a period of four frames as shown in fig. 44(a), independent data cannot be obtained no matter which 7 frames are selected, so the solution of the high resolution processing is not determined and the problem cannot be solved.
On the other hand, in embodiment 11, using for example four adjacent frames (frame #1 (4401), frame #2 (4402), frame #3 (4403), and frame #4 (4404)), the folding components in the horizontal, vertical, and horizontal/vertical directions are removed as shown in fig. 44(b), whereby high resolution can be achieved. The operating state of the present embodiment can thus be confirmed by using the input image of fig. 44(a) as a test pattern. If the generally known circular zone plate (CZP) is used as the test pattern, the effect of the resolution conversion can be seen directly on the display unit (3): if the circular zone plate moves one full turn every four frames as shown in fig. 44(a), an image whose horizontal and vertical resolutions are increased can be displayed at all times, and the resolution enhancement effect confirmed.
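A test sequence of this kind can be generated as sketched below, assuming a circular zone plate of the standard cos(k·r²) form; the chirp constant `k` and frame size are illustrative, and the circle radius is chosen so successive positions are roughly a quarter pixel apart.

```python
import numpy as np

def zone_plate(h, w, cx, cy, k=0.05):
    """Circular zone plate cos(k * r^2), the standard resolution test chart,
    centered at the sub-pixel position (cx, cy)."""
    y, x = np.mgrid[0:h, 0:w]
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return np.cos(k * r2)

def czp_sequence(n_frames, h=64, w=64):
    """Frames whose CZP center moves clockwise on a small circle, returning
    to the start every four frames, as in fig. 44(a). The radius makes the
    chord between successive positions about 1/4 pixel."""
    radius = 0.25 / np.sqrt(2)
    frames = []
    for t in range(n_frames):
        ang = -2 * np.pi * t / 4       # clockwise, one turn per 4 frames
        cx = w / 2 + radius * np.cos(ang)
        cy = h / 2 + radius * np.sin(ang)
        frames.append(zone_plate(h, w, cx, cy))
    return frames
```

Feeding four consecutive frames of this sequence into a resolution converter and inspecting the output on a display is exactly the visual check the paragraph above describes.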
According to the image signal processing apparatus of embodiment 11 described above, a plurality of phase shifts with different directions (horizontal, vertical, and horizontal/vertical) are applied to the image signal of each of the 4 input image frames, generating 4 signals from each image signal and thus 16 signals in total from the 4 input image frames. Based on the phase differences of the 4 input image frames, a coefficient for eliminating the folding components when the 16 signals are combined is calculated for each pixel of each of the 16 signals. For each pixel of the generated image, the pixel values of the corresponding pixels of the 16 signals are multiplied by their respective coefficients and summed, yielding a new pixel value of the high-resolution image.
Thus, the image signal processing apparatus of embodiment 11 can generate a high-resolution image in which the resolution is enhanced not only in the horizontal and vertical directions but also for the oblique components in the lower right and upper right directions.
In addition, compared with the image signal processing apparatus of embodiment 7, the image signal processing apparatus of embodiment 11 can improve the resolution up to higher-frequency components in the oblique direction, and can therefore generate a high-resolution image of higher quality.
[ example 12]
An image signal processing method according to embodiment 12 of the present invention will be described with reference to fig. 43 and 19.
Embodiment 12 shows an image signal processing method in which a control unit cooperating with software realizes processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 11.
Here, an image signal processing apparatus for implementing the image signal processing method of the present embodiment is described using fig. 19. The image signal processing apparatus shown in fig. 19 includes: an input unit (1) to which an image signal such as a television broadcast signal is input; a storage unit (11) that stores software for processing the signal input from the input unit (1); a control unit (10) that processes the image signal input from the input unit (1) in cooperation with the software stored in the storage unit (11); a frame buffer #1(31), a frame buffer #2(32), a frame buffer #3(33), and a frame buffer #4(34) used for buffering data during the image signal processing by the control unit (10); and a frame buffer #5(35) for buffering the signal output from the control unit (10) after the image signal processing, before it is output to the display unit (3).
Here, although the image signal processing apparatus shown in fig. 19 includes 4 input units (1), as many as the number of frames used for the image processing, it may instead include only one input unit (1) to which the 4 frames are input successively.
Also, the frame buffer #1(31), the frame buffer #2(32), the frame buffer #3(33), and the frame buffer #4(34) used for buffering data, together with the storage unit (11) storing the software, may each be configured with a different memory chip, or may be configured by assigning their data to address ranges within one or more memory chips.
In this embodiment, the control unit (10) performs image signal processing on an image signal input from the input unit (1) in cooperation with software stored in the storage unit (11), and outputs the image signal to the display unit (3). Details of this image signal processing will be described with reference to fig. 43.
The flowchart of fig. 43 starts from step (4301); in steps (4302-1)(4302-2)(4302-3)(4302-4), the image data of each frame is up-rated by a factor of two in the horizontal and vertical directions. That is, the image data of frame #1 is up-rated and written into the frame buffer #1 in step (4302-1), the image data of frame #2 is up-rated and written into the frame buffer #2 in step (4302-2), the image data of frame #3 is up-rated and written into the frame buffer #3 in step (4302-3), and the image data of frame #4 is up-rated and written into the frame buffer #4 in step (4302-4). Here, the up-rating can be realized by first clearing each frame buffer to zero and then writing the data at every other pixel position horizontally and vertically.
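This zero-insertion up-rating can be sketched as follows (a minimal illustration; `upsample2x` is a hypothetical helper name, and a real implementation would follow the zero insertion with the phase-shift filtering of the later steps):

```python
def upsample2x(frame):
    # "Up-rate" a frame by 2x in both directions: clear the buffer to zero,
    # then write each source pixel at every other horizontal/vertical position.
    h, w = len(frame), len(frame[0])
    buf = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            buf[2 * y][2 * x] = frame[y][x]
    return buf

up = upsample2x([[1, 2], [3, 4]])
print(up)  # [[1, 0, 2, 0], [0, 0, 0, 0], [3, 0, 4, 0], [0, 0, 0, 0]]
```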
Next, in step (4303), the first pixel (for example, the upper left pixel) of the frame buffer #1 is set as a processing target, and the processing is circulated until the processing of all the pixel data of the frame buffer #1 is completed.
In step (4304-2), the position of the corresponding pixel in the frame buffer #2 is estimated based on the target pixel in the frame buffer #1, and the horizontal phase difference θ H2 and the vertical phase difference θ V2 are output. Similarly, in step (4304-3), the position of the corresponding pixel in the frame buffer #3 is estimated based on the target pixel in the frame buffer #1, and the horizontal phase difference θ H3 and the vertical phase difference θ V3 are output. In step (4304-4), the position of the corresponding pixel in the frame buffer #4 is estimated based on the target pixel in the frame buffer #1, and the horizontal phase difference θ H4 and the vertical phase difference θ V4 are output. In this case, the above-described conventional technique can be used as it is as a method of estimating the position of the corresponding pixel.
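The embodiment defers to conventional techniques for estimating the corresponding-pixel position. One such conventional refinement, sketched here under that assumption (it is not necessarily the exact method of the patent), is a parabola fit over the block-matching errors around the integer-pixel minimum, which yields the fractional displacement from which θH and θV can be derived:

```python
def subpixel_offset(e_prev, e_min, e_next):
    # Parabola fit over three matching errors around the integer minimum;
    # returns a fractional displacement in (-0.5, 0.5).
    denom = e_prev - 2.0 * e_min + e_next
    return 0.0 if denom == 0 else (e_prev - e_next) / (2.0 * denom)

# Errors sampled from a quadratic whose true minimum lies at +0.25 pixel:
errs = [(d - 0.25) ** 2 for d in (-1, 0, 1)]
print(subpixel_offset(*errs))  # 0.25
```

The integer displacement plus this fraction converts to a phase difference of the original sampling; for example, a half-pixel shift corresponds to θ = π.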
In step (4305-2), the pixels in the vicinity of the corresponding pixel in the frame buffer #2 are motion-compensated based on the horizontal phase difference θ H2 and the vertical phase difference θ V2 obtained in step (4304-2). The motion compensation operation may be performed in the same manner in each of the horizontal and vertical directions using the operation described with reference to fig. 5 and 6. Similarly, in step (4305-3), motion compensation is performed on pixels in the vicinity of the corresponding pixel in the frame buffer #3 based on the horizontal phase difference θ H3 and the vertical phase difference θ V3 obtained in step (4304-3). In step (4305-4), the pixels in the vicinity of the corresponding pixel in the frame buffer #4 are motion-compensated based on the horizontal phase difference θ H4 and the vertical phase difference θ V4 obtained in step (4304-4).
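As a hedged stand-in for the motion-compensation filtering of fig. 5 and fig. 6 (whose exact tap values are not reproduced here), the fractional part of the displacement can be compensated by interpolating between neighbouring pixels, applied identically in the horizontal and vertical directions:

```python
def motion_compensate_1d(row, frac):
    # Shift a row by a fractional pixel via linear interpolation -- a
    # simplified stand-in for the motion-compensation filter; the last
    # sample is repeated at the boundary.
    out = [(1 - frac) * row[i] + frac * row[i + 1] for i in range(len(row) - 1)]
    out.append(row[-1])
    return out

print(motion_compensate_1d([0, 2, 4, 6], 0.5))  # [1.0, 3.0, 5.0, 6]
```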
Next, in step (4313), for the frame buffer #1 and the motion-compensated frame buffers #2, #3, and #4, the horizontal phase is shifted by a fixed amount in steps (4306-1)(4306-2)(4306-3)(4306-4), and the vertical phase is shifted by a fixed amount in steps (4307-1)(4307-2)(4307-3)(4307-4). Further, the horizontal phase shift of steps (4308-1)(4308-2)(4308-3)(4308-4) is applied to the results of steps (4307-1)(4307-2)(4307-3)(4307-4), so that both the horizontal and vertical phases are shifted by the fixed amount. That is, the pixel data in each frame buffer is shifted in phase by π/2 in the horizontal and vertical directions.
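A π/2 phase shift of every frequency component is a discrete Hilbert transform. The sketch below implements it with a direct DFT for clarity (a real-time apparatus would use a finite-tap filter; this is an illustrative assumption, not the patent's filter design):

```python
import cmath, math

def phase_shift_90(x):
    # Discrete Hilbert transform via a direct DFT: every positive-frequency
    # component is multiplied by -j (a -pi/2 phase shift), every
    # negative-frequency component by +j; DC and Nyquist stay unchanged.
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if 0 < k < n / 2:
            X[k] *= -1j
        elif k > n / 2:
            X[k] *= 1j
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

sig = [math.cos(2 * math.pi * t / 8) for t in range(8)]
out = phase_shift_90(sig)  # a cosine becomes a sine of the same frequency
```

Applying such a transform along rows gives the horizontal shift, along columns the vertical shift, and along both in succession the horizontal/vertical shift.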
Next, in step (4309), all 16 coefficients (C1ReRe to C4ImIm) are determined by the method shown in fig. 40 based on the horizontal phase differences (θH2, θH3, θH4) and the vertical phase differences (θV2, θV3, θV4); the outputs of step (4313) are multiplied by these coefficients and summed (weighted addition), whereby the aliasing components are removed from the pixel data of the frame buffers #1, #2, #3, and #4, and the result is output to the frame buffer #5. The operation of removing the aliasing components is the same as the operation described with reference to fig. 41 or fig. 42.
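The coefficient table of fig. 40 is not reproduced here. As a hedged sketch of the underlying idea, the weights can be viewed as the solution of a small complex linear system that keeps the true spectral component with unit gain while driving the horizontal, vertical, and diagonal fold-back components (each rotated by its frame's phase difference) to zero; the real and imaginary parts of the complex weights then play the role of the coefficients applied to each frame and to its π/2-shifted versions. The phase values below are hypothetical per-pixel examples:

```python
import cmath

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a complex system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

e = lambda t: cmath.exp(1j * t)
# Hypothetical phase differences of frames #1..#4 (frame #1 is the reference):
thH = [0.0, cmath.pi / 2, 0.0, cmath.pi / 2]
thV = [0.0, 0.0, cmath.pi / 2, cmath.pi / 2]
# Row 0 keeps the true component; rows 1-3 cancel the horizontal, vertical,
# and horizontal/vertical fold-back components respectively.
A = [[e(0)] * 4,
     [e(t) for t in thH],
     [e(t) for t in thV],
     [e(h + v) for h, v in zip(thH, thV)]]
w = solve(A, [1, 0, 0, 0])
print(abs(sum(w)))  # ~1.0: the true component survives with unit gain
```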
Next, in step (4310), it is determined whether the processing of all the pixels in the frame buffer #1 has been completed. If not, the next pixel (for example, the pixel adjacent on the right) is set as the processing target in step (4311) and the flow returns to steps (4304-2)(4304-3)(4304-4); if completed, the processing ends at step (4312).
By performing the above processing, a signal whose resolution has been increased using the pixel data of the frame buffers #1, #2, #3, and #4 can be output to the frame buffer #5. When applied to moving images, it suffices to repeat the processing from step (4301) to step (4312) for each frame.
In addition, although fig. 38, 41, 42, and 43 describe the case where the number of input frames is 4, the present invention is not limited to this; when n frames (where n is an integer of 4 or more) are input, 4 frames suitable for the resolution conversion processing may be selected from among them and used. For example, the 4 frames may be selected from among the n frames so that the number of pixels for which the inverse matrix M⁻¹ of the operation shown in fig. 40(i) does not exist becomes as small as possible, and the 4 frames used for the resolution conversion processing may be switched for each pixel or for each region composed of a plurality of pixels.
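As a simplified two-frame illustration of this selection criterion (an assumption for clarity; the embodiment's actual system is the 4-frame, 16-coefficient one of fig. 40), the cancellation matrix becomes singular when two candidate frames have the same phase modulo 2π, so frames can be chosen to maximize the determinant magnitude:

```python
import cmath, itertools

def separability(pair):
    # |det| of the two-frame, one-dimensional cancellation system
    # [[1, 1], [e^{j*t1}, e^{j*t2}]]; zero means the inverse matrix does
    # not exist and the fold-back cannot be separated at this pixel.
    t1, t2 = pair
    return abs(cmath.exp(1j * t2) - cmath.exp(1j * t1))

# Hypothetical phase differences of one pixel in n = 4 candidate frames:
phases = [0.0, 0.1, cmath.pi, 2 * cmath.pi]
best = max(itertools.combinations(phases, 2), key=separability)
print(best)  # (0.0, 3.141592653589793): half a cycle apart, best conditioned
```

Note that the 0 and 2π candidates are equivalent phases, so that pair would give a singular system and is avoided.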
Thus, the image signal processing method of embodiment 12 has the effect that the resolution can be raised up to higher frequency components in the oblique directions than with the image signal processing method of embodiment 10. The details of this effect are the same as those of the image signal processing apparatus shown in fig. 38 and described in embodiment 11, and are therefore omitted.
As described above, the image signal processing method according to embodiment 12 applies a plurality of phase shifts of different directions (horizontal, vertical, and horizontal/vertical) to each of the image signals of the 4 input image frames, thereby generating 4 signals from each image signal. In this way, 16 signals can be generated from the image signals of the 4 input image frames. Then, based on the phase differences among the 4 input image frames, coefficients for combining the 16 signals so that their aliasing components cancel are calculated for each pixel of each of the 16 signals. For each pixel of the generated image, the pixel values of the corresponding pixels of the 16 signals are multiplied by the respective coefficients and summed, producing a new pixel value of the high-resolution image.
Thus, the image signal processing method according to embodiment 12 can generate a high-resolution image in which the oblique components in the lower-right and upper-right directions are increased in resolution in addition to the horizontal and vertical directions.
In addition, compared with the image signal processing method of embodiment 10, the image signal processing method of embodiment 12 can raise the resolution up to higher frequency components in the oblique directions, and can generate a high-resolution image of higher image quality.
In addition, in the image signal processing apparatuses and image signal processing methods of embodiments 1 to 12, the case where the number of pixels is doubled while the resolution of the image is increased has been described as an example; however, by operating these apparatuses and methods in multiple stages, the number of pixels can be multiplied by a power of 2 (2×, 4×, 8×, ...). That is, by first performing the signal processing using two input image frames to double the number of pixels and obtain intermediate image frames, and then performing the signal processing again using two intermediate image frames as new input image frames, the number of pixels can be doubled once more to obtain an output image frame. In this case, an output image frame having 4 times the number of pixels of the input image frames is obtained. Similarly, if the signal processing is repeated 3 times in total, the number of pixels of the output image frame becomes 8 times that of the input image frames. The number of input image frames required to obtain one output image frame is then also a power of 2 (2, 4, 8, ...).
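The stage arithmetic above can be summarized with a small helper (`multistage` is a hypothetical name used only for this sketch):

```python
def multistage(in_pixels, stages):
    # Each pass doubles the pixel count of a frame; producing one output
    # frame of a pass consumes two frames from the previous pass, so
    # obtaining a single final frame needs 2**stages input frames.
    return in_pixels * 2 ** stages, 2 ** stages

print(multistage(720, 2))  # (2880, 4): 4 input frames -> one 4x frame
print(multistage(720, 3))  # (5760, 8): 8 input frames -> one 8x frame
```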
Further, the final output image can also be output with a pixel-count ratio other than a power of 2 (2×, 4×, 8×, ...).
[ example 13]
Fig. 35 shows an image display device according to embodiment 13 of the present invention. The image display device of the present embodiment is configured to perform the image signal processing described in either embodiment 7 or embodiment 8.
In the figure, an image display device 3500 includes, for example: an input unit 3501 to which broadcast signals, video content, and image content are input via broadcast waves carrying television signals, a network, and the like; a recording/reproducing unit 3502 that records or reproduces the content input from the input unit 3501; a content accumulation unit 3503 in which the recording/reproducing unit 3502 records the content; a video signal processor unit 3504, which is the image signal processing apparatus described in either embodiment 7 or embodiment 8 and processes the video signal or image signal reproduced by the recording/reproducing unit 3502; a display unit 3505 that displays the video signal or image signal processed by the video signal processor unit 3504; an audio output unit 3506 that outputs the audio signal reproduced by the recording/reproducing unit 3502; a controller unit 3507 that controls each component of the image display device 3500; and a user interface unit 3508 with which the user operates the image display device 3500.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 7 or embodiment 8, and therefore, the description thereof is omitted.
Since the image display device 3500 includes, as the video signal processor unit 3504, the image signal processing apparatus described in either embodiment 7 or embodiment 8, the video signal or image signal input to the input unit 3501 can be displayed on the display unit 3505 as a high-resolution, high-quality video signal or image signal. Accordingly, even when a signal whose resolution is lower than that of the display device of the display unit 3505 is input from the input unit 3501, the reproduced signal can be raised in resolution and displayed with high definition and high image quality.
In addition, when the video content or image content accumulated in the content accumulation unit 3503 is reproduced, it can be converted into a video signal or image signal of higher resolution and higher image quality and displayed on the display unit 3505.
Further, since the image processing of the video signal processor unit 3504 is performed after the video content or image content accumulated in the content accumulation unit 3503 is reproduced, the data accumulated in the content accumulation unit 3503 may have a resolution lower than the resolution displayed on the display unit 3505. This has the effect that the amount of content data to be accumulated can be relatively reduced.
The video signal processor unit 3504 may instead be included in the recording/reproducing unit 3502, and the above-described image signal processing may be performed at the time of recording. In this case, the image signal processing need not be performed at the time of reproduction, so the processing load at reproduction can be reduced.
Here, although the case where the image signal processing is performed by the video signal processor unit 3504 has been described, the image signal processing may instead be realized by the controller unit 3507 and software. In this case, the image signal processing may be performed by the method described in either embodiment 7 or embodiment 8.
In this embodiment, at the time of recording, the recording/reproducing unit 3502 may record the content such as video input from the input unit 3501 in the content accumulation unit 3503 after encoding it.
In the present embodiment, if the content such as video input from the input unit 3501 is in an encoded state, the recording/reproducing unit 3502 may decode it and reproduce it.
Further, the content accumulation unit 3503 is not indispensable to the image display device of the present embodiment. In that case, the recording/reproducing unit 3502 reproduces the content such as video input from the input unit 3501 without recording it.
In this case, the video signal or the video signal input to the input unit 3501 can be displayed as a high-resolution and high-quality video signal or video signal on the display unit 3505.
The image display device 3500 may be, for example, a plasma television, a liquid crystal television, a cathode-ray tube (Braun tube) display, or a projector, or may be a device using another display method. Similarly, the display unit 3505 may be, for example, a plasma panel module, an LCD module, or a projection device. The content accumulation unit 3503 may be, for example, a hard disk drive, a flash memory, or a removable-media disk drive. The audio output unit 3506 is, for example, a speaker. The input unit 3501 may have a tuner for receiving broadcast waves, a LAN connector connected to a network, or a USB connector. Further, it may have terminals to which video and audio signals are digitally input, or analog input terminals such as composite terminals and component terminals. It may also be a receiver that receives data wirelessly.
According to the image display device of embodiment 13 described above, the phase of each image signal of the two input image frames included in the input video signal or input image signal is shifted, and two signals are generated from each image signal. In this way, four signals can be generated from the image signals of the two input image frames. Then, based on the phase difference between the two input image frames, coefficients for combining the four signals so that their aliasing components cancel are calculated for each pixel of each of the four signals. For each pixel of the generated image, the pixel values of the corresponding pixels of the four signals are multiplied by the respective coefficients and summed, producing a new pixel value of the high-resolution image. By performing this processing on each pixel of the generated image, an image having a higher resolution in one dimension than the input image frames can be generated.
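A hedged numerical sketch of the two-frame coefficient idea (the embodiment's actual per-pixel formulas are in the referenced figures; here the weights are simply the solution of a two-unknown complex system whose real and imaginary parts act on a frame and on its π/2-shifted version):

```python
import cmath

def two_frame_weights(theta1, theta2):
    # Complex weights w1, w2 with  w1 + w2 = 1  (true component kept) and
    # w1*e^{j*theta1} + w2*e^{j*theta2} = 0  (fold-back cancelled).
    a1, a2 = cmath.exp(1j * theta1), cmath.exp(1j * theta2)
    w2 = a1 / (a1 - a2)
    return 1 - w2, w2

w1, w2 = two_frame_weights(0.0, cmath.pi / 2)
print(abs(w1 + w2))  # 1.0: the true component is preserved
print(abs(w1 + w2 * cmath.exp(1j * cmath.pi / 2)))  # ~0.0: fold-back cancelled
```

The weights degenerate when theta1 = theta2 (mod 2π), which is why frames with distinct sub-pixel phases are needed.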
This processing is performed in the horizontal direction and the vertical direction, respectively, to generate an image with high resolution in the horizontal direction and an image with high resolution in the vertical direction. The image with high resolution in the horizontal direction and the image with high resolution in the vertical direction are subjected to lifting processing in the vertical direction and the horizontal direction, respectively, and then mixed.
Thus, a high-resolution image with high resolution in both the vertical direction and the horizontal direction can be generated from each image signal of two input image frames included in the input video signal or the input image signal. That is, a two-dimensional high-resolution image can be generated and displayed on the display unit.
Further, since the image display device of embodiment 13 uses only two input image frames, high-resolution display can be realized with a small amount of image processing. Thus, an image display device that displays on its display unit a video or image with few aliasing components and high resolution in both the vertical and horizontal directions can be realized.
[ example 14]
The video display apparatus according to embodiment 14 of the present invention is the video display apparatus according to embodiment 13, in which the video signal processor unit 3504 shown in fig. 35 is replaced with the video signal processing apparatus described in embodiment 9. The other structures are the same as those of the image display device of example 13, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 9, and therefore, the description thereof is omitted.
The image display device according to embodiment 14 can use the two input image frames included in the input video signal or input image signal to generate a high-resolution image whose resolution is increased in the horizontal, vertical, and oblique directions compared with the input video or input image, and an image display device that displays this image on the display unit can thus be realized.
[ example 15]
The image display apparatus according to embodiment 15 of the present invention is the image display apparatus according to embodiment 13, wherein the video signal processor unit 3504 shown in fig. 35 is replaced with the video signal processing apparatus described in embodiment 11. The other structures are the same as those of the image display device of example 13, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 11, and therefore, the description thereof is omitted.
According to the image display device of embodiment 15, using 4 input image frames included in the input video signal or the input image signal, it is possible to generate a high-resolution image whose resolution has been increased in the horizontal direction, the vertical direction, and the oblique direction as compared with the input video or the input image, and it is possible to realize an image display device that displays on the display unit.
In addition, the image display device of example 15 can improve the resolution up to a high frequency component in an oblique direction compared to the image display device of example 14, and can display a high-resolution image with higher image quality.
[ example 16]
The image display device according to embodiment 16 of the present invention is the image display device according to embodiment 13, in which the video signal processor unit 3504 shown in fig. 35 is replaced with the video signal processing device described in one of embodiments 1, 3, and 5. The other structures are the same as those of the image display device of example 13, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 1, embodiment 3, or embodiment 5, and therefore, the description thereof is omitted.
According to the image display device of embodiment 16, using two input image frames included in the input video signal or the input image signal, it is possible to generate a high-resolution image whose resolution is higher in one-dimensional direction than the input video or the input image, and to realize an image display device that displays the image on the display unit.
[ example 17]
Fig. 36 shows a video recording and reproducing apparatus according to embodiment 17 of the present invention. The video recording and reproducing apparatus according to the present embodiment is configured to perform the video signal processing described in any one of embodiments 7 and 8.
In the figure, a video recording and reproducing device 3600 includes, for example: an input unit 3501 to which broadcast signals, video content, and image content are input via broadcast waves carrying television signals, a network, and the like; a recording/reproducing unit 3502 that records or reproduces the content input from the input unit 3501; a content accumulation unit 3503 in which the recording/reproducing unit 3502 records the content; a video signal processor unit 3504, which is the image signal processing apparatus described in either embodiment 7 or embodiment 8 and processes the video signal or image signal reproduced by the recording/reproducing unit 3502; a video output unit 3605 that outputs the video signal or image signal processed by the video signal processor unit 3504 to another device or the like; an audio output unit 3606 that outputs the audio signal reproduced by the recording/reproducing unit 3502 to another device or the like; a controller unit 3507 that controls each component of the video recording and reproducing device 3600; and a user interface unit 3508 with which the user operates the video recording and reproducing device 3600.
Since the video recording and reproducing device 3600 includes, as the video signal processor unit 3504, the image signal processing apparatus described in either embodiment 7 or embodiment 8, the video signal or image signal input to the input unit 3501 can be output to another device or the like as a video signal or image signal of higher resolution and higher image quality. It is therefore possible to suitably realize a signal conversion device that converts a low-resolution video signal or image signal into a high-quality, high-definition video signal or image signal.
In addition, when the video content or image content accumulated in the content accumulation unit 3503 is reproduced, it can be converted into a video signal or image signal of higher resolution and higher image quality and then output to another device or the like.
It is therefore possible to suitably realize a video recording/reproducing device that accumulates a low-resolution video signal or image signal and, when reproducing and outputting it, converts it into a high-quality, high-definition video signal or image signal for output.
Further, since the image processing of the video signal processor unit 3504 is performed after the video content or image content accumulated in the content accumulation unit 3503 is reproduced, the data accumulated in the content accumulation unit 3503 may have a resolution lower than that of the signal output to the other device. This has the effect that the amount of content data to be accumulated can be relatively reduced.
The video signal processor 3504 may be included in the recording/reproducing unit 3502, and the video signal processing may be performed during recording. In this case, since the image signal processing is not necessary at the time of reproduction, the processing load at the time of reproduction can be reduced.
Although the case where the image signal processing is performed by the video signal processor unit 3504 has been described, the image signal processing may instead be realized by the controller unit 3507 and software. In this case, the image signal processing may be performed by the method described in either embodiment 7 or embodiment 8.
In this embodiment, the recording/reproducing unit 3502 may encode the content such as video input from the input unit 3501 at the time of recording and record the encoded content in the content accumulating unit 3503.
In this embodiment, if the contents such as video inputted from the input unit 3501 during recording are in an encoded state, the recording/reproducing unit 3502 may decode and reproduce the contents.
The video output unit 3605 and the audio output unit 3606 of the present embodiment may be integrated into one. In that case, a connector that outputs the video signal and the audio signal over a single cable can be used.
The recording/reproducing device 3600 may be, for example, an HDD recorder, a DVD recorder, or a device using another storage device. Similarly, the content accumulation unit 3503 may be, for example, a hard disk drive, a flash memory, or a removable media disk drive.
The input unit 3501 may include a tuner for receiving broadcast waves, a LAN connector connected to a network, or a USB connector. It may also include terminals for digitally inputting video and audio signals, or analog input terminals such as composite terminals and component terminals. It may further be a receiver that receives data wirelessly.
The video output unit 3605 may include a terminal for digitally outputting a video signal, or analog output terminals such as composite terminals and component terminals. It may include a LAN connector connected to a network, or a USB connector. It may further be a transmitter that transmits data wirelessly. The same applies to the audio output unit 3606.
The input unit 3501 may include, for example, an imaging optical system and a light-receiving element. In this case, the video recording and reproducing device 3600 can be applied to, for example, a digital camera, a video camera, or a surveillance camera (surveillance camera system). In that case, the input unit 3501 may form an image of the subject on the light-receiving element via the imaging optical system, generate image data or video data from the signal output from the light-receiving element, and output it to the recording/reproducing unit 3502.
If the video recording and reproducing device 3600 is, for example, a digital camera that records a plurality of images at different times in one shot, a single high-quality, high-resolution image can be obtained by having the video signal processor unit 3504 perform the above image signal processing on the plurality of image data. The image processing of the video signal processor unit 3504 may be performed on the images recorded in the content accumulation unit 3503 when data is output from the digital camera. Alternatively, by integrating the recording/reproducing unit 3502 and the video signal processor unit 3504, the image processing of the video signal processor unit 3504 may be performed before recording in the content accumulation unit 3503. In that case, only the final enlarged image the user wants may be stored in the content accumulation unit 3503, which makes later management of the image data easy.
According to the digital camera described above, high-quality image data having a resolution exceeding that of the light receiving element of the digital camera can be obtained.
If the video recording and reproducing device 3600 is, for example, a video camera, the video formed on the light-receiving element by the imaging optical system of the input unit 3501 may be output to the recording/reproducing unit 3502 as video data. The recording/reproducing unit 3502 may record the video data in the content accumulation unit 3503, and the video signal processor unit 3504 may generate high-resolution video data from the recorded video data. This makes it possible to obtain high-quality video data with a resolution exceeding that of the light-receiving element of the video camera. In this case, the video signal processor unit 3504 may also generate one piece of still-image data using multi-frame data included in the recorded video data, so that a single high-quality image can be obtained from the video data. As with the digital camera described above, the image processing of the video signal processor unit 3504 may be performed either before or after the video data is recorded in the content accumulation unit 3503.
According to the camera described above, high-quality image data having a resolution exceeding that of the light receiving element of the camera and still image data having a high quality using the captured image data can be obtained.
Also when the video recording and reproducing device 3600 is, for example, a surveillance camera (surveillance camera system), high-quality video data with a resolution exceeding that of the light-receiving element of the surveillance camera, as well as high-quality still-image data obtained from the captured video data, can be obtained as with the video camera described above. In this case, even when the input unit 3501 including the imaging optical system and the light-receiving element is separated from the recording/reproducing unit 3502 and connected to it by a network cable or the like, low-resolution video data can be transmitted to the recording/reproducing unit 3502 and the high-resolution processing can be performed afterwards by the video signal processor unit 3504. This makes it possible to obtain high-resolution video data while effectively using the bandwidth of the transmission network from the input unit 3501.
An embodiment of the present invention can also be obtained by integrating the functions and components of the image display devices of embodiments 13 to 16 with those of the video recording and reproducing device of this embodiment. In that case, the video signal or image signal processed by the image signal processing can be displayed and can also be output to another device, so the integrated device can be used as a display device, a recording/reproducing device, or an output device, which is convenient for the user.
According to the video recording and reproducing apparatus of embodiment 17 described above, the phase of the image signal of each of 2 input image frames included in the input video signal or input image signal is shifted, generating 2 signals from each image signal, i.e. 4 signals in total from the 2 input image frames. From the phase difference between the 2 input image frames, a coefficient for combining the 4 signals so that their aliasing components cancel is calculated for each of the 4 signals, for each pixel. For each pixel of the output image, the pixel values of the corresponding pixels of the 4 signals are multiplied by their respective coefficients and summed, yielding the new pixel value of the high-resolution image. By performing this processing for every pixel of the output image, an image with a higher resolution in one dimension than the input image frames is generated.
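The per-pixel combination described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name is hypothetical, and the derivation of the coefficients from the phase difference (the calculation of fig. 9) is not reproduced; the coefficients are simply taken as given per-pixel weights.

```python
import numpy as np

def cancel_aliasing(signals, coeffs):
    """Combine four phase-shifted signals so that, with suitably chosen
    per-pixel coefficients, their aliasing components cancel.

    signals: array of shape (4, H, W) -- the four signals derived from the
             two input frames (each frame plus its phase-shifted copy).
    coeffs:  array of shape (4, H, W) -- per-pixel weights (C0..C3 in the
             text), assumed already derived from the inter-frame phase
             difference.
    Returns the (H, W) array of new high-resolution pixel values.
    """
    # For each output pixel, sum the four corresponding pixel values,
    # each multiplied by its own coefficient.
    return np.sum(signals * coeffs, axis=0)
```

With coefficients chosen according to the phase difference, the aliasing terms in the four signals sum to zero while the baseband terms reinforce; here only the weighted-sum step is shown.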
This processing is performed in the horizontal direction and in the vertical direction, generating an image with high horizontal resolution and an image with high vertical resolution. The horizontally high-resolution image and the vertically high-resolution image are then up-scaled (interpolated) in the vertical and horizontal directions, respectively, and mixed.
This enables generation of an image with high resolution in both the horizontal and vertical directions from the image signals of the 2 input image frames included in the input video signal or input image signal. That is, a two-dimensional high-resolution image can be generated and output.
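The two-pass flow above (a horizontal pass and a vertical pass, each up-scaled in the orthogonal direction and then mixed) can be sketched structurally as below. This is a sketch under stated assumptions: nearest-neighbour repetition stands in for the interpolation step, a simple average stands in for the mixer, and the 1-D resolution enhancement itself is represented by pre-computed inputs; all names are hypothetical.

```python
import numpy as np

def upscale_1d(frame, axis):
    # Nearest-neighbour repetition stands in for the interpolation
    # (up-scaling) step of the real device.
    return np.repeat(frame, 2, axis=axis)

def mix_two_dimensional(frame_h, frame_v):
    """Mix a horizontally high-resolution image and a vertically
    high-resolution image into one two-dimensional result.

    frame_h: horizontally doubled image, shape (H, 2W)
    frame_v: vertically doubled image, shape (2H, W)
    Returns an image of shape (2H, 2W).
    """
    a = upscale_1d(frame_h, axis=0)  # bring frame_h to (2H, 2W)
    b = upscale_1d(frame_v, axis=1)  # bring frame_v to (2H, 2W)
    return 0.5 * (a + b)             # simple average as the mixer
```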
In addition, when an input video signal or input image signal is recorded in advance in the recording unit and then reproduced from it, a two-dimensional high-resolution image having high resolution in both the horizontal and vertical directions can be generated from the 2 input image frames included in the signal, and output.
In addition, because the video recording and reproducing apparatus according to embodiment 17 uses only 2 input image frames, it can output a high-resolution image with a small amount of image processing. Thus, a video recording/reproducing apparatus can be realized that outputs video or images with few residual aliasing components and high resolution in both the vertical and horizontal directions.
[ example 18]
A video recording and reproducing apparatus according to embodiment 18 of the present invention is the video recording and reproducing apparatus according to embodiment 17, in which the video signal processor unit 3504 shown in fig. 36 is replaced with the video signal processing apparatus described in embodiment 9. The other configurations are the same as those of the recording and reproducing apparatus of embodiment 17, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 9, and therefore, the description thereof is omitted.
According to the video recording and reproducing apparatus of embodiment 18, a two-dimensional high-resolution image having a resolution higher than that of the input video or the input image in the horizontal direction, the vertical direction, and the oblique direction can be generated and output using 2 input image frames included in the input video signal or the input image signal.
In addition, when an input video signal or input image signal is recorded in advance in the recording unit and then reproduced from it, a two-dimensional high-resolution image with increased resolution in the horizontal, vertical, and oblique directions can be generated from the 2 input image frames included in the signal, and output.
[ example 19]
A video recording and reproducing apparatus according to embodiment 19 of the present invention is the video recording and reproducing apparatus according to embodiment 17, in which the video signal processor unit 3504 shown in fig. 36 is replaced with the video signal processing apparatus described in embodiment 11. The other configurations are the same as those of the recording and reproducing apparatus of embodiment 17, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 11, and therefore, the description thereof is omitted.
According to embodiment 19, it is possible to realize a video recording and reproducing device that generates and outputs a two-dimensional high-resolution image having higher resolution than the input video or input image in the horizontal, vertical, and oblique directions, using 4 input image frames included in the input video signal or input image signal.
In addition, when an input video signal or input image signal is recorded in advance in the recording unit and then reproduced from it, a two-dimensional high-resolution image with increased resolution in the horizontal, vertical, and oblique directions can be generated from the 4 input image frames included in the signal, and output.
In addition, the resolution enhancement of the recording and reproducing apparatus according to embodiment 19 extends to higher frequency components in the oblique direction than that of the apparatus according to embodiment 18, so a higher-quality high-resolution image can be output.
[ example 20]
The video recording and reproducing apparatus according to embodiment 20 of the present invention is the video recording and reproducing apparatus according to embodiment 17, in which the video signal processor unit 3504 shown in fig. 36 is replaced with the video signal processing apparatus described in embodiment 1, embodiment 3, or embodiment 5. The other configurations are the same as those of the recording and reproducing apparatus of embodiment 17, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is the same as that described in embodiment 1, embodiment 3, or embodiment 5, and therefore, the description thereof is omitted.
According to embodiment 20, it is possible to realize a video recording and reproducing apparatus that generates and outputs a high-resolution image having higher resolution than the input video or input image in a one-dimensional direction, using 2 input image frames included in the input video signal or input image signal.
In addition, when an input video signal or input image signal is recorded in advance in the recording unit and then reproduced from it, a high-resolution image with increased resolution in the one-dimensional direction can be generated from the 2 input image frames included in the signal, and output.
[ example 21]
Fig. 45 shows an image signal processing apparatus according to embodiment 21 of the present invention. The image signal processing apparatus of the present embodiment includes: an input unit (1) for inputting a frame sequence of a moving image such as a television broadcast signal; a resolution conversion unit (4) for combining the frames input from the input unit (1) in the horizontal and vertical directions to perform two-dimensional resolution enhancement; a display unit (3) for displaying an image on the basis of the frame whose resolution has been converted by the resolution conversion unit (4); and a phase conversion unit (4501) for converting the phase difference information used in the resolution conversion unit (4). The resolution conversion unit (4) used in the image signal processing apparatus of the present embodiment is the same as the resolution conversion unit (4) shown in fig. 21 in embodiment 8 of the present invention, and therefore, the description thereof is omitted.
In the image signal processing device according to embodiment 21 of the present invention, motion information of the subject that cannot be used for resolution enhancement in the image signal processing device according to embodiment 8 (the horizontal phase difference θH (2102) and the vertical phase difference θV (2103)) is converted, by the phase conversion unit (4501), into information that can be used for resolution enhancement (the horizontal phase difference θH' (4502) and the vertical phase difference θV' (4503)). The details of the phase conversion unit (4501) are described below.
Fig. 46 shows the operation principle of the phase conversion unit (4501). Fig. 46(a) shows a state in which the subject moves to the right in the image frame (4601). In this example, it is assumed that the motion (4602) of the subject includes only a horizontal component and no vertical component. Fig. 46(b) is an enlarged view of a part (4603) of the image in fig. 46(a), and shows the contour line (4610) of the object moving to the position (4611) with the motion (4602) of the object.
Here, the pixel (4607) at position (x0, y0) on the image frame (4601) corresponds, through the motion (4602) of the subject, to the pixel (4606) at position (x1, y1) one frame before. At this time, the vertical phase difference θV (2103) output from the position estimating unit (2101) shown in fig. 45 is 0, and the coefficients C1 (and C3) necessary for resolution enhancement in the vertical direction cannot be determined by the calculation formula shown in fig. 9. Therefore, if the vertical phase difference θV (2103) is used as it is, the aliasing component cannot be removed by the aliasing component removal unit (2109) shown in fig. 45.
On the other hand, as shown in fig. 46(b), if the luminance values (signal levels) of the pixels on the contour line (4610) of the object are the same, the pixel (4607) at position (x0, y0) on the image frame (4601) can equally be regarded as obtained by moving the pixel (4608) at position (x2, y2) one frame before in the lower-right direction (4604), or the pixel (4609) at position (x3, y3) one frame before in the upper-right direction (4605). That is, if the motion information in the lower-right direction (4604) or the upper-right direction (4605) is used, a vertical phase difference θV' (4503) whose value is not 0 can be obtained, and the aliasing component can be removed by the aliasing component removal unit (2109) shown in fig. 45. Therefore, if the direction of equal luminance is estimated in the vicinity of the pixel (4606) at position (x1, y1) corresponding to the original motion (4602) of the object, motion information of the object that cannot be used for resolution enhancement (the horizontal phase difference θH (2102) and the vertical phase difference θV (2103)) can be converted into information that can be used for resolution enhancement (the horizontal phase difference θH' (4502) and the vertical phase difference θV' (4503)).
Figs. 46(c), (d), and (e) together show the operation of the phase conversion unit (4501). Fig. 46(c) shows the original horizontal phase difference θH (2102) and vertical phase difference θV (2103), whose values are (x0 − x1) and (y0 − y1), respectively. Fig. 46(d) shows the phase conversion along the same-luminance direction: the pixel (4606) at position (x1, y1), corresponding to the original motion (4602) of the object, is replaced by the pixel (4608) at position (x2, y2) on the contour line (4610) of the object. Fig. 46(e) shows, from the operations of figs. 46(c) and (d), how the phase-converted horizontal phase difference θH' (4502) and vertical phase difference θV' (4503) are obtained: they are (x0 − x2) and (y0 − y2), that is, θH' = θH (2102) + (x1 − x2) = θH + ΔθH and θV' = θV (2103) + (y1 − y2) = θV + ΔθV. The resolution conversion unit (4) shown in fig. 45 may then remove the aliasing components using the phase-converted horizontal phase difference θH' (4502) and vertical phase difference θV' (4503). Further, the pixels (4606), (4608), and (4609) shown in fig. 46(b) need not be actual pixels (real pixels); they may be pixels interpolated from neighboring real pixels (interpolated pixels). Also, the pixels (4606), (4607), (4608), and (4609) do not have to lie on the contour line of the object; it is sufficient that they correspond through equal luminance values.
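The arithmetic of figs. 46(c)-(e) reduces to adding the same-luminance correction to the original phase difference. A minimal sketch (the function name is hypothetical; coordinates are as labeled in the text):

```python
def convert_phase(x0, y0, x1, y1, x2, y2):
    """Phase conversion of fig. 46(e).

    (x0, y0): pixel position in the current frame
    (x1, y1): corresponding position one frame before (original motion)
    (x2, y2): same-luminance position chosen near (x1, y1)
    Returns the converted phase differences (theta_h_prime, theta_v_prime).
    """
    theta_h, theta_v = x0 - x1, y0 - y1        # fig. 46(c): original phases
    d_theta_h, d_theta_v = x1 - x2, y1 - y2    # fig. 46(d): correction
    # fig. 46(e): theta' = theta + delta-theta, which equals (x0-x2, y0-y2)
    return theta_h + d_theta_h, theta_v + d_theta_v
```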
Fig. 47 shows a configuration example of the phase conversion unit (4501) that realizes the phase conversion operation shown in fig. 46(e). First, based on frame #2 (4504) input from the input unit (1) shown in fig. 45, a horizontal phase correction value ΔθH (4707) and a vertical phase correction value ΔθV (4708) are generated by the same-luminance direction estimation unit (4702); these are added, by the adders (4703) (4704), to the horizontal phase difference θH (2102) and the vertical phase difference θV (2103) output from the position estimation unit (2101) shown in fig. 45, and the sums are input to the switches (4705) (4706). In parallel, the horizontal phase difference θH (2102) and the vertical phase difference θV (2103) are input to the phase determination unit (4701), which determines whether they can be used directly by the resolution conversion unit (4) shown in fig. 45 for the removal of the aliasing components. When it determines that they can be used directly, the switches (4705) (4706) are set to the upper side, and the horizontal phase difference θH (2102) and the vertical phase difference θV (2103) are output unchanged as the phase-converted horizontal phase difference θH' (4502) and vertical phase difference θV' (4503). On the other hand, when the phase determination unit (4701) determines that they cannot be used directly, the switches (4705) (4706) are set to the lower side, and the outputs of the adders (4703) (4704) are output from the phase conversion unit (4501) as the phase-converted horizontal phase difference θH' (4502) and vertical phase difference θV' (4503).
Fig. 48 shows how the phase determination unit (4701) determines whether the horizontal phase difference θH (2102) or the vertical phase difference θV (2103) can be used directly. It is checked whether the coefficient C1 (and the coefficient C3) can be obtained using the calculation formula shown in fig. 9: when either the horizontal phase difference θH (2102) or the vertical phase difference θV (2103) is 0, the coefficients C1 and C3 are indeterminate, and as either phase difference approaches 0 the coefficients C1 and C3 become large and thus vulnerable to noise and the like. In these cases it is determined that "the horizontal phase difference θH (2102) or the vertical phase difference θV (2103) cannot be used directly".
Here, a case where the signal input to the input unit (1) shown in fig. 45 is interlaced will be described.
First, general interlaced scanning and progressive scanning are explained with reference to fig. 37. Fig. 37(a) shows the positional relationship of the scanning lines in interlaced scanning (2:1 interlace), and fig. 37(b) shows that in progressive scanning. The horizontal axis represents the position (t) in the time direction (frame direction), and the vertical axis represents the vertical position (v).
In the progressive scanning of fig. 37(b), the scanning lines of a frame (3705) are scanned sequentially. In contrast, in the interlace scanning of fig. 37(a), a field is formed by alternately repeating real scanning lines (3701) that are transmitted or displayed and scanning lines (3702) that are skipped (not displayed). The positions of the scanning lines (3701) and (3702) are reversed (complementary) in the next field, and the two fields (3703) (3704) together form one frame.
Therefore, in the present embodiment, when the signal input to the input unit (1) shown in fig. 45 is interlaced, it is necessary to replace the phase difference θ (θ H or θ V) with (θ ± pi) in the coefficients C0, C1, C2, and C3 shown in fig. 48.
Therefore, when the phase difference θ becomes ±π and the coefficients C1 and C3 become indeterminate, or when the coefficients C1 and C3 become large as the phase difference θ approaches ±π and are therefore vulnerable to noise and the like, it is determined that "the horizontal phase difference θH (2102) or the vertical phase difference θV (2103) cannot be used as it is".
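The determination described above (reject a phase difference at, or too close to, the value that makes C1 and C3 indeterminate: 0 for progressive input, ±π for interlaced input) can be sketched as follows. The tolerance `eps` is an assumed tuning parameter, not a value from the text, and the function name is hypothetical.

```python
import math

def phase_usable(theta, eps=0.1, interlaced=False):
    """Return True when the phase difference theta (radians) can be used
    directly for aliasing cancellation.

    For progressive input the coefficients C1 and C3 are indeterminate at
    theta = 0 and noise-sensitive nearby; for interlaced input the same
    happens at theta = +/-pi, because theta is replaced by (theta +/- pi).
    """
    if interlaced:
        return abs(abs(theta) - math.pi) > eps
    return abs(theta) > eps
```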
An example of the operation of the same-luminance direction estimation unit (4702) shown in fig. 47 will be described with reference to fig. 49. In fig. 49(a), the real pixels (4901) arranged in a grid represent pixels originally present in the frame. The interpolated pixels (4903) represent N points (N is a positive integer), #1 to #N, arranged on a circle centered on the pixel (4902) to be processed; each is interpolated from a plurality of nearby real pixels (4901). The positions of the interpolated pixels are not limited to those shown in fig. 49(a); for example, as shown in fig. 49(b), interpolated pixels (4904) at N points #1 to #N may be arranged in a rectangle around the center pixel (4902). Among the interpolated pixels (4903) or (4904) arranged in this way, the one whose luminance value differs least from that of the center pixel (4902) is selected, and the horizontal phase correction value ΔθH (4707) and the vertical phase correction value ΔθV (4708) are obtained from the direction of the straight line connecting the center pixel (4902) and the selected interpolated pixel.
Fig. 50 shows a configuration example of the same-luminance direction estimation unit (4702). First, a horizontal low-pass filter (hereinafter, LPF) (5001) and a vertical LPF (5002) are applied to the signal of the input frame #2 (4504) to obtain the value of the center pixel. These filters reduce the influence of aliasing components that could cause erroneous estimation of the same-luminance direction; a frequency of about 1/2 to 1/4 of the Nyquist band of the input signal may be used as the cutoff frequency of each LPF. Next, signals at the positions of the interpolated pixels (4903) shown in fig. 49(a) or (4904) shown in fig. 49(b) are generated using the pixel interpolators #1 (5003-1) to #N (5003-N). Since the pixel interpolators #1 (5003-1) to #N (5003-N) can directly use a general sinc-function interpolation LPF, which interpolates pixels from the values of the real pixels (4901) near the center pixel (4902) as described above, detailed illustration and description are omitted. The subtractors (5004-1) to (5004-N) generate the difference between each output of the pixel interpolators #1 (5003-1) to #N (5003-N) and the signal before pixel interpolation (i.e., the output of the vertical LPF (5002)), and the absolute value calculators (5005-1) to (5005-N) obtain the absolute value of the luminance difference for each of the interpolated pixels #1 to #N.
The direction selector (5006) selects the interpolated pixel that minimizes this value (the absolute difference in luminance), and outputs the horizontal and vertical differences between the position of the selected interpolated pixel and the position of the center pixel as the horizontal phase correction value ΔθH (4707) and the vertical phase correction value ΔθV (4708). If the cutoff frequencies of the horizontal LPF (5001) and the vertical LPF (5002) are set low (the pass band narrowed), the aliasing components become small, but the signal components of fine textures (patterns) that indicate the true same-luminance direction are attenuated. Conversely, if the cutoff frequencies are set high (the pass band widened), the fine-texture signal components remain, but the influence of the aliasing components becomes large. Since the two are in a trade-off relationship, it is preferable to design the characteristics of the horizontal LPF (5001) and the vertical LPF (5002) so as to be optimal for actual images of the input frame (4504), while checking the resulting horizontal phase correction value ΔθH (4707) and vertical phase correction value ΔθV (4708).
In figs. 49(a) and (b), for simplicity, the center pixel (4902) is illustrated at the same position as a real pixel. In practice, the position in frame #2 corresponding to the pixel to be processed in frame #1 is obtained by the position estimating unit (2101) shown in fig. 45, and that position gives the center pixel (4902). Therefore, when the value of the horizontal phase difference θH (2102) or the vertical phase difference θV (2103) is not an integer multiple (including 0) of the real-pixel interval, the center pixel (4902) does not coincide with a real pixel. In this case, the tap coefficients of the horizontal LPF (5001) shown in fig. 50 may be shifted according to the horizontal phase difference θH (2102), and the tap coefficients of the vertical LPF (5002) according to the vertical phase difference θV (2103). Specifically, when the cutoff frequency of the horizontal LPF (5001) is fc(H) and its Nyquist frequency is fn(H), the tap coefficients Ck(H) (k an integer) of the horizontal LPF (5001) follow the generally known sinc function, and may be set to Ck(H) = 2 sin(π·fc(H)·k/fn(H) − θH) / (π·fc(H)·k/fn(H) − θH). Similarly, the tap coefficients Ck(V) of the vertical LPF (5002) may be set to Ck(V) = 2 sin(π·fc(V)·k/fn(V) − θV) / (π·fc(V)·k/fn(V) − θV).
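The shifted tap coefficients quoted above can be computed as in the following sketch, which evaluates Ck = 2·sin(π·fc·k/fn − θ)/(π·fc·k/fn − θ) literally as given in the text; any overall gain normalization the real filter applies is not reproduced, and the function name and tap count are assumptions.

```python
import math

def shifted_lpf_taps(fc, fn, theta, num_taps=9):
    """Tap coefficients of a phase-shifted low-pass interpolation filter,
    evaluated as Ck = 2*sin(pi*fc*k/fn - theta)/(pi*fc*k/fn - theta)
    for k = -(num_taps//2) .. +(num_taps//2).
    """
    taps = []
    for k in range(-(num_taps // 2), num_taps // 2 + 1):
        x = math.pi * fc * k / fn - theta
        # sin(x)/x -> 1 as x -> 0, so the coefficient tends to 2 there.
        taps.append(2.0 if abs(x) < 1e-12 else 2.0 * math.sin(x) / x)
    return taps
```

Setting θ = 0 recovers an ordinary (unshifted) sinc interpolator; a nonzero θ slides the whole impulse response by the estimated phase difference.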
Note that the interval between the center pixel (4902) and the interpolated pixels (4903) (the radius of the circle) in fig. 49(a), and the interval between the center pixel (4902) and the interpolated pixels (4904) (half the length of each side of the rectangle) in fig. 49(b), correspond to the magnitudes of the horizontal phase correction value ΔθH (4707) and the vertical phase correction value ΔθV (4708); these intervals may be fixed or variable. That is, when aliasing removal is performed using the phase-converted horizontal phase difference θH' (4502) and vertical phase difference θV' (4503) shown in fig. 46(e), the horizontal phase correction value ΔθH (4707) and the vertical phase correction value ΔθV (4708) may be set to about 1/2 of the real-pixel (4901) interval so that the coefficient C1 (and the coefficient C3) can be obtained using the calculation formula shown in fig. 9, or the values may be increased or decreased appropriately while keeping the ratio between ΔθH (4707) and ΔθV (4708) constant so that they point in the same-luminance direction.
There are also cases where the coefficient C1 (and the coefficient C3) cannot be obtained using the calculation formula shown in fig. 9 even if the values are increased or decreased appropriately while keeping the ratio between ΔθH (4707) and ΔθV (4708) constant so that the horizontal phase correction value ΔθH (4707) and the vertical phase correction value ΔθV (4708) point in the same-luminance direction. For example, in fig. 49(a), when the same-luminance direction is horizontal (#1), the vertical phase correction value ΔθV (4708) is always 0, and even if the phase-converted vertical phase difference θV' (4503) is used, the vertical aliasing component cannot be removed by the aliasing component removal unit (2109) shown in fig. 45, just as when the vertical phase difference θV (2103) is used as it is. In this case, as in the image signal processing apparatus according to embodiment 5 of the present invention, a general interpolation low-pass filter (1101) shown in fig. 11 may be provided as a bypass path in the aliasing component removal units (2108) (2109): a new coefficient C4, in addition to the coefficients C0 and C1, is generated by the coefficient determining unit (1103); the output of the interpolation low-pass filter (1101) is multiplied by the coefficient C4 in the multiplier (1102); and the result is added, by the adder (1104), to the resolution-enhanced signal and output. The configurations other than the interpolation low-pass filter (1101), the multiplier (1102), the coefficient determining unit (1103), the adder (1104), and the interpolated pixel interpolation unit (1105) are the same as those of embodiment 3 shown in fig. 10, and therefore, the description thereof is omitted.
The operations and configurations of the interpolation low-pass filter (1101), multiplier (1102), coefficient determiner (1103), adder (1104), and interpolated pixel interpolation unit (1105) are the same as those of embodiment 5 shown in fig. 12 and 13, and therefore, the description thereof is omitted.
When the minimum value of the difference between the signal values of the center pixel (4902) and the interpolated pixels (4903) (4904) shown in figs. 49(a) and (b) is greater than a predetermined threshold value, it is determined that the same-luminance direction cannot be estimated, and the direction selector (5006) in the same-luminance direction estimation unit (4702) shown in fig. 50 sets the phase correction values ΔθH (4707) and ΔθV (4708) to 0 or the like, so that the aliasing component removal units (2108) (2109) shown in fig. 45 do not malfunction.
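The same-luminance direction estimation of figs. 49-50, including the threshold guard just described, can be sketched as follows. This is a sketch under stated assumptions: bilinear interpolation stands in for the sinc-based interpolation LPF, the pre-filtering by the horizontal/vertical LPFs is omitted, and the function name and parameters are hypothetical.

```python
import numpy as np

def estimate_same_luminance_direction(frame, cx, cy, radius=0.5, n=8,
                                      threshold=None):
    """Among n interpolated points on a circle around the center pixel
    (cx, cy), pick the one whose luminance is closest to the center, and
    return the phase corrections (d_theta_h, d_theta_v) toward it.

    If threshold is given and even the best match differs by more than
    it, (0.0, 0.0) is returned so a wrong direction is not fed to the
    aliasing-removal stage.
    """
    def bilinear(img, x, y):
        # Bilinear interpolation from the four surrounding real pixels.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
                + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

    centre = bilinear(frame, cx, cy)
    best, best_diff = (0.0, 0.0), np.inf
    for k in range(n):
        ang = 2 * np.pi * k / n
        dx, dy = radius * np.cos(ang), radius * np.sin(ang)
        diff = abs(bilinear(frame, cx + dx, cy + dy) - centre)
        if diff < best_diff:
            best, best_diff = (dx, dy), diff
    if threshold is not None and best_diff > threshold:
        return (0.0, 0.0)   # same-luminance direction could not be estimated
    return best
```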
The image signal processing device according to embodiment 21 described above has, in addition to the effects of the image signal processing device of embodiment 8, the effect of achieving resolution enhancement even for subject motion that the device of embodiment 8 cannot use for resolution enhancement.
Although the configuration in fig. 45 of the image signal processing device according to the present embodiment is based on the configuration in fig. 21 of the image signal processing device according to embodiment 8 of the present invention, the present invention is not limited to this: the same effect can clearly be obtained by providing the phase conversion unit (4501) shown in fig. 45 to the image signal processing devices of the other embodiments of the present invention, and therefore, the description thereof is omitted.
[ example 22]
Embodiment 22 is an image signal processing method in which processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 21 is realized by a control unit in cooperation with software. The image processing apparatus that performs the image signal processing method according to the present embodiment is the same as that shown in fig. 18 in embodiment 2, and therefore, the description thereof is omitted.
Fig. 51 shows an example of a flowchart of the operation of the present embodiment. The flowchart of fig. 51 is obtained by adding step (5103) to the flowchart shown in fig. 14, which is example 2 of the present invention, and steps (1401) to (1420) are similar operations to those in example 2 of the present invention, and therefore, the description thereof is omitted.
In step (5103), it is first determined in step (5101) whether the coefficient C1 determined by the calculation formula shown in fig. 9 is appropriate, based on the phase difference θ (i.e., the horizontal phase difference θH or the vertical phase difference θV) obtained in step (1405). If it is appropriate, the processing proceeds to steps (1406) and (1409); if not, it proceeds to step (5102). In this determination, when the phase difference θ is 0 and the coefficient C1 is indeterminate, or when the coefficient C1 becomes large because the phase difference θ is close to 0 and is therefore vulnerable to noise and the like, it is sufficient to determine that "the coefficient C1 is not appropriate".
As described in example 21, when the signal input to the input unit (1) shown in fig. 18 is interlaced, the coefficients C0, C1, C2, and C3 shown in fig. 9 are obtained by replacing the phase difference θ with (θ ± π). Therefore, when the phase difference θ is ±π and the coefficient C1 becomes indeterminate, or when the coefficient C1 becomes large as the phase difference θ approaches ±π and is therefore vulnerable to noise, it may be determined that "the coefficient C1 is not appropriate".
In step (5102), the same-luminance direction is estimated in the same manner as described with reference to figs. 46 to 50, the horizontal phase correction value ΔθH (4707) or the vertical phase correction value ΔθV (4708) is added to the original phase difference θ (i.e., the horizontal phase difference θH or the vertical phase difference θV) to obtain a new phase difference θ, and steps (1406) and (1409) are performed using the new phase difference θ.
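The control flow of step (5103), as just described (use the phase differences as-is when the coefficient C1 is computable, otherwise fall back to the same-luminance correction of step (5102)), can be sketched as below. All names are hypothetical, and `eps` is an assumed tolerance; `estimate_direction` is assumed to implement the same-luminance estimation of figs. 46-50.

```python
def convert_phase_if_needed(theta_h, theta_v, frame2, estimate_direction,
                            eps=0.1):
    """Step (5103) sketch: if either phase difference is too close to 0
    for the coefficient C1 to be computed reliably, estimate the
    same-luminance direction in frame #2 and add the resulting phase
    corrections; otherwise pass the phases through unchanged.

    estimate_direction(frame) -> (d_theta_h, d_theta_v)
    """
    if abs(theta_h) > eps and abs(theta_v) > eps:
        return theta_h, theta_v          # step (5101): usable as-is
    d_h, d_v = estimate_direction(frame2)
    return theta_h + d_h, theta_v + d_v  # step (5102): corrected phases
```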
Based on the above flowchart, if the processing is started from step (1401) shown in fig. 51 and ended at step (1417), the signal buffered in the frame buffer #3 shown in fig. 18 can be output to the display unit (3) in units of frames or pixels.
By performing the above-described processing, a signal with high resolution can be output to the frame buffer #3 using the pixel data of the frame buffer #1 and the frame buffer # 2. When the method is applied to a moving image, the processing of steps (1401) to (1417) shown in fig. 51 may be repeated for each frame.
It should be noted that although the flowchart of fig. 51 for explaining the operation of the present embodiment is obtained by adding the step (5103) to the flowchart shown in fig. 14 in embodiment 2 of the present invention, it is obvious that the same effect can be obtained by adding the step (5103) to the flowchart shown in fig. 15 in embodiment 4 of the present invention, and therefore, the explanation thereof is omitted.
Further, even if the values are increased or decreased appropriately while keeping the ratio between ΔθH (4707) and ΔθV (4708) constant so that the horizontal phase correction value ΔθH (4707) and the vertical phase correction value ΔθV (4708) point in the same-luminance direction, the coefficient C1 (and the coefficient C3) may not be obtainable using the calculation formula shown in fig. 9. For example, in fig. 49(a), when the same-luminance direction is horizontal (#1), the vertical phase correction value ΔθV (4708) is always 0, and even if the phase-converted vertical phase difference θV' (4503) is used, the vertical aliasing component cannot be removed in step (1420), just as when the vertical phase difference θV (2103) is used directly. In this case, as in the image signal processing method according to embodiment 6 of the present invention, steps (1605) and (1606) shown in fig. 16 may be prepared, and when the converted phase difference θ is 0 or near 0, the processing result of step (1606) may be output to frame buffer #3. Steps (1605) and (1606) are the same as in embodiment 6, and therefore, the description thereof is omitted.
The image signal processing method according to embodiment 22 described above has, in addition to the effects of the image signal processing method according to embodiment 2, the effect of achieving resolution enhancement even for subject motion that cannot be used for resolution enhancement by the image signal processing method according to embodiment 2. That is, high resolution processing corresponding to a wider variety of subject movements can be realized.
[ example 23]
The image display device according to embodiment 23 of the present invention is the image display device according to embodiment 13 in which the video signal processor unit 3504 shown in fig. 35 is replaced with the video signal processing device according to embodiment 21 or embodiment 22. The other structures are the same as those of the image display device of embodiment 13, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is described in embodiment 21 or embodiment 22, and therefore, the description thereof is omitted.
The image display device according to embodiment 23 has, in addition to the effects of the image display device according to embodiment 13, the effect of achieving resolution enhancement even for subject motion that cannot be used for resolution enhancement by the image display device according to embodiment 13. Further, an image display device that displays an image generated by the high resolution processing on a display unit can be realized. That is, it is possible to generate a high-resolution image corresponding to a wider variety of subject movements and to display the generated high-resolution image.
[ example 24]
The recording/reproducing apparatus according to embodiment 24 of the present invention is the recording/reproducing apparatus according to embodiment 17 in which the video signal processor unit 3504 shown in fig. 36 is replaced with the video signal processing apparatus according to embodiment 21 or embodiment 22. The other structures are the same as those of the recording and reproducing apparatus of embodiment 17, and therefore, the description thereof is omitted.
The detailed operation of the video signal processor unit 3504 is described in embodiment 21 or embodiment 22, and therefore, the description thereof is omitted.
The recording and reproducing apparatus according to embodiment 24 has, in addition to the effects of the recording and reproducing apparatus according to embodiment 17, the effect of achieving resolution enhancement even for subject motion that cannot be used for resolution enhancement by the recording and reproducing apparatus according to embodiment 17. Further, an image generated by the high resolution processing can be output. That is, it is possible to generate a high-resolution image corresponding to a wider variety of subject movements and to output the generated high-resolution image.
[ example 25]
Fig. 52 shows an image signal processing apparatus according to embodiment 25 of the present invention. The image processing apparatus of the present embodiment includes: an input unit (1) for inputting image frames such as television broadcast signals; a resolution conversion unit (9) for two-dimensional resolution enhancement, which combines frames input from the input unit (1) in the horizontal and vertical directions; a display unit (3) for displaying an image based on the frame having the resolution increased by the resolution conversion unit (9); and a same luminance direction estimating unit (4702) for generating the phase difference information used in the resolution conversion unit (9). The image signal processing apparatus of the present embodiment is obtained from the image signal processing apparatus of embodiment 21 of the present invention shown in fig. 45 by removing the input unit (1) for the signal of frame #2 and replacing the position estimating unit (2101) with the same luminance direction estimating unit (4702). The other structures and operations are the same as those of the image signal processing apparatus shown in fig. 45, and therefore, the description thereof is omitted.
An image signal processing apparatus according to embodiment 25 of the present invention includes a resolution conversion unit (44) for performing a process of increasing the resolution by using only the signal of frame #1 input from the input unit (1). The details thereof are explained below.
In the image signal processing device according to embodiment 21 described above, the phase conversion unit (4501) shown in fig. 45 converts motion information that cannot be used for resolution enhancement in the image signal processing device according to embodiment 8 of the present invention (the horizontal phase difference θ H (2102) and the vertical phase difference θ V (2103)) into information that can be used for resolution enhancement (the horizontal phase difference θ H '(4502) and the vertical phase difference θ V '(4503)).
On the other hand, in embodiment 25 of the present invention, the signal of frame #2 is treated as being the same as the signal of frame #1; that is, the image is regarded as completely still. Accordingly, the image signal input to the phase conversion unit (4501) shown in fig. 45 is frame #1 (5201), the information on the motion of the subject (the horizontal phase difference θ H (2102) and the vertical phase difference θ V (2103)) is forced to 0, and the configuration of the phase conversion unit (4501) shown in fig. 47 is replaced with only the same luminance direction estimating unit (4702). The details of this operation are described below.
Fig. 53 shows the operation principle of the same luminance direction estimating unit (4702) in the image signal processing apparatus according to the present embodiment. Fig. 53(a) shows a state in which the subject appears in the image frame (5301). Unlike the image frame (4601) shown in fig. 46(a), the image signal processing apparatus of the present embodiment processes the signal of only one image frame (5301). Fig. 53(b) enlarges a part (5302) of the image in fig. 53(a) and shows the contour line (5303) of the subject.
Here, as shown in fig. 53(b), if the luminance values (signal levels) of the pixels on the contour line (5303) of the subject are the same, the pixel (5304) at the position (x1, y1) can be regarded as having been obtained by moving the pixel (5305) at the position (x2, y2) in the lower-left direction (5306). That is, if the one image frame (5301) is treated as two virtual image frames, the positional difference (sampling phase difference) between the pixel (5304) on one virtual frame and the pixel (5305) of the same luminance located near the processing target on the other virtual frame can be regarded as motion information. If the horizontal phase difference θ H '(4502) and the vertical phase difference θ V '(4503) are generated from this difference, the resolution can be increased as in the image signal processing apparatus according to embodiment 21 described above. The pixels (5304) and (5305) shown in fig. 53(b) may be pixels interpolated from neighboring real pixels (interpolated pixels) instead of actual pixels (real pixels). Further, the pixels (5304) and (5305) do not necessarily have to be located on the contour line of the subject; it suffices that they correspond to each other with the same luminance value.
Fig. 53(c), (d), and (e) collectively show the operation of the same luminance direction estimating unit (4702). Fig. 53(c) shows the original horizontal phase difference θ H (2102) and vertical phase difference θ V (2103); since the one image frame (5301) is regarded as two still image frames, the original horizontal phase difference is θ H (2102) = x1 - x1 = 0 and the original vertical phase difference is θ V (2103) = y1 - y1 = 0. Fig. 53(d) shows the phase conversion along the same luminance direction, that is, the operation of converting the pixel (5304) at the position (x1, y1) into the pixel (5305) at the position (x2, y2) on the contour line (5303) of the subject. Fig. 53(e) shows the operation of obtaining the horizontal phase difference θ H '(4502) and the vertical phase difference θ V '(4503) after phase conversion from the operations of figs. 53(c) and (d): the horizontal phase difference after phase conversion is θ H '(4502) = Δ θ H = (x1 - x2), and the vertical phase difference after phase conversion is θ V '(4503) = Δ θ V = (y1 - y2).
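The same-luminance search of fig. 53(b) can be illustrated with a small sketch. The following Python code is a hypothetical simplification, not the circuit of fig. 50: for a target pixel it scans a small neighborhood for the pixel whose luminance is closest, and returns the displacement as the phase differences (Δ θ H, Δ θ V) of fig. 53(e). The function name, the search radius, and the nearest-luminance criterion are assumptions introduced for illustration.

```python
import numpy as np

def estimate_iso_luminance_phase(frame, x1, y1, radius=1):
    """For the pixel at (x1, y1), find a nearby pixel (x2, y2) whose
    luminance is closest to that of the target, and return the phase
    differences (x1 - x2, y1 - y2) as in fig. 53(e)."""
    h, w = frame.shape
    target = float(frame[y1, x1])
    best_diff, best_pos = None, (x1, y1)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue  # skip the target pixel itself
            x2, y2 = x1 + dx, y1 + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                diff = abs(float(frame[y2, x2]) - target)
                if best_diff is None or diff < best_diff:
                    best_diff, best_pos = diff, (x2, y2)
    x2, y2 = best_pos
    return x1 - x2, y1 - y2  # (delta_theta_H, delta_theta_V)
```

For a vertical contour (luminance constant along y) the sketch returns Δ θ H = 0 and a nonzero Δ θ V, which corresponds to the special cases discussed below where one direction cannot be used for aliasing removal.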
Here, the same luminance direction estimating unit (4702) shown in fig. 52 has the same configuration and operation as the same luminance direction estimating unit (4702) shown in fig. 50, and Δ θ H and Δ θ V shown in fig. 53(e) are the same as the horizontal phase correction value Δ θ H (4707) and the vertical phase correction value Δ θ V (4708) output from the same luminance direction estimating unit (4702) shown in fig. 50.
In the resolution conversion unit (9) shown in fig. 52, the aliasing component removal may be performed using the horizontal phase difference θ H '(4502) = Δ θ H (4707) and the vertical phase difference θ V '(4503) = Δ θ V (4708) generated by the same luminance direction estimating unit (4702).
Even when the horizontal phase correction value Δ θ H (4707) and the vertical phase correction value Δ θ V (4708) are increased or decreased while keeping their ratio constant, so that they remain directed along the same luminance direction, there are cases where the coefficient C1 (and the coefficient C3) cannot be obtained from the calculation formula shown in fig. 9. For example, when the same luminance direction is horizontal, the vertical phase correction value Δ θ V (4708) is always 0, and the aliasing component in the vertical direction cannot be removed by the aliasing component removal unit (2109) shown in fig. 52. In this case, similarly to the image signal processing apparatus according to embodiment 5 of the present invention, a general interpolation low-pass filter (1101) shown in fig. 11 is prepared as a bypass path in the aliasing component removal units (2108) and (2109), a new coefficient C4 is generated in addition to the coefficients C0 and C1 by the coefficient determining unit (1103), the output of the interpolation low-pass filter (1101) is multiplied by the coefficient C4 by the multiplier (1102), and the result is added to the high-resolution signal by the adder (1104) and output. The configuration other than the interpolation low-pass filter (1101), the multiplier (1102), the coefficient determining unit (1103), the adder (1104), and the auxiliary pixel interpolation unit (1105) is the same as that of embodiment 3 shown in fig. 10, and therefore, the description thereof is omitted. The operations and configurations of the interpolation low-pass filter (1101), the multiplier (1102), the coefficient determining unit (1103), the adder (1104), and the auxiliary pixel interpolation unit (1105) are the same as those of embodiment 5 shown in figs. 12 and 13, and therefore, the description thereof is omitted.
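The bypass behavior described above can be sketched as a cross-fade controlled by the coefficient C4. This is a hedged illustration rather than the actual circuit of fig. 11: the threshold `theta_min` and the linear ramp are assumptions, and the real coefficient determining unit (1103) derives C4 from the phase differences in its own way.

```python
import numpy as np

def blend_with_interpolation(sr_signal, interp_signal, theta, theta_min=0.2):
    """As the phase difference theta approaches 0, the aliasing-removal
    coefficients become unstable, so the output is faded toward a plain
    interpolated signal.  C4 ramps linearly from 0 (|theta| >= theta_min)
    to 1 (theta == 0)."""
    c4 = max(0.0, 1.0 - abs(theta) / theta_min)
    sr = np.asarray(sr_signal, dtype=float)
    interp = np.asarray(interp_signal, dtype=float)
    return c4 * interp + (1.0 - c4) * sr
```

When theta is 0 the output is the pure interpolation result, and once |theta| reaches theta_min the aliasing-removal output passes through unchanged.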
According to the image signal processing apparatus of embodiment 25 described above, an image having fewer aliasing components and a higher resolution than the input image can be generated while using one fewer input image frame than the conventional example. Therefore, the resolution enhancement effect can be obtained even when corresponding pixels between the preceding and following frames cannot be estimated, for example when all or a part (a region) of the input image frame is still, when the subject moves violently, or when completely different contents appear in consecutive frames, and even when only one input image frame is available.
Further, since the image signal processing apparatus of embodiment 25 uses one fewer input image frame than the conventional example, the amount of required image processing can be reduced compared to the conventional example.
It should be noted that the configuration of fig. 52 for describing the image signal processing apparatus according to the present embodiment is based on the configuration of fig. 45 for describing the image signal processing apparatus according to embodiment 21 of the present invention, but the present invention is not limited to this. It is obvious that the same effect can be obtained even if the same luminance direction estimating unit (4702) shown in fig. 52 is applied to the image signal processing apparatus of another embodiment of the present invention with only one frame image input, and therefore, the description thereof is omitted.
[ example 26]
Embodiment 26 is an image signal processing method in which a control unit cooperating with software realizes processing equivalent to the image signal processing of the image signal processing apparatus of embodiment 25. The image processing apparatus that performs the image signal processing method of the present embodiment is the image processing apparatus shown in fig. 18 with the input unit (1) for frame #2 and the frame buffer #2 (22) removed, and is otherwise the same, and therefore, the description thereof is omitted.
Fig. 55 shows an example of a flowchart of the operation of the present embodiment. The flowchart of fig. 55 is obtained from the flowchart shown in fig. 14 in embodiment 2 of the present invention by deleting steps (1403), (1405), (1406), and (1418) and newly adding steps (5501) and (5502). The differences from the flowchart shown in fig. 14 are explained below.
First, the image signal processing method of the present embodiment realizes high-resolution processing using only the signal of frame #1 input from the input unit (1) of the image processing apparatus shown in fig. 18. Therefore, the processing steps associated with frame #2 and the frame buffer #2 in the flowchart shown in fig. 14 are no longer required, and steps (1403), (1405), (1406), and (1418) are deleted.
Next, in step (5501), which replaces step (1405), the same luminance direction is estimated for the pixel to be processed set in step (1404) by the operation described with reference to fig. 53, a phase difference θ (the horizontal phase difference θ H '(4502) or the vertical phase difference θ V '(4503)) is generated, and the process proceeds to step (5502) and step (1409). In step (5502), the phase difference θ generated in step (5501) is regarded as motion information, and motion compensation is performed on the image in the vicinity of the target pixel in the frame buffer #1. In this case, motion compensation may be performed only on the nearby pixel data used in the π/2 phase shift processing of step (1408), that is, only on pixel data within the range over which the finite number of filter taps acts. The motion compensation operation is the same as that described with reference to figs. 5 and 6.
The other steps, namely steps (1401) and (1402), steps (1407) to (1417), and steps (1419) and (1420), are the same as the operation in embodiment 2 of the present invention, and therefore, the description thereof is omitted.
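Step (5502) shifts nearby pixel data by the fractional phase difference. As a rough stand-in for the filter-based motion compensation of figs. 5 and 6 (whose tap coefficients are not reproduced here), a one-dimensional shift by linear interpolation can be sketched as follows; the function name and the border clamping are assumptions for illustration.

```python
import numpy as np

def motion_compensate_1d(line, phase):
    """Shift a 1-D signal by a fractional pixel amount `phase` using
    linear interpolation between the two nearest pixels; samples outside
    the signal are clamped to the border values."""
    n = len(line)
    out = np.empty(n)
    for i in range(n):
        pos = i - phase                    # sampling position in the source
        i0 = int(np.floor(pos))
        frac = pos - i0
        i0c = min(max(i0, 0), n - 1)       # clamp both taps to the borders
        i1c = min(max(i0 + 1, 0), n - 1)
        out[i] = (1.0 - frac) * line[i0c] + frac * line[i1c]
    return out
```

An integer phase reduces to a plain pixel shift, and a half-pixel phase averages the two neighboring samples.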
By performing the above-described processing, a signal whose resolution has been increased can be output to the frame buffer #3 using the pixel data of the frame buffer #1. When the method is applied to a moving image, the processing of steps (1401) to (1417) may be repeated for each frame.
According to the image signal processing method of embodiment 26 described above, an image having fewer aliasing components and a higher resolution than the input image can be generated while using one fewer input image frame than the conventional example.
Further, the image signal processing method of embodiment 26 uses one fewer input image frame than the conventional example, and therefore has the effect of being able to reduce the amount of required image processing compared to the conventional example.
[ example 27]
Fig. 54 shows an image signal processing apparatus according to embodiment 27 of the present invention. The image processing apparatus of the present embodiment includes: an input unit (1) for inputting image frames such as television broadcast signals; a resolution conversion unit (41) for two-dimensional resolution enhancement, which combines frames input from the input unit (1) in the horizontal and vertical directions; and a display unit (3) for displaying an image based on the frame having the resolution increased by the resolution conversion unit (41).
In the resolution conversion unit (41) shown in the figure, based on the input frame image signal (5201), the same luminance direction estimating unit (4702) generates the horizontal phase difference θ H '(4502) = Δ θ H (4707) and the vertical phase difference θ V '(4503) = Δ θ V (4708). These pieces of phase difference information are input to the resolution conversion unit (9-1), and are also input to the resolution conversion unit (9-2) after the polarity of each is inverted (i.e., multiplied by (-1)) by the multipliers (5401) and (5402). After the resolution conversion units (9-1) and (9-2) perform the resolution enhancement processing, their outputs are averaged by the adder (5403) and the multiplier (5404) and output. The operations and configurations of the same luminance direction estimating unit (4702) and the resolution conversion units (9-1) and (9-2) are the same as those of the same luminance direction estimating unit (4702) and the resolution conversion unit (9) of embodiment 25 described above, and therefore, the description thereof is omitted.
Fig. 53(b) shows how the same luminance direction estimating unit (4702) estimates, for the pixel (5304) to be processed, the direction toward the nearby pixel (5305) of the same luminance. On the other hand, as shown in figs. 49(a) and (b), focusing on the vicinity of the processing target pixel (4902), the same luminance direction can be regarded as a straight line within a micro region, so the straight line can be extended point-symmetrically about the processing target pixel (4902).
Therefore, even if the values of the horizontal phase difference θ H '(4502) and the vertical phase difference θ V '(4503) generated by the same luminance direction estimating unit (4702) are polarity-inverted by the multipliers (5401) and (5402), that is, converted so as to be point-symmetric about the processing target pixel (4902), and then input to the resolution conversion unit (9-2), the same output result can be obtained as from the resolution conversion unit (9-1), which uses the values of the horizontal phase difference θ H '(4502) and the vertical phase difference θ V '(4503) directly.
If errors and noise are included in the values of the horizontal phase difference θ H '(4502) and the vertical phase difference θ V' (4503) generated by the same luminance direction estimating unit (4702), the effects thereof appear in opposite polarities in the resolution converting unit (9-1) and the resolution converting unit (9-2), and therefore, the errors and noise can be reduced by averaging the outputs of the two.
The number of resolution conversion units (9) may be increased further: without limiting the factors of the multipliers (5401) and (5402) to (-1), the values of the horizontal phase difference θ H '(4502) and the vertical phase difference θ V '(4503) may be scaled by various factors before being input, and the outputs of the resolution conversion units (9) averaged to produce the result. For example, the number of resolution conversion units (9) may be increased from the 2 shown in fig. 54 to 6, the factors 0.5, (-0.5), 1.5, and (-1.5) may be used in addition to the two factors 1 and (-1) shown in fig. 54 (6 factors in total), and the outputs of the resolution conversion units (9) may be summed and multiplied by 1/6 to obtain the output signal. By increasing the number of resolution conversion units (9) in this way, errors and noise generated in the same luminance direction estimating unit (4702) can be reduced.
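The multi-converter averaging can be sketched generically. In the code below, `convert` stands in for resolution conversion unit (9), whose internals are not reproduced; the sketch only shows that the same routine is run once per scaled phase difference and the outputs averaged, so that estimation noise whose effect flips sign with the phase polarity tends to cancel.

```python
def averaged_resolution_conversion(convert, frame, d_theta_h, d_theta_v,
                                   factors=(1.0, -1.0, 0.5, -0.5, 1.5, -1.5)):
    """Run one resolution-conversion routine per phase-difference scale
    factor and average the outputs (the 6-converter variant described
    above).  `convert(frame, theta_h, theta_v)` is a placeholder for
    resolution conversion unit (9)."""
    outputs = [convert(frame, f * d_theta_h, f * d_theta_v) for f in factors]
    return sum(outputs) / len(factors)
```

With a toy `convert` whose output error is proportional to the phase difference, the scale factors sum to zero and the error cancels exactly in the average, which illustrates the noise-reduction argument.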
According to the image signal processing device of embodiment 27 described above, in addition to the effect of the image signal processing device of embodiment 25, there is an effect that the error and noise generated in the same luminance direction estimating unit (4702) can be reduced.
[ example 28]
An image display apparatus according to embodiment 28 of the present invention is the image display apparatus according to embodiment 13, in which the video signal processor unit 3504 shown in fig. 35 is replaced with the video signal processing apparatus described in embodiments 25 to 27. The other structures are the same as those of the image display device of example 13, and therefore, the description thereof is omitted.
The detailed operations of the video signal processor unit 3504 are the same as those described in embodiments 25 to 27, and therefore, the description thereof is omitted.
According to the image display device of embodiment 28, since one fewer input image frame is used than in the conventional example, the amount of necessary image processing can be further reduced.
[ example 29]
A video recording and reproducing apparatus according to embodiment 29 of the present invention is the video recording and reproducing apparatus according to embodiment 17, in which the video signal processor unit 3504 shown in fig. 36 is replaced with the video signal processing apparatus described in embodiments 25 to 27. The other configuration is the same as that of the recording and reproducing apparatus of embodiment 17, and therefore, the description thereof is omitted.
The detailed operations of the video signal processor unit 3504 are the same as those described in embodiments 25 to 27, and therefore, the description thereof is omitted.
According to the video recording and reproducing apparatus of embodiment 29, since one fewer input image frame is used than in the conventional example, the amount of necessary image processing can be further reduced.
[ example 30]
Fig. 56 shows an image signal processing apparatus according to embodiment 30 of the present invention. The image processing apparatus of the present embodiment includes an input unit (1) for inputting a moving image frame sequence such as a television broadcast signal, a resolution conversion unit (43) for performing two-dimensional resolution enhancement in a horizontal and vertical direction by combining frames input from the input unit (1), and a display unit (3) for displaying an image based on the frames subjected to resolution enhancement by the resolution conversion unit (43).
A resolution conversion unit (43) provided in an image signal processing device according to embodiment 30 of the present invention includes: a resolution conversion unit (44) similar to the video signal processing apparatus according to embodiment 25 of the present invention, a resolution conversion unit (4) similar to the video signal processing apparatus according to embodiment 8 of the present invention, and a mixing unit (5601) that mixes output signals of both. The resolution conversion unit (44) and the resolution conversion unit (4) are the same as those shown in fig. 52 and 21, respectively, and therefore, the description thereof is omitted. The operation and structure of the resolution conversion unit (43) will be described in detail below.
First, the resolution conversion unit (44), which is the same as that of the image signal processing apparatus according to embodiment 25 of the present invention, can realize resolution enhancement using only the signal of frame #1 input from the input unit (1). On the other hand, as described above, depending on the values of the horizontal phase difference θ H '(4502) = Δ θ H (4707) and the vertical phase difference θ V '(4503) = Δ θ V (4708) generated by the same luminance direction estimating unit (4702) in the resolution conversion unit (44), the coefficient C1 (and the coefficient C3) cannot be obtained from the calculation formula shown in fig. 9, and the resolution enhancement effect may not be obtained. For example, when the same luminance direction is the horizontal direction or the vertical direction, the resolution enhancement effect cannot be obtained.
The resolution conversion unit (4), which is the same as that of the image signal processing apparatus according to embodiment 8 of the present invention, achieves resolution enhancement using both the signals of frame #1 and frame #2 input from the input unit (1). In this case, as described above, depending on the values of the horizontal phase difference θ H (2102) and the vertical phase difference θ V (2103) generated by the position estimating unit (2101) in the resolution conversion unit (4), the coefficient C1 (and the coefficient C3) cannot be obtained from the calculation formula shown in fig. 9, and the resolution enhancement effect may not be obtained. For example, when the input image is progressively scanned, the effect cannot be obtained in a region where the subject is still or where the motion of the subject is exactly an integer number of pixels. When the input image is interlaced, the effect cannot be obtained in a region where the signal value of the field image does not change.
Therefore, in the resolution conversion unit (43) provided in the image signal processing device according to embodiment 30 of the present invention, the output signal (SR1(1 frame type)) of the resolution conversion unit (44) and the output signal (SR2(2 frame type)) of the resolution conversion unit (4) are mixed by the mixer (5601), thereby improving the effect of high resolution.
Fig. 57 shows a first configuration example of the mixer (5601). In the figure, an adder (5701) and a multiplier (5702) generate and output the average of the SR1 (1-frame type) and SR2 (2-frame type) signals input to the mixer (5601). In this configuration, the resolution improvement effects of SR1 (1-frame type) and SR2 (2-frame type) are each halved, but the mixer (5601) has the simplest configuration and can therefore be realized at low cost.
Fig. 58 shows a second configuration example of the mixer (5601). In the figure, the SR1 (1-frame type) and SR2 (2-frame type) signals input to the mixer (5601) are multiplied by the coefficient K (SR1) and the coefficient K (SR2) by the multiplier (5803) and the multiplier (5804), respectively, and added by the adder (5805) to be output. The coefficient K (SR1) and the coefficient K (SR2) are generated by the coefficient determining units (5801) and (5802), respectively. The operation of the coefficient determining units (5801) and (5802) is described below.
As described in the operation of embodiment 7 of the present invention, the aliasing component removal units (2108) and (2109) shown in fig. 21 generate, in the coefficient determining unit (109) shown in fig. 1, the coefficients C0 to C3 shown in fig. 9 from the phase difference θ H (2102) and the phase difference θ V (2103), and perform the aliasing component removal calculation. In this case, in order to prevent the coefficients C1 and C3 from becoming unstable when the phase differences θ H (2102) and θ V (2103) are 0, and from becoming sensitive to noise and the like as the phase differences θ H (2102) and θ V (2103) approach 0, it is preferable to introduce the coefficient C4 (0 ≦ C4 ≦ 1) shown in fig. 13 and perform auxiliary pixel interpolation in the configuration shown in fig. 11. In other words, the resolution improvement effect is obtained when the value of the coefficient C4 is 0.0, but the effect becomes smaller as the value of the coefficient C4 approaches 1.0.
By utilizing this property, in embodiment 7 of the present invention, the coefficient K (horizontal) and the coefficient K (vertical) are determined from the values of the horizontal and vertical coefficients C4 so that the vertical resolution conversion result SR (vertical) is strongly reflected when the horizontal phase difference θ H (2102) is near 0 (that is, when the coefficient C4 (horizontal) is near 1.0), and the horizontal resolution conversion result SR (horizontal) is strongly reflected when the vertical phase difference θ V (2103) is near 0 (that is, when the coefficient C4 (vertical) is near 1.0). To realize this operation, the coefficient determining unit (2301) shown in fig. 23 calculates K (horizontal) = (1 - C4 (horizontal) + C4 (vertical)) / 2, and the coefficient determining unit (2303) calculates K (vertical) = (1 - C4 (vertical) + C4 (horizontal)) / 2.
Similarly, in the present embodiment, the coefficients C4 (horizontal) and C4 (vertical) are generated based on the phase difference θ H '(4502) and the phase difference θ V '(4503) used in the resolution conversion unit (44), and their average value is taken as C4 (SR1). Further, the coefficients C4 (horizontal) and C4 (vertical) are generated based on the phase difference θ H (2102) and the phase difference θ V (2103) used in the resolution conversion unit (4), and their average value is taken as C4 (SR2). That is, the coefficients C4 (SR1) and C4 (SR2) can be used as indicators of the degree of the resolution enhancement effect of the resolution conversion unit (44) and the resolution conversion unit (4), respectively.
Therefore, in the mixer (5601) shown in fig. 58, the coefficient determining unit (5801) calculates K (SR1) = (1 - C4 (SR1) + C4 (SR2)) / 2, and the coefficient determining unit (5802) calculates K (SR2) = (1 - C4 (SR2) + C4 (SR1)) / 2. By weighting the output (SR1) of the resolution conversion unit (44) and the output (SR2) of the resolution conversion unit (4) by the coefficients K (SR1) and K (SR2) and adding them, the proportion of whichever output of the resolution conversion unit (44) and the resolution conversion unit (4) has the greater resolution enhancement effect is increased in the output signal, and the resolution enhancement effect can be increased.
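Under the reading that the weights are normalized so that K (SR1) + K (SR2) = 1, the second mixer configuration can be sketched as follows. The function name is an assumption, and the C4 values are taken as already computed by the coefficient determining units.

```python
def mix_sr_outputs(sr1, sr2, c4_sr1, c4_sr2):
    """Weighted mixing of the two resolution-conversion outputs: a C4
    value near 1 means the corresponding converter had little
    resolution-enhancing effect, so its weight drops.
    K(SR1) + K(SR2) = 1 by construction."""
    k1 = (1.0 - c4_sr1 + c4_sr2) / 2.0
    k2 = (1.0 - c4_sr2 + c4_sr1) / 2.0
    return k1 * sr1 + k2 * sr2
```

When one converter is fully ineffective (its C4 is 1) and the other fully effective (C4 is 0), the output is taken entirely from the effective converter; equal C4 values reduce the mixer to the simple average of the first configuration.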
According to the image signal processing apparatus of embodiment 30 described above, in addition to the effects of the image signal processing apparatuses of embodiments 25 and 8, resolution enhancement can be achieved even for a same luminance direction or a subject motion that cannot be used for resolution enhancement by the image signal processing apparatus of embodiment 25 or embodiment 8 alone, and the resolution enhancement effect can be further increased.
The configuration of fig. 56 for describing the image signal processing apparatus of the present embodiment is based on the same resolution conversion unit (44) as the image signal processing apparatus of embodiment 25 of the present invention and the same resolution conversion unit (4) as the image signal processing apparatus of embodiment 8 of the present invention, but is not limited thereto, and it is understood that the same effect can be obtained even if the mixing unit (5601) shown in fig. 56 is provided for the 1-frame input type image signal processing apparatus and the 2-frame input type image signal processing apparatus of other embodiments of the present invention, and therefore, the description thereof is omitted.
[Embodiment 31]
The image display device according to embodiment 31 of the present invention is the image display device of embodiment 13 in which the video signal processing unit 3504 shown in fig. 35 is replaced with the video signal processing apparatus described in embodiment 30. The other components are the same as those of the image display device of embodiment 13, and their description is therefore omitted.
The detailed operation of the video signal processing unit 3504 is the same as that described in embodiment 30, and its description is therefore omitted.
In addition to the effects of the image display devices of embodiments 28 and 13, the image display device of embodiment 31 can achieve resolution enhancement even when the same-luminance direction of the subject and the subject's motion cannot be exploited for resolution enhancement as in embodiments 28 and 13, so the resolution-enhancement effect can be further increased.
[Embodiment 32]
The video recording and reproducing apparatus according to embodiment 32 of the present invention is the video recording and reproducing apparatus of embodiment 17 in which the video signal processing unit 3504 shown in fig. 36 is replaced with the video signal processing apparatus described in embodiment 30. The other components are the same as those of the recording and reproducing apparatus of embodiment 17, and their description is therefore omitted.
The detailed operation of the video signal processing unit 3504 is the same as that described in embodiment 30, and its description is therefore omitted.
In addition to the effects of the video recording and reproducing apparatuses of embodiments 29 and 17, the video recording and reproducing apparatus of embodiment 32 can achieve resolution enhancement even when the same-luminance direction of the subject and the subject's motion cannot be exploited for resolution enhancement as in embodiments 29 and 17, so the resolution-enhancement effect can be further increased.
The embodiments of the present invention can be applied not only to the apparatuses described above but also to, for example, a DVD player, a magnetic disk player, or a semiconductor memory player. They can also be applied to, for example, a portable video display terminal (such as a mobile phone) that receives one-segment broadcasting.
Further, the image frames need not come from a television broadcast signal; for example, streaming video transmitted over the Internet, or image frames of video reproduced by a DVD player or an HDD player, may also be used.
In the above embodiments, resolution enhancement in units of whole frames was described as an example. However, the target of resolution enhancement need not be an entire frame; for example, a part of a frame of the input image or input video may be the target. That is, by performing the image processing according to an embodiment of the present invention on a plurality of partial frames of the input video, a high-quality enlarged image of a part of the input image or input video can be obtained. This is applicable, for example, to enlarged display of a part of a video.
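As a rough sketch of this partial-frame use (all names hypothetical; the resolution conversion step is stubbed out with simple 2x pixel replication, not the multi-frame method of the embodiments):

```python
# Enlarge only a region of interest of a frame. A real apparatus would apply
# its resolution conversion unit to the cropped frames; here a trivial
# pixel-replication stub stands in for that unit.
def crop(frame, top, left, height, width):
    return [row[left:left + width] for row in frame[top:top + height]]

def upscale2x_stub(region):
    # Stand-in for the resolution conversion unit: 2x pixel replication.
    out = []
    for row in region:
        wide = [p for p in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

frame = [[1, 2, 9],
         [3, 4, 9],
         [9, 9, 9]]
# Enlarge only the top-left 2x2 part of the frame to 4x4:
enlarged = upscale2x_stub(crop(frame, 0, 0, 2, 2))
```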
In addition, the embodiments of the present invention can be applied not only to the luminance signal (Y) but also to color signals such as red (R), green (G), and blue (B), and to color-difference signals such as Cb, Cr, Pb, Pr, U, and V obtained from the RGB signals by a general color space conversion process. In that case, "luminance" in the above description may be read as "color" or "color difference".
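One common instance of such a general color space conversion (given only as an illustration; the embodiments do not prescribe a specific matrix, and the coefficients differ between standards such as BT.601 and BT.709 and between full-range and limited-range variants) is the full-range ITU-R BT.601 mapping from RGB to Y/Cb/Cr:

```python
# Full-range BT.601 RGB -> YCbCr, for 8-bit components in [0, 255].
# The three chroma row sums are zero, so neutral gray maps to Cb = Cr = 128.
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(255, 255, 255)  # white: Y=255, Cb=Cr=128
```

The per-channel processing described above would then run on the Y plane and, if desired, on the Cb/Cr planes as well.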
Further, an embodiment of the present invention can be obtained by combining the above embodiments as desired.
According to the embodiments of the present invention described above, a low-resolution image can be appropriately converted into an enlarged image, and a high-quality, high-resolution image can be obtained. That is, the resolution of an image signal can be appropriately increased.
Further, according to the embodiments of the present invention described above, the number of image frames required to obtain a high-quality, high-resolution image can be reduced.

Claims (9)

1. An image signal processing apparatus characterized by comprising:
an input unit for inputting two or more image frames; and
a resolution conversion unit for obtaining an output image frame by increasing the number of pixels constituting the input image frame,
the resolution conversion unit includes a same luminance direction estimation unit that estimates a same luminance direction for each piece of image data on each input image frame and generates a sampling phase difference for each piece of image data, and performs resolution-increasing processing of the image using the sampling phase difference generated by the same luminance direction estimation unit,
the resolution conversion unit includes:
a motion compensation/up-rate unit that performs motion compensation on the image data of each input image frame while increasing the number of pixels, using the information on the sampling phase difference generated by the same luminance direction estimation unit;
a phase shifting unit for performing a predetermined amount of phase shift on the image data of each image frame after the number of pixels is increased;
a coefficient determining unit for determining a coefficient using the information on the sampling phase difference generated by the same luminance direction estimating unit; and
a fold-back component removing unit that multiplies each piece of image data before and after the phase shift by the coefficient, adds the results, and thereby removes and outputs fold-back components.
2. An image signal processing method characterized by comprising:
an input step of inputting an image frame; and
a resolution conversion step for increasing the number of pixels constituting the input image frame to obtain an output image frame; wherein,
in the resolution conversion step, the same luminance direction estimation is performed by estimating the same luminance direction for each piece of image data on the input image frame and generating a sampling phase difference for each piece of image data, and the resolution-increasing processing of the image is performed using the sampling phase difference generated by the same luminance direction estimation,
in the resolution conversion step,
estimating the same luminance direction for each piece of image data on the input image frame and generating a sampling phase difference for each piece of image data,
performing motion compensation/up-rate processing, in which motion compensation is performed on the image data of each input image frame while the number of pixels is increased, using the information on the sampling phase difference generated by the same luminance direction estimation,
performing phase shift processing, in which the image data of each image frame with the increased number of pixels is phase-shifted by a predetermined amount,
determining a coefficient using the information on the sampling phase difference generated by the same luminance direction estimation,
and performing fold-back component removal processing, in which each piece of image data before and after the phase shift is multiplied by the coefficient and the results are added, thereby removing fold-back components and outputting the result.
3. An image display apparatus, comprising:
an input unit for inputting two or more image frames;
a resolution conversion unit for obtaining an output image frame by increasing the number of pixels constituting the input image frame, and
a display unit for displaying the image generated by the resolution conversion unit,
the resolution conversion unit includes a same luminance direction estimation unit that estimates a same luminance direction for each piece of image data on each input image frame and generates a sampling phase difference for each piece of image data,
the resolution conversion unit performs a resolution-increasing process on the image using the sampling phase difference generated by the same luminance direction estimation unit,
the resolution conversion unit includes:
a motion compensation/up-rate unit that performs motion compensation on the image data of each input image frame while increasing the number of pixels, using the information on the sampling phase difference generated by the same luminance direction estimation unit;
a phase shifting unit for performing a predetermined amount of phase shift on the image data of each image frame after the number of pixels is increased;
a coefficient determining unit for determining a coefficient using the information on the sampling phase difference generated by the same luminance direction estimating unit; and
a fold-back component removing unit that multiplies each piece of image data before and after the phase shift by the coefficient, adds the results, and thereby removes and outputs fold-back components.
4. An image signal processing apparatus characterized by comprising:
an input unit that inputs a plurality of image frames; and
a resolution conversion unit for obtaining an output image frame by synthesizing the plurality of input image frames and increasing the number of pixels constituting the image frame,
the resolution conversion unit includes:
a position estimating unit for estimating a sampling phase difference using image data on the input image frame as a reference and corresponding image data on another input image frame;
a phase conversion unit for converting the estimated sampling phase difference and outputting a converted sampling phase difference;
a motion compensation/up-rate unit that performs motion compensation on the image data of each input image frame while increasing the number of pixels, using the converted information on the sampling phase difference;
a phase shifting unit for performing a predetermined amount of phase shift on the image data of each image frame after the number of pixels is increased; and
a fold-back component removing unit that multiplies each piece of image data before and after the phase shift by a coefficient determined from the estimated information on the sampling phase difference, adds the results, and thereby removes and outputs fold-back components.
5. The image signal processing apparatus according to claim 4, characterized in that:
the phase conversion unit includes a same luminance direction estimation unit that estimates the same luminance direction as that of the processing target pixel in the input image frame and outputs a phase correction value based on the estimation result,
and the phase conversion unit adds the phase correction value to the sampling phase difference to obtain the converted sampling phase difference.
6. An image signal processing method characterized by comprising:
an input step of inputting a plurality of image frames; and
a resolution conversion step for obtaining an output image frame by increasing the number of pixels constituting an image frame by synthesizing the input plurality of image frames,
in the resolution conversion step, the following processing is performed:
a position estimation process of estimating a sampling phase difference using image data on the input image frame as a reference and corresponding image data on other input image frames;
a phase conversion process of converting the estimated sampling phase difference and outputting a converted sampling phase difference;
motion compensation/up-rate processing, in which motion compensation is performed on the image data of each input image frame while the number of pixels is increased, using the converted information on the sampling phase difference;
a phase shift process of performing a predetermined amount of phase shift on the image data of each image frame after the number of pixels is increased; and
and an aliasing component removal process, in which each piece of image data before and after the phase shift is multiplied by a coefficient determined from the estimated information on the sampling phase difference and the results are added, thereby removing aliasing components and outputting the resulting data.
7. The image signal processing method according to claim 6,
the phase transformation step includes:
a same luminance direction estimating step of estimating the same luminance direction as that of the processing target pixel in the input image frame and outputting a phase correction value based on the estimation result; and
and a step of adding the phase correction value to the sampling phase difference to obtain the converted sampling phase difference.
8. An image display apparatus, comprising:
an input unit that inputs a plurality of image frames;
a resolution conversion unit for obtaining an output image frame by synthesizing the plurality of input image frames to increase the number of pixels constituting the image frame, and
a display unit for displaying the image generated by the resolution conversion unit,
the resolution conversion unit includes:
a position estimating unit for estimating a sampling phase difference using image data on the input image frame as a reference and corresponding image data on another input image frame;
a phase conversion unit for converting the estimated sampling phase difference and outputting a converted sampling phase difference;
a motion compensation/up-rate unit that performs motion compensation on the image data of each input image frame while increasing the number of pixels, using the converted information on the sampling phase difference;
a phase shifting unit for performing a predetermined amount of phase shift on the image data of each image frame after the number of pixels is increased; and
a fold-back component removing unit that multiplies each piece of image data before and after the phase shift by a coefficient determined from the estimated information on the sampling phase difference, adds the results, and thereby removes and outputs fold-back components.
9. The image display device according to claim 8, wherein:
the phase conversion unit includes a same luminance direction estimation unit that estimates the same luminance direction as that of the processing target pixel in the input image frame and outputs a phase correction value based on the estimation result,
and the phase conversion unit adds the phase correction value to the sampling phase difference to obtain the converted sampling phase difference.
CN2008101682464A 2007-10-04 2008-10-06 Video signal processing apparatus, video signal processing method and video display apparatus Active CN101404733B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2007260488 2007-10-04
JP2007260485 2007-10-04
JP2007-260485 2007-10-04
JP2007-260488 2007-10-04
JP2007260485A JP5250232B2 (en) 2007-10-04 2007-10-04 Image signal processing apparatus, image signal processing method, and image display apparatus
JP2007260488A JP5250233B2 (en) 2007-10-04 2007-10-04 Image signal processing apparatus, image signal processing method, and image display apparatus

Publications (2)

Publication Number Publication Date
CN101404733A CN101404733A (en) 2009-04-08
CN101404733B true CN101404733B (en) 2011-12-28

Family

ID=40538584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101682464A Active CN101404733B (en) 2007-10-04 2008-10-06 Video signal processing apparatus, video signal processing method and video display apparatus

Country Status (2)

Country Link
JP (1) JP5250232B2 (en)
CN (1) CN101404733B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101788032B1 (en) 2011-03-24 2017-10-19 삼성전자주식회사 Depth sensor, depth information error compensation method thereof, and signal processing system having the depth sensor
CN106912199B (en) 2015-10-20 2018-07-10 奥林巴斯株式会社 Photographic device, image acquisition method and storage medium
JP7065300B2 (en) * 2016-09-29 2022-05-12 パナソニックIpマネジメント株式会社 Reproduction method and reproduction device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584820A (en) * 2003-08-22 2005-02-23 三星电子株式会社 Apparatus for and method of processing display signal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3583831B2 (en) * 1995-06-09 2004-11-04 株式会社リコー Signal processing device
JP2000244877A (en) * 1999-02-24 2000-09-08 Hitachi Ltd Motion correction double speed conversion circuit for image signal and television device
JP4324825B2 (en) * 1999-09-16 2009-09-02 ソニー株式会社 Data processing apparatus, data processing method, and medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584820A (en) * 2003-08-22 2005-02-23 三星电子株式会社 Apparatus for and method of processing display signal

Also Published As

Publication number Publication date
JP5250232B2 (en) 2013-07-31
JP2009094593A (en) 2009-04-30
CN101404733A (en) 2009-04-08

Similar Documents

Publication Publication Date Title
JP4775210B2 (en) Image signal processing apparatus, image resolution increasing method, image display apparatus, recording / reproducing apparatus
JP2009015025A (en) Image signal processing apparatus and image signal processing method
US7830369B2 (en) Video signal processing apparatus, video displaying apparatus and high resolution method for video signal
JP4876048B2 (en) Video transmission / reception method, reception device, video storage device
US8089557B2 (en) Video signal processing apparatus, video signal processing method and video display apparatus
EP2012532A2 (en) Video displaying apparatus, video signal processing apparatus and video signal processing method
JP2009033581A (en) Image signal recording and reproducing device
CN101404733B (en) Video signal processing apparatus, video signal processing method and video display apparatus
JP2009033582A (en) Image signal recording and reproducing device
JP5161935B2 (en) Video processing device
WO2017145752A1 (en) Image processing system and image processing device
JP2010073075A (en) Image signal processing apparatus, image signal processing method
JP2009017242A (en) Image display apparatus, image signal processing apparatus and image signal processing method
JP5250233B2 (en) Image signal processing apparatus, image signal processing method, and image display apparatus
JP4988460B2 (en) Image signal processing apparatus and image signal processing method
JP2009171370A (en) Image signal processing apparatus, image display device, and image signal processing method
JP2009141444A (en) Image signal processor, image display apparatus, image signal processing method
JP5369526B2 (en) Image signal processing device, display device, recording / playback device, and image signal processing method
JP5416898B2 (en) Video display device
JP5416899B2 (en) Video display device
JP2009111721A (en) Input image control device, imaging apparatus, input image control method and program
JP2009163588A (en) Image signal processor, image resolution enhancement method, and image display device
JP2009296193A (en) Image signal processing method and apparatus
JP4752237B2 (en) Image filter circuit and filtering processing method
JP3529590B2 (en) Imaging system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: HITACHI LTD.

Free format text: FORMER OWNER: HITACHI,LTD.

Effective date: 20130821

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20130821

Address after: Tokyo, Japan

Patentee after: HITACHI CONSUMER ELECTRONICS Co.,Ltd.

Address before: Tokyo, Japan

Patentee before: Hitachi, Ltd.

ASS Succession or assignment of patent right

Owner name: HITACHI MAXELL LTD.

Free format text: FORMER OWNER: HITACHI LTD.

Effective date: 20150303

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150303

Address after: Osaka Japan

Patentee after: Hitachi Maxell, Ltd.

Address before: Tokyo, Japan

Patentee before: Hitachi Consumer Electronics Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20180302

Address after: Kyoto Japan

Patentee after: MAXELL, Ltd.

Address before: Osaka Japan

Patentee before: Hitachi Maxell, Ltd.

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: Kyoto Japan

Patentee after: MAXELL, Ltd.

Address before: Kyoto Japan

Patentee before: MAXELL HOLDINGS, Ltd.

CP01 Change in the name or title of a patent holder
TR01 Transfer of patent right

Effective date of registration: 20220531

Address after: Kyoto Japan

Patentee after: MAXELL HOLDINGS, Ltd.

Address before: Kyoto Japan

Patentee before: MAXELL, Ltd.

TR01 Transfer of patent right