WO2010032649A1 - Image display device and imaging apparatus - Google Patents

Image display device and imaging apparatus

Info

Publication number
WO2010032649A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
display
unit
super
Application number
PCT/JP2009/065609
Other languages
French (fr)
Japanese (ja)
Inventor
Norikazu Tsunekawa (恒川 法和)
Seiji Okada (岡田 誠司)
Original Assignee
Sanyo Electric Co., Ltd.
Application filed by Sanyo Electric Co., Ltd.
Publication of WO2010032649A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/363 Graphics controllers
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/10 Special adaptations of display systems for operation with variable images
    • G09G 2320/106 Determination of movement vectors or equivalent parameters within the image
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0407 Resolution change, inclusive of the use of different resolutions for different screen areas

Definitions

  • the present invention relates to an image display device and an imaging device capable of displaying an image.
  • High resolution processing that generates one high-resolution image from a plurality of low-resolution images has been proposed.
  • Among super-resolution processes, reconstruction-type super-resolution processing using iterative calculations is known as a typical method.
  • The reconstruction-type super-resolution processing method using iterative computation is currently the most effective of the super-resolution processing methods, but, being an optimization computation method that requires many iterations, it takes a relatively long time to process.
  • In general, the user wishes to confirm the captured image promptly on the display screen. Therefore, for example, even when a plurality of frame images (low-resolution images) for generating a high-resolution image are captured in order to obtain a high-definition still image, an image based on the captured images should be displayed promptly.
  • an object of the present invention is to provide an image display device and an imaging device that can present the result of image processing as early as possible to the user.
  • An image display device includes: an arithmetic processing unit that generates an output image from an input image by predetermined arithmetic processing; and a display control unit that displays a display image based on the generated image of the arithmetic processing unit on the display unit.
  • the display control unit causes the display unit to display an intermediate result of the calculation process generated in the execution process of the calculation process in a stepwise manner during the execution of the calculation process.
  • The arithmetic processing includes a unit process that is repeatedly executed; the arithmetic processing unit generates the output image by repeatedly executing the unit process on an intermediate generated image based on the input image, updating the intermediate generated image each time. While the unit process is repeatedly executed, the display control unit generates a display image from the intermediate generated image in the iteration process and displays it on the display unit.
  • For example, while the unit process is repeatedly executed, the display control unit updates the display content of the display unit step by step according to the elapsed time from the start of the arithmetic processing or the number of times the unit process has been executed, and the latest intermediate generated image at the time of each update is reflected in the display content.
  • Alternatively, the arithmetic processing unit repeatedly executes the unit process to improve the image quality of the intermediate generated image and sequentially updates it, and the image display device further includes an estimation unit that estimates the image-quality improvement amount of the intermediate generated image resulting from the repeated execution of the unit process. While the unit process is repeatedly executed, the display control unit updates the display content of the display unit step by step according to the estimated image-quality improvement amount, and the latest intermediate generated image at the time of each update is reflected in the display content.
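The three update triggers described above (elapsed time from the start of the calculation, number of executed unit processes, and estimated image-quality improvement amount) can be sketched as a small display-update policy. This is an illustrative sketch only; the class and parameter names (`DisplayUpdatePolicy`, `interval_s`, `every_n_iters`, `min_improvement`) are assumptions, not names from the patent.

```python
import time

class DisplayUpdatePolicy:
    """Decide when to push an intermediate super-resolution image to the
    display.  Mirrors the three triggers in the text: elapsed wall-clock
    time, number of executed unit processes, and estimated image-quality
    improvement since the last display update."""

    def __init__(self, interval_s=0.5, every_n_iters=5, min_improvement=0.01):
        self.interval_s = interval_s
        self.every_n_iters = every_n_iters
        self.min_improvement = min_improvement
        self.last_update_time = time.monotonic()
        self.iters_since_update = 0

    def should_update(self, improvement):
        """Return True if any trigger fires for this unit process."""
        self.iters_since_update += 1
        now = time.monotonic()
        fire = (now - self.last_update_time >= self.interval_s
                or self.iters_since_update >= self.every_n_iters
                or improvement >= self.min_improvement)
        if fire:
            # Reset the counters so the display content is updated step by
            # step, each time reflecting the latest intermediate image.
            self.last_update_time = now
            self.iters_since_update = 0
        return fire
```

In the loop that repeats the unit process, the controller would call `should_update` once per iteration and redraw the display only when it returns true.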
  • Alternatively, the calculation process includes first to n-th unit processes, and the i-th unit process generates an i-th intermediate generated image, based on part of the input image, as part of the output image.
  • The entire output image is formed by synthesizing the first to n-th intermediate generated images generated by the first to n-th unit processes. During the calculation process, the display control unit generates a display image using the first to m-th intermediate generated images obtained up to that point and displays it on the display unit. Here, n and m are natural numbers with n > m, and i is an integer satisfying 1 ≤ i ≤ n.
  • An imaging apparatus includes an imaging unit that acquires an image by imaging and the image display apparatus.
  • the image display device receives the image acquired by the imaging unit as the input image.
  • According to the present invention, it is possible to provide an image display device and an imaging device capable of presenting the result of image processing to the user as early as possible.
  • FIG. 1 is an overall block diagram of an imaging apparatus according to an embodiment of the present invention. FIG. 2 is a conceptual diagram of reconstruction-type super-resolution processing based on the MAP method using iterative computation.
  • FIG. 3 is a flowchart showing the flow of the super-resolution processing corresponding to FIG. 2. FIG. 4 shows an example of display images around the time of execution of super-resolution processing, and FIG. 5 shows an analog image of the subject to be photographed by the imaging apparatus.
  • FIG. 3 is an internal block diagram of a video signal processing unit according to the first embodiment of the present invention.
  • FIG. 8 is an internal block diagram of the super-resolution calculation unit in FIG. 7. A further figure shows how the period around execution of the super-resolution processing is divided into three periods, according to the first embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a relationship between an elapsed time from the start of execution of super-resolution processing and display update processing execution timing according to the first embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a relationship between the number of executions of super-resolution unit processing and the execution timing of display update processing according to the first embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a relationship between the number of executions of super-resolution unit processing and an improvement evaluation value representing an image quality improvement amount by repeated execution of super-resolution unit processing according to the first embodiment of the present invention.
  • A further figure is an internal block diagram of a display processing unit according to the first embodiment of the present invention.
  • Parts (a) and (b) of a further figure show the whole high-resolution image and a display image, respectively, according to the first embodiment of the present invention. Still further figures show how an additional item, and then another additional item, are superimposed on the display image according to the first embodiment.
  • FIG. 1 is an overall block diagram of an imaging apparatus 1 according to an embodiment of the present invention.
  • the imaging device 1 is a digital video camera, for example.
  • the imaging device 1 can capture a moving image and a still image, and can also capture a still image simultaneously during moving image capturing.
  • The imaging apparatus 1 includes an imaging unit 11, an AFE (Analog Front End) 12, a video signal processing unit 13, a microphone 14, an audio signal processing unit 15, a compression processing unit 16, an internal memory 17 such as a DRAM (Dynamic Random Access Memory), an external memory 18 such as an SD (Secure Digital) card or a magnetic disk, an expansion processing unit 19, a display processing unit 20, an audio output circuit 21, a TG (timing generator) 22, a CPU (Central Processing Unit) 23, a bus 24, a bus 25, an operation unit 26, a display unit 27, and a speaker 28.
  • the operation unit 26 includes a recording button 26a, a shutter button 26b, an operation key 26c, and the like. Each part in the imaging apparatus 1 exchanges signals (data) between the parts via the bus 24 or 25.
  • the TG 22 generates a timing control signal for controlling the timing of each operation in the entire imaging apparatus 1, and gives the generated timing control signal to each unit in the imaging apparatus 1.
  • the timing control signal includes a vertical synchronization signal Vsync and a horizontal synchronization signal Hsync.
  • the CPU 23 comprehensively controls the operation of each unit in the imaging apparatus 1.
  • the operation unit 26 receives an operation by a user. The operation content given to the operation unit 26 is transmitted to the CPU 23.
  • Each unit in the imaging apparatus 1 temporarily records various data (digital signals) in the internal memory 17 during signal processing as necessary.
  • the imaging unit 11 includes an imaging system (image sensor) 33, an optical system, a diaphragm, and a driver (not shown). Incident light from the subject enters the image sensor 33 via the optical system and the stop. Each lens constituting the optical system forms an optical image of the subject on the image sensor 33.
  • the TG 22 generates a drive pulse for driving the image sensor 33 in synchronization with the timing control signal, and applies the drive pulse to the image sensor 33.
  • the image sensor 33 is a solid-state image sensor composed of a CCD (Charge Coupled Devices), a CMOS (Complementary Metal Oxide Semiconductor) image sensor, or the like.
  • the image sensor 33 photoelectrically converts an optical image incident through the optical system and the diaphragm, and outputs an electrical signal obtained by the photoelectric conversion to the AFE 12.
  • the image sensor 33 includes a plurality of light receiving pixels (not shown in FIG. 1) that are two-dimensionally arranged in a matrix, and in each photographing, each light receiving pixel has a charge amount signal corresponding to the exposure time. Stores charge.
  • the electrical signal from each light receiving pixel having a magnitude proportional to the amount of the stored signal charge is sequentially output to the subsequent AFE 12 in accordance with the drive pulse from the TG 22.
  • the AFE 12 amplifies an analog signal output from the image sensor 33 (each light receiving pixel), converts the amplified analog signal into a digital signal, and outputs the digital signal to the video signal processing unit 13.
  • the degree of amplification of signal amplification in the AFE 12 is controlled by the CPU 23.
  • the video signal processing unit 13 performs various types of image processing on the image represented by the output signal of the AFE 12, and generates a video signal for the image after the image processing.
  • the video signal is generally composed of a luminance signal Y representing the luminance of the image and color difference signals U and V representing the color of the image.
  • the microphone 14 converts the ambient sound of the imaging device 1 into an analog audio signal
  • the audio signal processing unit 15 converts the analog audio signal into a digital audio signal.
  • the compression processing unit 16 compresses the video signal from the video signal processing unit 13 using a predetermined compression method.
  • the compressed video signal is recorded in the external memory 18 at the time of capturing and recording a moving image or a still image.
  • the compression processing unit 16 compresses the audio signal from the audio signal processing unit 15 using a predetermined compression method.
  • the video signal from the video signal processing unit 13 and the audio signal from the audio signal processing unit 15 are compressed while being correlated with each other in time by the compression processing unit 16, and after compression, Recorded in the external memory 18.
  • the recording button 26a is a push button switch for instructing start / end of moving image shooting and recording
  • the shutter button 26b is a push button switch for instructing shooting and recording of a still image.
  • The operation modes of the imaging apparatus 1 include a shooting mode in which moving images and still images can be shot, and a playback mode in which moving images and still images stored in the external memory 18 are reproduced and displayed on the display unit 27. Transition between the modes is performed according to operations on the operation key 26c.
  • An image sequence typified by a captured image sequence refers to a collection of images arranged in time series. Data representing an image is called image data. Image data can also be considered as a kind of video signal. One image is represented by image data for one frame period. One image represented by image data for one frame period is also called a frame image.
  • When the user presses the recording button 26a in the shooting mode, the video signal obtained after the press and the corresponding audio signal are, under the control of the CPU 23, sequentially recorded in the external memory 18 via the compression processing unit 16.
  • When the user presses the recording button 26a again after starting moving image shooting, the recording of the video signal and the audio signal to the external memory 18 ends, and the shooting of one moving image is completed.
  • In the shooting mode, when the user presses the shutter button 26b, a still image is shot and recorded.
  • In the playback mode, a compressed video signal representing a moving image or a still image recorded in the external memory 18 is expanded by the expansion processing unit 19 and sent to the display processing unit 20.
  • The generation of the video signal by the video signal processing unit 13 is normally performed regardless of the operations on the recording button 26a and the shutter button 26b, and the video signal is sent to the display processing unit 20.
  • the display processing unit 20 causes the display unit 27 to display an image corresponding to the given video signal.
  • the display unit 27 is a display device such as a liquid crystal display.
  • a compressed audio signal corresponding to the moving image recorded in the external memory 18 is also sent to the expansion processing unit 19.
  • the decompression processing unit 19 decompresses the received audio signal and sends it to the audio output circuit 21.
  • the audio output circuit 21 converts a given digital audio signal into an audio signal in a format that can be output by the speaker 28 (for example, an analog audio signal) and outputs the audio signal to the speaker 28.
  • the speaker 28 outputs the sound signal from the sound output circuit 21 to the outside as sound (sound).
  • the video signal processing unit 13 is configured to be able to perform super-resolution processing in cooperation with the CPU 23. Through the super-resolution processing, one high-resolution image is generated from a plurality of low-resolution images.
  • the video signal of the high resolution image can be recorded in the external memory 18 via the compression processing unit 16.
  • the resolution of the high resolution image is higher than that of the low resolution image, and the number of pixels in the horizontal and vertical directions of the high resolution image is larger than that of the low resolution image.
  • the super-resolution processing is performed on a plurality of frame images as a plurality of low resolution images obtained at the time of moving image shooting.
  • FIG. 2 shows a conceptual diagram of reconstruction-type super-resolution processing based on the MAP method using iterative operations.
  • In this super-resolution processing, one high-resolution image is estimated from a plurality of low-resolution images obtained by actual shooting, and the original plurality of low-resolution images are estimated by degrading the estimated high-resolution image.
  • Hereinafter, a low-resolution image obtained by actual photographing is called an "actual low-resolution image", and an estimated low-resolution image is called an "estimated low-resolution image".
  • the high resolution image and the low resolution image are repeatedly estimated so that the error between the actual low resolution image and the estimated low resolution image is minimized, and the finally acquired high resolution image is output.
  • FIG. 3 is a flowchart showing the flow of super-resolution processing corresponding to FIG.
  • In step S11, an initial high-resolution image is generated from the actual low-resolution images.
  • In step S12, the original actual low-resolution images for constructing the current high-resolution image are estimated.
  • the estimated image is referred to as an estimated low resolution image as described above.
  • In step S13, an update amount for the current high-resolution image is derived based on the difference (difference image) between the actual low-resolution image and the estimated low-resolution image. This update amount is derived so that the error between the actual and estimated low-resolution images is minimized by repeatedly executing the processes of steps S12 to S14.
  • In step S14, the current high-resolution image is updated using the derived update amount, and a new high-resolution image is generated. The process then returns to step S12, the newly generated high-resolution image is regarded as the current high-resolution image, and the processes of steps S12 to S14 are repeatedly executed. Basically, as the number of repetitions of steps S12 to S14 increases, the resolution of the obtained high-resolution image substantially improves, approaching an ideal high-resolution image.
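A minimal 1-D sketch of the loop in steps S11 to S14 follows. It assumes a toy camera model (a [0.25, 0.5, 0.25] blur followed by 2x downsampling) and pixel replication for back-projection; the real processing uses motion-compensated multi-frame models, so the function names and kernels here are illustrative only.

```python
import numpy as np

KERNEL = [0.25, 0.5, 0.25]  # assumed blur of the toy camera model

def degrade(hr):
    """Step S12: estimate a low-resolution image from the current
    high-resolution estimate by blurring and 2x downsampling (a stand-in
    for the patent's motion/blur/downsampling camera model)."""
    return np.convolve(hr, KERNEL, mode='same')[::2]

def upsample(lr):
    """Back-project a low-resolution image or residual onto the
    high-resolution grid by pixel replication."""
    return np.repeat(lr, 2)

def super_resolve(actual_lr, n_iters=50, step=0.5, on_update=None):
    """Steps S11-S14 in 1-D: start from an initial estimate, then
    repeatedly shrink the error between the actual and the estimated
    low-resolution image."""
    hr = upsample(actual_lr)                  # S11: initial high-resolution image
    for i in range(n_iters):
        est_lr = degrade(hr)                  # S12: estimated low-resolution image
        residual = actual_lr - est_lr         # S13: difference -> update amount
        hr = hr + step * upsample(residual)   # S14: update the current estimate
        if on_update is not None:
            on_update(i, hr)                  # intermediate result available here
    return hr
```

The `on_update` callback is the point where an intermediate high-resolution image could be handed to a display step by step, as the text describes.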
  • Super-resolution processing based on the above-described operation flow is performed in the imaging apparatus 1.
  • Instead of super-resolution processing based on the MAP (Maximum A Posteriori) method, super-resolution processing based on the ML (Maximum-Likelihood) method, the POCS (Projection Onto Convex Sets) method, or the IBP (Iterative Back Projection) method can also be used.
  • the reconstruction-type super-resolution processing method using iterative computation is an effective method, but it takes a relatively long time for processing because it is an optimization computation method that requires many iterative computations.
  • The user desires to confirm the captured image on the display unit 27 promptly. Therefore, for example, even when a plurality of frame images (actual low-resolution images) for generating a high-resolution image are captured in order to obtain a high-definition still image, an image based on the captured images should be displayed promptly. To realize this, a method is conceivable in which, after the plurality of frame images are captured, one of them is immediately displayed on the display unit 27 while the super-resolution processing using the plurality of frame images is executed, and the display of the obtained high-resolution image on the display unit 27 and its storage in the external memory 18 are executed after completion of the super-resolution processing.
  • FIG. 4 shows a conceptual diagram of this method.
  • FIG. 5 is an analog image of a subject to be photographed by the imaging apparatus 1.
  • An image 301 is a frame image before super-resolution processing that is displayed immediately after shooting, and an image 302 is a high-resolution image based on a plurality of frame images that is displayed after completion of the super-resolution processing.
  • the effect of the super-resolution processing is exaggerated (the same applies to FIG. 6 described later).
  • the user cannot confirm the processing result at all until the super-resolution processing that requires a relatively long time is completed. This is contrary to the user's desire to confirm the effect of high resolution based on super-resolution processing as soon as possible.
  • Since the image 302 is obtained only after a considerable time has elapsed, the user often checks only the image 301 on the display unit 27. Even in that case, the high-resolution image is stored in the external memory 18 after the super-resolution processing is completed; however, when the user, having seen only the image 301, later looks at the stored image, the image 301 and the stored image differ considerably in image quality, which may give a sense of incongruity.
  • the imaging device 1 displays the intermediate result of the super-resolution processing generated in the process of executing the super-resolution processing on the display unit 27 step by step during the execution of the super-resolution processing.
  • The execution of the super-resolution processing is started while the image 301 is displayed, immediately after the plurality of frame images for generating the high-resolution image are captured, and an intermediate high-resolution image, which is sequentially updated as the processing group consisting of steps S12 to S14 in FIG. 3 is repeatedly executed, is displayed on the display unit 27 and updated step by step.
  • In this way, images 303 and 304, which may be called intermediate results of the super-resolution processing, are displayed and sequentially updated, and finally the image 302, the final high-resolution image, is displayed.
  • a frame image obtained by shooting is handled as an actual low resolution image unless otherwise specified. Further, it is assumed that an actual low resolution image Fa is obtained by shooting at a certain time, and an actual low resolution image Fb is obtained by subsequent shooting.
  • the shooting interval between the images Fa and Fb corresponds to, for example, a frame period.
  • FIG. 7 is an internal block diagram of the video signal processing unit 13 according to the first embodiment.
  • the video signal processing unit 13 in FIG. 7 includes a super-resolution processing unit 40, an iterative control unit 50, first and second signal control units 51 and 53, and a signal processing unit 52.
  • the super-resolution processing unit 40 includes a memory unit 41 having frame memories 41A and 41B, a motion amount calculation unit 42, a motion amount storage unit 43, and a super-resolution calculation unit 44.
  • the frame memory 41A temporarily stores image data of an actual low resolution image for one frame represented by a digital signal from the AFE 12.
  • the frame memory 41B temporarily stores image data of an actual low resolution image for one frame stored in the frame memory 41A.
  • The contents stored in the frame memory 41A are transferred to the frame memory 41B every time one frame elapses. Thereby, at the end of the second frame, the image data of the actual low-resolution images Fa and Fb are recorded in the frame memories 41B and 41A, respectively.
  • the motion amount calculation unit 42 is provided with the image data of the actual low resolution image of the current frame from the AFE 12 and the image data of the actual low resolution image of the previous frame from the frame memory 41A.
  • the motion amount calculation unit 42 calculates a motion amount representing the amount of positional deviation between two given real low-resolution images by comparing the two given image data.
  • This motion amount is a two-dimensional amount including a horizontal component and a vertical component, and is expressed as a so-called motion vector.
  • the calculated motion amount is stored in the motion amount storage unit 43.
  • the motion amount calculation unit 42 calculates a motion amount between two real low-resolution images using a representative point matching method, a block matching method, a gradient method, or the like.
  • The motion amount calculated here has so-called sub-pixel resolution, which is finer than the pixel interval of the actual low-resolution image. That is, the motion amount is calculated with, as the minimum unit, a distance shorter than the interval between two pixels adjacent in the horizontal or vertical direction in the actual low-resolution image.
  • A known calculation method can be used to calculate a positional deviation amount with sub-pixel resolution. For example, the method described in Japanese Patent Application Laid-Open No. 11-345315, or the method described in Okutomi, "Digital Image Processing", second edition, CG-ARTS Association, March 1, 2007 (p. 205), can be used.
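As an illustration of sub-pixel motion estimation, the following 1-D sketch performs integer block matching on the mean absolute difference and refines the minimum by parabola fitting, yielding a shift finer than the pixel interval. This is a generic textbook technique, not the specific method of the cited references; `estimate_shift` and its parameters are assumed names.

```python
import numpy as np

def estimate_shift(ref, cur, max_shift=4):
    """Estimate the 1-D displacement of `cur` relative to `ref` with
    sub-pixel resolution: integer block matching on the mean absolute
    difference (SAD), then parabola fitting through the three scores
    around the best integer shift."""
    n = len(ref)
    shifts = list(range(-max_shift, max_shift + 1))
    sad = []
    for s in shifts:
        i0, i1 = max(0, s), min(n, n + s)   # overlap region for shift s
        sad.append(np.abs(cur[i0:i1] - ref[i0 - s:i1 - s]).mean())
    k = int(np.argmin(sad))
    # A parabola through the three SAD values around the integer minimum
    # places the true minimum between pixel positions (sub-pixel resolution).
    if 0 < k < len(sad) - 1:
        a, b, c = sad[k - 1], sad[k], sad[k + 1]
        denom = a - 2 * b + c
        if denom > 0:
            return shifts[k] + 0.5 * (a - c) / denom
    return float(shifts[k])
```

In the two-dimensional case the same idea is applied to horizontal and vertical components, giving the motion-vector form described above.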
  • the super-resolution calculation unit 44 is based on the two actual low-resolution images Fa and Fb given from the frame memories 41A and 41B and the motion amount between the images Fa and Fb stored in the motion amount storage unit 43.
  • a high resolution image is generated by super-resolution processing.
  • The super-resolution calculation unit 44 first generates an initial high-resolution image, as the high-resolution image before update, from the images Fa and Fb according to the processing in steps S11 to S14 in FIG. 3, and then sequentially updates the high-resolution image.
  • The initial high-resolution image is represented by the symbol Fx1, and the high-resolution image obtained by executing the process of step S14 once on the initial high-resolution image Fx1 is represented by the symbol Fx2; that is, the high-resolution image Fx2 is the initial high-resolution image Fx1 updated once.
  • High-resolution images Fx3, Fx4, and so on are obtained by sequentially updating the high-resolution image Fx2.
  • The first signal control unit 51 outputs the image data of the high-resolution image output from the super-resolution calculation unit 44 to the super-resolution calculation unit 44 and/or the signal processing unit 52 under the control of the iterative control unit 50.
  • the signal processing unit 52 generates a video signal (luminance signal and color difference signal) of the high resolution image from the image data of the high resolution image given through the first signal processing unit 51.
  • The second signal control unit 53 outputs the video signal of the high-resolution image generated by the signal processing unit 52 to the display processing unit 20 and/or the compression processing unit 16 in FIG. 1 under the control of the iterative control unit 50.
  • The iterative control unit 50 controls the first and second signal control units 51 and 53. Prior to a detailed description of this control, a configuration example of the super-resolution calculation unit 44 will be described.
  • FIG. 8 is an internal block diagram of the super-resolution operation unit 44.
  • the super-resolution operation unit 44 in FIG. 8 includes parts referred to by reference numerals 61 to 65.
  • One of the two images Fa and Fb is set as a base frame and the other is set as a reference frame. Here, the image Fa is set as the base frame.
  • the number of pixels of the low resolution image is u and the number of pixels of the high resolution image is v.
  • v is an arbitrary value larger than u. For example, if the resolution of the high resolution image is twice that of the low resolution image in the vertical and horizontal directions, v is 4 times u. Of course, the resolution of the high resolution image may be other than twice the resolution of the low resolution image.
  • A matrix in which the pixel values of the actual low-resolution image Fa, composed of u pixels, are arranged is represented by Ya, and the corresponding matrix for the actual low-resolution image Fb, also composed of u pixels, is represented by Yb.
  • the initial high resolution estimation unit 61 executes a process corresponding to step S11 in FIG.
  • The reference frame can be regarded as an image obtained by shifting the base frame by an amount corresponding to the motion amount between the base frame and the reference frame. Therefore, the initial high-resolution estimation unit 61 detects the displacement of the reference frame with respect to the base frame based on the motion amount stored in the motion amount storage unit 43, and performs positional deviation correction to cancel this displacement. An initial high-resolution image is then generated by combining the base frame and the corrected reference frame.
  • As a method for generating the initial high-resolution image, a method using interpolation processing as described in JP-A-2006-41603 can be used.
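The shift-and-interleave idea behind the initial estimate can be sketched in 1-D as follows: cancel the sub-pixel displacement of the reference frame against the base frame by interpolation, then interleave the two sample grids into a doubled-resolution grid. This is a simplified stand-in for the interpolation-based method of JP-A-2006-41603, and `initial_high_res` is an assumed name.

```python
import numpy as np

def initial_high_res(fa, fb, motion):
    """Sketch of initial high-resolution estimation (1-D): fa is the base
    frame, fb the reference frame, and `motion` the sub-pixel shift of fb
    relative to fa in low-resolution pixel units.  The displacement of fb
    is cancelled by linear interpolation, and the corrected samples fill
    the half-integer positions of a doubled-resolution grid."""
    n = len(fa)
    hr = np.zeros(2 * n)
    hr[0::2] = fa                      # base-frame samples at integer positions
    # fb[i] samples position i + motion on fa's grid, so the value at the
    # half-integer position i + 0.5 is fb read at index i + 0.5 - motion.
    pos = np.arange(n) + 0.5 - motion
    hr[1::2] = np.interp(pos, np.arange(n), fb)
    return hr
```

With a half-pixel motion amount this reduces to pure interleaving of the two frames; for other sub-pixel shifts the interpolation performs the positional deviation correction described above.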
  • X represents a matrix in which the pixel values of the initial high-resolution image Fx1 composed of v pixels are arranged.
  • a matrix in which the pixel values of a high-resolution image other than the initial high-resolution image Fx1 (for example, Fx2) are arranged is also represented by X.
  • of course, the contents of the matrix X of the high-resolution image Fxi differ from the contents of the matrix X of the high-resolution image Fxj (where i ≠ j).
  • the selection unit 62 selects and outputs either the high-resolution image (initial high-resolution image) generated by the initial high-resolution estimation unit 61 or the high-resolution image temporarily stored in the frame memory 65.
  • the selection unit 62 selects the initial high-resolution image estimated by the initial high-resolution estimation unit 61 in the first selection operation, and selects the high-resolution image temporarily stored in the frame memory 65 in the second and subsequent selection operations.
  • the high-resolution update amount calculation unit (hereinafter abbreviated as the update amount calculation unit) 63 calculates the displacement of the actual low-resolution images Fa and Fb with respect to the high-resolution image given from the selection unit 62, based on that high-resolution image, the actual low-resolution images Fa and Fb, and the amount of motion between the images Fa and Fb stored in the motion amount storage unit 43.
  • in order to estimate the original low-resolution images (that is, the actual low-resolution images Fa and Fb) by degrading the high-resolution image from the selection unit 62, camera parameter matrices Wa and Wb are obtained whose parameters are each calculated displacement, the image blur caused by the reduction in resolution, and the amount of downsampling from the high-resolution image of v pixels to a low-resolution image of u pixels.
  • in step S12 of FIG. 3, the update amount calculation unit 63 multiplies the matrix X of the high-resolution image selected by the selection unit 62 individually by the camera parameter matrices Wa and Wb, thereby generating two estimated low-resolution images corresponding to the estimated images of the low-resolution images Fa and Fb.
  • the two estimated low-resolution images are represented by the matrices Wa·X and Wb·X.
  • the update amount calculation unit 63 evaluates the high-resolution image by an evaluation function I given by Equation (1):
  • I = (Wa·X − Ya)^T·(Wa·X − Ya) + (Wb·X − Yb)^T·(Wb·X − Yb) + λ·(C·X)^T·(C·X) … (1)
  • the third term on the right side of Equation (1) is a constraint term based on the high-resolution image from the selection unit 62, and the matrix C appearing in it is a matrix based on a prior probability model.
  • the matrix C is set based on the prior knowledge that “a high-resolution image has few high-frequency components”, and is formed by a high-pass filter such as a Laplacian filter.
  • the update amount calculation unit 63 obtains the gradient ∂I/∂X of the evaluation function I.
  • the gradient ∂I/∂X is expressed by the following Equation (2).
  • a matrix to which the superscript T is attached represents the transposed matrix of the original matrix; for example, Wa^T represents the transposed matrix of the matrix Wa.
  • ∂I/∂X = 2 × {Wa^T·(Wa·X − Ya) + Wb^T·(Wb·X − Yb) + λ·C^T·C·X} … (2)
  • the gradient ∂I/∂X based on the matrix X of the high-resolution image Fxi is calculated as the update amount for the high-resolution image Fxi (where i is a natural number). This calculation process corresponds to the process of step S13 in FIG. 3.
  • as in step S14 of FIG. 3, the subtraction unit 64 subtracts the update amount ∂I/∂X for the high-resolution image Fxi from the matrix X of the high-resolution image Fxi selected by the selection unit 62.
  • a matrix X ′ of the following formula (3) is calculated (where i is a natural number).
  • the matrix X ′ corresponds to a matrix in which pixel values of the high resolution image Fx (i + 1) are written.
  • the high resolution image Fxi is updated by the subtraction processing in the subtraction unit 64, and the updated high resolution image Fx (i + 1) is generated.
  • X′ = X − ∂I/∂X … (3)
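  • the gradient computation of Equation (2) and the update of Equation (3) can be sketched in NumPy for a tiny 1-D example, where Wa, Wb, and C are small enough to hold as explicit matrices. The helper names (make_degradation), the step size beta, and the constraint weight lam are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Minimal 1-D sketch of one super-resolution unit process (steps S12-S14),
# assuming tiny signals so Wa, Wb and C fit as explicit matrices.
u, v = 4, 8          # low- and high-resolution pixel counts (v = 2u in 1-D)

def make_degradation(shift: int) -> np.ndarray:
    """Blur (2-tap average) then 2x downsample; the integer shift stands
    in for the sub-pixel displacement of the reference frame."""
    W = np.zeros((u, v))
    for r in range(u):
        c = (2 * r + shift) % v
        W[r, c] = W[r, (c + 1) % v] = 0.5
    return W

Wa, Wb = make_degradation(0), make_degradation(1)
C = np.zeros((v, v))           # circular Laplacian high-pass constraint
for r in range(v):
    C[r, r] = 2.0
    C[r, (r - 1) % v] = C[r, (r + 1) % v] = -1.0

Ya = np.array([1.0, 2.0, 3.0, 4.0])    # actual low-resolution images
Yb = np.array([1.5, 2.5, 3.5, 4.5])
X = np.repeat(Ya, 2)                   # crude initial high-resolution estimate
lam, beta = 0.01, 0.2                  # constraint weight and step size

# Equation (2): gradient of the evaluation function I with respect to X.
grad = 2.0 * (Wa.T @ (Wa @ X - Ya) + Wb.T @ (Wb @ X - Yb) + lam * C.T @ C @ X)
X_new = X - beta * grad                # Equation (3), with a step size beta

# One update step should reduce the evaluation function I of Equation (1).
def I(x):
    return (np.sum((Wa @ x - Ya) ** 2) + np.sum((Wb @ x - Yb) ** 2)
            + lam * np.sum((C @ x) ** 2))
assert I(X_new) < I(X)
```

  • note that the patent writes Equation (3) without an explicit step size; a small beta is included here only to keep the toy iteration stable.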
  • the image data of the high resolution image generated by the update by the subtracting unit 64 is output to the first signal control unit 51 in FIG.
  • the image data of the high-resolution image output from the subtraction unit 64 is given to the frame memory 65 via the first signal control unit 51.
  • the frame memory 65 temporarily stores the image data of the given high resolution image, and gives this to the selection unit 62.
  • the high-resolution image output from the subtraction unit 64 is updated again by the update amount calculation unit 63 and the subtraction unit 64.
  • the process of updating the high-resolution image Fxi only once and obtaining the high-resolution image Fx (i + 1) is referred to as a super-resolution unit process (where i is a natural number).
  • An upper limit can be set for the number of repetitions of super-resolution unit processing.
  • the super-resolution process is completed when the number of repetitions reaches the upper limit. If it is determined that the update amount for the high-resolution image has become sufficiently small regardless of the number of repetitions of the super-resolution calculation process, the super-resolution process may be completed at that time.
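  • the two completion rules above (an upper limit on repetitions, plus early completion when the update amount becomes sufficiently small) can be sketched as follows; the quadratic gradient is a stand-in for the real ∂I/∂X, and all names and values are illustrative:

```python
import numpy as np

def run_super_resolution(x0, grad_fn, max_iters=8, eps=1e-6, beta=0.5):
    """Repeat the super-resolution unit process until either the repetition
    upper limit is reached or the update amount becomes sufficiently small."""
    x = x0
    for _ in range(max_iters):              # upper limit on repetitions
        update = beta * grad_fn(x)          # one super-resolution unit process
        x = x - update
        if np.sum(np.abs(update)) < eps:    # update amount sufficiently small
            break                           # complete before the limit
    return x

# Toy objective (x - 3)^2 per element; converges well before 8 iterations.
target = np.array([3.0, 3.0])
x_final = run_super_resolution(np.zeros(2), lambda x: 2.0 * (x - target))
assert np.allclose(x_final, target)
```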
  • in the following, the high-resolution image finally obtained when the super-resolution processing is completed is called the “final high-resolution image”, and a high-resolution image obtained before the super-resolution processing is completed is called an “intermediately generated high-resolution image”.
  • the final high resolution image is the image Fx9
  • each of the images Fx1 to Fx8 is an intermediately generated high-resolution image.
  • image data of the image Fa is given to the display processing unit 20 via the signal processing unit 52, and a display image based on the image Fa (corresponding to the image 301 in FIG. 6) is displayed on the display unit 27. Instead of the display image based on the image Fa, a display image based on the image Fb may be displayed.
  • in the following, when simply referred to as a display image, it refers to an image displayed on the display unit 27, and when simply referred to as a display screen, it refers to the display screen of the display unit 27.
  • it is assumed here that the display unit 27 is a display unit provided in the imaging apparatus 1, but the display unit 27 may be a display device external to the imaging apparatus 1 (such as a liquid crystal display or a plasma display).
  • the iterative control unit 50 controls the first and second signal control units 51 and 53 so that the following operations are executed in the simple repetition period, the stage display period, and the completion processing period.
  • the image data of the high resolution image output from the super-resolution calculation unit 44 is given only to the super-resolution calculation unit 44 via the first signal control unit 51. Therefore, in the simple repetition period, the display image on the display unit 27 is not updated, while the high-resolution image is updated by repeated execution of the super-resolution unit process.
  • in the stage display period, the image data of the high-resolution image (intermediately generated high-resolution image) output from the super-resolution calculation unit 44 is supplied to the super-resolution calculation unit 44 via the first signal control unit 51, and is also supplied to the display processing unit 20 through the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53. Therefore, in the stage display period, the display image on the display unit 27 can be updated with the contents of the latest high-resolution image. At this time, the high-resolution image is continuously updated by repeatedly executing the super-resolution unit process.
  • in the completion processing period, the image data of the high-resolution image (final high-resolution image) output from the super-resolution calculation unit 44 is given only to the signal processing unit 52 via the first signal control unit 51.
  • the video signal of the high resolution image generated by the signal processing unit 52 based on the image data is output to the display processing unit 20 and the compression processing unit 16 via the second signal control unit 53.
  • the completion processing period is a period that comes after completion of the super-resolution processing.
  • the start time of the simple repetition period, the start time of the stage display period, and the start time of the completion processing period are t A , t B, and t C , respectively.
  • the super-resolution processing is completed when the high-resolution image Fx9 is obtained. That is, it is assumed that the image Fx9 is the final high resolution image.
  • the latest high-resolution image at time t B is Fx3.
  • the super-resolution computing unit 44 starts super-resolution processing with the time t A as a starting point (for example, starts generating the initial high-resolution image Fx1).
  • images Fx1, Fx2, and Fx3 are sequentially generated by the super-resolution calculation unit 44.
  • the image data of the images Fx1, Fx2, and Fx3 is supplied from the first signal control unit 51 to the frame memory 65, but is not supplied to the signal processing unit 52. Therefore, the contents of the images Fx1, Fx2, and Fx3 are not reflected in the display image during the simple repetition period.
  • in the stage display period, the images Fx4 to Fx8 are sequentially supplied to the display processing unit 20 via the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53. Therefore, when the image Fx4 is given to the display processing unit 20, the display image of the display unit 27 can be updated using the image Fx4, and when the image Fx5 is given to the display processing unit 20, the display image of the display unit 27 can be updated using the image Fx5.
  • when the image Fx4 is given to the display processing unit 20, the display processing unit 20 can display the whole or a part of the image Fx4 on the display unit 27, and when the image Fx5 is given to the display processing unit 20, it can display the whole or a part of the image Fx5 on the display unit 27.
  • the images Fx6 to Fx8 are displayed on the display unit 27.
  • Images 303 and 304 in FIG. 6 correspond to images displayed during the stage display period.
  • the operation shifts from the stage display period to the completion processing period at time t C, when the image Fx9 is generated.
  • the image data of the image Fx9 is supplied to the signal processing unit 52 via the first signal control unit 51 and converted into a video signal, and this video signal (the video signal of the image Fx9) is output via the second signal control unit 53 to the display processing unit 20 and the compression processing unit 16.
  • the display processing unit 20 can update the display image of the display unit 27 using the image Fx9.
  • the display processing unit 20 can cause the display unit 27 to display all or part of the image Fx9 when the image Fx9 is given to the display processing unit 20.
  • An image 302 in FIG. 6 corresponds to an image displayed during or after the completion processing period.
  • the compression processing unit 16 compresses the video signal of the image Fx9, and the compressed video signal is stored in the external memory 18.
  • the iterative control unit 50 determines whether the point of interest belongs to the simple repetition period or the stage display period based on a predetermined iterative control index, and, based on the iterative control index, controls the update timing of the display image during the stage display period. Examples of this iterative control index are shown below. As will be described later, it is possible to eliminate the simple repetition period; in other words, the stage display period may be started immediately after the start of the super-resolution processing (time t A and time t B may coincide).
  • the first index for iterative control is the elapsed time from the start time of execution of the super-resolution processing (that is, time t A ).
  • the iterative control unit 50 compares the elapsed time TE from time t A at the point of interest with a predetermined reference elapsed time TE REF0. When TE < TE REF0, it is determined that the point of interest belongs to the simple repetition period; when TE ≥ TE REF0 and the super-resolution processing is not completed, it is determined that the point of interest belongs to the stage display period.
  • in the stage display period, the elapsed time TE is compared with predetermined reference elapsed times TE REF1, TE REF2, TE REF3, ..., and, as shown in the figure, each time the reference elapsed times TE REF1, TE REF2, TE REF3, ... are reached, the first, second, third, ... display update processes are executed.
  • TE REF0 can also be set to zero. If TE REF0 is set to zero, there is no simple repetition period.
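  • the elapsed-time control described above can be sketched as follows; the threshold values are hypothetical, and setting TE_REF0 to zero would eliminate the simple repetition period:

```python
# Illustrative classification of the point of interest by elapsed time TE,
# following the first iterative-control index.
TE_REF0 = 0.2                      # start of the stage display period (s)
TE_REFS = [0.2, 0.4, 0.6]          # display-update instants TE_REF1..3 (s)

def classify_period(te: float, processing_done: bool) -> str:
    if processing_done:
        return "completion"
    return "simple_repetition" if te < TE_REF0 else "stage_display"

def display_updates_fired(te: float) -> int:
    """Number of display update processes executed by elapsed time te."""
    return sum(1 for ref in TE_REFS if te >= ref)

assert classify_period(0.1, False) == "simple_repetition"
assert classify_period(0.5, False) == "stage_display"
assert display_updates_fired(0.45) == 2
```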
  • the latest high-resolution image obtained at the point of interest is reflected in the display image of the display unit 27.
  • the display image is updated so that the whole or a part of the image Fx4 is displayed on the display unit 27.
  • in some cases, the high-resolution image is updated a plurality of times between the first display update process and the second display update process. In such a case, even if the image Fxi is displayed as the display image in the first display update process, a high-resolution image other than the image Fx(i+1) (for example, the image Fx(i+2)) is reflected in the display image by the second display update process.
  • the second index for iterative control is the number of executions PN of super-resolution unit processing for the images Fa and Fb.
  • when the number of executions PN is 1, 2, 3, ..., the images Fx2, Fx3, Fx4, ... are generated, respectively.
  • the iterative control unit 50 compares the number of executions PN at the point of interest with a predetermined reference number PN REF0. If PN < PN REF0, it is determined that the point of interest belongs to the simple repetition period. On the other hand, when PN ≥ PN REF0 and the super-resolution processing is not completed, it is determined that the point of interest belongs to the stage display period.
  • in the stage display period, the number of executions PN is compared with predetermined reference numbers PN REF1, PN REF2, PN REF3, ..., and, as shown in the figure, each time the reference numbers PN REF1, PN REF2, PN REF3, ... are reached, the first, second, third, ... display update processes are executed.
  • the third index for iterative control is the image quality improvement amount of the high resolution image by the repeated execution of the super-resolution unit processing.
  • the update amount calculation unit 63 in FIG. 8 generates the matrices Wa·X and Wb·X representing the pixel values of the two estimated low-resolution images from the matrix X representing the pixel values of the high-resolution image Fxi. The error between each estimated low-resolution image and the corresponding actual low-resolution image is calculated, and the high-resolution image Fx(i+1) is generated from the high-resolution image Fxi by the update based on the update amount ∂I/∂X. Such updating with the update amount ∂I/∂X is repeatedly executed to improve the image quality of the high-resolution image.
  • the update amount ⁇ I / ⁇ X is expressed by a matrix having the same number of elements as the number of elements of the matrix X.
  • the iterative control unit 50 obtains the sum of absolute values of each element of the update amount ⁇ I / ⁇ X every time the high-resolution image is updated.
  • the total sum of the absolute values of the update amounts ⁇ I / ⁇ X used when generating the high-resolution image Fx (i + 1) from the high-resolution image Fxi is represented by Q i .
  • the sum Q i represents the image quality improvement amount of the high resolution image Fx (i + 1) viewed from the high resolution image Fxi, and it can be said that the image quality improvement amount is larger as the sum Q i is larger.
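  • the quantity Q i described above can be sketched as follows; the update arrays are fabricated only to show the typical shrinking of Q i as the iteration converges:

```python
import numpy as np

def improvement_Q(update: np.ndarray) -> float:
    """Q_i: sum of the absolute values of the elements of the update
    amount dI/dX used to produce Fx(i+1) from Fxi."""
    return float(np.sum(np.abs(update)))

# Fabricated update amounts for three consecutive unit processes.
updates = [np.array([0.75, -0.5]),
           np.array([0.25, -0.125]),
           np.array([0.0625, 0.03125])]
Qs = [improvement_Q(u) for u in updates]
assert Qs == [1.25, 0.375, 0.09375]
assert Qs[0] > Qs[1] > Qs[2]      # improvement per step shrinks over time
```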
  • This improvement evaluation value EV i represents the image quality improvement amount of the high resolution image Fx (i + 1) viewed from the initial high resolution image Fx1.
  • the iterative control unit 50 includes an estimation unit (not shown) that estimates the image quality improvement amount by calculating the improvement evaluation value EV i.
  • FIG. 12 shows the relationship between the number of executions PN of the super-resolution unit process and the improvement evaluation value EV i .
  • the iterative control unit 50 using the third index for iterative control compares the improvement evaluation value EV i at the time of interest with a predetermined reference evaluation value EV REF0 .
  • when EV i < EV REF0, it is determined that the point of interest belongs to the simple repetition period, and when EV i ≥ EV REF0 and the super-resolution processing is not completed, it is determined that the point of interest belongs to the stage display period. Then, when the point of interest belongs to the stage display period, the improvement evaluation value EV i is compared with predetermined reference evaluation values EV REF1, EV REF2, EV REF3, ..., and each time these reference evaluation values are reached, the first, second, third, ... display update processes are executed.
  • FIG. 13 is an internal block diagram of the display processing unit 20 according to the first embodiment.
  • the display processing unit 20 in FIG. 13 includes a display video processing unit 71, a VRAM (Video Random Access Memory) 72, and a display driver 73.
  • in the shooting mode, the video signal from the video signal processing unit 13 is input to the display video processing unit 71, and in the playback mode, the video signal from the external memory 18 is input to the display video processing unit 71 via the expansion processing unit 19.
  • the video signal input to the display video processing unit 71 is, for example, a video signal of a low-resolution image that has not been subjected to super-resolution processing, or a video signal of the final high-resolution image or of an intermediately generated high-resolution image output from the video signal processing unit 13.
  • the display video processing unit 71 converts the resolution of the image so that the image (low-resolution image or high-resolution image) represented by the given video signal can be displayed on the display unit 27, and writes the video signal of the resolution-converted image into the VRAM 72.
  • the VRAM 72 is a video display memory for the display unit 27.
  • the display driver 73 displays an image represented by the video signal written in the VRAM 72 on the display screen of the display unit 27.
  • the display video processing unit 71 can extract a partial image area of the high-resolution image as a cut-out area by a cut-out process of cutting out a part of the entire high-resolution image. If necessary, a reduction process for reducing the image size of the high-resolution image can be performed before or after the cut-out process.
  • the image size of the cut-out area (or the image size after the reduction process and the cut-out process) is determined according to the resolution of the display unit 27.
  • reference numeral 320 represents the entire image area of the high resolution image
  • reference numeral 321 represents the cutout area.
  • a reference numeral 322 in FIG. 14B represents a display image corresponding to an image in the cutout area 321.
  • a region having a predetermined shape at a predetermined position on the high-resolution image can be set as a cut-out region. More specifically, for example, a rectangular area having a predetermined image size located near the center of the high-resolution image can be extracted as a cut-out area.
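  • the simplest cut-out rule above, a rectangle of predetermined size near the center of the high-resolution image, can be sketched as follows; the sizes and function name are illustrative:

```python
import numpy as np

def center_cutout(image: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Extract an out_h x out_w rectangle centered in the image."""
    h, w = image.shape[:2]
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return image[top:top + out_h, left:left + out_w]

hi_res = np.arange(8 * 8).reshape(8, 8)     # stand-in high-resolution image
cut = center_cutout(hi_res, 4, 4)
assert cut.shape == (4, 4)
assert cut[0, 0] == hi_res[2, 2]            # cut-out starts 2 px from edges
```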
  • a face detection unit (not shown) included in the video signal processing unit 13 detects, by a known face detection process, the position and size of a face area on the low-resolution image Fa (or Fb) that is the source of the high-resolution image, based on the image data of the image Fa (or Fb). Based on this detection result, the position and size of the face region on the high-resolution image can be calculated, and the position and size of the cut-out region on the high-resolution image can be obtained from the calculation result.
  • the in-focus area can be set as the cut-out area. This is because the main subject to which the photographer pays attention is highly likely to exist in the in-focus area.
  • an in-focus area can be detected based on an AF evaluation value used for autofocus control using a TTL (Through The Lens) type contrast detection method.
  • an AF evaluation unit included in the video signal processing unit 13 divides the entire image area of the low-resolution image Fa (or Fb) that is the source of the high-resolution image into a plurality of AF evaluation areas, and detects the contrast in each AF evaluation area from the image data of the low-resolution image Fa (or Fb), whereby an AF evaluation value corresponding to the contrast is obtained. Then, the AF evaluation area having the largest detected contrast (AF evaluation value) among the plurality of AF evaluation areas is determined to be the in-focus area, and an area on the high-resolution image corresponding to the in-focus area is set as the cut-out area.
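  • the contrast-based selection of the in-focus area can be sketched as follows, assuming a 3×3 grid of AF evaluation areas and using per-area variance as a stand-in for the AF evaluation value:

```python
import numpy as np

def pick_focus_block(low_res: np.ndarray, grid: int = 3):
    """Return the (row, col) of the AF evaluation area with the largest
    contrast, here approximated by the variance of the block."""
    h, w = low_res.shape
    bh, bw = h // grid, w // grid
    scores = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            block = low_res[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            scores[r, c] = block.var()      # stand-in AF evaluation value
    return np.unravel_index(np.argmax(scores), scores.shape)

img = np.zeros((9, 9))
img[3:6, 6:9] = np.array([[0, 9, 0], [9, 0, 9], [0, 9, 0]])  # busy block
assert pick_focus_block(img) == (1, 2)     # middle-right block wins
```

  • in practice the selected area index would then be mapped to the corresponding region on the high-resolution image to define the cut-out area.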
  • the position and size of the cutout area may be set according to a manual operation by the user (such as an operation on the operation unit 26).
  • it is also possible to superimpose additional items on an image representing the whole or a part of a high-resolution image and display the image after superimposition. Examples of the additional items (the indexes 331, 332, and 340 and the image 335) are described below.
  • the processing for generating the additional item can be performed by the display video processing unit 71 or can be performed by the video signal processing unit 13.
  • an index 331 representing the remaining processing time and the elapsed time TE is superimposed on an image representing a part (or the whole) of the high-resolution image, and the superimposed image is displayed as the display image.
  • the index 331 is a rectangular area whose longitudinal direction is the horizontal direction of the image and which is painted in a first color and a second color; the remaining processing time and the elapsed time TE are represented by the area ratio of the first-color area and the second-color area occupying the rectangular area.
  • the first color area is indicated by a hatched area.
  • the first and second color areas are arranged on the left and right sides of the rectangular area representing the entire index 331, respectively; as the elapsed time TE increases, the right end of the first color area, which coincides with the left end of the second color area, moves to the right.
  • the remaining processing time at the point of interest is the time required from the point of interest until the super-resolution processing is completed, and is obtained by subtracting the elapsed time TE at the point of interest from the total processing time.
  • the total processing time is the time from time t A to time t C in FIG. 9, and is calculated from the number of times the super-resolution unit process is to be repeatedly executed until the super-resolution processing is completed and from the image sizes of the low-resolution image and the high-resolution image.
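  • the quantities behind the index 331 can be sketched as follows; the total processing time here is a fabricated figure, whereas the patent derives it from the number of unit processes and the image sizes:

```python
def progress_bar_state(total_time: float, elapsed: float):
    """Return (remaining processing time, first-color area ratio)
    for an index-331-style progress bar."""
    remaining = max(total_time - elapsed, 0.0)
    first_color_ratio = min(elapsed / total_time, 1.0)
    return remaining, first_color_ratio

remaining, ratio = progress_bar_state(total_time=2.0, elapsed=0.5)
assert remaining == 1.5
assert ratio == 0.25
```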
  • an index 332 indicating the execution timing of the display update process may be superimposed, together with the index 331, on the image representing a part (or the whole) of the high-resolution image, and the superimposed image may be displayed as the display image.
  • the index 332 is formed from a plurality of line segments located below the index 331. As the elapsed time TE increases, the right end of the first color area moves to the right, and each time the position of the right end in the left-right direction matches one of the drawing positions of the plurality of line segments, the display update process is executed once.
  • the drawing position of each line segment is determined by setting or predicting the execution timing of the display update process in advance.
  • the entire reduced image 335 of the high-resolution image may be superimposed on an image representing a part of the high-resolution image (the image of the cut-out region), and the image after the superimposition may be displayed as the display image.
  • the reduced image 335 may be an entire reduced image of the low resolution image (Fa or Fb) that is the source of the high resolution image.
  • an index 340 indicating the degree of expectation of the effect of the super-resolution processing is superimposed on an image representing a part (or the whole) of the high resolution image. Then, the superimposed image may be displayed as a display image. More specifically, for example, using the icons as shown in FIG. 18B, the expectation is classified and displayed in three stages.
  • the resolution (substantial resolution) of the high-resolution image obtained by the super-resolution processing is higher than that of the low-resolution image.
  • the degree of expectation here refers to the degree of improvement in resolving power of a high-resolution image obtained by super-resolution processing with respect to a low-resolution image.
  • the resolution improvement by the super-resolution processing is realized on the premise that there is a positional shift in units of sub-pixels between the images Fa and Fb. If the amount of motion between the images Fa and Fb is completely zero, the image Fa and the image Fb are completely the same image (ignoring the motion of the subject in real space), so no improvement in resolution by the super-resolution processing can be hoped for. On the other hand, if there is an appropriate positional shift between the images Fa and Fb, a high degree of resolution improvement can be expected.
  • the degree of expectation can be estimated based on the amount of motion between the low-resolution images Fa and Fb that are the basis of the high-resolution image.
  • the horizontal component and the vertical component of the amount of motion between the images Fa and Fb are detected based on the stored contents of the motion amount storage unit 43, and further, the decimal part M abH of the detected horizontal component and the decimal part M abV of the detected vertical component are obtained.
  • the decimal part here is the fractional part obtained when the adjacent pixel interval pp L of the low-resolution image is set to 1.
  • an evaluation value EV A is obtained based on the decimal parts M abH and M abV. When EV A > EV A1, the degree of expectation is estimated to be the first degree of expectation; when EV A1 ≥ EV A > EV A2, it is estimated to be the second degree of expectation; and when EV A2 ≥ EV A, it is estimated to be the third degree of expectation. The estimated degree of expectation is reflected in the index 340.
  • the values of EV A1 and EV A2 are set in advance so that “EV A1 > EV A2 > 0” is satisfied.
  • the evaluation value EV A is an evaluation value that takes a higher value as the decimal parts M abH and M abV are closer to (0.5 × pp L), and a value with this property is set as the evaluation value EV A.
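  • the three-stage expectation estimate can be sketched as follows. The patent does not give the exact formula for EV A here, so this stand-in simply scores how close the fractional parts of the motion amount are to 0.5 pixels (1.0 at exactly half a pixel, 0.0 at an integer shift), assuming pp L = 1; the thresholds EV_A1 and EV_A2 are hypothetical values satisfying EV A1 > EV A2 > 0:

```python
EV_A1, EV_A2 = 0.6, 0.3    # hypothetical thresholds with EV_A1 > EV_A2 > 0

def expectation_level(motion_h: float, motion_v: float) -> int:
    """Classify the expectation of super-resolution effect into 3 stages."""
    m_h, m_v = motion_h % 1.0, motion_v % 1.0     # fractional parts
    # Stand-in EV_A: 1.0 at a half-pixel shift, 0.0 at an integer shift.
    ev_a = 1.0 - 2.0 * max(abs(m_h - 0.5), abs(m_v - 0.5))
    if ev_a > EV_A1:
        return 1          # high expectation of resolution improvement
    if ev_a > EV_A2:
        return 2
    return 3              # little improvement expected

assert expectation_level(3.5, 2.5) == 1    # half-pixel shift: best case
assert expectation_level(4.0, 7.0) == 3    # integer shift: no sub-pixel info
```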
  • in the super-resolution processing, the positional deviation between the images Fa and Fb corresponding to the amount of motion between them is corrected, and the images Fa and Fb after the positional deviation correction are combined to achieve a high resolution. If both M abH and M abV are zero (or substantially zero), as shown in the figure, little improvement in resolution can be expected from this combination.
  • the input source of the image data of the images Fa and Fb to the super-resolution processing unit 40 is the AFE 12 when the super-resolution processing is performed in the shooting mode, whereas it is the decompression processing unit 19 when the super-resolution processing is performed in the playback mode. Except that this input source differs, the operations of the parts that receive the output data of the super-resolution processing unit 40 (the repetition control unit 50, the first signal control unit 51, the signal processing unit 52, the second signal control unit 53, and the display processing unit 20) are the same between the shooting mode and the playback mode. Therefore, even when the super-resolution processing is performed in the playback mode, the display update process using an intermediately generated high-resolution image is executed as described above.
  • the image data of the final high-resolution image is recorded in the external memory 18 via the compression processing unit 16. Once the final high-resolution image based on the images Fa and Fb is obtained, it is not necessary to re-execute the super-resolution processing based on the images Fa and Fb.
  • the super-resolution processing unit 40 is provided in the video signal processing unit 13 (see FIG. 7).
  • the entire image area of the low resolution image and the high resolution image is divided into a plurality of areas, and the super-resolution processing for the plurality of divided areas is executed in a time division manner. Then, the results of the super-resolution processing for the divided areas are displayed in order from the divided areas for which the super-resolution processing has been completed.
  • FIG. 20 shows an internal block diagram of the super-resolution processing unit 40a according to the second embodiment.
  • the super-resolution processing unit 40a can be provided in the video signal processing unit 13 or the display processing unit 20 in FIG.
  • the super-resolution processing unit 40a includes each part referred to by reference numerals 41 to 43 and 44a.
  • the memory unit 41, the motion amount calculation unit 42, and the motion amount storage unit 43 in the super-resolution processing unit 40a are the same as those shown in FIG.
  • the image data of the images Fa and Fb is input to the super-resolution processing unit 40a from the AFE 12 in the shooting mode, and from the external memory 18 via the expansion processing unit 19 in the playback mode.
  • the amount of motion between the images Fa and Fb is calculated by the motion amount calculation unit 42 and stored in the motion amount storage unit 43.
  • the image data of the images Fa and Fb are input to the super-resolution calculation unit 44a via the memory unit 41.
  • the super-resolution calculation unit 44a has the same configuration and function as the super-resolution calculation unit 44 of FIG. 8. However, in the super-resolution calculation unit 44a, the output data of the subtraction unit 64 is given directly to the frame memory 65, and the frame memory 65 stores the image data of the high-resolution image output from the subtraction unit 64. Whereas the super-resolution calculation unit 44 of FIG. 8 generates the image data of the entire high-resolution image at the same time, the super-resolution calculation unit 44a sequentially generates the image data of the first, second, third, ... divided areas in the high-resolution image.
  • the divided image areas DR 1 to DR 9 are formed by dividing the entire image areas of the low resolution images Fa and Fb and the high resolution image into three equal parts in the vertical and horizontal directions, respectively.
  • the entire image area of the image Fa is a combination of the divided areas DR 1 to DR 9 of the image Fa
  • the entire image area of the image Fb is a combination of the divided areas DR 1 to DR 9 of the image Fb.
  • the entire image area of the high resolution image is a combination of the divided areas DR 1 to DR 9 of the high resolution image.
  • the super-resolution calculation unit 44a performs super-resolution processing for each divided region.
  • the contents of the individual super-resolution processing are the same as those shown in the first embodiment.
• The super-resolution calculation unit 44a repeats the operation of "executing the super-resolution processing for the divided region DR j and, after its completion, executing the super-resolution processing for the divided region DR j+1" until the super-resolution processing is completed for all the divided regions (where j is a natural number).
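As a rough illustration, the "DR j, then DR j+1" loop can be sketched in Python. This is a hypothetical sketch, not the patent's implementation: `super_resolve_region` merely stands in for the per-region processing of the super-resolution calculation unit 44a (here stubbed as plain nearest-neighbour upscaling), and all names are illustrative.

```python
import numpy as np

def divide_regions(h, w, rows=3, cols=3):
    """Split an h x w image area into rows x cols divided regions
    DR 1 .. DR 9, returned as (top, bottom, left, right) bounds
    in raster order (upper left to lower right)."""
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return [(ys[r], ys[r + 1], xs[c], xs[c + 1])
            for r in range(rows) for c in range(cols)]

def super_resolve_region(fa_region, fb_region, scale=2):
    # Placeholder for the actual per-region super-resolution;
    # here simply nearest-neighbour upscaling of fa_region.
    return np.kron(fa_region, np.ones((scale, scale)))

def process_all_regions(fa, fb, scale=2):
    """Execute the 'DR j, then DR j+1' loop over all divided regions,
    filling in the high-resolution output region by region."""
    h, w = fa.shape
    out = np.zeros((h * scale, w * scale))
    for (t, b, l, r) in divide_regions(h, w):
        out[t * scale:b * scale, l * scale:r * scale] = \
            super_resolve_region(fa[t:b, l:r], fb[t:b, l:r], scale)
    return out
```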
• With reference to FIGS. 22 and 23, the progress of the super-resolution processing and the transition of the display image according to the second embodiment will be described.
• The image data of the images Fa and Fb are input to the super-resolution calculation unit 44a at time t 0 . As shown in FIG. 22, as time progresses after time t 0 , times t 1 , t 2 , ..., t 9 arrive in this order.
  • FIG. 23 shows how the display image changes over time.
• Images 401, 402, 403, and 404 indicate the display images at times t 0 , t 1 , t 2 , and t 9 , respectively, and the hatched areas shown in the display images of FIG. 23 indicate the parts that have undergone super-resolution processing.
• At time t 0 , the display processing unit 20 displays the entire image of the low-resolution image Fa as the display image 401.
• Between times t 0 and t 1 , the super-resolution calculation unit 44a performs super-resolution processing using the image data in the divided region DR 1 of the image Fa and the image data in the divided region DR 1 of the image Fb, thereby generating the image data of the divided region DR 1 of the high-resolution image.
• Similarly, between times t 1 and t 2 , super-resolution processing is executed using the image data in the divided region DR 2 of the image Fa and the image data in the divided region DR 2 of the image Fb, thereby generating the image data of the divided region DR 2 of the high-resolution image.
• In general, between times t j-1 and t j , the super-resolution calculation unit 44a executes super-resolution processing using the image data in the divided region DR j of the image Fa and the image data in the divided region DR j of the image Fb, thereby generating the image data in the divided region DR j of the high-resolution image (where j is a natural number).
• The super-resolution calculation unit 44a sequentially executes such unit processing for the periods t j-1 to t j nine times.
• The super-resolution processing using the image data in the divided region DR j of the image Fa and the image data in the divided region DR j of the image Fb is executed based on the amount of motion between the images Fa and Fb for the divided region DR j .
  • the motion amount between the images Fa and Fb for the divided region DR j is calculated by the motion amount calculation unit 42 and stored in the motion amount storage unit 43.
• The motion amount calculation unit 42 calculates the amount of motion between the images Fa and Fb for the divided region DR j based on the image data in the divided region DR j of the image Fa and the image data in the divided region DR j of the image Fb. That is, the motion amount calculation unit 42 calculates the motion amount between the images Fa and Fb for each divided region.
• Alternatively, it is possible to obtain one motion amount for the entire images Fa and Fb and to use that single motion amount in common as the motion amount for all the divided regions DR 1 to DR 9 between the images Fa and Fb.
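The description does not pin the motion amount calculation to a particular algorithm; exhaustive block matching with a sum-of-absolute-differences (SAD) criterion is one conventional way such a per-region motion amount could be realized. The following is a hypothetical sketch only, with illustrative names, standing in for the motion amount calculation unit 42:

```python
import numpy as np

def motion_amount(region_a, region_b, search=2):
    """Estimate the (dy, dx) displacement of region_b relative to
    region_a by exhaustive SAD search over a small window, i.e.
    region_b[y + dy, x + dx] is matched against region_a[y, x].
    SAD is normalized by the overlap area so different shifts
    are comparable."""
    h, w = region_a.shape
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ay0, ay1 = max(0, -dy), min(h, h - dy)
            ax0, ax1 = max(0, -dx), min(w, w - dx)
            a = region_a[ay0:ay1, ax0:ax1]
            b = region_b[ay0 + dy:ay1 + dy, ax0 + dx:ax1 + dx]
            sad = np.abs(a - b).sum() / a.size  # normalized SAD
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec
```

Running this once per divided region gives per-region motion amounts; running it once on the whole image gives the single common motion amount mentioned above.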
• Based on the image data in the divided regions DR 2 to DR 9 of the low-resolution image Fa and the image data in the divided region DR 1 of the high-resolution image, the display processing unit 20 displays the display image 402 of FIG. 23 on the display unit 27.
  • the display image 402 is an image obtained by synthesizing an image in the divided regions DR 2 to DR 9 of the low resolution image Fa and an image (intermediately generated image) in the divided region DR 1 of the high resolution image.
• When the image data of the divided region DR 2 of the high-resolution image is generated, the image data is sent from the super-resolution processing unit 40a to the display processing unit 20, and the display processing unit 20 displays the display image 403 of FIG. 23 on the display unit 27 based on the image data in the divided regions DR 3 to DR 9 of the low-resolution image Fa and the image data in the divided regions DR 1 and DR 2 of the high-resolution image.
• The display image 403 is an image obtained by synthesizing the images in the divided regions DR 3 to DR 9 of the low-resolution image Fa and the images (two intermediately generated images) in the divided regions DR 1 and DR 2 of the high-resolution image.
• A similar display image update is performed at times t 3 to t 8 , and when the image data in the divided region DR 9 of the high-resolution image is generated at time t 9 , the super-resolution processing for the entire image regions of the images Fa and Fb ends.
• The image data in the divided region DR 9 of the high-resolution image is sent to the display processing unit 20, and the display processing unit 20 displays the display image 404 of FIG. 23 on the display unit 27 based on the image data in the divided regions DR 1 to DR 9 of the high-resolution image sent during times t 1 to t 9 .
• The display image 404 is an image obtained by synthesizing the images in the divided regions DR 1 to DR 9 of the high-resolution image, that is, the entire image of the high-resolution image to be finally generated. All the image data of the high-resolution image generated by the super-resolution processing unit 40a is recorded in the external memory 18 via the compression processing unit 16.
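The progressive display images 401–404 can be modeled as a simple compositing step: regions that have already been processed come from the high-resolution result, while the remaining regions fall back to the (upscaled) low-resolution image Fa. The sketch below is hypothetical (illustrative names, equal 3x3 tiling assumed, and Fa assumed already upscaled to the display size):

```python
import numpy as np

def divide(h, w, rows=3, cols=3):
    """Equal 3x3 tiling into bounds (top, bottom, left, right)."""
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return [(ys[r], ys[r + 1], xs[c], xs[c + 1])
            for r in range(rows) for c in range(cols)]

def compose_display(fa_upscaled, hi_res_tiles):
    """Build the display image when some of the nine regions are done.
    hi_res_tiles maps region index -> finished high-resolution tile;
    every other region shows the upscaled low-resolution image Fa."""
    regions = divide(*fa_upscaled.shape)
    disp = fa_upscaled.copy()
    for j, tile in hi_res_tiles.items():
        t, b, l, r = regions[j]
        disp[t:b, l:r] = tile
    return disp
```

With an empty `hi_res_tiles` this yields display image 401; with all nine tiles it yields display image 404.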
• In this example, the divided region subjected to the super-resolution processing is scanned in raster order from the upper left to the lower right, but this scanning order can be changed arbitrarily.
• The predetermined arithmetic processing for generating the output image from the input image is not limited to super-resolution processing, and the arithmetic processing may or may not include iterative execution of unit processing such as the super-resolution unit processing of the first embodiment. Accordingly, the number of input images used to obtain an output image is not limited to a plurality and may be one.
• The predetermined arithmetic processing may be arbitrary image processing such as spatial filtering, frequency filtering, or geometric transformation.
  • the display method according to the present invention can be applied to any apparatus or method that generates an output image from an input image by a calculation process that takes a relatively long time (for example, several seconds to several tens of seconds). Of course, the input image and the output image are different from each other.
• The input image need not be an image obtained by photographing with the imaging device 1, but here it is assumed that the input image is a single frame image obtained by photographing with the imaging device 1.
  • a specific example of the display method according to the third embodiment will be described. A specific example of this display method is similar to that according to the second embodiment.
• The video signal processing unit 13 generates one output image I O by performing the predetermined arithmetic processing on one input image (frame image) I IN .
  • the video signal processing unit 13 divides the entire image area of the input image I IN and the output image IO into a plurality of divided areas.
• The divided image areas DR 1 to DR 9 are formed by dividing the entire image areas of the input image I IN and the output image I O into three equal parts in the vertical and horizontal directions, respectively.
• The entire image area of the input image I IN is a combination of the divided regions DR 1 to DR 9 of the input image I IN , and the entire image area of the output image I O is a combination of the divided regions DR 1 to DR 9 of the output image I O .
  • the video signal processing unit 13 executes predetermined arithmetic processing for each divided region.
• The operation of executing the predetermined arithmetic processing for the divided region DR j and then, after its completion, for the divided region DR j+1 is repeated until the predetermined arithmetic processing is completed for all the divided regions (where j is a natural number).
• Times t 1 , t 2 , ..., t 9 arrive in this order.
  • the display processing unit 20 displays the entire image of the input image I IN as the display image.
• Between times t j-1 and t j , the video signal processing unit 13 performs the predetermined arithmetic processing on the image (image data) in the divided region DR j of the input image I IN , thereby generating the image (image data) in the divided region DR j of the output image I O (where j is a natural number).
  • Such unit processing between time t j ⁇ 1 and t j is sequentially executed nine times.
• When the image data of the divided region DR 1 of the output image I O is generated, the image data is sent to the display processing unit 20. Based on the image data in the divided regions DR 2 to DR 9 of the input image I IN and the image data in the divided region DR 1 of the output image I O , the display processing unit 20 displays on the display unit 27 an image obtained by synthesizing the images in the divided regions DR 2 to DR 9 of the input image I IN and the image (intermediately generated image) in the divided region DR 1 of the output image I O .
• Likewise, when the image data of the divided region DR 2 of the output image I O is generated, the image data is sent to the display processing unit 20. Based on the image data in the divided regions DR 3 to DR 9 of the input image I IN and the image data in the divided regions DR 1 and DR 2 of the output image I O , the display processing unit 20 displays on the display unit 27 an image obtained by synthesizing the images in the divided regions DR 3 to DR 9 of the input image I IN and the images (two intermediately generated images) in the divided regions DR 1 and DR 2 of the output image I O .
• A similar display image update is performed at times t 3 to t 8 , and when the image data in the divided region DR 9 of the output image I O is generated at time t 9 , the process for generating the output image I O from the input image I IN ends.
• The image data in the divided region DR 9 of the output image I O is sent to the display processing unit 20, and based on the image data in the divided regions DR 1 to DR 9 of the output image I O sent during times t 1 to t 9 , the display processing unit 20 displays on the display unit 27 an image obtained by synthesizing the images in the divided regions DR 1 to DR 9 of the output image I O , that is, the entire image of the output image I O to be finally generated. All the generated image data of the output image I O is recorded in the external memory 18 via the compression processing unit 16.
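The third embodiment's tile-wise scheme, generalized to an arbitrary per-region operation with a progressive display update, could be sketched as follows. This is a hypothetical sketch: `op` stands in for the predetermined arithmetic processing, `on_update` for the display processing unit's update, and the output is assumed here to have the same size as the input (which need not hold, e.g. for super-resolution).

```python
import numpy as np

def tiled_process(i_in, op, on_update, rows=3, cols=3):
    """Apply an arbitrary per-region operation 'op' to the input image
    I_IN region by region, calling 'on_update' with the partially
    completed output after each divided region DR j, so the display
    reflects each intermediate result."""
    h, w = i_in.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    out = i_in.astype(float).copy()  # unprocessed regions still show I_IN
    for r in range(rows):
        for c in range(cols):
            t, b, l, rgt = ys[r], ys[r + 1], xs[c], xs[c + 1]
            out[t:b, l:rgt] = op(i_in[t:b, l:rgt])
            on_update(out)           # progressive display update
    return out
```

Because `op` is a parameter, the same driver covers spatial filtering, frequency filtering, geometric transformation, or any other processing the embodiment allows.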
  • the number of low-resolution images used for generating a high-resolution image is two, but it may be other than two.
• In the above description, the amount of motion between actual low-resolution images is derived by computation based on image data, but the amount of motion may instead be derived based on the detection result of a sensor (not shown) that detects the motion of the imaging device 1 in real space.
• The sensor that detects the motion of the imaging device 1 is, for example, an angular velocity sensor that detects the angular velocity of the imaging device 1, an angular acceleration sensor that detects the angular acceleration of the imaging device 1, an acceleration sensor that detects the acceleration of the imaging device 1, or a combination thereof.
  • the amount of motion between actual low-resolution images may be derived based on both the detection result of such a sensor and the image data.
  • the imaging apparatus 1 in FIG. 1 can be realized by hardware or a combination of hardware and software.
  • all or part of the processing executed in the video signal processing unit 13 and the display processing unit 20 can be realized using software. Of course, it is also possible to form them only by hardware.
  • a block diagram of a part realized by software represents a functional block diagram of the part.
• A first image display device is formed by blocks including the super-resolution processing unit 40, which generates a high-resolution image from a plurality of low-resolution images by super-resolution processing, and a display control unit that displays a display image based on the high-resolution image generated by the super-resolution processing unit 40 on the display unit 27. It can be considered that the display unit 27 is further included in the first image display device.
• The display control unit is mainly formed by the iterative control unit 50 and the display processing unit 20, and it can be considered that all or part of the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53 are included in the display control unit.
• A second image display device is formed by blocks including the super-resolution processing unit 40a (FIG. 20), which generates a high-resolution image from a plurality of low-resolution images by super-resolution processing, and the display processing unit 20 (FIG. 13), which displays a display image based on the high-resolution image generated by the super-resolution processing unit 40a on the display unit 27. It can be considered that the display unit 27 is further included in the second image display device.
• A third image display device is formed by blocks including the video signal processing unit 13 and the display processing unit 20. It can be considered that the display unit 27 is further included in the third image display device.
• The functions of the first, second, or third image display device described above can also be realized by an electronic device (for example, an image reproduction device having an image processing function; not shown) other than the imaging device 1.
• That is, an image display device equivalent to the first, second, or third image display device may be provided in the electronic device, and after one or more frame images are acquired by the imaging device 1, the image data of the one or more frame images may be supplied to the electronic device wirelessly, by wire, or via a recording medium.


Abstract

A super-resolution operation unit generates a high-resolution image from multiple low-resolution images by reconstruction-based super-resolution processing using repeated calculation. The super-resolution processing comprises super-resolution unit processing that is repeatedly executed. The super-resolution operation unit sequentially updates the high-resolution image estimated from the multiple low-resolution images by repeatedly executing the super-resolution unit processing, and outputs the high-resolution image obtained after a predetermined number of updates as the final high-resolution image. The execution of the super-resolution processing is started while one low-resolution image (301) is being displayed immediately after the multiple low-resolution images are captured. Thereafter, while the super-resolution unit processing is being repeatedly executed, an intermediate high-resolution image (303, 304) obtained at that point in time is displayed, and after the completion of the super-resolution processing, the final super-resolution image (302) is displayed.

Description

Image display device and imaging device
The present invention relates to an image display device and an imaging device capable of displaying an image.
High-resolution processing (super-resolution processing) that generates one high-resolution image from a plurality of low-resolution images has been proposed. Among super-resolution processes, reconstruction-type super-resolution processing using repeated (iterative) calculation is known as a representative process. The reconstruction-type super-resolution processing method using iterative computation is currently the most effective of the super-resolution processing methods, but because it is an optimization method that requires many iterations, the processing takes a relatively long time.
On the other hand, the user wishes to confirm a captured image promptly on the display screen. Therefore, even when a plurality of frame images (low-resolution images) for generating a high-resolution image are captured in order to obtain a high-definition still image, an image based on the captured images should be displayed promptly.
To achieve this, in a conventional method, after a plurality of frame images are captured, one of them is immediately displayed on the display unit while super-resolution processing using the plurality of frame images is executed; after the super-resolution processing is completed, the obtained high-resolution image is stored in a recording medium (for example, see Patent Document 1 below).
JP 2002-112103 A
However, in this conventional method, as shown in FIG. 24, there is a large difference in image quality between the display image (low-resolution image) that the user confirms immediately after shooting and the saved image (high-resolution image) that is stored afterwards, so the user often feels a sense of incongruity when viewing the stored image later (in FIG. 24, the effect of the super-resolution processing is exaggerated). Also, with this conventional method, the user cannot confirm the result of the processing at all until the super-resolution processing, which requires a relatively long time, is completed. This runs counter to the user's desire to confirm the resolution-enhancing effect of the super-resolution processing as early as possible. If the result of the super-resolution processing could be presented to the user at as early a stage as possible, these problems would be eliminated or reduced. Although the conventional problem has been described here focusing on the case where the executed image processing is super-resolution processing, the same problem exists when image processing other than super-resolution processing is executed.
Therefore, an object of the present invention is to provide an image display device and an imaging device that can present the result of image processing to the user at as early a stage as possible.
An image display device according to the present invention includes an arithmetic processing unit that generates an output image from an input image by predetermined arithmetic processing, and a display control unit that displays a display image based on the image generated by the arithmetic processing unit on a display unit, wherein, during execution of the arithmetic processing, the display control unit causes the display unit to display, in a stepwise manner, intermediate results of the arithmetic processing generated in the course of its execution.
This makes it possible to present the result of the image processing (arithmetic processing) to the user at as early a stage as possible.
Specifically, for example, the arithmetic processing includes unit processing that is repeatedly executed; the arithmetic processing unit updates an intermediately generated image based on the input image by repeatedly executing the unit processing on it, finally generating the output image; and while the unit processing is being repeatedly executed, the display control unit generates a display image from the intermediately generated image in the iteration process of the unit processing and displays it on the display unit.
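The claimed scheme of repeating a unit process on an intermediately generated image while displaying it mid-iteration can be sketched as follows. This is an illustrative Python sketch, not the claimed implementation: `unit_process` stands in for one iteration of the reconstruction-type super-resolution calculation, `display` for the display control unit, and the iteration-count-based update rule is just one of the variants described (the elapsed-time and image-quality-improvement variants would differ only in the update condition).

```python
def run_with_progressive_display(initial, unit_process, display,
                                 iterations=20, display_every=5):
    """Repeatedly apply 'unit_process' to the intermediately generated
    image, showing the current intermediate result every
    'display_every' iterations (update-by-iteration-count variant),
    then show the final output image."""
    intermediate = initial
    shown_at = []
    for k in range(1, iterations + 1):
        intermediate = unit_process(intermediate)
        if k % display_every == 0:
            display(intermediate)    # stepwise intermediate display
            shown_at.append(k)
    display(intermediate)            # final output image
    return intermediate, shown_at
```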
For example, while the unit processing is being repeatedly executed, the display control unit updates the display content of the display unit in a stepwise manner according to the elapsed time from the start of execution of the arithmetic processing or the number of times the unit processing has been repeatedly executed, and reflects the latest intermediately generated image at the time of the update in the display content.
Alternatively, for example, the arithmetic processing unit repeatedly executes the unit processing in order to improve the image quality of the intermediately generated image and sequentially updates the intermediately generated image; the image display device further includes an estimation unit that estimates the image-quality improvement amount of the intermediately generated image resulting from the repeated execution of the unit processing; and while the unit processing is being repeatedly executed, the display control unit updates the display content of the display unit in a stepwise manner according to the estimated image-quality improvement amount and reflects the latest intermediately generated image at the time of the update in the display content.
For example, the arithmetic processing consists of first to n-th unit processes; the i-th unit process generates an i-th intermediately generated image, based on a part of the input image, as a part of the output image; the entire output image is formed by synthesizing the first to n-th intermediately generated images generated by the first to n-th unit processes; and during execution of the arithmetic processing, the display control unit generates a display image using the first to m-th intermediately generated images obtained at that point and displays it on the display unit, where n and m are natural numbers satisfying n > m, and i is an integer satisfying 1 ≤ i ≤ n.
An imaging apparatus according to the present invention includes an imaging unit that acquires an image by shooting, and the above image display device. The image display device receives the image acquired by the imaging unit as the input image.
According to the present invention, it is possible to provide an image display device and an imaging device capable of presenting the result of image processing to the user at as early a stage as possible.
The significance and effects of the present invention will become clearer from the following description of the embodiments. However, the following embodiments are merely embodiments of the present invention, and the meanings of the terms of the present invention and of each constituent element are not limited to those described in the following embodiments.
FIG. 1 is an overall block diagram of an imaging apparatus according to an embodiment of the present invention.
FIG. 2 is a conceptual diagram of reconstruction-type super-resolution processing based on the MAP method using iterative calculation.
FIG. 3 is a flowchart showing the flow of the super-resolution processing corresponding to FIG. 2.
FIG. 4 is a diagram showing examples of display images around the time of execution of super-resolution processing.
FIG. 5 is a diagram showing an analog image of a subject to be photographed by the imaging apparatus of FIG. 1.
FIG. 6 is a diagram showing examples of display images before, during, and after execution of super-resolution processing according to an embodiment of the present invention.
FIG. 7 is an internal block diagram of a video signal processing unit according to the first embodiment of the present invention.
FIG. 8 is an internal block diagram of the super-resolution calculation unit of FIG. 7.
FIG. 9 is a diagram showing how the period around the execution of super-resolution processing is classified into three periods according to the first embodiment.
FIG. 10 is a diagram showing the relationship between the elapsed time from the start of execution of super-resolution processing and the execution timing of display update processing according to the first embodiment.
FIG. 11 is a diagram showing the relationship between the number of executions of super-resolution unit processing and the execution timing of display update processing according to the first embodiment.
FIG. 12 is a diagram showing the relationship between the number of executions of super-resolution unit processing and an improvement evaluation value representing the image-quality improvement amount achieved by repeated execution of super-resolution unit processing according to the first embodiment.
FIG. 13 is an internal block diagram of a display processing unit according to the first embodiment.
FIGS. 14(a) and 14(b) are diagrams showing, respectively, the entirety of a high-resolution image and a display image according to the first embodiment.
FIG. 15 is a diagram showing how an additional item is superimposed on a display image according to the first embodiment.
FIG. 16 is a diagram showing how another additional item is superimposed on a display image according to the first embodiment.
FIG. 17 is a diagram showing how still another additional item is superimposed on a display image according to the first embodiment.
FIGS. 18(a) and 18(b) are diagrams showing, respectively, how still another additional item is superimposed on a display image and an icon representing that additional item, according to the first embodiment.
FIGS. 19(a) and 19(b) are diagrams showing examples of the pixel positional relationship between two low-resolution images after alignment according to the first embodiment.
FIG. 20 is an internal block diagram of a super-resolution processing unit according to the second embodiment of the present invention.
FIG. 21 is a diagram showing how an entire image is divided into a plurality of regions according to the second embodiment.
FIG. 22 is a diagram showing the flow of super-resolution processing according to the second embodiment.
FIG. 23 is a diagram showing the transition of display images according to the second embodiment.
FIG. 24 is a diagram showing, in relation to the prior art, a display image (low-resolution image) that the user confirms immediately after shooting and a saved image (high-resolution image) stored afterwards.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings, the same parts are denoted by the same reference numerals, and redundant descriptions of the same parts are omitted in principle. The first to third examples will be described later; first, matters common to the examples or referred to in the examples will be described.
FIG. 1 is an overall block diagram of an imaging apparatus 1 according to an embodiment of the present invention. The imaging apparatus 1 is, for example, a digital video camera. The imaging apparatus 1 can capture moving images and still images, and can also capture a still image during moving-image capture.
[Description of basic configuration]
The imaging apparatus 1 includes an imaging unit 11, an AFE (Analog Front End) 12, a video signal processing unit 13, a microphone 14, an audio signal processing unit 15, a compression processing unit 16, an internal memory 17 such as a DRAM (Dynamic Random Access Memory), an external memory 18 such as an SD (Secure Digital) card or a magnetic disk, an expansion processing unit 19, a display processing unit 20, an audio output circuit 21, a TG (timing generator) 22, a CPU (Central Processing Unit) 23, a bus 24, a bus 25, an operation unit 26, a display unit 27, and a speaker 28. The operation unit 26 includes a recording button 26a, a shutter button 26b, operation keys 26c, and the like. The units in the imaging apparatus 1 exchange signals (data) with one another via the bus 24 or 25.
The TG 22 generates a timing control signal for controlling the timing of each operation in the entire imaging apparatus 1 and supplies the generated timing control signal to each unit in the imaging apparatus 1. The timing control signal includes a vertical synchronization signal Vsync and a horizontal synchronization signal Hsync. The CPU 23 comprehensively controls the operation of each unit in the imaging apparatus 1. The operation unit 26 accepts operations by the user, and the operation content given to the operation unit 26 is transmitted to the CPU 23. Each unit in the imaging apparatus 1 temporarily records various data (digital signals) in the internal memory 17 during signal processing as necessary.
The imaging unit 11 includes an image sensor 33 as well as an optical system, a diaphragm, and a driver (none of which are shown). Incident light from the subject enters the image sensor 33 via the optical system and the diaphragm. The lenses constituting the optical system form an optical image of the subject on the image sensor 33. The TG 22 generates a drive pulse for driving the image sensor 33 in synchronization with the timing control signal, and applies the drive pulse to the image sensor 33.
The image sensor 33 is a solid-state image sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) image sensor. The image sensor 33 photoelectrically converts the optical image incident through the optical system and the diaphragm, and outputs the electrical signal obtained by the photoelectric conversion to the AFE 12. More specifically, the image sensor 33 includes a plurality of light-receiving pixels (not shown in FIG. 1) arranged two-dimensionally in a matrix, and in each shot, each light-receiving pixel stores a signal charge whose amount corresponds to the exposure time. Electrical signals from the light-receiving pixels, each having a magnitude proportional to the amount of stored signal charge, are sequentially output to the AFE 12 in the subsequent stage in accordance with the drive pulses from the TG 22.
The AFE 12 amplifies the analog signal output from the image sensor 33 (each light-receiving pixel), converts the amplified analog signal into a digital signal, and outputs the digital signal to the video signal processing unit 13. The amplification factor of the AFE 12 is controlled by the CPU 23. The video signal processing unit 13 performs various kinds of image processing on the image represented by the output signal of the AFE 12, and generates a video signal for the processed image. The video signal is generally composed of a luminance signal Y representing the luminance of the image and color difference signals U and V representing the color of the image.
The microphone 14 converts ambient sound around the imaging apparatus 1 into an analog audio signal, and the audio signal processing unit 15 converts this analog audio signal into a digital audio signal.
The compression processing unit 16 compresses the video signal from the video signal processing unit 13 using a predetermined compression method. When a moving image or still image is shot and recorded, the compressed video signal is recorded in the external memory 18. The compression processing unit 16 also compresses the audio signal from the audio signal processing unit 15 using a predetermined compression method. When a moving image is shot and recorded, the video signal from the video signal processing unit 13 and the audio signal from the audio signal processing unit 15 are compressed by the compression processing unit 16 while being temporally associated with each other, and the compressed signals are recorded in the external memory 18.
The recording button 26a is a push-button switch for instructing the start/end of moving-image shooting and recording, and the shutter button 26b is a push-button switch for instructing still-image shooting and recording.
The operation modes of the imaging apparatus 1 include a shooting mode, in which moving images and still images can be shot, and a playback mode, in which moving images and still images stored in the external memory 18 are reproduced and displayed on the display unit 27. Transitions between the modes are performed according to operations on the operation keys 26c.
In the shooting mode, shooting is performed sequentially at a predetermined frame period, and a sequence of shot images is acquired from the image sensor 33. An image sequence, of which a shot image sequence is representative, refers to a collection of images arranged in time series. Data representing an image is called image data; image data can also be regarded as a kind of video signal. One image is represented by the image data of one frame period, and the single image represented by the image data of one frame period is also called a frame image.
When the user presses the recording button 26a in the shooting mode, under the control of the CPU 23, the video signal obtained after the press and the corresponding audio signal are sequentially recorded in the external memory 18 via the compression processing unit 16. When the user presses the recording button 26a again after the start of moving-image shooting, the recording of the video signal and the audio signal to the external memory 18 ends, and the shooting of one moving image is complete. When the user presses the shutter button 26b in the shooting mode, a still image is shot and recorded.
In the playback mode, when the user performs a predetermined operation on the operation keys 26c, the compressed video signal representing a moving image or still image recorded in the external memory 18 is expanded by the expansion processing unit 19 and sent to the display processing unit 20. In the shooting mode, the video signal processing unit 13 normally generates the video signal continuously, regardless of how the recording button 26a and the shutter button 26b are operated, and that video signal is sent to the display processing unit 20.
The display processing unit 20 causes the display unit 27 to display an image corresponding to the given video signal. The display unit 27 is a display device such as a liquid crystal display. When a moving image is reproduced in the playback mode, the compressed audio signal corresponding to the moving image recorded in the external memory 18 is also sent to the expansion processing unit 19. The expansion processing unit 19 expands the received audio signal and sends it to the audio output circuit 21. The audio output circuit 21 converts the given digital audio signal into an audio signal in a format that the speaker 28 can output (for example, an analog audio signal) and outputs it to the speaker 28. The speaker 28 outputs the audio signal from the audio output circuit 21 to the outside as sound.
The video signal processing unit 13 is configured to be able to perform super-resolution processing in cooperation with the CPU 23. Through the super-resolution processing, one high-resolution image is generated from a plurality of low-resolution images. The video signal of this high-resolution image can be recorded in the external memory 18 via the compression processing unit 16. The resolution of the high-resolution image is higher than that of the low-resolution images, and the numbers of pixels in the horizontal and vertical directions of the high-resolution image are larger than those of the low-resolution images. For example, when an instruction to shoot a still image is given, a plurality of frame images serving as the plurality of low-resolution images are acquired, and a high-resolution image is generated by performing super-resolution processing on them. Alternatively, for example, super-resolution processing is performed on a plurality of frame images, serving as the plurality of low-resolution images, obtained during moving-image shooting.
[Basic concept of super-resolution processing]
The basic concept of super-resolution processing will now be briefly described. One class of super-resolution processing is the so-called reconstruction type. Separately, there are methods that realize super-resolution processing by iterative calculation (an iterative calculation algorithm), and such iterative calculation can also be applied to reconstruction-type super-resolution processing. This embodiment mainly takes as an example reconstruction-type super-resolution processing based on the MAP (Maximum A Posteriori) method using iterative calculation, which is one kind of super-resolution processing that can be realized by iterative calculation.
FIG. 2 is a conceptual diagram of reconstruction-type super-resolution processing based on the MAP method using iterative calculation. In this super-resolution processing, one high-resolution image is estimated from a plurality of low-resolution images obtained by actual shooting, and the estimated high-resolution image is then degraded to estimate the original plurality of low-resolution images. A low-resolution image obtained by actual shooting is specifically called a "real low-resolution image," and an estimated low-resolution image is specifically called an "estimated low-resolution image." Thereafter, the high-resolution image and the low-resolution images are iteratively estimated so that the error between the real low-resolution images and the estimated low-resolution images is minimized, and the high-resolution image finally obtained is output.
FIG. 3 is a flowchart showing the flow of the super-resolution processing corresponding to FIG. 2. First, in step S11, an initial high-resolution image is generated from the real low-resolution images. In the subsequent step S12, the original real low-resolution images from which the current high-resolution image would be constructed are estimated; the estimated images are the estimated low-resolution images mentioned above. In the subsequent step S13, an update amount for the current high-resolution image is derived based on the differences (difference images) between the real low-resolution images and the estimated low-resolution images. This update amount is derived so that the repeated execution of steps S12 to S14 minimizes the error between the real low-resolution images and the estimated low-resolution images. Then, in the subsequent step S14, the current high-resolution image is updated using that update amount, generating a new high-resolution image. The process then returns to step S12, the newly generated high-resolution image is treated as the current high-resolution image, and steps S12 to S14 are executed repeatedly. Basically, as the number of repetitions of steps S12 to S14 increases, the resolution of the resulting high-resolution image substantially improves (the resolving power improves), and a high-resolution image closer to the ideal is obtained.
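As an illustrative sketch (not part of the original disclosure), the loop of steps S11 to S14 can be expressed in Python on a 1-D toy model. Here `degrade` is an assumed stand-in for the camera degradation (shift, blur, and downsampling are reduced to plain 2:1 pixel-pair averaging), and `degrade_T` back-projects the low-resolution error onto the high-resolution grid; the actual apparatus derives these operators from measured motion amounts.

```python
import numpy as np

def degrade(x):
    """Toy degradation W: 2:1 downsampling by pixel-pair averaging.
    (A real camera model would also include motion shift and blur.)"""
    return 0.5 * (x[0::2] + x[1::2])

def degrade_T(e):
    """Transpose of the toy degradation; spreads the low-resolution
    error back onto the high-resolution grid (back-projection)."""
    return 0.5 * np.repeat(e, 2)

def super_resolve(observations, n_iters=200, step=0.5):
    """Iterative reconstruction corresponding to steps S11-S14:
    S11: initial high-resolution estimate by simple upsampling,
    S12: estimate low-resolution images from the current estimate,
    S13: derive an update amount from the estimation error,
    S14: update the high-resolution image; repeat S12-S14."""
    # S11: initial estimate from the first observation
    x = np.repeat(observations[0], 2)
    for _ in range(n_iters):
        update = np.zeros_like(x)
        for y in observations:
            y_est = degrade(x)              # S12: estimated low-res image
            update += degrade_T(y - y_est)  # S13: back-projected error
        x = x + step * update               # S14: update the estimate
    return x
```

After convergence, degrading the reconstructed high-resolution signal reproduces the observed low-resolution data, which is exactly the consistency condition the MAP iteration enforces.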
Super-resolution processing based on the above flow of operations is performed in the imaging apparatus 1. Besides super-resolution processing based on the MAP method, super-resolution processing based on the ML (Maximum Likelihood) method, the POCS (Projection Onto Convex Sets) method, or the IBP (Iterative Back Projection) method can also be used.
Reconstruction-type super-resolution processing using iterative calculation is effective, but because it is an optimization method requiring many iterations, the processing takes a comparatively long time. On the other hand, the user wants to check the shot image on the display unit 27 promptly. Therefore, even when a plurality of frame images (real low-resolution images) have been shot to generate a high-resolution image, for example to obtain a high-definition still image, an image based on the shot images should be displayed promptly. One conceivable way to achieve this is as follows: after the plurality of frame images are shot, one of them is immediately displayed on the display unit 27 while super-resolution processing using the plurality of frame images is executed, and after the super-resolution processing is complete, the resulting high-resolution image is displayed on the display unit 27 and stored in the external memory 18.
FIG. 4 is a conceptual diagram of this method, and FIG. 5 is an analog image of a subject to be shot by the imaging apparatus 1. In FIG. 4, an image 301 is a frame image before super-resolution processing, displayed immediately after shooting, and an image 302 is a high-resolution image based on the plurality of frame images, displayed after completion of the super-resolution processing. In FIG. 4, the effect of the super-resolution processing is exaggerated (the same applies to FIG. 6, described later).
With the method corresponding to FIG. 4, the user cannot check the result of the processing at all until the super-resolution processing, which takes a comparatively long time, is complete. This runs counter to the user's desire to confirm the resolution-enhancing effect of the super-resolution processing as soon as possible. Moreover, given that the image 302 is obtained only after a considerable time has elapsed, the user often checks only the image 301 on the display unit 27. Even in such a case, the high-resolution image is stored in the external memory 18 after completion of the super-resolution processing, and when a user who checked only the image 301 later views the stored image, the user may feel a sense of incongruity because the image 301 and the stored image differ considerably in image quality.
In consideration of these circumstances, while super-resolution processing is being executed, the imaging apparatus 1 displays on the display unit 27, in stages, the intermediate results of the super-resolution processing generated in the course of its execution. For example, as shown in FIG. 6, the apparatus starts executing the super-resolution processing while displaying the image 301 immediately after shooting the plurality of frame images for high-resolution image generation, and the intermediate high-resolution images, which are successively updated in the course of the repeated execution of the group of processes consisting of steps S12 to S14 in FIG. 3, are displayed on the display unit 27, with the display updated in stages.
With the method corresponding to FIG. 6, after a comparatively short time has elapsed since the image 301 was displayed, images 303 and 304, which may be called intermediate results of the super-resolution processing, are displayed and updated in sequence, and after completion of the super-resolution processing, the image 302, the final high-resolution image, is displayed. In this way, the user can check results of the super-resolution processing, albeit partial ones, with a short waiting time. The likelihood of the sense of incongruity described above is also reduced.
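The staged display can be pictured, purely as an assumed illustration, as a generator that hands out each intermediate high-resolution estimate as soon as it exists, so the caller can update the screen while iteration continues; `refine`, `show`, and `display_interval` are hypothetical names, not elements of the apparatus.

```python
def super_resolve_progressive(x0, refine, n_iters):
    """Yield every intermediate high-resolution estimate so a display
    routine can show results during iteration. `refine` is a
    placeholder for one S12-S14 update step."""
    x = x0
    for i in range(1, n_iters + 1):
        x = refine(x)
        yield i, x  # intermediate result, displayable immediately

def run_with_display(x0, refine, n_iters, show, display_interval=5):
    """Update the display every `display_interval` iterations and at the
    end; `show` corresponds to updating the display unit 27."""
    final = x0
    for i, x in super_resolve_progressive(x0, refine, n_iters):
        if i % display_interval == 0 or i == n_iters:
            show(x)
        final = x
    return final
```

The generator decouples the iteration schedule from the display schedule, which mirrors how the iteration control unit described later can route intermediate images to the display path.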
The display method described above, the content of the super-resolution processing, and/or technical matters related to them are described below in first to third embodiments. As long as no contradiction arises, matters described in one embodiment can also be applied to the other embodiments.
In the following description, unless otherwise stated, frame images obtained by shooting are treated as real low-resolution images. It is also assumed that a real low-resolution image Fa is obtained by shooting at a certain time and a real low-resolution image Fb is obtained by subsequent shooting. The shooting interval between the images Fa and Fb corresponds, for example, to one frame period.
In this specification, to simplify the description, the use of a symbol sometimes abbreviates or omits the name corresponding to it. For example, in this specification, the real low-resolution image Fa may be expressed simply as "image Fa" or "Fa," but the former and the latter refer to the same thing.
<< First Embodiment >>
A first embodiment of the present invention will now be described. FIG. 7 is an internal block diagram of the video signal processing unit 13 according to the first embodiment. The video signal processing unit 13 in FIG. 7 includes a super-resolution processing unit 40, an iteration control unit 50, first and second signal control units 51 and 53, and a signal processing unit 52. The super-resolution processing unit 40 includes a memory unit 41 having frame memories 41A and 41B, a motion amount calculation unit 42, a motion amount storage unit 43, and a super-resolution calculation unit 44.
The frame memory 41A temporarily stores one frame's worth of image data of a real low-resolution image represented by the digital signal from the AFE 12. The frame memory 41B temporarily stores the one frame's worth of real low-resolution image data that was stored in the frame memory 41A. The contents stored in the frame memory 41A are transferred to the frame memory 41B each time one frame elapses. As a result, at the end of the second frame, the image data of the real low-resolution images Fa and Fb are recorded in the frame memories 41B and 41A, respectively.
The motion amount calculation unit 42 is supplied with the image data of the real low-resolution image of the current frame from the AFE 12 and the image data of the real low-resolution image of the previous frame from the frame memory 41A. By comparing the two supplied sets of image data, the motion amount calculation unit 42 calculates a motion amount representing the positional displacement between the two given real low-resolution images. This motion amount is a two-dimensional quantity including a horizontal component and a vertical component, and is expressed as a so-called motion vector. The calculated motion amount is stored in the motion amount storage unit 43.
The motion amount calculation unit 42 calculates the motion amount between the two real low-resolution images using a representative point matching method, a block matching method, a gradient method, or the like. The motion amount calculated here has so-called sub-pixel resolution, finer than the pixel interval of the real low-resolution images. That is, the motion amount is calculated with, as its minimum unit, a distance shorter than the interval pp_L between two pixels adjacent in the horizontal or vertical direction in a real low-resolution image. A known method can be used to calculate a positional displacement amount with sub-pixel resolution; for example, the method described in JP-A-H11-345315 or the method described in Okutomi, "Digital Image Processing," 2nd edition, CG-ARTS Association, March 1, 2007 (see p. 205) may be used.
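As an assumed illustration of sub-pixel matching (the embodiment itself only names representative-point matching, block matching, and gradient methods, without fixing one), a 1-D SSD search over integer displacements followed by parabola fitting on the three error values around the minimum yields a displacement finer than one pixel:

```python
import numpy as np

def subpixel_shift_1d(ref, cur, max_shift=3):
    """Estimate the displacement of `cur` relative to `ref` with
    sub-pixel resolution: integer-pel SSD search, then parabola
    fitting around the minimum. 1-D sketch only; not the unit's
    actual matching algorithm."""
    n = len(ref)
    shifts = list(range(-max_shift, max_shift + 1))
    ssd = []
    for s in shifts:
        # compare the overlapping region for candidate shift s
        if s >= 0:
            d = ref[s:] - cur[:n - s]
        else:
            d = ref[:n + s] - cur[-s:]
        ssd.append(np.mean(d * d))
    i = int(np.argmin(ssd))
    best = float(shifts[i])
    if 0 < i < len(ssd) - 1:  # refine with a parabola through 3 points
        e0, e1, e2 = ssd[i - 1], ssd[i], ssd[i + 1]
        denom = e0 - 2 * e1 + e2
        if denom > 0:
            best += 0.5 * (e0 - e2) / denom
    return best
```

Because the SSD of a smooth signal is approximately quadratic near its minimum, the parabola vertex lands between integer shifts, giving the sub-pixel resolution the text requires.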
Based on the two real low-resolution images Fa and Fb supplied from the frame memories 41A and 41B and the motion amount between the images Fa and Fb stored in the motion amount storage unit 43, the super-resolution calculation unit 44 generates a high-resolution image by super-resolution processing. Following steps S11 to S14 in FIG. 3, the super-resolution calculation unit 44 first generates, from the images Fa and Fb, an initial high-resolution image as the pre-update high-resolution image, and then updates the high-resolution image successively. The initial high-resolution image is denoted by Fx1, and the high-resolution image obtained by executing the process of step S14 once on the initial high-resolution image Fx1 is denoted by Fx2. That is, the high-resolution image Fx2 is obtained by updating the initial high-resolution image Fx1 just once. Thereafter, by successively updating the high-resolution image Fx2, high-resolution images Fx3, Fx4, and so on are obtained in sequence.
The first signal control unit 51 outputs the image data of the high-resolution image output from the super-resolution calculation unit 44 to the super-resolution calculation unit 44 and/or the signal processing unit 52 under the control of the iteration control unit 50. The signal processing unit 52 generates the video signal (luminance signal and color difference signals) of the high-resolution image from the image data of the high-resolution image supplied via the first signal control unit 51. The second signal control unit 53 outputs the video signal of the high-resolution image generated by the signal processing unit 52 to the display processing unit 20 and/or the compression processing unit 16 in FIG. 1 under the control of the iteration control unit 50. The iteration control unit 50 controls the first and second signal control units 51 and 53; before that control is described in detail, a configuration example of the super-resolution calculation unit 44 will be described.
[Configuration of Super-Resolution Calculation Unit 44]
FIG. 8 is an internal block diagram of the super-resolution calculation unit 44. The super-resolution calculation unit 44 in FIG. 8 includes the blocks referred to by reference numerals 61 to 65.
In the super-resolution calculation unit 44, one of the two images Fa and Fb is set as the base frame and the other as the reference frame. Assume now that the image Fa is set as the base frame. Also, let the number of pixels of a low-resolution image be u and the number of pixels of the high-resolution image be v, where v is an arbitrary value larger than u. For example, if the resolution of the high-resolution image is twice that of the low-resolution images in each of the vertical and horizontal directions, v is four times u. Of course, the resolution of the high-resolution image may be other than twice that of the low-resolution images. The matrix in which the pixel values of the real low-resolution image Fa, consisting of u pixels, are arranged is denoted by Ya, and the matrix in which the pixel values of the real low-resolution image Fb, consisting of u pixels, are arranged is denoted by Yb.
The initial high-resolution estimation unit 61 executes the process corresponding to step S11 in FIG. 3. The reference frame can be regarded as an image obtained by displacing the base frame by an amount corresponding to the motion amount between the reference frame and the base frame. The initial high-resolution estimation unit 61 therefore detects the positional displacement of the reference frame relative to the base frame, based on the motion amount between the base frame and the reference frame stored in the motion amount storage unit 43, and performs displacement correction to cancel that displacement. It then generates an initial high-resolution image by combining the base frame and the displacement-corrected reference frame. As the method of generating the initial high-resolution image, a method using interpolation processing, such as the one described in JP-A-2006-41603, can be used.
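A minimal 1-D sketch of this registration-and-combination step, under the assumptions of 2x upscaling and simple linear interpolation (the cited interpolation method itself is not reproduced here, and all names are illustrative):

```python
import numpy as np

def initial_high_res(base, ref, motion_px):
    """Toy 1-D version of step S11 (2x upscaling): the base frame's
    samples sit at integer low-resolution positions; the reference
    frame's samples sit at positions offset by the measured motion
    amount `motion_px` (in low-resolution pixels). Both are resampled
    by linear interpolation onto the high-resolution grid and averaged."""
    u = len(base)
    pos_base = np.arange(u, dtype=float)       # base sample positions
    pos_ref = np.arange(u) + motion_px         # displacement-corrected positions
    grid = np.arange(2 * u) / 2.0              # high-res grid: 0, 0.5, 1, ...
    est_b = np.interp(grid, pos_base, base)
    est_r = np.interp(grid, pos_ref, ref)
    return 0.5 * (est_b + est_r)               # combine the registered frames
```

When the motion amount is near half a low-resolution pixel, the reference samples fall midway between the base samples, which is exactly the situation in which combining two registered frames recovers genuine extra detail.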
The matrix in which the pixel values of the initial high-resolution image Fx1, consisting of v pixels, are arranged is denoted by X. Matrices in which the pixel values of high-resolution images other than the initial high-resolution image Fx1 (for example, Fx2) are arranged are also denoted by X; note, however, that the contents of the matrix X for a high-resolution image Fxi differ from those of the matrix X for a high-resolution image Fxj (where i ≠ j).
The selection unit 62 selects and outputs either the high-resolution image (initial high-resolution image) generated by the initial high-resolution estimation unit 61 or the high-resolution image temporarily stored in the frame memory 65. In its first selection operation, the selection unit 62 selects the initial high-resolution image estimated by the initial high-resolution estimation unit 61; in each of its second and subsequent selection operations, it selects the high-resolution image temporarily stored in the frame memory 65.
Based on the high-resolution image selected by the selection unit 62, the real low-resolution images Fa and Fb, and the motion amount between the images Fa and Fb stored in the motion amount storage unit 43, the high-resolution update amount calculation unit (hereinafter abbreviated to the update amount calculation unit) 63 calculates the positional displacements of the real low-resolution images Fa and Fb relative to the high-resolution image supplied from the selection unit 62. Then, in order to degrade the high-resolution image from the selection unit 62 and thereby estimate the original low-resolution images (that is, the real low-resolution images Fa and Fb), it obtains camera parameter matrices Wa and Wb whose parameters are the calculated positional displacements, the image blur caused by resolution reduction, and the amount of downsampling from the v-pixel high-resolution image to a u-pixel low-resolution image.
 Then, as in step S12 of FIG. 3, the update amount calculation unit 63 multiplies the matrix X of the high-resolution image selected by the selection unit 62 by each of the camera parameter matrices Wa and Wb individually, thereby generating two estimated low-resolution images corresponding to estimates of the actual low-resolution images Fa and Fb. These two estimated low-resolution images are expressed by the matrices Wa·X and Wb·X.
 The errors between the estimated low-resolution images and the actual low-resolution images are represented by |Wa·X − Ya| and |Wb·X − Yb|. Accordingly, an evaluation function I given by equation (1) below is defined for estimating these errors, and the update amount for the high-resolution image is obtained so as to minimize this evaluation function I. The third term on the right side of equation (1) is a constraint term based on the high-resolution image from the selection unit 62. The matrix C in this constraint term γ|C·X|² is a matrix based on a prior probability model: it is set based on the prior knowledge that "a high-resolution image contains few high-frequency components" and is formed by a high-pass filter such as a Laplacian filter. The coefficient γ is a parameter representing the weight of the constraint term in the evaluation function I.

  I = |Wa·X − Ya|² + |Wb·X − Yb|² + γ|C·X|²          …(1)
 Any method can be adopted for minimizing the evaluation function I; here, assume that the gradient method is used. In this case, the update amount calculation unit 63 obtains the gradient ∂I/∂X of the evaluation function I, which is expressed by equation (2) below. In equation (2), a matrix bearing the superscript T denotes the transpose of the original matrix; for example, Waᵀ denotes the transpose of the matrix Wa.

  ∂I/∂X = 2 × {Waᵀ·(Wa·X − Ya) + Wbᵀ·(Wb·X − Yb) + γCᵀ·C·X}          …(2)
 The gradient ∂I/∂X based on the matrix X of the high-resolution image Fxi is calculated as the update amount for the high-resolution image Fxi (where i is a natural number). This calculation corresponds to the processing of step S13 in FIG. 3.
 As in step S14 of FIG. 3, the subtraction unit 64 subtracts the update amount ∂I/∂X for the high-resolution image Fxi from the matrix X of the high-resolution image Fxi selected by the selection unit 62, thereby calculating the matrix X′ of equation (3) below (where i is a natural number). The matrix X′ corresponds to a matrix in which the pixel values of the high-resolution image Fx(i+1) are arranged. Through the subtraction processing in the subtraction unit 64, the high-resolution image Fxi is updated to generate the updated high-resolution image Fx(i+1).

  X′ = X − ∂I/∂X          …(3)
 The image data of the high-resolution image generated by the update in the subtraction unit 64 is output to the first signal control unit 51 in FIG. 7. During the period in which the super-resolution processing is not yet complete (the simple repetition period and the stage display period described later and shown in FIG. 9), the image data of the high-resolution image output from the subtraction unit 64 is supplied to the frame memory 65 via the first signal control unit 51. The frame memory 65 temporarily stores the supplied image data and supplies it to the selection unit 62. As a result, the high-resolution image output from the subtraction unit 64 is updated again by the update amount calculation unit 63 and the subtraction unit 64. Hereinafter, the processing of updating the high-resolution image Fxi once to obtain the high-resolution image Fx(i+1) is called a super-resolution unit process (where i is a natural number).
 An upper limit can be set on the number of repetitions of the super-resolution unit process. When such an upper limit is set, the super-resolution processing is completed at the point at which the number of repetitions reaches the upper limit. Alternatively, regardless of the number of repetitions, the super-resolution processing may be completed at the point at which the update amount for the high-resolution image is judged to have become sufficiently small.
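 Purely as an illustrative sketch (not part of the disclosure), the super-resolution unit process of equations (1) to (3), together with the two completion conditions just described, might be written as follows in Python with NumPy; the function name, the plain update X′ = X − ∂I/∂X, and the dense-array representation of Wa, Wb, and C are assumptions made for illustration.

```python
import numpy as np

def super_resolution(X, Wa, Wb, Ya, Yb, C, gamma=0.1, max_iter=8, tol=1e-6):
    """Refine the high-resolution pixel vector X by gradient descent on the
    evaluation function of equation (1):
        I = |Wa.X - Ya|^2 + |Wb.X - Yb|^2 + gamma * |C.X|^2
    Wa, Wb are the camera parameter matrices, Ya, Yb the actual
    low-resolution images, and C a high-pass (e.g. Laplacian) matrix."""
    for _ in range(max_iter):               # upper limit on repetitions
        # Equation (2): gradient of I with respect to X.
        grad = 2 * (Wa.T @ (Wa @ X - Ya)
                    + Wb.T @ (Wb @ X - Yb)
                    + gamma * (C.T @ (C @ X)))
        X = X - grad                        # equation (3): one unit process
        if np.abs(grad).sum() < tol:        # update amount sufficiently small
            break
    return X
```

With 1×1 "images" the fixed point can be checked by hand, which makes the contraction behaviour of the update easy to verify.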
 The final high-resolution image obtained after the completion of the super-resolution processing is specifically called the "final high-resolution image", and an intermediate high-resolution image obtained before the completion of the super-resolution processing is specifically called an "intermediately generated high-resolution image". For example, when the final high-resolution image is the image Fx9, each of the images Fx1 to Fx8 is an intermediately generated high-resolution image.
[Control by the repetitive control unit 50]
 Next, the operation realized under the control of the repetitive control unit 50 is described in detail. Focusing on the control by the repetitive control unit 50, the period around the execution of the super-resolution processing based on the images Fa and Fb is classified, as shown in FIG. 9, into a simple repetition period, a stage display period, and a completion processing period. After the images Fa and Fb are supplied to the super-resolution calculation unit 44, the simple repetition period starts; after the simple repetition period ends, the stage display period starts; and after the stage display period ends, the completion processing period starts. The super-resolution processing is completed when, for example, the number of repetitions of the super-resolution unit process reaches the upper limit, and the point of completion is the starting point of the completion processing period.
 Before the start of the simple repetition period, the image data of the image Fa is supplied to the display processing unit 20 via the signal processing unit 52, and a display image based on the image Fa (corresponding to the image 301 in FIG. 6) is displayed on the display unit 27. A display image based on the image Fb may be displayed instead of the display image based on the image Fa. In the following, "display image" by itself refers to the image displayed on the display unit 27, and "display screen" by itself refers to the display screen of the display unit 27. Although the present embodiment assumes that the display unit 27 is a display unit provided in the imaging apparatus 1, the display unit 27 may instead be a display device external to the imaging apparatus 1 (such as a liquid crystal display or a plasma display).
 The repetitive control unit 50 controls the first and second signal control units 51 and 53 so that the following operations are executed in the simple repetition period, the stage display period, and the completion processing period.
 In the simple repetition period, the image data of the high-resolution image output from the super-resolution calculation unit 44 is supplied via the first signal control unit 51 only to the super-resolution calculation unit 44. Consequently, in the simple repetition period, the display image on the display unit 27 is not updated, while the high-resolution image is updated through repeated execution of the super-resolution unit process.
 In the stage display period, the image data of the high-resolution image (intermediately generated high-resolution image) output from the super-resolution calculation unit 44 is supplied to the super-resolution calculation unit 44 via the first signal control unit 51, and is also supplied to the display processing unit 20 via the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53. Consequently, in the stage display period, the display image on the display unit 27 can be updated with the contents of the latest high-resolution image. During this period, the high-resolution image also continues to be updated through repeated execution of the super-resolution unit process.
 In the completion processing period, the image data of the high-resolution image (final high-resolution image) output from the super-resolution calculation unit 44 is supplied via the first signal control unit 51 only to the signal processing unit 52. The video signal of the high-resolution image generated by the signal processing unit 52 based on that image data is output to the display processing unit 20 and the compression processing unit 16 via the second signal control unit 53. The completion processing period is the period that follows the completion of the super-resolution processing.
 A more specific description is given with a numerical example. As shown in FIG. 9, let the start time of the simple repetition period, the start time of the stage display period, and the start time of the completion processing period (that is, the point of completion of the super-resolution processing) be t_A, t_B, and t_C, respectively. Assume also that the super-resolution processing is completed when the high-resolution image Fx9 is obtained, that is, that the image Fx9 is the final high-resolution image, and that the latest high-resolution image at time t_B is Fx3.
 The super-resolution calculation unit 44 starts the super-resolution processing at time t_A (for example, starts generating the initial high-resolution image Fx1). In the simple repetition period, the super-resolution calculation unit 44 generates the images Fx1, Fx2, and Fx3 in sequence. The image data of the images Fx1, Fx2, and Fx3 is supplied from the first signal control unit 51 to the frame memory 65 but not to the signal processing unit 52. Therefore, the contents of the images Fx1, Fx2, and Fx3 are not reflected in the display image during the simple repetition period.
 Thereafter, in the stage display period, between time t_B and time t_C, the images Fx4 to Fx8 are generated in sequence by the super-resolution calculation unit 44 and are supplied in sequence to the display processing unit 20 via the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53. Consequently, when the image Fx4 is supplied to the display processing unit 20, the display image on the display unit 27 can be updated using the image Fx4, and when the image Fx5 is supplied to the display processing unit 20, the display image can be updated using the image Fx5. For example, when the image Fx4 is supplied to the display processing unit 20, the display processing unit 20 can display the whole or part of the image Fx4 on the display unit 27, and when the image Fx5 is supplied, it can display the whole or part of the image Fx5 on the display unit 27. The same applies to the images Fx6 to Fx8. The images 303 and 304 in FIG. 6 correspond to images displayed during the stage display period.
 When the image Fx9 is generated at time t_C, the stage display period transitions to the completion processing period. The image data of the image Fx9 is supplied to the signal processing unit 52 via the first signal control unit 51 and converted into a video signal, and that video signal (the video signal of the image Fx9) is sent to the display processing unit 20 and the compression processing unit 16 via the second signal control unit 53. When supplied with the video signal of the image Fx9, the display processing unit 20 can update the display image on the display unit 27 using the image Fx9; for example, it can display the whole or part of the image Fx9 on the display unit 27. The image 302 in FIG. 6 corresponds to an image displayed during or after the completion processing period. The compression processing unit 16 compresses the video signal of the image Fx9, and the compressed video signal is stored in the external memory 18.
 The repetitive control unit 50 determines, based on a predetermined iterative-control index, whether the current time point belongs to the simple repetition period or to the stage display period, and also controls, based on that index, the timing of display-image updates during the stage display period. Examples of this iterative-control index are given below. As described later, it is also possible to eliminate the simple repetition period; that is, the stage display period may start immediately after the start of the super-resolution processing (time t_A and time t_B may coincide).
 The first iterative-control index is the elapsed time from the start of execution of the super-resolution processing (that is, from time t_A). When the first iterative-control index is used, the repetitive control unit 50 compares the elapsed time TE from time t_A at the current time point with a predetermined reference elapsed time TE_REF0. When TE < TE_REF0, it judges that the current time point belongs to the simple repetition period; when TE ≥ TE_REF0 and the super-resolution processing is not yet complete, it judges that the current time point belongs to the stage display period. Then, while the current time point belongs to the stage display period, it compares the elapsed time TE with predetermined reference elapsed times TE_REF1, TE_REF2, TE_REF3, and so on; as shown in FIG. 10, the first, second, third, and subsequent display update processes are performed at the points at which the elapsed time TE reaches TE_REF1, TE_REF2, TE_REF3, and so on, respectively.
 Here, 0 < TE_REF0 ≤ TE_REF1 < TE_REF2 < TE_REF3, and so on. It is also possible to set TE_REF0 = 0, in which case the simple repetition period does not exist.
 In the display update process at a given time point, the latest high-resolution image obtained at that time point is reflected in the display image on the display unit 27. For example, when the latest high-resolution image obtained at that time point is the image Fx4, the display image is updated so that the whole or part of the image Fx4 is displayed on the display unit 27. The high-resolution image may be updated more than once between the first display update process and the second display update process; in that case, the image Fxi is reflected in the display image in the first display update process, and a high-resolution image other than the image Fx(i+1) (for example, the image Fx(i+2)) is reflected in the display image in the second display update process.
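 As a hypothetical sketch only (the patent specifies no code), the period classification and update counting based on the first iterative-control index could look as follows in Python; the function name and return convention are illustrative assumptions.

```python
def classify_and_count_updates(TE, TE_REF, done):
    """Classify the current time point from the elapsed time TE and count how
    many display update processes are due.  TE_REF is the threshold list
    [TE_REF0, TE_REF1, TE_REF2, ...] with
    0 < TE_REF0 <= TE_REF1 < TE_REF2 < ...; setting TE_REF0 = 0 removes the
    simple repetition period."""
    if done:
        period = "completion"            # super-resolution has completed
    elif TE < TE_REF[0]:
        period = "simple repetition"     # display image is not updated
    else:
        period = "stage display"         # updates occur at TE_REF1, TE_REF2, ...
    updates_due = sum(1 for t in TE_REF[1:] if TE >= t)
    return period, updates_due
```

The same comparison structure applies unchanged to the second index (execution count PN against PN_REF0, PN_REF1, ...) described next.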
 The second iterative-control index is the number of executions PN of the super-resolution unit process for the images Fa and Fb. When the number of executions PN is 1, 2, 3, and so on, the super-resolution calculation unit 44 outputs the images Fx2, Fx3, Fx4, and so on. When the second iterative-control index is used, the repetitive control unit 50 compares the number of executions PN at the current time point with a predetermined reference number PN_REF0. When PN < PN_REF0, it judges that the current time point belongs to the simple repetition period; when PN ≥ PN_REF0 and the super-resolution processing is not yet complete, it judges that the current time point belongs to the stage display period. Then, while the current time point belongs to the stage display period, it compares the number of executions PN with predetermined reference numbers PN_REF1, PN_REF2, PN_REF3, and so on; as shown in FIG. 11, the first, second, third, and subsequent display update processes are performed at the points at which the number of executions PN reaches PN_REF1, PN_REF2, PN_REF3, and so on, respectively. Here, 1 ≤ PN_REF0 ≤ PN_REF1 < PN_REF2 < PN_REF3, and so on.
 The third iterative-control index is the amount of image-quality improvement of the high-resolution image achieved by the repeated execution of the super-resolution unit process. As described above, the update amount calculation unit 63 in FIG. 8 generates, from the matrix X representing the pixel values of the high-resolution image Fxi, the matrices Wa·X and Wb·X representing the pixel values of the two estimated low-resolution images, calculates the update amount ∂I/∂X according to the errors |Wa·X − Ya| and |Wb·X − Yb| between the estimated and actual low-resolution images, and generates the high-resolution image Fx(i+1) from the high-resolution image Fxi through the update based on that update amount. Updates based on the update amount ∂I/∂X are repeatedly executed in order to improve the image quality of the high-resolution image.
 Accordingly, the amount of image-quality improvement achieved by the repeated execution of the super-resolution unit process can be estimated from the update amount ∂I/∂X. When the third iterative-control index is used, the processing is specifically as follows.
 The update amount ∂I/∂X is expressed by a matrix having the same number of elements as the matrix X. Each time the high-resolution image is updated, the repetitive control unit 50 obtains the sum of the absolute values of the elements of the update amount ∂I/∂X. Let Q_i denote the sum of the absolute values of the elements of the update amount ∂I/∂X used when generating the high-resolution image Fx(i+1) from the high-resolution image Fxi. The sum Q_i represents the amount of image-quality improvement of the high-resolution image Fx(i+1) relative to the high-resolution image Fxi; the larger the sum Q_i, the larger the improvement.
 When the high-resolution image Fx(i+1) is generated, the repetitive control unit 50 calculates the improvement evaluation value EV_i according to the expression EV_i = Q_1 + Q_2 + … + Q_i. This improvement evaluation value EV_i represents the amount of image-quality improvement of the high-resolution image Fx(i+1) relative to the initial high-resolution image Fx1. In this sense, the repetitive control unit 50 can be regarded as incorporating an estimation unit (not shown) that estimates the amount of image-quality improvement by calculating the improvement evaluation value EV_i. FIG. 12 shows the relationship between the number of executions PN of the super-resolution unit process and the improvement evaluation value EV_i.
 When using the third iterative-control index, the repetitive control unit 50 compares the improvement evaluation value EV_i at the current time point with a predetermined reference evaluation value EV_REF0. When EV_i < EV_REF0, it judges that the current time point belongs to the simple repetition period; when EV_i ≥ EV_REF0 and the super-resolution processing is not yet complete, it judges that the current time point belongs to the stage display period. Then, while the current time point belongs to the stage display period, it compares the improvement evaluation value EV_i with predetermined reference evaluation values EV_REF1, EV_REF2, EV_REF3, and so on; as shown in FIG. 12, the first, second, third, and subsequent display update processes are performed at the points at which the improvement evaluation value EV_i reaches EV_REF1, EV_REF2, EV_REF3, and so on, respectively. Here, 0 < EV_REF0 ≤ EV_REF1 < EV_REF2 < EV_REF3, and so on.
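 Again purely for illustration, the quantities Q_i and EV_i defined above can be computed as follows in Python; the update amounts are represented here as nested lists, and the function name is an assumption.

```python
def improvement_evaluation(update_amounts):
    """For each super-resolution unit process, Q_i is the sum of the absolute
    values of the elements of the update amount dI/dX used for that update;
    EV_i = Q_1 + Q_2 + ... + Q_i is the cumulative improvement evaluation
    value relative to the initial high-resolution image Fx1."""
    Q = [sum(abs(e) for row in U for e in row) for U in update_amounts]
    EV, total = [], 0.0
    for q in Q:
        total += q
        EV.append(total)
    return Q, EV
```

As the updates shrink, each Q_i contributes less and EV_i flattens out, which is the behaviour shown in FIG. 12.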
[Display processing unit]
 Next, a configuration example of the display processing unit 20 is described. FIG. 13 is an internal block diagram of the display processing unit 20 according to the first example. The display processing unit 20 in FIG. 13 includes a display video processing unit 71, a VRAM (Video Random Access Memory) 72, and a display driver 73.
 In the shooting mode, the video signal from the video signal processing unit 13 is input to the display video processing unit 71; in the playback mode, the video signal from the external memory 18 is input to the display video processing unit 71 via the expansion processing unit 19. The video signal input to the display video processing unit 71 is, for example, the video signal of a low-resolution image that has not undergone super-resolution processing, or the video signal of a final high-resolution image or an intermediately generated high-resolution image output from the video signal processing unit 13 in FIG. 7.
 The display video processing unit 71 converts the resolution of the image (low-resolution or high-resolution image) represented by the supplied video signal so that the image can be displayed on the display unit 27, and writes the video signal of the resolution-converted image into the VRAM 72. The VRAM 72 is the video display memory for the display unit 27. The display driver 73 causes the image represented by the video signal written in the VRAM 72 to be displayed on the display screen of the display unit 27.
[Display video processing unit]
 Although it is possible to display the whole of a high-resolution image (or a low-resolution image) on the display screen of the display unit 27, the resolution of the display unit 27 provided in the imaging apparatus 1 is usually lower than that of the images generated by the video signal processing unit 13, so if the entire image region of the high-resolution image is displayed on the display screen, the effect of the super-resolution processing is difficult to see. In consideration of this, a partial image of the high-resolution image, rather than the entire image, may be displayed on the display unit 27. Of course, when the display unit 27 is a display device external to the imaging apparatus 1 and the resolution of that display device is sufficiently high, the entire high-resolution image may be displayed. A processing example for displaying a partial image of the high-resolution image on the display unit 27 is described below.
 Using a cutout process that cuts out a part of the whole high-resolution image, the display video processing unit 71 extracts a partial image region of the high-resolution image as a cutout region. If necessary, a reduction process that reduces the image size of the high-resolution image can be performed before or after the cutout process. The image size after the cutout, or after the reduction process and the cutout, is determined according to the resolution of the display unit 27. In FIG. 14(a), reference numeral 320 denotes the entire image region of the high-resolution image, and reference numeral 321 denotes the cutout region. Reference numeral 322 in FIG. 14(b) denotes the display image corresponding to the image within the cutout region 321.
 In the simplest case, for example, a region of a predetermined shape at a predetermined position on the high-resolution image can be set as the cutout region. More specifically, for example, a rectangular region of a predetermined image size located near the center of the high-resolution image can be extracted as the cutout region.
 A face region in which a person's face appears, or a region including such a face region, can also be set as the cutout region. A face detection unit (not shown) included in the video signal processing unit 13 detects, for example, the position and size of the face region on the low-resolution image Fa (or Fb) from which the high-resolution image is derived, by known face detection processing based on the image data of that low-resolution image. Based on this detection result, the position and size of the face region on the high-resolution image are calculated, and from that calculation result, the position and size of the cutout region on the high-resolution image can be obtained.
 The in-focus region can also be set as the cutout region, because the main subject to which the photographer pays attention is highly likely to be present in the in-focus region. For example, the in-focus region can be detected based on the AF evaluation values used for autofocus control employing a TTL (Through The Lens) contrast detection method. More specifically, for example, an AF evaluation unit (not shown) included in the video signal processing unit 13 divides the entire image region of the low-resolution image Fa (or Fb) from which the high-resolution image is derived into a plurality of AF evaluation regions, and detects the contrast in each AF evaluation region from the image data of the low-resolution image Fa (or Fb). An AF evaluation value corresponding to the contrast can be obtained by calculating the amount of high-frequency components contained in each AF evaluation region. Among the plurality of AF evaluation regions, the AF evaluation region with the largest detected contrast (AF evaluation value) is then judged to be the in-focus region, and the region on the high-resolution image corresponding to that in-focus region is set as the cutout region.
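 As an illustrative sketch only, selecting the best-focused AF evaluation region might be implemented as follows in Python; the grid split and the sum of absolute horizontal pixel differences used here are a stand-in for the actual high-frequency-component measure, and all names are assumptions.

```python
def select_focus_region(image, grid=(2, 2)):
    """Split a grayscale image (list of pixel rows) into grid AF evaluation
    regions, score each by a simple contrast measure (sum of absolute
    horizontal differences, standing in for the high-frequency component
    amount), and return the (row, col) of the region judged in focus."""
    h, w = len(image), len(image[0])
    rh, cw = h // grid[0], w // grid[1]
    best, best_score = None, -1.0
    for r in range(grid[0]):
        for c in range(grid[1]):
            score = sum(
                abs(image[y][x + 1] - image[y][x])
                for y in range(r * rh, (r + 1) * rh)
                for x in range(c * cw, (c + 1) * cw - 1))
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

The returned grid cell would then be mapped to the corresponding region on the high-resolution image to define the cutout region.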
The position and size of the cutout region may also be set according to a manual operation by the user (such as an operation on the operation unit 26).
[Display additional items]
It is also possible to superimpose additional items on an image representing the whole or part of the high-resolution image and to display the resulting image. Examples of additional items (indicators 331, 332, and 340 and an image 335) are described below. The processing for generating the additional items can be performed either by the display video processing unit 71 or by the video signal processing unit 13.
For example, as shown in FIG. 15, during the simple repetition period or the staged display period, an indicator 331 representing the remaining processing time and the elapsed time TE is superimposed on an image representing part (or the whole) of the high-resolution image, and the superimposed image is displayed as the display image. The indicator 331 is a rectangular region whose longitudinal direction lies along the horizontal direction of the image and which contains a first color and a second color; the remaining processing time and the elapsed time TE are expressed by the area ratio of the first-color and second-color regions occupying the rectangle. In FIG. 15, the first-color region is shown hatched. The first-color and second-color regions are arranged on the left and right sides, respectively, of the rectangular region forming the indicator 331 as a whole, and as the elapsed time TE increases, the right edge of the first-color region, which coincides with the left edge of the second-color region, moves to the right. The remaining processing time at a given moment is the time, counted from that moment, required until the super-resolution processing is completed, and is obtained by subtracting the elapsed time TE at that moment from the total processing time. The total processing time is the time from time t_A to time t_C in FIG. 9, and is calculated from the number of super-resolution unit processes to be executed iteratively until the super-resolution processing is completed and from the image sizes of the low-resolution and high-resolution images.
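As a hypothetical sketch of how the indicator 331 could be driven (the linear per-pixel cost model and all names are assumptions for illustration, not taken from the embodiment), the total processing time is estimated from the iteration count and image sizes, and at any moment the remaining time and the bar split follow from the elapsed time TE:

```python
def total_processing_time(iterations, time_per_pixel, lr_pixels, hr_pixels):
    """Rough linear cost model (an assumption): each super-resolution
    unit process touches every low- and high-resolution pixel once."""
    return iterations * time_per_pixel * (lr_pixels + hr_pixels)

def indicator_split(elapsed, total, bar_width):
    """Return (first_color_px, second_color_px, remaining_time): how many
    pixels of the bar each color occupies, plus the remaining time."""
    elapsed = min(elapsed, total)
    first = round(bar_width * elapsed / total)   # first color grows rightwards
    return first, bar_width - first, total - elapsed
```

Halfway through the processing, the two colors occupy equal halves of the bar.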
Also, for example, as shown in FIG. 16, during the simple repetition period or the staged display period, an indicator 332 showing the execution timing of the display update processing may be superimposed, together with the indicator 331, on an image representing part (or the whole) of the high-resolution image, and the superimposed image may be displayed as the display image. In FIG. 16, the indicator 332 is formed of a plurality of line segments located below the indicator 331. As the elapsed time TE increases, the right edge of the first-color region moves to the right, and each time its horizontal position coincides with the drawing position of one of the line segments, the display update processing is executed once. The drawing position of each line segment is determined by setting or predicting the execution timing of the display update processing in advance.
Also, for example, as shown in FIG. 17, during the staged display period or the completion processing period, a reduced image 335 of the whole high-resolution image may be superimposed on an image representing part of the high-resolution image (the image of the cutout region), and the superimposed image may be displayed as the display image. The reduced image 335 may instead be a reduced image of the whole low-resolution image (Fa or Fb) from which the high-resolution image is derived.
Also, for example, as shown in FIG. 18(a), during the staged display period or the completion processing period, an indicator 340 showing the degree of expectation of the effect of the super-resolution processing may be superimposed on an image representing part (or the whole) of the high-resolution image, and the superimposed image may be displayed as the display image. More specifically, for example, the degree of expectation is classified into, and displayed in, three levels using icons such as those shown in FIG. 18(b). The resolving power (substantial resolution) of the high-resolution image obtained by the super-resolution processing is higher than that of the low-resolution image. The degree of expectation here refers to the degree of improvement in resolving power of the high-resolution image obtained by the super-resolution processing relative to the low-resolution image.
The improvement in resolving power by the super-resolution processing is achieved on the premise that a sub-pixel displacement exists between the images Fa and Fb. If the amount of motion between the images Fa and Fb were exactly zero, the images Fa and Fb would be exactly the same image (ignoring subject motion in real space), so no improvement in resolving power through the super-resolution processing could be expected. On the other hand, if an appropriate displacement exists between the images Fa and Fb, a large improvement in resolving power can be expected.
From this viewpoint, the degree of expectation can be estimated based on the amount of motion between the low-resolution images Fa and Fb from which the high-resolution image is derived. For example, based on the contents stored in the motion amount storage unit 43, the horizontal and vertical components of the amount of motion between the images Fa and Fb are detected, and then the fractional part M_abH of the detected horizontal component and the fractional part M_abV of the detected vertical component are obtained. The fractional part here is the fractional part when the adjacent-pixel pitch pp_L of the low-resolution image is taken as 1. An evaluation value EV_A is then obtained based on the fractional parts M_abH and M_abV; the degree of expectation is estimated to be the first level when EV_A > EV_A1, the second level when EV_A1 ≥ EV_A > EV_A2, and the third level when EV_A2 ≥ EV_A, and the estimated degree of expectation is reflected in the indicator 340. The values EV_A1 and EV_A2 are set in advance so that EV_A1 > EV_A2 > 0 holds.
Here, the evaluation value EV_A is an evaluation value that takes a higher value as the fractional parts M_abH and M_abV approach (0.5 × pp_L). For example, the reciprocal of (|M_abH − 0.5 × pp_L| + |M_abV − 0.5 × pp_L|) is used as the evaluation value EV_A. In the super-resolution processing, the displacement between the images Fa and Fb corresponding to the amount of motion between them is corrected, and the displacement-corrected images Fa and Fb are combined to achieve a higher resolution. If M_abH and M_abV were both zero (or nearly zero), the corresponding pixel positions of the displacement-corrected images Fa and Fb would overlap, as shown in FIG. 19(a), and corresponding pixels of the images Fa and Fb would carry information about only the same positions on the subject. In such a situation, and in situations similar to it, a large improvement in resolving power cannot be expected. On the other hand, if the fractional parts M_abH and M_abV are close to (0.5 × pp_L), then, as shown in FIG. 19(b), information about the subject near the midpoints between adjacent pixels of the image Fa, which cannot be obtained from the image Fa alone, can be obtained from the image Fb, so a large improvement in resolving power can be expected.
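The three-level classification above can be sketched as follows; the placeholder thresholds `ev_a1` and `ev_a2`, the function names, and the small epsilon guarding against division by zero when both fractions are exactly 0.5 · pp_L are assumptions for illustration, not taken from the embodiment:

```python
def expectation_level(m_h, m_v, pp_l=1.0, ev_a1=4.0, ev_a2=2.5):
    """Classify the expected super-resolution benefit into three levels
    from the motion between Fa and Fb.  m_h and m_v are the horizontal
    and vertical motion components; pp_l is the low-resolution pixel
    pitch; the thresholds satisfy ev_a1 > ev_a2 > 0."""
    # fractional parts M_abH, M_abV of the motion components
    f_h = m_h % pp_l
    f_v = m_v % pp_l
    # EV_A: reciprocal of the distance of both fractions from 0.5*pp_l,
    # so it grows as each fraction approaches a half-pixel shift
    ev = 1.0 / (abs(f_h - 0.5 * pp_l) + abs(f_v - 0.5 * pp_l) + 1e-9)
    if ev > ev_a1:
        return 1   # first (highest) expectation level
    elif ev > ev_a2:
        return 2   # second level
    else:
        return 3   # third (lowest) level
```

A half-pixel shift in both directions yields the highest level; an integer-pixel shift, carrying no sub-pixel information, yields the lowest.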
[Execute super-resolution processing during playback]
The description above assumes that the super-resolution processing is performed in the shooting mode, but the super-resolution processing can also be performed in the playback mode. In that case, first, the low-resolution images Fa and Fb are acquired in the shooting mode, and the image data of the images Fa and Fb are compressed by the compression processing unit 16 and then recorded in the external memory 18. Thereafter, in the playback mode, the compressed image data of the images Fa and Fb recorded in the external memory 18 are decompressed by the decompression processing unit 19, and the image data of the images Fa and Fb obtained by the decompression are input, in sequence, to the super-resolution processing unit 40 shown in FIG. 7.
That is, the entity that feeds the image data of the images Fa and Fb to the super-resolution processing unit 40 is the AFE 12 when the super-resolution processing is performed in the shooting mode, and the decompression processing unit 19 when it is performed in the playback mode. Except for this difference in the entity that feeds the image data of the images Fa and Fb to the super-resolution processing unit 40, the operation of the super-resolution processing unit 40 and of the blocks that receive its output data (the repetition control unit 50, the first signal control unit 51, the signal processing unit 52, the second signal control unit 53, and the display processing unit 20) is the same between the shooting mode and the playback mode. Accordingly, also when the super-resolution processing is performed in the playback mode, the display update processing using intermediate high-resolution images is executed as described above.
When the super-resolution processing based on the images Fa and Fb is completed and the final high-resolution image based on the images Fa and Fb is obtained, the image data of the final high-resolution image is recorded in the external memory 18 via the compression processing unit 16. Once the final high-resolution image based on the images Fa and Fb has been obtained, it is unnecessary to re-execute the super-resolution processing based on the images Fa and Fb.
While the example in which the super-resolution processing unit 40 is provided within the video signal processing unit 13 has been described above (see FIG. 7), it is also possible, with the main focus on performing the super-resolution processing in the playback mode, to provide the super-resolution processing unit 40 and the block responsible for controlling the execution of the display update processing (mainly, the repetition control unit 50) within the display processing unit 20.
<< Second Example >>
A second embodiment of the present invention will now be described. In the first embodiment described above, the whole of the high-resolution image Fx1 is generated from the whole of the low-resolution images Fa and Fb, after which each single super-resolution unit process generates the whole of the high-resolution image Fx(i+1) from the whole of the high-resolution image Fxi. Considering that it takes a considerable time until the final high-resolution image is obtained, intermediate high-resolution images, which may be called intermediate results of the super-resolution processing, are displayed while the super-resolution processing is in progress.
In contrast, in the second embodiment, the entire image region of the low-resolution and high-resolution images is divided into a plurality of regions, and the super-resolution processing for the plurality of divided regions is executed in a time-division manner. The results of the super-resolution processing for the divided regions are then displayed in order, starting from the divided regions for which the super-resolution processing has been completed.
More specifically, FIG. 20 shows an internal block diagram of a super-resolution processing unit 40a according to the second embodiment. The super-resolution processing unit 40a can be provided within the video signal processing unit 13 or the display processing unit 20 in FIG. 1. The super-resolution processing unit 40a includes the blocks referred to by reference numerals 41 to 43 and 44a. The memory unit 41, the motion amount calculation unit 42, and the motion amount storage unit 43 within the super-resolution processing unit 40a are the same as those shown in FIG. 7.
The image data of the images Fa and Fb are input to the super-resolution processing unit 40a from the external memory 18 via the decompression processing unit 19 in the playback mode, and from the AFE 12 in the shooting mode. As described in the first embodiment, the amount of motion between the images Fa and Fb is calculated by the motion amount calculation unit 42 and stored in the motion amount storage unit 43. The image data of the images Fa and Fb are input to a super-resolution calculation unit 44a via the memory unit 41.
The super-resolution calculation unit 44a has a configuration and functions similar to those of the super-resolution calculation unit 44 in FIG. 8. In the super-resolution calculation unit 44a, however, the output data of the subtraction unit 64 is fed directly to the frame memory 65, and the frame memory 65 stores the image data of the high-resolution image output from the subtraction unit 64. Moreover, whereas the super-resolution calculation unit 44 in FIG. 8 generates the image data of the whole high-resolution image at the same time, the super-resolution calculation unit 44a generates the image data of the first, second, third, ... divided regions of the high-resolution image in order.
For the sake of concreteness, consider a case where the entire image region of the low-resolution and high-resolution images is divided into nine. As shown in FIG. 21, divided regions DR_1 to DR_9 are formed by dividing each of the entire image regions of the low-resolution images Fa and Fb and of the high-resolution image into three equal parts both vertically and horizontally. The entire image region of the image Fa is the combination of the divided regions DR_1 to DR_9 of the image Fa; the entire image region of the image Fb is the combination of the divided regions DR_1 to DR_9 of the image Fb; and the entire image region of the high-resolution image is the combination of the divided regions DR_1 to DR_9 of the high-resolution image.
The super-resolution calculation unit 44a executes the super-resolution processing for each divided region. The content of each individual super-resolution process is itself the same as described in the first embodiment. The super-resolution calculation unit 44a repeats the operation of "executing the super-resolution processing for the divided region DR_j and, after it is completed, executing the super-resolution processing for the divided region DR_(j+1)" until the super-resolution processing is completed for all the divided regions (here, j is a natural number).
With reference to FIGS. 22 and 23, the super-resolution processing and the transitions of the display image according to the second embodiment will now be described. The image data of the images Fa and Fb are input to the super-resolution calculation unit 44a at time t_0, and, as shown in FIG. 22, as time passes after time t_0, times t_1, t_2, ..., t_9 occur in this order. FIG. 23 shows how the display image changes with the passage of time. In FIG. 23, images 401, 402, 403, and 404 show the display images at times t_0, t_1, t_2, and t_9, respectively, and the hatched areas within the display images in FIG. 23 indicate the parts for which the super-resolution processing has been completed.
At time t_0, the display processing unit 20 displays the whole of the low-resolution image Fa as the display image 401. Between times t_0 and t_1, the super-resolution calculation unit 44a executes the super-resolution processing using the image data within the divided region DR_1 of the image Fa and the image data within the divided region DR_1 of the image Fb, thereby generating the image data within the divided region DR_1 of the high-resolution image. Thereafter, between times t_1 and t_2, it executes the super-resolution processing using the image data within the divided region DR_2 of the image Fa and the image data within the divided region DR_2 of the image Fb, thereby generating the image data within the divided region DR_2 of the high-resolution image. The same applies between times t_2 and t_3 and thereafter. That is, between times t_(j−1) and t_j, the super-resolution calculation unit 44a executes the super-resolution processing using the image data within the divided region DR_j of the image Fa and the image data within the divided region DR_j of the image Fb, thereby generating the image data within the divided region DR_j of the high-resolution image (here, j is a natural number). The super-resolution calculation unit 44a executes such unit processing between times t_(j−1) and t_j sequentially, nine times in total.
The super-resolution processing using the image data within the divided region DR_j of the image Fa and the image data within the divided region DR_j of the image Fb is executed based on the amount of motion between the images Fa and Fb for the divided region DR_j. The amount of motion between the images Fa and Fb for the divided region DR_j is calculated by the motion amount calculation unit 42 and stored in the motion amount storage unit 43. The motion amount calculation unit 42 calculates the amount of motion between the images Fa and Fb for the divided region DR_j based on the image data within the divided region DR_j of the image Fa and the image data within the divided region DR_j of the image Fb. That is, the motion amount calculation unit 42 calculates the amount of motion between the images Fa and Fb for each divided region. It is, however, also possible to obtain a single amount of motion for the whole of the images Fa and Fb and to use that single amount of motion commonly as the amount of motion between the images Fa and Fb for all the divided regions DR_1 to DR_9.
When the image data within the divided region DR_1 of the high-resolution image is generated at time t_1, that image data is sent from the super-resolution processing unit 40a to the display processing unit 20, and the display processing unit 20 makes the display unit 27 display the display image 402 in FIG. 23 based on the image data within the divided regions DR_2 to DR_9 of the low-resolution image Fa and the image data within the divided region DR_1 of the high-resolution image. The display image 402 is an image obtained by combining the images within the divided regions DR_2 to DR_9 of the low-resolution image Fa with the image (intermediately generated image) within the divided region DR_1 of the high-resolution image.
Similarly, when the image data within the divided region DR_2 of the high-resolution image is generated at time t_2, that image data is sent from the super-resolution processing unit 40a to the display processing unit 20, and the display processing unit 20 makes the display unit 27 display the display image 403 in FIG. 23 based on the image data within the divided regions DR_3 to DR_9 of the low-resolution image Fa and the image data within the divided regions DR_1 and DR_2 of the high-resolution image. The display image 403 is an image obtained by combining the images within the divided regions DR_3 to DR_9 of the low-resolution image Fa with the images (two intermediately generated images) within the divided regions DR_1 and DR_2 of the high-resolution image.
Similar display image updates are performed at times t_3 to t_8, and when the image data within the divided region DR_9 of the high-resolution image is generated at time t_9, the super-resolution processing for the entire image regions of the images Fa and Fb is completed. At time t_9, the image data within the divided region DR_9 of the high-resolution image is sent to the display processing unit 20, and the display processing unit 20 makes the display unit 27 display the display image 404 in FIG. 23 based on the image data within the divided regions DR_1 to DR_9 of the high-resolution image sent at times t_1 to t_9. The display image 404 is an image obtained by combining the images within the divided regions DR_1 to DR_9 of the high-resolution image, that is, the whole of the high-resolution image to be finally generated. The whole image data of the high-resolution image generated by the super-resolution processing unit 40a is recorded in the external memory 18 via the compression processing unit 16.
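The tile-by-tile flow of this embodiment can be sketched as follows. The 2× nearest-neighbour enlargement merely stands in for the real per-region super-resolution (which combines Fa and Fb with motion compensation), and all names are illustrative assumptions:

```python
import numpy as np

def upscale2x(tile):
    """Stand-in for one region's super-resolution result: a plain 2x
    nearest-neighbour enlargement (the real unit process is far richer)."""
    return np.repeat(np.repeat(tile, 2, axis=0), 2, axis=1)

def tilewise_process(fa, grid=(3, 3), show=lambda img: None):
    """Process the divided regions DR_1..DR_9 one at a time and call
    `show` with the partially updated display image after each region,
    mimicking the progressive display of FIG. 23."""
    h, w = fa.shape
    bh, bw = h // grid[0], w // grid[1]
    display = upscale2x(fa)        # starts as the enlarged low-res image Fa
    for r in range(grid[0]):
        for c in range(grid[1]):
            tile = fa[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            hr_tile = upscale2x(tile)   # this region's "super-resolved" data
            display[2*r*bh:2*(r+1)*bh, 2*c*bw:2*(c+1)*bw] = hr_tile
            show(display)               # one display update per region
    return display
```

With a 3×3 grid, `show` is invoked nine times, once per divided region, and the final call carries the complete high-resolution image.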
In the description above, the divided regions subjected to the super-resolution processing are scanned from the top left toward the bottom right, as in raster scanning, but this scanning method can be changed arbitrarily.
<< Third Example >>
A third embodiment of the present invention will now be described. The first and second embodiments described above presuppose that the calculation processing (image processing) to be performed on input images is super-resolution processing and, considering that generating the output image to be finally obtained takes a corresponding amount of time, display the intermediate results of the calculation processing generated in the course of its execution in stages. For the first and second embodiments, the input images are a plurality of low-resolution images, and the output image is a single final high-resolution image.
In the present invention, however, the predetermined calculation processing for generating an output image from an input image is not limited to super-resolution processing, nor does it matter whether that calculation processing includes iterative execution of a process such as the super-resolution unit processing of the first embodiment. Accordingly, the number of input images for obtaining an output image is not necessarily more than one, and may be one. The predetermined calculation processing is arbitrary image processing such as spatial filtering, frequency filtering, or geometric transformation. Thus, the display method according to the present invention is applicable to any apparatus or method that generates an output image from an input image by calculation processing that takes a comparatively long time (for example, several seconds to several tens of seconds). Needless to say, the input image and the output image are images different from each other.
The input image does not have to be an image obtained by shooting with the imaging apparatus 1; here, however, assuming that the input image is a single frame image obtained by shooting with the imaging apparatus 1, a specific example of the display method according to the third embodiment is described below. This specific example of the display method is similar to that according to the second embodiment.
The video signal processing unit 13 generates a single output image I_O by performing predetermined calculation processing on a single input image (frame image) I_IN. The video signal processing unit 13 divides the entire image region of the input image I_IN and of the output image I_O into a plurality of divided regions. For the sake of concreteness, as shown in FIG. 21, divided regions DR_1 to DR_9 are formed by dividing each of the entire image regions of the input image I_IN and the output image I_O into three equal parts both vertically and horizontally. The entire image region of the input image I_IN is the combination of the divided regions DR_1 to DR_9 of the input image I_IN, and the entire image region of the output image I_O is the combination of the divided regions DR_1 to DR_9 of the output image I_O. The video signal processing unit 13 executes the predetermined calculation processing for each divided region. The video signal processing unit 13 repeats the operation of "executing the predetermined calculation processing on the image within the divided region DR_j of the input image I_IN and, after it is completed, executing the predetermined calculation processing on the image within the divided region DR_(j+1) of the input image I_IN" until the predetermined calculation processing is completed for all the divided regions (here, j is a natural number).
As time passes after time t_0, times t_1, t_2, ..., t_9 occur in this order. At time t_0, the display processing unit 20 displays the whole of the input image I_IN as the display image. Between times t_(j−1) and t_j, the video signal processing unit 13 executes the predetermined calculation processing on the image (image data) within the divided region DR_j of the input image I_IN, thereby generating the image (image data) within the divided region DR_j of the output image I_O (here, j is a natural number). Such unit processing between times t_(j−1) and t_j is executed sequentially, nine times in total.
 When the image data within the divided region DR_1 of the output image I_O is generated at time t_1, that image data is sent to the display processing unit 20. Based on the image data within the divided regions DR_2 to DR_9 of the input image I_IN and the image data within the divided region DR_1 of the output image I_O, the display processing unit 20 causes the display unit 27 to display an image obtained by combining the images within the divided regions DR_2 to DR_9 of the input image I_IN with the image (intermediate generated image) within the divided region DR_1 of the output image I_O.
 Similarly, when the image data within the divided region DR_2 of the output image I_O is generated at time t_2, that image data is sent to the display processing unit 20. Based on the image data within the divided regions DR_3 to DR_9 of the input image I_IN and the image data within the divided regions DR_1 and DR_2 of the output image I_O, the display processing unit 20 causes the display unit 27 to display an image obtained by combining the images within the divided regions DR_3 to DR_9 of the input image I_IN with the images (two intermediate generated images) within the divided regions DR_1 and DR_2 of the output image I_O.
 The display image is updated in the same way at times t_3 to t_8. When the image data within the divided region DR_9 of the output image I_O is generated at time t_9, the processing for generating the output image I_O from the input image I_IN ends. At time t_9, the image data within the divided region DR_9 of the output image I_O is sent to the display processing unit 20, and based on the image data within the divided regions DR_1 to DR_9 of the output image I_O sent at times t_1 to t_9, the display processing unit 20 causes the display unit 27 to display an image obtained by combining the images within the divided regions DR_1 to DR_9 of the output image I_O, that is, the entire output image I_O to be finally generated. The full image data of the generated output image I_O is recorded in the external memory 18 via the compression processing unit 16.
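As a non-authoritative illustration (not part of the patent disclosure), the region-by-region processing and staged display described above can be sketched as follows. The 3x3 grid, the NumPy array representation, and the `process_region` placeholder standing in for the predetermined arithmetic processing are all assumptions of this sketch.

```python
import numpy as np

def split_regions(h, w, rows=3, cols=3):
    """Return slice pairs for divided regions DR_1..DR_9 in row-major order."""
    return [(slice(r * h // rows, (r + 1) * h // rows),
             slice(c * w // cols, (c + 1) * w // cols))
            for r in range(rows) for c in range(cols)]

def process_region(region):
    # Placeholder for the predetermined arithmetic processing;
    # here it simply brightens the region by 20%.
    return np.clip(region * 1.2, 0, 255)

def progressive_process(input_image, show):
    """Process DR_1..DR_9 in order; after each region, display a composite
    of the already-processed output regions and the remaining input regions
    (the staged updates at times t_1..t_9)."""
    display = input_image.astype(float)  # entire input image, shown at t_0
    show(display)
    for sl in split_regions(*input_image.shape[:2]):
        display[sl] = process_region(display[sl])  # one unit process
        show(display)                              # staged update at t_j
    return display  # the completed output image I_O
```

Each call to `show` corresponds to one staged update of the display unit 27: the first shows the input image at t_0, and the tenth shows the completed output image at t_9.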
 <<Modifications>>
 The specific numerical values given in the above description are merely examples and can, of course, be changed to various other values. Notes 1 to 5 below describe modifications of, or remarks on, the embodiments described above. The contents of the notes can be combined in any way, as long as no contradiction arises.
 [Note 1]
 In the embodiments described above, the number of low-resolution images used to generate a high-resolution image is two, but it may be other than two.
 [Note 2]
 In the above description, the amount of motion between actual low-resolution images is derived by computation based on image data; instead, it may be derived based on the detection result of a sensor (not shown) that detects the motion of the imaging apparatus 1 in real space. The sensor that detects the motion of the imaging apparatus 1 is, for example, an angular velocity sensor that detects the angular velocity of the imaging apparatus 1, an angular acceleration sensor that detects its angular acceleration, an acceleration sensor that detects its acceleration, or a combination of these. The amount of motion between actual low-resolution images may also be derived based on both the detection result of such a sensor and the image data.
 [Note 3]
 The imaging apparatus 1 of FIG. 1 can be realized by hardware, or by a combination of hardware and software. In particular, all or part of the processing executed within the video signal processing unit 13 and the display processing unit 20 can be realized using software; of course, it can also be formed by hardware alone. When the imaging apparatus 1 is configured using software, a block diagram of a part realized by software serves as a functional block diagram of that part.
 [Note 4]
 For example, the embodiments can be viewed as follows. In the first example, a first image display device is formed by a block including the super-resolution processing unit 40, the iteration control unit 50, the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53 of FIG. 7, together with the display processing unit 20 of FIG. 13. The display unit 27 may also be regarded as included in the first image display device. The first image display device includes the super-resolution processing unit 40, which generates a high-resolution image from a plurality of low-resolution images by super-resolution processing, and a display control unit, which causes the display unit 27 to display a display image based on the high-resolution image generated by the super-resolution processing unit 40. This display control unit is formed mainly by the iteration control unit 50 and the display processing unit 20, but all or part of the first signal control unit 51, the signal processing unit 52, and the second signal control unit 53 may also be regarded as included in it.
 In the second example, a second image display device is formed by a block including the super-resolution processing unit 40a (FIG. 20), which generates a high-resolution image from a plurality of low-resolution images by super-resolution processing, and the display processing unit 20 (FIG. 13), which causes the display unit 27 to display a display image based on the high-resolution image generated by the super-resolution processing unit 40a. The display unit 27 may also be regarded as included in the second image display device.
 In the third example, a third image display device is formed by a block including the video signal processing unit 13 and the display processing unit 20. The display unit 27 may also be regarded as included in the third image display device.
 [Note 5]
 The functions of the first, second, or third image display device described above can also be realized in an electronic apparatus different from the imaging apparatus 1 (for example, an image reproduction apparatus having an image processing function; not shown). In that case, an image display device equivalent to the first, second, or third image display device is provided in the electronic apparatus; after one or more frame images are acquired by the imaging apparatus 1, the image data of those frame images may be supplied to the electronic apparatus wirelessly, by wire, or via a recording medium.
DESCRIPTION OF REFERENCE SYMBOLS
  1 imaging apparatus
 11 imaging unit
 13 video signal processing unit
 20 display processing unit
 27 display unit
 33 image sensor
 40, 40a super-resolution processing unit
 44, 44a super-resolution arithmetic unit
 50 iteration control unit

Claims (6)

  1.  An image display device comprising:
     an arithmetic processing unit that generates an output image from an input image by predetermined arithmetic processing; and
     a display control unit that causes a display unit to display a display image based on an image generated by the arithmetic processing unit,
     wherein, while the arithmetic processing is being executed, the display control unit causes the display unit to display, in stages, intermediate results of the arithmetic processing generated in the course of its execution.
  2.  The image display device according to claim 1, wherein
     the arithmetic processing includes unit processing that is executed repeatedly,
     the arithmetic processing unit updates an intermediate generated image based on the input image by repeatedly executing the unit processing on it, finally generating the output image, and
     while the unit processing is being executed repeatedly, the display control unit generates a display image from the intermediate generated image in the course of the repetition of the unit processing and causes the display unit to display it.
  3.  The image display device according to claim 2, wherein, while the unit processing is being executed repeatedly, the display control unit updates the display content of the display unit in stages according to the time elapsed since the start of execution of the arithmetic processing or the number of times the unit processing has been executed, reflecting the latest intermediate generated image at the time of each update in the display content.
  4.  The image display device according to claim 2, wherein
     the arithmetic processing unit sequentially updates the intermediate generated image by repeatedly executing the unit processing in order to improve the image quality of the intermediate generated image,
     the image display device further comprises an estimation unit that estimates the amount of image-quality improvement of the intermediate generated image achieved by the repeated execution of the unit processing, and
     while the unit processing is being executed repeatedly, the display control unit updates the display content of the display unit in stages according to the estimated amount of image-quality improvement, reflecting the latest intermediate generated image at the time of each update in the display content.
  5.  The image display device according to claim 1, wherein
     the arithmetic processing consists of first to n-th unit processes,
     by the i-th unit process, an i-th intermediate generated image based on a part of the input image is generated as a part of the output image,
     the entire output image is formed by combining the first to n-th intermediate generated images generated by the first to n-th unit processes,
     while the arithmetic processing is being executed, the display control unit generates a display image using the first to m-th intermediate generated images obtained at that point and causes the display unit to display it, and
     n and m are natural numbers satisfying n > m, and i is an integer satisfying 1 ≤ i ≤ n.
  6.  An imaging apparatus comprising:
     an imaging unit that acquires an image by shooting; and
     an image display device,
     wherein the image display device according to any one of claims 1 to 5 is used as the image display device, and
     the image display device receives the image acquired by the imaging unit as the input image.
PCT/JP2009/065609 2008-09-16 2009-09-07 Image display device and imaging apparatus WO2010032649A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008-236844 2008-09-16
JP2008236844A JP2010072092A (en) 2008-09-16 2008-09-16 Image display device and imaging device

Publications (1)

Publication Number Publication Date
WO2010032649A1 true WO2010032649A1 (en) 2010-03-25

Family

ID=42039476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/065609 WO2010032649A1 (en) 2008-09-16 2009-09-07 Image display device and imaging apparatus

Country Status (2)

Country Link
JP (1) JP2010072092A (en)
WO (1) WO2010032649A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022264691A1 (en) * 2021-06-17 2022-12-22 富士フイルム株式会社 Image capture method and image capture device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2762939B1 (en) * 2011-09-29 2017-06-14 FUJIFILM Corporation Lens system and camera system
JP6270555B2 (en) * 2014-03-07 2018-01-31 キヤノン株式会社 Image processing system, imaging apparatus, and control method thereof
JP6304293B2 (en) * 2016-03-23 2018-04-04 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
WO2018003124A1 (en) 2016-07-01 2018-01-04 マクセル株式会社 Imaging device, imaging method, and imaging program
JP6705758B2 (en) * 2017-01-04 2020-06-03 東芝映像ソリューション株式会社 Image quality improvement device that can process multiple times in a time-sharing manner
JP7231598B2 (en) * 2020-11-05 2023-03-01 マクセル株式会社 Imaging device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05189532A (en) * 1992-01-17 1993-07-30 N T T Data Tsushin Kk Still picture control system
JPH05244437A (en) * 1992-02-28 1993-09-21 Canon Inc Picture processing system
JPH077620A (en) * 1993-06-17 1995-01-10 Chuo Denshi Kk Still picture transmitting method in monitoring device
JPH0975551A (en) * 1995-09-12 1997-03-25 Konami Co Ltd Image distinguishing game device
JPH10186455A (en) * 1996-07-01 1998-07-14 Sun Microsyst Inc Improved electronic finder for still image
JPH10301550A (en) * 1997-02-26 1998-11-13 Ricoh Co Ltd Image display method, image output system and image processing system
JP2002112103A (en) * 2000-09-29 2002-04-12 Minolta Co Ltd Digital still camera

Also Published As

Publication number Publication date
JP2010072092A (en) 2010-04-02

Similar Documents

Publication Publication Date Title
WO2010032649A1 (en) Image display device and imaging apparatus
JP4879261B2 (en) Imaging apparatus, high resolution processing method, high resolution processing program, and recording medium
JP5627256B2 (en) Image processing apparatus, imaging apparatus, and image processing program
JP2009194896A (en) Image processing device and method, and imaging apparatus
JP5263753B2 (en) Super-resolution processing apparatus and method, and imaging apparatus
US20090033792A1 (en) Image Processing Apparatus And Method, And Electronic Appliance
JP5764740B2 (en) Imaging device
US20150334283A1 (en) Tone Mapping For Low-Light Video Frame Enhancement
JP2010268441A (en) Image processor, imaging device, and image reproducing device
KR20080022399A (en) A image generation apparatus and method for the same
JP2012186593A (en) Image processing system, image processing method, and program
JP4640032B2 (en) Image composition apparatus, image composition method, and program
JP6120665B2 (en) Imaging apparatus, image processing apparatus, image processing method, and image processing program
JP2010252258A (en) Electronic device and image capturing apparatus
JP2009168536A (en) Three-dimensional shape measuring device and method, three-dimensional shape regenerating device and method, and program
JP2009049777A (en) Imaging apparatus and its computer program
JP2008294950A (en) Image processing method and device, and electronic device with the same
JP2009237650A (en) Image processor and imaging device
JP4942563B2 (en) Image processing method, image processing apparatus, and electronic apparatus including the image processing apparatus
JP2008079123A (en) Imaging apparatus and focus control program
JP4128123B2 (en) Camera shake correction device, camera shake correction method, and computer-readable recording medium recording camera shake correction program
JP2012235198A (en) Imaging apparatus
US8237820B2 (en) Image synthesis device for generating a composite image using a plurality of continuously shot images
JP2008293388A (en) Image processing method, image processor, and electronic equipment comprising image processor
JP2010251882A (en) Image capturing apparatus and image reproducing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09814494

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09814494

Country of ref document: EP

Kind code of ref document: A1