WO2017145752A1 - Image processing system and image processing device - Google Patents

Image processing system and image processing device

Info

Publication number
WO2017145752A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
position shift
pixels
processing
Prior art date
Application number
PCT/JP2017/004512
Other languages
French (fr)
Japanese (ja)
Inventor
影山 昌広
Original Assignee
株式会社日立産業制御ソリューションズ
Priority date
Filing date
Publication date
Application filed by 株式会社日立産業制御ソリューションズ
Publication of WO2017145752A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • the present invention relates to an image processing system and an image processing apparatus that perform processing related to image data size and image quality.
  • Images (moving images and still images) captured by a camera are commonly transmitted via a communication network such as the Internet, an intranet, or a public network, received by an image receiving terminal, and displayed.
  • the number of pixels of a camera used for imaging has increased with the advance of technology, and at present, for example, a full HD size image composed of horizontal 1920 pixels ⁇ vertical 1080 pixels is generally used.
  • However, when the data size is large relative to the transmission band, the received image may deteriorate or freeze due to communication errors or data delays.
  • When the transmitted image data is recorded and stored on a medium such as a hard disk (magnetic disk) or an optical disk, a large data size means that only a short time can be recorded on a medium with limited recording capacity.
  • There is also the problem that a large-capacity recording medium becomes necessary, which increases the cost of the recording apparatus.
  • For these reasons, the image data size is generally reduced in accordance with the transmission band and the recording capacity; for example, the number of pixels in the captured image frame is reduced to shrink the image.
  • That is, the data size is reduced by reducing the overall image size and then encoding and compressing it, and when the image is reproduced, the reduced image is enlarged and displayed.
  • Patent Document 1 discloses a technique in which a wideband output signal exceeding the Nyquist frequency of an input signal is obtained by using two series of digital signals to cancel aliasing distortion in a one-dimensional direction.
  • Non-Patent Document 1 discloses a technique in which a plurality of image frames are combined into one frame and a high-frequency component exceeding the Nyquist frequency of the input image is restored by successive approximation processing with iterative calculation, thereby obtaining a high-resolution output image.
  • Patent Document 1: Japanese Patent Laid-Open No. 10-280685
  • Non-Patent Document 1: S. C. Park, M. K. Park, and M. G. Kang, "Super-Resolution Image Reconstruction: A Technical Overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21-36, 2003.
  • A personal computer (hereinafter abbreviated as "PC"), a mobile terminal, or the like is commonly used as means for realizing applications that transmit, record, and display images for videophones, remote conferences, network surveillance cameras, and the like.
  • In such devices, signal processing is performed in software using a general-purpose CPU (Central Processing Unit), MPU (Micro Processing Unit), GPGPU (General-Purpose computing on Graphics Processing Units), DSP (Digital Signal Processor), or the like.
  • The super-resolution processing exemplified in the above-described prior art requires a large amount of calculation compared with edge enhancement processing, which merely amplifies the high-frequency components contained in an image.
  • In particular, in a super-resolution technique that obtains an output image by successive approximation processing using an input image of a plurality of frames, as described in Non-Patent Document 1, a high-precision motion search in units of sub-pixels must be performed for each pixel,
  • and the accompanying inter-frame alignment (registration), image enlargement, reduction of the enlarged image, difference detection between the reduced image and the input image, and output-image correction must be repeated many times (iteration).
  • Such per-pixel motion search and iteration involve a very large amount of calculation, which can exceed the processing limits of calculation resources such as the CPU and memory; as a result, frames may be dropped so that the motion of the image becomes jerky, the entire process may stop, or user input from a mouse, keyboard, or the like may not be accepted.
  • The present invention has been made in view of the above, and an object thereof is to provide an image processing system and an image processing apparatus capable of reducing the encoded data size while suppressing an increase in the amount of calculation related to image processing and a decrease in image quality.
  • To achieve this object, the present invention provides a first image position shift unit that shifts the position of each of a plurality of temporally continuous images to any one of a plurality of predetermined shift positions,
  • an image reduction unit that reduces the number of pixels of the image position-shifted by the first image position shift unit, an encoding unit that encodes the image reduced by the image reduction unit to generate an encoded image,
  • a decoding unit that decodes the encoded image transmitted via a communication network to generate a decoded image, an image enlargement unit that enlarges the decoded image decoded by the decoding unit by increasing its number of pixels,
  • a second image position shift unit that shifts the position of the image enlarged by the image enlargement unit so as to cancel the position shift performed by the first image position shift unit, and an aliasing distortion reduction unit that reduces the aliasing distortion of the image position-shifted by the second image position shift unit.
  • According to the present invention, the encoded data size can be reduced while suppressing an increase in the amount of computation related to image processing and a decrease in image quality, and high-speed image transmission and image sharpening processing can be performed even with relatively limited calculation resources.
  • FIG. 1 is a functional block diagram schematically showing the overall configuration of an image processing system according to a first embodiment. FIG. 2 is a diagram schematically showing an example of image position shift processing, and FIG. 3 is a diagram schematically showing another example of image position shift processing. FIG. 4 is a functional block diagram schematically showing an example of the configuration of the image enlargement / clearing unit of the image receiving device. FIG. 5 is a functional block diagram schematically showing the configuration of the image reduction unit, and FIG. 6 is a functional block diagram schematically showing the configuration of its horizontal processing unit and vertical processing unit.
  • FIGS. 7 to 11 are diagrams schematically showing examples of configurations for realizing the image enlargement / clearing function of the image enlargement / clearing unit shown in FIG. 4, and FIGS. 12 to 14 are block diagrams and an operation explanatory diagram showing examples of the configuration of the asymmetric filters included in the image enlargement / clearing unit.
  • FIG. 15 is a block diagram showing another example of the configuration of the image enlargement / clearing unit in the image receiving device, and FIG. 16 is a block diagram showing an example of the configuration of the motion detection unit included in the image enlargement / clearing unit of FIG. 15. FIG. 17 is a diagram showing a configuration example in which synchronization is performed using the frame phase information generation unit of the image transmission device and the frame phase information acquisition unit of the image receiving device.
  • FIG. 18 is a diagram showing a configuration example in which synchronization is performed using the frame phase information generation unit of the image receiving device and the frame phase information acquisition unit of the image transmission device, FIG. 19 is a diagram showing a configuration example in which synchronization is performed using the frame phase information estimation unit of the image receiving device, and FIG. 20 is an explanatory diagram showing an example of the operation of the frame phase information estimation processing.
  • If the cut-off frequency of the low-pass filter used for image reduction is set higher than the Nyquist frequency of the reduced image, aliasing distortion occurs in the reduced image and is mixed with the baseband component originally contained in the pre-reduction image. Since this aliasing distortion appears as moire (interference fringe) noise, the image quality may be greatly degraded.
  • Therefore, when enlarging the reduced transmission image, the image enlargement / clearing unit (111) of the image receiving device (109) reduces the aliasing distortion using a plurality of image frames, extracts a wideband baseband component, and sharpens the image.
  • For this purpose, use is made of the property that the phase of the aliasing distortion changes when the sampling phase of the same input signal is changed. This property is also exploited in interlaced scanning: in a general 2:1 interlace, at the time of imaging, the image is scanned every other scanning line from the top, producing a first field consisting only of the odd-numbered scanning lines and a second field consisting only of the even-numbered scanning lines, which are then transmitted.
  • On the display side, conversion from interlaced scanning to progressive scanning is performed by signal processing, or the first field and the second field are combined and perceived in the brain through the afterimage effect of the human eye, thereby obtaining clear frame images.
  • In the present embodiment, the image position shift unit (103) shifts the position of the image for each frame and the image reduction unit (104) then reduces the image, so that the phase of the aliasing distortion contained in the reduced image (transmission image) differs from frame to frame.
  • The phase of the aliasing distortion can be uniquely determined from the image shift method (shift direction and shift amount). For example, if the position of a full HD size image (horizontal 1920 pixels x vertical 1080 pixels) is shifted by 1 pixel in the horizontal direction, the D1 size image (horizontal 704 pixels x vertical 480 pixels) obtained after image reduction is shifted by a sub-pixel amount in the horizontal direction, that is, its sampling phase changes.
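  • As a hedged illustration (not part of the patent text), the arithmetic behind this example can be written out using only the pixel counts quoted above:

```python
# How an integer-pixel shift before reduction becomes a sub-pixel sampling-phase
# change after reduction (sizes taken from the example above).
full_hd_width = 1920   # horizontal pixels before reduction (first number of pixels)
d1_width = 704         # horizontal pixels after reduction (second number of pixels)

shift_before_px = 1    # integer-pixel horizontal shift applied by the image position shift unit (103)
shift_after_px = shift_before_px * d1_width / full_hd_width

print(round(shift_after_px, 3))   # ~0.367 pixels: a sub-pixel shift, i.e. a changed sampling phase
```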
  • FIG. 1 is a functional block diagram schematically showing the overall configuration of the image processing system according to the present embodiment.
  • The image processing system according to the present embodiment schematically comprises an image transmission device (101), a communication network (107), an image storage device (108), an image reception device (109), an operation unit (114), and a display unit (113).
  • The image transmission device (101) includes a camera (102) that captures images (still images, moving images), an image position shift unit (103) that performs image position shift processing on the image captured by the camera (102), an image reduction unit (104) that performs image reduction processing on the position-shifted image, an encoding unit (105) that encodes the reduced image, and a control unit (106) that controls the operation of the entire image transmission device (101), including the camera (102), the image position shift unit (103), the image reduction unit (104), and the encoding unit (105), based on commands and data transmitted from the image reception device (109) via the communication network (107).
  • The image receiving device (109) includes a decoding unit (110) that decodes the image transmitted from the image transmission device (101) via the communication network (107), an image enlargement / clearing unit (111) that performs image enlargement / sharpening processing on the decoded image, and a control unit (112) that controls the operation of the entire image receiving device (109).
  • In this image processing system, the transmission data size is reduced by reducing the number of pixels constituting one frame of the image in the image reduction unit (104) of the image transmission device (101), and a clear image with little blur can be displayed by enlarging the image in the image enlargement / clearing unit (111) of the image receiving device (109).
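  • The flow described above can be summarized in a minimal end-to-end sketch; the function names, the 4-frame shift table, and the crude stand-in implementations are illustrative assumptions, not the patent's actual processing:

```python
import numpy as np

# Assumed 4-frame shift cycle (dy, dx) in whole pixels, for illustration only.
SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def shift_image(img, phase):      # stand-in for the image position shift unit (103)
    return np.roll(img, SHIFTS[phase], axis=(0, 1))

def unshift_image(img, phase):    # stand-in for the image position shift unit (302)
    dy, dx = SHIFTS[phase]
    return np.roll(img, (-dy, -dx), axis=(0, 1))

# Crude stand-ins so the sketch runs end to end (not the units' real processing):
reduce_image    = lambda img: img[::2, ::2]                  # image reduction unit (104)
enlarge_image   = lambda img: np.kron(img, np.ones((2, 2)))  # image enlargement unit (301)
encode = decode = lambda x: x                                # encoding unit (105) / decoding unit (110)
reduce_aliasing = lambda img: img                            # aliasing distortion reduction unit (303)

def pipeline(frames):
    """Transmit side: shift -> reduce -> encode; receive side: decode -> enlarge -> un-shift -> de-alias."""
    for i, frame in enumerate(frames):
        tx = encode(reduce_image(shift_image(frame, i % 4)))
        yield reduce_aliasing(unshift_image(enlarge_image(decode(tx)), i % 4))
```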
  • The image having the third number of pixels output from the image receiving device (109) may be further enlarged or reduced by an image enlargement unit or image reduction unit (not shown), so that it is converted into an image having a fourth number of pixels before being displayed on the display unit (113).
  • The camera (102) includes, for example, an imaging unit (not shown) composed of a photoelectric conversion element such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal-Oxide Semiconductor) sensor and a lens, and performs signal level adjustment, contrast adjustment, brightness adjustment, white balance adjustment, and the like.
  • An image (still image, moving image) captured by the camera (102) is sent to a subsequent image position shift unit (103).
  • the number of pixels of an image is represented by a set of the number of pixels in the horizontal direction (the number of horizontal pixels) and the number of pixels in the vertical direction (the number of vertical pixels).
  • Each image frame captured by the camera (102) is assumed to be composed of a first number of pixels (for example, horizontal 1920 pixels x vertical 1080 pixels).
  • The image position shift unit (103) performs image position shift processing that shifts (translates) the position of the image obtained by the camera (102) vertically and horizontally in units of integer pixels.
  • The image position shift processing in the image position shift unit (103) will be described with reference to FIGS. 2 and 3.
  • FIG. 2 is a diagram schematically showing an example of the image position shift process.
  • In FIG. 2, an image in which three persons are the subjects is shown schematically; the broken line indicates the original image frame (200) captured by the camera (102), and the solid lines indicate the image frames (201) to (204) shifted by the image position shift processing.
  • The arrows indicate the state transition for each frame time (for example, 1/30 second).
  • an image position shift process for shifting an image at a cycle of 4 frames is performed, which is referred to as “4-frame type image position shift”.
  • the shift direction and the shift amount in the image position shift unit (103) are controlled by the control unit (106).
  • FIG. 3 is a diagram schematically showing another example of the image position shift process.
  • In FIG. 3, the broken line indicates the original image frame (200) captured by the camera (102), and the solid lines indicate the image frames (205) and (206) shifted by the image position shift processing. The arrows indicate the state transition for each frame time (for example, 1/30 second).
  • an image position shift process for shifting an image at a cycle of 2 frames is performed, which is referred to as “2-frame type image position shift”.
  • The image position shift processing is not limited to the examples shown in FIGS. 2 and 3.
  • For example, the position of the image frame (200) itself may be used as one of the shift states, and the position may be shifted in the order "image frame (200), shift to the right, shift to the lower right, shift downward, image frame (200), ...".
  • Alternatively, a state in which the position is not shifted (that is, the position remains that of the image frame (200)) may be used, and the position may be shifted in the order "image frame (200), image frame (206), image frame (200), ...".
  • image position shift processing including position shift in the vertical direction and the oblique direction (not shown) may be performed.
  • Further, a position shift with a period of 9 frames, obtained by combining a position shift over 3 frames in the horizontal direction with a position shift over 3 frames in the vertical direction, may be performed.
  • the image sharpening state changes according to the direction of the position shift by the image position shift process.
  • For example, when the "4-frame type image position shift" shown in FIG. 2 is performed, the resolution (clearness) of the image after image enlargement and sharpening processing is improved in both the horizontal and vertical directions, whereas when the "2-frame type image position shift" shown in FIG. 3 is performed, only the resolution (clearness) in the horizontal direction is improved.
  • the resolution in a desired direction can be improved by performing an image position shift process that shifts the image frame in the direction in which the resolution is desired to be improved.
  • FIG. 5 is a functional block diagram schematically showing the configuration of the image reduction unit.
  • FIG. 6 is a functional block diagram schematically illustrating the configuration of the horizontal processing unit and the vertical processing unit of the image reduction unit.
  • an image reduction unit (104) performs an image reduction process for converting the number of pixels constituting one frame of an image so as to be a second number of pixels smaller than the first number of pixels.
  • This configuration is also referred to as an interpolation filter (401), and includes a horizontal processing unit (401-H) that performs horizontal image reduction processing on the image input from the image position shift unit (103) and a vertical processing unit (401-V) that performs vertical image reduction processing. Note that the processing order for the input image may be swapped between the horizontal processing unit (401-H) and the vertical processing unit (401-V).
  • The horizontal processing unit (401-H) and the vertical processing unit (401-V) can be realized with the same configuration, and each includes a pixel insertion unit (402) that performs m-times upsampling, a low-pass filter (403) that removes or reduces unnecessary high-frequency components, and a pixel thinning unit (404) that performs 1/n downsampling.
  • m-times upsampling (zero insertion) is performed on the input image by the pixel insertion unit (402), unnecessary high-frequency components are then removed or reduced by the low-pass filter (403), and 1/n downsampling is performed by the pixel thinning unit (404), so that the number of pixels in the horizontal direction (or vertical direction) of the image can be increased or decreased to m/n times (where m and n are positive integers).
  • Image enlargement processing by the image enlargement unit (301), which will be described later, can be realized with the same configuration by appropriately setting the constants m and n of the image reduction unit (104).
  • For example, setting m = 11 and n = 30 realizes horizontal reduction (1920 pixels to 704 pixels) in the image reduction unit (104), and setting m = 2 and n = 1 realizes horizontal enlargement (704 pixels to 1408 pixels). The same applies to reduction and enlargement in the vertical direction.
  • The number of output pixels (second number of pixels) of the image reduction unit (104) is controlled by the control unit (106) through the settings of m and n.
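  • A minimal sketch of the m-times upsampling, low-pass filtering, and 1/n downsampling chain of FIG. 6 follows; the windowed-sinc low-pass coefficients and the tap count are assumptions, since the patent does not prescribe specific filter coefficients:

```python
import numpy as np

def resample_1d(x, m, n, taps=127):
    """Convert the sample count along one axis by m/n:
    zero insertion (402) -> low-pass filter (403) -> decimation (404)."""
    # Pixel insertion unit (402): insert m - 1 zeros between input samples.
    up = np.zeros(len(x) * m)
    up[::m] = x
    # Low-pass filter (403): windowed-sinc coefficients are an assumption for this sketch;
    # the text only requires that unnecessary high-frequency components be removed or reduced.
    t = np.arange(taps) - (taps - 1) / 2
    fc = min(1.0 / m, 1.0 / n)               # normalized cutoff (1.0 = Nyquist of the upsampled signal)
    h = fc * np.sinc(fc * t) * np.hanning(taps)
    h *= m / h.sum()                         # restore the DC gain lost by zero insertion
    filtered = np.convolve(up, h, mode="same")
    # Pixel thinning unit (404): keep every n-th sample.
    return filtered[::n]

# Horizontal reduction 1920 -> 704 corresponds to m/n = 11/30;
# horizontal enlargement 704 -> 1408 uses the same structure with m/n = 2/1.
row = np.random.rand(1920)
reduced = resample_1d(row, m=11, n=30)   # len(reduced) == 704
```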
  • The encoding unit (105) encodes and compresses the image input from the image reduction unit (104), and transmits it to the image receiving device (109) and the image storage device (108) via the communication network (107).
  • the processing configuration for connecting the image transmission device (101) or the image reception device (109) to the communication network (107) uses a general technique and is not shown.
  • The decoding unit (110) of the image receiving device (109) performs decoding processing according to the encoding method used by the encoding unit (105).
  • Examples of encoding methods that can be used in the encoding unit (105) include standardized methods such as MPEG (Moving Picture Experts Group)-1, MPEG-2, MPEG-4, H.264, H.265, VC-1, JPEG (Joint Photographic Experts Group), Motion JPEG, and JPEG 2000; non-standard encoding may also be performed.
  • the encoding method in the encoding unit (105) is controlled by the control unit (106).
  • the communication network (107) may be either wired or wireless, and is a network for communicating digital data using a communication protocol such as a general IP (Internet Protocol).
  • FIG. 4 is a functional block diagram schematically showing an example of the configuration of the image enlargement / clearing unit of the image receiving apparatus.
  • As shown in FIG. 4, the image enlargement / clearing unit (111) includes an image enlargement unit (301) that performs image enlargement processing on the image having the second number of pixels decoded by the decoding unit (110), an image position shift unit (302) that performs image position shift processing on the enlarged image, and an aliasing distortion reduction unit (303) that performs aliasing distortion reduction processing on the position-shifted image.
  • The same processing may be performed on all color image signals (RGB (red, green, blue), YUV (luminance Y, color difference UV), etc.), or the following signal processing may be performed only on the luminance signal Y while only the image enlargement processing is applied to the color difference signals (UV).
  • The image enlargement unit (301) performs image enlargement processing that converts the image having the second number of pixels (for example, D1 size (horizontal 704 pixels x vertical 480 pixels)) input from the decoding unit (110) into an image having a third number of pixels, and, as described above, can be realized with the same configuration as the image reduction unit (104).
  • The purpose of the image enlargement processing is to increase the number of pixels of the image (that is, to raise the sampling frequency) so that the aliasing distortion to be reduced in the subsequent processing block is not folded back again in the frequency domain; the third number of pixels therefore only needs to be larger than the second number of pixels. For simplicity of explanation, the third number of pixels is assumed here to be twice the second number of pixels (that is, horizontal 1408 pixels x vertical 960 pixels).
  • The image position shift unit (302) shifts the image enlarged by the image enlargement unit (301) in the direction opposite to the image position shift processing performed by the image position shift unit (103) of the image transmission device (101) (that is, it shifts the position of the image so as to cancel the position shift performed by the image position shift unit (103)), thereby suppressing image blur between frames. For example, when the image position shift unit (103) shifts the image by one pixel to the left and one pixel upward, the image position shift unit (302) shifts the image by the corresponding amount to the right and downward.
  • It is sufficient that this shift cancels the image position shift processing of the image position shift unit (103). For example, the shift direction and shift amount in the image position shift unit (302) may be determined so that all images are shifted to the center position (that is, the position of the image frame (200)) of the "4-frame type image position shift" shown in FIG. 2 or the "2-frame type image position shift" shown in FIG. 3.
  • Alternatively, the shift direction and shift amount in the image position shift unit (302) may be determined so that all images are shifted to a fixed position that is not the center of the image position shift processing of the image position shift unit (103).
  • The shift direction and shift amount of the image position shift unit (302) are controlled by the control unit (112).
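  • A hedged sketch of such a cancelling shift follows; rescaling the transmitter's integer shift to the enlarged image's pixel grid and using SciPy's interpolated shift are assumptions for illustration (the patent realizes the shift with its own interpolation filters):

```python
import scipy.ndimage as ndi

def cancel_shift(enlarged, tx_shift_px, tx_size=(1080, 1920), rx_size=(960, 1408)):
    """Shift the enlarged decoded image so as to cancel the transmitter-side shift (unit (302)).

    tx_shift_px: (dy, dx) integer shift applied by unit (103) on the first-pixel-number grid.
    The cancelling shift is applied in the opposite direction, rescaled to the enlarged grid,
    which in general is a sub-pixel (interpolated) shift.
    """
    dy, dx = tx_shift_px
    sy = rx_size[0] / tx_size[0]          # e.g. 960 / 1080
    sx = rx_size[1] / tx_size[1]          # e.g. 1408 / 1920
    return ndi.shift(enlarged, (-dy * sy, -dx * sx), order=1, mode="nearest")
```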
  • The aliasing distortion reduction unit (303) performs aliasing distortion reduction processing that reduces the aliasing distortion of the image position-shifted by the image position shift unit (302), and outputs the result from the image enlargement / clearing unit (111). Details of the aliasing distortion reduction processing will be described later.
  • FIG. 7 is a diagram schematically showing an example of a configuration for realizing the function of the image enlargement / clearing process in the image enlargement / clearing unit shown in FIG.
  • the image enlargement / clearing unit (501) obtains a wideband output signal exceeding the Nyquist frequency of the input signal by canceling the aliasing distortion in the one-dimensional direction using two series of digital signals.
  • The one-dimensional interpolation filters (503-#0) and (503-#1) adjust the position of each image frame using the respective position shift amounts (φ0, φ1).
  • The output signal of the adder (504) and the image signal that has passed through the subtracter (505), the Hilbert transformer (506), and the multiplier (507) with coefficient K are converted so that the phases of the aliasing distortion contained in both are 180 degrees apart (opposite phase), and are then added by the adder (508), whereby the aliasing distortion in the image can be canceled.
  • The image enlargement unit (301) shown in FIG. 4 corresponds to the one-dimensional enlargement units (502-#1) and (502-#2) in FIG. 7, and the image position shift unit (302) corresponds to the one-dimensional interpolation filters (503-#0) and (503-#1).
  • Since the image enlargement unit (301) shown in FIG. 4 is equivalent to the interpolation filter (401) as described above, and its low-pass filter (403) coefficients are determined from a sinc function (sin(πt)/(πt)) whose phase can be set according to the position shift amount, the separate one-dimensional enlargement units (502-#1) and (502-#2) can be omitted.
  • The aliasing distortion reduction unit (303) in FIG. 4 is realized by the adder (504), subtracter (505), Hilbert transformer (506), multiplier (507), and adder (508) in FIG. 7. Therefore, the image enlargement / clearing unit (111) in FIG. 4 and the image enlargement / clearing unit (501) in FIG. 7 can be regarded as equivalent.
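  • The opposite-phase (180-degree) relationship of the aliasing components exploited above can be checked numerically; the following demonstration illustrates the general sampling-phase property, not the patent's specific processing:

```python
import numpy as np

N = 64
n = np.arange(N)
# A component above the Nyquist frequency of the half-rate (decimated) signal:
x = np.cos(2 * np.pi * 0.35 * n)

even = np.where(n % 2 == 0, x, 0.0)   # samples taken at one sampling phase (zero-inserted)
odd  = np.where(n % 2 == 1, x, 0.0)   # samples taken at the other sampling phase (zero-inserted)

E, O, X = np.fft.fft(even), np.fft.fft(odd), np.fft.fft(x)
alias_even = E - X / 2                # aliasing folded into the even-phase series
alias_odd  = O - X / 2                # aliasing folded into the odd-phase series

print(np.allclose(alias_even, -alias_odd))   # True: the aliasing terms are in opposite phase
print(np.allclose(even + odd, x))            # True: combining the two series cancels the aliasing
```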
  • The configuration of the image enlargement / clearing unit (501) shown in FIG. 7 illustrates the case where one-dimensional image enlargement processing and aliasing distortion reduction processing are performed; the case where two-dimensional image enlargement processing and aliasing distortion reduction processing are performed is described below with reference to FIG. 8.
  • FIG. 8 is a diagram schematically showing another example of a configuration for realizing the function of the image enlargement / clearing process in the image enlargement / clearing unit shown in FIG.
  • In FIG. 8, the processing blocks (501-HA), (501-HB), and (501-V) have the same configuration as the image enlargement / clearing unit (501) shown in FIG. 7, and the outputs of (501-HA) and (501-HB) are connected in series as the inputs of the processing block (501-V).
  • The horizontal processing unit (601), composed of the processing blocks (501-HA) and (501-HB), performs horizontal image enlargement processing and aliasing distortion reduction processing, and the vertical processing unit (602), composed of the processing block (501-V), performs vertical image enlargement processing and aliasing distortion reduction processing.
  • The relationship between the position shifts shown in FIGS. 2 and 3, the position shift amounts (φ0, φ1) shown in FIG. 7, and the horizontal position shift amounts (φ0, φ1, φ2, φ3) and vertical position shift amounts (θ0, θ1) shown in FIG. 8 will now be described.
  • Assume that an image having the first number of pixels (for example, horizontal 1920 pixels x vertical 1080 pixels) shown in FIG. 1 is position-shifted about the center position of the rotational movement of the "4-frame type position shift" shown in FIG. 2 (the position of the image frame (200)), reduced to the second number of pixels (for example, horizontal 704 pixels x vertical 480 pixels), and transmitted via the communication network (107).
  • In this case, by setting the position shift amounts (φ0, φ1, φ2, φ3) and (θ0, θ1) in accordance with the position shift performed on the transmission side, the aliasing distortion contained in the image can be reduced and the image can be sharpened without performing iteration.
  • FIGS. 9 and 10 are diagrams schematically showing other examples of configurations for realizing the image enlargement / clearing function of the image enlargement / clearing unit shown in FIG. 4.
  • In these configurations, each input signal passes through a "one-dimensional asymmetric filter" (with the polarity (positive/negative) corresponding to the subtracter (505) inverted for one of the inputs), and the filtered signals are added together at the end; the same output as in the configuration of FIG. 8 is obtained.
  • In FIG. 9, the order of the "one-dimensional asymmetric filters" in the horizontal processing unit (601) and the "one-dimensional enlargement units and one-dimensional interpolation filters" in the vertical processing unit (602) of FIG. 8 is exchanged, the horizontal and vertical one-dimensional enlargement units are combined into a two-dimensional enlargement unit (702), the horizontal and vertical one-dimensional interpolation filters are combined into a two-dimensional interpolation filter (703), and the horizontal and vertical one-dimensional asymmetric filters are combined into a two-dimensional asymmetric filter (704), forming a two-dimensional processing unit (701).
  • In each two-dimensional processing unit (701), the coefficients of the two-dimensional interpolation filter (703) and the two-dimensional asymmetric filter (704) are set in accordance with the horizontal position shift amounts (φ0, φ1, φ2, φ3) and vertical position shift amounts (θ0, θ1) corresponding to the inputs #0 to #3, forming the two-dimensional processing units (701-#0 to #3).
  • The signal passing through each two-dimensional processing unit (701-#0 to #3) is passed through the corresponding frame memory (705-#0 to #3), and all signals are then added by the adder (706); this yields an output equivalent to the configuration of FIG. 8.
  • Since a two-dimensional processing unit is not needed for image frames that are not being input, only one two-dimensional processing unit (701) may be installed, as shown in FIG. 10; its internal coefficients are changed according to the frame number in accordance with the horizontal position shift amounts (φ0, φ1, φ2, φ3) and vertical position shift amounts (θ0, θ1), and its output is written to the frame memories (705-#0 to #3) while being selected for each frame by the switch (707).
  • the configuration of the image enlargement / clearing unit (111) in the case of “2-frame position shift” can be equivalently converted from the configuration of FIG. 7 to the configuration of FIG. That is, the one-dimensional enlargement units (502- # 0) and (502- # 1) in FIG. 7 are combined into the one-dimensional enlargement unit (801) in FIG. 11, and the one-dimensional interpolation filters (503- # 0) in FIG. (503- # 1) are combined into a one-dimensional interpolation filter (802) in FIG. 11, and “adder (504), subtracter (505), Hilbert transformer (506), multiplier (507)” in FIG. The adder (508) "is integrated into the one-dimensional asymmetric filter (803) in FIG.
  • As a result, the amount of calculation can be reduced by the amount of one one-dimensional interpolation filter compared with the configuration of FIG. 7.
  • FIGS. 12 to 14 are block diagrams and an operation explanatory diagram showing examples of the configuration of the asymmetric filters included in the image enlargement / clearing units of FIGS. 9 and 11.
  • FIG. 12 is a block diagram showing an example of the configuration of the one-dimensional asymmetric filter included in the image enlargement / clearing unit of FIG. 11.
  • As shown in FIG. 7, the input #0 passes through the one-dimensional enlargement unit (502-#0) and the one-dimensional interpolation filter (503-#0); the signal that has passed through the adder (504) and the signal that has passed through the subtracter (505), the Hilbert transformer (506), and the multiplier (507) are then added by the adder (508) to form the output.
  • That is, the filter coefficients of the one-dimensional asymmetric filter (704) for the input #0 are such that the signal obtained by passing the input through the Hilbert transformer (506) and multiplying it by the coefficient K is added to the input signal itself by the adder (508), and the sum is output.
  • The filter coefficients of the one-dimensional asymmetric filter (704) for the input #1 are obtained by inverting the polarity (positive/negative) of the coefficient K relative to the filter coefficients for the input #0.
  • The Hilbert transformer (506) is an odd-symmetric filter whose coefficient is 0 when t = 2m (where m is an integer); the filter coefficients (C(t)) shown in FIG. 13 are one example.
  • The coefficients C(t) (where t ≠ 0) may be obtained by multiplying the Hilbert-transform coefficients by a general window function (such as a Hanning window), which reduces the influence of the filter ends.
  • FIG. 14 is a block diagram showing an example of the configuration of the two-dimensional asymmetric filter included in the image enlargement / clearing unit of FIG. 9.
  • The two-dimensional asymmetric filter (704) is obtained by connecting the one-dimensional asymmetric filters (803) described above in series: the horizontal processing unit (803-H) converts the number of pixels in the horizontal direction and the vertical processing unit (803-V) converts the number of pixels in the vertical direction, so that two-dimensional pixel number conversion is realized.
  • the above-described configuration of the image enlargement / clearing unit (111) reduces the amount of calculation by eliminating the motion search and iteration for each pixel and realizes image sharpening capable of high-speed processing.
  • In the above description, the position shift amounts (φn, θn) used in the configurations of FIGS. 7 to 11 are fixed values for each frame. This poses no problem if the subject in the image frames does not move, but if the subject moves, double images are produced by the computation across frames. A configuration example for solving this problem is therefore described below.
  • FIG. 15 is a block diagram showing another example of the configuration of the image enlargement / clearing unit in the image receiving apparatus of FIG.
  • In FIG. 15, the processing unit (1001), composed of the image enlargement unit (301), the image position shift unit (302), and the aliasing distortion reduction unit (303), has the same configuration as that shown in FIG. 4.
  • The signal in which image blur between frames has been suppressed by the image position shift unit (302) is also passed through a low-pass filter (1002) that suppresses unnecessary aliasing distortion. A motion detection unit (1003), described later, generates a control signal m (where 0 (no motion) ≤ m ≤ 1 (motion)) corresponding to the magnitude of the motion of each pixel of the image, and a weighted mixing unit (1004) mixes the signal (p1) that has passed through the low-pass filter (1002) and the signal (p0) that has passed through the aliasing distortion reduction unit (303) with weights according to the control signal m to obtain the output signal.
  • In regions where the value of the control signal m is close to 0 (regions with little motion), control is performed so that the signal (p0) that has passed through the aliasing distortion reduction unit (303) is output, and in regions where the value of the control signal m is close to 1 (regions with large motion), control is performed so that the signal (p1) that has passed through the low-pass filter (1002) is output.
  • FIG. 16 is a block diagram illustrating an example of the configuration of the motion detection unit included in the image enlargement / clearing unit of FIG.
  • By normalizing the output of the maximum value unit (1106), the control signal m is obtained. This normalization can be realized by a general technique in which a predetermined fixed value is subtracted from, or multiplied by, the output signal of the maximum value unit (1106); detailed illustration and description are therefore omitted.
  • With the motion detection unit (1003) configured in this way, in regions where the values of the four input frame images (#0, #1, #2, #3) all match (that is, regions where the image is still), the value of the control signal m is 0, and in regions where the values of the four frame images (#0, #1, #2, #3) do not match (that is, where at least one of the four frames differs), the control signal m takes a value 0 < m ≤ 1 according to the magnitude of the absolute value of the difference signal described above.
  • In the case of the "2-frame type position shift", the switcher (1101), frame memory (1102), averaging unit (1103), subtracter (1104), and absolute value unit (1105) can easily be adapted simply by changing from 4 frames to 2 frames.
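  • A hedged sketch of the motion-adaptive blending described above (FIGS. 15 and 16) follows; the normalization constant `norm` is an assumed placeholder, since the text leaves the normalization to a general technique:

```python
import numpy as np

def motion_control_signal(frames, norm=32.0):
    """Per-pixel m in [0, 1]: 0 where the 4 input frames agree (still), up to 1 where they differ (motion).

    frames: four aligned frames (#0..#3). The averaging / absolute-difference / maximum steps follow
    FIG. 16; dividing by `norm` is an assumed normalization, not a value given in the text.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    mean = stack.mean(axis=0)            # average unit (1103)
    diff = np.abs(stack - mean)          # subtracter (1104) + absolute value unit (1105)
    m = diff.max(axis=0) / norm          # maximum value unit (1106) + normalization
    return np.clip(m, 0.0, 1.0)

def weighted_mix(p0, p1, m):
    """Weighted mixing unit (1004): output p0 where m ~ 0 (still), p1 where m ~ 1 (motion)."""
    return (1.0 - m) * p0 + m * p1
```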
  • FIGS. 17 to 19 are block diagrams each showing an example of the configuration of the control units included in the image transmission device and the image reception device of FIG. 1.
  • The image position shift unit (103) of the image transmission device (101) and the image position shift unit (302) of the image reception device (109) must be controlled so that they operate in accurate frame-by-frame synchronization with each other and shift the position in opposite directions.
  • The frame phase information is, for example, one of the frame numbers "#0, #1, #2, #3" in the case of the "4-frame type position shift", and one of the frame numbers "#0, #1" in the case of the "2-frame type position shift".
  • FIG. 17 is a diagram illustrating a configuration example when synchronization is performed using the frame phase information generation unit of the image transmission device and the frame phase information acquisition unit of the image reception unit.
  • In FIG. 17, the frame phase information generation unit (1201) included in the control unit (106) of the image transmission device (101) generates frame phase information, which is multiplexed with the image data output from the encoding unit (105) by the multiplexing unit (1202).
  • The multiplexed data is transmitted to the image receiving device (109) via the communication network (107) and separated by the separation unit (1203); the image data is then sent to the decoding unit (110), and the frame phase information is sent to the frame phase information acquisition unit (1204) of the control unit (112), which thus obtains the same information as the frame phase information generated by the frame phase information generation unit (1201) of the image transmission device (101).
  • the frame phase information generation unit (1201), the multiplexing unit (1202), the separation unit (1203), and the frame phase information acquisition unit (1204) can be realized by a general technique, and thus detailed illustration is omitted.
  • FIG. 18 is a diagram illustrating a configuration example when synchronization is performed using the frame phase information generation unit of the image reception unit and the frame phase information acquisition unit of the image transmission apparatus.
  • the frame phase information generating unit (1206) included in the control unit (112) of the image receiving apparatus (109) generates frame phase information, which is transmitted to the image transmitting apparatus (101) via the communication network (107).
  • the frame phase information acquisition unit (1205) included in the control unit (106) of the image transmission device (101) is the same as the frame phase information generated by the frame phase information generation unit (1206) of the image reception device (109). Information can be obtained.
  • the frame phase information generation unit (1206) and the frame phase information acquisition unit (1205) can be realized by a general technique, and thus detailed illustration is omitted.
  • FIG. 19 is a diagram illustrating a configuration example when synchronization is performed using the frame phase information estimation unit of the image reception unit.
  • the frame phase information generation unit (1201) included in the control unit (106) of the image transmission apparatus (101) generates frame phase information, but this information is not transmitted to the image reception apparatus (109).
  • Instead, the encoded image data transmitted from the image transmission device (101) is decoded by the decoding unit (110), and frame phase information is obtained by performing frame phase information estimation processing in the frame phase information estimation unit (1207) included in the control unit (112), based on the decoded image.
  • FIG. 20 is an explanatory diagram showing an example of the operation of the frame phase information estimation process of the frame phase information estimation unit shown in FIG.
  • In FIG. 20, the image (1301) is a D1-size image (horizontal 704 pixels x vertical 480 pixels) obtained by reducing, with the image reduction unit (104) of the image transmission device (101) shown in FIG. 1, the image frame (201) that was shifted in the upper-left direction in FIG. 2; it is used as the example for this explanation.
  • FIG. 21 is a flowchart showing frame phase estimation processing in the frame phase information estimation unit.
  • In step S1404, the line number (y) is incremented by one (y ← y + 1), and the processing of steps S1402 to S1404 is repeated until the determination result of step S1403 becomes YES. When the determination result of step S1403 becomes YES, the image shift direction is estimated according to the values of the counters (L, R) (step S1405): if the value of the counter for the left end (L) is less than or equal to the value of the counter for the right end (R), it is estimated that the image has been shifted leftward; otherwise, it is estimated that it has been shifted rightward. When step S1405 ends, the process ends.
  • Next, the image position shift processing in the image position shift unit (103) is considered further. Depending on the type of image captured by the camera (102) of the image transmission device (101), the contents of the image position shift processing need to be taken into account. For example, when the camera (102) outputs image data from a single-plate image sensor on which a Bayer array color filter is arranged, it is necessary to restrict the operation of the image position shift processing of the image position shift unit (103).
  • FIG. 22 is a diagram showing a state of pixel arrangement in the Bayer array color filter.
  • In the Bayer array color filter (1501), R (red), G (green), and B (blue) color filters are regularly arranged with a period of two pixels in both the horizontal and vertical directions. Therefore, when an image signal obtained by photoelectrically converting the light transmitted through the Bayer array color filter (1501) with a single-plate image sensor (hereinafter referred to as a Bayer image signal) is input to the image position shift unit (103), shifting the position in the horizontal direction, the vertical direction, or both by an odd number of pixels changes the positional relationship of R (red), G (green), and B (blue), so that the colors of the image after the position shift change.
  • Therefore, when a Bayer image signal is input to the image position shift unit (103), the image position shift processing shifts the position in units of two pixels (even numbers of pixels) in both the horizontal and vertical directions, so that the positional relationship of R (red), G (green), and B (blue) does not differ from that before the position shift.
  • Note that this restriction to position shifts in units of 2 pixels (even numbers of pixels) applies when the Bayer image signal itself is shifted; it is unnecessary when only color signals (RGB, YUV, etc.) converted from the Bayer image signal, or only the luminance signal Y converted from the Bayer image signal, are processed.
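  • A minimal sketch of the restriction described above follows; the function below merely enforces even (2-pixel-unit) shifts for raw Bayer input and is not the patent's implementation:

```python
import numpy as np

def bayer_safe_shift(bayer_frame, dy, dx):
    """Shift a raw Bayer-pattern frame only by even pixel counts (2-pixel units) in each direction,
    so that the R/G/B site arrangement is preserved."""
    if dy % 2 != 0 or dx % 2 != 0:
        raise ValueError("Bayer input: shifts must be in 2-pixel (even) units to keep the R/G/B phase")
    return np.roll(bayer_frame, (dy, dx), axis=(0, 1))
```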
  • FIG. 23 is a functional block diagram schematically showing a configuration example of the image storage device.
  • The image storage device (108) is a device for storing (recording) and reproducing encoded image data, and includes processing blocks such as a communication interface (1601), a control unit (1602), a memory (1603), a storage (1604), an output interface (1605), and an input interface (1606), together with a bus (1607) connecting the processing blocks.
  • The image storage device (108) stores application programs in the storage (1604); the control unit (1602) expands a program from the storage (1604) into the memory (1603) and executes it, whereby various functions such as recording, reproduction, and search are realized.
  • various functions realized by the execution of each application program by the control unit (1602) are mainly realized by "various program function units" expanded in the memory (1603).
  • The application programs may be stored in the storage (1604) in advance when the image storage device (108) is shipped, may be installed in the image storage device (108) from a medium such as a CD (Compact Disc), DVD (Digital Versatile Disc), or semiconductor memory via a medium connection unit (not shown), or may be downloaded and installed from the communication network (107) via the communication interface (1601).
  • The communication interface (1601) is connected to the communication network (107), receives image data from the image transmission device (101) shown in FIG. 1, and also has a function of transmitting image data to the image receiving device (109).
  • the control unit (1602) controls the communication interface (1601), the memory (1603) (various program function units), the storage (1604), and the input / output interfaces (1605, 1606).
  • the control unit (1602) also has a function of executing various signal processing according to a processing procedure described later.
  • The function units of the application programs stored in the storage (1604) are expanded into the memory (1603) under the control of the control unit (1602).
  • the storage (1604) accumulates image data from the image transmission apparatus (101), and stores application programs and various types of information created by the application programs.
  • the output interface (1605) has a function of outputting an image obtained as a result of signal processing by the control unit to an external device via the bus (1607).
  • the output image is displayed on an external display unit (1608).
  • the input interface (1606) has a function of receiving a signal from the operation unit (1609) and transmitting the signal to the control unit (1602) via the bus (1607).
  • The image storage device (108) configured in this way follows the following operation sequence during recording. At the time of recording, the image storage device (108) reads a recording application program (not shown) stored in the storage (1604) into the memory (1603), and the control unit (1602) controls each part according to the procedure described in the recording application program. First, a connection is established with the image transmission device (101) shown in FIG. 1 via the communication interface (1601) and the communication network (107). Thereafter, the encoded image data transmitted from the image transmission device (101) is received via the communication interface (1601) and the communication network (107), and is stored in the storage (1604) via the bus (1607). At this time, the received encoded image data may be decoded by the control unit (1602), and the image may be output via the output interface (1605) and displayed on the display unit (1608). Next, the operation sequence at the time of reproduction in the image storage device (108) will be described.
  • At the time of reproduction, the image storage device (108) reads a reproduction application program (not shown) stored in the storage (1604) into the memory (1603), and the control unit (1602) controls each part according to the procedure described in the reproduction application program. Thereafter, the encoded image data stored in the storage (1604) is read out via the bus (1607) and transmitted to the image receiving device (109) via the communication interface (1601) and the communication network (107).
  • the encoded image data to be transmitted may be decoded by the control unit (1602), and the image may be output and displayed on the display unit (1608) via the output interface (1605).
  • Alternatively, the encoded image data stored in the storage (1604) may be read out by the image storage device (108) alone, decoded by the control unit (1602), and output and displayed on the display unit (1608) via the output interface (1605).
  • However, if the image that has been position-shifted by the image position shift unit (103) of the image transmission device (101) is displayed on the display unit (1608) as it is, the image blurs periodically.
  • Therefore, the various program function units expanded in the memory (1603) include a frame phase information estimation function (1610) having the same function as the frame phase information estimation unit (1207) and an image position shift function (1611) having the same function as the image position shift unit (302), and the periodic blurring is eliminated by performing processing that cancels out the effect of the image position shift processing.
  • That is, the image position shift function (1611) corresponds to the operation of the image position shift unit (302) in FIG. 4 with the third number of pixels replaced by the number of pixels of the encoded image data stored in the storage (1604) (the second number of pixels); in this case, the operation of the image position shift unit (302) can be used as it is.
  • the image position shift function (1611) can be realized by using a convolution process with a linear filter having asymmetric coefficients.
  • FIG. 24 is a diagram for explaining an example of filter coefficients used in the image position shift function of FIG.
  • In FIG. 24, the coefficients (a) to (d) indicate filter coefficients that are convolved with the decoded image. The symbols and numerical values in parentheses (that is, 0, α, 1-α, β, 1-β, etc.) represent the values of the filter coefficients, the symbol "T" represents transposition, and the symbol "*" represents a convolution operation. That is, each of the coefficients (a) to (d) in FIG. 24 represents the coefficients of a two-dimensional filter of horizontal 3 taps x vertical 3 taps.
  • For example, the center of gravity of the two-dimensional filter represented by the coefficient (a) is offset in the lower-right direction, and an image convolved with this filter coefficient is position-shifted in the lower-right direction. Similarly, an image convolved with the coefficient (b) is position-shifted in the lower-left direction, an image convolved with the coefficient (c) is position-shifted in the upper-right direction, and an image convolved with the coefficient (d) is position-shifted in the upper-left direction.
  • Therefore, when an image having the first number of pixels in FIG. 1 is position-shifted so that, with the two-dimensional coordinates of the center position of the rotational movement of the "4-frame type position shift" in FIG. 2 taken as (0, 0), its position follows the two-dimensional coordinates (-h, -v), (h, -v), (h, v), (-h, v), (-h, -v), and so on, blurring of the image can be suppressed by selecting, for each frame, the filter coefficient that cancels the corresponding shift and convolving it with the stored image. With this configuration, image blurring can be suppressed when the image stored in the image storage device (108) is reproduced on the display unit (1608).
  • Note that the "two-dimensional filter of horizontal 3 taps x vertical 3 taps" described above is merely an example for explaining the operation; the present invention is not limited to this, and a two-dimensional filter having a different number of taps may obviously be used. It is also obvious that the image in which blurring has been suppressed in this way may be transmitted to the outside via the communication interface (1601) and the communication network (107).
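  • A hedged sketch of building and applying such 3-tap-by-3-tap cancellation filters follows; the placement of α and β within the kernels (and hence the sign convention of the shift direction) is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import convolve

def shift_kernel_3x3(alpha, beta, horiz="right", vert="down"):
    """Separable horizontal-3 x vertical-3 kernel whose center of gravity is offset by (alpha, beta).

    Choosing right/down, left/down, right/up, left/up gives four kernels in the spirit of
    coefficients (a)-(d) in FIG. 24; the index convention below is illustrative, not the patent's.
    """
    h = np.array([0.0, 1.0 - alpha, alpha]) if horiz == "right" else np.array([alpha, 1.0 - alpha, 0.0])
    v = np.array([0.0, 1.0 - beta, beta]) if vert == "down" else np.array([beta, 1.0 - beta, 0.0])
    return np.outer(v, h)   # 3x3 two-dimensional filter (vertical vector "T" convolved with horizontal vector)

def cancel_playback_shift(decoded_frame, alpha, beta, horiz, vert):
    """Convolve the stored/decoded image with the kernel so the periodic position shift is cancelled."""
    kernel = shift_kernel_3x3(alpha, beta, horiz, vert)
    return convolve(decoded_frame.astype(np.float64), kernel, mode="nearest")
```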
  • As described above, in the present embodiment, the image transmission device (101) includes the image position shift unit (103) (first image position shift unit) that shifts the position of each of a plurality of temporally continuous images to any one of a plurality of predetermined shift positions, the image reduction unit (104) that reduces the number of pixels of the image position-shifted by the image position shift unit (103), and the encoding unit (105) that encodes the image reduced by the image reduction unit (104) to generate an encoded image; and the image receiving device (109) includes the decoding unit (110) that decodes the encoded image sent via the communication network (107) to generate a decoded image, the image enlargement unit (301) that enlarges the decoded image by increasing its number of pixels, the image position shift unit (302) (second image position shift unit) that shifts the position of the enlarged image so as to cancel the position shift performed by the image position shift unit (103), and the aliasing distortion reduction unit (303) that performs aliasing distortion reduction processing to reduce the aliasing distortion of the image position-shifted by the image position shift unit (302).
  • With this configuration, the encoded data size can be reduced while suppressing an increase in the amount of calculation related to image processing and a decrease in image quality, and high-speed image transmission and image sharpening processing can be performed even with relatively limited calculation resources.
  • A second embodiment of the present invention will be described with reference to FIGS. 25 and 26.
  • In the present embodiment, the image transmission device is configured to have a function of switching whether the image position shift processing is performed. As a result, even when the image is displayed via an image receiving device or image storage device that does not have means for performing the image position shift processing that cancels out the transmitter-side shift (that is, means for shifting the position in the reverse direction), an image without periodic blurring can be provided.
  • FIG. 25 is a functional block diagram schematically showing a configuration example of the image transmission apparatus according to the present embodiment.
  • the same members as those in the first embodiment are denoted by the same reference numerals, and description thereof is omitted.
  • The image transmission device (1802) includes: a camera (102) that captures images (still images, moving images); an image position shift unit (103) that performs image position shift processing on the image captured by the camera (102); a switch (1803) that selects whether the image signal from the camera (102) is input directly to the image reduction unit (104) or is first passed through the image position shift unit (103) and then input to the image reduction unit (104); an image reduction unit (104) that performs image reduction processing on the image input via the switch (1803); an encoding unit (105) that performs encoding processing on the image subjected to the image reduction processing; and a control unit (1804) that controls the overall operation of the image transmission device (1802), including the camera (102), the image position shift unit (103), the switch (1803), the image reduction unit (104), and the encoding unit (105).
  • FIG. 26 is a diagram illustrating an example of a menu display screen generated by the control unit included in the image receiving device or the image storage device and displayed on the display unit.
  • The menu display screen (1901), generated by the control unit (112, 1602) of the image receiving device (109) or the image storage device (108) and displayed on the display unit (113, 1608), has: a message portion (1902) indicating that the display is a menu for selecting an operation related to the image position shift processing of the image transmission device (1802); a message portion (1903) indicating that image position shift processing is not to be performed; a selection portion (1905) for selecting that image position shift processing not be performed; a message portion (1904) indicating that image position shift processing is to be performed; and a selection portion (1906) for selecting that image position shift processing be performed.
  • FIG. 26 illustrates the case where the user of the image receiving device (109) or the image storage device (108) has selected to perform image position shifting.
  • the message portion (1903) indicating that the image position shift process is not performed may include a message indicating that resolution improvement cannot be expected.
  • the message portion (1904) indicating that the image position shift process is to be performed may include a message indicating that resolution improvement can be expected.
  • When the image transmission device (1802) is connected via the communication network (107) to the image receiving device (109) or the image storage device (108) having the image position shift unit (302), the user selects the selection portion (1906) of the menu display screen (1901); the switch (1803) is thereby switched to the lower side in the figure, and the image that has passed through the image position shift unit (103) is reduced by the image reduction unit (104), encoded by the encoding unit (105), and transmitted to the image receiving device (109) and the image storage device (108) via the communication network (107). As a result, the image receiving device (109) can output a high-quality image with reduced aliasing distortion.
  • On the other hand, when the image transmission device (1802) is connected via the communication network (107) to an image receiving device or an image storage device that does not have the image position shift unit (302), the user selects the selection portion (1905) of the menu display screen (1901); the switch (1803) is thereby switched to the upper side in the figure, and the image is reduced by the image reduction unit (104) without passing through the image position shift unit (103), encoded by the encoding unit (105), and transmitted to the image receiving device (109) and the image storage device (108) via the communication network (107).
  • the image receiving apparatus can output an image without blurring.
  • The switch (1803) is controlled in accordance with a control signal sent from the control unit (1804) according to the setting of the menu display screen (1901). Alternatively, the switch may be controlled according to a command automatically transmitted from the image receiving device (109) or the image storage device (108) via the communication network (107). That is, an image receiving device (109) or image storage device (108) having the image position shift unit (302) may be set in advance to send a command indicating that image position shift processing is to be performed; when this command is not received by the control unit (1804), it can be determined that an image receiving device without the image position shift unit (302) is connected, and the switch (1803) can be switched to the upper side in the figure.
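As a rough illustration of this switching logic, the following Python sketch models the decision made by the control unit (1804); the flag names and the idea of a single boolean command are assumptions for the example, not definitions from the patent.

```python
def select_switch_position(menu_enables_shift, shift_capable_command_received):
    """Return which input of the switch (1803) to use.

    menu_enables_shift: the user's choice on the menu display screen (1901).
    shift_capable_command_received: True if a command announcing that the connected
    receiver has an image position shift unit (302) was received; this flag and the
    notion of such a command are illustrative assumptions.
    """
    if not shift_capable_command_received:
        # No announcement received: assume the receiver cannot cancel the shift,
        # so bypass the image position shift unit (103) (upper side in FIG. 25).
        return "bypass_shift_unit"
    # The receiver can cancel the shift: follow the menu setting.
    return "through_shift_unit" if menu_enables_shift else "bypass_shift_unit"


print(select_switch_position(True, True))    # through_shift_unit (lower side in FIG. 25)
print(select_switch_position(True, False))   # bypass_shift_unit  (upper side in FIG. 25)
```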
  • In this way, both an image receiving device or image storage device that has means for performing image position shift processing that cancels out the transmit-side position shift (that is, means for shifting the position in the reverse direction) and any other image receiving device that does not include such an image position shift unit, on which the displayed image would otherwise be periodically blurred, can each be provided with a suitable image.
  • A third embodiment of the present invention will be described with reference to FIGS. 27 to 29. FIG. 27 is a functional block diagram schematically showing a configuration example of the image transmission apparatus according to the present embodiment.
  • FIG. 28 is a functional block diagram schematically showing a configuration example of the image receiving apparatus according to the present embodiment.
  • the same members as those in the first embodiment are denoted by the same reference numerals, and description thereof is omitted.
  • The image transmission device (2001) includes: a camera (102) that captures images (still images, moving images); an image position shift unit (103) that performs image position shift processing on the image captured by the camera (102); an image reduction unit (104) that performs image reduction processing on the image subjected to the image position shift processing; an image position shift unit (2002) that shifts the position of the image subjected to the image reduction processing so as to cancel out the position shift performed by the image position shift unit (103); an encoding unit (105) that performs encoding processing on the image subjected to the image position shift processing by the image position shift unit (2002); and a control unit (2003) that, based on commands and data received via the communication network (107), controls the overall operation of the image transmission device (2001), including the camera (102), the image position shift unit (103), the image reduction unit (104), the image position shift unit (2002), and the encoding unit (105).
  • the image position shift unit (2002) is for shifting the image in the direction opposite to the image blur caused by the image position shift unit (103) under the control of the control unit (2003).
  • the asymmetric filter described above can also be used as the image position shift unit (2002).
  • The image receiving device (2004) includes: a decoding unit that performs decoding processing on the image sent from the image transmission device (2001) via the communication network (107); an image position shift unit (2005) that performs, on the decoded image, the same image position shift processing as the image position shift unit (103); an image enlargement/sharpening unit that performs image enlargement/sharpening processing on the image subjected to the image position shift processing by the image position shift unit (2005); and a control unit (2006).
  • The image position shift unit (2005) operates under the control of the control unit (2006), and shifts the position so as to cancel out the position shift performed by the image position shift unit (2002) of the image transmission device (2001).
  • FIG. 29 is a diagram illustrating an example of filter coefficients used in the image position shift unit in FIG. 28.
  • In FIG. 29, coefficients (a) to (d) indicate filter coefficients that are convolved with the decoded image. A symbol or numerical value in parentheses (that is, 0, α, 1-α, β, 1-β, and the like) represents a filter coefficient, the superscript "-1" attached to a parenthesis represents an inverse characteristic, the superscript "T" attached to a parenthesis represents transposition, and the symbol "*" represents a convolution operation. That is, the coefficients (a) to (d) in FIG. 29 represent the coefficients of the inverse filters of the two-dimensional filters shown in FIG. 24.
  • The coefficients of each of these inverse filters can be calculated from the corresponding two-dimensional filter.
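As one illustration of how such inverse coefficients could be obtained (the patent does not prescribe a particular method), the sketch below inverts a hypothetical 2x2 bilinear shift kernel by regularized spectral division using NumPy.

```python
import numpy as np

def bilinear_shift_kernel(alpha, beta):
    """Hypothetical 2x2 forward kernel shifting an image by (alpha, beta) sub-pixels."""
    return np.array([[(1 - alpha) * (1 - beta), alpha * (1 - beta)],
                     [(1 - alpha) * beta,        alpha * beta]])

def approximate_inverse(h, size=9, eps=1e-3):
    """Approximate inverse FIR filter of `h` via regularized spectral division."""
    H = np.fft.fft2(h, s=(size, size))
    G = np.conj(H) / (np.abs(H) ** 2 + eps)   # Wiener-style regularized inversion
    g = np.real(np.fft.ifft2(G))
    return np.fft.fftshift(g)                 # centre the impulse response for display

h = bilinear_shift_kernel(0.37, 0.89)         # example fractional shift
g = approximate_inverse(h)
# Convolving h with g gives approximately a unit impulse, i.e. the shift is undone.
print(np.round(g, 3))
```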
  • the image reception device (2004) can output a clear image by the same operation as the first embodiment.
  • Even when the image transmission device (2001) is connected to an image receiving device or an image storage device that does not include an image position shift unit, it is possible to suppress the image blurring that would otherwise occur during display.
  • A fourth embodiment of the present invention will be described with reference to FIG. 30. In the present embodiment, the image transmission device (101) of the first embodiment is used as the image transmission device, and the image storage device (108) of the first embodiment is also used as an image receiving device (1612): the control unit (1602) controls each unit in accordance with the processing described in an image reception application program (not shown) stored in the storage (1604) of the image storage device (108) (this processing is described later in detail with reference to FIG. 30), whereby the image storage device (108) also serves as the image receiving device (1612) (see FIG. 23).
  • FIG. 30 is a flowchart showing an example of processing in the image receiving apparatus according to the present embodiment.
  • In the image receiving device (1612), the encoded image data is first acquired via the communication network (107) (step S2201), and the control unit (1602) decodes the image data and stores it in the memory (1603) (step S2202). Subsequently, frame phase information is acquired (step S2205). In parallel with step S2205, the image is enlarged (step S2203) and the position of the image is shifted (step S2204). These processes correspond to the processing of the image enlargement unit (301) and the image position shift unit (302) shown in FIG. 4.
  • Next, aliasing distortion reduction processing is performed (step S2206).
  • Two-dimensional processing is performed (step S2207), the image is stored in a predetermined frame memory (the memory (1603) in FIG. 23) (step S2208), and the values of all the frame memories are added together for each identical pixel position (step S2209). These processes correspond to the processing in the configurations shown in FIGS. 9 and 10.
  • Next, low-pass filter processing for suppressing unnecessary aliasing distortion is performed (step S2210).
  • A motion detection process is then performed to obtain a control signal m (step S2211). This processing corresponds to the processing in the configuration shown in FIG. 16.
  • Weighted mixing is then performed using the image after aliasing reduction obtained in step S2206, the image after low-pass filtering obtained in step S2210, and the control signal m obtained in step S2211, thereby obtaining an output image (step S2212).
  • This processing corresponds to the operation of the weighted mixing unit (1004) described earlier.
  • The output image signal obtained in step S2212 is output via the output interface (1605) (step S2213), and the process ends.
  • The series of steps S2201 to S2213 may be controlled by the control unit (1602) so that it is executed each time image data is transmitted from the image transmission device (101) to the image receiving device (1612).
  • a command instructing transmission of image data may be transmitted from the image receiving apparatus (109) to the image transmitting apparatus (101).
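For orientation, steps S2201 to S2213 described above can be summarized as a per-frame loop. The Python skeleton below mirrors only the order of the flowchart; every function it calls is a placeholder, and the four-frame averaging and the mixing convention are assumptions made for the example.

```python
import numpy as np

def process_received_frame(payload, frame_memories, ops):
    """Skeleton of steps S2201-S2213; every entry in `ops` is a placeholder callable."""
    coded = payload                                    # S2201: acquire encoded image data
    decoded = ops["decode"](coded)                     # S2202: decode and store in memory
    phase = ops["frame_phase"](coded)                  # S2205: acquire frame phase information
    enlarged = ops["enlarge"](decoded)                 # S2203: enlarge the image
    shifted = ops["shift_back"](enlarged, phase)       # S2204: shift the image position
    frame_memories.append(ops["two_dimensional"](shifted))                 # S2207, S2208
    alias_reduced = sum(frame_memories[-4:]) / min(len(frame_memories), 4)  # S2206/S2209 (4 memories assumed)
    lowpassed = ops["lowpass"](alias_reduced)          # S2210: suppress unnecessary aliasing
    m = ops["detect_motion"](shifted)                  # S2211: motion detection -> control signal m
    # S2212: weighted mixing; weighting the low-pass image by m is one plausible
    # convention, not something the flowchart itself specifies.
    output = m * lowpassed + (1 - m) * alias_reduced
    return ops["output"](output)                       # S2213: output via the interface (1605)

identity = lambda x, *rest: x
ops = {name: identity for name in
       ["decode", "frame_phase", "enlarge", "shift_back", "two_dimensional", "lowpass", "output"]}
ops["detect_motion"] = lambda x: 0.0
print(process_received_frame(np.zeros((4, 4)), [], ops).shape)   # (4, 4)
```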
  • It is obvious that, by adding another image position shift function (not shown) to the various program function units loaded into the memory (1603) of FIG. 23, the operation of the image position shift unit (2005) described with reference to FIG. 28 (that is, the operation using the filter coefficients shown in FIG. 29) can be realized, and it is likewise obvious that the same operation as the configuration shown in FIG. 28 can be realized using the configuration of the image receiving device (1612).
  • The present invention is not limited to the above-described embodiments, and includes various modifications.
  • The above embodiments have been described in detail for ease of understanding of the present invention, and the invention is not necessarily limited to configurations including all of the described elements.
  • Each of the above-described configurations, functions, and the like may be realized by software, with a processor interpreting and executing a program that realizes each function.
  • the functions of the embodiments of the present invention can be realized by software program codes.
  • a storage medium in which the program code is recorded is provided to the system or apparatus, and the computer (or CPU or MPU) of the system or apparatus reads the program code stored in the storage medium.
  • the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing the program code constitute the present invention.
  • As the storage medium for supplying such program code, for example, a flexible disk, CD-ROM, DVD-ROM, hard disk, optical disk, magneto-optical disk, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like is used.
  • Further, an OS (operating system) running on the computer, the CPU of the computer, or the like may perform part or all of the actual processing based on the instructions of the program code, and the functions of the above-described embodiments may be realized by that processing.
  • Furthermore, the program code may be stored in storage means such as a hard disk or memory of the system or apparatus, or in a storage medium such as a CD-RW or CD-R, and the computer (or CPU or MPU) of the system or apparatus may read and execute the program code stored in the storage means or storage medium when it is used.
  • The hardware described above may be implemented by an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like, and the software described above may be implemented in a wide range of programming or scripting languages such as assembler, C/C++, Perl, Shell, PHP, Python, or Java (registered trademark).
  • The control lines and information lines shown indicate those considered necessary for the explanation, and not all the control lines and information lines of an actual product are necessarily shown. In practice, almost all the components may be considered to be connected to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The present invention is provided with: an image position shift unit (103) for shifting the position of an input image; an image reduction unit (104) for reducing the size of the position-shifted image by reducing the number of pixels thereof; an image enlargement unit (301) which increases the number of pixels of a decoded image to enlarge it, the decoded image being the reduced image that was encoded, sent via a communication network (107), and decoded; an image position shift unit (302) for position-shifting the enlarged image so as to cancel out the position shift performed by the image position shift unit (103); and an aliasing distortion reduction unit (303) for performing aliasing distortion reduction processing that reduces the aliasing distortion of the image position-shifted by the image position shift unit (302), using information on another image position-shifted by the image position shift unit (302).

Description

Image processing system and image processing apparatus

Incorporation by reference
 This application claims the priority of Japanese Patent Application No. 2016-033145, filed on February 24, 2016, the contents of which are incorporated herein by reference.
 本発明は、画像のデータサイズや画質に係る処理を行う画像処理システムおよび画像処理装置に関する。 The present invention relates to an image processing system and an image processing apparatus that perform processing related to image data size and image quality.
 テレビ電話、遠隔会議、またはネットワーク監視カメラ等のように、カメラによって撮影された画像(動画像や静止画像)を、インターネット、イントラネット、公衆網などの通信ネットワーク経由で同時に伝送し、画像受信端末で表示することが一般に行われている。 Images (moving images and still images) taken by a camera, such as a videophone, a remote conference, or a network surveillance camera, are simultaneously transmitted via a communication network such as the Internet, an intranet, or a public network, and are received by an image receiving terminal. It is generally done to display.
 一方で、撮像に用いるカメラの画素数が技術の進歩とともに増加してきており、現在では例えば水平1920画素×垂直1080画素で構成されるフルHDサイズの画像などが一般に用いられている状況である。しかしながら、画像の伝送を行うアプリケーションにおいて通信ネットワークの伝送帯域の限界を上回るデータサイズの画像を伝送した場合には、アプリケーションの通信エラーやデータ遅延が生じることにより受信画像が劣化したりフリーズしたりすることがある。また、伝送された画像データをハードディスク(磁気ディスク)や光ディスク等のメディアに記録して保存する際にデータサイズが大きい場合には、記録容量の限られたメディアでは短時間の記録しかできなかったり、大容量の記録メディアが必要になることによって記録装置のコストアップを招いたりするという問題がある。 On the other hand, the number of pixels of a camera used for imaging has increased with the advance of technology, and at present, for example, a full HD size image composed of horizontal 1920 pixels × vertical 1080 pixels is generally used. However, when an image is transmitted in an image transmission application that exceeds the limit of the transmission bandwidth of the communication network, the received image may deteriorate or freeze due to an application communication error or data delay. Sometimes. In addition, when the transmitted image data is recorded and stored on a medium such as a hard disk (magnetic disk) or an optical disk, if the data size is large, it may be possible to record only a short time on a medium with a limited recording capacity. In addition, there is a problem that the cost of the recording apparatus is increased due to the necessity of a large-capacity recording medium.
 したがって、画像を伝送したり記録したりするアプリケーションを実現するシステムでは、伝送帯域や記録容量に合わせて画像のデータサイズを削減しており、例えば、撮像された画像フレームにおける画素数を減らして画像の全体サイズを縮小したうえで符号化して圧縮することによりデータサイズを削減し、画像を再生する際には縮小された画像を拡大して表示することが一般的にも行われている。 Therefore, in a system that realizes an application for transmitting and recording images, the image data size is reduced in accordance with the transmission band and recording capacity. For example, the number of pixels in the captured image frame is reduced to reduce the image size. In general, the data size is reduced by reducing the overall size and then encoding and compressing, and when the image is reproduced, the reduced image is enlarged and displayed.
 しかしながら、画像の符号化データサイズと画質には相関関係があり、符号化データサイズを減らすほどに画質は劣化してしまう。そこで、画質の低下を抑制しつつ符号化データサイズを削減することを目的とした技術の開発が行われており、画像の鮮明化に関する技術として、例えば、特許文献1には、2系列のディジタル信号を用いて1次元方向の折り返し歪をキャンセルすることにより、入力信号のナイキスト周波数を超えた広帯域な出力信号を得る技術が開示されている。また、非特許文献1には、複数の画像フレームを1フレームに合成し、反復演算を伴う逐次近似処理を行うことによって、入力画像のナイキスト周波数を超えた高周波成分を復元し、高解像度の出力画像を得る技術が開示されている。 However, there is a correlation between the encoded data size of the image and the image quality, and the image quality deteriorates as the encoded data size is reduced. Therefore, a technique for reducing the encoded data size while suppressing a decrease in image quality has been developed. As a technique related to image sharpening, for example, Patent Document 1 discloses two series of digital data. A technique is disclosed in which a wideband output signal exceeding the Nyquist frequency of an input signal is obtained by canceling aliasing distortion in a one-dimensional direction using the signal. In Non-Patent Document 1, a plurality of image frames are combined into one frame, and a high-frequency component exceeding the Nyquist frequency of the input image is restored by performing a successive approximation process with iterative calculation, and high-resolution output is performed. A technique for obtaining an image is disclosed.
 Patent Document 1: Japanese Patent Laid-Open No. 10-280685
 Non-Patent Document 1: S. C. Park, M. K. Park, and M. G. Kang, "Super-Resolution Image Reconstruction: A Technical Overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21-36, 2003.
 ところで、前述したような、テレビ電話、遠隔会議、またはネットワーク監視カメラ等の画像を伝送し、あるいは記録し、表示するアプリケーションを実現する手段として、パーソナルコンピュータ(以下、PCと略記)やモバイル端末などが用いられることが多い。これらの端末では、信号処理を汎用的なCPU(Central Processing Unit)やMPU(Micro Processing Unit)、GPGPU(General Purpose Computing on Graphics Processing Unit)、DSP(Digital Signal Processor)などによるソフトウェア処理を行うことによって、特別なハードウェアを用いなくても、画像の符号化、復号化、縮小、拡大、鮮明化、などを実現することができる。 By the way, as described above, a personal computer (hereinafter abbreviated as “PC”), a mobile terminal, etc. as means for realizing an application for transmitting, recording, and displaying an image of a video phone, a remote conference, a network surveillance camera, or the like. Is often used. In these terminals, signal processing is performed by performing software processing using a general-purpose CPU (Central Processing Unit), MPU (Micro Processing Unit), GPGPU (General Purpose Computing Computing on Graphics Processing Unit), DSP (Digital Signal Processor), etc. Even without using special hardware, it is possible to realize encoding, decoding, reduction, enlargement, sharpening, and the like of an image.
 その一方で、上記従来技術に例示した超解像処理は、画像に含まれる高周波成分を増幅するだけのエッジ強調処理などに比べて、処理に必要な演算量が極めて多い。例えば、非特許文献1に記載されているような、複数フレームの入力画像を用いて、逐次近似処理によって出力画像を得る超解像技術では、画素ごとのサブピクセル単位の高精度な動き探索を伴うフレーム間位置合わせ(レジストレーション)、画像の拡大、拡大画像の縮小、縮小画像と入力画像との差分検出、および出力画像の補正処理を、何度も反復処理(イテレーション)する必要がある。つまり、従来技術においては、画素ごとの動き探索やイテレーションは演算量が極めて多いため、CPU等やメモリ等の計算リソースの処理限界を超えてしまって、画像の動きがぎくしゃくするコマ落ちが発生したり、処理全体が止まってしまったり、マウスやキーボード等によるユーザ入力を受け付けず無応答の状態になったりすることが考えられる。 On the other hand, the super-resolution processing exemplified in the above-described prior art requires a large amount of calculation for processing compared to edge enhancement processing that only amplifies high-frequency components included in an image. For example, in a super-resolution technique for obtaining an output image by successive approximation processing using an input image of a plurality of frames as described in Non-Patent Document 1, a high-precision motion search in units of subpixels for each pixel is performed. The accompanying inter-frame alignment (registration), image enlargement, enlargement image reduction, difference detection between the reduced image and the input image, and output image correction processing must be repeated many times (iteration). In other words, in the conventional technology, the motion search and iteration for each pixel has a very large amount of calculation, so that it exceeds the processing limit of calculation resources such as CPU and memory, and frame dropping occurs that makes the motion of the image jerky. Or the entire process may stop, or user input from a mouse, keyboard, or the like may not be accepted.
 本発明は、上記に鑑みてなされたものであり、画像処理に係る演算量の増加および画質の低下を抑制しつつ符号化データサイズを削減することができる画像処理システムおよび画像処理装置を提供することを目的とする。 The present invention has been made in view of the above, and provides an image processing system and an image processing apparatus capable of reducing the encoded data size while suppressing an increase in the amount of calculation related to image processing and a decrease in image quality. For the purpose.
 上記目的を達成するために、本発明は、時間的に連続する複数の画像のそれぞれに対して、予め定めた複数のシフト位置の何れかに画像の位置をシフトする第1画像位置シフト部と、前記第1画像位置シフト部で位置シフトされた画像の画素数を削減して縮小する画像縮小部と、前記画像縮小部で縮小された画像を符号化して符号化画像を生成する符号化部と、通信ネットワークを介して送られた前記符号化画像を復号化して復号画像を生成する復号化部と、前記復号化部で復号化された復号化画像の画素数を増加させて拡大する画像拡大部と、前記画像拡大部で拡大された画像に対して、前記第1画像位置シフト部で行われた位置シフトを打ち消すように位置シフトする第2画像位置シフト部と、前記第2画像位置シフト部で位置シフトされた画像の折り返し歪を、前記第2画像位置シフト部で位置シフトされた他の画像の情報を用いて低減する折り返し歪低減処理を行う折り返し歪低減部とを備えたものとする。 To achieve the above object, the present invention provides a first image position shift unit that shifts an image position to any one of a plurality of predetermined shift positions for each of a plurality of temporally continuous images. An image reduction unit that reduces the number of pixels of the image that has been position shifted by the first image position shift unit, and an encoding unit that encodes the image reduced by the image reduction unit to generate an encoded image A decoding unit that decodes the encoded image transmitted via the communication network to generate a decoded image, and an image that is enlarged by increasing the number of pixels of the decoded image decoded by the decoding unit An enlargement unit, a second image position shift unit that shifts the position of the image enlarged by the image enlargement unit so as to cancel the position shift performed by the first image position shift unit, and the second image position Position shift at shift section The aliasing distortion of the image, and that a folded distortion reduction unit which performs aliasing distortion reduction processing for reducing by using information on the position shifted other images in the second image position shifting unit.
 画像処理に係る演算量の増加および画質の低下を抑制しつつ符号化データサイズを削減することができ、比較的非力な計算リソースでも高速な画像伝送および画像鮮明化処理を行うことができる。 The encoded data size can be reduced while suppressing an increase in the amount of computation related to image processing and a decrease in image quality, and high-speed image transmission and image sharpening processing can be performed even with relatively inefficient calculation resources.
FIG. 1 is a functional block diagram schematically showing the overall configuration of an image processing system according to a first embodiment.
FIG. 2 is a diagram schematically showing an example of image position shift processing.
FIG. 3 is a diagram schematically showing another example of image position shift processing.
FIG. 4 is a functional block diagram schematically showing an example of the configuration of the image enlargement/sharpening unit of the image receiving device.
FIG. 5 is a functional block diagram schematically showing the configuration of the image reduction unit.
FIG. 6 is a functional block diagram schematically showing the configuration of the horizontal processing unit and the vertical processing unit of the image reduction unit.
FIG. 7 is a diagram schematically showing an example of a configuration that realizes the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
FIG. 8 is a diagram schematically showing another example of a configuration that realizes the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
FIG. 9 is a diagram schematically showing another example of a configuration that realizes the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
FIG. 10 is a diagram schematically showing another example of a configuration that realizes the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
FIG. 11 is a diagram showing a configuration obtained by equivalently transforming the configuration of FIG. 7.
FIG. 12 is a block diagram showing an example of the configuration of the one-dimensional asymmetric filter included in the image enlargement/sharpening unit shown in FIG. 11.
FIG. 13 is a diagram showing the filter coefficients of the one-dimensional asymmetric filter in the configuration of FIG. 12.
FIG. 14 is a block diagram showing an example of the configuration of the two-dimensional asymmetric filter included in the image enlargement/sharpening unit of FIG. 9.
FIG. 15 is a block diagram showing another example of the configuration of the image enlargement/sharpening unit in the image receiving device of FIG. 1.
FIG. 16 is a block diagram showing an example of the configuration of the motion detection unit included in the image enlargement/sharpening unit of FIG. 15.
FIG. 17 is a diagram showing a configuration example in which synchronization is performed using the frame phase information generation unit of the image transmission device and the frame phase information acquisition unit of the image receiving device.
FIG. 18 is a diagram showing a configuration example in which synchronization is performed using the frame phase information generation unit of the image receiving device and the frame phase information acquisition unit of the image transmission device.
FIG. 19 is a diagram showing a configuration example in which synchronization is performed using the frame phase information estimation unit of the image receiving unit.
FIG. 20 is an explanatory diagram showing an example of the frame phase information estimation processing of the frame phase information estimation unit shown in FIG. 19.
FIG. 21 is a flowchart showing the frame phase estimation processing in the frame phase information estimation unit.
FIG. 22 is a diagram showing the arrangement of pixels in a Bayer array color filter.
FIG. 23 is a functional block diagram schematically showing a configuration example of the image storage device.
FIG. 24 is a diagram for explaining an example of filter coefficients used in the image position shift function of FIG. 23.
FIG. 25 is a functional block diagram schematically showing a configuration example of an image transmission device according to a second embodiment.
FIG. 26 is a diagram showing an example of a menu display screen generated by the control unit of the image receiving device or the image storage device and displayed on the display unit.
FIG. 27 is a functional block diagram schematically showing a configuration example of an image transmission device according to a third embodiment.
FIG. 28 is a functional block diagram schematically showing a configuration example of an image receiving device according to the third embodiment.
FIG. 29 is a diagram for explaining an example of filter coefficients used in the image position shift unit of FIG. 28.
FIG. 30 is a flowchart showing an example of processing in an image receiving device according to a fourth embodiment.
 以下、本発明の実施の形態を図面を参照しつつ説明する。 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
<First Embodiment>
 A first embodiment of the present invention will be described with reference to FIGS. 1 to 24.
 まず、本実施の形態により実現される画像鮮明化の概要について説明する。
 画像信号をサンプリングする際に、画像の1フレームを構成する画素数に応じて一意に定まるナイキスト周波数よりも高い周波数成分は、折り返し歪となる。後述する画像処理システム(後の図1等参照)では、画像送信装置(101)が有する画像縮小部(104)において画像を縮小する際に、ローパスフィルタ(後の図5等参照)のカットオフ周波数をナイキスト周波数よりも低く設定すると、折り返し歪を低減する作用を期待できるが、その反面、縮小後の画像は高周波成分が減衰されるため、ぼけた画像となる傾向がある。一方、ローパスフィルタのカットオフ周波数をナイキスト周波数よりも高く設定すると、縮小後の画像には縮小前の画像に元々含まれているベースバンド成分に折り返し歪の成分が混じる。折り返し歪はモアレ(干渉縞)状のノイズに見えるため、画質が大きく劣化する可能性がある。
First, an overview of image sharpening realized by the present embodiment will be described.
When the image signal is sampled, a frequency component higher than the Nyquist frequency that is uniquely determined according to the number of pixels constituting one frame of the image becomes aliasing distortion. In an image processing system to be described later (see FIG. 1 and the like later), when the image is reduced in the image reduction unit (104) of the image transmission apparatus (101), a low-pass filter (see FIG. 5 and the like later) is cut off. Setting the frequency lower than the Nyquist frequency can be expected to reduce the aliasing distortion, but on the other hand, the reduced image tends to be a blurred image because the high frequency component is attenuated. On the other hand, when the cut-off frequency of the low-pass filter is set higher than the Nyquist frequency, the reduced distortion component is mixed with the baseband component originally included in the pre-reduction image. Since the aliasing distortion appears to be moire (interference fringe) noise, the image quality may be greatly degraded.
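The property relied on here, that re-sampling the same signal at a different sampling phase changes the phase of the resulting alias while leaving its frequency unchanged, can be checked with a short one-dimensional experiment; the frequencies below are arbitrary values chosen only for the illustration.

```python
import numpy as np

fs = 100.0        # sampling frequency (Hz); the Nyquist frequency is 50 Hz
f_sig = 70.0      # component above the Nyquist frequency -> it aliases to 100 - 70 = 30 Hz

def alias_phase(offset_samples):
    """Sample the 70 Hz tone with a given sampling-phase offset and return the
    phase of the resulting 30 Hz alias (1 second of data, so bin k = k Hz)."""
    t = (np.arange(int(fs)) + offset_samples) / fs
    samples = np.cos(2 * np.pi * f_sig * t)
    spectrum = np.fft.rfft(samples)
    return np.angle(spectrum[int(fs - f_sig)])

# The alias stays at 30 Hz, but its phase rotates with the sampling phase,
# which is what allows it to be cancelled between differently shifted frames.
print(round(alias_phase(0.0), 3), round(alias_phase(0.5), 3))
```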
 そこで、本実施の形態においては、画像受信装置(109)が有する画像拡大・鮮明化部(111)においては、縮小後の伝送画像を拡大する際に、複数の画像フレームを用いて折り返し歪を低減することにより、広帯域のベースバンド成分を抽出して、画像の鮮明化を図る。折り返し歪の低減には、同一の入力信号に対して標本化の位相を変えてサンプリングしなおすと、発生する折り返し歪の位相が変化する性質を利用する。この性質は、インタレース走査などで利用されている。例えば、一般的な2:1インタレースでは、撮像時に、画像を上から1走査線ごとに飛び越して走査し、奇数番目の走査線だけの画像から成る第1フィールドと、偶数番目のだけの画像から成る第2フィールドに分けて伝送する。表示時には、信号処理によってインタレース走査からプログレッシブ走査に変換したり、或いは、人間の目の残像効果によって第1フィールドと第2フィールドが脳内で合成されて知覚されることを利用したりして、鮮明なフレーム画像を得ている。 Therefore, in the present embodiment, the image enlargement / clearing unit (111) of the image receiving device (109) performs aliasing distortion using a plurality of image frames when enlarging the reduced transmission image. By reducing the frequency, a baseband component having a wide band is extracted, and the image is sharpened. In order to reduce the aliasing distortion, the property that the phase of aliasing distortion that occurs when the sampling phase of the same input signal is changed and the sampling is changed is used. This property is used in interlaced scanning. For example, in a general 2: 1 interlace, at the time of imaging, an image is scanned by scanning every scanning line from the top, and a first field consisting of an image of only odd-numbered scanning lines and only an even-numbered image are scanned. It is divided into the second field consisting of and transmitted. At the time of display, conversion from interlaced scanning to progressive scanning is performed by signal processing, or the first field and the second field are synthesized and perceived in the brain by the afterimage effect of the human eye. , Getting clear frame images.
 また、インタレース走査の考え方だけでは、撮像した画素数の「約数」の画素数、すなわち「1/N」(ただし、Nは正整数)の画素数にしか変換することができないため、例えば、フルHDサイズ(水平1920画素×垂直1080画素)からD1サイズ(水平704画素×垂直480画素)へ変換する場合のように、「非約数」の画素数への縮小を行うことができない。 In addition, since only the concept of interlaced scanning can be converted into only the “divisor” of the number of captured pixels, that is, “1 / N” (where N is a positive integer), for example, As in the case of conversion from the full HD size (horizontal 1920 pixels × vertical 1080 pixels) to the D1 size (horizontal 704 pixels × vertical 480 pixels), the reduction to the “non-divisor” number of pixels cannot be performed.
 そこで、本実施の形態においては、まず、画像位置シフト部(103)によってフレームごとに画像の位置をシフトしたのちに、画像縮小部(104)によって画像を縮小することにより、縮小画像(伝送画像)に含まれる折り返し歪の位相を意図的に変化させる。このとき、被写体が静止していれば、画像シフトの仕方(シフト方向とシフト量)から折り返し歪の位相を一意に決定できる。たとえば、フルHDサイズ(水平1920画素×垂直1080画素)の画像の位置を水平方向に1画素ずらすと、画像縮小後のD1サイズ(水平704画素×垂直480画素)の画像では、水平方向に0.37(=704/1920)画素ずれるため、折り返し歪の位相は11π/15(=2π×704/1920)ラジアンだけ回転する。この位相回転量は、フレーム全体で一定かつ事前に決定できるため、前述したサブピクセル単位の動き探索が不要になる。その後、後述する動作原理に基づいて折り返し歪を低減することにより、非約数の画素数に縮小した場合にも画像の鮮明化を行うことができるようになる。 Therefore, in the present embodiment, first, the image position shift unit (103) shifts the position of the image for each frame, and then the image reduction unit (104) reduces the image, thereby reducing the reduced image (transmission image). ) Intentionally change the phase of the aliasing distortion included. At this time, if the subject is stationary, the phase of the aliasing distortion can be uniquely determined from the image shift method (shift direction and shift amount). For example, if the position of an image of full HD size (horizontal 1920 pixels × vertical 1080 pixels) is shifted by 1 pixel in the horizontal direction, an image of D1 size (horizontal 704 pixels × vertical 480 pixels) after image reduction is 0 in the horizontal direction. .37 (= 704/1920) pixels shift, so the phase of the aliasing distortion rotates by 11π / 15 (= 2π × 704/1920) radians. Since the phase rotation amount is constant and can be determined in advance for the entire frame, the above-described motion search in units of subpixels becomes unnecessary. Thereafter, by reducing the aliasing distortion based on the operation principle described later, the image can be sharpened even when the number of pixels is reduced to a non-divisor.
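The numbers quoted in this paragraph follow directly from the ratio of pixel counts, as the short check below shows.

```python
import math

src_width, dst_width = 1920, 704     # full HD width -> D1 width
shift_in_source = 1                  # horizontal shift applied before reduction (pixels)

shift_in_reduced = shift_in_source * dst_width / src_width
phase_rotation = 2 * math.pi * shift_in_reduced

print(round(shift_in_reduced, 2))                             # 0.37 pixels
print(round(phase_rotation, 4), round(11 * math.pi / 15, 4))  # 2.3038 2.3038 (= 11π/15 rad)
```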
 以上のような知見に基づいた本実施の形態の詳細を以下に説明する。 Details of the present embodiment based on the above knowledge will be described below.
 図1は、本実施の形態に係る画像処理システムの全体構成を模式的に示す機能ブロック図である。 FIG. 1 is a functional block diagram schematically showing the overall configuration of the image processing system according to the present embodiment.
 図1において、画像処理システムは、画像送信装置(101)、通信ネットワーク(107)、画像蓄積装置(108)、画像受信装置(109)、操作部(114)、画像受信装置(109)、および表示部(113)によって概略構成されている。 In FIG. 1, an image processing system includes an image transmission device (101), a communication network (107), an image storage device (108), an image reception device (109), an operation unit (114), an image reception device (109), and The display unit (113) is a schematic configuration.
 画像送信装置(101)は、画像(静止画、動画)を撮像するカメラ(102)と、カメラ(102)で撮像した画像に画像位置シフト処理を行う画像位置シフト部(103)と、画像位置シフト処理を行った画像に画像縮小処理を行う画像縮小部(104)と、画像縮小処理を行った画像に符号化処理を行う符号化部(105)と、通信ネットワーク(107)を介して画像受信装置(109)から送信されるコマンドやデータに基づいて、カメラ(102)、画像位置シフト部(103)、画像縮小部(104)、及び符号化部(105)を含む画像送信装置(101)全体の動作を制御する制御部(106)とを有している。また、画像受信装置(109)は、通信ネットワーク(107)を介して画像送信装置(101)から送られてきた画像に復号化処理を行う復号化部(110)と、復号化処理を行った画像に画像拡大・鮮明化処理を行う画像拡大・鮮明化部(111)と、画像受信装置(109)全体の動作を制御する制御部(112)とを有している。 The image transmission device (101) includes a camera (102) that captures an image (still image, moving image), an image position shift unit (103) that performs image position shift processing on an image captured by the camera (102), and an image position An image reduction unit (104) that performs image reduction processing on an image that has undergone shift processing, an encoding unit (105) that performs encoding processing on an image that has undergone image reduction processing, and an image via a communication network (107) Based on the command and data transmitted from the reception device (109), the image transmission device (101) including the camera (102), the image position shift unit (103), the image reduction unit (104), and the encoding unit (105). And a control unit (106) for controlling the entire operation. The image receiving device (109) performs the decoding process with the decoding unit (110) that performs decoding processing on the image transmitted from the image transmitting device (101) via the communication network (107). An image enlargement / clearing unit (111) that performs image enlargement / clarification processing on the image and a control unit (112) that controls the operation of the entire image receiving device (109) are provided.
 そして、本実施の形態における画像処理システムでは、画像送信装置(101)が有する画像縮小部(104)において画像の1フレームを構成する画素数を減らすことにより伝送データサイズを減らすとともに、画像受信装置(109)が有する画像拡大・鮮明化部(111)において画像を拡大することにより、ぼやけの少ない鮮明な画像を表示することができる。なお、図示しない画像拡大部あるいは画像縮小部を用いて、画像受信装置(109)から出力された第3の画素数の画像をさらに拡大あるいは縮小し、図示しない第4の画素数を有する画像に変換したのちに、表示部(113)で表示するように構成してもよい。 In the image processing system according to the present embodiment, the transmission data size is reduced by reducing the number of pixels constituting one frame of the image in the image reduction unit (104) of the image transmission device (101), and the image reception device. By enlarging the image in the image enlarging / clearing unit (111) included in (109), a clear image with less blur can be displayed. It should be noted that the image having the third pixel number output from the image receiving device (109) is further enlarged or reduced using an image enlargement unit or image reduction unit (not shown) to obtain an image having the fourth pixel number (not shown). You may comprise so that it may display on a display part (113) after converting.
 カメラ(102)は、例えば、図示しないCCD(Charge Coupled Device)やCMOS(Complementary Metal-Oxide Semiconductor)などの光電変換素子とレンズからなる撮像部と、信号レベル調整やコントラスト調整、ブライトネス調整、ホワイトバランス調整などを行う信号処理部とによって構成されており、カメラ(102)で撮像された画像(静止画、動画)は後段の画像位置シフト部(103)に送られる。なお、以降の説明では画像の画素数を水平方向の画素数(水平画素数)と垂直方向の画素数(垂直画素数)の組で表すものとし、カメラ(102)で撮像される画像の1フレームは第1の画素数で構成されるものとして以下では説明する。 The camera (102) includes, for example, an imaging unit (not shown) composed of a photoelectric conversion element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) and a lens, signal level adjustment, contrast adjustment, brightness adjustment, and white balance. An image (still image, moving image) captured by the camera (102) is sent to a subsequent image position shift unit (103). In the following description, the number of pixels of an image is represented by a set of the number of pixels in the horizontal direction (the number of horizontal pixels) and the number of pixels in the vertical direction (the number of vertical pixels). In the following description, the frame is assumed to be composed of the first number of pixels.
 画像位置シフト部(103)は、カメラ(102)で得られた画像の位置を整数画素の単位で上下左右にシフトさせる(ずらす)画像位置シフト処理を行うものである。ここで、画像位置シフト部(103)における画像位置シフト処理について図2及び図3を参照しつつ説明する。 The image position shift unit (103) performs image position shift processing that shifts (shifts) the position of the image obtained by the camera (102) vertically and horizontally in units of integer pixels. Here, the image position shift processing in the image position shift unit (103) will be described with reference to FIGS.
 図2は、画像位置シフト処理の一例を概略的に示す図である。 FIG. 2 is a diagram schematically showing an example of the image position shift process.
 図2では、3人の人物が被写体となっている画像を模擬的に示しており、破線はカメラ(102)で撮影された元の画像フレーム(200)を、実線は画像位置シフト処理でシフトされた画像フレーム(201)~(204)を示している。また、矢印は、1フレーム時間(例えば1/30秒)ごとの状態遷移を示している。図2では、4フレーム周期で画像のシフトを行う画像位置シフト処理を行っており、「4フレーム型画像位置シフト」と称する。 In FIG. 2, an image in which three persons are subjects is schematically shown, the broken line is the original image frame (200) taken by the camera (102), and the solid line is shifted by image position shift processing. The obtained image frames (201) to (204) are shown. Moreover, the arrow has shown the state transition for every frame time (for example, 1/30 second). In FIG. 2, an image position shift process for shifting an image at a cycle of 4 frames is performed, which is referred to as “4-frame type image position shift”.
 画像位置シフト処理後の画像フレーム(201)の状態では、元の画像フレーム(200)の位置を左上にシフトして画像の右端と下端に所定の画素値(例えば、黒色を表す画素値=0)を挿入している。同様に、画像フレーム(202)の状態では、元の画像フレーム(200)の位置を右上にシフトして画像の左端と下端に黒色(画素値=0)を挿入し、画像フレーム(203)の状態では、元の画像フレーム(200)の位置を右下にシフトして画像の左端と上端に黒色(画素値=0)を挿入し、画像フレーム(204)の状態では、元の画像フレーム(200)の位置を左下にシフトして画像の右端と上端に黒色(画素値=0)を挿入している。つまり、4フレーム型画像位置シフトでは、1フレーム時間ごとに画像フレーム(201)~(204)の順に状態遷移させることにより画像位置シフト処理を行う。なお、画像位置シフト部(103)におけるシフト方向およびシフト量は、制御部(106)により制御されている。 In the state of the image frame (201) after the image position shift process, the position of the original image frame (200) is shifted to the upper left, and predetermined pixel values (for example, pixel value representing black = 0) are set at the right end and the lower end of the image. ) Is inserted. Similarly, in the state of the image frame (202), the position of the original image frame (200) is shifted to the upper right and black (pixel value = 0) is inserted at the left end and the lower end of the image, and the image frame (203) In the state, the position of the original image frame (200) is shifted to the lower right and black (pixel value = 0) is inserted at the left end and the upper end of the image. In the state of the image frame (204), the original image frame ( 200) is shifted to the lower left, and black (pixel value = 0) is inserted at the right and upper ends of the image. That is, in the 4-frame type image position shift, the image position shift process is performed by performing state transition in the order of the image frames (201) to (204) every frame time. The shift direction and the shift amount in the image position shift unit (103) are controlled by the control unit (106).
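A minimal sketch of this 4-frame cycle, assuming a shift amount of one pixel and a padding value of 0 (black); the helper function below is an illustration, not the image position shift unit (103) itself.

```python
import numpy as np
from itertools import cycle

def shift_with_black(frame, dy, dx):
    """Shift `frame` by (dy, dx) integer pixels; vacated rows/columns are set to 0 (black)."""
    out = np.zeros_like(frame)
    h, w = frame.shape[:2]
    ys, xs = max(dy, 0), max(dx, 0)     # destination offsets
    yo, xo = max(-dy, 0), max(-dx, 0)   # source offsets
    out[ys:h - yo, xs:w - xo] = frame[yo:h - ys, xo:w - xs]
    return out

# One cycle of the 4-frame type shift in FIG. 2: upper-left, upper-right,
# lower-right, lower-left.
four_frame_cycle = cycle([(-1, -1), (-1, 1), (1, 1), (1, -1)])

frame = np.full((1080, 1920), 128, dtype=np.uint8)
shifted_frames = [shift_with_black(frame, *next(four_frame_cycle)) for _ in range(4)]
print(shifted_frames[0][-1, 0], shifted_frames[0][0, -1])   # bottom row / right column are black (0)
```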
 図3は、画像位置シフト処理の他の一例を概略的に示す図である。 FIG. 3 is a diagram schematically showing another example of the image position shift process.
 図3においても図2と同様に、破線はカメラ(102)で撮影された元の画像フレーム(200)を、実線は画像位置シフト処理でシフトされた画像フレーム(205),(205)を示している。また、矢印は、1フレーム時間(例えば1/30秒)ごとの状態遷移を示している。図3では、2フレーム周期で画像のシフトを行う画像位置シフト処理を行っており、「2フレーム型画像位置シフト」と称する。 In FIG. 3, as in FIG. 2, the broken line indicates the original image frame (200) taken by the camera (102), and the solid line indicates the image frames (205) and (205) shifted by the image position shifting process. ing. Moreover, the arrow has shown the state transition for every frame time (for example, 1/30 second). In FIG. 3, an image position shift process for shifting an image at a cycle of 2 frames is performed, which is referred to as “2-frame type image position shift”.
 画像位置シフト処理後の画像フレーム(205)の状態では、元の画像フレーム(200)の位置を左側にシフトして画像の右端に黒色(画素値=0)を挿入している。同様に、画像フレーム(206)の状態では、元の画像フレーム(200)の位置を右側にシフトして画像の左端に黒色(画素値=0)を挿入している。つまり、2フレーム型画像位置シフトでは、1フレーム時間ごとに画像フレーム(205),(206)の順に状態遷移させることにより画像位置シフト処理を行う。 In the state of the image frame (205) after the image position shift processing, the position of the original image frame (200) is shifted to the left and black (pixel value = 0) is inserted at the right end of the image. Similarly, in the state of the image frame (206), the position of the original image frame (200) is shifted to the right, and black (pixel value = 0) is inserted at the left end of the image. That is, in the two-frame type image position shift, the image position shift process is performed by performing state transition in the order of the image frames (205) and (206) every frame time.
 なお、画像位置シフト処理は図2及び図3に示した例に限定されるものではない。例えば、図2に示した4フレーム型位置シフトにおいて、「画像フレーム(201)→画像フレーム(204)→画像フレーム(203)→画像フレーム(202)→左上(201)→・・・」の順に位置シフトしたり、「画像フレーム(201)→画像フレーム(202)→画像フレーム(204)→画像フレーム(203)→画像フレーム(201)→・・・」の順に位置シフトしたり、「画像フレーム(201)→画像フレーム(204)→画像フレーム(202)→画像フレーム(203)→画像フレーム(201)→・・・」の順に位置シフトしたり、あるいはランダム(無作為)な順番で位置シフトしたりしてもよい。また、例えば、画像フレーム(200)の位置を位置シフトの一状態とし、「画像フレーム(200)→右側へ位置シフト→右下へ位置シフト→下側へ位置シフト→画像フレーム(200)→・・・」のように位置シフトしてもよい。 Note that the image position shifting process is not limited to the examples shown in FIGS. For example, in the 4-frame position shift shown in FIG. 2, the order of “image frame (201) → image frame (204) → image frame (203) → image frame (202) → upper left (201) →. Position shift or “image frame (201) → image frame (202) → image frame (204) → image frame (203) → image frame (201) →... (201) → Image frame (204) → Image frame (202) → Image frame (203) → Image frame (201) →..., Or position shift in a random (random) order You may do it. Further, for example, the position of the image frame (200) is set to one state of position shift, and “image frame (200) → position shift to right side → position shift to lower right → position shift to lower side → image frame (200) → · .. ”may be used to shift the position.
 同様に、例えば、図3に示した2フレーム型位置シフトにおいて、画像フレーム(205)の状態では位置シフトを行わずに(すなわち、画像フレーム(200)のまま位置を固定し)、「画像フレーム(200)→画像フレーム(206)→画像フレーム(200)→・・・」のように位置シフトしてもよい。 Similarly, for example, in the two-frame type position shift shown in FIG. 3, in the state of the image frame (205), the position is not shifted (that is, the position is fixed as the image frame (200)). The position may be shifted as follows: (200) → image frame (206) → image frame (200) →.
 また、図示しない上下方向や斜め方向への位置シフトを含む画像位置シフト処理としてもよい。 Further, image position shift processing including position shift in the vertical direction and the oblique direction (not shown) may be performed.
 さらには、4フレーム型位置シフトのように4フレーム周期で位置シフトを行うものや2フレーム型位置シフトのように2フレーム周期で位置シフトを行うものに限られず、例えば、水平方向に3フレーム周期で位置シフトを行うものや、水平方向への3フレームの位置シフトと垂直方向の2フレームの位置シフトとを組み合わせた「6フレーム周期」の位置シフト、或いは、水平方向への3フレームの位置シフトと垂直方向への3フレームの位置シフトとを組み合わせた「9フレーム周期」の位置シフトなどを行ってもよい。 Furthermore, it is not limited to those that perform position shift with a 4-frame cycle, such as a 4-frame position shift, or those that perform position shift with a 2-frame cycle, such as a 2-frame position shift. A position shift of “6 frames cycle” combining position shift of 3 frames in the horizontal direction and position shift of 2 frames in the vertical direction, or position shift of 3 frames in the horizontal direction Further, a position shift of “9 frame periods” combining the position shift of 3 frames in the vertical direction may be performed.
 なお、後に詳述する画像拡大・鮮明化部(111)における画像拡大鮮明化処理においては、画像位置シフト処理によって位置シフトした方向に応じて画像の鮮明化の状態が変化する。例えば、図2に示した「4フレーム型位置シフト」を行った場合は、画像拡大・鮮明化処理後の画像の水平方向と垂直方向の解像度(鮮明さ)が向上し、図3に示した「2フレーム型位置シフト」を行った場合は、水平方向の解像度(鮮明さ)だけが向上する。言い換えれば、解像度を向上させたい方向に画像フレームをシフトするような画像位置シフト処理を行うことにより、所望の方向の解像度を向上することができる。 Note that, in the image enlargement and sharpening process in the image enlargement and sharpening unit (111), which will be described in detail later, the image sharpening state changes according to the direction of the position shift by the image position shift process. For example, when the “4-frame type position shift” shown in FIG. 2 is performed, the resolution (clearness) in the horizontal and vertical directions of the image after the image enlargement and sharpening processing is improved, as shown in FIG. When “two-frame position shift” is performed, only the resolution (clearness) in the horizontal direction is improved. In other words, the resolution in a desired direction can be improved by performing an image position shift process that shifts the image frame in the direction in which the resolution is desired to be improved.
 図5は、画像縮小部の構成を概略的に示す機能ブロック図である。また、図6は、画像縮小部の水平処理部及び垂直処理部の構成を概略的に示す機能ブロック図である。 FIG. 5 is a functional block diagram schematically showing the configuration of the image reduction unit. FIG. 6 is a functional block diagram schematically illustrating the configuration of the horizontal processing unit and the vertical processing unit of the image reduction unit.
 図5において、画像縮小部(104)は、画像の1フレームを構成する画素数を第1の画素数よりも少ない第2の画素数となるよう変換する画像縮小処理を行うものであって、補間フィルタとも呼ばれる構成であり、画像位置シフト部(103)から入力された画像に対して、水平方向の画像縮小処理を行う水平処理部(401-H)と、垂直方向の画像縮小処理を行う垂直処理部(401-V)とを有している。なお、入力画像に対する処理の順番は水平処理部(401-H)と垂直処理部(401-V)とで入れ替えてもよい。 In FIG. 5, an image reduction unit (104) performs an image reduction process for converting the number of pixels constituting one frame of an image so as to be a second number of pixels smaller than the first number of pixels. This configuration is also referred to as an interpolation filter, and a horizontal processing unit (401-H) that performs horizontal image reduction processing on an image input from the image position shift unit (103) and vertical image reduction processing. And a vertical processing unit (401-V). Note that the processing order for the input image may be switched between the horizontal processing unit (401-H) and the vertical processing unit (401-V).
 図6において、水平処理部(401-H)と垂直処理部(401-V)は同様の構成で実現可能であり、m倍アップサンプリングを行う画素挿入部(402)と、高周波成分を除去あるいは低減するローパスフィルタ(403)と、1/nダウンサンプリングを行う画素間引き部(404)とを有している。 In FIG. 6, the horizontal processing unit (401-H) and the vertical processing unit (401-V) can be realized with the same configuration, and a pixel insertion unit (402) that performs m-times upsampling, A low-pass filter (403) for reducing and a pixel thinning unit (404) for performing 1 / n downsampling are provided.
 水平処理部(401-H)および垂直処理部(401-V)においては、入力画像に対して画素挿入部(402)によってm倍アップサンプリング(零挿入)を行ったのちに、ローパスフィルタ(403)によって不要な高周波成分を除去あるいは低減し、画素間引き部(404)によって1/nダウンサンプリングを行うことにより、画像の水平方向(又は垂直方向)の画素数をm/n倍(ただし、m、nは正の整数)に増加あるいは減少することできる。なお、後述する画像拡大部(301)による画像拡大処理は、画像縮小部104の定数m,nの設定によって同様の構成で実現可能である。 In the horizontal processing unit (401-H) and the vertical processing unit (401-V), m-times upsampling (zero insertion) is performed on the input image by the pixel insertion unit (402), and then the low-pass filter (403 ) To remove or reduce unnecessary high-frequency components, and 1 / n downsampling is performed by the pixel thinning unit (404), so that the number of pixels in the horizontal direction (or vertical direction) of the image is m / n times (where m , N can be increased or decreased to a positive integer). Note that image enlargement processing by the image enlargement unit (301), which will be described later, can be realized with the same configuration by setting the constants m and n of the image reduction unit 104.
 すなわち、例えば、画像フレームを構成する画素数として、フルHDサイズ(水平1920画素×垂直1080画素)とD1サイズ(水平704画素×垂直480画素)を考えた場合、m=11およびn=30の設定を用いれば、画像縮小部(104)における水平方向の縮小(1920画素→704画素)を実現することができ、m=2、n=1と設定すれば、画像拡大部(301)における水平方向の拡大(704画素→1408画素)を実現することができる。垂直方向の縮小・拡大も同様である。なお、画像縮小部(104)における出力画素数(第2の画素数)は、制御部106によりm,nを設定することにより制御される。 That is, for example, when considering the full HD size (horizontal 1920 pixels × vertical 1080 pixels) and D1 size (horizontal 704 pixels × vertical 480 pixels) as the number of pixels constituting the image frame, m = 11 and n = 30 If the setting is used, the horizontal reduction (1920 pixels → 704 pixels) in the image reduction unit (104) can be realized. If m = 2 and n = 1 are set, the horizontal reduction in the image enlargement unit (301) is realized. Direction expansion (704 pixels → 1408 pixels) can be realized. The same applies to reduction / enlargement in the vertical direction. Note that the number of output pixels (second pixel number) in the image reduction unit (104) is controlled by setting m and n by the control unit.
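The m-times up-sampling, low-pass filtering and 1/n down-sampling chain described above can be sketched in one dimension as follows (the horizontal and vertical processing units each apply it along one axis); the windowed-sinc filter used here is a generic stand-in, not the patent's exact coefficients.

```python
import numpy as np

def resample_1d(x, m, n, taps=81):
    """Resample a 1-D signal by the rational factor m/n:
    m-times up-sampling (zero insertion), low-pass filtering, then 1/n down-sampling."""
    up = np.zeros(len(x) * m)
    up[::m] = x                                           # zero insertion
    cutoff = 1.0 / max(m, n)                              # pass band up to the lower Nyquist limit
    k = np.arange(taps) - (taps - 1) / 2
    h = cutoff * np.sinc(cutoff * k) * np.hanning(taps)   # windowed-sinc low-pass filter
    h *= m / h.sum()                                      # unity DC gain despite the inserted zeros
    return np.convolve(up, h, mode="same")[::n]           # filter, then keep every n-th sample

line = np.random.rand(1920)                 # one horizontal line of a full HD frame
reduced = resample_1d(line, 11, 30)         # 1920 -> 704 pixels (m = 11, n = 30)
enlarged = resample_1d(reduced, 2, 1)       # 704 -> 1408 pixels (m = 2, n = 1)
print(len(reduced), len(enlarged))          # 704 1408
```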
 ローパスフィルタ(403)には、ナイキスト周波数をカットオフ周波数とする理想的な周波数特性を逆フーリエ変換して得たsinc関数(=sin(x)/x)に窓関数(ハニング窓など)を乗じたフィルタ係数を用いる。また、小数画素精度の画像位置シフトを行う場合には、sinc関数の中心位置をtだけ動かしてsin(x-t)/(x-t)とすることにより実現する。なお、後段の処理で折り返し歪を低減する処理を行うことを前提とすれば、水平方向および垂直方向の各ローパスフィルタ(403)のカットオフ周波数をナイキスト周波数よりも高い周波数に設定することも考えられる。また、図6に示した構成と等価な動作はポリフェーズフィルタ(多相フィルタ)によって実現することができる。 The low-pass filter (403) is multiplied by a window function (such as a Hanning window) by a sinc function (= sin (x) / x) obtained by inverse Fourier transform of an ideal frequency characteristic having a Nyquist frequency as a cutoff frequency. Filter coefficients are used. Further, when the image position shift with decimal pixel accuracy is performed, the center position of the sinc function is moved by t to be sin (xt) / (xt). If it is assumed that the processing for reducing the aliasing distortion is performed in the subsequent processing, it is possible to set the cutoff frequency of each low-pass filter (403) in the horizontal direction and the vertical direction to a frequency higher than the Nyquist frequency. It is done. Further, an operation equivalent to the configuration shown in FIG. 6 can be realized by a polyphase filter (polyphase filter).
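A sketch of this coefficient design, assuming a Hanning window: the ideal sin(x)/x response is windowed, and moving the sinc centre by t yields a filter that low-passes and applies a fractional position shift at the same time. The tap count and cutoff below are illustrative choices, not the patent's values.

```python
import numpy as np

def shifted_windowed_sinc(taps=9, cutoff=0.5, t=0.0):
    """FIR coefficients from a sinc ideal response multiplied by a Hanning window,
    with the sinc centre moved by t pixels to realize a fractional position shift.

    cutoff is the normalized cutoff frequency in cycles/sample (0.5 = Nyquist)."""
    k = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * (k - t)) * np.hanning(taps)
    return h / h.sum()          # normalize the DC gain to 1

print(np.round(shifted_windowed_sinc(t=0.0), 3))    # plain low-pass interpolation filter
print(np.round(shifted_windowed_sinc(t=0.37), 3))   # the same filter shifted by 0.37 pixel
```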
 The encoding unit (105) encodes and compresses the image input from the image reduction unit (104) and transmits it via the communication network (107) to the image receiving device (109) and the image storage device (108). The processing configurations for connecting the image transmission device (101) and the image receiving device (109) to the communication network (107) use common techniques and are therefore not illustrated. The decoding unit (110) of the image receiving device (109) performs decoding processing according to the encoding method used in the encoding unit (105).
 Examples of encoding methods used in the encoding unit (105) include standardized codecs such as MPEG (Moving Picture Experts Group)-1, MPEG-2, MPEG-4, H.264, H.265, VC-1, JPEG (Joint Photographic Experts Group), Motion JPEG, and JPEG-2000, as well as non-standard encoding. The encoding method of the encoding unit (105) is controlled by the control unit (106).
 The communication network (107) may be either wired or wireless, and is a network for communicating digital data using a common communication protocol such as IP (Internet Protocol).
 FIG. 4 is a functional block diagram schematically showing an example of the configuration of the image enlargement/sharpening unit of the image receiving device.
 In FIG. 4, the image enlargement/sharpening unit (111) includes an image enlargement unit (301) that performs image enlargement processing on the image having the second number of pixels decoded by the decoding unit (110), an image position shift unit (302) that performs image position shift processing on the enlarged image, and an aliasing distortion reduction unit (303) that performs aliasing distortion reduction processing on the position-shifted image.
 In each of the following signal processes, the same processing may be applied to all components of a color image signal (RGB (red, green, blue), YUV (luminance Y and chrominance UV), and the like). Alternatively, each of the following signal processes may be applied only to the luminance signal Y, while the chrominance signal (UV) undergoes only the image enlargement processing.
 The image enlargement unit (301) performs image enlargement processing that converts the image having the second number of pixels (for example, D1 size, 704 horizontal pixels × 480 vertical pixels) input from the decoding unit (110) into an image having a third number of pixels, and can be realized with the same configuration as the image reduction unit (104) as described above. The purpose of the image enlargement processing is to increase the number of pixels of the image (that is, to raise the sampling frequency) so that the aliasing distortion reduced in a subsequent processing block does not fold back again in the frequency domain; the third number of pixels therefore only needs to be larger than the second number of pixels. For simplicity of explanation, the third number of pixels is assumed here to be twice the second number of pixels (that is, 1408 horizontal pixels × 960 vertical pixels).
 The image position shift unit (302) shifts the position of the image enlarged by the image enlargement unit (301) in the direction opposite to the image position shift processing performed by the image position shift unit (103) of the image transmission device (101) (that is, it shifts the image so as to cancel the position shift performed by the image position shift unit (103)), thereby suppressing image blur between frames. For example, when the image position shift unit (103) has shifted the image by one pixel to the left and one pixel upward, the image returns to its original position, and image blur between frames is suppressed, if the image position shift unit (302) performs an image position shift of 0.73 (= 1408/1920) pixels to the right and 0.89 (= 960/1080) pixels downward. In the image position shift processing of the image position shift unit (302), it suffices to shift the image so as to cancel the image position shift processing of the image position shift unit (103); for example, the shift direction and shift amount of the image position shift unit (302) may be determined so that all images are shifted to the center position of the "four-frame image position shift" shown in FIG. 2 or of the "two-frame image position shift" shown in FIG. 3 (that is, to the position of the image frame (200)). Alternatively, the shift direction and shift amount of the image position shift unit (302) may be determined so that all images are shifted to a fixed position other than the center of the image position shift processing of the image position shift unit (103). The shift direction and shift amount of the image position shift unit (302) are controlled by the control unit (112).
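 A minimal sketch of the compensating shift calculation described above, assuming only that the shift scales with the ratio of the pixel counts; the helper name is hypothetical:

```python
# Hypothetical helper: map a shift applied at the first pixel count to the
# compensating (opposite) shift at the third pixel count, as in the example above.
def compensating_shift(shift_px: float, src_size: int, dst_size: int) -> float:
    return -shift_px * dst_size / src_size

dx = compensating_shift(-1.0, 1920, 1408)   # 1 px left at 1920 -> about 0.73 px right at 1408
dy = compensating_shift(-1.0, 1080, 960)    # 1 px up at 1080  -> about 0.89 px down at 960
```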
 The aliasing distortion reduction unit (303) performs aliasing distortion reduction processing that reduces the aliasing distortion of the image subjected to the image position shift processing by the image position shift unit (302), and outputs the result as the output image of the image enlargement/sharpening unit (111). The details of the aliasing distortion reduction processing will be described later.
 FIG. 7 is a diagram schematically showing an example of a configuration that realizes the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
 In FIG. 7, the image enlargement/sharpening unit (501) can obtain a wide-band output signal exceeding the Nyquist frequency of the input signal by canceling aliasing distortion in one dimension using two series of digital signals, and is composed of one-dimensional enlargement units (502-#0), (502-#1), one-dimensional interpolation filters (503-#0), (503-#1), a Hilbert transformer (506), and the like.
 In the image enlargement/sharpening unit (501), the two series of digital signals (that is, input #0 and input #1) are first enlarged in one dimension (horizontally or vertically) by the one-dimensional enlargement units (502-#0), (502-#1), and their image frame positions are then aligned by the one-dimensional interpolation filters (503-#0), (503-#1) using the respective position shift amounts (α0, α1). The output signal of the adder (504) and the image signal obtained by passing through the subtractor (505) and the Hilbert transformer (506) and then being multiplied by the coefficient K in the multiplier (507) are thereby converted so that the aliasing distortion components contained in the two signals are 180 degrees out of phase (opposite in phase); adding the two signals in the adder (508) therefore cancels the aliasing distortion in the image.
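 The following Python sketch mirrors the structure just described for one dimension, assuming the two inputs have already been enlarged and aligned by the interpolation filters; the FIR Hilbert approximation and its tap count are illustrative choices, not part of the disclosure:

```python
import numpy as np

def hilbert_fir(num_taps: int) -> np.ndarray:
    """FIR Hilbert transformer: coefficient 2/(pi*t) at odd offsets t, 0 at even offsets."""
    t = np.arange(num_taps) - (num_taps - 1) // 2
    c = np.zeros(num_taps)
    odd = (t % 2) != 0
    c[odd] = 2.0 / (np.pi * t[odd])
    return c

def cancel_aliasing_1d(s0: np.ndarray, s1: np.ndarray, K: float, taps: int = 31) -> np.ndarray:
    """Combine two already-aligned series so their aliasing components end up in antiphase."""
    summed = s0 + s1                                        # sum path (adder 504)
    diff = s0 - s1                                          # difference path (subtractor 505)
    h = np.convolve(diff, hilbert_fir(taps), mode="same")   # Hilbert transform of the difference (506)
    return summed + K * h                                   # scale by K (507) and add (508)
```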
 Here, the image enlargement unit (301) shown in FIG. 4 corresponds to the one-dimensional enlargement units (502-#0), (502-#1) in FIG. 7, and the image position shift unit (302) corresponds to the one-dimensional interpolation filters (503-#0), (503-#1). As described above, the image enlargement unit (301) shown in FIG. 4 is equivalent to the interpolation filter (401); by setting the sinc function used to determine the coefficients of its low-pass filter (403) to sin(x−t)/(x−t) and combining it with the image position shift unit (302) into the one-dimensional interpolation filters (503-#0), (503-#1), the one-dimensional enlargement units (502-#0), (502-#1) can be omitted. The aliasing distortion reduction unit (303) in FIG. 4 can be regarded as realized by the adder (504), the subtractor (505), the Hilbert transformer (506), the multiplier (507), and the adder (508) in FIG. 7; therefore, the image enlargement/sharpening unit (111) in FIG. 4 and the image enlargement/sharpening unit (501) in FIG. 7 can be considered equivalent.
 The configuration of the image enlargement/sharpening unit (501) shown in FIG. 7 illustrates the case where image enlargement processing and aliasing distortion reduction processing are performed in one dimension; the case where they are performed in two dimensions (horizontally and vertically) is described below with reference to FIG. 8.
 FIG. 8 is a diagram schematically showing another example of a configuration that realizes the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
 In FIG. 8, processing blocks (501-HA), (501-HB), (501-V) having the same configuration as the image enlargement/sharpening unit (501) shown in FIG. 7 are used, and the outputs of the processing blocks (501-HA), (501-HB) are connected in series as the inputs of the processing block (501-V). The horizontal processing unit (601), consisting of the processing blocks (501-HA), (501-HB), performs horizontal image enlargement processing and aliasing distortion reduction processing, and the vertical processing unit (602), consisting of the processing block (501-V), performs vertical image enlargement processing and aliasing distortion reduction processing. By configuring the image enlargement/sharpening unit (111) in this way, two-dimensional image enlargement processing and aliasing distortion reduction processing can be realized using four series of signals (that is, inputs #0 to #3). The same processing can also be realized with a configuration in which the processing order of the horizontal processing unit (601) and the vertical processing unit (602) is interchanged.
 Here, the relationship between the image position shifts shown in FIGS. 2 and 3, the position shift amounts (α0, α1) shown in FIG. 7, and the horizontal position shift amounts (α0, α1, α2, α3) and vertical position shift amounts (β0, β1) shown in FIG. 8 will be described. In the following, it is assumed that, in the "four-frame position shift" shown in FIG. 2, an image having the first number of pixels (for example, 1920 horizontal pixels × 1080 vertical pixels) shown in FIG. 1 is position-shifted as (−h, −v) → (h, −v) → (h, v) → (−h, v) → (−h, −v) → ..., with the two-dimensional coordinates of the center of the rotational motion (the position of the image frame (200)) as the reference (0, 0) (where h = 2m, v = 2n, and m and n are positive integers). Similarly, in the "two-frame position shift" shown in FIG. 3, the image is assumed to be position-shifted as (−h, 0) → (h, 0) → (−h, 0) → ..., with the two-dimensional coordinates of the center of the horizontal reciprocating motion (the position of the image frame (200)) as the reference (0, 0).
 First, the case of the "four-frame position shift" shown in FIG. 2 will be described. For this "four-frame position shift", the two-dimensional processing blocks shown in FIG. 8 are used. As described above, when an image having the first number of pixels (for example, 1920 horizontal pixels × 1080 vertical pixels) is reduced to the second number of pixels (for example, 704 horizontal pixels × 480 vertical pixels) and transmitted, and is then enlarged to the third number of pixels (for example, 1408 horizontal pixels × 960 vertical pixels) for aliasing distortion reduction processing, the position shift (−h, −v) → (h, −v) → (h, v) → (−h, v) → (−h, −v) → ... in the image having the first number of pixels corresponds, in the image having the third number of pixels, to the position shift (−ph, −qv) → (ph, −qv) → (ph, qv) → (−ph, qv) → (−ph, −qv) → ..., where p = 1408/1920 and q = 960/1080. Therefore, by setting the horizontal position shift amounts (α0, α1, α2, α3) = (ph, −ph, −ph, ph) and the vertical position shift amounts (β0, β1) = (qv, −qv), the effect of the image position shift processing of the image position shift unit (103) on the image transmission device (101) side and that of the image position shift unit (302) on the image receiving device (109) side cancel each other out, and image blur can be stopped.
 Therefore, in the horizontal processing unit (601) of FIG. 8, setting the coefficient Kα = −1/tan(π(α0 − α1)) = −1/tan(2πph) = −1/tan(2πh × 1408/1920) = −1/tan(πh × 22/15) reduces the horizontal aliasing distortion. Likewise, in the vertical processing unit (602) of FIG. 8, setting the coefficient Kβ = −1/tan(π(β0 − β1)) = −1/tan(2πqv) = −1/tan(2πv × 960/1080) = −1/tan(πv × 16/9) reduces the vertical aliasing distortion.
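 A small sketch of these coefficient calculations; only the relation K = −1/tan(π(α0 − α1)) is taken from the description, while the values h = v = 2 (m = n = 1) are illustrative assumptions:

```python
import numpy as np

def alias_cancel_coefficient(a0: float, a1: float) -> float:
    """K = -1 / tan(pi * (a0 - a1)), the relation used for K, K_alpha, and K_beta above."""
    return -1.0 / np.tan(np.pi * (a0 - a1))

p, q = 1408 / 1920, 960 / 1080            # pixel-count ratios from the example above
h, v = 2, 2                               # h = 2m, v = 2n with m = n = 1 (illustrative values)
K_alpha = alias_cancel_coefficient(p * h, -p * h)   # equals -1/tan(2*pi*p*h)
K_beta  = alias_cancel_coefficient(q * v, -q * v)   # equals -1/tan(2*pi*q*v)
```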
 Next, the case of the "two-frame position shift" shown in FIG. 3 will be described. For this "two-frame position shift", the one-dimensional processing block shown in FIG. 7 is used for horizontal processing to perform horizontal image enlargement processing and aliasing distortion reduction processing, while the vertical processing performs only vertical image enlargement using the interpolation filter shown in FIG. 6; the horizontal and vertical processes are connected in series to realize two-dimensional processing. As described above, when an image having the first number of pixels (for example, 1920 horizontal pixels × 1080 vertical pixels) is reduced to the second number of pixels (for example, 704 horizontal pixels × 480 vertical pixels), transmitted via the communication network (107), and then enlarged to the third number of pixels (for example, 1408 horizontal pixels × 960 vertical pixels) for aliasing distortion reduction processing, the position shift (−h, 0) → (h, 0) → (−h, 0) → ... in the image having the first number of pixels corresponds, in the image having the third number of pixels, to the position shift (−ph, 0) → (ph, 0) → (−ph, 0) → ..., where p = 1408/1920. Therefore, by setting the horizontal position shift amounts (α0, α1) = (ph, −ph), the effect of the image position shift processing of the image position shift unit (103) on the image transmission device (101) side and that of the image position shift unit (302) on the image receiving device (109) side cancel each other out, and image blur can be stopped.
 Therefore, in the image enlargement/sharpening unit (501) of FIG. 7, setting the coefficient K = −1/tan(π(α0 − α1)) = −1/tan(2πph) = −1/tan(2πh × 1408/1920) = −1/tan(πh × 22/15) reduces the horizontal aliasing distortion.
 Note that even when the image has been reduced to a pixel count that is not a divisor of the original by the image reduction processing described above, the aliasing distortion contained in the image can be reduced and the image can be sharpened without performing iterations.
 Here, the configuration of FIG. 8 described above requires roughly three times the amount of computation of the configuration of FIG. 7, which can strain computing resources such as a CPU. A configuration that realizes equivalent operation while reducing the amount of computation of the configuration shown in FIG. 8 will therefore be described with reference to FIGS. 9 and 10.
 FIGS. 9 and 10 are diagrams schematically showing other examples of configurations that realize the image enlargement/sharpening function of the image enlargement/sharpening unit shown in FIG. 4.
 In the configuration of FIG. 7 described above, noting that the Hilbert transform is an odd-symmetric filter with fixed coefficients and that the coefficients Kα and Kβ are each constant, the adder (504), the subtractor (505), the Hilbert transformer (506), the multiplier (507), and the adder (508) are all linear operations with fixed coefficients and can therefore be expressed as a single "one-dimensional asymmetric filter" (described later). That is, the same output is obtained even if the configuration of FIG. 7 is rearranged so that the signal obtained by passing input #0 through the one-dimensional interpolation filter (503-#0) and then through the "one-dimensional asymmetric filter" and the signal obtained by passing input #1 through the one-dimensional interpolation filter (503-#1) and then through a "one-dimensional asymmetric filter" with the polarity (sign) of the subtractor (505) inverted are finally added together.
 Therefore, in FIG. 9, the order of the "one-dimensional asymmetric filter" in the horizontal processing unit (601) and the "one-dimensional enlargement unit and one-dimensional interpolation filter" in the vertical processing unit (602) of FIG. 8 is interchanged; the horizontal and vertical one-dimensional enlargement units are combined into a two-dimensional enlargement unit (702), the horizontal and vertical one-dimensional interpolation filters are combined into a two-dimensional interpolation filter (703), and the horizontal and vertical one-dimensional asymmetric filters are combined into a two-dimensional asymmetric filter (704), forming a two-dimensional processing unit (701).
 In this two-dimensional processing unit (701), the coefficients of the two-dimensional interpolation filter (703) and the two-dimensional asymmetric filter (704) are set in accordance with the horizontal position shift amounts (α0, α1, α2, α3) and the vertical position shift amounts (β0, β1) corresponding to inputs #0 to #3, yielding the two-dimensional processing units (701-#0 to #3).
 Subsequently, if the signals that have passed through the respective two-dimensional processing units (701-#0 to #3) are passed through the respective frame memories (705-#0 to #3) and all the signals are then added in the adder (706), an output equivalent to the configuration of FIG. 8 can be obtained.
 Thus, while the configuration shown in FIG. 8 cannot produce an output unless all operations of the horizontal processing unit (601) and the vertical processing unit (602) are performed using inputs #0 to #3, in the configuration of FIG. 9 the processing for each of inputs #0 to #3 is computed by its own two-dimensional processing unit (701-#0 to #3) independently of the other inputs. Since the inputs #0 to #3 arrive frame by frame in the order input #0, input #1, input #2, input #3, input #0, ..., the computation results for the image frames that are not currently being input only need to be kept in the frame memories (705-#0 to #3). Furthermore, since the two-dimensional processing units (701-#0 to #3) are not needed for image frames that are not being input, the configuration may provide only one two-dimensional processing unit (701), as shown in FIG. 10, whose internal coefficients are set while being changed with the frame number in accordance with the horizontal position shift amounts (α0, α1, α2, α3) and the vertical position shift amounts (β0, β1), and whose results are written to the frame memories (705-#0 to #3) selected frame by frame using the switch (707).
 Therefore, by configuring the image enlargement/sharpening unit (111) as shown in FIG. 10, the amount of computation can be reduced to about 1/6 (= 2 (horizontal + vertical) / 3 (horizontal + horizontal + vertical) × 1/4 (ratio of the number of input frames to be processed)) of that of the configuration of FIG. 8.
 Similarly, for the "two-frame position shift", the configuration of the image enlargement/sharpening unit (111) in FIG. 7 can be equivalently converted into the configuration of FIG. 11. That is, the one-dimensional enlargement units (502-#0), (502-#1) in FIG. 7 are combined into the one-dimensional enlargement unit (801) in FIG. 11, the one-dimensional interpolation filters (503-#0), (503-#1) in FIG. 7 are combined into the one-dimensional interpolation filter (802) in FIG. 11, and the adder (504), the subtractor (505), the Hilbert transformer (506), the multiplier (507), and the adder (508) in FIG. 7 are combined into the one-dimensional asymmetric filter (803) in FIG. 11, whose internal coefficients are set while being changed with the frame number in accordance with the horizontal position shift amounts (α0, α1); the results are written to the frame memories (805-#0), (805-#1) selected frame by frame using the switch (804), and adding the two signals in the adder (806) gives an output equivalent to the configuration of FIG. 7.
 That is, by configuring the image enlargement/sharpening unit (111) as shown in FIG. 11, the amount of computation can be reduced by roughly the amount of one one-dimensional interpolation filter compared with the configuration of FIG. 7.
 FIGS. 12 to 14 are a block diagram and operation explanatory diagrams showing examples of the configuration of the asymmetric filters included in the image enlargement/sharpening units of FIGS. 9 and 11.
 FIG. 12 is a block diagram showing an example of the configuration of the one-dimensional asymmetric filter included in the image enlargement/sharpening unit of FIG. 11.
 In FIG. 7 described above, input #0 passes through the one-dimensional enlargement unit (502-#0) and the one-dimensional interpolation filter (503-#0), after which the signal passing through the adder (504) and the signal passing through the subtractor (505), the Hilbert transformer (506), and the multiplier (507) are added in the adder (508) to form the output. If input #1 = 0 (no signal) is assumed, then for input #0 both the adder (504) and the subtractor (505) apply a coefficient of 1 (that is, the signal is unchanged). Therefore, as shown in FIG. 12, the filter coefficients of the one-dimensional asymmetric filter (704) for input #0 are such that the signal obtained by passing through the Hilbert transformer (506) and multiplying by the coefficient K is added to the input signal in the adder (508) to form the output.
 On the other hand, if input #0 = 0 (no signal) is assumed, then for input #1 the coefficient of the adder (504) is 1 and the coefficient of the subtractor (505) is −1. Therefore, the filter coefficients of the one-dimensional asymmetric filter (704) for input #1 are the values obtained by inverting the polarity (sign) of the coefficient K relative to the filter coefficients of the one-dimensional asymmetric filter (704) for input #0.
 The Hilbert transformer (506) is an odd-symmetric filter whose coefficient is C(t) = 0 when t = 2m (where m is an integer) and C(t) = 2/(πt) when t = 2m + 1 (where m is an integer). Therefore, in the configuration of FIG. 12, the filter coefficients of the one-dimensional asymmetric filter (704) exhibit an "asymmetric" behavior that is neither even-symmetric nor odd-symmetric, as shown in FIG. 13. The filter coefficients (C(t)) shown in FIG. 13 are only an example; for instance, the influence of the filter ends may be reduced by multiplying the Hilbert transform coefficients by a common window function (such as a Hanning window) to obtain the coefficients C(t) (where t ≠ 0).
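 A sketch of such asymmetric coefficients, combining the identity (direct) path with a windowed Hilbert path scaled by K; the tap count and window choice are illustrative assumptions:

```python
import numpy as np

def asymmetric_filter_coeffs(K: float, num_taps: int = 15) -> np.ndarray:
    """Identity path plus K times the windowed Hilbert path: delta(t) + K * C(t).

    The sign of K is inverted for input #1, as noted above. The resulting coefficients
    are neither even- nor odd-symmetric.
    """
    t = np.arange(num_taps) - (num_taps - 1) // 2
    c = np.zeros(num_taps)
    odd = (t % 2) != 0
    c[odd] = 2.0 / (np.pi * t[odd])       # Hilbert transformer coefficients C(t)
    c *= K * np.hanning(num_taps)         # scale by K and window to soften the filter ends
    c[t == 0] += 1.0                      # direct path: add the input itself at the center tap
    return c
```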
 FIG. 14 is a block diagram showing an example of the configuration of the two-dimensional asymmetric filter included in the image enlargement/sharpening unit of FIG. 9.
 This two-dimensional asymmetric filter (704) is formed by connecting the one-dimensional asymmetric filters (803) described above in series; the horizontal processing unit (803-H) converts the number of pixels in the horizontal direction and the vertical processing unit (803-V) converts the number of pixels in the vertical direction, thereby realizing two-dimensional pixel count conversion.
 Thus, by performing independent signal processing for each input image frame using asymmetric filters, the amount of computation required for aliasing distortion reduction can be reduced.
 Next, processing for images in which the subject moves will be described.
 The configuration of the image enlargement/sharpening unit (111) described above reduces the amount of computation by making per-pixel motion search and iteration unnecessary, realizing image sharpening capable of high-speed processing. To eliminate the per-pixel motion search, the position shift amounts (αn, βn) used in the configurations of FIGS. 7 to 11 are fixed values for each frame. This poses no problem when the subject in the image frame does not move; when the subject does move, however, the inter-frame operations produce multiple images (ghosting). A configuration example for solving this problem is described below.
 FIG. 15 is a block diagram showing another example of the configuration of the image enlargement/sharpening unit in the image receiving device of FIG. 1.
 In FIG. 15, the processing unit (1001) consisting of the image enlargement unit (301), the image position shift unit (302), and the aliasing distortion reduction unit (303) has the same configuration as that shown in FIG. 4. The signal whose inter-frame image blur has been suppressed by the image position shift unit (302) is passed through a low-pass filter (1002) for suppressing unnecessary aliasing distortion, and a motion detection unit (1003) described later generates a control signal (m, where 0 (no motion) ≤ m ≤ 1 (motion)) corresponding to the magnitude of motion of each pixel of the image. The weighted mixing unit (1004) then mixes the signal (p1) passed through the low-pass filter (1002) and the signal (p0) passed through the aliasing distortion reduction unit (303) with weights according to the control signal m to produce the output signal. Here, the weighted mixing unit (1004) consists of a subtractor (1005) that generates the control signal (1 − m) from the control signal m, multipliers (1006), (1007), and an adder (1008), and performs the operation "output = p1 × m + p0 × (1 − m)" for each pixel. That is, control is performed so that in regions where the value of the control signal m is close to 0 (nearly still regions) the signal (p0) passed through the aliasing distortion reduction unit (303) is output, and in regions where the value of the control signal m is close to 1 (regions with large motion) the signal (p1) passed through the low-pass filter (1002) is output.
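 A one-line sketch of the per-pixel weighted mixing described above, assuming NumPy arrays of equal shape:

```python
import numpy as np

def weighted_mix(p0: np.ndarray, p1: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Per-pixel blend: output = p1*m + p0*(1-m).

    p0 is the alias-reduced signal (used where m ~ 0, i.e. still regions) and
    p1 is the low-pass-filtered signal (used where m ~ 1, i.e. moving regions).
    """
    return p1 * m + p0 * (1.0 - m)
```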
 FIG. 16 is a block diagram showing an example of the configuration of the motion detection unit included in the image enlargement/sharpening unit of FIG. 15.
 In the motion detection unit (1003), in the case of the "four-frame position shift" described above, four consecutive frames of images input from the low-pass filter (1002) are stored in the frame memories (1102) while being switched by the switch (1101) according to the frame number (#0, #1, #2, #3, #0, ...). Subsequently, the images read from the frame memories (1102) are averaged pixel by pixel using the averaging unit (1103), the absolute value of the per-pixel difference between the average image and each frame is calculated using the subtractor (1104) and the absolute value unit (1105), the maximum of these values is obtained for each pixel by the maximum value unit (1106), and the value of m is normalized by the normalizer (1107) so that it falls within the range 0 ≤ m ≤ 1, yielding the control signal m. Since this normalization can be realized by the common technique of subtracting predetermined fixed values from, or multiplying them into, the output signal of the maximum value unit (1106), detailed illustration and description are omitted.
 With the motion detection unit (1003) configured in this way, the value of the control signal m becomes 0 in regions where the values of the four frames of images (#0, #1, #2, #3) input to the motion detection unit (1003) all match (that is, regions where the image is still), and in regions where the values of the four frames (#0, #1, #2, #3) do not match (that is, regions where the image is moving in at least one of the four frames) the control signal m takes a value 0 < m ≤ 1 according to the magnitude of the absolute value of the difference signal described above.
 In the case of the "two-frame position shift" described above, the switch (1101), frame memories (1102), averaging unit (1103), subtractor (1104), and absolute value unit (1105) in the configuration of the motion detection unit (1003) shown in FIG. 16 only need to be changed from four frames to two frames, which can be realized easily, so an illustrated explanation is omitted.
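 The following sketch outlines the motion-measure computation for either the four-frame or the two-frame case, assuming aligned frames as NumPy arrays; the fixed scaling constant stands in for the normalization, whose exact value the description leaves open:

```python
import numpy as np

def motion_control_signal(frames: list[np.ndarray], scale: float = 1.0 / 255.0) -> np.ndarray:
    """Per-pixel motion measure m in [0, 1] from N aligned frames (N = 4 or 2)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    mean = stack.mean(axis=0)            # averaging unit (1103)
    diff = np.abs(stack - mean)          # subtractor (1104) and absolute value unit (1105)
    m = diff.max(axis=0) * scale         # maximum value unit (1106), then a simple fixed scaling
    return np.clip(m, 0.0, 1.0)          # normalizer (1107) keeps m within [0, 1]
```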
 With the configuration described above, even when the subject in the image moves, a high-quality image can be obtained in the still regions of the image while suppressing the generation of the multiple images described above in the moving regions. For example, if this configuration is applied to a surveillance camera, a parked vehicle belongs to a still region of the image, so that its license plate or the like can be viewed as a fine image.
 Next, frame synchronization between the image transmission device and the image receiving device will be described.
 FIGS. 17 to 19 are block diagrams each showing an example of the configuration of the control units of the image transmission device and the image receiving device of FIG. 1.
 The image position shift unit (103) of the image transmission device (101) and the image position shift unit (302) of the image receiving device (109) must be controlled so that they operate in exact frame-by-frame synchronization with each other and shift positions in mutually opposite directions. One of the configurations shown in FIGS. 17 to 19 is used to obtain the frame phase information used for this synchronization. The frame phase information is, for example, one of the frame numbers #0, #1, #2, #3 in the case of the "four-frame position shift", and one of the frame numbers #0, #1 in the case of the "two-frame position shift".
 FIG. 17 is a diagram showing a configuration example in which synchronization is performed using the frame phase information generation unit of the image transmission device and the frame phase information acquisition unit of the image receiving device.
 In FIG. 17, the frame phase information generation unit (1201) of the control unit (106) of the image transmission device (101) generates frame phase information, and the multiplexing unit (1202) multiplexes it with the image data output from the encoding unit (105). This multiplexed data is transmitted to the image receiving device (109) via the communication network (107) and separated by the separation unit (1203), after which the image data is input to the decoding unit (110) and the frame phase information to the control unit (112). Thus, the frame phase information acquisition unit (1204) of the control unit (112) can obtain the same information as the frame phase information generated by the frame phase information generation unit (1201) of the image transmission device (101). Since the frame phase information generation unit (1201), the multiplexing unit (1202), the separation unit (1203), and the frame phase information acquisition unit (1204) can be realized with common techniques, detailed illustration is omitted.
 FIG. 18 is a diagram showing a configuration example in which synchronization is performed using the frame phase information generation unit of the image receiving device and the frame phase information acquisition unit of the image transmission device.
 In FIG. 18, the frame phase information generation unit (1206) of the control unit (112) of the image receiving device (109) generates frame phase information and transmits it to the image transmission device (101) via the communication network (107). Thus, the frame phase information acquisition unit (1205) of the control unit (106) of the image transmission device (101) can obtain the same information as the frame phase information generated by the frame phase information generation unit (1206) of the image receiving device (109). Since the frame phase information generation unit (1206) and the frame phase information acquisition unit (1205) can be realized with common techniques, detailed illustration is omitted.
 FIG. 19 is a diagram showing a configuration example in which synchronization is performed using the frame phase information estimation unit of the image receiving device.
 In FIG. 19, the frame phase information generation unit (1201) of the control unit (106) of the image transmission device (101) generates frame phase information, but this information is not transmitted to the image receiving device (109). In the image receiving device (109), the encoded image data transmitted from the image transmission device (101) is decoded by the decoding unit (110), and the frame phase information estimation unit (1207) of the control unit (112) performs frame phase information estimation processing based on the decoded image to obtain the frame phase information.
 FIG. 20 is an explanatory diagram showing an example of the operation of the frame phase information estimation processing of the frame phase information estimation unit shown in FIG. 19.
 In FIG. 20, the image (1301) is a D1-size image (704 horizontal pixels × 480 vertical pixels) obtained by reducing the image frame (201), which was position-shifted in the upper-left direction in FIG. 2, with the image reduction unit (104) of the image transmission device (101) shown in FIG. 1, and is given as an example for explanation. FIG. 20 shows, with the two-dimensional coordinates on the image (1301) denoted (x, y) (where x is the horizontal coordinate and y is the vertical coordinate), an example of the series of pixel values at y = 0, an example at y = 1, and an example at y = 2.
 The image frame (201) has been position-shifted in the upper-left direction, and as described above black (pixel value = 0) has been inserted at the right and bottom edges of the image. Therefore, in the reduced image (1301), at the left edge where the image originally exists, the magnitude relationship between the pixel value P(0, y) at position x = 0 and the pixel value P(1, y) at position x = 1 varies randomly according to the picture content. On the other hand, at the right edge where the image originally does not exist, black has been inserted, so the pixel value P(703, y) at position x = 703 is larger than the pixel value P(704, y) at position x = 704. Using this property, frame phase information estimation processing is performed to estimate whether the image has been position-shifted to the left or to the right.
 FIG. 21 is a flowchart showing the frame phase estimation processing in the frame phase information estimation unit.
 In FIG. 21, when the frame phase information estimation processing is started, the frame phase information estimation unit (1207) first prepares data (L, R) for a left-edge counter and a right-edge counter in memory, initializes each value to 0, prepares data (y) indicating the vertical coordinate in memory, and initializes it to y = 0 (step S1401). Next, the counter values (L, R) are updated according to the magnitude relationships between the two pixel values at the left edge of the image (P(0, y) and P(1, y)) and between the two pixel values at the right edge (P(703, y) and P(704, y)) (step S1402). That is, in step S1402, if P(0, y) ≥ P(1, y) the left-edge counter value (L) is incremented by 1, and if P(704, y) ≥ P(703, y) the right-edge counter value (R) is incremented by 1. Next, it is determined whether the vertical coordinate being processed has reached the bottom edge of the image, that is, whether y = 479 (step S1403); if the determination result is NO, the data (y) is incremented by one, y ← y + 1 (step S1404), and the processing of steps S1404 and S1402 is repeated until the determination result of step S1403 becomes YES. If the determination result in step S1403 is YES, the image shift direction is estimated according to the counter values (L, R) (step S1405). That is, in step S1405, if the left-edge counter value (L) ≥ the right-edge counter value (R), the image is estimated to have been shifted to the left; otherwise, it is estimated to have been shifted to the right. When step S1405 ends, the processing ends.
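 A vectorized sketch of the flow of FIG. 21, assuming the reduced luminance image as a NumPy array whose first and last column pairs correspond to the edge pixels compared above:

```python
import numpy as np

def estimate_horizontal_shift(img: np.ndarray) -> str:
    """Per-row edge comparison and voting, following the flow of FIG. 21."""
    left  = img[:, 0] >= img[:, 1]       # P(0, y) >= P(1, y) for every row y (left-edge test, step S1402)
    right = img[:, -1] >= img[:, -2]     # rightmost pixel pair per row (right-edge test, step S1402)
    L, R = int(left.sum()), int(right.sum())   # counters L and R after visiting every row
    return "left" if L >= R else "right"       # step S1405: L >= R means the image was shifted left
```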
 With the above processing, it can be estimated whether the image has been position-shifted to the left or to the right. In FIG. 21, only the horizontal frame phase information estimation processing is illustrated for simplicity of explanation, but frame phase information estimation can also be performed for the vertical direction by the same kind of processing. That is, by using a procedure in which the data values and update directions shown in FIG. 21 are rotated 90 degrees with respect to the image, whether the image has been shifted up or down can be estimated in the same way. It is also clear that the same estimation can be performed by comparing the variances of the pixel values instead of the magnitude relationships of the pixel values at the image edges, or by using both the magnitude relationships and the variances.
 Here, the image position shift processing in the image position shift unit (103) will be considered further. Depending on the type of image captured by the camera (102) of the image transmission device (101), the content of the image position shift processing must be taken into account. For example, when the camera (102) supplies image data from a single-chip image sensor on which a Bayer-array color filter is arranged, the operation of the image position shift processing of the image position shift unit (103) must be constrained.
 FIG. 22 is a diagram showing the arrangement of pixels in a Bayer-array color filter.
 As shown in FIG. 22, in the Bayer-array color filter (1501), the R (red), G (green), and B (blue) color filters are regularly arranged with a period of two pixels in both the horizontal and vertical directions. Consequently, when an image signal obtained by photoelectrically converting, with a single-chip image sensor, the light transmitted through the Bayer-array color filter (1501) (hereinafter referred to as a Bayer image signal) is input to the image position shift unit (103), shifting the position by an odd number of pixels in the horizontal direction, the vertical direction, or both changes the positional relationship of R (red), G (green), and B (blue), so that the colors of the image after the position shift change.
 Therefore, when a Bayer image signal is input to the image position shift unit (103), the image position shift processing is performed by shifting the position in units of two pixels (an even number of pixels) in both the horizontal and vertical directions so that the positional relationship of R (red), G (green), and B (blue) remains the same as before the position shift. After the position has been shifted in units of two pixels (an even number of pixels) in this way, each of the signal processes described in this embodiment can be applied as-is to color signals (such as RGB or YUV) converted from the Bayer image signal, or only to the luminance signal Y converted from the Bayer image signal.
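 A trivial sketch of this constraint, assuming integer shift requests; it simply rounds each component down to an even number of pixels:

```python
def bayer_safe_shift(dx: int, dy: int) -> tuple[int, int]:
    """Force a requested integer shift to an even number of pixels in each direction
    so that the R/G/B phase of the Bayer mosaic is preserved."""
    return 2 * (dx // 2), 2 * (dy // 2)
```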
 図23は、画像蓄積装置の一構成例を模式的に示す機能ブロック図である。 FIG. 23 is a functional block diagram schematically showing a configuration example of the image storage device.
 図23において、画像蓄積装置(108)は、符号化された画像データを蓄積(録画)したり再生したりするための装置であり、通信インタフェース(1601)、制御部(1602)、メモリ(1603)、ストレージ(1604)、出力インタフェース(1605)、入力インタフェース(1606)などの処理ブロック及び各処理ブロック接続するバス(1607)から概略構成されている。 In FIG. 23, an image storage device (108) is a device for storing (recording) or reproducing encoded image data, and includes a communication interface (1601), a control unit (1602), a memory (1603). ), Storage (1604), output interface (1605), input interface (1606), and other processing blocks, and a bus (1607) connecting each processing block.
 画像蓄積装置(108)は、アプリケーションプログラムをストレージ(1604)に格納しており、制御部(1602)がストレージ(1604)から上記プログラムをメモリ(1603)に展開し、制御部(1602)が上記プログラムを実行することで、録画、再生、検索等の各種機能を実現することができる。なお、以後の説明では簡単のために、制御部(1602)が各アプリケーションプログラムを実行して実現する各種機能を、メモリ(1603)に展開された「各種プログラム機能部」が主体となって実現するものとして説明するが、同様の機能を持つ処理部としてハードウェアを用い、各処理部が主体となって各機能を実現することも可能である。 The image storage device (108) stores an application program in the storage (1604), the control unit (1602) expands the program from the storage (1604) to the memory (1603), and the control unit (1602) By executing the program, various functions such as recording, reproduction, and search can be realized. In the following description, for the sake of simplicity, various functions realized by the execution of each application program by the control unit (1602) are mainly realized by "various program function units" expanded in the memory (1603). However, it is also possible to use hardware as a processing unit having the same function and implement each function mainly by each processing unit.
The application program may be stored in the storage (1604) in advance at the time the image storage device (108) is shipped, or it may be stored on a medium such as an optical disc (CD (Compact Disc), DVD (Digital Versatile Disc)) or a semiconductor memory and installed in the image storage device (108) via a medium connection unit (not shown). It is also possible to download the program from the communication network (107) via the communication interface (1601) and install it.
The communication interface (1601) is connected to the communication network (107) and has functions for receiving, via the communication network (107), image data from the image transmission device (101) shown in FIG. 1 and for transmitting image data to the image receiving device (109). The control unit (1602) controls the communication interface (1601), the memory (1603) (various program function units), the storage (1604), and the input/output interfaces (1605, 1606). The control unit (1602) also has a function of executing various kinds of signal processing according to the processing procedures described later. In the memory (1603), the function units of the application programs stored in the storage (1604) are expanded under the control of the control unit (1602). The storage (1604) accumulates the image data from the image transmission device (101) and stores the application programs and the various kinds of information created by them. The output interface (1605) has a function of outputting an image resulting from the signal processing in the control unit to an external device via the bus (1607); the output image is displayed on an external display unit (1608). The input interface (1606) has a function of receiving a signal from the operation unit (1609) and passing it to the control unit (1602) via the bus (1607).
The image storage device (108) configured in this way follows the operation sequence below during recording. At recording time, the image storage device (108) loads a recording application program (not shown) stored in the storage (1604) into the memory (1603), and the control unit (1602) controls each unit according to the procedure described in that recording application program. First, a connection to the image transmission device (101) shown in FIG. 1 is established via the communication interface (1601) and the communication network (107). Then, the encoded image data transmitted from the image transmission device (101) is received via the communication interface (1601) and the communication network (107) and accumulated in the storage (1604) via the bus (1607). At this time, the received encoded image data may also be decoded by the control unit (1602), output via the output interface (1605), and displayed on the display unit (1608). Next, the operation sequence of the image storage device (108) at playback time will be described.
During playback, the image storage device (108) follows the operation sequence below. At playback time, the image storage device (108) loads a playback application program (not shown) stored in the storage (1604) into the memory (1603), and the control unit (1602) controls each unit according to the procedure described in that playback application program. The encoded image data accumulated in the storage (1604) is then read out via the bus (1607) and transmitted to the image receiving device (109) via the communication interface (1601) and the communication network (107).
At transmission time, the encoded image data to be transmitted may also be decoded by the control unit (1602) and the resulting image output to and displayed on the display unit (1608) via the output interface (1605). The image storage device (108) alone may also read out the encoded image data accumulated in the storage (1604), decode it in the control unit (1602), and output and display the image on the display unit (1608) via the output interface (1605). However, as described above, if an image whose position has been shifted by the image position shift unit (103) of the image transmission device (101) is displayed on the display unit (1608) as it is, the image blurs periodically. Therefore, the various program function units expanded in the memory (1603) include a frame phase information estimation function (1610) having the same function as the frame phase information estimation unit (1207) and an image position shift function (1611) having the same function as the image position shift unit (302), and the image blur is canceled by performing processing that offsets and cancels the effect of the image position shift processing. That is, the image position shift function (1611) can be realized by using the operation of the image position shift unit (302) as it is, with the third number of pixels in the operation of the image position shift unit (302) in FIG. 4 read as the number of pixels of the encoded image data accumulated in the storage (1604) (the second number of pixels).
The image position shift function (1611) can also be realized by a convolution with a linear filter having asymmetric coefficients.
FIG. 24 is a diagram illustrating an example of the filter coefficients used in the image position shift function of FIG. 23.
In FIG. 24, coefficients (a) to (d) are the filter coefficients convolved with the decoded image. In each of the coefficients (a) to (d), the symbols or numerical values in parentheses (that is, 0, α, 1-α, β, 1-β, and so on) represent the values of the coefficients, the superscript "T" on the parentheses denotes transposition, and the symbol "*" denotes a convolution operation. That is, the coefficients (a) to (d) in FIG. 24 represent the coefficients of a two-dimensional filter with 3 horizontal taps × 3 vertical taps.
Here, if α > 0 and β > 0, the center of gravity of the two-dimensional filter represented by coefficient (a) is displaced toward the lower right, so an image convolved with this filter is shifted toward the lower right. Likewise, an image convolved with coefficient (b) is shifted toward the lower left, an image convolved with coefficient (c) is shifted toward the upper right, and an image convolved with coefficient (d) is shifted toward the upper left.
For example, suppose that an image having the first number of pixels in FIG. 1 (for example, 1920 horizontal pixels × 1080 vertical pixels) is position-shifted in the "4-frame position shift" of FIG. 2 along the two-dimensional coordinates (-h, -v) → (h, -v) → (h, v) → (-h, v) → (-h, -v) → ..., with the center of the rotational motion taken as the origin (0, 0). In an image having the number of pixels of the encoded image data accumulated in the storage (1604) (the second number of pixels), this corresponds to a position shift of (-ph, -qv) → (ph, -qv) → (ph, qv) → (-ph, qv) → (-ph, -qv) → ..., where p = 704/1920 and q = 480/1080.
Accordingly, if the filter coefficients are determined as α = ph and β = qv and are convolved with the accumulated images while being switched frame by frame in the order of coefficients (a) → (b) → (d) → (c) → (a) → ..., the blur of the image can be suppressed. With this configuration, image blur can be suppressed when the images accumulated in the image storage device (108) are reproduced on the display unit (1608).
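The frame-by-frame switching of these blur-cancelling kernels can be sketched as follows in Python (NumPy and SciPy assumed available). Because FIG. 24 itself is not reproduced here, the separable 3×3 tap layout below is an assumption; only the centroid displacement by (α, β) = (ph, qv) and the switching order (a) → (b) → (d) → (c) are taken from the description above.

import numpy as np
from scipy.ndimage import convolve

def tap3(w, toward_positive):
    # 3-tap kernel whose centroid is displaced by w pixels toward the
    # positive (right/down) or the negative (left/up) direction
    return np.array([0.0, 1.0 - w, w]) if toward_positive else np.array([w, 1.0 - w, 0.0])

def shift_kernel(alpha, beta, right, down):
    # separable 3x3 kernel standing in for coefficients (a)-(d) of FIG. 24
    return np.outer(tap3(beta, down), tap3(alpha, right))

def cancel_blur(frames, h, v, p=704/1920, q=480/1080):
    # convolve each accumulated frame with the kernel for its phase,
    # switching in the order (a) -> (b) -> (d) -> (c) described above
    alpha, beta = p * h, q * v
    order = [shift_kernel(alpha, beta, right=True,  down=True),   # (a): lower right
             shift_kernel(alpha, beta, right=False, down=True),   # (b): lower left
             shift_kernel(alpha, beta, right=False, down=False),  # (d): upper left
             shift_kernel(alpha, beta, right=True,  down=False)]  # (c): upper right
    return [convolve(f, order[i % 4], mode='nearest') for i, f in enumerate(frames)]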
The "two-dimensional filter with 3 horizontal taps × 3 vertical taps" described above is merely an example used to explain the operation; the present invention is not limited to it, and a two-dimensional filter with a different number of taps may obviously be used. It is also obvious that an image whose blur has been suppressed in this way may be transmitted to the outside via the communication interface (1601) and the communication network (107).
The effects of the present embodiment configured as described above will now be explained.
The super-resolution processing cited as prior art requires an extremely large amount of computation compared with, for example, edge enhancement processing that merely amplifies the high-frequency components contained in an image. For example, in a super-resolution technique that obtains an output image by successive approximation using input images of a plurality of frames, as described in Non-Patent Document 1, inter-frame alignment (registration) accompanied by high-precision, sub-pixel motion search for every pixel, image enlargement, reduction of the enlarged image, detection of the difference between the reduced image and the input image, and correction of the output image must be repeated many times (iteration). In other words, because per-pixel motion search and iteration involve an extremely large amount of computation, the prior art can exceed the processing limits of computing resources such as the CPU and memory, causing dropped frames that make the motion of the image jerky, stopping the entire process, or leaving the system unresponsive to user input from a mouse, keyboard, or the like.
In contrast, the present embodiment includes: an image position shift unit (103) (first image position shift unit) that shifts, for each of a plurality of temporally continuous images, the position of the image to one of a plurality of predetermined shift positions; an image reduction unit (104) that reduces the number of pixels of the image position-shifted by the image position shift unit (103); an encoding unit (105) that encodes the image reduced by the image reduction unit (104) to generate an encoded image; a decoding unit (110) that decodes the encoded image sent via the communication network (107) to generate a decoded image; an image enlargement unit (301) that enlarges the decoded image decoded by the decoding unit (110) by increasing its number of pixels; an image position shift unit (302) (second image position shift unit) that position-shifts the image enlarged by the image enlargement unit (301) so as to offset and cancel the position shift performed by the image position shift unit (103); and an aliasing distortion reduction unit (303) that performs aliasing distortion reduction processing on the image position-shifted by the image position shift unit (302), using information of other images position-shifted by the image position shift unit (302). With this configuration, the encoded data size can be reduced while suppressing both an increase in the amount of computation required for image processing and a deterioration in image quality. That is, by eliminating the need for per-pixel motion search and iteration, the amount of computation can be reduced, which makes it possible to realize image sharpening that can be processed at high speed even with relatively modest computing resources, together with image transmission with a small transmission data size.
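For illustration only, the order of operations summarized above can be written out as the following Python sketch; shift, reduce, encode, decode, enlarge, cancel_shift, and reduce_aliasing are placeholders standing in for units (103), (104), (105), (110), (301), (302), and (303), whose internals are described elsewhere in this specification.

def transmit_side(frames, shift_positions, shift, reduce, encode):
    # (103) position shift -> (104) reduction -> (105) encoding
    coded = []
    for i, frame in enumerate(frames):
        pos = shift_positions[i % len(shift_positions)]
        coded.append(encode(reduce(shift(frame, pos))))
    return coded

def receive_side(coded, shift_positions, decode, enlarge, cancel_shift, reduce_aliasing):
    # (110) decoding -> (301) enlargement -> (302) cancelling shift -> (303) aliasing reduction
    restored = []
    for i, data in enumerate(coded):
        pos = shift_positions[i % len(shift_positions)]
        restored.append(cancel_shift(enlarge(decode(data)), pos))
    return reduce_aliasing(restored)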
<Second Embodiment>
A second embodiment of the present invention will be described with reference to FIGS. 25 and 26.
In the present embodiment, the image transmission apparatus of the first embodiment is configured to have a function of switching the image position shift processing on and off, so that an image that does not blur periodically can be provided even when it is displayed via an image receiving device or image storage device that has no means for performing image position shift processing that offsets and cancels the original shift (that is, no means for shifting the position in the reverse direction).
FIG. 25 is a functional block diagram schematically showing a configuration example of the image transmission apparatus according to the present embodiment. In the figure, members identical to those of the first embodiment are given the same reference numerals, and their description is omitted.
In FIG. 25, the image transmission device (1802) includes: a camera (102) that captures images (still images and moving images); an image position shift unit (103) that performs image position shift processing on the image captured by the camera (102); a switch (1803) that selects whether the image signal from the camera (102) is input directly to the image reduction unit (104) or is first passed through the image position shift unit (103) and then input to the image reduction unit (104); an image reduction unit (104) that performs image reduction processing on the image input via the switch (1803); an encoding unit (105) that performs encoding processing on the image subjected to the image reduction processing; and a control unit (106) that controls the overall operation of the image transmission device (1802), including the camera (102), the image position shift unit (103), the switch (1803), the image reduction unit (104), and the encoding unit (105), on the basis of commands and data transmitted from the image receiving device (109) and the image storage device (108) via the communication network (107).
FIG. 26 is a diagram showing an example of a menu display screen generated by the control unit of the image receiving device or the image storage device and displayed on the display unit.
In FIG. 26, the menu display screen (1901), which is generated by the control unit (112, 1602) of the image receiving device (109) or the image storage device (108) and displayed on the display unit (113, 1608), has: a message portion (1902) indicating that this is a menu display for selecting the operation related to the image position shift processing of the image transmission device (1802); a message portion (1903) indicating that image position shift processing is not to be performed; a selection portion (1905) for selecting not to perform image position shift processing; a message portion (1904) indicating that image position shift processing is to be performed; and a selection portion (1906) for selecting to perform image position shift processing. FIG. 26 illustrates the case where the user of the image receiving device (109) or the image storage device (108) has selected to perform the image position shift. The message portion (1903) indicating that image position shift processing is not to be performed may include a message indicating that an improvement in resolution cannot be expected, and the message portion (1904) indicating that image position shift processing is to be performed may include a message indicating that an improvement in resolution can be expected.
When the image transmission device (1802) is connected via the communication network (107) to an image receiving device (109) or image storage device (108) that has the image position shift unit (302), the user selects the selection portion (1906) on the menu display screen (1901). The switch (1803) is thereby switched to the lower side in the figure, and the image that has passed through the image position shift unit (103) is reduced by the image reduction unit (104), encoded by the encoding unit (105), and transmitted to the image receiving device (109) or the image storage device (108) via the communication network (107). As a result, the image receiving device (109) can output a high-quality image with reduced aliasing distortion.
On the other hand, when the image transmission device (1802) is connected via the communication network (107) to an image receiving device or image storage device that does not have the image position shift unit (302), the user selects the selection portion (1905) on the menu display screen (1901). The switch (1803) is thereby switched to the upper side in the figure, and the image is reduced by the image reduction unit (104) without passing through the image position shift unit (103), encoded by the encoding unit (105), and transmitted to the image receiving device (109) or the image storage device (108) via the communication network (107). As a result, even such an image receiving device can output an image free from blur.
The switch (1803) is controlled according to a control signal sent from the control unit (1804) in response to the setting on the menu display screen (1901), but it may also be made controllable in response to a command transmitted automatically from the image receiving device (109) or the image storage device (108) via the communication network (107). That is, an image receiving device (109) or image storage device (108) that has the image position shift unit (302) may be set in advance to send a command indicating that image position shift processing is to be performed, and when this command is not received by the control unit (1804), it may be determined that an image receiving device without the image position shift unit (302) is connected and the switch (1803) may be switched to the upper side in the figure.
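A minimal sketch of this automatic control of the switch (1803) is shown below; the command string used here is an illustrative assumption, since no concrete command format is defined in this description.

def select_switch_position(received_commands):
    # 'lower' = route the image through the image position shift unit (103),
    # 'upper' = bypass it, corresponding to the two positions of the switch (1803)
    if "image_position_shift_supported" in received_commands:
        return "lower"
    return "upper"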
The other configurations are the same as those of the first embodiment.
The present embodiment configured as described above can also obtain the same effects as the first embodiment.
In addition, an image subjected to image position shift processing by the image position shift unit (103) blurs periodically when displayed on an image receiving device that has no means for performing image position shift processing that offsets and cancels it (that is, no means for shifting the position in the reverse direction). Because the present embodiment is provided with a configuration for switching the image position shift processing on and off, a suitable image can be displayed regardless of whether the transmission device is connected to an image receiving device of the present embodiment that includes the image position shift unit or to another image receiving device that does not.
<Third Embodiment>
A third embodiment of the present invention will be described with reference to FIGS. 27 to 29.
The present embodiment shows another configuration that can provide an image that does not blur periodically even when it is displayed via an image receiving device that has no means for performing image position shift processing that offsets and cancels the original shift (that is, no means for shifting the position in the reverse direction).
FIG. 27 is a functional block diagram schematically showing a configuration example of the image transmission apparatus according to the present embodiment, and FIG. 28 is a functional block diagram schematically showing a configuration example of the image receiving apparatus according to the present embodiment. In the figures, members identical to those of the first embodiment are given the same reference numerals, and their description is omitted.
In FIG. 27, the image transmission device (2001) includes: a camera (102) that captures images (still images and moving images); an image position shift unit (103) that performs image position shift processing on the image captured by the camera (102); an image reduction unit (104) that performs image reduction processing on the image subjected to the image position shift processing; an image position shift unit (2002) that position-shifts the reduced image so as to offset and cancel the position shift performed by the image position shift unit (103); an encoding unit (105) that performs encoding processing on the image position-shifted by the image position shift unit (2002); and a control unit (2003) that controls the overall operation of the image transmission device (2001), including the camera (102), the image position shift unit (103), the image reduction unit (104), the image position shift unit (2002), and the encoding unit (105), on the basis of commands and data received via the communication network (107).
The image position shift unit (2002) shifts the image, under the control of the control unit (2003), in the direction opposite to the image blur caused by the image position shift unit (103), and operates in the same way as the image position shift function (1611) of the image storage device (108). That is, the operation of the image position shift unit (302) can be used as it is by reading the third number of pixels in the operation of the image position shift unit (302) in FIG. 4 as the number of pixels of the image data to be transmitted (the second number of pixels). The asymmetric filter described above can also be used as the image position shift unit (2002).
In FIG. 28, the image receiving device (2004) includes: a decoding unit (110) that performs decoding processing on the image sent from the image transmission device (2001) via the communication network (107); an image position shift unit (2005) that performs, on the decoded image, the same image position shift processing as the image position shift unit (103); an image enlargement/sharpening unit (111) that performs image enlargement and sharpening processing on the image position-shifted by the image position shift unit (2005); and a control unit (2006) that controls the overall operation of the image receiving device (2004).
The image position shift unit (2005) operates under the control of the control unit (2006) and position-shifts the image so as to offset and cancel the position shift performed by the image position shift unit (2002) of the image transmission device (2001).
FIG. 29 is a diagram illustrating an example of the filter coefficients used in the image position shift unit of FIG. 28.
In FIG. 29, coefficients (a) to (d) are the filter coefficients convolved with the decoded image. In each of the coefficients (a) to (d), the symbols or numerical values in parentheses (that is, 0, α, 1-α, β, 1-β, and so on) represent the values of the coefficients, the superscript "-1" on the parentheses denotes the inverse characteristic, the superscript "T" denotes transposition, and the symbol "*" denotes a convolution operation. That is, the coefficients (a) to (d) in FIG. 29 represent the coefficients of the inverse filters of the two-dimensional filters shown in FIG. 24. Since each coefficient of an inverse filter can be obtained by a common technique, a detailed explanation is omitted.
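As one example of such a common technique, an approximate inverse of a small FIR shift filter like those of FIG. 24 can be obtained by regularized inversion in the frequency domain, sketched below in Python. The FFT size and the regularization constant are illustrative assumptions, and the result is only an FIR approximation of the exact inverse.

import numpy as np

def inverse_filter(kernel, size=(15, 15), eps=1e-3):
    # regularised (Wiener-like) inversion of the forward frequency response
    H = np.fft.fft2(kernel, s=size)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    h_inv = np.real(np.fft.ifft2(H_inv))
    return np.fft.fftshift(h_inv)  # centre the impulse response in the array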
The other configurations are the same as those of the first embodiment.
The present embodiment configured as described above can also obtain the same effects as the first embodiment.
Furthermore, when the image transmission device (2001) and the image receiving device (2004) are used in combination, the image receiving device (2004) can output a sharp image through the same operation as in the first embodiment, and even when the image transmission device (2001) is connected to an image receiving device or image storage device that does not include an image position shift unit, the image blur that would otherwise appear at display time can be suppressed.
<Fourth Embodiment>
A fourth embodiment of the present invention will be described with reference to FIG. 30.
In the present embodiment, the image storage device (108) of the first embodiment is used as an image receiving device (1612).
In the image processing system of the present embodiment, the image transmission device (101) is used as the image transmission device, and the control unit (1602) controls each unit according to the processing described in an image receiving application program (not shown) stored in the storage (1604) of the image storage device (108) (described in detail later with reference to FIG. 30), whereby the image storage device (108) is also used as the image receiving device (1612) (see FIG. 23).
FIG. 30 is a flowchart showing an example of the processing in the image receiving apparatus according to the present embodiment.
In FIG. 30, the image receiving device (1612) first acquires the encoded image data via the communication network (107) (step S2201), and the control unit (1602) decodes the image data and stores it in the memory (1603) (step S2202). Next, the frame phase information is acquired (step S2205). In parallel with step S2205, the image is enlarged (step S2203) and the position of the image is shifted (step S2204). These processes correspond to the processing of the image enlargement unit (301) and the image position shift unit (302) shown in FIG. 4, respectively.
Next, aliasing distortion reduction processing is performed (step S2206). In step S2206, two-dimensional processing is performed (step S2207), the image is stored in a predetermined frame memory (the memory (1603) in FIG. 23) (step S2208), and the values of all the frame memories are added for each identical pixel position (step S2209). These processes correspond to the processing of the configurations shown in FIGS. 9 and 10. In parallel with step S2206, low-pass filter processing for suppressing unwanted aliasing distortion is performed (step S2210). Then, motion detection processing is performed to obtain a control signal m (step S2211). This processing corresponds to the processing of the configuration shown in FIG. 16.
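The accumulation of steps S2207 to S2209 can be sketched as follows; the bank of four frame memories matches the 4-frame position shift used earlier as an example, the frame passed in is assumed to have already undergone the two-dimensional processing of step S2207, and any gain normalization is assumed to be handled within that processing.

import numpy as np
from collections import deque

class AliasingReducer:
    def __init__(self, num_memories=4):
        self.memories = deque(maxlen=num_memories)  # frame memory bank (step S2208)

    def process(self, frame_2d_processed):
        # store the 2-D-processed frame, then add all stored frames at
        # identical pixel positions (step S2209)
        self.memories.append(frame_2d_processed)
        return np.sum(np.stack(list(self.memories)), axis=0)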
Next, weighted mixing is performed using the aliasing-reduced image obtained in step S2206, the low-pass-filtered image obtained in step S2210, and the control signal m obtained in step S2211 to obtain an output image (step S2212). This processing corresponds to the operation of the weighted mixing unit (1004) shown in FIG. 15. The output image signal obtained in step S2212 is output via the output interface (1605) (step S2213), and the processing ends.
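A minimal sketch of the weighted mixing of step S2212 is given below; it assumes that the control signal m is a per-pixel value between 0 and 1, with larger values indicating motion, which is an assumption about the output of step S2211 rather than something stated explicitly here.

import numpy as np

def weighted_mix(aliasing_reduced, low_passed, m):
    # still regions (m near 0) take the aliasing-reduced result,
    # moving regions (m near 1) take the low-pass-filtered result
    m = np.clip(m, 0.0, 1.0)
    return (1.0 - m) * aliasing_reduced + m * low_passed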
The processing shown in steps S2201 to S2213 above may be controlled by the control unit (1602) so that it is executed every time image data is transmitted from the image transmission device (101) to the image receiving device (1612). Alternatively, every time the processing of steps S2201 to S2213 is completed, a command instructing transmission of image data may be transmitted from the image receiving device (109) to the image transmission device (101). Furthermore, by providing the various program function units expanded in the memory (1603) of FIG. 23 with a further image position shift function (not shown), the operation of the image position shift unit (2005) described with reference to FIG. 28 (that is, the operation shown in FIG. 29) can obviously be realized, and it is obvious that the same operation as the configuration shown in FIG. 28 can be realized using the configuration of the image receiving device (1612).
The other configurations are the same as those of the first embodiment.
The present embodiment configured as described above can also obtain the same effects as the first embodiment.
The present invention is not limited to the embodiments described above, and various modifications are included. For example, the above embodiments have been explained in detail in order to make the present invention easy to understand, and the invention is not necessarily limited to configurations having all of the described components. Each of the above configurations and functions may be realized in part or in whole by, for example, designing them as an integrated circuit. Each of the above configurations and functions may also be realized by software, with a processor interpreting and executing a program that realizes each function.
That is, the functions of each embodiment of the present invention can be realized by software program code. In this case, a storage medium on which the program code is recorded is provided to a system or apparatus, and the computer (or CPU or MPU) of that system or apparatus reads out the program code stored in the storage medium. In this case, the program code itself read out from the storage medium realizes the functions of the embodiments described above, and the program code itself and the storage medium storing it constitute the present invention. As a storage medium for supplying such program code, for example, a flexible disk, CD-ROM, DVD-ROM, hard disk, optical disc, magneto-optical disc, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like is used.
Further, on the basis of the instructions of the program code, an OS (operating system) or the like running on the computer may perform part or all of the actual processing, and the functions of the embodiments described above may be realized by that processing. Furthermore, after the program code read out from the storage medium is written into a memory on the computer, the CPU or the like of the computer may perform part or all of the actual processing on the basis of the instructions of the program code, and the functions of the embodiments described above may be realized by that processing.
Furthermore, the program code of the software that realizes the functions of the embodiments may be distributed via a network and stored in storage means such as a hard disk or memory of a system or apparatus, or in a storage medium such as a CD-RW or CD-R, and at the time of use the computer (or CPU or MPU) of that system or apparatus may read out and execute the program code stored in the storage means or the storage medium.
Finally, it should be understood that the processes and techniques described here are not inherently related to any particular apparatus and can be implemented by any suitable combination of components. Furthermore, various types of general-purpose devices can be used in accordance with the teachings described here, and it may prove beneficial to construct a dedicated apparatus to execute the steps of the methods described here. Various inventions can also be formed by appropriately combining the plurality of components disclosed in the embodiments; for example, some components may be removed from all of the components shown in an embodiment, and components from different embodiments may be combined as appropriate. Although the present invention has been described with reference to specific examples, these are in all respects illustrative rather than restrictive. Those skilled in the art will appreciate that there are numerous combinations of hardware, software, and firmware suitable for implementing the present invention. For example, the described hardware may be implemented with an ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array), or the like, and the described software may be implemented in a wide range of programming or scripting languages such as assembler, C/C++, perl, Shell, PHP, Python, and Java (registered trademark).
Furthermore, in the embodiments described above, the control lines and information lines shown are those considered necessary for the explanation, and not all of the control lines and information lines in a product are necessarily shown. All of the components may be connected to one another.
In addition, other implementations of the present invention will become apparent to those having ordinary skill in the art from consideration of the specification and embodiments of the invention disclosed here. The various aspects and/or components of the described embodiments can be used alone or in any combination in a computerized storage system having the function of managing data.

Claims (11)

  1.  An image processing system comprising:
     a first image position shift unit that shifts, for each of a plurality of temporally continuous images, the position of the image to one of a plurality of predetermined shift positions;
     an image reduction unit that reduces the image position-shifted by the first image position shift unit by decreasing its number of pixels;
     an encoding unit that encodes the image reduced by the image reduction unit to generate an encoded image;
     a decoding unit that decodes the encoded image sent via a communication network to generate a decoded image;
     an image enlargement unit that enlarges the decoded image decoded by the decoding unit by increasing its number of pixels;
     a second image position shift unit that position-shifts the image enlarged by the image enlargement unit so as to cancel the position shift performed by the first image position shift unit; and
     an aliasing distortion reduction unit that performs aliasing distortion reduction processing for reducing aliasing distortion of the image position-shifted by the second image position shift unit, using information of another image position-shifted by the second image position shift unit.
  2.  The image processing system according to claim 1, further comprising:
     a third image position shift unit that shifts the image reduced by the image reduction unit to a position that cancels the position shift performed by the first image position shift unit and sends it to the encoding unit; and
     a fourth image position shift unit that shifts the decoded image decoded by the decoding unit to a position that cancels the position shift performed by the third image position shift unit and sends it to the image enlargement unit.
  3.  The image processing system according to claim 1, wherein the image reduction unit reduces the image to an image having a number of pixels that is a non-divisor of (that is, not an integer fraction of) the number of pixels of the original image.
  4.  The image processing system according to claim 1, wherein the aliasing distortion reduction unit uses an asymmetric filter having asymmetric coefficients.
  5.  The image processing system according to claim 1, wherein the second image position shift unit performs the position shift on the basis of frame phase information generated on the basis of the position shift in the first image position shift unit.
  6.  The image processing system according to claim 1, wherein the second image position shift unit performs the position shift on the basis of the decoded image decoded by the decoding unit.
  7.  The image processing system according to claim 1, wherein the first image position shift unit performs the position shift on the basis of command information received via the communication network.
  8.  The image processing system according to claim 7, further comprising a determination unit that determines the command information.
  9.  The image processing system according to claim 1, wherein the aliasing distortion reduction unit comprises a motion adaptation processing unit that performs the aliasing distortion reduction processing on still regions of the image and outputs, for moving regions of the image, the image position-shifted by the second image position shift unit.
  10.  An image processing apparatus comprising:
     a decoding unit that decodes encoded images to generate decoded images, the encoded images being a plurality of temporally continuous images that have each been position-shifted to one of a plurality of predetermined shift positions, reduced by decreasing the number of pixels, encoded, and sent via a communication network;
     an image enlargement unit that enlarges the decoded image decoded by the decoding unit by increasing its number of pixels;
     an image position shift unit that position-shifts the image enlarged by the image enlargement unit so as to cancel the position shift; and
     an aliasing distortion reduction unit that performs aliasing distortion reduction processing for reducing aliasing distortion of the image position-shifted by the image position shift unit, using information of another image position-shifted by the image position shift unit.
  11.  An image processing apparatus comprising:
     a first image position shift unit that shifts, for each of a plurality of temporally continuous images, the position of the image to one of a plurality of predetermined shift positions;
     an image reduction unit that reduces the image position-shifted by the first image position shift unit by decreasing its number of pixels; and
     an encoding unit that encodes the image reduced by the image reduction unit to generate an encoded image,
     wherein the image processing apparatus sends the encoded image via a communication network to a processing apparatus comprising: a decoding unit that decodes the encoded image to generate a decoded image; an image enlargement unit that enlarges the decoded image decoded by the decoding unit by increasing its number of pixels; a second image position shift unit that position-shifts the image enlarged by the image enlargement unit so as to cancel the position shift performed by the first image position shift unit; and an aliasing distortion reduction unit that performs aliasing distortion reduction processing for reducing aliasing distortion of the image position-shifted by the second image position shift unit, using information of another image position-shifted by the second image position shift unit.
PCT/JP2017/004512 2016-02-24 2017-02-08 Image processing system and image processing device WO2017145752A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-033145 2016-02-24
JP2016033145A JP6516695B2 (en) 2016-02-24 2016-02-24 Image processing system and image processing apparatus

Publications (1)

Publication Number Publication Date
WO2017145752A1 true WO2017145752A1 (en) 2017-08-31

Family

ID=59686142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/004512 WO2017145752A1 (en) 2016-02-24 2017-02-08 Image processing system and image processing device

Country Status (2)

Country Link
JP (1) JP6516695B2 (en)
WO (1) WO2017145752A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7350515B2 (en) * 2019-05-22 2023-09-26 キヤノン株式会社 Information processing device, information processing method and program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007323480A (en) * 2006-06-02 2007-12-13 Fuji Xerox Co Ltd Image processor, image encoding device, image decoding device, image processing system, and program
JP2009017425A (en) * 2007-07-09 2009-01-22 Sony Corp Moving image conversion device, moving image reproduction device, method, and computer program
JP2013518463A (en) * 2010-01-22 2013-05-20 トムソン ライセンシング Sampling-based super-resolution video encoding and decoding method and apparatus
JP2012195908A (en) * 2011-03-18 2012-10-11 Hitachi Kokusai Electric Inc Image transfer system, image transfer method, image reception apparatus, image transmission apparatus, and image pickup apparatus
WO2013191193A1 (en) * 2012-06-20 2013-12-27 株式会社日立国際電気 Video compression and transmission system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111801948A (en) * 2018-03-01 2020-10-20 索尼公司 Image processing apparatus and method, imaging element, and imaging apparatus
CN111801948B (en) * 2018-03-01 2023-01-03 索尼公司 Image processing apparatus and method, imaging element, and imaging apparatus
US11765468B2 (en) 2018-03-01 2023-09-19 Sony Corporation Image processing device and method thereof, imaging element, and imaging device

Also Published As

Publication number Publication date
JP6516695B2 (en) 2019-05-22
JP2017152902A (en) 2017-08-31

Similar Documents

Publication Publication Date Title
US8315481B2 (en) Image transmitting apparatus, image receiving apparatus, image transmitting and receiving system, recording medium recording image transmitting program, and recording medium recording image receiving program
JP4356777B2 (en) Image processing apparatus, image processing method, program, and recording medium
JP5529293B2 (en) A method for edge enhancement for time scaling with metadata
JP4775210B2 (en) Image signal processing apparatus, image resolution increasing method, image display apparatus, recording / reproducing apparatus
JP4876048B2 (en) Video transmission / reception method, reception device, video storage device
JP2009033581A (en) Image signal recording and reproducing device
US20090009660A1 (en) Video displaying apparatus, video signal processing apparatus and video signal processing method
US20090091653A1 (en) Video signal processing apparatus, video signal processing method and video display apparatus
US8774552B2 (en) Image transfer system, image transfer method, image reception apparatus, image transmission apparatus, and image pickup apparatus
WO2017145752A1 (en) Image processing system and image processing device
JP2009044538A (en) Monitoring system and imaging device employed therefor
JP2008294950A (en) Image processing method and device, and electronic device with the same
WO2009133403A2 (en) Television system
JP5113479B2 (en) Image signal processing apparatus and image signal processing method
JP2016208281A (en) Video encoding device, video decoding device, video encoding method, video decoding method, video encoding program and video decoding program
JP2009033582A (en) Image signal recording and reproducing device
US20120287313A1 (en) Image capture apparatus
JP6416255B2 (en) Image processing system and image processing method
JP2009017242A (en) Image display apparatus, image signal processing apparatus and image signal processing method
JP5742048B2 (en) Color moving image structure conversion method and color moving image structure conversion device
JP5686316B2 (en) Color moving image motion estimation method and color moving image motion estimation device
JP2009278473A (en) Image processing device, imaging apparatus mounting the same, and image reproducing device
JP2007519354A (en) Method and apparatus for video deinterlacing using motion compensated temporal interpolation
JP2007295226A (en) Video processor
JP6099104B2 (en) Color moving image structure conversion method and color moving image structure conversion device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17756193

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17756193

Country of ref document: EP

Kind code of ref document: A1