WO2016002245A1 - Image processing system, and image processing method - Google Patents


Info

Publication number
WO2016002245A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
image processing
processing
signal
Prior art date
Application number
PCT/JP2015/053725
Other languages
French (fr)
Japanese (ja)
Inventor
影山 昌広
奥 万寿男
野里 浩司
前 愛州
山下 和也
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd. (株式会社日立製作所)
Priority to JP2016531129A priority Critical patent/JP6416255B2/en
Publication of WO2016002245A1 publication Critical patent/WO2016002245A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/66 Transforming electric information into light information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to an image processing system and an image processing method, and more particularly to an image processing system and an image processing method using an image signal encoding technique and a sharpening technique.
  • Video encoding technology represented by H.264 and still image coding technology represented by JPEG are indispensable technologies for applications such as videophones, remote conferences, and network surveillance cameras that transmit images.
  • images captured by a plurality of cameras installed at many locations are simultaneously transmitted via a communication network such as the Internet, an intranet, or a public network, and these images are collectively displayed on a single screen at a receiving terminal.
  • encoded data having a data size obtained by multiplying the amount of encoding generated in one image by the total number of images is transmitted to one terminal via the communication network.
  • a communication error or data delay may occur in an application that transmits a moving image, and a received image may be deteriorated or frozen.
  • when the transmitted image is recorded and stored on a medium such as a hard disk (magnetic disk) or an optical disc, a large-capacity recording medium is required, which increases the cost of the recording apparatus; when a medium with limited recording capacity is used, only a short recording can be made.
  • therefore, the system that realizes the application needs to encode and compress the image in accordance with the transmission band and the recording capacity to reduce the data size.
  • the encoded data size of an image and its image quality are in a trade-off relationship: the image quality deteriorates as the encoded data size is reduced. It is therefore desirable to realize a technique that can maintain the image quality desired by the user even when the encoded data size is reduced.
  • the number of pixels of cameras used for imaging has increased with advances in technology, and currently an image of, for example, horizontal 1920 pixels × vertical 1080 pixels (hereinafter abbreviated as full HD size) is commonly used.
  • to reduce the data size, the number of pixels may be intentionally reduced at the time of imaging, for example to horizontal 704 pixels × vertical 480 pixels (hereinafter abbreviated as D1 size), and the resulting image used as the encoding target.
  • part of the captured image frame may be trimmed to obtain the encoding target image, or the entire captured frame may be reduced in pixel count to obtain it.
  • the reduced image is generally displayed in an enlarged form.
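The reduce-before-encoding and enlarge-for-display flow described above can be sketched in a few lines of NumPy (a minimal illustration using box averaging and nearest-neighbour replication; a real system would use proper anti-alias filtering, interpolation, and a codec):

```python
import numpy as np

def reduce_image(img, factor):
    """Shrink by an integer factor using block (box) averaging."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def enlarge_image(img, factor):
    """Enlarge by an integer factor using nearest-neighbour replication."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A full-HD-like frame is reduced before encoding, then enlarged
# again for display; blur and aliasing are the price paid.
frame = np.random.rand(1080, 1920)
small = reduce_image(frame, 2)      # 540 x 960: fewer pixels to encode
shown = enlarge_image(small, 2)     # back to 1080 x 1920 for display
```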
  • As a technique for sharpening an image, amplifying the high-frequency components contained in the image to enhance edges and the like has long been used.
  • a signal processing technique called super-resolution has been actively developed.
  • in one approach, harmonic components above the Nyquist frequency, which represents the resolution limit of a transmitted or recorded image, are generated by applying nonlinear processing to the high-frequency components contained in the image and then adding the result to the original image.
  • in another approach, high-frequency components exceeding the Nyquist frequency survive in the image as aliasing components; a plurality of image frames are used as input, the position of the subject in each frame is accurately aligned, the baseband component and the aliasing component are separated, the aliasing component is demodulated (inverted) back to its pre-fold frequency, and the demodulated component is added to the baseband signal to obtain a sharpened image.
  • alternatively, the aliasing component may be separated from a single frame in the same manner as the technique described in Patent Document 2.
  • Non-Patent Document 1 introduces a number of techniques for obtaining a high-resolution image by using a plurality of image frames and performing successive approximation with iterative calculation.
  • Non-Patent Document 2 describes a technique for obtaining a high-resolution image from one frame image by successive approximation processing.
  • a personal computer (hereinafter abbreviated as PC) or a mobile terminal is often used as the means for realizing applications that transmit, record, or display images, such as videophones, remote conferences, and network surveillance cameras.
  • in such cases, signal processing is performed in software using a general-purpose CPU (Central Processing Unit), MPU (Micro Processing Unit), or GPGPU (General-Purpose computing on Graphics Processing Units), without special hardware.
  • the super-resolution processing described above requires a large amount of calculation compared with edge enhancement processing that merely amplifies the high-frequency components contained in an image.
  • in particular, in super-resolution by successive approximation, after the input image is enlarged, image reduction, difference detection against the input image, and output-image correction must be repeated several times, which heavily consumes calculation resources such as CPU time and memory.
  • consider the case where images taken by a plurality of cameras installed at many locations are simultaneously transmitted via a communication network, and one image receiving terminal combines these images onto one screen for display.
  • if this image receiving terminal is realized on a PC, a plurality of images must be decoded, enlarged, and sharpened simultaneously. Problems then arise, such as frame dropping that makes motion jerky, the entire process stopping, or user input from the mouse or keyboard not being accepted so that the terminal becomes unresponsive. It is therefore not easy to sharpen images in parallel while receiving a plurality of image data streams.
  • the present invention has been made in view of this situation, and provides a technique for realizing image transmission with improved image quality by flexibly executing the image processing of image enlargement and sharpening according to the performance of the calculation resources provided in the image receiving terminal.
  • an image processing system according to the present invention includes a plurality of first image processing devices and a second image processing device that receives an encoded image signal from the first image processing devices. Each first image processing device includes an imaging unit that captures an image, an encoding unit that encodes the image and outputs a signal, and a transmission unit that transmits the signal to the second image processing device. The second image processing device includes a decoding unit that receives the signal and outputs a decoded image, a sharpening processing unit that performs sharpening processing on the decoded image and outputs a sharpened image, a display unit that displays the decoded image or the sharpened image, and a control unit that switches the image to be displayed on the display unit.
  • the sharpening processing unit generates an enlarged image by enlarging the decoded image, generates a first processed image by extracting high-frequency components from the enlarged image and applying nonlinear processing to them, synthesizes the first processed image with the enlarged image to generate a second processed image, and outputs, as the sharpened image, an image obtained by removing the aliasing component from the second processed image.
  • an image processing method according to the present invention, in which images captured by a plurality of first image processing devices are processed by a second image processing device, includes a first step of capturing an image with a first image processing device, a second step of encoding the image and outputting a signal, a third step of transmitting the signal to the second image processing device, a fourth step of receiving the signal and outputting a decoded image, a fifth step of performing sharpening processing on the decoded image and outputting a sharpened image, a sixth step of displaying the decoded image or the sharpened image on a display unit, and a seventh step of switching the image to be displayed on the display unit.
  • in the fifth step, an enlarged image is generated by enlarging the decoded image, a first processed image is generated by extracting high-frequency components from the enlarged image and applying nonlinear processing, the first processed image and the enlarged image are synthesized to generate a second processed image, and an image obtained by removing the aliasing component from the second processed image is output as the sharpened image.
  • According to the present invention, it is possible to flexibly execute the signal processing of image enlargement and sharpening according to the performance of the calculation resources, and to realize image transmission with improved image quality.
  • the present invention provides a technique for realizing image transmission with improved image quality.
  • the embodiment of the present invention may be implemented by software running on a general-purpose computer, or may be implemented by a combination of software and hardware.
  • the term "removing" used in the following description includes the meanings of "reducing" and "weakening" in addition to "completely eliminating".
  • full HD size: horizontal 1920 pixels × vertical 1080 pixels
  • D1 size: horizontal 704 pixels × vertical 480 pixels
  • FIG. 1 is a diagram showing a schematic configuration of an image processing system according to an embodiment of the present invention.
  • the image processing system includes n image transmission devices (101-#1 to #n) (where n is an integer of 1 or more), a communication network (105), an image receiving device (106), an operation unit (111) with which the user operates the image receiving device (106), and a display unit (112) that displays the output image of the image receiving device (106).
  • the image transmission device (101) includes a camera (102), an image reduction unit (103), and an encoding unit (104).
  • the camera (102) includes, for example, a general imaging unit including a lens and a photoelectric conversion element (not shown) such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor, and a general signal processing unit that performs signal level adjustment, contrast adjustment, brightness adjustment, white balance adjustment, and the like.
  • the image reduction unit (103) reduces the number of pixels of the image obtained by the camera (102), and may trim part of the captured image frame to reduce the pixel count. When a camera with a small number of pixels is used, the image reduction unit (103) may be omitted.
  • the encoding unit (104) may perform standardized encoding such as the generally known MPEG (Moving Picture Experts Group)-1, MPEG-2, MPEG-4, H.264, VC-1, JPEG (Joint Photographic Experts Group), Motion JPEG, or JPEG 2000, or may perform non-standard encoding.
  • the processing configuration for connecting the image transmission apparatus (101) and the communication network (105) can be realized by a general technique, and is not shown.
  • the communication network (105) may be either wired or wireless, and is a network for communicating digital data using a communication protocol such as the general IP (Internet Protocol).
  • the image receiving device (106) comprises decoders (107-#1 to #n) that decode the images transmitted from the image transmitting devices (101-#1 to #n), a processing unit (108) that performs image selection and composition described later, a processing unit (109) that performs image enlargement and sharpening described later, and a control unit (110) that controls the processing units (108, 109) based on signals from the operation unit (111).
  • the processing configuration for connecting the communication network (105) and the image receiving device (106) can be realized by a general technique, and is not shown.
  • the image reproduced by the image receiving device (106) is displayed on the display unit (112).
  • when n ≥ 2, there are two display modes: (a) an image selection mode, in which only one image selected from the n images is displayed, and (b) an image composition mode, in which a plurality of images up to n (nine images are illustrated in FIG. 1) are combined into one image and displayed. The mode is switched by a signal from the user operation unit (111).
  • each image captured by n (where n is an integer of 1 or more) cameras can be displayed on a single image receiving device via a communication network. At this time, it is possible to selectively display only one image designated by the user from among the n images and to display a plurality of images up to the upper limit of n images.
  • a clear image with less blur can be displayed even when the image is enlarged and displayed.
  • FIG. 2 shows a configuration example of a general image sharpening process disclosed so far.
  • the image sharpening process shown in FIG. 2 can be used as the processing unit (109) that performs image enlargement and sharpening shown in FIG.
  • In FIG. 2(a), the input image is enlarged by the image enlargement unit (201), and high-frequency components contained in the enlarged image are then extracted by the horizontal/vertical high-pass filter (202); the multiplier (203) multiplies them by a coefficient K to adjust their signal strength.
  • the nonlinear processing unit (204) then applies nonlinear processing to the signal, and the adder (205) adds the enlarged image and the high-frequency components after nonlinear processing to obtain the output image.
  • this is a general high-frequency component amplification process (edge enhancement).
  • the nonlinear processing in the nonlinear processing unit (204) may include, for example, processing that generates odd-order harmonic components by raising the signal value to the third power, small-amplitude signal removal (coring) to reduce noise, and large-amplitude signal removal (a limiter) to suppress excessive edge enhancement.
  • this image sharpening process has the advantage that image blur can be removed with a small amount of computation compared with the processes shown in FIGS. 2(b) and 2(c) described later, but it has no effect of removing the aliasing components that occur when the number of pixels of the image is increased or decreased.
  • the image sharpening process (109(a)) having the configuration shown in FIG. 2(a) can be realized by, for example, the technique described in Patent Document 1, so detailed description of the configuration and operation of each part is omitted.
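The Fig. 2(a) pipeline — high-pass filter (202), gain K (203), nonlinear stage (204) with cubing, coring, and a limiter, and add-back (205) — can be sketched as follows. The enlargement stage (201) is omitted for brevity, and the 3-tap kernel and the values of `k`, `coring`, and `limit` are illustrative assumptions not fixed by the text:

```python
import numpy as np

def sharpen_2a(img, k=2.0, coring=0.01, limit=0.25):
    """Edge enhancement per Fig. 2(a): high-pass, gain K, nonlinear stage, add-back."""
    hp = np.array([-0.25, 0.5, -0.25])           # simple 3-tap high-pass kernel
    # Horizontal/vertical high-pass filter (202)
    high = np.apply_along_axis(lambda r: np.convolve(r, hp, mode="same"), 1, img)
    high += np.apply_along_axis(lambda c: np.convolve(c, hp, mode="same"), 0, img)
    high *= k                                    # multiplier (203): coefficient K
    high = high ** 3                             # odd-order harmonics via cubing (204)
    high[np.abs(high) < coring] = 0.0            # coring: remove small amplitudes
    high = np.clip(high, -limit, limit)          # limiter: cap large amplitudes
    return img + high                            # adder (205)

# A vertical step edge: the edge is steepened while flat areas stay untouched.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
out = sharpen_2a(img)
```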
  • In the image sharpening process (109(b)) having the configuration shown in FIG. 2(b), the input image is enlarged by the image enlargement unit (206), and the baseband component and the aliasing component are separated from one or more enlarged images by the aliasing separation unit (207).
  • the aliasing component is demodulated (inverted) back to its pre-fold frequency by the demodulator (208), and the demodulated aliasing component and the baseband signal are added by the adder (209).
  • by removing the aliasing component in this way, this image sharpening process reproduces high-frequency components exceeding the Nyquist frequency and sharpens the image while suppressing image quality degradation such as jaggies and moiré on lines contained in the image.
  • however, this image sharpening process has no effect of amplifying attenuated high-frequency components.
  • the image sharpening process (109(b)) having the configuration shown in FIG. 2(b) can be realized by, for example, the technique described in Patent Document 2 or Patent Document 3, so detailed description of the configuration and operation of each part is omitted.
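The folding that the separation unit (207) and demodulator (208) operate on can be observed in one dimension: a frequency component above the post-decimation Nyquist limit reappears mirrored below it, and it is this folded component that demodulation must invert. A minimal NumPy illustration (the frequencies chosen are arbitrary):

```python
import numpy as np

fs = 100.0                     # original sampling rate (Hz)
n = np.arange(1024)
f_high = 35.0                  # above the 25 Hz Nyquist of the decimated signal
x = np.cos(2 * np.pi * f_high * n / fs)

y = x[::2]                     # decimate by 2 -> new fs = 50 Hz, Nyquist = 25 Hz
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(y.size, d=2.0 / fs)
f_alias = freqs[np.argmax(spectrum)]   # 35 Hz folds down to 50 - 35 = 15 Hz
```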
  • In FIG. 2(c), the temporarily set output image (initial image) is reduced by the image reduction unit (210) to the same number of pixels as the input image.
  • the difference detection unit (211) detects the difference between the reduced image and the input image to generate a correction value for each pixel, and the adder (212) adds the input image and the correction values and inputs the result to the image enlargement unit (213), whose output is used as the new temporary output image.
  • when the number of repetitions exceeds a predetermined number, the output of the image enlargement unit (213) is taken as the output image.
  • the input image may be a single frame, or a plurality of frames may be used; in the latter case, the average image after position correction, performed so that the position of the subject coincides across the frames, is input to the difference detection unit (211) and the adder (212).
  • This image sharpening process can simultaneously realize the removal of the aliasing component and the blur removal of the image (amplification of the high frequency component).
  • however, since this image sharpening process requires iterative operations, high-performance computing resources are needed to realize real-time processing.
  • the image sharpening process (109(c)) having the configuration shown in FIG. 2(c) can be realized by, for example, the techniques described in Non-Patent Document 1 and Non-Patent Document 2, so detailed description of the configuration and operation of each part is omitted.
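The reduce/difference/correct/enlarge loop of Fig. 2(c) can be sketched as a back-projection iteration. This minimal version folds the adder (212) and enlargement (213) into a single update step and uses box reduction with nearest-neighbour enlargement; both are simplifying assumptions, not the patent's exact operators:

```python
import numpy as np

def reduce2(img):
    """Image reduction unit (210): 2x box averaging."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def enlarge2(img):
    """Image enlargement unit (213): 2x nearest-neighbour replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def super_resolve(low, iterations=10):
    """Successive approximation: reduce the current estimate, detect the
    difference from the input (211), and back-project the correction."""
    high = enlarge2(low)                     # temporary output image (initial image)
    for _ in range(iterations):
        correction = low - reduce2(high)     # per-pixel correction value
        high = high + enlarge2(correction)   # back-project the correction
    return high

low = np.random.rand(8, 8)                   # stand-in for the decoded input image
high = super_resolve(low)
```

With these operators the estimate becomes consistent with the input after the first pass; real separable filters would need more iterations, which is exactly the computational cost the text describes.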
  • FIG. 3 is a flowchart showing an example of the procedure of the display switching process according to the embodiment of the present invention.
  • the display switching process starts at step (301); in step (302), the display mode designated by the user is identified, and if it is (a) the image selection mode, the process proceeds to step (303), while if it is (b) the image composition mode, the process proceeds to step (305).
  • the image selection mode is a processing mode in which one image that the user wants to display is selected from the n decoded images (where n is an integer of 1 or more) and displayed.
  • in this mode, the sharpening process target may be limited to this one image.
  • in step (303), based on an instruction from the user, the image that the user wants to display is selected from the n decoded images; the process then proceeds to step (304), where the image enlargement and sharpening processes are performed, and the display switching process ends at step (307).
  • the image composition mode is a processing mode in which a plurality of images (up to nine images are illustrated in FIG. 1) with an upper limit of n images are combined and displayed as one image.
  • for example, nine decoded images of D1 size (horizontal 704 pixels × vertical 480 pixels) are combined and displayed on a display unit of full HD size (horizontal 1920 pixels × vertical 1080 pixels).
  • in step (305), the plurality of images to be displayed are combined into one image; in step (306), each part is controlled so that the image enlargement and sharpening processes are not performed, and the display switching process ends at step (307).
  • in (b) the image composition mode, the blur of the displayed image is relatively small, so the blur-removal effect of the sharpening process is also relatively small; even if the sharpening process is stopped in this mode, no significant problem occurs.
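The composition described above — tiling nine D1-size images onto one full-HD display — can be sketched as follows; the nearest-neighbour resampling into 640 × 360 cells and the function name are illustrative choices:

```python
import numpy as np

def compose_grid(frames, out_h=1080, out_w=1920, grid=3):
    """Tile up to grid*grid decoded frames onto one full-HD canvas.
    Each frame is resampled to a cell by nearest-neighbour indexing
    (illustrative; a real compositor would filter before downscaling)."""
    cell_h, cell_w = out_h // grid, out_w // grid
    canvas = np.zeros((out_h, out_w))
    for idx, frame in enumerate(frames[:grid * grid]):
        h, w = frame.shape
        ys = np.arange(cell_h) * h // cell_h     # source rows for each cell row
        xs = np.arange(cell_w) * w // cell_w     # source cols for each cell col
        cell = frame[np.ix_(ys, xs)]
        r, c = divmod(idx, grid)
        canvas[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w] = cell
    return canvas

frames = [np.full((480, 704), i / 8.0) for i in range(9)]  # nine D1-size frames
screen = compose_grid(frames)
```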
  • the display switching process shown in FIG. 3 may be executed when a mode switching operation by the user occurs.
  • the switching operation from (a) the image selection mode to (b) the image composition mode is performed by, for example, a specific key operation on the keyboard of the operation unit (111) shown in FIG. 1 (for example, pressing the escape key) or a specific mouse operation (for example, clicking the right button).
  • the switching operation from (b) the image composition mode to (a) the image selection mode is performed, for example, as follows: while viewing the plurality of images on the display unit (112), the user moves the cursor, which is displayed on the display unit (112) in correspondence with mouse movement of the operation unit (111), over the image to be selected, and double-clicks the left mouse button to select the image and switch modes simultaneously.
  • an embodiment of the present invention proposes an image reception process having the following procedure.
  • FIG. 4 is a flowchart showing an example of the procedure of image reception processing according to the embodiment of the present invention.
  • image reception processing is started from step (401), and reception processing and decoding processing of n pieces of image data (where n is an integer of 1 or more) transmitted in step (402) are performed.
  • the subsequent steps (301 to 307) follow the display switching procedure shown in FIG. 3. According to the display mode set by user operation, in (a) the image selection mode the process proceeds to step (404), while in (b) the image composition mode the image is displayed in step (409) and the image reception process ends at step (410).
  • step (404) measurement of the time required for the image enlargement and sharpening processing executed in step (405) is started, and in step (406), the time measurement is terminated, and the process proceeds to step (407).
  • specifically, it is only necessary to start a timer in step (404), read the elapsed time indicated by the timer in step (406), set it as the processing time of step (405), and then stop the timer.
  • the elapsed time from the start of the timer may be read in step (404) and step (406), respectively, and the time difference between the two may be used as the processing time in step (405).
  • step (404) may be omitted, and the difference between the time read in the previous step (406) and the time read in the current step (406) may be used as the processing time in the step (405).
  • step (405) an image enlargement and sharpening process is executed by performing the process shown in FIG.
  • in step (407), the processing time of step (405) measured in step (406) is compared with a predetermined set value, and the content of the next sharpening process in step (405) is determined as described later. Specifically, the content of the next sharpening process is selected, for example, from the processing contents shown in FIG. 6 so that the processing time of step (405) falls below the predetermined set value. An image is then displayed in step (409), and the image reception process ends at step (410).
  • as the predetermined set value, when step (405) is executed in units of one image frame, a value that allows a margin relative to the reciprocal of the average frame rate at display time may be used. For example, when it is desired to display 5 frames per second, the set value may be 200 milliseconds or less.
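The time measurement of steps (404) to (406) against a 200 ms budget can be sketched as follows (`timed_sharpen` and `FRAME_BUDGET` are illustrative names, not from the text, and `sharpen` stands for any enlargement/sharpening callable):

```python
import time

FRAME_BUDGET = 0.2   # seconds: 5 frames per second, per the example above

def timed_sharpen(frame, sharpen):
    """Steps (404)-(406): time one enlargement/sharpening pass so the
    next processing level can be chosen against the budget."""
    t0 = time.monotonic()            # step (404): start the timer
    out = sharpen(frame)             # step (405): enlarge and sharpen
    elapsed = time.monotonic() - t0  # step (406): read the elapsed time
    return out, elapsed

# Trivial stand-in for the sharpening callable.
out, elapsed = timed_sharpen([1, 2, 3], lambda f: [v * 2 for v in f])
within_budget = elapsed < FRAME_BUDGET
```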
  • FIG. 5 is a diagram showing a configuration example of image enlargement and sharpening processing according to the embodiment of the present invention.
  • Each of the image enlargement and sharpening processes shown in FIGS. 2(a), 2(b), and 2(c) has advantages and disadvantages, and each has options for its processing content as described later.
  • the processing content is therefore switched in accordance with the time taken for image enlargement and sharpening as measured by the image reception procedure of FIG. 4, that is, in accordance with the margin available in the calculation resources of the image receiving device (106).
  • In the processing unit (501) of FIG. 5, the processing unit (109(a)) of FIG. 2(a) and the processing unit (109(b)) of FIG. 2(b) are combined in series. This combination compensates for the disadvantages of each: blur is removed by the processing unit (109(a)), and the aliasing component is removed by the processing unit (109(b)).
  • the switching units (502) and (503) can be switched to use the processing unit (109 (c)) that performs image enlargement and sharpening shown in FIG. 2 (c).
  • This configuration shows one example of a combination of image enlargement and sharpening processing, and the invention is not limited to the configuration of FIG. 5.
  • alternatively, the order of the processing unit (109(a)) and the processing unit (109(b)) may be reversed, or only one of the processing unit (109(a)), the processing unit (109(b)), and the processing unit (109(c)) may be used.
  • the processing unit (109(a)) that performs image enlargement and sharpening has options for its processing content, and the processing time can be shortened by reducing the amount of calculation.
  • for example, the nonlinear processing (204) shown in FIG. 2(a), which generates odd-order harmonic components, may be performed or skipped; furthermore, at a slight sacrifice in image quality, the amount of calculation can be greatly reduced by not performing the processing unit (109(a)) at all.
  • the processing unit (109(b)) that performs image enlargement and sharpening also has processing content options (examples) as shown in FIG. 6. The image can be sharpened by demodulating (208) and adding (209) the aliasing component of FIG. 2(b); when computing resources are insufficient, the separated baseband signal may be output without performing this demodulation (208) and addition (209), sacrificing some image quality to reduce computation.
  • furthermore, when the processing unit (109(a)) has already enlarged the image, the image enlargement unit (206) of the processing unit (109(b)) is unnecessary.
  • the processing unit (109(c)) that performs image enlargement and sharpening also has processing content options (examples) as shown in FIG. 6. By increasing the number of iterations of the image reduction unit (210), the difference detection unit (211), the adder (212), and the image enlargement unit (213) in FIG. 2(c), a result in which the correction values have converged to sufficiently small values can be used as the output image; conversely, reducing the number of iterations slightly sacrifices image quality but reduces the amount of computation and shortens the processing time. Image quality can also be improved by increasing the number of input frames used for this processing; conversely, setting the number of input frames to 1 removes the need to align the subject between frames and greatly reduces the amount of calculation.
  • one processing content is selected from the many options described above based on the processing content determined in step (407) of FIG. 4, and the control unit (504) controls each unit (109(a), 109(b), 109(c), 502, 503) accordingly. This specific operation is described later.
  • the above-described image enlargement and sharpening processing enables flexible sharpening processing according to the performance of computing resources.
  • FIG. 6 collectively shows the operation (example) of the control unit (504) in FIG. 5 described above.
  • the processing units (109(a), 109(b), and 109(c)) that perform image enlargement and sharpening in FIG. 5 each have options for their processing content, and the amount of computation required for each also differs. Therefore, as shown in FIG. 6, levels are set according to the amount of calculation necessary for the processing, and the processing contents of each processing unit (109(a), 109(b), 109(c)) are set in advance. Note that the processing contents and processing parameters shown in FIG. 6 are examples for explaining the operation, and the invention is not limited to them.
  • In FIG. 6, from level 1 to level 4, only the processing unit (109(a)) performs image enlargement and sharpening, and the processing units (109(b)) and (109(c)) are not used. The amount of calculation of the processing unit (109(a)) is set to increase gradually from level 1 to level 4, raising the image quality. Similarly, for the other levels, the processing contents of each processing unit (109(a), 109(b), 109(c)) are set so that the amount of calculation and the image quality are positively correlated.
  • when the processing time of step (405) in FIG. 4 is larger than the predetermined set value, the level of FIG. 6 is lowered; when it is smaller, the level is raised. In this way, the processing time of step (405) can be controlled near the predetermined set value, and an image quality suited to the calculation resources can be obtained.
  • the processing time obtained in a single measurement may be used as the processing time of step (405), or the average of the processing times obtained over a plurality of measurements may be used to suppress frequent level changes.
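The level control described above — lowering the level when the measured time exceeds the set value, raising it when there is margin, and averaging over several measurements to suppress frequent changes — can be sketched as follows (the class, the 0.8 hysteresis factor, and the window size are assumptions for illustration):

```python
from collections import deque

class LevelController:
    """Sketch of the FIG. 6 control loop: higher level means more
    computation and better quality; the level tracks the time budget."""
    def __init__(self, budget, max_level, window=5):
        self.budget = budget          # predetermined set value (seconds)
        self.max_level = max_level    # highest level defined in FIG. 6
        self.level = 1
        self.times = deque(maxlen=window)   # recent processing times

    def update(self, elapsed):
        """Feed one measured processing time; return the next level."""
        self.times.append(elapsed)
        avg = sum(self.times) / len(self.times)   # average suppresses jitter
        if avg > self.budget and self.level > 1:
            self.level -= 1           # too slow: drop to a cheaper level
        elif avg < self.budget * 0.8 and self.level < self.max_level:
            self.level += 1           # comfortable margin: try a higher level
        return self.level
```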
  • FIG. 8 is a diagram showing a schematic configuration of the second image processing system according to the embodiment of the present invention.
  • The image transmission device (101) corresponds to the first image processing device.
  • The image receiving device (106) corresponds to the second image processing device.
  • The image recording/reproducing device (114) records the encoded image data captured by the image transmission device (101) on a recording medium and, after time shifting or archiving, transmits the encoded image data from the recording medium to the image receiving device (106).
  • As the recording medium, a general hard disk (magnetic disk), an optical disc such as a DVD (Digital Versatile Disc) or CD (Compact Disc), a magnetic tape, or the like may be used alone, or a plurality of recording media may be used connected in an array.
  • The recording operation and the reproducing operation may be performed independently, or other image data may be reproduced at the same time as encoded image data is being recorded on the recording medium.
  • The recording medium and its configuration, the control method of the image recording/reproducing device (114), and the processing configuration for connecting the image recording/reproducing device (114) to the communication network (105) can be realized by general techniques, so their illustration is omitted.
  • In FIG. 8, the image recording/reproducing device (114) is configured as an independent device and is shown connected to the image transmission device (101) and the image receiving device (106) via the communication network (105). However, the present invention is not limited to this configuration; the image recording/reproducing device (114) may be built into the image transmission device (101), or it may be built into the image receiving device (106).
  • FIG. 8 shows three types of image transmission devices (101-#p), (101-#q), and (101-#r), described below, as typical examples of the image transmission device (101).
  • The image transmission device (101-#p) includes a camera (102-#p) having a large number of pixels, such as full HD size, an image reduction unit (103-#p) whose characteristics are optimized for the image receiving device (806) described later, an encoding unit (104), and an identification signal transmission unit (801) described later.
  • The image transmission device (101-#q) includes a camera (102-#q) having a large number of pixels, such as full HD size, an image reduction unit (103-#q) whose characteristics are not optimized, and an encoding unit (104).
  • The image transmission device (101-#r) includes a camera (102-#r) having a small number of pixels, such as D1 size, and an encoding unit (104).
  • the encoding unit (104) is common to each image transmission apparatus.
  • The identification signal transmission unit (801) is a means for encoding an identification signal indicating that the characteristics are optimized for the image receiving device (806) (that is, that the characteristics match), multiplexing it with the encoded data of the image from the camera (102-#p), and transmitting the result to the image receiving device (806) via the communication network (105).
  • The identification signal encoding configuration, the data multiplexing configuration, and the processing configuration for connecting the identification signal transmission unit (801) to the communication network (105) can be realized by general techniques, so their illustration is omitted.
  • The identification signal may be determined in advance as, for example, specific flag data indicating that the characteristics have been optimized, or digital data indicating the model name or serial number of the camera.
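One way to picture the multiplexing and separation of such a flag is the sketch below. The two-byte header and the flag value are pure assumptions for illustration; the patent deliberately leaves the concrete encoding to general techniques.

```python
import struct

OPTIMIZED_FLAG = 0xA5   # hypothetical flag value meaning "characteristics optimized"

def multiplex(encoded_image: bytes, optimized: bool) -> bytes:
    """Identification signal transmission unit (801): prepend a
    2-byte header (flag byte + reserved byte) to the encoded data."""
    flag = OPTIMIZED_FLAG if optimized else 0x00
    return struct.pack("BB", flag, 0x00) + encoded_image

def demultiplex(packet: bytes):
    """Identification signal receiving unit (802): separate the
    identification signal from the transmitted encoded data."""
    flag, _reserved = struct.unpack_from("BB", packet)
    return flag == OPTIMIZED_FLAG, packet[2:]
```

A real system would more likely carry such metadata in the transport container or session negotiation; the point here is only that the receiver can test each stream for the flag.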
  • The image receiving device (806) is obtained by adding an identification signal receiving unit (802) to the configuration of the image receiving device (106) shown in FIG. 1.
  • The control unit (810) in the image receiving device (806) controls, based on the signal from the identification signal receiving unit (802) and the signal from the operation unit (111), the operation of the first processing unit (108) that performs image selection or image composition and of the second processing unit (109) that performs image enlargement and sharpening.
  • The processing configuration for connecting the communication network (105) to the identification signal receiving unit (802) and for separating the identification signal from the transmitted encoded data can be realized by general techniques, so its illustration is omitted.
  • The image recording/reproducing device (114) is the same as that described above.
  • The image receiving device (806) uses the identification signal receiving unit (802) to determine, for each piece of transmitted data, whether or not the predetermined identification signal is multiplexed.
  • It can thereby be determined that the characteristics of the image reduction unit in the image transmission device (101-#p) are optimized for the image receiving device (806), while the image reduction units in the other image transmission devices (101-#q) and (101-#r) are not optimized or are not provided.
  • As described above, when the characteristics are not optimized, the sharpening in the processing unit (109) that performs image enlargement and sharpening may be difficult. For such data, therefore, the control unit (810) controls the processing unit (109) so that sharpening is forcibly not performed, regardless of whether the display mode is the (a) image selection mode or the (b) image composition mode.
  • For data in which the predetermined identification signal is determined to be multiplexed with the transmitted data, that is, images transmitted from the image transmission device (101-#p) whose characteristics are optimized, the presence or absence of the sharpening process is switched according to the display mode as described above.
  • The frame rate of the image at the time of reproduction may be set arbitrarily; for example, the image may be stopped frame by frame or played back slowly. Lowering the frame rate at the time of reproduction corresponds to reducing the amount of computation required per unit time. Therefore, when the frame rate at the time of reproduction is lowered, the sharpening process may be performed even in the (b) image composition mode.
  • As described above, the image processing system of the present embodiment includes a plurality of first image processing devices and a second image processing device that receives encoded image signals from the first image processing devices. Each first image processing device includes an imaging unit that captures an image, an encoding unit that encodes the image and outputs a signal, and a transmission unit that transmits the signal to the second image processing device. The second image processing device includes a decoding unit that receives the signal and outputs a decoded image, a sharpening processing unit that sharpens the decoded image and outputs a sharpened image, a display unit that displays the decoded image or the sharpened image, and a control unit that switches the image displayed on the display unit.
  • The sharpening processing unit generates an enlarged image by enlarging the decoded image, extracts a high-frequency component from the enlarged image and applies nonlinear processing to obtain a first processed image, synthesizes the first processed image with the enlarged image to generate a second processed image, and outputs an image obtained by removing the aliasing component from the second processed image as the sharpened image.
  • As a result, an image from which aliasing components have been removed can be output, and an image signal sent from a remote location can be displayed with improved image quality.
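As an illustration of the sharpening pipeline summarized above (enlarge, extract a high-frequency component, apply nonlinear processing, synthesize, then remove aliasing), here is a minimal one-dimensional sketch. The linear interpolation, the cubic nonlinearity, and the final low-pass filter are stand-ins chosen for the example; the patent's actual filters and aliasing-removal step are not specified at this level of detail.

```python
import numpy as np

def sharpen(decoded, scale=2):
    # 1) Enlarge the decoded signal (simple linear interpolation here).
    n = decoded.size
    xi = np.linspace(0, n - 1, n * scale)
    enlarged = np.interp(xi, np.arange(n), decoded)

    # 2) Extract a high-frequency component with a small high-pass
    #    (signal minus a 3-tap moving average).
    high = enlarged - np.convolve(enlarged, np.ones(3) / 3, mode="same")

    # 3) Nonlinear processing: an odd nonlinearity (cubic, assumed here)
    #    generates harmonics above the original Nyquist frequency,
    #    yielding the first processed image.
    first_processed = high ** 3

    # 4) Synthesize with the enlarged image -> second processed image.
    second_processed = enlarged + first_processed

    # 5) Suppress aliasing with a mild low-pass; the patent's actual
    #    aliasing-removal step is more elaborate.
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(second_processed, kernel, mode="same")
```

The output has `scale` times as many samples as the input, mirroring how a D1-size decoded frame would be enlarged toward full HD before display.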
  • In addition, even when image transmission devices (101) having various characteristics are connected to the image receiving device (806) via the communication network (105), the image sharpening process can be performed appropriately.
  • FIG. 9 is a flowchart showing an example of the procedure of the second display switching process according to the embodiment of the present invention.
  • The contents of steps (301), (302), (303), (304), (305), (306), and (307) in FIG. 9 are the same as those of the corresponding steps in FIG. 3.
  • After an image to be displayed is selected in step (303) of FIG. 9, it is determined in step (901) whether or not the aforementioned identification signal corresponding to the selected image has been received. If the identification signal has been received, the process proceeds to step (304); if not, the process proceeds to step (902). In step (902), image enlargement is performed, but the sharpening process is not. As a result, the image sharpening process is applied to images for which the identification signal has been received (that is, images whose characteristics match the image sharpening), and not to images for which no identification signal has been received (that is, images for which it is unknown whether the characteristics match).
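The branch at steps (901) and (902) can be sketched as follows; the function and argument names are illustrative, not taken from the patent.

```python
def process_for_display(decoded_image, id_signal_received, enlarge, sharpen):
    """Step (901): branch on whether the identification signal was
    received for the selected image. Enlargement always runs; the
    sharpening process runs only when the characteristics match."""
    enlarged = enlarge(decoded_image)
    if id_signal_received:
        return sharpen(enlarged)   # step (304): enlarge and sharpen
    return enlarged                # step (902): enlarge only
```

Because sharpening is skipped for streams of unknown characteristics, the receiver also saves the computation that sharpening would otherwise consume on them.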
  • The image processing method described in the present embodiment is an image processing method in which images captured by a plurality of first image processing devices are processed by a second image processing device. The method includes a first step of capturing an image with the first image processing device, a second step of encoding the image and outputting a signal, a third step of transmitting the signal to the second image processing device, a fourth step of receiving the signal and outputting a decoded image, a fifth step of sharpening the decoded image and outputting a sharpened image, a sixth step of displaying the decoded image or the sharpened image on a display unit, and a seventh step of switching the image displayed on the display unit.
  • As described above, the frame rate at the time of reproduction can be lowered. In this case, the predetermined set value described above can be set larger than the set value used during normal real-time reproduction, so that sharpening with a greater effect can be performed without exhausting the computing resources.
  • As a result, an image from which aliasing components have been removed can be output, and an image signal sent from a remote location can be displayed with improved image quality.
  • In addition, the image sharpening process can be performed appropriately even when images having various characteristics are received.
  • FIG. 10 is a diagram showing a schematic configuration of a third image processing system according to the embodiment of the present invention.
  • Each image transmission device (101-#p) (101-#q) (101-#r) is connected to the image receiving device (1006) via the communication network (105).
  • The image receiving device (1006) includes an image transmission device identification unit (1001) and a processing method determination unit (1002).
  • The image transmission device identification unit (1001) is a means for identifying the individual image transmission devices (101-#p) (101-#q) (101-#r); its operation will be described later with reference to FIG. 11.
  • The processing method determination unit (1002) determines, based on the attributes of the image transmission devices (101-#p) (101-#q) (101-#r) identified by the image transmission device identification unit (1001), whether or not the processing unit (109) that performs image enlargement and sharpening performs sharpening, and sends the result to the control unit (810).
  • The configurations and operations of the other units and the control unit (810) in FIG. 10 are the same as those of the corresponding units and the control unit (810) shown in FIG. 1 and FIG. 8.
  • FIG. 11 is an explanatory diagram of operations of the image transmission device identification unit (1001) and the processing method determination unit (1002) described above.
  • As a means for identifying each image transmission device, for example, an IP (Internet Protocol) address or MAC (Media Access Control) address uniquely assigned to each device is used to identify the devices on the communication network.
  • Alternatively, a unique name given to the installation location of each device (for example, lobby, passage, entrance/exit, etc.) may be used.
  • As shown in FIG. 11, the IP address or MAC address (logical address) or installation location (physical address) of each device is managed in a list together with a flag indicating whether or not its characteristics are optimized for the image receiving device (1006), and whether the characteristics are optimized is determined for each image transmission device.
  • The processing method is then determined so that the sharpening process is not performed on images sent from image transmission devices whose characteristics are not optimized.
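A minimal sketch of the FIG. 11 list and the resulting decision might look like this. The addresses, location names, and flag values are hypothetical placeholders, not values from the patent.

```python
# Hypothetical device list following FIG. 11: each transmitter's logical
# address (IP/MAC) is paired with its physical address (installation
# location) and a flag recording whether its reduction characteristics
# are optimized for the receiver (1006).
DEVICE_LIST = {
    "192.0.2.10": {"location": "lobby",         "optimized": True},
    "192.0.2.11": {"location": "passage",       "optimized": False},
    "192.0.2.12": {"location": "entrance/exit", "optimized": False},
}

def decide_sharpening(sender_address: str) -> bool:
    """Processing method determination unit (1002): sharpen only images
    from senders whose characteristics are optimized. Devices not in
    the list are treated as not optimized (no sharpening)."""
    entry = DEVICE_LIST.get(sender_address)
    return bool(entry and entry["optimized"])
```

Defaulting unknown senders to "no sharpening" is the conservative choice implied by the text, since sharpening an image of mismatched characteristics may produce poor results.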
  • In addition, a (c) image selection/combination mode may be provided, in which i images among (i + j) images (where i and j are each an integer of 1 or more) (image #1 in FIG. 12(c)) are displayed large as a main screen and the other j images (images #2 to #6 in FIG. 12(c)) are displayed small as sub-screens; the display may be switched among this mode, the (a) image selection mode, and the (b) image composition mode.
  • In the (c) image selection/combination mode, control is performed so that the image sharpening process is applied to the i images selected for the main screen (image #1 in FIG. 12(c)) and not to the other j images displayed as sub-screens.
  • Alternatively, whether or not to perform the image sharpening process may be determined based on the size of the displayed image (that is, the numbers of horizontal and vertical pixels). That is, a threshold for the number of pixels may be determined in advance, and the image sharpening process may be applied to images displayed with more pixels than the threshold.
  • The ratio of the sharpened image is desirably about 60 to 80, where the total screen area occupied by the sharpened image and the non-sharpened decoded images is taken as 100. If the sharpened image is too small, details remain invisible even after the sharpening process, and the effect is weak. If the sharpened image is too large, the processing load differs little from that of the (a) image selection mode; rather, the amount of processing increases by the amount needed to combine it with the decoded images, and the merit of combining images is diminished.
  • Further, the image sharpening process may be performed on images displayed with a larger number of pixels than the transmitted image size.
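The pixel-count criterion above can be sketched as a one-line test; the concrete threshold values (960 × 540 here) are assumptions for the example, since the patent only states that a threshold is determined in advance.

```python
def should_sharpen(display_w: int, display_h: int,
                   threshold_w: int = 960, threshold_h: int = 540) -> bool:
    """Apply the sharpening process only to images displayed with more
    horizontal and vertical pixels than the predetermined threshold."""
    return display_w > threshold_w and display_h > threshold_h
```

With these assumed values, a full-HD main screen would be sharpened while a D1-size sub-screen would not, matching the main-screen/sub-screen behavior of the (c) image selection/combination mode.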
  • The (c) image selection/combination mode can also be regarded as a special case of the (a) image selection mode, that is, a case where the j sub-screen images are combined with and displayed on the image of the image selection mode.
  • The present invention can also be realized by software program code that implements the functions of the embodiments.
  • In this case, a storage medium on which the program code is recorded is provided to a system or apparatus, and the computer (or CPU or MPU) of the system or apparatus reads the program code stored in the storage medium.
  • The program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present invention.
  • As the storage medium for supplying such program code, for example, a flexible disk, CD-ROM, DVD-ROM, hard disk, optical disc, magneto-optical disc, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like is used.
  • Also, an OS (operating system) or the like running on the computer may perform part or all of the actual processing based on the instructions of the program code, and the functions of the above-described embodiments may be realized by that processing.
  • Further, the program code may be stored in storage means such as the hard disk or memory of the system or apparatus, or on a storage medium such as a CD-RW or CD-R, and the computer (or CPU or MPU) of the system or apparatus may read and execute the stored program code when it is used.
  • The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines in the product are necessarily shown. All components may be connected to each other.


Abstract

Provided is a technology which achieves image transmission with improved image quality by flexibly performing signal processing related to image enlargement and sharpening, in accordance with the capacity of computing resources.

Description

Image processing system and image processing method
The present invention relates to an image processing system and an image processing method, and more particularly to an image processing system and an image processing method using an image signal encoding technique and a sharpening technique.
Moving image coding technologies typified by ITU-T Recommendation H.264 and still image coding technologies typified by JPEG are indispensable for applications that transmit images, such as videophones, remote conferencing, and network surveillance cameras.
In applications that encode and transmit such images, images captured by multiple cameras installed at many sites are commonly transmitted simultaneously via a communication network such as the Internet, an intranet, or a public network, and displayed together on a single screen by a single image receiving terminal. In this case, encoded data whose size equals the amount of code generated for one image multiplied by the total number of images is transmitted to the one terminal via the communication network. When images whose data size exceeds the transmission bandwidth limit of the communication network are transmitted, communication errors and data delays occur in applications that transmit moving images, and the received images may deteriorate or freeze.
Further, when the transmitted images are recorded and stored on media such as hard disks (magnetic disks) or optical discs, a large data size requires large-capacity recording media, which raises the cost of the recording apparatus. Alternatively, if media with limited recording capacity are used, only short recordings are possible.
Therefore, when providing users with an application that transmits or records images, the system realizing the application must encode and compress the images in accordance with the transmission bandwidth and recording capacity to reduce the data size. In general, however, the encoded data size of an image and its quality are in a trade-off relationship: the smaller the encoded data size, the more the image quality deteriorates. A technique that can maintain the image quality desired by the user even when the encoded data size is reduced is therefore desirable.
In such applications, the number of pixels of the cameras used for imaging has increased with technological progress; images of, for example, 1920 horizontal × 1080 vertical pixels (hereinafter abbreviated as full HD size) are now commonly used. However, in view of the constraints on the transmission bandwidth of the communication network and the capacity of recording media described above, the number of pixels at the time of imaging is sometimes intentionally reduced, and an image of, for example, 704 horizontal × 480 vertical pixels (hereinafter abbreviated as D1 size) is used as the encoding target to reduce the data size. In this case, part of the captured image frame may be trimmed to form the encoding target, but more often the entire captured frame is reduced to decrease the number of pixels. When the transmitted or recorded image is reproduced, the reduced image is generally enlarged for display.
It is well known that when an image is enlarged merely by increasing the number of pixels with an interpolation filter or the like, the enlarged image becomes blurred. Many techniques have therefore been proposed that suppress this blur by combining enlargement with image sharpening.
As a technique for sharpening images, amplifying the high-frequency components contained in an image to emphasize edges and the like has long been used. In recent years, signal processing techniques called super-resolution have been actively developed. For example, in the technique described in Patent Document 1, nonlinear processing is applied to the high-frequency components contained in an image and the result is added to the original image, thereby generating harmonic components exceeding the Nyquist frequency that represents the resolution limit of the transmitted or recorded image. The technique described in Patent Document 2 exploits the fact that high-frequency components exceeding the Nyquist frequency are contained in the image as aliasing components: multiple image frames are taken as input, the positions of the subject appearing in each frame are accurately aligned, the baseband component and the aliasing component are separated, the aliasing component is demodulated (inverse-transformed) to the frequency components before aliasing, and the demodulated aliasing component is added to the baseband signal to obtain a high-resolution image. The technique described in Patent Document 3 separates the aliasing component from a single image frame in the same manner as the technique of Patent Document 2.
Non-Patent Document 1 introduces a number of techniques for obtaining a high-resolution image from multiple image frames by successive approximation involving iterative computation. Non-Patent Document 2 describes a technique for obtaining a high-resolution image from a single frame by successive approximation.
Patent Document 1: Japanese Patent No. 5320538
Patent Document 2: Japanese Patent No. 5103314
Patent Document 3: Japanese Patent No. 5250233
As means for realizing applications that transmit, record, and display images from videophones, remote conferences, network surveillance cameras, and the like as described above, personal computers (hereinafter abbreviated as PCs), mobile terminals, and the like are often used. In these terminals, signal processing is performed in software on a general-purpose CPU (Central Processing Unit), MPU (Micro Processing Unit), GPGPU (General-Purpose computing on Graphics Processing Units), or the like, so that image encoding, decoding, reduction, enlargement, sharpening, composition of multiple images, and so on can be realized without special hardware.
On the other hand, the super-resolution processing described above requires an extremely large amount of computation compared with edge enhancement processing that merely amplifies the high-frequency components of an image. For example, in the super-resolution techniques using successive approximation described in Non-Patent Documents 1 and 2, obtaining a single output image requires repeating, several times, enlargement of the input image, reduction of the enlarged image, detection of the difference from the input image, and correction of the output image, as described later; this consumes a great deal of computing resources such as CPU time and memory. In particular, when images captured by multiple cameras at many sites are transmitted simultaneously over a communication network and displayed together on one screen, and the single image receiving terminal is implemented on a PC, decoding, enlargement, and sharpening of multiple images must be performed at the same time. This can greatly exceed the processing limits of the computing resources, causing dropped frames that make the motion jerky, stalling the entire process, or leaving the terminal unresponsive to user input from the mouse, keyboard, and so on. It is therefore not easy to sharpen images in parallel while receiving multiple streams of image data.
The present invention has been made in view of this situation, and provides a technique that realizes image transmission with improved quality by flexibly executing the signal processing for image enlargement and sharpening according to the performance of the computing resources of the image receiving terminal.
To solve the above problems, for example, the configurations described in the claims are adopted. The present application includes a plurality of means for solving the above problems; to give one example, an image processing system includes a plurality of first image processing devices and a second image processing device that receives encoded image signals from the first image processing devices. Each first image processing device includes an imaging unit that captures an image, an encoding unit that encodes the image and outputs a signal, and a transmission unit that transmits the signal to the second image processing device. The second image processing device includes a decoding unit that receives the signal and outputs a decoded image, a sharpening processing unit that sharpens the decoded image and outputs a sharpened image, a display unit that displays the decoded image or the sharpened image, and a control unit that switches the image displayed on the display unit. The sharpening processing unit generates an enlarged image by enlarging the decoded image, extracts a high-frequency component from the enlarged image and applies nonlinear processing to obtain a first processed image, synthesizes the first processed image with the enlarged image to generate a second processed image, and outputs an image obtained by removing the aliasing component from the second processed image as the sharpened image.
Alternatively, an image processing method in which images captured by a plurality of first image processing devices are processed by a second image processing device includes a first step of capturing an image with the first image processing device, a second step of encoding the image and outputting a signal, a third step of transmitting the signal to the second image processing device, a fourth step of receiving the signal and outputting a decoded image, a fifth step of sharpening the decoded image and outputting a sharpened image, a sixth step of displaying the decoded image or the sharpened image on a display unit, and a seventh step of switching the image displayed on the display unit. In the fifth step, an enlarged image is generated by enlarging the decoded image, a high-frequency component is extracted from the enlarged image and subjected to nonlinear processing to obtain a first processed image, the first processed image and the enlarged image are synthesized to generate a second processed image, and an image obtained by removing the aliasing component from the second processed image is output as the sharpened image.
According to the present invention, the signal processing for image enlargement and sharpening can be executed flexibly according to the performance of the computing resources, and image transmission with improved image quality can be realized.
FIG. 1 is a diagram for explaining the schematic configuration of a first image processing system according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining an example of general image sharpening processing.
FIG. 3 is a flowchart for explaining an example of the procedure of a first display switching process according to the embodiment of the present invention.
FIG. 4 is a flowchart for explaining an example of the procedure of image reception processing according to the embodiment of the present invention.
FIG. 5 is a diagram for explaining an example configuration of the image enlargement and sharpening processing according to the embodiment of the present invention.
FIG. 6 is a diagram for explaining an example operation of the image enlargement and sharpening processing according to the embodiment of the present invention.
FIG. 7 is a diagram for explaining the configuration of image reduction according to the embodiment of the present invention.
FIG. 8 is a diagram for explaining the outline of a second image processing system according to the embodiment of the present invention.
FIG. 9 is a diagram for explaining a second display switching process according to the embodiment of the present invention.
FIG. 10 is a diagram for explaining the schematic configuration of a third image processing system according to the embodiment of the present invention.
FIG. 11 is a diagram for explaining the operation of the third image processing system according to the embodiment of the present invention.
FIG. 12 is a diagram for explaining other example image display forms according to the embodiment of the present invention.
 The present invention provides a technique for realizing image transmission with improved image quality.
 Embodiments of the present invention will now be described with reference to the accompanying drawings. In the drawings, common components are given the same reference numerals. The accompanying drawings show specific embodiments and implementation examples in accordance with the principles of the present invention, but these are provided to aid understanding of the invention and should never be used to interpret the invention restrictively.
 Although this embodiment is described in sufficient detail for those skilled in the art to practice the present invention, it should be understood that other implementations and forms are possible, and that changes to the configuration or structure and replacement of various elements are possible without departing from the scope and spirit of the technical idea of the present invention. The following description should therefore not be interpreted as limiting.
 Furthermore, the embodiments of the present invention may be implemented as software running on a general-purpose computer, or as a combination of software and hardware.
 The term "remove" as used in the following description includes not only "eliminate completely" but also "reduce" or "weaken".
 In the following description, full HD size (1920 horizontal pixels x 1080 vertical pixels) and D1 size (704 horizontal pixels x 480 vertical pixels) are used as examples of the number of pixels constituting an image frame. It is evident, however, that the present invention is also applicable to image frames with other pixel counts, and these examples should never be used to interpret the invention restrictively.
 <Schematic configuration of the image processing system>
 FIG. 1 is a diagram showing the schematic configuration of an image processing system according to an embodiment of the present invention. The image processing system comprises n image transmission devices (101-#1 to #n), where n is an integer of 1 or more, a communication network (105), an image receiving device (106), an operation unit (111) with which the user operates the image receiving device (106), and a display unit (112) for displaying the output image of the image receiving device (106).
 The image transmission device (101) comprises a camera (102), an image reduction unit (103), and an encoding unit (104). The camera (102) comprises, for example, a general imaging unit consisting of a lens and a photoelectric conversion element (not shown) such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal-Oxide Semiconductor) sensor, and a general signal processing unit that performs signal level adjustment, contrast adjustment, brightness adjustment, white balance adjustment, and the like. The image reduction unit (103) reduces the number of pixels of the image obtained by the camera (102); it may also trim part of the captured image frame to reduce the pixel count. When a camera with a small pixel count is used to begin with, the image reduction unit (103) may be omitted. The encoding unit (104) may perform standardized encoding such as the well-known MPEG (Moving Picture Experts Group)-1, MPEG-2, MPEG-4, H.264, VC-1, JPEG (Joint Photographic Experts Group), Motion JPEG, or JPEG 2000, or may perform non-standard encoding. The processing configuration for connecting the image transmission device (101) to the communication network (105) can be realized with common techniques and is therefore not shown.
 The communication network (105), which may be either wired or wireless, is a network for communicating digital data using a common communication protocol such as IP (Internet Protocol).
 The image receiving device (106) comprises decoders (107-#1 to #n) for decoding the respective images transmitted from the image transmission devices (101-#1 to #n), a processing unit (108) that performs image selection or image composition as described later, a processing unit (109) that performs image enlargement and sharpening as described later, and a control unit (110) that controls these processing units (108, 109) based on signals from the operation unit (111). The processing configuration for connecting the communication network (105) to the image receiving device (106) can be realized with common techniques and is therefore not shown.
 The image reproduced by the image receiving device (106) is displayed on the display unit (112). When n = 1, a single image is displayed on the entire screen of the display unit, as shown in (a) image selection mode of the display form (113) in FIG. 1. When n >= 2, a signal from the operation unit (111) switches between (a) an image selection mode, in which only one of the n images is selected and displayed, and (b) an image composition mode, in which up to n images (FIG. 1 illustrates the case of nine images) are composed into a single image and displayed.
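 The two display forms of (113) can be sketched in code as follows. This is an illustrative sketch only: the function names, the use of numpy arrays for decoded frames, and the 3x3 grid layout are assumptions for the example, not details prescribed by the embodiment.

```python
import numpy as np

def select_mode(images, index):
    """(a) Image selection mode: return the single user-selected image."""
    return images[index]

def compose_mode(images, grid=(3, 3)):
    """(b) Image composition mode: tile up to grid[0]*grid[1] equally sized
    images into one composite image (row-major order)."""
    rows, cols = grid
    h, w, c = images[0].shape
    canvas = np.zeros((rows * h, cols * w, c), dtype=images[0].dtype)
    for i, img in enumerate(images[: rows * cols]):
        r, col = divmod(i, cols)
        canvas[r * h:(r + 1) * h, col * w:(col + 1) * w] = img
    return canvas
```

In selection mode the one chosen frame would then go through the enlargement and sharpening unit (109); in composition mode the composite canvas is displayed as-is.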
 With the configuration of the image processing system described above, the images captured by n cameras (where n is an integer of 1 or more) can be displayed on a single image receiving device via a communication network. It is then possible either to selectively display only the one image designated by the user out of the n images, or to display up to n images exhaustively. Moreover, by using processing that enlarges and sharpens the image, a clear image with little blur can be displayed even when the image is enlarged.
 <Overview of general image sharpening processing and its problems>
 FIG. 2 shows configuration examples of general image sharpening processing disclosed to date. The image sharpening processing shown in FIG. 2 can be used as the processing unit (109) that performs image enlargement and sharpening shown in FIG. 1.
 In the image sharpening processing (109(a)) configured as shown in FIG. 2(a), the input image is enlarged by the image enlargement unit (201); the horizontal/vertical high-pass filter (202) extracts the high-frequency components contained in the enlarged image; the multiplier (203) multiplies these high-frequency components by a coefficient K to adjust their signal strength; the nonlinear processing unit (204) applies nonlinear processing to the signal strength; and the adder (205) adds the enlarged image and the nonlinearly processed high-frequency components to produce the output image. If the operation of the nonlinear processing unit (204) is omitted, the result is general high-frequency amplification (edge enhancement). The nonlinear processing of signal strength in the nonlinear processing unit (204) may include, for example, processing that generates odd-order harmonic components by cubing the signal value, small-amplitude signal removal (coring) for reducing noise, and large-amplitude signal removal (a limiter) for suppressing excessive edge enhancement. Compared with the image sharpening processing shown in FIGS. 2(b) and 2(c) described later, this image sharpening processing has the advantage of removing image blur with a small amount of computation, but it has no effect of removing the aliasing components that arise when the number of pixels in the image is increased or decreased. The image sharpening processing (109(a)) configured as shown in FIG. 2(a) can be realized by, for example, the technique described in Patent Document 1, so a detailed description of the configuration and operation of each part is omitted.
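 The Fig. 2(a) signal path can be sketched for grayscale images as below. The nearest-neighbour enlargement, the 3x3 Laplacian-style high-pass kernel, and the values of K, the coring threshold, and the limiter are all illustrative assumptions; the embodiment does not prescribe any of these specifics.

```python
import numpy as np

def sharpen_2a(img, scale=2, K=0.8, core=0.02, limit=0.5):
    """Sketch of Fig. 2(a): enlarge (201), high-pass (202), multiply by K
    (203), nonlinear processing (204), add back (205)."""
    # Image enlargement unit (201): nearest-neighbour stand-in.
    up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
    # Horizontal/vertical high-pass filter (202): 3x3 Laplacian-like kernel.
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=float) / 4.0
    pad = np.pad(up, 1, mode="edge")
    h, w = up.shape
    hp = sum(kernel[dy, dx] * pad[dy:dy + h, dx:dx + w]
             for dy in range(3) for dx in range(3))
    hp = K * hp                                  # multiplier (203)
    # Nonlinear processing (204): coring then limiting.
    nl = np.where(np.abs(hp) < core, 0.0, hp)    # small-amplitude removal
    nl = np.clip(nl, -limit, limit)              # large-amplitude removal
    return up + nl                               # adder (205)
```

Flat regions pass through unchanged (the high-pass output falls under the coring threshold), while edges receive a bounded overshoot/undershoot.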
 In the image sharpening processing (109(b)) configured as shown in FIG. 2(b), the input image is enlarged by the image enlargement unit (206); the aliasing separation unit (207) separates the baseband component and the aliasing component from one or more enlarged images; the demodulator (208) demodulates (inversely transforms) the aliasing component back to its pre-aliasing frequency components; and the adder (209) adds the demodulated aliasing component and the baseband signal to produce the output image. By removing the aliasing component, this image sharpening processing reproduces high-frequency components beyond the Nyquist frequency and sharpens the image while suppressing image quality degradation such as jagged lines and moire. On the other hand, this image sharpening processing has no effect of amplifying attenuated high-frequency components. The image sharpening processing (109(b)) configured as shown in FIG. 2(b) can be realized by, for example, the techniques described in Patent Document 2 or Patent Document 3, so a detailed description of the configuration and operation of each part is omitted.
 In the image sharpening processing (109(c)) configured as shown in FIG. 2(c), a tentatively set output image (initial image) is reduced by the image reduction unit (210) to the same number of pixels as the input image; the difference detection unit (211) detects the difference between the reduced image and the input image to generate a correction value for each pixel; and the adder (212) adds the input image and the correction values and feeds the result to the image enlargement unit (213), whose output becomes the new tentative output image. These steps are repeated many times so that the correction values gradually decrease, and when the correction values for all pixels become smaller than a predetermined threshold, or when the number of iterations exceeds a predetermined count, the output of the image enlargement unit (213) is taken as the output image. The input may be a single frame, or multiple frames may be used, in which case the average image obtained after position correction so that the subject positions on the frames coincide is used as the input image to the difference detection unit (211) and the adder (212). This image sharpening processing can simultaneously achieve removal of aliasing components and removal of image blur (amplification of high-frequency components). On the other hand, because it requires iterative computation, high-performance computational resources are needed to achieve real-time processing. The image sharpening processing (109(c)) configured as shown in FIG. 2(c) can be realized by, for example, the techniques described in Non-Patent Document 1 and Non-Patent Document 2, so a detailed description of the configuration and operation of each part is omitted.
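 The iteration of Fig. 2(c) can be sketched as follows. The 2x2 box-average reduction and nearest-neighbour enlargement are simple stand-ins chosen for brevity (with this particular operator pair the correction happens to converge immediately; with realistic smoothing filters several iterations are needed), and all names, the iteration budget, and the threshold are assumptions for the sketch.

```python
import numpy as np

def reduce2(img):
    """Image reduction unit (210): 2x2 box average."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def enlarge2(img):
    """Image enlargement unit (213): nearest-neighbour 2x upscale."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def back_project(low, iters=20, tol=1e-4):
    """Repeat reduce -> diff (211) -> add (212) -> enlarge (213) until the
    per-pixel corrections fall below tol or the iteration budget runs out."""
    est = low.copy()                    # running low-resolution estimate
    out = enlarge2(est)                 # initial tentative output image
    for _ in range(iters):
        diff = low - reduce2(out)       # difference detection unit (211)
        if np.max(np.abs(diff)) < tol:  # all corrections below threshold
            break
        est = est + diff                # adder (212): input + correction
        out = enlarge2(est)             # image enlargement unit (213)
    return out
```

The defining consistency property, whichever filters are used, is that reducing the converged output reproduces the observed input.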
 The image sharpening processing described above yields an enlarged image with reduced blur. However, compared with edge enhancement that merely amplifies the high-frequency components contained in an image, these image sharpening processes require a large amount of computation. Consequently, if image sharpening is applied to all the images obtained from multiple cameras, the processing limit of the computational resources of the image receiving device (106) is greatly exceeded, causing problems such as dropped frames that make the image motion jerky, the entire process stopping, or the device becoming unresponsive to user input from a mouse, keyboard, or the like. It is therefore not easy to sharpen images in parallel while receiving multiple image data streams.
 <Procedure of the display switching processing (example)>
 To solve the problems described above, the embodiment of the present invention proposes display switching processing with the procedure described below.
 FIG. 3 is a flowchart showing an example of the procedure of the display switching processing according to the embodiment of the present invention. In FIG. 3, the display switching processing starts at step (301); step (302) identifies the display mode designated by the user, and processing proceeds to step (303) for (a) image selection mode or to step (305) for (b) image composition mode.
 (a) Image selection mode is a processing mode in which the one image the user wants to display is selected from the n decoded images (where n is an integer of 1 or more) and displayed. Assuming, for example, that a decoded D1-size image (704 horizontal pixels x 480 vertical pixels) is displayed on a full-HD-size display unit (1920 horizontal pixels x 1080 vertical pixels), the image is enlarged by about 2.7x (= 1920/704) horizontally and about 2.3x (= 1080/480) vertically, so image blur is conspicuous. Conversely, the blur-removal effect of the sharpening processing is large. Moreover, since only one image is displayed, sharpening need only be applied to that image. That is, sharpening of the images that are not displayed becomes unnecessary, and the computational resources required are reduced compared with sharpening all the decoded images. Accordingly, in step (303) the image the user wants to display is selected from the n decoded images (where n is an integer of 1 or more) based on the user's instruction, image enlargement and sharpening are performed in step (304), and the display switching processing ends at step (307).
 (b) Image composition mode, on the other hand, is a processing mode in which up to n images (FIG. 1 illustrates the case of nine images) are composed into one image and displayed. Assuming, for example, that nine decoded D1-size images (704 horizontal pixels x 480 vertical pixels) are composed into one image and displayed on a full-HD-size display unit (1920 horizontal pixels x 1080 vertical pixels), the number of display pixels per decoded image is 640 horizontal pixels (= 1920/3) x 360 vertical pixels (= 1080/3), which is smaller than the transmitted D1 size. Image enlargement and sharpening are therefore unnecessary, so in step (305) the images to be displayed are composed into one image, in step (306) each part is controlled so that image enlargement and sharpening are not performed, and the display switching processing ends at step (307). Even when the number of display pixels per image after composition is larger than the number of pixels transmitted in the encoded image, it is still smaller than the number of display pixels in (a) image selection mode described above, so the blur of the displayed image in (b) image composition mode is comparatively small. The blur-removal effect of sharpening is accordingly also comparatively small, so controlling the system to stop the sharpening processing in (b) image composition mode causes no significant problem.
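 The pixel arithmetic behind this mode decision can be written out directly. The helper name and the decision variable are assumptions for the example; the sizes follow the D1 and full HD figures used in the text.

```python
def per_image_display_size(screen_w, screen_h, grid_cols, grid_rows):
    """Display pixels available to each image in a composed grid."""
    return screen_w // grid_cols, screen_h // grid_rows

# (a) Image selection mode: one D1 image fills the full HD screen.
scale_h = 1920 / 704    # about 2.7x horizontal enlargement
scale_v = 1080 / 480    # about 2.3x (exactly 2.25x) vertical enlargement

# (b) Image composition mode: nine images in a 3x3 grid.
w, h = per_image_display_size(1920, 1080, 3, 3)   # 640 x 360, below D1 size
needs_sharpening = w > 704 or h > 480             # no enlargement -> skip
```

Since 640 x 360 is smaller than the transmitted 704 x 480, composition mode involves reduction rather than enlargement, which is why the sharpening stage can be bypassed there.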
 The display switching processing shown in FIG. 3 may be executed whenever the user performs a mode switching operation. Switching from (a) image selection mode to (b) image composition mode may be triggered, for example, by a specific key operation on the keyboard of the operation unit (111) shown in FIG. 1 (e.g., pressing the Escape key) or by a specific mouse operation on the operation unit (111) (e.g., clicking the right button). Switching from (b) image composition mode to (a) image selection mode may select the image and switch modes simultaneously: for example, while viewing the multiple images displayed on the display unit (112), the user enters the code corresponding to the desired image from the keyboard of the operation unit (111), presses the numeric key corresponding to the position of the desired image, or moves the cursor, which follows the mouse of the operation unit (111) on the display unit (112), over the desired image and double-clicks the left mouse button.
 With the display switching procedure (example) described above, in (a) image selection mode the sharpening processing is narrowed down to the displayed image only and stopped for the images that are not displayed, reducing the computational resources required compared with sharpening all the decoded images. In (b) image composition mode, no sharpening is performed at all, greatly reducing the computational resources.
 <Procedure of the image reception processing (example)>
 The display switching procedure (example) described above reduces computational resources by switching the sharpening processing on or off for each display mode. However, it does not reduce the computational resources required by the sharpening processing itself. To solve this problem (reducing the computational resources required for sharpening), the embodiment of the present invention proposes image reception processing with the procedure described below.
 FIG. 4 is a flowchart showing an example of the procedure of the image reception processing according to the embodiment of the present invention. In FIG. 4, the image reception processing starts at step (401), and step (402) receives and decodes the n transmitted image data (where n is an integer of 1 or more). The subsequent steps (301 to 307) are the display switching procedure shown in FIG. 3; depending on the display mode set by user operation, processing proceeds to step (404) in (a) image selection mode, whereas in (b) image composition mode the image is displayed in step (409) and the image reception processing ends at step (410).
 In step (404), measurement of the time required for the image enlargement and sharpening executed in step (405) is started; in step (406) the measurement ends and processing proceeds to step (407). For example, a timer may be started in step (404), the elapsed time indicated by the timer read in step (406) and taken as the processing time of step (405), and the timer then stopped. Alternatively, the elapsed time since the timer was started may be read in both step (404) and step (406), and the difference between the two readings taken as the processing time of step (405). Alternatively, step (404) may be omitted and the difference between the time read in the previous step (406) and the time read in the current step (406) taken as the processing time of step (405). In step (405), image enlargement and sharpening are executed by performing the processing configured as shown in FIG. 5, described later.
 In step (407), the processing time of step (405) measured in step (406) is compared with a predetermined set value, and the content of the next sharpening processing in step (405) is decided as described later. Specifically, the content of the next sharpening processing is chosen, for example from the processing content shown in FIG. 6 described later, so that the processing time of step (405) stays at or below the predetermined set value. The image is then displayed in step (409), and the image reception processing ends at step (410). As the predetermined set value when step (405) is executed per image frame, for example, a value may be set that takes the reciprocal of the average display frame rate and allows for a margin. That is, to display 5 frames per second, for example, the predetermined set value may be set to 200 milliseconds or less.
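 The measure-compare-decide loop of steps (404) to (407) might be sketched as follows, assuming a ladder of progressively cheaper sharpening variants. The level names, the stub `sharpen`, and the step-up/step-down policy are illustrative assumptions, not the embodiment's prescribed decision rule.

```python
import time

# Ladder of processing variants, ordered from most to least costly (assumed).
LEVELS = ["full", "no_nonlinear", "short_taps", "bypass"]

def next_level(elapsed_s, level, budget_s=0.2):
    """Step (407): compare the measured time with the set value (here the
    200 ms budget for 5 fps) and pick the next processing content."""
    i = LEVELS.index(level)
    if elapsed_s > budget_s and i + 1 < len(LEVELS):
        return LEVELS[i + 1]          # over budget: choose a cheaper variant
    if elapsed_s < 0.5 * budget_s and i > 0:
        return LEVELS[i - 1]          # ample headroom: restore quality
    return level

def sharpen(frame, level):
    """Stand-in for processing unit (109); the real cost varies with level."""
    return frame

def process_frame(frame, level, budget_s=0.2):
    t0 = time.perf_counter()             # step (404): start timing
    out = sharpen(frame, level)          # step (405): enlarge and sharpen
    elapsed = time.perf_counter() - t0   # step (406): end timing
    return out, next_level(elapsed, level, budget_s)  # step (407)
```

The hysteresis (stepping back up only when well under budget) is one way to keep the level from oscillating frame to frame.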
 With the image reception procedure (example) described above, the degree of headroom in the computational resources of the image receiving device (106) shown in FIG. 1 can be grasped from the time taken for image enlargement and sharpening. By deciding the content of the next sharpening processing based on this result, problems in which the processing limit of the computational resources is greatly exceeded, causing dropped frames that make the image motion jerky, the entire process stopping, or the device becoming unresponsive to user input from a mouse, keyboard, or the like, can be avoided, and the signal processing for image enlargement and sharpening can be executed flexibly in accordance with the performance of the computational resources.
 <Configuration of the image enlargement and sharpening processing (example)>
 FIG. 5 is a diagram showing a configuration example of the image enlargement and sharpening processing according to the embodiment of the present invention. As described above, the image enlargement and sharpening processes shown in FIGS. 2(a), 2(b), and 2(c) each have advantages and disadvantages, and, as described later, there are selectable options within each process. Accordingly, depending on the image enlargement and sharpening time measured by the image reception procedure shown in FIG. 4, that is, depending on the degree of headroom in the computational resources of the image receiving device (106), multiple image enlargement and sharpening processes are combined and the content of each process is selected, so that the signal processing for image enlargement and sharpening is executed flexibly in accordance with the performance of the computational resources.
 In FIG. 5, the processing unit (501) that performs image enlargement and sharpening combines in series the image enlargement and sharpening processing unit (109(a)) shown in FIG. 2(a) and the separate image enlargement and sharpening processing unit (109(b)) shown in FIG. 2(b). This combination compensates for the shortcomings of the two processing units described above: the processing unit (109(a)) removes image blur, and the processing unit (109(b)) removes aliasing components. The switches (502) and (503) can also switch over to the image enlargement and sharpening processing unit (109(c)) shown in FIG. 2(c). This configuration is one example of how to combine image enlargement and sharpening processes, and the invention is not limited to the configuration of FIG. 5. For example, the order of the processing units (109(a)) and (109(b)) may be reversed, or only one of the processing units (109(a)), (109(b)), or (109(c)) may be used.
 Here, the processing unit (109(a)) that performs image enlargement and sharpening has processing content options (examples) as shown in FIG. 5. That is, increasing the number of taps of the horizontal/vertical filter (202) in FIG. 2(a) allows the frequency characteristics of the filter to be set finely, improving image quality; conversely, reducing the number of taps sacrifices image quality slightly but reduces the amount of computation and shortens the processing time. Likewise, the nonlinear processing (204) shown in FIG. 2(a) may be performed to generate odd-order harmonic components; conversely, omitting the nonlinear processing (204) sacrifices image quality slightly but reduces the amount of computation and shortens the processing time. The amount of computation can also be reduced greatly by skipping the processing of the processing unit (109(a)) entirely.
Similarly, the processing unit (109(b)) that performs image enlargement and sharpening offers the processing options (examples) shown in FIG. 5. That is, the image can be sharpened by demodulating (208) the aliasing components and adding (209) them back, but when computing resources are insufficient, the demodulation (208) and addition (209) can be omitted; the baseband signal from which the aliasing components have been separated then becomes the output image, which still suppresses the jaggedness and moiré caused by aliasing. Increasing the number of input frames used for this processing improves the separation accuracy of the aliasing components and thus the image quality, while conversely setting the number of input frames to 1 makes inter-frame subject alignment unnecessary and greatly reduces the amount of computation. Skipping the processing of the entire processing unit (109(b)) also greatly reduces the amount of computation. Note that when the processing unit (109(a)) and the processing unit (109(b)) are used in combination, the image is enlarged by the image enlargement unit (201) of the processing unit (109(a)), so the image enlargement unit (206) of the processing unit (109(b)) is unnecessary.
Similarly, the processing unit (109(c)) that performs image enlargement and sharpening offers the processing options (examples) shown in FIG. 5. That is, increasing the number of iterations of the image reduction unit (210), the difference detection unit (211), the adder (212), and the image enlargement unit (213) in FIG. 2(c) allows the result in which the correction value has converged to a sufficiently small value to be used as the output image; conversely, reducing the number of iterations sacrifices some image quality but reduces the amount of computation and shortens the processing time. The image quality can also be improved by increasing the number of input frames used for this processing, while conversely setting the number of input frames to 1 makes inter-frame subject alignment unnecessary and greatly reduces the amount of computation.
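The iterative loop of units (210)-(213) can be sketched in one dimension. This is a hedged illustration in Python/NumPy under assumed stand-in operators (nearest-neighbour enlargement and pair-averaging reduction), not the patent's actual filters; it shows only how the reduce/compare/correct iteration drives the enlarged estimate toward consistency with the low-resolution input.

```python
import numpy as np

def upscale2(x):
    """Nearest-neighbour 2x enlargement (stand-in for units 206/213)."""
    return np.repeat(x, 2)

def downscale2(x):
    """2x reduction by averaging adjacent pairs (stand-in for unit 210)."""
    return x.reshape(-1, 2).mean(axis=1)

def back_project(low, iterations=10):
    """Iterative refinement as in Fig. 2(c): shrink the current estimate,
    compare it with the observed low-resolution input (unit 211), enlarge
    the residual, and add it as a correction (units 212/213). More
    iterations shrink the correction value; fewer iterations save time."""
    est = upscale2(low)                     # initial enlarged image
    for _ in range(iterations):
        residual = low - downscale2(est)    # difference detection
        est = est + upscale2(residual)      # add enlarged correction
    return est

low = np.array([0.0, 1.0, 0.0, 1.0])
high = back_project(low, iterations=5)
```

After convergence the estimate reproduces the input exactly when reduced again, which is the consistency constraint the iteration enforces.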
The control unit (504) in FIG. 5 selects one processing option from among the many options described above, based on the processing content determined in step (407) of FIG. 4, and controls each unit (109(a), 109(b), 109(c), 502, 503). The specific operation will be described later.
The image enlargement and sharpening processing described above enables flexible sharpening according to the performance of the available computing resources.
<Operation of the image enlargement and sharpening processing (example)>
FIG. 6 summarizes the operation (example) of the control unit (504) in FIG. 5 described above. As stated, the processing units (109(a), 109(b), 109(c)) that perform image enlargement and sharpening in FIG. 5 each offer several processing options, and the amount of computation required differs between them. Therefore, as shown in FIG. 6, a level is defined for each amount of computation required by the processing, and the processing content of each processing unit (109(a), 109(b), 109(c)) is set in advance for each level. Note that the processing contents and processing parameters shown in FIG. 6 are examples for explaining the operation, and the invention is not limited to them.
In FIG. 6, levels 1 to 4 perform only the image enlargement and sharpening of the processing unit (109(a)) and skip the processing units (109(b)) and (109(c)). From level 1 to level 4, the amount of computation of the processing unit (109(a)) gradually increases and the image quality improves. The other levels are set in the same way, so that the amount of computation and the image quality of each processing unit (109(a), 109(b), 109(c)) are positively correlated. When the processing time of step (405) in FIG. 4 is longer than a predetermined set value, the level in FIG. 6 is lowered; conversely, when the processing time of step (405) is shorter than the set value, the level is raised. In this way the processing time of step (405) can be controlled to stay near the set value, and an image quality commensurate with the computing resources is obtained. As the processing time of step (405), the time obtained from a single measurement may be used, or the average of the times obtained from several measurements may be used to suppress frequent level changes.
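The level-feedback rule above can be sketched as a small controller. This is an illustrative Python sketch; the level range, the target time, and the averaging of measurements are assumptions for demonstration (the patent fixes only the direction of the adjustment).

```python
def adjust_level(level, elapsed_ms, target_ms, min_level=1, max_level=8):
    """Feedback control of the sharpening level (cf. Fig. 6): lower the
    level when the measured processing time exceeded the target, raise it
    when there is headroom, so the processing time settles near the
    target. Level bounds are illustrative."""
    if elapsed_ms > target_ms and level > min_level:
        return level - 1
    if elapsed_ms < target_ms and level < max_level:
        return level + 1
    return level

def smoothed_time(samples):
    """Averaging several measurements suppresses frequent level changes."""
    return sum(samples) / len(samples)

# Example: one frame took 40 ms against a 33.3 ms budget, so step down.
level = adjust_level(4, elapsed_ms=40.0, target_ms=33.3)
```

Because computation and quality are positively correlated across levels, this single scalar is enough to trade quality against the available computing resources.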
In this embodiment, an image processing apparatus in which a camera identification function is added to the configuration of Embodiment 1 will be described. Descriptions of configurations that are the same as in Embodiment 1 are omitted where appropriate.
<Schematic configuration of the second image processing system>
FIG. 8 is a diagram showing the schematic configuration of the second image processing system according to an embodiment of the present invention. As described with reference to FIGS. 1 and 7, in the image processing system according to the embodiment of the present invention, image transmission apparatuses 101 (first image processing apparatuses) with various characteristics are connected to an image reception apparatus 106 (second image processing apparatus) via a communication network 105. As in Embodiment 1, the image recording/reproducing apparatus 114 records the encoded image data captured by the image transmission apparatus (101) on a recording medium and, after time shifting or archiving, reproduces the encoded image data from the recording medium and transmits it toward the image reception apparatus (106). As the recording medium, a common hard disk (magnetic disk), an optical disc such as a DVD (Digital Versatile Disc) or CD (Compact Disc), magnetic tape, or the like may be used alone, or a plurality of recording media may be connected in an array. The recording operation and the reproducing operation may be performed independently of each other, or other image data may be reproduced while encoded image data is simultaneously being recorded on the recording medium. These recording media and configurations, the control method of the image recording/reproducing apparatus (114), and the processing configuration for connecting the image recording/reproducing apparatus (114) to the communication network (105) can be realized with common techniques, so their illustration is omitted. Although FIG. 1 shows the image recording/reproducing apparatus (114) as an independent apparatus connected to the image transmission apparatus (101) and the image reception apparatus (106) via the communication network (105), the invention is not limited to this configuration; the image recording/reproducing apparatus (114) may instead be built into the image transmission apparatus (101) or into the image reception apparatus (106).
FIG. 8 shows, as representative examples of the image transmission apparatus 101, the following three types of image transmission apparatuses (101-#p), (101-#q), and (101-#r). The image transmission apparatus (101-#p) comprises a camera (102-#p) with a large number of pixels, such as full-HD size; an image reduction unit (103-#p) whose characteristics are optimized for the image reception apparatus (806) described later; an encoding unit (104); and an identification signal transmission unit (801) described later. The image transmission apparatus (101-#q) comprises a camera (102-#q) with a large number of pixels, such as full-HD size, an image reduction unit (103-#q) whose characteristics are not optimized, and an encoding unit (104). The image transmission apparatus (101-#r) comprises a camera (102-#r) with a small number of pixels, such as D1 size, and an encoding unit (104). The encoding unit (104) is common to all of the image transmission apparatuses.
The identification signal transmission unit (801) is a means for encoding an identification signal indicating that the characteristics are optimized for the image reception apparatus (806) (that is, that the respective characteristics match), multiplexing it with the encoded data of the image from the camera (102-#p), and transmitting the result to the image reception apparatus (806) via the communication network (105). The processing configurations for encoding the identification signal, for multiplexing the data, and for connecting the identification signal transmission unit (801) to the communication network (105) can be realized with common techniques, so their illustration is omitted. The identification signal may be defined in advance as, for example, specific flag data indicating that the characteristics have been optimized, or digital data indicating the model name or serial number of the camera.
The image reception apparatus (806) is obtained by adding an identification signal reception unit (802) to the configuration of the image reception apparatus (106) shown in FIG. 1. The control unit (810) in the image reception apparatus (806) controls the operation of the first processing unit 108, which performs image selection or image composition, and the second processing unit 109, which performs image enlargement and sharpening, based on the signal from the identification signal reception unit (802) and the signal from the operation unit (111). The processing configurations for connecting the communication network (105) to the identification signal reception unit (802) and for separating the identification signal from the transmitted encoded data can be realized with common techniques, so their illustration is omitted. The decoding unit (107), the processing unit (108) that performs image selection or image composition, the processing unit (109) that performs image enlargement and sharpening, the operation unit (111), the display unit (112), and the image recording/reproducing apparatus (114) are identical to the configurations shown in FIGS. 1 and 4, so their description is omitted.
When data from each of the image transmission apparatuses (101-#p), (101-#q), and (101-#r) is input to the image reception apparatus (806) via the communication network (105), the image reception apparatus (806) uses the identification signal reception unit (802) to determine, for each transmitted data stream, whether the predetermined identification signal is multiplexed in it. In this way it can determine that the characteristics of the image reduction unit in the image transmission apparatus (101-#p) are optimized for the image reception apparatus (806), and that in the other image transmission apparatuses (101-#q) and (101-#r) the characteristics of the image reduction unit are not optimized, or no image reduction unit is provided. As explained with reference to FIG. 7, when the characteristics of the image reduction unit are not optimized, or no image reduction unit is provided, the image sharpening in the processing unit (109) that performs image enlargement and sharpening may be difficult; therefore, regardless of whether the display mode is (a) the image selection mode or (b) the image composition mode, the control unit (810) forces the processing unit (109) not to perform sharpening. On the other hand, for data in which the predetermined identification signal is determined to be multiplexed, that is, for images transmitted from the image transmission apparatus (101-#p) whose characteristics are optimized, the sharpening processing is switched on and off according to the display mode described above.
When image data recorded in the image recording/reproducing apparatus (114) is reproduced and received, the frame rate of the reproduced image can sometimes be set arbitrarily. For example, the image may be paused frame by frame or played back in slow motion. Lowering the frame rate at reproduction in this way corresponds to reducing the amount of computation required per unit time. Therefore, when the frame rate at reproduction is lowered, the sharpening processing may be performed even in the (b) image composition mode.
In view of the above, the image processing system described in this embodiment is an image processing system having a plurality of first image processing apparatuses and a second image processing apparatus that receives encoded image signals from the first image processing apparatuses. Each first image processing apparatus comprises an imaging unit that captures an image, an encoding unit that encodes the image and outputs a signal, and a transmission unit that transmits the signal to the second image processing apparatus. The second image processing apparatus comprises a decoding unit that receives the signal and outputs a decoded image, a sharpening processing unit that sharpens the decoded image and outputs a sharpened image, a display unit that displays the decoded image or the sharpened image, and a control unit that switches the image displayed on the display unit. The sharpening processing unit generates an enlarged image by enlarging the decoded image, generates a second processed image by combining the enlarged image with a first processed image obtained by extracting high-frequency components from the enlarged image and applying nonlinear processing to them, and outputs, as the sharpened image, an image obtained by removing aliasing components from the second processed image.
By performing the first processing and the second processing in this way, an image from which the aliasing components have been removed can be output, and an image signal sent from a remote location can be displayed with improved quality.
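The claimed sequence (enlarge, extract high frequencies, apply nonlinear processing, combine, then remove aliasing) can be strung together in one sketch. This is an illustrative Python/NumPy reconstruction under assumed stand-in operators (linear-interpolation enlargement, a 3-tap high-pass, a cubic nonlinearity, and a short low-pass as the aliasing-removal stage); it mirrors the order of operations in the claim, not the patent's actual filters.

```python
import numpy as np

def sharpen_pipeline(decoded):
    # Enlarged image: 2x linear interpolation of the decoded image.
    n = len(decoded)
    enlarged = np.interp(np.arange(2 * n) / 2.0, np.arange(n), decoded)
    # First processed image: high-frequency extraction + nonlinear processing.
    hp = np.array([-0.25, 0.5, -0.25])
    first = np.convolve(enlarged, hp, mode="same") ** 3
    # Second processed image: combine with the enlarged image.
    second = enlarged + first
    # Aliasing removal (stand-in): a short low-pass over the combined image.
    lp = np.array([0.25, 0.5, 0.25])
    return np.convolve(second, lp, mode="same")

sharp = sharpen_pipeline(np.concatenate([np.zeros(6), np.ones(6)]))
```

Each intermediate variable corresponds to one term of the claim language (enlarged image, first processed image, second processed image, sharpened image), which makes the claim easier to trace.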
Also, unlike Embodiment 1, by receiving the identification signal from the image transmission apparatus, the image sharpening processing can be performed appropriately even when image transmission apparatuses (101) with various characteristics are connected to the image reception apparatus (806) via the communication network (105).
<Procedure of the second display switching processing (example)>
To realize the operation of the control unit (810) described above, an embodiment of the present invention proposes a display switching process with the procedure described below.
FIG. 9 is a flowchart showing an example of the procedure of the second display switching processing according to an embodiment of the present invention. The contents of steps (301), (302), (303), (304), (305), (306), and (307) in FIG. 9 are identical to the corresponding steps in FIG. 3, so their description is omitted.
After the image to be displayed is selected in step (303) of FIG. 9, step (901) determines whether the identification signal corresponding to the selected image has been received. If the identification signal has been received, the process proceeds to step (304); if not, it proceeds to step (902). In step (902), the image is enlarged, but the sharpening processing is not performed. This makes it possible to perform the image sharpening processing for images for which the identification signal was received (that is, images whose characteristics match the sharpening) and to skip it for images for which no identification signal was received (that is, images for which it is unknown whether the characteristics match the sharpening).
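The branch at steps (901)/(902) can be sketched as follows. This is an illustrative Python sketch; the mode names and the returned flag dictionary are assumptions, and the mode-handling steps (301)-(307) of the flowchart are collapsed into a single argument for brevity.

```python
def choose_processing(display_mode, id_signal_received):
    """Decision sketch for Fig. 9: enlarge always, but sharpen only when
    the display mode permits it AND the identification signal confirms
    that the sender's reduction filter matches the receiver's sharpening."""
    if display_mode == "composition":
        # Composition mode: sharpening is skipped (earlier steps).
        return {"enlarge": True, "sharpen": False}
    if not id_signal_received:
        # Step 902: no identification signal, so enlarge without sharpening.
        return {"enlarge": True, "sharpen": False}
    # Step 304 onward: matched characteristics, so sharpen.
    return {"enlarge": True, "sharpen": True}
```

The key property is that an image of unknown provenance is never sharpened, which avoids the failure case described for FIG. 7.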
In view of the above, the image processing method described in this embodiment is an image processing method in which images captured by a plurality of first image processing apparatuses are processed by a second image processing apparatus, comprising: a first step of capturing an image with a first image processing apparatus; a second step of encoding the image and outputting a signal; a third step of transmitting the signal to the second image processing apparatus; a fourth step of receiving the signal and outputting a decoded image; a fifth step of sharpening the decoded image and outputting a sharpened image; a sixth step of displaying the decoded image or the sharpened image on a display unit; and a seventh step of switching the image displayed on the display unit. In the fifth step, an enlarged image is generated by enlarging the decoded image, a second processed image is generated by combining the enlarged image with a first processed image obtained by extracting high-frequency components from the enlarged image and applying nonlinear processing to them, and an image obtained by removing aliasing components from the second processed image is output as the sharpened image.
When image data recorded in the image recording/reproducing apparatus (114) is reproduced and received, the frame rate at reproduction can also be lowered, as described above. In this case, the predetermined set value mentioned earlier can be made larger than the set value used for normal real-time reproduction, so that more effective blur removal can be performed without running short of computing resources.
By performing the first processing and the second processing in this way, an image from which the aliasing components have been removed can be output, and an image signal sent from a remote location can be displayed with improved quality.
Also, unlike Embodiment 1, by receiving the identification signal from the image transmission apparatus, the image sharpening processing can be performed appropriately even when images with various characteristics are received.
In this embodiment, an image processing apparatus that performs camera identification by a method different from that of Embodiment 2 will be described. Descriptions of configurations that are the same as in Embodiments 1 and 2 are omitted where appropriate.
<Schematic configuration and operation of the third image processing system>
FIG. 10 is a diagram showing the schematic configuration of the third image processing system according to an embodiment of the present invention. In the figure, in order to identify whether the characteristics of each of the image transmission apparatuses (101-#p), (101-#q), and (101-#r) connected to the image reception apparatus (1006) via the communication network (105) are optimized for the sharpening processing in the image reception apparatus (1006), the image reception apparatus (1006) is provided with an image transmission apparatus identification unit (1001) and a processing method determination unit (1002). The image transmission apparatus identification unit (1001) is a means for identifying the individual image transmission apparatuses (101-#p), (101-#q), and (101-#r); its operation will be described later with reference to FIG. 11. The processing method determination unit (1002) determines, based on the attributes of each image transmission apparatus (101-#p), (101-#q), (101-#r) identified by the image transmission apparatus identification unit (1001), whether sharpening is to be performed in the processing unit (109) that performs image enlargement and sharpening, and sends the result to the control unit (810). The other units and the control unit (810) in FIG. 10 are identical in configuration and operation to those shown in FIGS. 1 and 8, so their description is omitted.
FIG. 11 is an explanatory diagram of the operation of the image transmission apparatus identification unit (1001) and the processing method determination unit (1002) described above. As a means for identifying each image transmission apparatus, for example, the IP (Internet Protocol) address or MAC (Media Access Control) address set uniquely for each device to identify it on the communication network can be used. Alternatively, a unique name that is easy for a person to recognize (for example, lobby, corridor, entrance) may be assigned in advance to each image transmission apparatus according to its installation location. A flag indicating whether the characteristics are optimized for the image reception apparatus (1006) shown in FIG. 10 (that is, whether sharpening is to be performed) is managed, together with each device's IP address or MAC address (logical address) or installation location (physical address), as a table or list. For each image transmission apparatus, the processing method is then determined so that images sent from that apparatus are sharpened if its characteristics are optimized, and are not sharpened otherwise.
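The table of FIG. 11 can be sketched as a simple lookup. The addresses, installation names, and dictionary layout below are hypothetical examples in Python, not values from the patent; the point is only that an unlisted or unmatched sender never gets sharpened.

```python
# Hypothetical table mapping each camera's logical address to its
# installation name and to whether its reduction filter is matched
# ("optimized") to the receiver's sharpening, as in Fig. 11.
DEVICE_TABLE = {
    "192.0.2.10": {"name": "lobby",    "optimized": True},
    "192.0.2.11": {"name": "corridor", "optimized": False},
}

def should_sharpen(sender_address):
    """Sharpen only images from senders whose characteristics are
    registered as optimized; unknown senders are treated as unmatched."""
    entry = DEVICE_TABLE.get(sender_address)
    return bool(entry and entry["optimized"])
```

The same structure works with MAC addresses or installation names as keys; only the lookup key changes, not the decision rule.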
With the configuration and operation of the third image processing system described above, even when image transmission apparatuses (101) with various characteristics are connected to the image reception apparatus (1006) via the communication network (105), the image sharpening processing can be performed appropriately without using a means for transmitting and receiving an identification signal.
<Other image display forms (example)>
In the embodiments of the present invention described above, for simplicity, two image display forms like the image display form (113) shown in FIG. 1 have been used as examples: (a) the image selection mode and (b) the image composition mode. However, the invention is not limited to these display forms. For example, a (c) image selection/composition mode as shown in FIG. 12 may be provided, in which, out of (i+j) images (where i and j are each an integer of 1 or more), i images (image #1 in FIG. 12(c)) are displayed large as a parent screen and the other j images (images #2 to #6 in FIG. 12(c)) are displayed small as child screens, with switching back and forth between this mode and the (a) image selection mode and (b) image composition mode described above. In the (c) image selection/composition mode, if control is performed so that the image sharpening processing is applied to the i images selected for the parent screen (image #1 in FIG. 12(c)) but not to the other j images (images #2 to #6 in FIG. 12(c)), the computing resources required are reduced compared with sharpening all decoded images. At this time, whether to perform the image sharpening processing may be decided based on the size of the displayed image (that is, the numbers of horizontal and vertical pixels): a pixel-count threshold may be determined in advance, and the sharpening processing applied only to images displayed with more pixels than the threshold. In the case of this (c) image selection/composition mode, taking the total screen area of the sharpened image and the non-sharpened decoded images as 100, a sharpened-image share of about 60-80 is desirable. If the sharpened image is smaller than this, details remain hard to see even after sharpening, so the effect is small; if the sharpened image is too large, the load differs little from that of the (a) image selection mode with sharpening, and the processing amount actually increases by the amount needed to composite the decoded images, so there is little merit in compositing.
 また、伝送された画像サイズよりも大きい画素数で表示する画像に対して画像鮮明化処理を行うようにしてもよい。なお、この(c)画像選択合成モードは、(a)画像選択モードの特殊な場合、すなわち(a)画像選択モードの画像に子画面のj個の画像を合成して表示する場合、とみなすこともできる。 Further, image sharpening processing may be performed on an image displayed with a larger number of pixels than the transmitted image size. The (c) image selection / combination mode is regarded as (a) a special case of the image selection mode, that is, (a) a case where j images of the child screen are combined and displayed on the image in the image selection mode. You can also.
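The pixel-count threshold control described above can be sketched as follows. This is a minimal illustration; the function names, the threshold value, and the layout sizes are assumptions for the example, not values from the patent.

```python
# Sketch of mode-(c) control: sharpen only images whose displayed size
# exceeds a preset pixel-count threshold (all names and numbers are
# illustrative assumptions, not taken from the patent).

def should_sharpen(display_w: int, display_h: int,
                   threshold_pixels: int = 640 * 480) -> bool:
    """Sharpen only images displayed with more pixels than the threshold."""
    return display_w * display_h > threshold_pixels

# Main screen (image #1) is displayed large; sub-screens (#2..#6) are small.
layout = {"image1": (1280, 720),
          "image2": (320, 180), "image3": (320, 180),
          "image4": (320, 180), "image5": (320, 180), "image6": (320, 180)}

# Only the main screen clears the threshold, so only it is sharpened.
sharpened = [name for name, (w, h) in layout.items() if should_sharpen(w, h)]
```

With the sizes above, only `image1` is sharpened, matching the described behavior of sharpening the main screen while leaving the sub-screens as decoded.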
<Summary>
(i) According to the embodiment of the present invention described above, image sharpening is performed in the "image selection mode", in which one image is displayed on the entire screen of the display unit, while control is performed so that sharpening is not performed in the "image composition mode", in which a plurality of images are combined into one image and displayed. This reduces the computing resources required for processing compared with performing sharpening on all decoded images. In addition, by measuring the time taken by the sharpening process and determining the content of the next sharpening process so that this time does not exceed a predetermined set value, flexible sharpening according to the performance of the available computing resources becomes possible.
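The time-based feedback described in (i) can be sketched as a simple control loop. This is a hypothetical illustration; the level scale, the budget, and the adjustment policy are assumptions, since the patent only specifies that the next process is chosen so the measured time stays within a set value.

```python
# Sketch of the feedback control in the summary: measure how long each
# sharpening pass took and lighten the processing for the next frame
# whenever the measured time exceeds a preset budget (all parameters
# are illustrative assumptions).

def adapt_level(level: int, elapsed_s: float, budget_s: float) -> int:
    """Return the sharpening level for the next frame (0 = skip sharpening)."""
    if elapsed_s > budget_s:
        return max(0, level - 1)   # over budget: do less work next time
    return min(3, level + 1)       # under budget: allow stronger sharpening

level = 3
for elapsed in [0.040, 0.040, 0.010]:        # simulated per-frame timings
    level = adapt_level(level, elapsed, budget_s=0.033)  # ~30 fps budget
# level is now 2 after two slow frames and one fast frame
```

In a real receiver the elapsed time would come from a timer around the sharpening call rather than a fixed list.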
(ii) The present invention can also be realized by program code of software that implements the functions of the embodiments. In this case, a storage medium on which the program code is recorded is provided to a system or apparatus, and a computer (or CPU or MPU) of that system or apparatus reads the program code stored in the storage medium. In this case, the program code itself read from the storage medium realizes the functions of the embodiments described above, and the program code itself and the storage medium storing it constitute the present invention. Examples of storage media for supplying such program code include a flexible disk, CD-ROM, DVD-ROM, hard disk, optical disc, magneto-optical disc, CD-R, magnetic tape, nonvolatile memory card, and ROM.
An OS (operating system) running on the computer may also perform part or all of the actual processing based on the instructions of the program code, so that the functions of the embodiments described above are realized by that processing. Further, after the program code read from the storage medium has been written into memory on the computer, the CPU of the computer or the like may perform part or all of the actual processing based on the instructions of the program code, and the functions of the embodiments described above may be realized by that processing.
Furthermore, the program code of the software that realizes the functions of the embodiments may be distributed via a network and stored in storage means such as a hard disk or memory of a system or apparatus, or in a storage medium such as a CD-RW or CD-R; at the time of use, the computer (or CPU or MPU) of the system or apparatus may read and execute the program code stored in the storage means or the storage medium.
Finally, it should be understood that the processes and techniques described here are not inherently tied to any particular apparatus and can be implemented by any suitable combination of components. Moreover, various types of general-purpose devices can be used in accordance with the teachings described here. It may also prove advantageous to construct a dedicated apparatus to perform the method steps described here. Various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the embodiments; for example, some constituent elements may be deleted from all the constituent elements shown in an embodiment, and constituent elements from different embodiments may be combined as appropriate. Although the present invention has been described with reference to specific examples, these are in all respects illustrative rather than restrictive. Those skilled in the art will appreciate that there are numerous combinations of hardware, software, and firmware suitable for implementing the present invention. For example, the described software can be implemented in a wide range of programming or scripting languages, such as assembler, C/C++, Perl, Shell, PHP, and Java (registered trademark).
Furthermore, in the embodiments described above, the control lines and information lines shown are those considered necessary for the explanation; not all the control lines and information lines of an actual product are necessarily shown. All of the components may be connected to one another.
In addition, other implementations of the present invention will become apparent to those with ordinary knowledge of this technical field from consideration of the specification and embodiments disclosed here. The various aspects and/or components of the described embodiments can be used singly or in any combination in a computerized storage system having the function of managing data.
DESCRIPTION OF SYMBOLS
101: image transmission apparatus
106: image reception apparatus
109, 501: example of a processing unit that performs image enlargement and sharpening
111: operation unit
113: example of a display form

Claims (9)

1.  An image processing system comprising a plurality of first image processing apparatuses and a second image processing apparatus that receives signals of encoded images from the first image processing apparatuses, wherein
    the first image processing apparatus comprises:
    an imaging unit that captures an image;
    an encoding unit that encodes the image and outputs a signal; and
    a transmission unit that transmits the signal to the second image processing apparatus,
    the second image processing apparatus comprises:
    a decoding unit that receives the signal and outputs a decoded image obtained by decoding it;
    a sharpening processing unit that applies sharpening processing to the decoded image and outputs a sharpened image;
    a display unit that displays the decoded image or the sharpened image; and
    a control unit that switches the image displayed on the display unit, and
    the sharpening processing unit generates an enlarged image by enlarging the decoded image, combines the enlarged image with a first processed image obtained by extracting a high-frequency component from the enlarged image and applying nonlinear processing to it, thereby generating a second processed image, and outputs, as the sharpened image, an image obtained by removing aliasing components from the second processed image.
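The sharpening pipeline recited in claim 1 (enlarge, extract a high-frequency component, apply nonlinear processing, combine, remove aliasing) can be sketched on a grayscale array as follows. This is a minimal illustration, not the patented implementation: the enlargement method, the specific high-pass and low-pass kernels, and the odd-symmetric nonlinearity are assumptions the claim does not fix.

```python
# Minimal sketch of the claim-1 sharpening pipeline using NumPy.
# The concrete filters and nonlinearity below are illustrative assumptions.
import numpy as np

def sharpen(decoded: np.ndarray, scale: int = 2) -> np.ndarray:
    # 1. Generate an enlarged image (nearest-neighbour, for simplicity).
    enlarged = np.repeat(np.repeat(decoded, scale, axis=0), scale, axis=1)

    # 2. Extract a high-frequency component with a simple high-pass kernel.
    hp = np.array([-0.25, 0.5, -0.25])
    high = np.apply_along_axis(lambda r: np.convolve(r, hp, mode="same"),
                               1, enlarged.astype(float))

    # 3. Nonlinear processing (odd-symmetric squaring preserves the sign)
    #    gives the first processed image.
    first_processed = np.sign(high) * high ** 2

    # 4. Combine with the enlarged image to get the second processed image.
    second_processed = enlarged + first_processed

    # 5. Remove aliasing components with a light low-pass filter and
    #    output the result as the sharpened image.
    lp = np.array([0.25, 0.5, 0.25])
    return np.apply_along_axis(lambda r: np.convolve(r, lp, mode="same"),
                               1, second_processed)

out = sharpen(np.arange(16.0).reshape(4, 4))
# out has twice the height and width of the decoded input
```

A practical receiver would replace steps 2 and 5 with properly designed 2-D filters; the structure of the pipeline is what matters here.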
2.  The image processing system according to claim 1, wherein the second image processing apparatus further comprises:
    a time measurement unit that measures the processing time in the sharpening processing unit; and
    a processing determination unit that determines, based on the processing time, the content of the sharpening processing for the next input image.
3.  The image processing system according to claim 1, wherein
    the first image processing apparatus further comprises an image reduction unit that reduces the image and outputs it to the encoding unit,
    the transmission unit adds, to the signal, an identification signal for identifying the first image processing apparatus that encoded the image, and transmits the result to the second image processing apparatus, and
    the second image processing apparatus further comprises:
    a signal identification unit that receives the identification signal and identifies the first image processing apparatus that transmitted it; and
    a processing determination unit that determines the content of the sharpening processing based on the result of identification by the signal identification unit.
4.  The image processing system according to claim 1, wherein
    the first image processing apparatus further comprises an image reduction unit that reduces the image and outputs it to the decoding unit,
    the transmission unit adds, to the signal, a characteristic identification signal indicating whether the processing characteristics of the image reduction unit and the sharpening processing unit have been optimized, and transmits the result to the second image processing apparatus, and
    the second image processing apparatus further comprises a processing determination unit that determines the content of the sharpening processing based on the result of identifying the characteristic identification signal.
5.  The image processing system according to claim 1, wherein
    the display unit switches, based on an instruction from a user, among displaying the sharpened image, the decoded image, and a composite image of the sharpened image and the decoded image.
6.  An image processing method for processing, with a second image processing apparatus, images captured by a plurality of first image processing apparatuses, the method comprising:
    a first step of capturing an image with the first image processing apparatus;
    a second step of encoding the image and outputting a signal;
    a third step of transmitting the signal to the second image processing apparatus;
    a fourth step of receiving the signal and outputting a decoded image obtained by decoding it;
    a fifth step of applying sharpening processing to the decoded image and outputting a sharpened image;
    a sixth step of displaying the decoded image or the sharpened image on a display unit; and
    a seventh step of switching the image displayed on the display unit,
    wherein, in the fifth step, an enlarged image is generated by enlarging the decoded image, the enlarged image is combined with a first processed image obtained by extracting a high-frequency component from the enlarged image and applying nonlinear processing to it, thereby generating a second processed image, and an image obtained by removing aliasing components from the second processed image is output as the sharpened image.
7.  The image processing method according to claim 6, further comprising:
    an eighth step of measuring the processing time of the fifth step; and
    a ninth step of determining, based on the processing time, the content of the sharpening processing for the next input image.
8.  The image processing method according to claim 6, further comprising:
    a tenth step of reducing the image before the second step;
    an eleventh step of receiving an identification signal for identifying the first image processing apparatus that encoded the image, and identifying that first image processing apparatus; and
    a twelfth step of determining the processing content of the fifth step based on the result of the identification in the eleventh step,
    wherein, in the third step, the identification signal is added to the signal and transmitted to the second image processing apparatus.
9.  The image processing method according to claim 6, further comprising:
    a tenth step of reducing the image before the second step;
    a thirteenth step of adding, to the signal output in the second step, a characteristic identification signal indicating whether the processing characteristics of the tenth step and the sixth step have been optimized; and
    a fourteenth step of determining the content of the processing in the sixth step based on the result of identifying the characteristic identification signal.
PCT/JP2015/053725 2014-07-04 2015-02-12 Image processing system, and image processing method WO2016002245A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016531129A JP6416255B2 (en) 2014-07-04 2015-02-12 Image processing system and image processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JPPCT/JP2014/067857 2014-07-04
PCT/JP2014/067857 WO2016002063A1 (en) 2014-07-04 2014-07-04 Image processing system, and image processing method

Publications (1)

Publication Number Publication Date
WO2016002245A1

Family

ID=55018661

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/JP2014/067857 WO2016002063A1 (en) 2014-07-04 2014-07-04 Image processing system, and image processing method
PCT/JP2015/053725 WO2016002245A1 (en) 2014-07-04 2015-02-12 Image processing system, and image processing method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/067857 WO2016002063A1 (en) 2014-07-04 2014-07-04 Image processing system, and image processing method

Country Status (2)

Country Link
JP (1) JP6416255B2 (en)
WO (2) WO2016002063A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018028559A (en) * 2015-01-07 2018-02-22 シャープ株式会社 Image data output device, image data output method, image display device, and integrated circuit

Citations (3)

Publication number Priority date Publication date Assignee Title
JPH0822586A (en) * 1994-07-07 1996-01-23 Meidensha Corp Multipoint monitoring system
JP2004304328A (en) * 2003-03-28 2004-10-28 Toshiba Corp Video signal processing circuit and video signal processing method
JP5103314B2 (en) * 2008-07-28 2012-12-19 株式会社日立製作所 Image signal processing apparatus, image signal processing method, and video display apparatus

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP2007189557A (en) * 2006-01-13 2007-07-26 Toshiba Corp Video monitoring system and video switching method
JP2010206273A (en) * 2009-02-27 2010-09-16 Toshiba Corp Information processing apparatus
JP2011013852A (en) * 2009-06-30 2011-01-20 Toshiba Corp Information processor and parallel operation control method
JP4886888B2 (en) * 2009-12-01 2012-02-29 キヤノン株式会社 Image processing apparatus and control method thereof


Also Published As

Publication number Publication date
WO2016002063A1 (en) 2016-01-07
JPWO2016002245A1 (en) 2017-04-27
JP6416255B2 (en) 2018-10-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15815397

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016531129

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15815397

Country of ref document: EP

Kind code of ref document: A1