WO2018097114A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program Download PDF

Info

Publication number
WO2018097114A1
WO2018097114A1 PCT/JP2017/041739 JP2017041739W WO2018097114A1 WO 2018097114 A1 WO2018097114 A1 WO 2018097114A1 JP 2017041739 W JP2017041739 W JP 2017041739W WO 2018097114 A1 WO2018097114 A1 WO 2018097114A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
output
sharpness
input
Prior art date
Application number
PCT/JP2017/041739
Other languages
French (fr)
Japanese (ja)
Inventor
将史 大矢
Original Assignee
キヤノン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2017161799A external-priority patent/JP2018093472A/en
Application filed by キヤノン株式会社 filed Critical キヤノン株式会社
Publication of WO2018097114A1 publication Critical patent/WO2018097114A1/en
Priority to US16/396,039 priority Critical patent/US10868938B2/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37Details of the operation on graphic patterns
    • G09G5/377Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3188Scale or resolution adjustment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/58Edge or detail enhancement; Noise or error suppression, e.g. colour misregistration correction

Definitions

  • the present invention relates to an image processing technique for reproducing one image by combining images output from a plurality of devices.
  • JP-A-2010-103863 discloses a technique for reproducing one image by superimposing images output from a plurality of devices.
  • JP-A-2010-103863 color reproduction in a dark region of an image reproduced by using an image projection apparatus superimposed on a printed matter formed by an image forming apparatus, compared to the case of using only the image projection apparatus.
  • the resolution of the image output by the image projecting apparatus is lower than the resolution of the image output by the image forming apparatus. End up. Due to the reduction in resolution, the information held at the resolution of the input image data cannot be expressed and the sharpness is lowered. In this case, if the image output from each device is superimposed, the sharpness of the image output by the image projector decreases, so that the sharpness of the image reproduced by superimposing the image output from each device is reduced. The degree is lower than the sharpness of the input image.
  • the present invention provides image processing for suppressing a decrease in the sharpness of a superimposed image with respect to an input image, which occurs when a single image is generated by superimposing images output from a plurality of devices based on the input image.
  • the purpose is to do.
  • an image processing apparatus outputs a first output image output from a first image output apparatus based on an input image, and an output from a second image output apparatus based on the input image.
  • An image processing device that generates image data to be output to the second image output device in order to generate one image by superimposing the second output image with a higher resolution than the first output image.
  • a first acquisition unit configured to acquire input image data representing the input image; and a second acquisition unit configured to acquire first output image data generated based on the input image data and output to the first image output device.
  • First generation means for generating second output image data to be output to the second image output device based on the acquisition means, the input image data, and the first output image data, and Second Sharpness of the image represented by the image data is characterized in that depending on the sharpness of the first output image data represents an image.
  • the present invention it is possible to suppress a decrease in the sharpness of a superimposed image with respect to an input image, which occurs when an image output from a plurality of devices is superimposed based on the input image to generate one image.
  • the block diagram which shows an example of a function structure of the image processing apparatus 1 Flow chart for processing executed by image processing apparatus 1 Flow chart for processing executed by image processing apparatus 1
  • generating projection image data A block diagram showing an example of a hardware configuration of the image processing apparatus 1 Flowchart for processing (S203) for calculating degree of sharpness reduction Flowchart for processing (S203) for calculating degree of sharpness reduction
  • the block diagram which shows an example of a function structure of the image processing apparatus 1 The block diagram which shows an example of a function structure of the image processing apparatus 1
  • the block diagram which shows an example of a function structure of the image processing apparatus 1 A diagram schematically showing resolution conversion processing based on variable magnification A diagram schematically showing resolution conversion processing based on variable magnification
  • the resolution in this embodiment is also expressed as a measure of the fineness of expression in a projected or formed image, and uses dpi (dot per inch) as a unit.
  • the sharpness in this embodiment is a measure of the clarity of a fine portion (high resolution portion) of an image.
  • the sharpness can be expressed by a response function MTF (modulation transfer function) represented by a rate of decrease in output contrast with respect to input contrast at each resolution.
  • FIG. 14A shows input image data (also referred to as first image data) representing an image to be reproduced.
  • FIG. 14B shows formed image data (also referred to as third image data) representing an image having the same size as the input image data (enlargement ratio 1.0), and
  • FIG. 14C shows twice the input image data ( The formation image data showing the image of the size of the enlargement ratio 2.0) is shown.
  • the formed image data is image data input by the image forming apparatus to form an image on a recording medium.
  • an input image data representing an image having a resolution of 300 dpi and a size of 8192 pixels in the horizontal direction and 4320 pixels in the vertical direction
  • an image forming apparatus capable of forming an image with a resolution of 1200 dpi based on the input image data.
  • An example of the formed image is shown.
  • the image forming apparatus forms an image by scanning a recording head that ejects ink on a recording medium a plurality of times. Since the distance between the recording medium and the recording head is constant regardless of the size of the image to be formed, the resolution of the image formed by the image forming apparatus is constant. For example, as shown in FIG.
  • the number of pixels of the image represented by the formed image data is set both vertically and horizontally with respect to the number of pixels of the image represented by the input image data. 4 times.
  • the resolution of an image output (formed) by the image forming apparatus is made constant.
  • FIG. 18A shows the input image data representing the image to be reproduced
  • FIG. 18B shows the size of the image represented by the input image data when projected with the number of pixels of the image projection apparatus (enlargement ratio 1.0).
  • 3 shows projection image data (also referred to as second image data) representing an image having an “actual size” size.
  • FIG. 18C shows projection image data representing an image when the size of the image to be projected is twice the “same size” size (enlargement ratio 2.0).
  • the projection image data is image data input by the image projection apparatus to project an image.
  • the input image data is the same as in FIG. 14, and the number of pixels of the image output from the image projection apparatus is 4096 pixels in the horizontal direction and 2160 pixels in the vertical direction.
  • an image projection apparatus Unlike an image forming apparatus, an image projection apparatus generates an image by driving a display element such as a liquid crystal panel for each pixel, and displays the image by projecting the generated image with a projection lens. For this reason, the number of pixels of an image to be output is determined by a display element such as a liquid crystal panel held in advance, and the number of pixels to be output cannot be increased or decreased as in the image forming apparatus. As a result, as shown in FIG. 18, the image was projected in the case of projecting the image at the same size based on the input image data representing the image having the resolution of 300 dpi and in the case of projecting the image at the size of 2 times. The number of pixels in the image is constant. Therefore, the resolution of the image decreases in inverse proportion to the increase in the size of the image to be projected.
  • the image forming apparatus can increase or decrease the number of pixels of the image to be output by the scanning method of the recording head, it is possible to form an image with a constant resolution regardless of the image size.
  • the number of pixels of the output image is determined by a display element such as a liquid crystal panel, so that the resolution of the image decreases in inverse proportion to the enlargement of the image size caused by extending and projecting the input image. Resulting in.
  • the output image from the image forming apparatus has a constant sharpness regardless of the size of the reproduction target image, whereas the output image from the image projection apparatus increases in pixel size as the size of the reproduction target image increases. The sharpness when viewed from the position is lowered.
  • one image is reproduced by superimposing the image projected by the image projection apparatus and the image formed by the image forming apparatus. At that time, the reduction in the sharpness of the image projected by the image projection apparatus as described above is compensated by enhancing the sharpness of the image formed by the image forming apparatus. Details will be described below.
  • FIG. 4 is a hardware configuration example of the image processing apparatus 1 in the present embodiment.
  • the image processing apparatus 1 is a computer, for example, and includes a CPU 1401, a ROM 1402, and a RAM 1403.
  • the CPU 1401 executes an OS (Operating System) and various programs stored in the ROM 1402, HDD (Hard Disk Drive) 1412, and the like using the RAM 1403 as a work memory.
  • the CPU 1401 controls each component via the system bus 1408. Note that the processing according to the flowchart to be described later is executed by the CPU 1401 after the program code stored in the ROM 1402, the HDD 1412, or the like is expanded in the RAM 1403.
  • a display 5 is connected to a VC (video card) 1404.
  • a general-purpose I / F (interface) 1405 is connected to an input device 1410 such as a mouse and a keyboard, the image projection apparatus 2, and the image forming apparatus 3 via a serial bus 1409.
  • a SATA (Serial ATA) I / F 1406 is connected to a general-purpose drive 1413 for reading and writing the HDD 1412 and various recording media via a serial bus 1411.
  • a NIC (network interface card) 1407 inputs / outputs information to / from an external device.
  • the CPU 1401 uses various recording media mounted on the HDD 1412 or the general-purpose drive 1413 as a storage location for various data.
  • the CPU 1401 displays a UI (user interface) provided by the program on the display 5 and receives an input such as a user instruction accepted via the input device 1410.
  • FIG. 1 is a block diagram illustrating a functional configuration of the image processing apparatus 1.
  • 1 is an image processing apparatus
  • 2 is an image projection apparatus (projector)
  • 3 is an image forming apparatus (printer)
  • 4 is illumination that determines ambient light when observing a superimposed image.
  • the superimposed image indicates an image in which the projection image 502 projected by the image projection device 2 is superimposed on the formation image 501 formed on the recording medium by the image forming device 3.
  • the image processing apparatus 1 can be implemented by a printer driver installed in a general personal computer, for example. In that case, each part of the image processing apparatus 1 described below is realized by a computer executing a predetermined program.
  • the image processing device 1 may include the image projection device 2 and the image forming device 3.
  • the image processing apparatus 1 and the image projection apparatus 2, and the image processing apparatus 1 and the image forming apparatus 3 are connected by an interface or a circuit.
  • the image processing apparatus 1 includes a first input terminal 101, a second input terminal 102, an acquisition unit 103, a first generation unit 104, a first color conversion LUT 105, a calculation unit 106, and a second generation unit 107. , A second color conversion LUT 108, a first output terminal 109, and a second output terminal.
  • the acquisition unit 103 acquires input image data representing an image to be reproduced via the first input terminal 101. Further, the projection state of the image projection device 2 is acquired via the second input terminal 102. The projection state will be described later.
  • the first generation unit 104 refers to the first color conversion LUT 105 based on the input image data described above, and generates image data (projection image data) to be input to the image projection device 2.
  • the calculation unit 106 acquires the input image data and the projection image data, and converts the resolution of the image represented by the input image data and the resolution of the image represented by the projection image data according to the projection state. Further, the degree of reduction in the sharpness of the image represented by the projection image data with respect to the image represented by the input image data is calculated based on the input image data representing the image whose resolution has been converted and the projection image data.
  • the second generation unit 107 enhances the sharpness of the image represented by the input image data based on the input image data and the degree of sharpness reduction.
  • the second color conversion LUT 108 is referred to, and image data (formed image data) to be input to the image forming apparatus 3 is generated.
  • the projection image data generated by the first generation unit 104 is output to the image projection apparatus 2 via the first output terminal 109, and the formed image data generated by the second generation unit 107 is output by the second output terminal 110. And output to the image forming apparatus 3.
  • the image projection apparatus 2 has a projection optical unit (not shown).
  • the projection optical unit includes a lamp that is a light source, a liquid crystal driving device that drives a liquid crystal panel based on input projection image data, and a projection lens.
  • the light from the lamp is decomposed into R, G, and B light by the optical system and guided to the liquid crystal panel.
  • the light guided to each liquid crystal panel is modulated in luminance by each liquid crystal panel, and an image is projected onto a printed matter formed by the image forming apparatus 3 by a projection lens.
  • the image forming apparatus 3 records ink dots on the recording medium by moving a recording head (not shown) vertically and horizontally relative to the recording medium based on the formed image data generated by the image processing apparatus 1. , Form an image.
  • the image forming apparatus 3 uses an ink jet printer, but other types of printers such as an electrophotographic method may be used.
  • the acquisition unit 103 acquires input image data via the first input terminal 101.
  • the input image data is 3-channel color image data in which 8-bit RGB values are recorded in each pixel.
  • the image represented by the input image data has a higher resolution than the image projected by the image projection device 2. That is, the image projected by the image projection device 2 has a lower resolution than the image represented by the input image data.
  • the input image data acquired by the input image data acquisition unit 103 is sent to the first generation unit 104, the calculation unit 106, and the second generation unit 107. Further, the projection state is acquired via the second input terminal.
  • the projection state is an image determined from the distance (projection distance) between the image projection apparatus 2 and the formed image 501 that is the projection target when the image projection apparatus 2 projects the projection image 502 and the relationship between the projection lens and the liquid crystal panel. It is a horn.
  • the projection distance D is acquired as 4-bit data converted into meters (m) and the angle of view ⁇ is converted into radians.
  • the projection state is acquired by receiving an input from the user via the UI screen illustrated in FIG. 19 or directly acquired from the image projection apparatus 2 by connecting the image projection apparatus 2 and the image processing apparatus 1.
  • the first generation unit 105 converts the resolution of the image represented by the input image data acquired by the acquisition unit 103 based on the number of pixels of the image output from the image projection device 2.
  • a known bicubic method is used for resolution conversion, but other resolution conversion methods such as a bilinear method may be used.
  • projection image data is generated by referring to a second color conversion LUT 105 held in advance.
  • the first color conversion LUT 105 referred to is shown in FIG. As shown in FIG. 3, the correspondence between the signal value (RGB value) recorded for each pixel of the input image data and the signal value (RGB value) recorded for each pixel of the projection image data is maintained.
  • the first color conversion LUT 105 described above is created in advance by projecting a chart with a known input signal value recorded in the projection image data and measuring the color of the projected image.
  • the generated projection image data is three-channel color image data in which 8-bit RGB values are recorded in each pixel, as in the case of the input image data.
  • the projection image data is generated, it is sent to the image projection device 2 via the first output terminal 109. It is also sent to the calculation unit 106.
  • the calculation unit 106 acquires the input image data and the projection state acquired in S201, and the projection image data generated in S202. Further, based on the acquired projection state, an enlargement ratio E of the size of the image projected by the image projection apparatus 2 with respect to the size of the image represented by the input image data is calculated.
  • the enlargement ratio is a magnification for enlarging the image.
  • the resolution of the image represented by the projection image data and the resolution of the image represented by the input image data are converted. After converting the resolution, the degree of reduction in the sharpness of the image represented by the projection image data with respect to the image represented by the input image data is calculated. The calculated degree of sharpness reduction is sent to the second generator.
  • the detailed processing of S203 will be described using the flowchart shown in FIG. 5A.
  • step S203 input image data, projection image data, and a projection state are acquired.
  • step S2032 based on the acquired projection state (D and ⁇ ) and the projection state (projection distance D 0 and angle of view ⁇ 0 ) in which the size of the image represented by the input image data and the size of the projection image 502 are equal.
  • the enlargement ratio E is calculated using the following equation (1).
  • the projection state (D 0 and ⁇ 0 ) having the same magnification as described above is determined in advance by the following method and held in the calculation unit 106. Projecting projection image data generated based on input image data whose image size is known in advance, and searching for a projection state in which the image size of the projected image 502 projected matches the size of the image represented by the input image data Determined by.
  • a calculation formula may be constructed based on the characteristics of the projection lens and the liquid crystal panel included in the image projection apparatus 2, and the calculation formula may be used. Note that, in S2031, to acquire only the projection distance D as the projection state, at S2032, it may calculate the enlargement ratio E by dividing D 0 from the projection distance D.
  • the resolution of the image represented by the input image data and the resolution of the image represented by the projection image data are converted based on the enlargement ratio E calculated in S2032 and the resolution Rf of the image formed by the image forming apparatus 3.
  • the resolution conversion of the image represented by the projection image data is performed by using the following formula (2) based on the enlargement ratio E, the resolution R f , and the resolution R p of the image projected by the image projection device 2.
  • the scaling factor Fp is calculated.
  • the calculated scaling factor F p based, resolution-converting each pixel of the projection image data in F p pieces.
  • a known nearest neighbor method is used for this resolution conversion.
  • the resolution conversion of the image represented by the input image data is performed by first using the following formula (3) based on the enlargement ratio E, the resolution R f , and the resolution R in of the image represented by the input image data. to calculate the scaling factor F in the.
  • a known bicubic method is used for the resolution conversion.
  • the resolutions R f and R p described above are acquired by user input or directly by connecting the image projection apparatus 2 or the image forming apparatus 3 and the image processing apparatus 1. Both the resolutions R f and R p are preferably the highest resolution that each device can output.
  • step S2034 the pixel value of the input image data representing the image whose resolution is converted in step S2033 and the pixel value of the projection image data are subtracted, and the difference obtained by the subtraction process is set as the sharpness reduction degree.
  • the degree of reduction in sharpness is calculated as 3-channel color image data in which 8-bit RGB values are recorded, as in the case of input image data and projection image data.
  • the subtraction process is performed independently on each of the R, G, and B channels for each pixel of each image.
  • the pixel value of the R channel of the input image data is I_R x, y (x is the pixel position in the horizontal direction and y is the pixel position in the vertical direction), and the pixel value of the R plane of the projection image data is P_R x, y . Further, the pixel value of the R plane having the degree of reduction in sharpness is set as Q_R x, y, and is calculated from the following equation (4).
  • FIG. 12 schematically shows an example in which the degree of reduction in sharpness is calculated from input image data and projection image data. Since the G and B planes are the same process, description thereof is omitted. The above calculation is calculated for each pixel in the R channel, and then processed in the order of G and B. Since I_R x, y and P_R x, y are 8-bit data representing 0 to 255 , Q_R x, y output in the above calculation is 9-bit data representing -255 to +255.
  • the example which processes sequentially is shown above, it is not limited to said example. For example, the calculation for each channel may be processed in parallel.
  • the calculated degree of reduction in sharpness (3-channel color image data in which 9-bit RGB values are recorded in each pixel) is sent to the second generation unit 107.
  • the second generation unit 107 enhances the sharpness of the image represented by the input image data whose resolution has been converted in S2033 based on the degree of reduction in sharpness calculated in S203.
  • the sharpness enhancement is realized by adding the pixel value of the input image data subjected to resolution conversion and the pixel value of the image data representing the degree of reduction in sharpness.
  • the pixel value of the R channel of the input image data is I_R x, y (x is the pixel position in the horizontal direction, y is the pixel position in the vertical direction), and the pixel value of the R channel of the degree of reduction in sharpness is Q_R x, y . . Further, the pixel value of the R channel of the input image data after the addition processing is set as I_R ′ x, y and is calculated from the following equation (5).
  • FIG. 13 schematically shows an example in which input image data representing an image with enhanced sharpness is calculated based on the input image data and image data representing the degree of sharpness reduction. If I_R x, y is less than 0 as a result of the calculation, the input image data representing an image in which sharpness is emphasized is represented by 0 to 255 by clipping to 0 if it is 0, 256 or more. Bit data. Since the G and B channels are the same process, the description thereof is omitted. The above calculation is calculated for each pixel in the R channel and then sequentially processed with G and B, but may be parallel processing.
  • the second image conversion LUT 108 held in advance is referred to generate the formed image data.
  • the second color conversion LUT 108 referred to is shown in FIG. As shown in FIG. 7, the correspondence between the signal value (RGB value) recorded for each pixel of the input image data and the signal value (RGB value) recorded for each pixel of the formed image data is maintained.
  • the second color conversion LUT 108 described above is created in advance by forming an image on a recording medium based on a chart with known input signal values recorded in the formed image data, and measuring the color of the formed image. deep.
  • the formed image data to be generated is 3-channel color image data in which RGB values of 8 bits are recorded in each pixel, as in the case of input image data.
  • the generated formed image data is sent to the image forming apparatus 3 via the second output terminal 110. Thus, a series of processes for generating the projection image data and the formed image data is completed.
  • the input image data and the projection image data are subjected to resolution conversion processing according to the projection state, and the image projection apparatus 2 projects the image to be reproduced based on the input image data and the projection image data.
  • An example of calculating the degree of reduction in sharpness of an image to be displayed has been shown.
  • the method for calculating the degree of reduction in sharpness is not limited to the above example.
  • the degree of reduction in the sharpness of the projected image 502 with respect to the image to be reproduced by arithmetic processing based on the resolution R f of the image formed by the image forming apparatus 3 and the resolution R p of the image projected by the image projecting apparatus 2 May be calculated.
  • the detailed processing of S203 in the calculation processing of the sharpness reduction degree will be described with reference to the flowchart shown in FIG.
  • S2031 and S2032 are the same as those of the first embodiment described above, description thereof is omitted.
  • S2035 only the input image data is converted based on the enlargement ratio E calculated in S2032 and the resolution Rf of the image formed by the image forming apparatus 3. Since the resolution conversion for the input image data is the same as that in the first embodiment, description thereof is omitted.
  • FIG. 16 shows an example of a high-pass filter applied when the scaling factor F p is 3, 5, or 9.
  • a high-pass filter having a matrix of F p ⁇ F p is generated, and the input image data subjected to resolution conversion in S2035 is subjected to filter processing, and the processing result is set as a degree of reduction in sharpness.
  • the sharpness of the image that can be reproduced by the image forming apparatus 3 is extracted from the sharpness of the image to be reproduced. Then, by performing a filtering process using a high-pass filter based on the scaling factor F p to sharpness extracted above, and calculates the sharpness is lost when expressed by the image projection apparatus 2. By executing a series of processing, the sharpness that can be reproduced by the image forming apparatus 3 and cannot be reproduced by the image projection apparatus 2 among the sharpnesses of the image to be reproduced is calculated as the above-described reduction degree of the sharpness. be able to.
  • a high-pass filter is generated and filter processing is performed using the generated high-pass filter.
  • a plurality of types of filters may be held in advance.
  • a filter for filter processing is acquired from a plurality of types of filters stored in advance according to the above-described scaling factor and the frequency band in which the sharpness that can be calculated based on the scaling factor is reduced.
  • the configuration example using one image projection device 2 is shown, but the configuration using two or more image projection devices 2 may be used.
  • n image projection apparatuses 2a to 2c and one image forming apparatus 3 are used, and n image projection apparatuses 2 project n projection images at the same position.
  • a superimposed image expression system may be constructed.
  • each image projection apparatuses 2a to 2d and one image forming apparatus 3 are used to divide the input image data into 2 ⁇ 2, and the divided input image data Projection image data generated based on the above is projected from the image projection apparatuses 2a to 2d.
  • a superimposed image expression system multi-projection
  • Each image projection device 2a to 2d reproduces each area of the image represented by the input image data, thereby superimposing a projection image having a higher resolution or a larger size than the projection image projected by one image projection device 2. It becomes possible to do.
  • FIG. 17 shows a functional configuration of the image processing apparatus 1 for calculating the degree of reduction in sharpness in the projected image 502.
  • the image processing apparatus 1 includes an imaging device 6 that captures a projection image 502 and a third input terminal 112 that acquires data from the imaging device 6.
  • Image data obtained by imaging the projection image 502 by the imaging device 6 may be used as projection image data used for calculating the degree of reduction in sharpness. As described above, by using the projection image data obtained by capturing the actual projection image 502, it is possible to cope with a change in the relationship between the projection image data and the projection image 502 due to the deterioration of the image projection apparatus 2 over time. It becomes possible.
  • the projection state is not limited to the above example.
  • a known trapezoidal distortion correction keystone correction
  • an example of generating projection image data in consideration of ⁇ being a projection state will be shown.
  • the first generation unit 104 acquires the angle ⁇ formed as the projection state. Furthermore, a known affine transformation parameter (trapezoidal distortion correction coefficient) for transforming input image data into a trapezoidal image is held for each ⁇ , and the input image data is converted into a trapezoidal image using the affine transformation parameters corresponding to the obtained ⁇ . Is converted into image data.
  • the affine transformation parameter includes, for example, a horizontal movement amount, a vertical movement amount, a horizontal scaling factor, and a vertical scaling factor for each pixel position of the input image data in accordance with ⁇ .
  • a known bicubic method is used for resolution conversion when converting to trapezoidal image data.
  • the image data representing the trapezoidal image is inversely transformed using the affine transformation parameters to return to the same rectangle as the input image data.
  • a known nearest neighbor method is used.
  • the nearest neighbor method that replicates neighboring pixels at the time of inverse conversion, it becomes possible to generate projection image data that assumes sharpness lost when converting to image data representing a trapezoidal image.
  • the first color conversion LUT 105 is referred to generate projection image data. Since the processing after generating the projection image data in consideration of ⁇ being the projection state is the same, the description is omitted.
  • the present invention is not limited to the above example.
  • the affine transformation parameters corresponding to the respective corrections may be held, and the conversion process similar to the above-described trapezoidal distortion correction may be performed.
  • the brightness of the image may be reduced due to a decrease in the amount of light generated in the peripheral portion as compared with the central portion of the projected image 502.
  • the input image data may be corrected in advance. In the above case, it is necessary to calculate the degree of reduction in sharpness based on input image data that has been subjected to correction processing.
  • An example in which projection image data is generated in consideration of ⁇ and ⁇ which are projection states will be shown below.
  • the first generation unit 104 acquires a light amount reduction amount ⁇ and a coefficient ⁇ for adjusting the light amount of the entire image for each pixel position of the projection image 502 as a projection state.
  • the pixel value I_R ′ x, y of the corrected input image data is calculated.
  • I_R ′ x, y (I_R x, y + ⁇ x, y ) ⁇ ⁇ x, y (6)
  • the light amount reduction amount ⁇ for each pixel position of the projected image 502 and the coefficient ⁇ for adjusting the light amount of the entire image are obtained by projecting an image based on input image data whose signal value is known, and measuring the projected image. To determine in advance. Based on the corrected input image data, the first color conversion LUT 105 is referred to generate projection image data. Since the processing after generating the projection image data in consideration of the projection states ⁇ and ⁇ is the same, the description is omitted.
  • the sharpness of the input image data is enhanced by adding the pixel value of the input image data and the pixel value of the image data representing the degree of reduction of the sharpness.
  • the enhancement process is not limited to the above example.
  • a correction value corresponding to the value of the degree of sharpness reduction may be separately stored, and this correction value may be added to the pixel value of the input image data.
  • a gamma ( ⁇ ) value corresponding to the degree of reduction in sharpness may be stored in advance, and the sharpness of input image data may be enhanced by a known ⁇ correction process using the stored ⁇ value. .
  • a plurality of known edge enhancement filters with different enhancement levels for emphasizing fine portions of an image may be held, and the edge enhancement filters may be used properly according to the degree of sharpness reduction.
  • the projection image data is generated based on the input image data, and then the formation image data is generated.
  • the processing of this embodiment is not limited to the above example.
  • projection image data is generated based on input image data in advance and stored in the HDD 1412 or the like. Input image data and projection image data generated in advance are acquired, and formation image data is generated based on the input image data and the projection image data.
  • Example 2 In the first embodiment, the example in which the formed image data is generated by enhancing the sharpness of the image represented by the input image data based on the degree of reduction in the sharpness calculated from the projection image data and the input image data has been described.
  • the luminance range that can be expressed by the formed image 501 changes according to the ambient light determined by the illumination 4.
  • the formed image 501 is an image representing an image recorded by reflecting the irradiated light. Therefore, when the illumination light is small (dark), the luminance range that can be expressed by the formed image 501 is narrow, and conversely, when the illumination light is large (bright), the luminance range that can be expressed by the formed image 501 tends to be wide. is there.
  • the enhancement degree of the process for enhancing the sharpness is controlled in consideration of the luminance range of the formed image 501 that changes according to the ambient light. As a result, it is possible to reduce fluctuations in the effect of suppressing the reduction in sharpness due to ambient light.
  • An example of realizing the above processing will be described mainly with respect to differences from the first embodiment.
  • the second generation unit 107 acquires ambient light information via the fourth output terminal 111.
  • the ambient light information is 4-bit data representing the intensity of the ambient light irradiated on the superimposed image.
  • the ambient light information is directly acquired by input by the user or by connecting the illumination 4 and the image processing apparatus 1.
  • the acquisition of ambient light information is not limited to the above example.
  • the light intensity information of a presumed superimposed image observation scene (outdoor clear sky, outdoor cloudy sky, indoor spotlight lighting, indoor office lighting, etc.) is held, Light intensity information may be acquired. Since the configuration other than the above is the same as that of the first embodiment, the description thereof is omitted.
  • S204 different from the first embodiment is described.
  • the degree of reduction in sharpness calculated in S203 is corrected based on the ambient light information acquired from the fourth input terminal 111, and the sharpness of the image represented by the input image data is determined by the corrected degree of reduction in sharpness. Emphasize.
  • the formation image data is generated by referring to the second color conversion LUT 108 held in advance as in the first embodiment.
  • the enhancement process is realized by adding a sharpness reduction component corrected with a correction coefficient corresponding to the ambient light information to the pixel value of the input image data subjected to resolution conversion.
  • the formed image 501 is an image representing an image recorded by reflecting the irradiated light. Therefore, when the ambient light is small, the luminance range that can be expressed by the formed image 501 tends to be narrow, and conversely, when the ambient light is large, the luminance range that can be expressed by the formed image 501 tends to be wide. Therefore, when the ambient light is small compared to the case where there is a lot of ambient light, the projection image 502 cannot be fully expressed, and a correction that emphasizes the sharpness component expressed only by the formed image 501 is performed, thereby changing the ambient light. It is possible to reduce fluctuations in the effect of suppressing the reduction in sharpness. Detailed processing contents will be described below.
  • the correction coefficient Z is determined from the ambient light information with reference to the LUT 113 that holds the correspondence relationship between the ambient light information and the correction coefficient held in advance.
  • An example of the LUT 113 is shown in FIG.
  • the LUT 113 measures the luminance range that can be expressed by the formed image 501 for each observation environment in which ambient light is changed in advance, and determines the luminance range according to the ratio of the luminance range. Similar to S2033, the addition processing is performed independently on each of the R, G, and B channels for each pixel of each image.
  • the pixel value of the R plane of the input image data is I_R x, y (x is the pixel position in the horizontal direction, y is the pixel position in the vertical direction), the pixel value of the R plane of the degree of sharpness reduction is Q_R x, y , and emphasis
  • the pixel value of the R plane of the input image data after processing be I_R ′ x, y .
  • the correction coefficient Z is used to calculate from the following equation (7).
  • I_R ′ x, y I_R x, y + (Q_R x, y ) ⁇ Z (7)
  • ⁇ Modification> an example in which the conversion LUT 113 that holds the correspondence between the ambient light information and the correction coefficient is shown, but the present invention is not limited to the above example.
  • a calculation formula for predicting the correspondence between the ambient light information and the correction coefficient may be constructed, and the correction coefficient may be calculated according to the ambient light information input using the above calculation formula.
  • the present invention is not limited to the above example.
  • two-dimensional ambient light information similar to the input image data is acquired, and enhancement processing according to the ambient light information is performed for each region (pixel) of the input image data.
  • the degree of emphasis may be controlled.
  • Example 3 In the first embodiment, the example in which the formed image data is generated by enhancing the sharpness of the image represented by the input image data based on the degree of reduction in the sharpness calculated from the projection image data and the input image data has been described.
  • the resolution R p of the projected image the resolution varies depending on the magnification of the image. Therefore, for example, when displaying by reducing the input image (if the enlargement ratio E is below 1) may resolution R p of the projected image is higher than the resolution R f of the formed image.
  • the degree of reduction in sharpness occurring in the formed image is predicted based on the input image and the enlargement ratio E, and the sharpness of the projection image is emphasized based on the predicted degree of reduction in sharpness.
  • An example of realizing the above processing will be described mainly with respect to differences from the first embodiment.
  • the enlargement ratio E is less than 1
  • the enlargement ratio E is referred to as a reduction ratio S.
  • the reduction ratio is a magnification when the image is reduced.
  • FIG. 1 A functional configuration of the image processing apparatus 1 is shown in FIG.
  • the image processing apparatus 1 is connected to the image projection apparatus 2 via the second output terminal 110, and the image processing apparatus 1 is connected to the image forming apparatus via the first output terminal 109. 3 is connected.
  • the first generation unit 104 refers to the first color conversion LUT 105 based on the input image data, and generates image data (formed image data) to be input to the image forming apparatus 3.
  • the calculation unit 106 acquires the input image data and the formed image data, and converts the resolution of the image represented by the input image data and the resolution of the image represented by the formed image data according to the projection state.
  • the degree of reduction in the sharpness of the image represented by the formed image data relative to the image represented by the input image data is calculated based on the input image data representing the image whose resolution has been converted and the formed image data.
  • the second generation unit 107 enhances the sharpness of the image represented by the input image data based on the input image data and the degree of sharpness reduction. Further, based on input image data representing an image with enhanced sharpness, the second color conversion LUT 108 is referred to, and image data (projected image data) to be input to the image projection device 3 is generated.
  • the formed image data generated by the first generation unit 104 is output to the image forming apparatus 3 via the first output terminal 109, and the projection image data generated by the second generation unit 107 is output by the second output terminal 110. Is output to the image projection apparatus 2.
  • the first generation unit 105 converts the resolution of the image represented by the input image data acquired by the acquisition unit 103 based on the number of pixels of the image output from the image forming apparatus 3.
  • a known bicubic method is used for resolution conversion, but other resolution conversion methods such as a bilinear method may be used.
  • the first color conversion LUT 105 that is held in advance is referred to, and formed image data is generated.
  • the first color conversion LUT 105 referred to is the same as the second color conversion LUT 108 in the first embodiment, and a description thereof will be omitted.
  • the calculation unit 106 acquires the input image data and the projection state acquired in S201, and the formation image data generated in S202'. Furthermore, based on the acquired projection state, a reduction ratio S of the size of the image projected by the image projection apparatus 2 with respect to the size of the image represented by the input image data is calculated. Based on the calculated reduction ratio S, the resolution of the image represented by the formed image data and the resolution of the image represented by the input image data are converted. After converting the resolution, the degree of reduction in the sharpness of the image represented by the projection image data with respect to the image represented by the input image data is calculated. The calculated degree of sharpness reduction is sent to the second generator 107.
  • the second generation unit 107 enhances the sharpness of the image represented by the input image data whose resolution is converted in S ⁇ b> 2033 ′, based on the degree of sharpness reduction calculated in S ⁇ b> 203 ′.
  • Sharpness enhancement is performed by addition processing as in the first embodiment.
  • the second color conversion LUT 108 held in advance is referred to generate projection image data. Since the second color conversion LUT 108 referred to is the same as the first color conversion LUT 105 in the first embodiment, the description thereof is omitted.
  • the calculation unit 106 acquires input image data, projection image data, and a projection state, as in the first embodiment.
  • the calculation unit 106 obtains the obtained projection state (D and ⁇ ) and the projection state in which the size of the image represented by the input image data and the size of the projection image are equal (projection distance D 0 and angle of view ⁇ ). 0 )), the reduction ratio S is calculated using the following equation (8).
  • S D / D 0 + ⁇ / ⁇ 0 (8)
  • the resolution of each pixel of the formed image data is converted to F f .
  • a known nearest neighbor method is used for this resolution conversion.
  • other methods such as the bicubic method may be used instead of the nearest neighbor method.
  • a known bicubic method is used for the resolution conversion.
  • the resolutions R f and R p described above are acquired by user input or directly by connecting the image projection apparatus 2 or the image forming apparatus 3 and the image processing apparatus 1. Both the resolutions R f and R p are preferably the highest resolution that each device can output.
  • the calculation unit 106 subtracts the pixel value of the input image data representing the image whose resolution has been converted in S2033 ′ and the pixel value of the formed image data, and reduces the difference obtained by the subtraction process to reduce the sharpness.
  • the degree The method for calculating the degree of reduction in sharpness and generating an image in which the sharpness of the input image is emphasized is the same as that in the first embodiment, and a description thereof will be omitted.
  • ⁇ Modification> In the present embodiment, an example in which a reduction in sharpness generated in a superimposed image of a projection image by the image projection device 2 and a formation image by the image forming device 3 has been described.
  • the combination of apparatuses for generating a superimposed image is not limited to the above example.
  • the sharpness reduction generated in the output image of the device having a low expressible resolution is higher. Any combination may be used as long as it can be compensated by the output image of the apparatus. For example, as shown in FIG.
  • a plurality of image projecting apparatuses having different expressible resolutions are used, and a first image projecting apparatus having a lower resolution is obtained by a projection image of the second image projecting apparatus 2b having a higher expressible resolution. You may supplement the fall of the sharpness of the projection image by 2a.
  • an example in which an image projecting apparatus that projects an image and an image forming apparatus that forms an image on a recording medium is used as an apparatus that outputs an image.
  • the above-described processing can be applied to other image output apparatuses.
  • the device combination may be a combination of two or more image output devices capable of generating a superimposed image.
  • an image display device such as a liquid crystal display or an organic EL display may be used.
  • an image is formed on a recording medium that transmits light, such as an OHP sheet, by the image forming device, and the OHP sheet on which the image is formed is placed on the image display device. Put it on.
  • the above-described processing can be applied to image superposition when one is a formed image formed by the image forming apparatus and the other is a display image displayed by the image display apparatus.
  • the sharpness reduction that occurs in the output image of the image output device whose resolution that can be expressed is reduced according to the image size of the superimposed image can be reduced by the output image of the image output device that can express higher resolution.
  • An example to supplement was shown.
  • the reduction of the sharpness of the superimposed image is not limited to the reduction of the sharpness according to the size of the superimposed image.
  • the reduction in image sharpness also occurs according to the output characteristics of the image output apparatus. For example, an image formed by an image forming apparatus is caused by a deviation in the landing position of a color material (ink), bleeding (mechanical dot gain) when the color material is fixed on a recording medium, optical blur (optical dot gain), and the like.
  • the sharpness is lower than that of the input image.
  • an example will be described in which, in addition to a reduction in sharpness corresponding to the size of the superimposed image, a reduction in sharpness corresponding to the output characteristics of the image output apparatus is also suppressed.
  • the output characteristics (characteristics of the sharpness of the formed image) of the image forming apparatus 3 are measured in advance, and a filter (hereinafter referred to as a compensation filter) that is opposite to the measured characteristics in frequency space is created. Keep it.
  • a filter created in advance is convolved with the input image data.
FIG. 22 shows the functional configuration of the image processing apparatus 1 according to the fourth embodiment. A compensation filter 113 having the inverse characteristic of the output characteristics of the image forming apparatus 3 is held in advance. Since the processing flow is the same as that of the third embodiment except for S202', only S202', which differs from the third embodiment, is described. In S202', the first generation unit 104 converts the resolution of the image represented by the input image data acquired by the acquisition unit 103 based on the number of pixels of the image output by the image forming apparatus 3. A convolution corresponding to the output characteristics of the image forming apparatus 3 is then applied to the resolution-converted input image data using the compensation filter held in advance. The method of creating the compensation filter is described below.
The compensation filter is created by printing, on a recording medium, a chart containing uniform pattern images and a plurality of sine-wave pattern images of different frequencies, as shown in FIG. 23, and measuring the printed chart. The details are as follows. First, the reflectance distribution of the output chart is acquired using a known image acquisition device (scanner, camera, microscope, etc.). Next, the frequency response value fi(u), which represents the output characteristics of the image forming apparatus 3, is calculated by the following equation (11), where u is the frequency of the sine wave, Max(u) and Min(u) are respectively the maximum and minimum reflectances of the image, which vary with the frequency u, and White and Black are respectively the reflectances of the uniform patterns. A known inverse Fourier transform is then applied to the inverse characteristic obtained from this response, and the resulting filter is used as the compensation filter. Note that if the compensation filter compensates all the way up to high-frequency components, noise and brightness fluctuations occur. It is therefore desirable that the compensation intensity (degree of enhancement) at 4 cycles/mm and above, where the known human visual characteristics have low sensitivity, be lower than the intensity below 4 cycles/mm.
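As a rough illustration of this procedure, the Python sketch below computes a frequency response from the chart measurements and derives a compensation kernel by an inverse Fourier transform. The form shown for equation (11), (Max(u) − Min(u)) / (White − Black), is an assumption for illustration only, as are all function names and the example measurement values; the enhancement is suppressed at and above 4 cycles/mm as described above.

```python
import numpy as np

def frequency_response(max_u, min_u, white, black):
    # Assumed reading of equation (11): modulation at frequency u,
    # normalised by the contrast of the uniform (White/Black) patterns.
    return (max_u - min_u) / (white - black)

def compensation_kernel(fi_u, freqs_cpm, cutoff_cpm=4.0):
    # Inverse characteristic in frequency space: boost where the measured
    # response fi(u) dropped, but leave frequencies at/above ~4 cycles/mm
    # untouched to avoid amplifying noise and brightness fluctuations.
    gain = np.where(freqs_cpm < cutoff_cpm,
                    1.0 / np.clip(fi_u, 1e-3, None),
                    1.0)
    # Inverse (real) Fourier transform of the gain gives a spatial kernel.
    kernel = np.fft.fftshift(np.fft.irfft(gain))
    return kernel / kernel.sum()

# Example with made-up chart measurements at a few frequencies (cycles/mm):
freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
fi = frequency_response(np.array([0.80, 0.74, 0.62, 0.55, 0.52]),
                        np.array([0.20, 0.26, 0.36, 0.45, 0.48]),
                        white=0.85, black=0.15)
kernel_1d = compensation_kernel(fi, freqs)
```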
In this embodiment, an example was shown in which the sharpness reduction caused by the output characteristics of the image formed by the image forming apparatus 3 (color material landing position deviation, bleeding, optical blur, and the like) is predicted in advance and the input image is enhanced accordingly. However, the apparatus in which a sharpness reduction corresponding to output characteristics occurs is not limited to the image forming apparatus 3. In an image projection apparatus, the sharpness is lowered by the optical blur of the projection lens; in an image display device such as a display, optical blur produced by the liquid crystal panel lowers the sharpness. The processing of this embodiment can also be applied to the sharpness reduction corresponding to the output characteristics of these image output devices. Furthermore, although an example was shown in which the compensation corresponding to output characteristics is applied to only one of the plurality of devices, a configuration that performs compensation processing corresponding to the output characteristics of every device used to generate the superimposed image is desirable.
In this embodiment, an example was shown in which a single filter having the inverse of the output characteristics of the image formed by the image forming apparatus 3 is held in advance and used to enhance the sharpness of the input image. However, the output characteristics described above vary with the printing conditions of the image (recording medium, ink type, number of passes, carriage speed, scanning direction, halftone processing). It is therefore desirable to hold a plurality of inverse characteristic filters corresponding to these printing conditions and to switch between them according to the printing conditions. Alternatively, a single inverse characteristic filter and a filter correction coefficient for each printing condition may be held, and the plurality of inverse characteristic filters may be generated by switching the filter correction coefficient according to the printing conditions, as sketched below.
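A minimal sketch of this second variant, with hypothetical condition keys and coefficients; the patent does not specify these values or the exact role of the coefficient, so the coefficient is assumed here to scale only the enhancement part of the kernel.

```python
import numpy as np

# One base inverse-characteristic filter (placeholder 5x5 pass-through kernel).
base_filter = np.zeros((5, 5))
base_filter[2, 2] = 1.0

# Hypothetical per-condition correction coefficients:
# (recording medium, ink type, number of passes) -> coefficient.
correction_coeffs = {
    ("glossy", "dye", 8): 1.0,
    ("matte", "pigment", 4): 1.3,
}

def filter_for_condition(condition):
    delta = np.zeros_like(base_filter)
    delta[2, 2] = 1.0
    c = correction_coeffs[condition]
    # Scale only the enhancement part so the DC gain stays 1 (an assumption;
    # this generates one inverse characteristic filter per printing condition).
    return delta + c * (base_filter - delta)
```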
In this embodiment, the sharpness enhancement process is performed on the input image data using the inverse characteristic filter. However, the image processing procedure that uses the inverse characteristic filter is not limited to this example; an inverse characteristic filtering process based on the output characteristics of each image output device may be performed.
  • Max (u), Min (u), White, and Black are described as reflectance, but luminance, density, and RGB values of the device may be used.
  • the chart for acquiring the output characteristics of the output image is not limited to the example of FIG.
  • a rectangular wave pattern may be used instead of the sine wave pattern as long as the responsiveness for each frequency can be calculated.
  • the CTF value calculated by applying Expression (11) to the rectangular wave pattern is used as the frequency characteristic fi (u).
  • the CTF value may be converted into the MTF value using a known Coltman correction equation without using the frequency characteristic.
In this embodiment, the inverse characteristic filter is generated and held in advance. Alternatively, an input unit may be provided for the user to input the reflectance distribution of the chart, and the inverse characteristic filter may be generated according to the input reflectance distribution.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)

Abstract

An image processing device generates image data for producing one image by superimposing a first output image output by a first image output device and a second output image output by a second image output device. The device is characterized by having a first acquisition means for acquiring input image data, a second acquisition means for acquiring first output image data to be output to the first image output device, and a generation means for generating, on the basis of the input image data and the first output image data, second output image data to be output to the second image output device, wherein the sharpness of the image represented by the second output image data corresponds to the sharpness of the image represented by the first output image data.

Description

Image processing apparatus, image processing method, and program
The present invention relates to an image processing technique for reproducing one image by combining images output from a plurality of devices.
In recent years, there have been increasing opportunities to handle images with higher definition and a wider dynamic range in imaging with digital cameras and in CG (computer graphics) rendering. Demand is likewise growing for reproducing such images with output devices such as printers. In response to this demand, the use of projectors, large-format printers, and similar devices that can reproduce the color, gradation, and texture of real objects (subjects) and CG objects at impressive sizes is anticipated. As a technique for reproducing one image by superimposing images output from a plurality of devices, JP 2010-103863 A discloses superimposing an image projected by an image projection apparatus on a printed matter formed by an image forming apparatus, thereby expanding the color reproduction range in the dark regions of the reproduced image compared with using the image projection apparatus alone.
JP 2010-103863 A
When images output from a plurality of devices are superimposed as described above, images with different resolutions may be superimposed. In this case, the sharpness of the image reproduced by superimposing the images output from the devices becomes lower than the sharpness of the input image. Consider, as an example, superimposing an image output by an image projection apparatus and an image output by an image forming apparatus. Because the number of pixels in the image projected by the image projection apparatus is fixed, the resolution of the output image decreases in inverse proportion to the enlargement of the image size. Therefore, when the same image data is input to the image forming apparatus and the image projection apparatus and output at an enlarged size, the resolution of the image output by the image projection apparatus becomes lower than the resolution of the image output by the image forming apparatus. Because of this drop in resolution, the information held at the resolution of the input image data cannot be fully expressed, and the sharpness decreases. If the images output from the devices are then superimposed, the sharpness of the reproduced superimposed image falls below the sharpness of the input image because the sharpness of the image output by the image projection apparatus has decreased.
An object of the present invention is to provide image processing for suppressing the reduction in the sharpness of a superimposed image relative to the input image that occurs when one image is generated by superimposing images output from a plurality of devices based on the input image.
To solve the above problem, an image processing apparatus according to the present invention generates image data to be output to a second image output device in order to generate one image by superimposing a first output image output from a first image output device based on an input image and a second output image output from the second image output device based on the input image, the second output image having a higher resolution than the first output image. The image processing apparatus comprises: first acquisition means for acquiring input image data representing the input image; second acquisition means for acquiring first output image data generated based on the input image data and output to the first image output device; and first generation means for generating, based on the input image data and the first output image data, second output image data to be output to the second image output device, wherein the sharpness of the image represented by the second output image data corresponds to the sharpness of the image represented by the first output image data.
According to the present invention, it is possible to suppress the reduction in the sharpness of a superimposed image relative to the input image that occurs when one image is generated by superimposing images output from a plurality of devices based on the input image.
  • Block diagram showing an example of the functional configuration of the image processing apparatus 1
  • Flowchart of the processing executed by the image processing apparatus 1
  • Flowchart of the processing executed by the image processing apparatus 1
  • Diagram showing an example of the first color conversion LUT 105 for generating projection image data
  • Block diagram showing an example of the hardware configuration of the image processing apparatus 1
  • Flowchart of the processing (S203) for calculating the degree of sharpness reduction
  • Flowchart of the processing (S203) for calculating the degree of sharpness reduction
  • Diagram showing an example of the LUT 113 for determining a correction coefficient according to ambient light information
  • Diagram showing an example of the second color conversion LUT 108 for generating formed image data
  • Block diagram showing an example of the functional configuration of the image processing apparatus 1
  • Block diagram showing an example of the functional configuration of the image processing apparatus 1
  • Block diagram showing an example of the functional configuration of the image processing apparatus 1
  • Diagram schematically showing resolution conversion processing based on the scaling factor
  • Diagram schematically showing resolution conversion processing based on the scaling factor
  • Diagram schematically showing an example of the processing for calculating the degree of sharpness reduction
  • Diagram schematically showing an example of the processing for enhancing the sharpness of an image
  • Conceptual diagram schematically showing the relationship between the image to be reproduced and the image output by the image forming apparatus
  • Flowchart of the processing (S203) for calculating the degree of sharpness reduction
  • Schematic diagram showing an example of a high-pass filter according to the scaling factor
  • Schematic diagram showing an example of a high-pass filter according to the scaling factor
  • Schematic diagram showing an example of a high-pass filter according to the scaling factor
  • Block diagram showing an example of the functional configuration of the image processing apparatus 1
  • Conceptual diagram schematically showing the relationship between the image to be reproduced and the image output by the image projection apparatus
  • Diagram showing an example of a UI screen for receiving input from the user
  • Block diagram showing the functional configuration of the image processing apparatus 1
  • Block diagram showing the functional configuration of the image processing apparatus 1
  • Block diagram showing the functional configuration of the image processing apparatus 1
  • Diagram showing an example of a chart for measuring the output characteristics of the image forming apparatus 3
Embodiments of the present invention are described below with reference to the drawings. Identical configurations are denoted by the same reference numerals.
[Example 1]
<Relationship between the image to be reproduced and the image output by the device>
First, the relationship between the image to be reproduced and the image output by the image forming apparatus, and the relationship between the image to be reproduced and the image output by the image projection apparatus, are described with reference to FIGS. 14 and 18. In this embodiment, resolution is a measure of the fineness of expression in a projected or formed image, and dpi (dots per inch) is used as its unit. Sharpness in this embodiment is a measure of the clarity of the fine (high-resolution) portions of an image. For example, sharpness can be expressed by the MTF (modulation transfer function), the response function given by the ratio of output contrast to input contrast at each resolution. When the loss of contrast is small and the response is good, the sharpness is said to be high; when the loss of contrast is large and the response is poor, the sharpness is said to be low.
FIG. 14A shows input image data (also called first image data) representing the image to be reproduced. FIG. 14B shows formed image data (also called third image data) representing an image at the same size as the input image data (enlargement ratio 1.0), and FIG. 14C shows formed image data representing an image at twice the size of the input image data (enlargement ratio 2.0). The formed image data is the image data input to the image forming apparatus to form an image on the recording medium. FIG. 14 shows an example in which the input image data represents an image with a resolution of 300 dpi and a size of 8192 pixels horizontally and 4320 pixels vertically, and an image forming apparatus capable of forming images at a resolution of 1200 dpi forms images based on that input image data. The image forming apparatus forms an image by scanning a recording head that ejects ink over the recording medium multiple times. Because the distance between the recording medium and the recording head is constant regardless of the size of the image to be formed, the resolution of the image formed by the image forming apparatus is constant. For example, as shown in FIG. 14, when this image forming apparatus outputs a 300 dpi image at a resolution of 1200 dpi, the number of pixels of the image represented by the formed image data is four times that of the image represented by the input image data, both vertically and horizontally. When an image is formed at twice the size of the image represented by the input image data, the number of pixels of the image represented by the formed image data is further doubled vertically and horizontally. In this way, the resolution of the image output (formed) by the image forming apparatus is kept constant.
FIG. 18A shows input image data representing the image to be reproduced, and FIG. 18B shows projection image data (also called second image data) representing an image at the size obtained when the image represented by the input image data is projected with the number of pixels of the image projection apparatus (enlargement ratio 1.0), taken here as the "same size". FIG. 18C shows projection image data representing an image when the projected size is twice the "same size" (enlargement ratio 2.0). The projection image data is the image data input to the image projection apparatus to project an image. In FIG. 18, the input image data is the same as in FIG. 14, and the image output by the image projection apparatus has 4096 pixels horizontally and 2160 pixels vertically. Unlike an image forming apparatus, an image projection apparatus generates an image by driving a display element such as a liquid crystal panel pixel by pixel and displays the image by projecting it through a projection lens. The number of pixels of the output image is therefore determined by the display element, such as the liquid crystal panel, held in advance, and cannot be increased or decreased as in the image forming apparatus. As a result, as shown in FIG. 18, the number of pixels of the projected image is the same whether the image is projected at the same size or at twice the size based on the 300 dpi input image data. The resolution of the image therefore decreases in inverse proportion to the enlargement of the projected image size.
As described above, the image forming apparatus can increase or decrease the number of pixels of the output image through the scanning method of the recording head, and can therefore form images at a constant resolution regardless of the image size. In the image projection apparatus, on the other hand, the number of pixels of the output image is determined by the display element such as the liquid crystal panel, so the resolution of the image decreases in inverse proportion to the enlargement of the image size when the input image is stretched and projected. That is, the output image of the image forming apparatus has constant sharpness regardless of the size of the reproduction target image, whereas in the output image of the image projection apparatus the pixel size grows as the reproduction target image grows, and the sharpness seen from the same viewing position decreases. In this embodiment, one image is reproduced by superimposing the image projected by the image projection apparatus and the image formed by the image forming apparatus, and the reduction in the sharpness of the projected image described above is compensated by enhancing the sharpness of the image formed by the image forming apparatus. Details are described below.
<Hardware configuration of the image processing apparatus 1>
FIG. 4 shows an example of the hardware configuration of the image processing apparatus 1 in this embodiment. The image processing apparatus 1 is, for example, a computer, and includes a CPU 1401, a ROM 1402, and a RAM 1403. The CPU 1401 uses the RAM 1403 as a work memory and executes the OS (operating system) and various programs stored in the ROM 1402, the HDD (hard disk drive) 1412, and the like. The CPU 1401 also controls each component via a system bus 1408. The processing in the flowcharts described later is executed by the CPU 1401 after the program code stored in the ROM 1402, the HDD 1412, or the like is loaded into the RAM 1403. A display 5 is connected to a VC (video card) 1404. An input device 1410 such as a mouse and keyboard, the image projection apparatus 2, and the image forming apparatus 3 are connected to a general-purpose I/F (interface) 1405 via a serial bus 1409. The HDD 1412 and a general-purpose drive 1413 that reads and writes various recording media are connected to a SATA (serial ATA) I/F 1406 via a serial bus 1411. A NIC (network interface card) 1407 exchanges information with external devices. The CPU 1401 uses the HDD 1412 and the various recording media mounted on the general-purpose drive 1413 as storage locations for various data. The CPU 1401 displays the UI (user interface) provided by a program on the display 5 and receives input such as user instructions via the input device 1410.
<Functional configuration of the image processing apparatus 1>
FIG. 1 is a block diagram showing the functional configuration of the image processing apparatus 1. In FIG. 1, reference numeral 1 denotes the image processing apparatus, 2 an image projection apparatus (projector), 3 an image forming apparatus (printer), and 4 the illumination that determines the ambient light when the superimposed image is observed. The superimposed image is the image in which the projection image 502 projected by the image projection apparatus 2 is superimposed on the formed image 501 formed on the recording medium by the image forming apparatus 3. The image processing apparatus 1 can be implemented, for example, by a printer driver installed on a general personal computer, in which case each unit of the image processing apparatus 1 described below is realized by the computer executing a predetermined program. As another configuration, the image processing apparatus 1 may, for example, include the image projection apparatus 2 and the image forming apparatus 3.

The image processing apparatus 1 is connected to the image projection apparatus 2 and to the image forming apparatus 3 by interfaces or circuits. The image processing apparatus 1 has a first input terminal 101, a second input terminal 102, an acquisition unit 103, a first generation unit 104, a first color conversion LUT 105, a calculation unit 106, a second generation unit 107, a second color conversion LUT 108, a first output terminal 109, and a second output terminal 110. The acquisition unit 103 acquires input image data representing the image to be reproduced via the first input terminal 101, and acquires the projection state of the image projection apparatus 2 via the second input terminal 102. The projection state is described later. The first generation unit 104 refers to the first color conversion LUT 105 and generates the image data to be input to the image projection apparatus 2 (projection image data) based on the input image data. The calculation unit 106 acquires the input image data and the projection image data and converts the resolution of the image represented by the input image data and the resolution of the image represented by the projection image data according to the projection state. It then calculates, from the resolution-converted input image data and projection image data, the degree to which the sharpness of the image represented by the projection image data is reduced relative to the image represented by the input image data. The second generation unit 107 enhances the sharpness of the image represented by the input image data based on the input image data and the degree of sharpness reduction, and then refers to the second color conversion LUT 108 to generate the image data to be input to the image forming apparatus 3 (formed image data) from the sharpness-enhanced input image data. The projection image data generated by the first generation unit 104 is output to the image projection apparatus 2 via the first output terminal 109, and the formed image data generated by the second generation unit 107 is output to the image forming apparatus 3 via the second output terminal 110.
<Configuration and operation of the image projection apparatus 2>
The image projection apparatus 2 has a projection optical unit (not shown). The projection optical unit includes a lamp as a light source, a liquid crystal driving device that drives liquid crystal panels based on the input projection image data, and a projection lens. Light from the lamp is separated into R, G, and B light by the optical system and guided to the respective liquid crystal panels. The light guided to each liquid crystal panel is luminance-modulated by that panel, and the projection lens projects the image onto the printed matter formed by the image forming apparatus 3.
<Configuration and operation of the image forming apparatus 3>
The image forming apparatus 3 forms an image by moving a recording head (not shown) vertically and horizontally relative to the recording medium and recording ink dots on the recording medium based on the formed image data generated by the image processing apparatus 1. In this embodiment, the image forming apparatus 3 is an inkjet printer, but a printer of another type, such as an electrophotographic printer, may be used.
<Processing in the image processing apparatus 1>
Next, the processing performed by the image processing apparatus 1 having the functional configuration described above is explained with reference to the flowchart of FIG. 2A. In the following, each step (process) is denoted by an S before its reference numeral.
In S201, the acquisition unit 103 acquires the input image data via the first input terminal 101. The input image data is 3-channel color image data in which an 8-bit RGB value is recorded for each pixel. The image represented by the input image data has a higher resolution than the image projected by the image projection apparatus 2; in other words, the image projected by the image projection apparatus 2 has a lower resolution than the image represented by the input image data. The input image data acquired by the acquisition unit 103 is sent to the first generation unit 104, the calculation unit 106, and the second generation unit 107. The projection state is also acquired via the second input terminal 102. The projection state consists of the distance (projection distance) between the image projection apparatus 2 and the formed image 501 that is the projection target when the image projection apparatus 2 projects the projection image 502, and the angle of view determined by the relationship between the projection lens and the liquid crystal panel. The projection distance D is acquired as 4-bit data expressed in meters (m), and the angle of view θ as 4-bit data expressed in radians. The projection state is acquired either by receiving user input via the UI screen shown in FIG. 19 or directly from the image projection apparatus 2 by connecting the image projection apparatus 2 to the image processing apparatus 1.
In S202, the first generation unit 104 converts the resolution of the image represented by the input image data acquired by the acquisition unit 103 based on the number of pixels of the image output by the image projection apparatus 2. The known bicubic method is used for the resolution conversion, but another resolution conversion method such as the bilinear method may be used. The first generation unit then refers to the first color conversion LUT 105 held in advance and generates the projection image data from the resolution-converted input image data. The first color conversion LUT 105 is shown in FIG. 3. As shown in FIG. 3, it holds the correspondence between the signal values (RGB values) recorded for each pixel of the input image data and the signal values (RGB values) recorded for each pixel of the projection image data. The first color conversion LUT 105 is created in advance by projecting a chart whose input signal values recorded in the projection image data are known and measuring the colors of the projected image. The generated projection image data is, like the input image data, 3-channel color image data in which an 8-bit RGB value is recorded for each pixel. Once generated, the projection image data is sent to the image projection apparatus 2 via the first output terminal 109 and is also sent to the calculation unit 106.
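The patent does not prescribe how the LUT is sampled or interpolated. The sketch below assumes a 17 × 17 × 17 lattice with placeholder entries and trilinear interpolation, which is one common way to apply such an RGB-to-RGB table; all names are assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

nodes = np.linspace(0.0, 255.0, 17)  # assumed 17^3 LUT lattice
lut = np.random.randint(0, 256, (17, 17, 17, 3)).astype(np.float64)  # placeholder entries

apply_lut = RegularGridInterpolator((nodes, nodes, nodes), lut)

def convert_to_projection_data(img_rgb):
    # img_rgb: HxWx3 uint8 input image; returns the LUT-converted uint8 image.
    flat = img_rgb.reshape(-1, 3).astype(np.float64)
    out = apply_lut(flat)  # trilinear interpolation between LUT nodes
    return np.clip(np.rint(out), 0, 255).astype(np.uint8).reshape(img_rgb.shape)
```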
In S203, the calculation unit 106 acquires the input image data and projection state acquired in S201 and the projection image data generated in S202. Based on the acquired projection state, it calculates the enlargement ratio E of the size of the image projected by the image projection apparatus 2 relative to the size of the image represented by the input image data; here, the enlargement ratio is the magnification applied when the image is enlarged. Based on the calculated enlargement ratio E, the calculation unit converts the resolution of the image represented by the projection image data and the resolution of the image represented by the input image data. After the resolution conversion, it calculates the degree to which the sharpness of the image represented by the projection image data is reduced relative to the image represented by the input image data. The calculated degree of sharpness reduction is sent to the second generation unit 107. The detailed processing of S203 is described below using the flowchart shown in FIG. 5A.
In S2031, the input image data, the projection image data, and the projection state are acquired. In S2032, the enlargement ratio E is calculated by the following equation (1) from the acquired projection state (D and θ) and the projection state (projection distance D0 and angle of view θ0) at which the size of the image represented by the input image data and the size of the projection image 502 are equal.
E = D/D0 + θ/θ0   ... (1)
The projection state (D0 and θ0) giving the same size as described above is determined in advance by the following method and held in the calculation unit 106: projection image data generated from input image data whose image size is known is projected, and the projection state in which the size of the projected projection image 502 matches the size of the image represented by the input image data is searched for. Alternatively, a calculation formula may be constructed from the characteristics of the projection lens and liquid crystal panel of the image projection apparatus 2 and used for the calculation. In S2031, only the projection distance D may be acquired as the projection state, and in S2032 the enlargement ratio E may be calculated by dividing the projection distance D by D0.
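A direct transcription of equation (1) as a sketch (function and variable names, and the example values, are assumptions):

```python
def enlargement_ratio(D, theta, D0, theta0):
    # Equation (1): enlargement of the projected image relative to the
    # projection state (D0, theta0) at which input and projected sizes match.
    return D / D0 + theta / theta0

E = enlargement_ratio(D=3.0, theta=0.6, D0=1.5, theta0=0.6)  # example values
```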
In S2033, the resolution of the image represented by the input image data and the resolution of the image represented by the projection image data are converted based on the enlargement ratio E calculated in S2032 and the resolution Rf of the image formed by the image forming apparatus 3. For the projection image data, the scaling factor Fp is first calculated from the enlargement ratio E, the resolution Rf, and the resolution Rp of the image projected by the image projection apparatus 2, using the following equation (2).
=R/(R/E)・・・式(2) Fp = Rf / ( Rp / E) ... Formula (2)
Based on the calculated scaling factor Fp, the resolution of the projection image data is converted so that each pixel becomes Fp pixels. The known nearest-neighbor method is used for this resolution conversion. FIG. 11A shows an example of the resolution conversion of the image represented by the projection image data when the scaling factor of the projection image data is Fp = 3. Because the nearest-neighbor method replicates the pre-conversion pixel values, the result is projection image data representing an image that simulates the projected projection image 502. A method other than the nearest-neighbor method, such as the bicubic method, may also be used for the resolution conversion.
Similarly, for the resolution conversion of the image represented by the input image data, the scaling factor Fin of the input image data is first calculated from the enlargement ratio E, the resolution Rf, and the resolution Rin of the image represented by the input image data, using the following equation (3).
in=R/(Rin/E)・・・式(3) F in = R f / (R in / E) (3)
Based on the calculated scaling factor Fin, the resolution of the input image data is converted so that each pixel becomes Fin pixels. The known bicubic method is used for this resolution conversion. FIG. 11B shows an example of the resolution conversion of the image represented by the input image data when the scaling factor of the input image data is Fin = 1.5. By using the bicubic method, which converts the resolution by interpolating from the neighboring pre-conversion pixel values, the input image data can be converted into data suitable for generating formed image data representing an image that simulates the formed image 501. The resolutions Rf and Rp described above are acquired either by user input or directly by connecting the image projection apparatus 2 or the image forming apparatus 3 to the image processing apparatus 1. Both Rf and Rp are desirably the highest resolutions that the respective devices can output.
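The following sketch combines equations (2) and (3) with the two resolution conversions; scipy's zoom with order 0 (nearest-neighbor) and order 3 (bicubic) stands in for the conversions described above. All names and example values are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def scaling_factors(E, R_f, R_p, R_in):
    F_p = R_f / (R_p / E)    # Equation (2)
    F_in = R_f / (R_in / E)  # Equation (3)
    return F_p, F_in

def simulate_projected(proj_img, F_p):
    # Nearest-neighbour replication (order=0) models the enlarged
    # projector pixels of the projected image 502.
    return zoom(proj_img, (F_p, F_p, 1), order=0)

def upscale_input(input_img, F_in):
    # Bicubic interpolation (order=3) for the print-resolution input image.
    return zoom(input_img, (F_in, F_in, 1), order=3)

F_p, F_in = scaling_factors(E=2.0, R_f=1200, R_p=800, R_in=1600)  # example values
```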
In S2034, the pixel values of the projection image data are subtracted from the pixel values of the input image data representing the resolution-converted images obtained in S2033, and the difference obtained by the subtraction is taken as the degree of sharpness reduction. Like the input image data and the projection image data, the degree of sharpness reduction is calculated as 3-channel color image data with an RGB value recorded for each pixel. The subtraction is performed independently for the R, G, and B channels of each pixel of each image. Let I_Rx,y be the R-channel pixel value of the input image data (x is the horizontal pixel position and y the vertical pixel position), P_Rx,y the R-channel pixel value of the projection image data, and Q_Rx,y the R-channel pixel value of the degree of sharpness reduction; Q_Rx,y is calculated by the following equation (4).
Q_Rx,y = I_Rx,y − P_Rx,y   ... (4)
FIG. 12 schematically shows an example of calculating the degree of sharpness reduction from the input image data and the projection image data. The G and B channels are processed in the same way, so their description is omitted. The above calculation is performed pixel by pixel for the R channel and then sequentially for G and B. Since I_Rx,y and P_Rx,y are 8-bit data representing 0 to 255, the output Q_Rx,y is 9-bit data representing −255 to +255. Although sequential processing is described above, the processing is not limited to this example; for instance, the calculations for the channels may be performed in parallel. The calculated degree of sharpness reduction (3-channel color image data in which a 9-bit RGB value is recorded for each pixel) is sent to the second generation unit 107.
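Equation (4) applied to whole channel planes at once, as a vectorized sketch (int16 holds the 9-bit range −255 to +255):

```python
import numpy as np

def sharpness_reduction(input_img, projected_img):
    # Per-pixel, per-channel difference of the two resolution-converted
    # images; the result spans -255..+255.
    return input_img.astype(np.int16) - projected_img.astype(np.int16)
```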
In S204, the second generation unit 107 enhances the sharpness of the image represented by the input image data resolution-converted in S2033, based on the degree of sharpness reduction calculated in S203. The sharpness enhancement is realized by adding the pixel values of the resolution-converted input image data and the pixel values of the image data representing the degree of sharpness reduction. By enhancing the sharpness of the formed image 501, on which the projection image 502 is superimposed, by the amount of sharpness lost when the image projection apparatus 2 projects the projection image 502, the reduction in the sharpness of the superimposed image can be suppressed. As with the subtraction in S2034, the addition is performed independently for the R, G, and B channels of each pixel of each image. Let I_Rx,y be the R-channel pixel value of the input image data (x is the horizontal pixel position and y the vertical pixel position) and Q_Rx,y the R-channel pixel value of the degree of sharpness reduction. The R-channel pixel value of the input image data after the addition, I_R'x,y, is calculated by the following equation (5).
I_R'x,y = I_Rx,y + Q_Rx,y   ... (5)
FIG. 13 schematically shows an example of calculating input image data representing a sharpness-enhanced image from the input image data and the image data representing the degree of sharpness reduction. If the result I_R'x,y is less than 0 it is clipped to 0, and if it is 256 or more it is clipped to 255, so the input image data representing the sharpness-enhanced image is 8-bit data representing 0 to 255. The G and B channels are processed in the same way, so their description is omitted. The calculation is performed pixel by pixel for the R channel and then sequentially for G and B, but it may also be performed in parallel.
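Equation (5) with the clipping step, again as a vectorized sketch:

```python
import numpy as np

def enhance_sharpness(input_img, reduction):
    # Add the sharpness-reduction map back onto the input image and clip
    # the result to the 8-bit range 0..255.
    out = input_img.astype(np.int16) + reduction
    return np.clip(out, 0, 255).astype(np.uint8)
```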
Furthermore, based on the input image data representing the sharpness-enhanced image, the second color conversion LUT 108 held in advance is referenced to generate the formed image data. The second color conversion LUT 108 is shown in FIG. 7. As shown in FIG. 7, it holds the correspondence between the signal values (RGB values) recorded for each pixel of the input image data and the signal values (RGB values) recorded for each pixel of the formed image data. The second color conversion LUT 108 is created in advance by forming an image on a recording medium based on a chart whose input signal values recorded in the formed image data are known, and measuring the colors of the formed image. The generated formed image data is, like the input image data, 3-channel color image data in which an 8-bit RGB value is recorded for each pixel. The generated formed image data is sent to the image forming apparatus 3 via the second output terminal 110. This completes the series of processes for generating the projection image data and the formed image data.
By performing the processing control described above, the reduction in the sharpness of the projection image 502 relative to the image to be reproduced can be compensated by enhancing the sharpness of the formed image 501. As a result, the reduction in sharpness of the superimposed image, in which the projection image 502 projected by the image projection apparatus 2 is superimposed on the printed matter (formed image 501) formed by the image forming apparatus 3, can be suppressed.
<Modifications>
In this embodiment, an example was shown in which the input image data and the projection image data are resolution-converted according to the projection state, and the degree of reduction in the sharpness of the image projected by the image projection apparatus 2 relative to the image to be reproduced is calculated from the input image data and the projection image data. However, the method of calculating the degree of sharpness reduction is not limited to this example. For example, the degree of reduction in the sharpness of the projection image 502 relative to the image to be reproduced may be calculated by arithmetic processing based on the resolution Rf of the image formed by the image forming apparatus 3 and the resolution Rp of the image projected by the image projection apparatus 2. The detailed processing of S203 for this calculation is described using the flowchart shown in FIG. 15.
S2031 and S2032 are the same as in the first embodiment described above, so their description is omitted. In S2035, only the input image data is resolution-converted, based on the enlargement ratio E calculated in S2032 and the resolution Rf of the image formed by the image forming apparatus 3. The resolution conversion of the input image data is the same as in the first embodiment, so its description is omitted.
In S2036, the scaling factor Fp is calculated using equation (2) as in S2033 described above, a high-pass filter with a matrix size based on the calculated scaling factor Fp is generated, and filtering is performed. FIG. 16 shows examples of the high-pass filters applied when the scaling factor Fp is 3, 5, and 9. As shown in FIG. 16, a high-pass filter with an Fp × Fp matrix is generated and applied to the input image data resolution-converted in S2035, and the filtering result is taken as the degree of sharpness reduction. A sketch of one such filter construction follows.
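FIG. 16 is not reproduced here, so the kernel below is one plausible Fp × Fp high-pass construction (center minus local mean), not necessarily the patent's exact coefficients; all names are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def highpass_kernel(F_p):
    # F_p x F_p kernel whose response is the pixel minus the local mean;
    # its output approximates the detail lost at the projector's pixel pitch.
    n = int(F_p)
    k = np.full((n, n), -1.0 / (n * n))
    k[n // 2, n // 2] += 1.0
    return k

def sharpness_reduction_by_filter(upscaled_input, F_p):
    k = highpass_kernel(F_p)
    # Filter each colour channel independently.
    return np.stack([convolve(upscaled_input[..., c].astype(np.float64), k)
                     for c in range(3)], axis=-1)
```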
First, converting the image represented by the input image data to the resolution of the image formed by the image forming apparatus 3 extracts, from the sharpness of the image to be reproduced, the sharpness that the image forming apparatus 3 can reproduce. Next, applying the high-pass filter based on the scaling factor Fp to the extracted component calculates the sharpness that is lost when the image is expressed by the image projection apparatus 2. By executing this series of processes, the sharpness that can be reproduced by the image forming apparatus 3 but not by the image projection apparatus 2 is calculated as the degree of sharpness reduction described above.
In the modification described above, a high-pass filter is generated and the filtering is performed with the generated filter, but a plurality of filter types may instead be held in advance. In that case, the filter used for the filtering is selected from the pre-held filters according to the scaling factor described above or according to the frequency band, computable from the scaling factor, in which the sharpness decreases.
Although this embodiment shows a configuration using one image projection apparatus 2, a configuration using two or more image projection apparatuses 2 may be used. For example, as shown in FIG. 10, n image projection apparatuses 2a to 2c and one image forming apparatus 3 may be used to construct a superimposed-image presentation system (stack projection) in which the n image projection apparatuses project n projection images superimposed at the same position. By using two or more image projection apparatuses 2, two or more projection images are superimposed on the formed image, and the luminance range that the superimposed image can express is extended further toward the high-luminance side.
Also, for example, as shown in FIG. 9, four image projection apparatuses 2a to 2d and one image forming apparatus 3 may be used: the input image data is divided 2 × 2, and the projection image data generated from the divided input image data is projected by the respective image projection apparatuses 2a to 2d. With this configuration, a superimposed-image presentation system (multi-projection) in which a 2 × 2 array of projection images is superimposed on the formed image may be constructed. By having each of the image projection apparatuses 2a to 2d reproduce its own region of the image represented by the input image data, a projection image with a higher resolution, or a larger size, than the projection image projected by a single image projection apparatus 2 can be superimposed.
In the present embodiment, the resolution of the image represented by the projection image data is matched to that of the formed image 501, and the degree of sharpness reduction is calculated from the resolution-converted projection image data and the input image data. However, the method of generating the projection image data used for this calculation is not limited to this example. FIG. 17 shows a functional configuration of the image processing apparatus 1 for calculating the degree of sharpness reduction in the projected image 502. As illustrated there, the image processing apparatus 1 includes an imaging device 6 that captures the projected image 502 and a third input terminal 112 for acquiring data from the imaging device 6. Image data obtained by capturing the projected image 502 with the imaging device 6 may be used as the projection image data for calculating the degree of sharpness reduction. Using projection image data obtained by capturing the actual projected image 502 also accommodates changes in the relationship between the projection image data and the projected image 502 caused by aging of the image projection apparatus 2.
In the present embodiment, the projection distance is acquired as the projection state, the enlargement ratio E is calculated from it, and the degree of sharpness reduction is calculated. However, the projection state is not limited to this example. For instance, known trapezoidal distortion correction (keystone correction) may be applied to the projected image 502 in advance to suppress the trapezoidal deformation caused by the angle φ between the optical axis of the projected image and the formed image 501. In that case, the degree of sharpness reduction must be calculated from the projected image 502 to which the keystone correction for the angle φ has been applied. An example of generating projection image data that takes the projection state φ into account follows.
First, the first generation unit 104 acquires the angle φ described above as the projection state. Known affine transformation parameters (trapezoidal distortion correction coefficients) for deforming the input image data into a trapezoidal image are held for each φ, and the input image data is converted into image data representing a trapezoidal image using the parameters for the acquired φ. The affine transformation parameters comprise, for example, a horizontal displacement, a vertical displacement, a horizontal scaling factor, and a vertical scaling factor for each pixel position of the input image data, according to φ. The known bicubic method is used for the resolution conversion into the trapezoidal image data. The image data representing the trapezoidal image is then inversely transformed with the same parameters back to the same rectangle as the input image data, this time using the known nearest neighbor method. Because the nearest neighbor method replicates neighboring pixels, the inverse transform yields projection image data that reflects the sharpness lost in the conversion to the trapezoidal image. Projection image data is then generated from the inversely transformed input image data by referring to the first color conversion LUT 105. The processing after the projection image data reflecting the projection state φ has been generated is the same as before, so its description is omitted. The above describes correcting the deformation of the input image data according to the angle φ between the formed image 501 and the optical axis of the projected image 502, but the processing is not limited to this example. For instance, the distortion caused by the angle between the liquid crystal panel and the projection lens, or the lens distortion that arises because the refraction of the projection lens departs further from the ideal toward the periphery, may also be corrected. In these cases as well, affine transformation parameters for each correction may be held and a conversion similar to the keystone correction above may be performed.
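A minimal sketch of this round trip, assuming OpenCV; the rectangle-to-trapezoid mapping is expressed here as a perspective warp, and the way the top-edge inset is derived from φ is a stand-in for the per-φ parameter table the patent holds.

```python
import cv2
import numpy as np

def simulate_keystone_loss(img, phi):
    h, w = img.shape[:2]
    # Stand-in for the per-phi parameter table: inset of the top corners.
    d = min(int(0.05 * w * abs(phi)), w // 4)
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[d, 0], [w - d, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    # Forward warp to the trapezoid with bicubic interpolation ...
    trap = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_CUBIC)
    # ... then warp back to the rectangle with nearest-neighbor, which
    # replicates pixels and so preserves the detail loss of the first step.
    return cv2.warpPerspective(trap, M, (w, h),
                               flags=cv2.INTER_NEAREST | cv2.WARP_INVERSE_MAP)
```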
The brightness of the image may also drop in the periphery of the projected image 502 relative to its center because of reduced light output. For example, the input image data may be corrected in advance to compensate for this position-dependent variation in light amount over the projected image 502. In that case, the degree of sharpness reduction must be calculated from the corrected input image data. An example of generating projection image data that takes the projection states α and β into account follows.
First, the first generation unit 104 acquires, as the projection state, a light amount reduction α for each pixel position of the projected image 502 and a coefficient β for adjusting the light amount of the whole image. The pixel value I_R′_{x,y} of the corrected input image data is calculated from the acquired α_{x,y} and β_{x,y}, the R channel pixel value I_R_{x,y} of the input image data (x is the horizontal pixel position, y the vertical pixel position), and Equation (6) below.
I_R′_{x,y} = (I_R_{x,y} + α_{x,y}) × β_{x,y}   … (6)
The per-pixel light amount reduction α and the whole-image coefficient β of the projected image 502 are determined in advance by projecting an image from input image data with known signal values and measuring the projected result. Projection image data is then generated from the corrected input image data by referring to the first color conversion LUT 105. The processing after the projection image data reflecting the projection states α and β has been generated is the same as before, so its description is omitted.
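Equation (6) transcribed directly, assuming α and β have already been measured per pixel and stored as float arrays the same shape as one channel of the input image; the clipping to 8 bits is our addition.

```python
import numpy as np

def compensate_light_falloff(channel, alpha, beta):
    # I'_{x,y} = (I_{x,y} + alpha_{x,y}) * beta_{x,y}, per Equation (6)
    corrected = (channel.astype(np.float64) + alpha) * beta
    return np.clip(corrected, 0, 255).astype(np.uint8)
```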
In the present embodiment, the sharpness of the input image data is enhanced by adding the pixel values of the input image data and the pixel values of the image data representing the degree of sharpness reduction, but the enhancement is not limited to this example. For instance, correction values corresponding to the degree of sharpness reduction may be held separately and added to the pixel values of the input image data. A gamma (γ) value corresponding to the degree of sharpness reduction may be held in advance, and the sharpness of the input image data may be enhanced by known γ correction using that value. Alternatively, several known edge enhancement filters of different strengths for emphasizing fine image detail may be held, and the filter may be chosen according to the degree of sharpness reduction. A sketch of the γ-based variant follows.
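This sketch assumes 8-bit channels; the mapping from the degree of sharpness reduction to a γ value is invented for illustration, since the patent only says such values are held in advance.

```python
import numpy as np

def enhance_with_gamma(channel, degradation):
    # Stand-in mapping: stronger degradation -> gamma further above 1,
    # so the correction exponent 1/gamma brightens mid-tones more.
    gamma = 1.0 + np.mean(np.abs(degradation)) / 255.0
    normalized = channel.astype(np.float64) / 255.0
    out = 255.0 * normalized ** (1.0 / gamma)
    return np.clip(out, 0, 255).astype(np.uint8)
```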
In the present embodiment, the projection image data is generated from the input image data before the formed image data is generated, but the processing is not limited to this order. For example, the projection image data may be generated from the input image data in advance and stored on the HDD 1412 or the like; the input image data and the pre-generated projection image data are then acquired, and the formed image data is generated from the two.
[Example 2]
In the first embodiment, the formed image data is generated by enhancing the sharpness of the image represented by the input image data, based on the degree of sharpness reduction calculated from the projection image data and the input image data. However, the luminance range that the formed image 501 can express changes with the ambient light determined by the illumination 4: the formed image 501 expresses the recorded image by reflecting the light that strikes it. Consequently, when the illumination is weak (dark), the luminance range expressible by the formed image 501 tends to be narrow, and when it is strong (bright), the range tends to be wide. In the present embodiment, therefore, the strength of the sharpness enhancement is controlled in consideration of the luminance range of the formed image 501, which changes with the ambient light as described. As a result, variation of the sharpness-preserving effect with the ambient light can be reduced. The description below focuses on the parts that differ from the first embodiment.
<Functional Configuration of Image Processing Apparatus 1>
FIG. 8 shows the functional configuration of the image processing apparatus 1. The second generation unit 107 acquires ambient light information via the fourth input terminal 111. The ambient light information is 4-bit data representing the intensity of the ambient light striking the superimposed image. It is acquired either by user input or directly by connecting the illumination 4 to the image processing apparatus 1. The acquisition is not limited to this example: for instance, light intensity information may be held for each anticipated viewing scene of the superimposed image (outdoor clear sky, outdoor cloudy sky, indoor spotlight, indoor office lighting, and so on), and the intensity for the condition the user judges closest may be used. The rest of the configuration is the same as in the first embodiment, so its description is omitted.
<Processing Content of Image Processing Apparatus 1>
Of the processing in the second embodiment, S204, which differs from the first embodiment, is described. In S204, the degree of sharpness reduction calculated in S203 is corrected based on the ambient light information acquired via the fourth input terminal 111, and the sharpness of the image represented by the input image data is enhanced according to the corrected degree. The formed image data is then generated from the input image data representing the sharpness-enhanced image by referring to the second color conversion LUT 108 held in advance, as in the first embodiment. The enhancement is realized by adding, to the pixel values of the resolution-converted input image data, the sharpness reduction component corrected with a correction coefficient that depends on the ambient light information. As stated above, the formed image 501 expresses the recorded image by reflecting incident light, so its expressible luminance range tends to be narrow under weak ambient light and wide under strong ambient light. Therefore, when the ambient light is weak, applying a correction that more strongly emphasizes the sharpness component that the projected image 502 cannot express and only the formed image 501 expresses reduces the variation, with ambient light, of the sharpness-preserving effect. The detailed processing follows.
First, the correction coefficient Z is determined from the ambient light information by referring to the LUT 113, held in advance, that maps ambient light information to correction coefficients. FIG. 6 shows an example of the LUT 113, which is determined by measuring, for each viewing environment with different ambient light, the luminance range expressible by the formed image 501, and setting the coefficients according to the ratio of those ranges. As in S2033, the addition is performed independently for the R, G, and B channels of each pixel of each image. Let I_R_{x,y} be the R-plane pixel value of the input image data (x is the horizontal pixel position, y the vertical pixel position), Q_R_{x,y} the R-plane pixel value of the degree of sharpness reduction, and I_R′_{x,y} the R-plane pixel value of the input image data after enhancement. Using the correction coefficient Z, the result is calculated by Equation (7) below.
I_R′_{x,y} = I_R_{x,y} + Q_R_{x,y} × Z   … (7)
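Equation (7) as code, assuming the ambient light level is the 4-bit value described above; the LUT 113 entries below are invented placeholders, since the patent only says the table is measured in advance.

```python
import numpy as np

Z_LUT = {0: 1.8, 4: 1.4, 8: 1.0, 12: 0.8}  # hypothetical entries of LUT 113

def enhance_for_ambient(channel, degradation, ambient_level):
    # Pick the coefficient Z for the nearest tabulated ambient level.
    z = Z_LUT[min(Z_LUT, key=lambda k: abs(k - ambient_level))]
    # Equation (7): add the Z-scaled sharpness-reduction component.
    out = channel.astype(np.float64) + degradation * z
    return np.clip(out, 0, 255).astype(np.uint8)
```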
As described above, controlling the strength of the enhancement applied to the input image data according to the ambient light reduces the variation, with ambient light, of the sharpness-preserving effect.
<Modification>
The present embodiment holds the conversion LUT 113 mapping ambient light information to correction coefficients, but the processing is not limited to this example. For instance, a formula that predicts the correspondence between ambient light information and correction coefficient may be constructed, and the coefficient may be calculated from the input ambient light information using that formula.
Although the present embodiment uses a single datum as the ambient light information, the processing is not limited to this example. For instance, two-dimensional ambient light information with the same layout as the input image data may be acquired from the ambient light information for each region of the formed image 501, and the strength of the enhancement may be controlled for each region (pixel) of the input image data accordingly.
[Example 3]
In the first embodiment, the formed image data is generated by enhancing the sharpness of the image represented by the input image data, based on the degree of sharpness reduction calculated from the projection image data and the input image data. As described above, the resolution R_p of the projected image changes with the magnification of the image. Consequently, when the input image is displayed reduced (the enlargement ratio E is below 1), the resolution R_p of the projected image may exceed the resolution R_f of the formed image. In the present embodiment, therefore, the degree of sharpness reduction occurring in the formed image is predicted from the input image and the enlargement ratio E, and the sharpness of the projected image is enhanced based on the predicted degree. The description below focuses on the parts that differ from the first embodiment. Since the enlargement ratio E is below 1 in this embodiment, it is called the reduction ratio S; the reduction ratio is the magnification used when the image is reduced.
<Functional Configuration of Image Processing Apparatus 1>
FIG. 20 shows the functional configuration of the image processing apparatus 1. In this embodiment, the image processing apparatus 1 is connected to the image projection apparatus 2 via the second output terminal 110 and to the image forming apparatus 3 via the first output terminal 109. The first generation unit 104 refers to the first color conversion LUT 105 and generates, from the input image data, the image data to be input to the image forming apparatus 3 (formed image data). The calculation unit 106 acquires the input image data and the formed image data and converts the resolutions of the images they represent according to the projection state; it then calculates, from the resolution-converted input image data and formed image data, the degree of reduction in the sharpness of the image represented by the formed image data relative to the image represented by the input image data. The second generation unit 107 enhances the sharpness of the image represented by the input image data based on the input image data and the degree of sharpness reduction, and then generates, from the input image data representing the sharpness-enhanced image, the image data to be input to the image projection apparatus 2 (projection image data) by referring to the second color conversion LUT 108. The formed image data generated by the first generation unit 104 is output to the image forming apparatus 3 via the first output terminal 109, and the projection image data generated by the second generation unit 107 is output to the image projection apparatus 2 via the second output terminal 110.
<Processing Content of Image Processing Apparatus 1>
The processing content of the image processing apparatus 1 with the functional configuration above is described with the flowchart of FIG. 2B. S201 is the same as in the first embodiment and its description is omitted; S202′, S203′, and S204′ are described.
In S202′, the first generation unit 104 converts the resolution of the image represented by the input image data acquired by the acquisition unit 103, based on the number of pixels of the image output by the image forming apparatus 3. The known bicubic method is used for the conversion, but other methods such as the bilinear method may be used. Formed image data is then generated from the input image data representing the resolution-converted image by referring to the first color conversion LUT 105 held in advance; this LUT is the same as the second color conversion LUT 108 in the first embodiment, so its description is omitted. In S203′, the calculation unit 106 acquires the input image data and projection state acquired in S201 and the formed image data generated in S202′. From the acquired projection state, it calculates the reduction ratio S of the size of the image projected by the image projection apparatus 2 relative to the size of the image represented by the input image data, and, based on S, converts the resolution of the image represented by the formed image data and of the image represented by the input image data. After the resolution conversion, the degree of reduction in the sharpness of the image represented by the formed image data relative to the image represented by the input image data is calculated and sent to the second generation unit 107. In S204′, the second generation unit 107 enhances the sharpness of the image represented by the input image data resolution-converted in S2033′, based on the degree of sharpness reduction calculated in S203′; as in the first embodiment, the enhancement is performed by addition. Projection image data is then generated from the input image data representing the sharpness-enhanced image by referring to the second color conversion LUT 108 held in advance; this LUT is the same as the first color conversion LUT 105 in the first embodiment, so its description is omitted.
The detailed processing of S203′ is described below with the flowchart shown in FIG. 5B. In S2031′, the calculation unit 106 acquires the input image data, the formed image data, and the projection state, as in the first embodiment. In S2032′, the calculation unit 106 calculates the reduction ratio S from the acquired projection state (D and θ) and the projection state in which the image represented by the input image data and the projected image have equal size (projection distance D_0 and angle of view θ_0), using Equation (8) below.
S = D / D_0 + θ / θ_0   … (8)
In S2033′, the calculation unit 106 converts the resolution of the image represented by the input image data and of the image represented by the formed image data, based on the reduction ratio S calculated in S2032′ and the resolution R_p of the image projected by the image projection apparatus 2. For the resolution conversion of the image represented by the formed image data, the scaling factor F_f of the formed image data is calculated from the reduction ratio S, the resolution R_p, and the resolution R_f of the image formed by the image forming apparatus 3, using Equation (9) below.
F_f = (R_p / S) / R_f   … (9)
Based on the calculated scaling factor F_f, each pixel of the formed image data is resolution-converted into F_f pixels using the known nearest neighbor method; other methods such as the bicubic method may be used instead. Similarly, for the resolution conversion of the image represented by the input image data, the scaling factor F_in of the input image data is first calculated from the reduction ratio S, the resolution R_p, and the resolution R_in of the image represented by the input image data, using Equation (10) below.
F_in = (R_in / S) / R_p   … (10)
Based on the calculated scaling factor F_in, each pixel of the input image data is resolution-converted into F_in pixels using the known bicubic method. The resolutions R_f and R_p above are acquired by user input or directly by connecting the image projection apparatus 2 or the image forming apparatus 3 to the image processing apparatus 1. Both R_f and R_p are preferably the highest resolutions the respective devices can output.
In S2034′, the calculation unit 106 subtracts the pixel values of the formed image data from the pixel values of the input image data representing the image resolution-converted in S2033′, and takes the difference as the degree of sharpness reduction. The method of calculating the degree of sharpness reduction and generating an image in which the sharpness of the input image is enhanced is the same as in the first embodiment, so its description is omitted.
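A sketch of S2031′ through S2034′ under the assumptions that the images are single-channel NumPy arrays, that the resolutions are scalar pixel counts, and that scipy's zoom stands in for the nearest-neighbor and bicubic conversions; the shape guard is our addition, since Equations (9) and (10) need not give exactly matching sizes.

```python
import numpy as np
from scipy.ndimage import zoom

def sharpness_reduction(input_img, formed_img, D, theta, D0, theta0,
                        r_in, r_p, r_f):
    S = D / D0 + theta / theta0              # Equation (8)
    F_f = (r_p / S) / r_f                    # Equation (9)
    F_in = (r_in / S) / r_p                  # Equation (10)
    formed = zoom(formed_img.astype(np.float64), F_f, order=0)   # nearest
    source = zoom(input_img.astype(np.float64), F_in, order=3)   # bicubic
    h = min(formed.shape[0], source.shape[0])  # guard against off-by-one
    w = min(formed.shape[1], source.shape[1])
    # S2034': the difference is the degree of sharpness reduction
    return source[:h, :w] - formed[:h, :w]
```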
By performing the processing control described above, the reduction in the sharpness of the formed image relative to the image to be reproduced (the loss of contrast in fine detail) can be compensated by enhancing the sharpness of the projected image. As a result, the reduction in sharpness in the superimposed image, in which the projection image projected by the image projection apparatus 2 is superimposed on the formed image formed by the image forming apparatus 3, can be suppressed.
<Modification>
The present embodiment describes suppressing the sharpness reduction that occurs in the superimposed image of a projection image from the image projection apparatus 2 and a formed image from the image forming apparatus 3. However, the combination of devices for generating the superimposed image is not limited to this example. When a superimposed image is generated from the outputs of two image output devices with different expressible resolutions, any combination will do as long as the sharpness reduction in the output image of the lower-resolution device can be compensated by the output image of the higher-resolution device. For example, as shown in FIG. 21, several image projection apparatuses with different expressible resolutions may be used, with the projection image of the second image projection apparatus 2b, whose expressible resolution is higher, compensating the sharpness reduction of the projection image of the lower-resolution first image projection apparatus 2a.
In the embodiments above, an image projection apparatus that projects an image and an image forming apparatus that forms an image on a recording medium are used as the image output devices, but the processing described above is also applicable to other image output devices. Any combination of two or more image output devices capable of generating a superimposed image will do. For example, an image display device such as a liquid crystal display or an organic EL display may be used. As an example of generating a superimposed image with an image display device, the image forming apparatus forms an image on a light-transmitting recording medium such as an OHP sheet, and the OHP sheet bearing the image is placed on the image display device. The processing above thus also applies to superimposing images when one is a formed image produced by an image forming apparatus and the other is a display image shown by an image display device.
[Example 4]
The embodiments above compensate the sharpness reduction that occurs in the output image of the image output device whose expressible resolution becomes lower for the given superimposed-image size, using the output image of the device with the higher expressible resolution. However, the sharpness reduction of the superimposed image is not limited to that caused by its size; it also arises from the output characteristics of the image output devices. For example, an image formed by an image forming apparatus is known to lose sharpness relative to the input image because of colorant (ink) landing position errors, bleeding as the colorant fixes onto the recording medium (mechanical dot gain), optical blur (optical dot gain), and the like. The present embodiment describes suppressing the sharpness reduction due to the output characteristics of the image output device in addition to that due to the size of the superimposed image. The output characteristics of the image forming apparatus 3 (the sharpness characteristics of the formed image) are measured in advance, and a filter whose frequency-space characteristic is the inverse of the measured one (hereafter, a compensation filter) is created. The pre-created filter is convolved with the input image data, and the processing of the third embodiment is executed on the result, which suppresses the sharpness reduction due to the output characteristics of the image output device in addition to that due to the size of the superimposed image. The description below focuses on the parts that differ from the third embodiment.
<Functional Configuration and Processing Content of Image Processing Apparatus 1>
FIG. 22 shows the functional configuration of the image processing apparatus 1 in the fourth embodiment. In addition to the configuration of the third embodiment, a compensation filter 113 whose characteristic is the inverse of the output characteristic of the image forming apparatus 3 is provided in advance. The processing flow is the same as in the third embodiment except for S202′, so only S202′ is described. In S202′, the first generation unit 104 converts the resolution of the image represented by the input image data acquired by the acquisition unit 103, based on the number of pixels of the image output by the image forming apparatus 3, and then convolves the resolution-converted input image data with the compensation filter, provided in advance, matching the output characteristics of the image forming apparatus 3. The method of creating the compensation filter is described below.
<Method of Creating the Compensation Filter>
The compensation filter is created by printing, on a recording medium, a chart containing several sine wave pattern images of different frequencies and uniform pattern images, as shown in FIG. 23, and measuring the printed chart. The details follow.
First, the reflectance distribution of the output chart is acquired with a known image acquisition device (scanner, camera, microscope, or the like). From the acquired reflectance distribution, the frequency response fi(u), which is the output characteristic of the image forming apparatus 3, is calculated by Equation (11) below, where u is the frequency of the sine wave, Max(u) and Min(u) are the maximum and minimum reflectances of the image, which vary with u, and White and Black are the reflectances of the uniform patterns.
fi(u) = MTF(u) = C(u) / C′   … (11)
C(u) = (Max(u) − Min(u)) / (Max(u) + Min(u))
C′ = (White − Black) / (White + Black)
Next, the frequency characteristic Rx of the compensation filter is calculated from the acquired frequency response fi(u) by Equation (12) below.
Rx(u) = 1 / fi(u)   … (12)
A known inverse Fourier transform is applied to Rx, and the filter obtained by the inverse Fourier transform is used as the compensation filter. Note that compensating all the way up to the high-frequency components with this filter causes noise and brightness fluctuations, so at 4 cycles/mm and above, where sensitivity is low in terms of the known visual characteristics, the compensation strength (degree of enhancement) is preferably lower than below 4 cycles/mm.
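A sketch of the filter construction from Equations (11) and (12), assuming the printed chart has already been reduced to per-frequency extrema (reflectances in [0, 1], frequencies in cycles/mm, sorted ascending); the 0.25 damping factor above 4 cycles/mm and the kernel length are invented, since the patent only says the strength there should be lowered.

```python
import numpy as np

def compensation_filter(freqs, max_r, min_r, white, black, taps=33):
    c_u = (max_r - min_r) / (max_r + min_r)
    c_ref = (white - black) / (white + black)
    fi = c_u / c_ref                          # Equation (11): MTF(u)
    rx = 1.0 / fi                             # Equation (12)
    # Damp the boost at 4 cycles/mm and above (low visual sensitivity).
    rx = np.where(freqs >= 4.0, 1.0 + (rx - 1.0) * 0.25, rx)
    # Sample Rx onto a symmetric spectrum and take the inverse FFT to get
    # a real-valued, zero-phase filter kernel.
    grid = np.abs(np.fft.fftfreq(taps, d=1.0 / (2.0 * freqs[-1])))
    spectrum = np.interp(grid, freqs, rx)
    kernel = np.real(np.fft.ifft(spectrum))
    return np.fft.fftshift(kernel)
```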
By performing the processing control described above, input image data that compensates the sharpness reduction arising from the output characteristics involved in forming the formed image can be generated. As a result, the sharpness reduction due to the output characteristics of the image output device can be suppressed in addition to that due to the size of the superimposed image.
<Modification>
The present embodiment predicts in advance the sharpness reduction caused by the output characteristics of the image formed by the image forming apparatus 3 (colorant landing position errors, bleeding, optical blur, and so on) and enhances the input image. However, the devices whose output characteristics reduce sharpness are not limited to the image forming apparatus 3. In the image projection apparatus 2 as well, sharpness drops because of the optical blur of the projection lens, and in an image display device such as a display, optical blur arises in the liquid crystal panel and lowers sharpness. The processing of this embodiment also applies to the sharpness reduction matching the output characteristics of these image output devices. Although this embodiment applies compensation matching the output characteristics to only one of the several devices, a configuration that performs compensation matching the output characteristics of each device used to generate the superimposed image is preferable.
The present embodiment holds one filter with the inverse of the output characteristics of the image formed by the image forming apparatus 3 and uses it to enhance the sharpness of the input image. However, the output characteristics change with the printing conditions (recording medium, ink type, number of passes, carriage speed, scanning direction, halftone processing). It is therefore desirable to hold several inverse characteristic filters matching the printing conditions and switch among them accordingly. Alternatively, instead of holding several inverse characteristic filters, one inverse characteristic filter and a filter correction coefficient per printing condition may be provided, and the several inverse filters may be generated by switching the correction coefficient according to the printing condition, as in the sketch below.
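A sketch of the coefficient-based variant: one base inverse filter whose deviation from the identity kernel is scaled per printing condition; the condition table and gains are invented placeholders, and the kernel is assumed odd-sized and centered.

```python
import numpy as np

CONDITION_GAIN = {("glossy", 8): 1.0,   # (media, passes) -> gain
                  ("matte", 4): 1.3}    # hypothetical values

def filter_for_condition(base_kernel, media, passes):
    gain = CONDITION_GAIN.get((media, passes), 1.0)
    # Scale only the sharpening part so the DC response stays at 1.
    identity = np.zeros_like(base_kernel)
    identity[tuple(s // 2 for s in base_kernel.shape)] = 1.0
    return identity + (base_kernel - identity) * gain
```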
In the present embodiment, the sharpness enhancement with the inverse characteristic filter is applied to the input image data, but the order of the image processing including the inverse characteristic filter is not limited to this example. For instance, after the output image data for each image output device has been generated by the processing of the first embodiment, inverse characteristic filtering based on the output characteristics of each device may be applied.
The present embodiment calculates the output characteristics with Equation (11), but the calculation method is not limited to this example. When the average lightness of the output image changes with the sine wave frequency u, Equation (11) makes the response in dark regions excessive relative to bright regions. In such cases, Equation (13) below is used.
fi(u) = MTF(u) = (Max(u) − Min(u)) / (White − Black)   … (13)
Although Max(u), Min(u), White, and Black are described as reflectances, luminance, density, or device RGB values may be used. The chart for acquiring the output characteristics of the output image is not limited to the example of FIG. 23: a rectangular wave pattern may be used instead of the sine wave pattern as long as the response per frequency can be calculated. In that case, the CTF value calculated by applying Equation (11) to the rectangular wave pattern is used as the frequency characteristic fi(u), or, instead of using the CTF value as the frequency characteristic, it may be converted to an MTF value with the known Coltman correction formula.
Although the present embodiment creates and holds the inverse characteristic filter in advance, an input unit may be provided for the user to enter the reflectance distribution of the chart, and the inverse characteristic filter may be generated from the entered distribution.
The present invention is not limited to the embodiments above, and various changes and modifications are possible without departing from its spirit and scope. Accordingly, the following claims are attached to make the scope of the present invention public.
This application claims priority based on Japanese Patent Application No. 2016-229696 filed on November 28, 2016 and Japanese Patent Application No. 2017-161799 filed on August 25, 2017, the entire contents of which are incorporated herein.

Claims (19)

1. An image processing apparatus that generates image data to be output to a second image output device in order to generate one image by superimposing a first output image, output from a first image output device based on an input image, and a second output image, output from the second image output device based on the input image and having a higher resolution than the first output image, the apparatus comprising:
    first acquisition means for acquiring input image data representing the input image;
    second acquisition means for acquiring first output image data generated based on the input image data and output to the first image output device; and
    first generation means for generating, based on the input image data and the first output image data, second output image data to be output to the second image output device,
    wherein the sharpness of the image represented by the second output image data depends on the sharpness of the image represented by the first output image data.
2. The image processing apparatus according to claim 1, wherein at least one of the first image output device and the second image output device is a projector.
3. The image processing apparatus according to claim 2, wherein the first image output device is a projector and the second image output device is a printer.
4. The image processing apparatus according to claim 2, wherein the first image output device and the second image output device are both projectors.
5. The image processing apparatus according to claim 1, wherein one of the first image output device and the second image output device is a printer and the other is a display.
6. The image processing apparatus according to any one of claims 1 to 5, further comprising second generation means for generating the first output image data based on the input image data,
    wherein the second acquisition means acquires the first output image data generated by the second generation means.
7. The image processing apparatus according to claim 6, wherein the second generation means converts the resolution of the input image data according to the number of pixels of the image output by the first image output device and generates the first output image data based on the resolution-converted input image data.
8. The image processing apparatus according to any one of claims 1 to 7, further comprising calculation means for calculating, based on the input image data and the first output image data, a degree of reduction in the sharpness of the image represented by the first output image data relative to the sharpness of the image represented by the input image data,
    wherein the first generation means generates the second output image data based on the degree of sharpness reduction calculated by the calculation means.
9. The image processing apparatus according to claim 8, further comprising third acquisition means for acquiring a first magnification of the size of the first output image relative to the size of the image represented by the input image data,
    wherein the calculation means calculates, based on the first magnification, a second magnification for resolution-converting the input image data and the first output image data, resolution-converts the input image data and the first output image data based on the second magnification, and calculates the degree of sharpness reduction based on the resolution-converted input image data and first output image data.
10. The image processing apparatus according to claim 8 or 9, wherein the calculation means calculates, as the degree of sharpness reduction, the difference between the pixel values of the input image data and the pixel values of the first output image data.
11. The image processing apparatus according to claim 8 or 9, wherein the calculation means calculates the degree of sharpness reduction by applying, to the input image data, a high-pass filter corresponding to the second magnification.
12. The image processing apparatus according to any one of claims 8 to 11, wherein the first generation means enhances the sharpness of the image represented by the input image data based on the input image data and the degree of sharpness reduction, and generates the second output image data based on the input image data representing the sharpness-enhanced image.
13. The image processing apparatus according to claim 12, wherein the first generation means enhances the sharpness of the image represented by the input image data by adding a value representing the degree of sharpness reduction to the pixel values of the input image data.
14. The image processing apparatus according to claim 12, wherein the first generation means enhances the sharpness of the image represented by the input image data by performing γ correction on the input image data using a γ value corresponding to the degree of sharpness reduction.
15. The image processing apparatus according to claim 12, wherein the first generation means enhances the sharpness of the image represented by the input image data by performing edge enhancement on the input image data using a filter corresponding to the degree of sharpness reduction.
16. The image processing apparatus according to any one of claims 1 to 15, further comprising fourth acquisition means for acquiring the intensity of ambient light in the environment in which the images are superimposed,
    wherein the first generation means generates the second output image data further based on the intensity of the ambient light.
17. The image processing apparatus according to any one of claims 1 to 16, further comprising fifth acquisition means for acquiring an output characteristic, in outputting an image, of at least one of the first image output device and the second image output device,
    wherein the first acquisition means acquires the input image data corrected using a filter corresponding to the output characteristic.
18. A program for causing a computer to function as each means of the image processing apparatus according to any one of claims 1 to 17.
  19.  An image processing method for generating image data to be output to a second image output device in order to generate a single image by superimposing a first output image, output from a first image output device based on an input image, and a second output image, output from the second image output device based on the input image and having a higher resolution than the first output image, the method comprising:
     a first acquisition step of acquiring input image data representing the input image;
     a second acquisition step of acquiring first output image data generated based on the input image data and to be output to the first image output device; and
     a first generation step of generating, based on the input image data and the first output image data, second output image data to be output to the second image output device,
     wherein the sharpness of the image represented by the second output image data corresponds to the sharpness of the image represented by the first output image data.
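
Claims 10 and 11 recite two alternative computations of the degree of sharpness reduction caused by the low-resolution first output image. The sketch below, in Python/NumPy, is purely illustrative and not the published implementation: the block-average model of the first image output device, the box-blur high-pass, and all function names are assumptions, and single-channel images with values in [0, 1] are assumed throughout.

    import numpy as np

    def simulate_first_output(input_img, scale):
        # Crude stand-in for the first (low-resolution) image output device:
        # block-average down by `scale`, then replicate back up.
        # Assumes both image dimensions are divisible by `scale`.
        h, w = input_img.shape
        low = input_img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
        return np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

    def box_blur(img, k):
        # Separable box blur with odd kernel width k, edge-padded so the
        # output has the same shape as the input.
        pad = k // 2
        padded = np.pad(img, pad, mode='edge')
        kernel = np.ones(k) / k
        rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, rows)

    def sharpness_loss_difference(input_img, first_output_img):
        # Claim 10: per-pixel difference between the input image data and
        # the first output image data.
        return input_img - first_output_img

    def sharpness_loss_highpass(input_img, scale):
        # Claim 11: a high-pass filter whose support follows the second
        # magnification; approximated here as input minus a box blur of
        # width (2 * scale + 1) -- the specific filter is an assumption.
        return input_img - box_blur(input_img, 2 * scale + 1)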
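
Claims 13 to 15 then recite three ways of using that reduction degree to pre-enhance the input image before it is handed to the second, higher-resolution device (claim 12's generation step). Continuing the sketch above (same imports and helpers); the mapping from the reduction degree to a γ value and the edge-enhancement gain are illustrative assumptions:

    def enhance_additive(input_img, loss):
        # Claim 13: add the value representing the sharpness reduction
        # back onto the pixel values.
        return np.clip(input_img + loss, 0.0, 1.0)

    def enhance_gamma(input_img, loss):
        # Claim 14: gamma correction with a gamma value tied to the
        # reduction degree (this particular mapping is an assumption).
        gamma = 1.0 / (1.0 + np.abs(loss).mean())
        return np.clip(input_img, 0.0, 1.0) ** gamma

    def enhance_edges(input_img, loss, scale):
        # Claim 15: edge enhancement with a filter whose strength follows
        # the reduction degree (unsharp-mask form; the gain is an assumption).
        gain = 1.0 + np.abs(loss).mean()
        detail = input_img - box_blur(input_img, 2 * scale + 1)
        return np.clip(input_img + gain * detail, 0.0, 1.0)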
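
Claim 17's pre-correction of the input image data against a device's output characteristic can be sketched the same way, here modelling the characteristic as a box blur and the corrective filter as a single unsharp-mask pass (both modelling choices are assumptions):

    def precorrect_for_device(input_img, device_blur_width):
        # Claim 17: correct the input image data with a filter matched to
        # the output characteristic of one of the image output devices.
        blurred = box_blur(input_img, device_blur_width)
        return np.clip(2.0 * input_img - blurred, 0.0, 1.0)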
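
Finally, the method of claim 19 ties the steps together: acquire the input image data, acquire (here: simulate) the first output image data, and generate second output image data whose sharpness tracks that of the first output image. The ambient-light term of claim 16 is folded in as a scalar weight; the claims do not specify the form or direction of that dependence, so the weighting below is a placeholder:

    def generate_second_output(input_img, scale=4, ambient=0.0):
        # First acquisition step: `input_img` is the input image data.
        # Second acquisition step: first output image data (simulated here).
        first_output = simulate_first_output(input_img, scale)
        # Degree of sharpness reduction (claim 10 variant).
        loss = sharpness_loss_difference(input_img, first_output)
        # First generation step: second output data whose sharpness follows
        # that of the first output image; `ambient` in [0, 1] is a
        # placeholder weight for claim 16's ambient-light intensity.
        strength = 1.0 + ambient
        return np.clip(input_img + strength * loss, 0.0, 1.0)

    if __name__ == '__main__':
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        out = generate_second_output(img, scale=4, ambient=0.2)
        print(out.shape, out.min(), out.max())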
PCT/JP2017/041739 2016-11-28 2017-11-21 Image processing device, image processing method, and program WO2018097114A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/396,039 US10868938B2 (en) 2016-11-28 2019-04-26 Image processing apparatus, image processing method for suppressing decrease in sharpness of superimpose image, and storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2016229696 2016-11-28
JP2016-229696 2016-11-28
JP2017161799A JP2018093472A (en) 2016-11-28 2017-08-25 Image processing apparatus, image processing method and program
JP2017-161799 2017-08-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/396,039 Continuation US10868938B2 (en) 2016-11-28 2019-04-26 Image processing apparatus, image processing method for suppressing decrease in sharpness of superimpose image, and storage medium

Publications (1)

Publication Number Publication Date
WO2018097114A1 (en)

Family

ID=62195080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/041739 WO2018097114A1 (en) 2016-11-28 2017-11-21 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2018097114A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008026879A (en) * 2006-06-19 2008-02-07 Seiko Epson Corp Display system and method
JP2008122558A (en) * 2006-11-10 2008-05-29 Seiko Epson Corp Display device
JP2010103863A (en) * 2008-10-24 2010-05-06 Canon Inc Image processing system, image processing apparatus, and image processing method

Similar Documents

Publication Publication Date Title
JP4902837B2 (en) How to convert to monochrome image
US11146738B2 (en) Image processing apparatus, control method, and non-transitory computer-readable storage medium
JP7117915B2 (en) Image processing device, control method, and program
JP2017092872A (en) Image processing apparatus and image processing method
JP6895821B2 (en) Image processing device and image processing method
JP5257108B2 (en) Projector, projection system, image display method, and image display program
JP5451313B2 (en) Image processing apparatus, image processing method, and program
US8000554B2 (en) Automatic dynamic range adjustment in digital imaging
KR20200002683A (en) Image processing apparatus, image processing method, and computer program
JP2019029826A (en) Image processing apparatus, image processing method, and program
US20190068832A1 (en) Image processing apparatus, method thereof, and image forming apparatus
JP6703788B2 (en) Image processing apparatus and image processing method
JP2019204439A (en) Image processing device, image processing method, and program
JP5426953B2 (en) Image processing apparatus and method
JP2006120030A (en) Contrast adjusting device and contrast adjusting method
US20130215436A1 (en) Image processing device and image processing method
US9218552B2 (en) Image processing apparatus and image processing method
JP7296745B2 (en) Image processing device, image processing method, and program
US9875524B2 (en) Image processing apparatus, image processing method, and computer-readable storage medium
WO2018097114A1 (en) Image processing device, image processing method, and program
JP2019205103A (en) Information processing apparatus, information processing method, and program
JP5293923B2 (en) Image processing method and apparatus, image display apparatus and program
US10868938B2 (en) Image processing apparatus, image processing method for suppressing decrease in sharpness of superimpose image, and storage medium
JP2014011505A (en) Image processing apparatus and image processing method, print manufacturing apparatus and print manufacturing method, image processing program, and printed material
KR102470242B1 (en) Image processing device, image processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17874811

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17874811

Country of ref document: EP

Kind code of ref document: A1