WO2016039301A1 - 画像処理装置および画像処理方法 - Google Patents
画像処理装置および画像処理方法 Download PDFInfo
- Publication number
- WO2016039301A1 (PCT/JP2015/075373)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- transparency
- input image
- value
- visibility
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/37—Details of the operation on graphic patterns
- G09G5/377—Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/39—Control of the bit-mapped memory
- G09G5/395—Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
- G09G5/397—Arrangements specially adapted for transferring the contents of two or more bit-mapped memories to the screen simultaneously, e.g. for mixing or overlay
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/66—Transforming electric information into light information
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/066—Adjustment of display parameters for control of contrast
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/16—Calculation or use of calculated indices related to luminance levels in display data
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G5/026—Control of mixing and/or overlay of colours in general
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/10—Intensity circuits
Definitions
- Some embodiments according to the present invention relate to an image processing apparatus and an image processing method.
- In recent years, research and development of wearable displays such as head-mounted displays (HMDs) and glasses-type wearable displays has been progressing. In such devices, a virtual image showing a virtual object or information is often superimposed on a real scene (background) that is directly viewed through a display, or on a background image displayed on the display. However, if the virtual image completely blocks the background or an obstacle in the background image, danger may occur during walking. Therefore, the virtual image is often made translucent and superimposed on the background or background image.
- However, when the virtual image is superimposed semi-transparently, it is difficult to set the visibility of the drawn virtual object appropriately. For example, when a background image and a virtual image are mixed at a fixed ratio by alpha blending, the visibility of the virtual image is significantly reduced when the brightness of the background image and that of the virtual image are similar, or when the texture of the background image has high contrast.
- In this connection, Patent Document 1 discloses an image display device capable of see-through observation of the outside world, the device comprising means for displaying an image and means for changing the image display mode according to an image obtained by an imaging operation that receives an optical image of a subject.
- Some aspects of the present invention have been made in view of the above-described problems, and an object thereof is to provide an image processing apparatus and an image processing method that allow a user to suitably visually recognize a virtual image.
- An image processing apparatus according to one aspect of the present invention includes: input means for receiving an input of a first input image and a second input image in which a virtual image is superimposed on the first input image with a first transparency; computing means for calculating a value indicating visibility at a second transparency, lower than the first transparency, by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding the first input image multiplied by the second transparency and the second input image multiplied by a value obtained by subtracting the second transparency from 1; and updating means for updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value. The computing means calculates the value indicating the visibility again using the updated second transparency.
- An image processing apparatus according to another aspect of the present invention includes: input means for receiving an input of a first input image, a second input image in which a virtual image is superimposed on the first input image with a first transparency, and a third input image in which a virtual image is superimposed on the first input image with maximum brightness; computing means for calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding the second input image multiplied by the second transparency and the third input image multiplied by a value obtained by subtracting the second transparency from 1; and updating means for updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value. The computing means calculates the value indicating the visibility again using the updated second transparency.
- An image processing method according to one aspect of the present invention includes: a step of receiving an input of a first input image and a second input image in which a virtual image is superimposed on the first input image with a first transparency; a step of calculating a value indicating visibility at a second transparency, lower than the first transparency, by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding the first input image multiplied by the second transparency and the second input image multiplied by a value obtained by subtracting the second transparency from 1; and a step of updating the second transparency according to a comparison result between the value indicating the visibility and a target value. The value indicating the visibility is calculated again using the updated second transparency.
- An image processing method according to another aspect of the present invention includes: a step of receiving an input of a first input image, a second input image in which a virtual image is superimposed on the first input image with a first transparency, and a third input image in which a virtual image is superimposed on the first input image with maximum brightness; a step of calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding the second input image multiplied by the second transparency and the third input image multiplied by a value obtained by subtracting the second transparency from 1; and a step of updating the second transparency according to a comparison result between the value indicating the visibility and a target value. The value indicating the visibility is calculated again using the updated second transparency.
- A program according to one aspect of the present invention causes a computer to execute: a process of receiving an input of a first input image and a second input image in which a virtual image is superimposed on the first input image with a first transparency; a process of calculating a value indicating visibility at a second transparency, lower than the first transparency, by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding the first input image multiplied by the second transparency and the second input image multiplied by a value obtained by subtracting the second transparency from 1; and a process of updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value. The value indicating the visibility is calculated again using the updated second transparency.
- A program according to another aspect of the present invention causes a computer to execute: a process of receiving an input of a first input image, a second input image in which a virtual image is superimposed on the first input image with a first transparency, and a third input image in which a virtual image is superimposed on the first input image with maximum brightness; a process of calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding the second input image multiplied by the second transparency and the third input image multiplied by a value obtained by subtracting the second transparency from 1; and a process of updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value. The value indicating the visibility is calculated again using the updated second transparency.
- In the present invention, "unit", "means", "apparatus", and "system" do not simply mean physical means; they also include cases where the functions of the "unit", "means", "apparatus", or "system" are realized by software. Further, the functions of one "unit", "means", "apparatus", or "system" may be realized by two or more physical means or devices, and the functions of two or more "units", "means", "apparatuses", or "systems" may be realized by a single physical means or device.
- FIG. 1 is a block diagram illustrating a configuration example of an image processing apparatus according to the first embodiment. FIG. 2 is a block diagram illustrating a specific example of a hardware configuration in which the image processing apparatus shown in FIG. 1 can be implemented. FIG. 3 is a block diagram illustrating a configuration example of an image processing apparatus according to the second embodiment.
- In the following description, the background image is described as a real image and the virtual image as an image generated by, for example, CG (Computer Graphics), but the invention is not necessarily limited to this. For example, a background image may be generated by CG, and an image obtained by photographing a real landscape may be used as a virtual image.
- In the second embodiment, processing for optical see-through will be described, in which the user wears an optical transmission type device, such as glasses or a head-mounted display having an image display function, the virtual image is displayed on that device, and the background is directly viewed by the user through the optical transmission.
- In the following description, image processing for video see-through and optical see-through will be mainly described; however, the present invention is not necessarily limited to these, and similar processing may be applied to image processing in general.
- Alpha blending is widely known as a technique for synthesizing two different images. In alpha blending, the transparency of a virtual object can be changed by adjusting the weight (α value) of one image with respect to the other; the α value is a parameter indicating the transparency. However, the magnitude of the α value does not necessarily correspond to the visibility of the blended virtual image: the visibility greatly depends on the brightness and texture of the background image and the virtual image. That is, even when images are synthesized with a constant α value, the visibility of the virtual image varies greatly depending on the particular background image and virtual image.
- Therefore, in the present embodiment, the α value used at the time of synthesis is optimized using a model that can predict visibility. Specifically, a visibility prediction model originally developed to evaluate the visibility of noise generated by image compression and the like is partially modified and used. In the visibility prediction model, the neural response of the brain to each of the original image and the distorted image is simulated, and the difference between the responses is taken as the magnitude of the distortion's visibility. The simulation of the neural response is based on a computational model of the primary visual cortex (V1). In the present embodiment, the two input images are replaced with the background image before the virtual image is synthesized and the image after synthesis, and the visibility of the virtual image is given as the difference in the neural responses to these images. The reason the existing visibility prediction model for image compression noise is partially modified is twofold: to reduce the calculation cost so as to enable real-time computation, and to obtain satisfactory results for this application. In the present embodiment, the weight (α value) used when compositing the virtual image is optimized for each pixel so that the visibility of the virtual image approaches a predetermined, arbitrary level. This makes it possible to synthesize images with constant, uniform visibility regardless of the particular background image and virtual image.
- As shown in FIG. 1, the image processing apparatus 100 includes an image input unit 110, a color space conversion unit 120, a calculation unit 130, an α value update unit 140, and an image output unit 150.
- The image input unit 110 receives a background image and a composite image in which a virtual image is superimposed on the background image with an α value of 1, that is, in a non-transparent state. Here, the background image and the composite image are expressed in the RGB color space. Note that the α value used when generating the composite image need not be exactly 1 as long as it is sufficiently large.
- The color space conversion unit 120 converts the input background image and composite image from the RGB color space to the CIE L*a*b* color space. This is because the L*a*b* color space is designed to maintain perceptual uniformity and is therefore preferable to representations in color spaces such as YUV. In the present embodiment, the color space conversion unit 120 outputs only the L* channel images of the background image and the composite image to the calculation unit 130; the a* and b* channels, which are isoluminant color contrast components, are ignored. This is because sensitivity to color contrast is lower than sensitivity to luminance, and the influence of color contrast is considered small, at least under conditions where luminance contrast exists. Ignoring the a* and b* channels reduces the overall amount of calculation, so the α value can be computed in real time. Depending on the setting, however, the color space conversion unit 120 may also output the a* and b* channels to the calculation unit 130.
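A minimal sketch of this conversion step, assuming scikit-image for the color transform (the patent names no library, and the helper name is hypothetical):

```python
from skimage import color

def to_lightness(rgb):
    """Convert an RGB image (H x W x 3, floats in [0, 1]) to CIE L*a*b*
    and keep only the L* channel, as the conversion unit does by default."""
    lab = color.rgb2lab(rgb)  # L* in [0, 100]; a*, b* are the color channels
    return lab[..., 0]        # drop a* and b* (isoluminant color contrast)
```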
- For example, consider a case where the luminance components (L* channel) of the background image and the virtual image are almost the same (the luminance contrast is small) but the color components (a* and b* channels) differ greatly (the color contrast is large). In this case, the visibility calculated from the luminance channel alone is close to zero, and a correct visibility estimate can only be obtained by also considering the color contrast. Accordingly, the color space conversion unit 120 may switch whether to output the color components to the calculation unit 130 according to, for example, the magnitude of the luminance contrast and the color contrast of the background image.
- The calculation unit 130 performs the visibility prediction model processing, which reproduces the visual processing in the primary visual cortex (V1). More specifically, the calculation unit 130 simulates the responses of neurons corresponding to the subband components of each image, calculates, for each pixel, the difference between the responses of the same neuron to the two input images, and pools the differences across subbands. The pooled value is output as the value indicating visibility.
- Hereinafter, specific processing of the calculation unit 130 will be described. The calculation unit 130 first applies a QMF (Quadrature Mirror Filter) wavelet transform to the L* channel images of the input background image and composite image. As a result, each input image is linearly decomposed into a plurality of subbands: for each pixel, a vector w consisting of 12 wavelet coefficients (4 spatial frequency levels × 3 orientation components) and one DC component are obtained. When the a* and b* channels are also considered, the calculation unit 130 performs the QMF wavelet transform on those channels as well; the same applies to whether the a* and b* channels are considered in the subsequent processing related to the contrast sensitivity characteristic. In the following, the case where only the L* channel is considered is mainly described.
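As an illustration of this decomposition step, the following sketch uses PyWavelets as a stand-in for the QMF filter bank (the patent does not specify a library or the exact filter; the wavelet choice `db2` and the function name are assumptions):

```python
import pywt

def decompose(l_channel):
    """4-level 2D wavelet transform: one DC (approximation) band plus
    4 levels x 3 orientations = 12 detail subbands per image."""
    coeffs = pywt.wavedec2(l_channel, wavelet='db2', level=4)
    dc = coeffs[0]        # lowest-frequency approximation (the DC component)
    details = coeffs[1:]  # 4 tuples of (horizontal, vertical, diagonal) bands
    return dc, details
```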
- Next, the calculation unit 130 calculates the wavelet coefficients and DC component of the image synthesized with the current α value by the following formula, using the initial value of the α value.
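The formula itself is an image in the source and is not reproduced. Based on the linearity of the wavelet transform and the claim language, a plausible reconstruction (an assumption, not the patent's verbatim equation) is:

$$\tilde{c}_i = \alpha\, c_i^{(1)} + (1-\alpha)\, c_i^{(2)} \qquad (1)$$

where $c_i^{(1)}$ and $c_i^{(2)}$ are the i-th wavelet coefficients (or the DC components) of the background image and the composite image, respectively, and $\alpha$ is the current transparency.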
- Next, the calculation unit 130 reproduces the contrast sensitivity function (CSF: Contrast Sensitivity Function) by multiplying the twelve coefficients contained in the wavelet coefficient vector w by a linear gain S according to the following equation. Here, c_i and w_i denote the wavelet coefficients of the i-th filter after and before the multiplication, respectively. S_i is the gain for the i-th filter and is obtained by the following function, in which o = 1, 2, 3 indexes the three orientation components, a_o represents the maximum gain for orientation o, s is the width of the function, and γ is a constant that determines the sharpness of attenuation.
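Equations (2) and (3) are images in the source. A hedged reconstruction consistent with the definitions above is given below; the exact functional form of the gain is an assumption:

$$c_i = S_i\, w_i \qquad (2)$$

$$S_i = a_o \exp\!\left(-\left(\frac{f_i}{s}\right)^{\gamma}\right) \qquad (3)$$

where $f_i$ is the spatial frequency level of the i-th filter, $a_o$ is the maximum gain for its orientation $o \in \{1, 2, 3\}$, $s$ is the width of the function, and $\gamma$ controls the sharpness of attenuation.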
- The calculation unit 130 then applies a divisive normalization process to the wavelet coefficients processed as described above, and obtains a contrast response r. Here, σ_i is a saturation constant for the i-th filter; it determines the point at which response saturation begins, in addition to preventing division by zero. H_ik is a weighting function that determines the strength of the suppression exerted on the i-th filter by the k-th filter, and γ is a constant.
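The normalization formula is an image in the source; the following is a standard divisive-normalization form matching the stated roles of $\sigma_i$, $H_{ik}$, and $\gamma$ (a reconstruction, not the verbatim equation):

$$r_i = \frac{\operatorname{sign}(c_i)\,\lvert c_i \rvert^{\gamma}}{\sigma_i^{\gamma} + \sum_{k} H_{ik}\,\lvert c_k \rvert^{\gamma}} \qquad (4)$$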
- The weighting function H_ik basically gives a larger weight (stronger suppression) the closer the features of the k-th filter are to those of the i-th filter, and is defined by the following equation. Here, (e, o) and (e′, o′) represent the spatial frequency level and orientation to which the i-th and k-th filters are selective, respectively. K is a constant determined so that the sum of H_ik over all k equals 1, and Δe and Δo are constants that determine the spread of the weights in the spatial frequency dimension and the orientation dimension, respectively.
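A plausible reconstruction of the weighting function, assuming Gaussian falloff in the spatial frequency and orientation dimensions as described (the exact form is not shown in the source):

$$H_{ik} = K \exp\!\left(-\frac{(e-e')^2}{2\,\Delta e^{2}} - \frac{(o-o')^2}{2\,\Delta o^{2}}\right) \qquad (5)$$

with $K$ chosen so that $\sum_k H_{ik} = 1$.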
- Note that this weighting function H_ik does not take into account influence from pixels near the pixel being processed. This is because accessing the surrounding pixels each time in order to determine the visibility of each pixel is computationally expensive, and the influence of neighboring pixels is not considered particularly large for reproducing the contrast response. Not considering neighboring pixels in the divisive normalization process allows the α value to be calculated in real time. Of course, if sufficient machine power is available, a weighting function H_ik that considers the influence of neighboring pixels may be used.
- The calculation unit 130 also calculates a response r_L based on local brightness for the DC component obtained as a result of the wavelet transform, by the following formula, where w_L represents the coefficient of the low-pass filter and ζ is a linear gain applied to the coefficient.
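The formula is an image in the source; given that $\zeta$ is described as a linear gain, the simplest consistent reconstruction (an assumption) is:

$$r_L = \zeta\, w_L \qquad (6)$$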
- The calculation unit 130 similarly uses expressions (2), (4), and (6) to calculate, for each pixel, the contrast response and the response to local brightness for the wavelet coefficients and DC component of the input background image.
- Subsequently, the calculation unit 130 pools, for each pixel, the differences between the responses to the composite image based on the current α value and the responses to the background image according to the following formula, and uses the result as the current visibility value. Here, d_xy represents the pooled response difference at pixel (x, y), and r_L and r_L′ represent the responses to local brightness for the two images.
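The pooling formula is an image in the source. A Minkowski pooling of the response differences is the standard form in such visibility models and matches the description (the pooling exponent $\beta$ is an assumption; it is not named in the text):

$$d_{xy} = \left(\sum_{i} \lvert r_i - r_i' \rvert^{\beta} + \lvert r_L - r_L' \rvert^{\beta}\right)^{1/\beta} \qquad (7)$$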
- The α value update unit 140 searches for the optimum α value for each pixel in order to realize the target visibility (target value). In the present embodiment, the search is performed by a binary search, and the target visibility value can be set individually by the user. More specifically, the α value update unit 140 compares the value indicating visibility at the current α value, calculated by the calculation unit 130, with the value indicating the target visibility (target value): if the current visibility is higher than the target, the α value is decreased by the step size; if it is lower, the α value is increased by the step size. After updating the α value in this way, the process of outputting the updated α value to the calculation unit 130 and recalculating the value indicating the visibility of the composite image is repeated for a predetermined number of steps (for example, 8 steps). The initial value of the step size can be set to, for example, 0.25, and the step size is halved each time a step proceeds. The α value obtained after the predetermined number of steps is taken as the optimum value resulting from the search.
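A sketch of this search loop under the stated parameters (8 steps, initial step size 0.25, halved each step). The `visibility` callable stands in for the calculation unit 130, and the sign convention assumes α weights the virtual image so that a higher α yields higher visibility, consistent with the update rule above; all names here are hypothetical:

```python
import numpy as np

def optimize_alpha(visibility, target, n_steps=8, init_step=0.25, alpha0=0.5):
    """Per-pixel binary search for the alpha map.

    visibility(alpha) -> per-pixel visibility values for the given alpha map;
    target is the per-pixel (or scalar broadcast) target visibility.
    """
    alpha = np.full(np.shape(target), alpha0, dtype=np.float64)
    step = init_step
    for _ in range(n_steps):
        v = visibility(alpha)
        # above target -> lower alpha; below target -> raise alpha
        alpha = np.where(v > target, alpha - step, alpha + step)
        alpha = np.clip(alpha, 0.0, 1.0)
        step *= 0.5  # step size halves each iteration
    return alpha
```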
- In the processing described above, the α value is optimized for each pixel. If these values are used as they are, discontinuities may occur between pixels and produce unnatural results. Therefore, the α value update unit 140 may smooth the per-pixel variation of the α value by averaging the α values of the pixels within a square window of a certain size, as in the sketch below.
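The smoothing step could look like the following, assuming SciPy's uniform (box) filter as the square-window average; the window size is a free parameter not fixed by the patent:

```python
from scipy.ndimage import uniform_filter

def smooth_alpha(alpha, window=9):
    """Average the per-pixel alpha values over a window x window square."""
    return uniform_filter(alpha, size=window)
```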
- Finally, the image output unit 150 blends the background image and the composite image using the calculated optimal α values. Blending is performed in accordance with equation (1) above in the L*a*b* color space, and after blending the image is converted back to the RGB color space and output.
- As shown in FIG. 2, the image processing apparatus 100 includes a CPU (Central Processing Unit) 201, a memory 203, a GPU (Graphics Processing Unit) 205, a display device 207, an input interface (I/F) unit 209, a storage device 211, a communication I/F unit 213, and a data I/F unit 215.
- The CPU 201 controls various processes in the image processing apparatus 100. For example, the CPU 201 can pass the camera image (background image) stored in the memory 203, which is the source of the input image for the processing described with reference to FIG. 1, to the GPU 205, and instruct the GPU 205 to perform the processing related to generating the composite image and optimizing the α values for it. The processing performed by the CPU 201 can be realized as a program that is temporarily stored in the memory 203 and mainly runs on the CPU 201.
- The memory 203 is a storage medium such as a RAM (Random Access Memory). The memory 203 temporarily stores the program code of programs executed by the CPU 201 and the data, such as the background image, necessary for executing those programs.
- The GPU 205 is a processor provided separately from the CPU for performing image processing. The processing performed by the image input unit 110, color space conversion unit 120, calculation unit 130, α value update unit 140, and image output unit 150 illustrated in FIG. 1 can all be realized on the GPU 205. In particular, if the programs related to the calculation unit 130 and the α value update unit 140 are realized as pixel shaders, which can be programmed in units of pixels, these processes can be executed in parallel per pixel. Thereby, the α value estimation performed by the image processing apparatus 100 can be realized in real time.
- The display device 207 is a device for displaying the composite image processed by the GPU 205. Specific examples of the display device 207 include an HMD, a liquid crystal display, and an organic EL (Electro-Luminescence) display. The display device 207 may be provided outside the image processing apparatus 100; in that case, it is connected to the image processing apparatus 100 via, for example, a display cable.
- The input I/F unit 209 is a device for receiving input from the user. For example, the visibility target value required by the α value update unit 140 can be input by the user through the input I/F unit 209. Specific examples of the input I/F unit 209 include a keyboard, a mouse, and a touch panel. The input I/F unit 209 may be connected to the image processing apparatus 100 via a communication interface such as USB (Universal Serial Bus).
- The storage device 211 is a non-volatile storage medium such as a hard disk or flash memory. The storage device 211 can store an operating system, various programs for realizing the image processing functions using the GPU 205, the virtual images to be synthesized, and the like. Programs and data stored in the storage device 211 are loaded into the memory 203 as necessary and referenced by the CPU 201.
- The communication I/F unit 213 is a device for performing wired or wireless data communication with devices external to the image processing apparatus 100, for example, a photographing device (camera) (not shown) attached to the image processing apparatus 100. In this case, the communication I/F unit 213 receives input of the image frames constituting the video used as the background image.
- The data I/F unit 215 is a device for inputting data from outside the image processing apparatus 100. A specific example of the data I/F unit 215 is a drive device for reading data stored in various storage media. The data I/F unit 215 may be provided outside the image processing apparatus 100; in that case, it is connected to the image processing apparatus 100 via an interface such as USB.
- In the above-described embodiment, a suitable α value is calculated by estimating visibility mainly in consideration of the luminance contrast (L* channel) alone, but the present invention is not limited to this. As described above, when there is almost no difference in luminance (L* channel) between the composite image and the background image, that is, when the luminance contrast is small, visibility can be estimated more appropriately by considering the color contrast (a* and b* channels). In such a case, it is conceivable to estimate visibility in consideration of all three channels L*, a*, and b* and to calculate a suitable α value. More specifically, for example, depending on whether the difference in the luminance channel falls below a certain threshold, the calculation unit 130 may switch between performing the calculation with the L* channel alone and performing the calculation including the a* and b* channels.
- The necessity of the color components also changes depending on the presence or absence of contrast masking. Specifically, in a situation where the contrast masking effect from the background image is strong, visibility tends to be evaluated as low if it is calculated using only the luminance channel. In such a situation, if the color of the virtual image differs significantly from that of the background image, the visibility obtained from the color channels increases; considering all of the L*, a*, and b* channels therefore allows visibility to be evaluated suitably.
- The technique using the image processing apparatus according to the present embodiment (hereinafter referred to as "Visibility-enhanced blending") optimizes the brightness of the virtual image for each pixel so that the visibility of the displayed virtual image is equal to or higher than a predetermined level.
- As shown in FIG. 3, the image processing apparatus 300 includes an image input unit 310, a color space conversion unit 320, a calculation unit 330, an α value update unit 340, and an image output unit 350.
- The image input unit 310 receives inputs of (A) a background image capturing the background visually recognized by the user, (B) a simulation result image predicting the video the user would see when the visibility improvement processing is not performed, and (C) an image in which the virtual image is displayed at maximum brightness.
- Of these, the input image (B) is generated by adding the virtual object multiplied by α to the background image multiplied by (1 − α). The input image (C) is generated by maximizing the value of the L* channel in the CIE L*a*b* color space.
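A hedged sketch of how inputs (B) and (C) might be produced from a background image, a virtual image, and a mask of the virtual object (all helper names, and the use of scikit-image for the L*a*b* round trip, are assumptions):

```python
import numpy as np
from skimage import color

def make_inputs(background_rgb, virtual_rgb, mask, alpha):
    """(B): virtual object weighted by alpha added to background weighted
    by (1 - alpha); (C): virtual image with its L* channel set to maximum."""
    a = (alpha * mask)[..., None]                         # per-pixel weight
    b_img = (1.0 - a) * background_rgb + a * virtual_rgb  # input (B)
    lab = color.rgb2lab(virtual_rgb)
    lab[..., 0] = np.where(mask, 100.0, lab[..., 0])      # L* maximum is 100
    c_img = color.lab2rgb(lab)                            # input (C)
    return b_img, c_img
```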
- The processing performed by the color space conversion unit 320 and the α value update unit 340 is the same as that of the corresponding units in the first embodiment, and description thereof is therefore omitted.
- The processing performed by the calculation unit 330 is basically the same as that of the calculation unit 130 in the first embodiment; however, the wavelet coefficients and DC components that are the targets of the CSF simulation differ. In the first embodiment, I_1 and I_2 in expression (1) were the wavelet coefficients and DC components of the background image and the composite image, whereas in the second embodiment, I_1 and I_2 are the wavelet coefficients and DC components obtained by wavelet-transforming the input images (C) and (B), respectively, in the L*a*b* color space.
- The image output unit 350 blends, in the L*a*b* color space, the virtual image included in (B) and the virtual image at maximum luminance (the virtual image included in the input image (C)), using the α value optimized by the α value update unit 340 through a search of a predetermined number of steps, for example 8 steps, and outputs the result converted back to the RGB color space as the final output. The image output unit 350 then displays the virtual image with the optimized α values on the display device.
- An example of the hardware configuration when the above-described image processing apparatus 300 is realized by a computer is the same as that of the first embodiment, except that an optical see-through device can be used as the display device 207. According to the second embodiment, the visibility of the virtual image can be raised to an arbitrary level without depending on the brightness or texture of the background.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Image Processing (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
In the following description, the case of image processing for video see-through and optical see-through will be mainly described; however, the invention is not necessarily limited to these, and applying similar processing to image processing in general is also conceivable.
As described above, the first embodiment describes an image processing apparatus that performs video see-through, in which a virtual image is superimposed on a captured background image. The technique using this image processing apparatus (hereinafter also referred to as Visibility-based blending) makes it possible to always obtain an arbitrary level of visibility without depending on the brightness or texture of the background image.
Hereinafter, an example of a hardware configuration for realizing the above-described image processing apparatus 100 by a computer will be described with reference to FIG. 2. As shown in FIG. 2, the image processing apparatus 100 includes a CPU (Central Processing Unit) 201, a memory 203, a GPU (Graphics Processing Unit) 205, a display device 207, an input interface (I/F) unit 209, a storage device 211, a communication I/F unit 213, and a data I/F unit 215.
Using the Visibility-based blending according to this embodiment makes it possible to present a virtual image with uniform visibility matching the set visibility, without depending on the brightness, texture, or the like of the background image.
In the embodiment described above, a suitable α value was calculated by estimating visibility mainly in consideration of the luminance contrast (L* channel) alone, but the invention is not limited to this. As described above, when there is almost no difference in luminance (L* channel) between the composite image and the background image, that is, when the luminance contrast is small, visibility can be estimated more suitably by considering the color contrast (a* and b* channels). In such a case, it is conceivable to estimate visibility in consideration of all three channels L*, a*, and b* and to calculate a suitable α value. More specifically, for example, depending on whether the difference in the luminance channel falls below a certain threshold, the calculation unit 130 may switch between performing the calculation with the L* channel alone and performing the calculation including the a* and b* channels.
Next, the second embodiment will be described. As described above, the first embodiment described an image processing apparatus that performs video see-through, in which a virtual image is superimposed on a captured background image. The second embodiment describes an image processing apparatus that performs optical see-through, in which the user views the sum, combined through a half mirror, of light from the real world and light output from the device. In optical see-through, the virtual image is necessarily displayed translucently. Under such circumstances, in addition to the intensity of light incident from the real scene and the intensity of light emitted by the device, the texture and structure of the portion of the real scene on which the virtual information overlaps are also considered to affect the visibility of the virtual image.
The processing performed by the color space conversion unit 320 and the α value update unit 340 is the same as that of the corresponding units in the first embodiment, and description thereof is therefore omitted.
100: Image processing device
110: Image input unit
120: Color space conversion unit
130: Calculation unit
140: α value update unit
150: Image output unit
201: CPU
203: Memory
207: Display device
209: Input interface unit
211: Storage device
213: Communication interface unit
215: Data interface unit
300: Image processing device
310: Image input unit
320: Color space conversion unit
330: Calculation unit
340: α value update unit
350: Image output unit
Claims (18)
- 1. An image processing apparatus comprising: input means for receiving an input of a first input image and a second input image in which a virtual image is superimposed on the first input image with a first transparency; computing means for calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding values obtained by multiplying the first input image and the second input image, respectively, by a second transparency lower than the first transparency and by a value obtained by subtracting the second transparency from 1; and updating means for updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value, wherein the computing means calculates the value indicating the visibility again using the updated second transparency.
- 2. The image processing apparatus according to claim 1, wherein the calculation by the computing means and the update by the updating means are performed for each pixel.
- 3. The image processing apparatus according to claim 1 or claim 2, wherein the updating means updates the second transparency by a binary search.
- 4. The image processing apparatus according to any one of claims 1 to 3, wherein the value indicating the visibility relates to a contrast sensitivity characteristic and a contrast response.
- 5. The image processing apparatus according to any one of claims 1 to 4, wherein the computing means performs a wavelet transform on the first input image and the second input image, and adds values obtained by multiplying the wavelet coefficients and DC component of the first input image and the wavelet coefficients and DC component of the second input image obtained as a result of the transform, respectively, by the second transparency and by a value obtained by subtracting the second transparency from 1.
- 6. The image processing apparatus according to claim 5, wherein the computing means multiplies the value obtained as a result of the addition by a gain, and calculates a contrast response value by applying divisive normalization processing to the value multiplied by the gain.
- 7. The image processing apparatus according to any one of claims 1 to 6, wherein the computing means calculates the value indicating the visibility based on luminance components of the first input image and the second input image.
- 8. The image processing apparatus according to any one of claims 1 to 6, wherein the computing means switches, according to the first input image and the virtual image, between calculating the value indicating the visibility based only on a luminance component and calculating it based on both a luminance component and color components.
- 9. An image processing apparatus comprising: input means for receiving an input of a first input image, a second input image in which a virtual image is superimposed on the first input image with a first transparency, and a third input image in which a virtual image is superimposed on the first input image with maximum brightness; computing means for calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding values obtained by multiplying the second input image and the third input image, respectively, by the second transparency and by a value obtained by subtracting the second transparency from 1; and updating means for updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value, wherein the computing means calculates the value indicating the visibility again using the updated second transparency.
- 10. The image processing apparatus according to claim 9, wherein the calculation by the computing means and the update by the updating means are performed for each pixel.
- 11. The image processing apparatus according to claim 7 or claim 8, wherein the updating means updates the second transparency by a binary search.
- 12. The image processing apparatus according to any one of claims 9 to 11, wherein the value indicating the visibility relates to a contrast sensitivity characteristic and a contrast response.
- 13. The image processing apparatus according to any one of claims 9 to 12, wherein the computing means performs a wavelet transform on the first input image, the second input image, and the third input image, and adds values obtained by multiplying the wavelet coefficients and DC component of the second input image and the wavelet coefficients and DC component of the third input image obtained as a result of the transform, respectively, by the second transparency and by a value obtained by subtracting the second transparency from 1.
- 14. The image processing apparatus according to claim 13, wherein the computing means multiplies the value obtained as a result of the addition by a gain, and calculates a contrast response value by applying divisive normalization processing to the value multiplied by the gain.
- 15. An image processing method of an image processing apparatus, comprising: a step of receiving an input of a first input image and a second input image in which a virtual image is superimposed on the first input image with a first transparency; a step of calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding values obtained by multiplying the first input image and the second input image, respectively, by a second transparency lower than the first transparency and by a value obtained by subtracting the second transparency from 1; and a step of updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value, wherein the value indicating the visibility is calculated again using the updated second transparency.
- 16. An image processing method of an image processing apparatus, comprising: a step of receiving an input of a first input image, a second input image in which a virtual image is superimposed on the first input image with a first transparency, and a third input image in which a virtual image is superimposed on the first input image with maximum brightness; a step of calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding values obtained by multiplying the second input image and the third input image, respectively, by the second transparency and by a value obtained by subtracting the second transparency from 1; and a step of updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value, wherein the value indicating the visibility is calculated again using the updated second transparency.
- 17. A program causing a computer to execute: a process of receiving an input of a first input image and a second input image in which a virtual image is superimposed on the first input image with a first transparency; a process of calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding values obtained by multiplying the first input image and the second input image, respectively, by a second transparency lower than the first transparency and by a value obtained by subtracting the second transparency from 1; and a process of updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value, wherein the value indicating the visibility is calculated again using the updated second transparency.
- 18. A program causing a computer to execute: a process of receiving an input of a first input image, a second input image in which a virtual image is superimposed on the first input image with a first transparency, and a third input image in which a virtual image is superimposed on the first input image with maximum brightness; a process of calculating a value indicating visibility at a second transparency by comparing a calculation result for the first input image with a calculation result for a composite image obtained by adding values obtained by multiplying the second input image and the third input image, respectively, by the second transparency and by a value obtained by subtracting the second transparency from 1; and a process of updating the second transparency according to a comparison result between the value indicating the visibility at the second transparency and a target value, wherein the value indicating the visibility is calculated again using the updated second transparency.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15840032.5A EP3193327B1 (en) | 2014-09-08 | 2015-09-07 | Image processing device and image processing method |
JP2016547437A JP6653103B2 (ja) | 2014-09-08 | 2015-09-07 | 画像処理装置および画像処理方法 |
US15/509,113 US10055873B2 (en) | 2014-09-08 | 2015-09-07 | Image processing device and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014182629 | 2014-09-08 | ||
JP2014-182629 | 2014-09-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016039301A1 true WO2016039301A1 (ja) | 2016-03-17 |
Family
ID=55459049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/075373 WO2016039301A1 (ja) | 2014-09-08 | 2015-09-07 | 画像処理装置および画像処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10055873B2 (ja) |
EP (1) | EP3193327B1 (ja) |
JP (1) | JP6653103B2 (ja) |
WO (1) | WO2016039301A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112771473A (zh) * | 2018-09-07 | 2021-05-07 | 苹果公司 | 将来自真实环境的影像插入虚拟环境中 |
JP7544359B2 (ja) | 2021-05-18 | 2024-09-03 | 日本電信電話株式会社 | 最適化装置、訓練装置、合成装置、それらの方法、およびプログラム |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6996147B2 (ja) | 2017-07-27 | 2022-01-17 | 株式会社大林組 | 検査処理システム、検査処理方法及び検査処理プログラム |
JP6939195B2 (ja) * | 2017-07-27 | 2021-09-22 | 株式会社大林組 | 検査処理システム、検査処理方法及び検査処理プログラム |
TWI808321B (zh) * | 2020-05-06 | 2023-07-11 | 圓展科技股份有限公司 | 應用於畫面顯示的物件透明度改變方法及實物投影機 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007121625A (ja) * | 2005-10-27 | 2007-05-17 | Konica Minolta Photo Imaging Inc | 画像表示装置 |
JP2008054195A (ja) * | 2006-08-28 | 2008-03-06 | Funai Electric Co Ltd | 映像信号処理装置 |
WO2012162806A1 (en) * | 2011-06-01 | 2012-12-06 | Zhou Wang | Method and system for structural similarity based perceptual video coding |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6023302A (en) * | 1996-03-07 | 2000-02-08 | Powertv, Inc. | Blending of video images in a home communications terminal |
CA2679864A1 (en) * | 2007-03-07 | 2008-09-12 | Telefonaktiebolaget L M Ericsson (Publ) | Display controller, electronic apparatus and method for creating a translucency effect using color model transform |
JP4909176B2 (ja) * | 2007-05-23 | 2012-04-04 | キヤノン株式会社 | 複合現実感提示装置及びその制御方法、コンピュータプログラム |
US9087471B2 (en) * | 2011-11-04 | 2015-07-21 | Google Inc. | Adaptive brightness control of head mounted display |
DE102012105170B3 (de) * | 2012-06-14 | 2013-09-26 | Martin Göbel | Vorrichtung zur Erzeugung eines virtuellen Lichtabbilds |
WO2014160342A1 (en) * | 2013-03-13 | 2014-10-02 | The University Of North Carolina At Chapel Hill | Low latency stabilization for head-worn displays |
CN106030664B (zh) * | 2014-02-18 | 2020-01-07 | 索尼公司 | 在电子显示器上叠加两个图像的方法和计算装置 |
US10268041B2 (en) * | 2014-05-24 | 2019-04-23 | Amalgamated Vision Llc | Wearable display for stereoscopic viewing |
US10101586B2 (en) * | 2014-12-24 | 2018-10-16 | Seiko Epson Corporation | Display device and control method for display device |
-
2015
- 2015-09-07 WO PCT/JP2015/075373 patent/WO2016039301A1/ja active Application Filing
- 2015-09-07 EP EP15840032.5A patent/EP3193327B1/en active Active
- 2015-09-07 US US15/509,113 patent/US10055873B2/en active Active
- 2015-09-07 JP JP2016547437A patent/JP6653103B2/ja active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007121625A (ja) * | 2005-10-27 | 2007-05-17 | Konica Minolta Photo Imaging Inc | 画像表示装置 |
JP2008054195A (ja) * | 2006-08-28 | 2008-03-06 | Funai Electric Co Ltd | 映像信号処理装置 |
WO2012162806A1 (en) * | 2011-06-01 | 2012-12-06 | Zhou Wang | Method and system for structural similarity based perceptual video coding |
Non-Patent Citations (3)
Title |
---|
FUKIAGE, TAIKI ET AL.: "Visibility-based blending for real-time applications", IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY 2014, 10 September 2014 (2014-09-10), pages 63 - 72, XP032676209, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6948410&newsearch=true&queryText=Visibility-based%20blending%20for%20real-time%20applications><DOI:10.1109/ISMAR.2014.6948410> doi:10.1109/ISMAR.2014.6948410 * |
See also references of EP3193327A4 * |
TAIKI FUKIAGE ET AL.: "Reduction of contradictory partial occlusion in Mixed Reality by using characteristics of transparency perception", IPSJ SIG NOTES 2012 (HEISEI 24) NENDO 5, vol. 2012, no. 1, 15 February 2013 (2013-02-15), pages 1 - 8, XP032309059, ISSN: 1884-0930 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112771473A (zh) * | 2018-09-07 | 2021-05-07 | 苹果公司 | 将来自真实环境的影像插入虚拟环境中 |
US12094069B2 (en) | 2018-09-07 | 2024-09-17 | Apple Inc. | Inserting imagery from a real environment into a virtual environment |
JP7544359B2 (ja) | 2021-05-18 | 2024-09-03 | 日本電信電話株式会社 | 最適化装置、訓練装置、合成装置、それらの方法、およびプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP6653103B2 (ja) | 2020-02-26 |
US10055873B2 (en) | 2018-08-21 |
EP3193327A1 (en) | 2017-07-19 |
EP3193327B1 (en) | 2021-08-04 |
JPWO2016039301A1 (ja) | 2017-06-15 |
EP3193327A4 (en) | 2018-03-07 |
US20170256083A1 (en) | 2017-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1958185B1 (en) | Methods and apparatus for determining high quality sampling data from low quality sampling data | |
WO2016039301A1 (ja) | 画像処理装置および画像処理方法 | |
JP7504120B2 (ja) | 高分解能なリアルタイムでのアーティスティックスタイル転送パイプライン | |
CN109817170A (zh) | 像素补偿方法、装置和终端设备 | |
CN107451974B (zh) | 一种高动态范围图像的自适应再现显示方法 | |
US11948245B2 (en) | Relighting images and video using learned lighting and geometry | |
Yu et al. | Underwater vision enhancement based on GAN with dehazing evaluation | |
US20240127402A1 (en) | Artificial intelligence techniques for extrapolating hdr panoramas from ldr low fov images | |
CN115205157B (zh) | 图像处理方法和系统、电子设备和存储介质 | |
Tiant et al. | GPU-accelerated local tone-mapping for high dynamic range images | |
CN110009676B (zh) | 一种双目图像的本征性质分解方法 | |
Li et al. | A structure and texture revealing retinex model for low-light image enhancement | |
Bae et al. | Non-iterative tone mapping with high efficiency and robustness | |
WO2019224947A1 (ja) | 学習装置、画像生成装置、学習方法、画像生成方法及びプログラム | |
Tariq et al. | Perceptually adaptive real-time tone mapping | |
Cao et al. | A Perceptually Optimized and Self-Calibrated Tone Mapping Operator | |
US20240312091A1 (en) | Real time film grain rendering and parameter estimation | |
Zhao et al. | RIRO: From Retinex-Inspired Reconstruction Optimization Model to Deep Low-Light Image Enhancement Unfolding Network | |
TWI547902B (zh) | 採用梯度域梅特羅波利斯光傳輸以呈像圖形之方法與系統 | |
CN118762117A (zh) | 虚拟模型的灯光渲染方法、设备、介质和程序产品 | |
Krishna et al. | A subjective and objective quality assessment of tone-mapped images | |
Ling et al. | Visualization of high dynamic range image with retinex algorithm | |
Mollis | Real-time Hardware Based Tone Reproduction | |
Piao et al. | HDR image display combines weighted least square filtering with color appearance model | |
KR20090057547A (ko) | 단말기의 이미지 필터링을 위한 장치 및 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15840032 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016547437 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15509113 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2015840032 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015840032 Country of ref document: EP |