EP3419012A1 - Method and device for processing an image according to lighting information

Method and device for processing an image according to lighting information

Info

Publication number
EP3419012A1
Authority
EP
European Patent Office
Prior art keywords
information
display
light
image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP17305767.0A
Other languages
German (de)
English (en)
Inventor
Philippe Robert
Sylvain Duchene
Jurgen Stauder
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to EP17305767.0A priority Critical patent/EP3419012A1/fr
Priority to PCT/EP2018/066017 priority patent/WO2018234195A1/fr
Publication of EP3419012A1 publication Critical patent/EP3419012A1/fr

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/14 Detecting light within display terminals, e.g. using a single or a plurality of photosensors
    • G09G2360/144 Detecting light within display terminals, e.g. using a single or a plurality of photosensors the light being ambient light

Definitions

  • the present disclosure relates to the domain of image processing, for example in the context of adapting an image displayed on a display device to the lighting environment of the display device or in the context of augmented-reality content displayed on a display device.
  • A scene with artificial illumination, such as an indoor environment, may have more than one source of illumination, which results in a complex lighting environment.
  • This lighting environment may disturb a user watching a screen surface, such as a television set or a tablet, because of the interaction of the ambient light with the screen.
  • Light interactions with object surfaces include diffuse reflections, specular reflections and cast shadows.
  • Diffuse reflections occur at surfaces where the light is reflected equally in all directions. Specular reflections, as opposed to diffuse reflections, occur at glossy surfaces of objects, where a substantial amount of light is reflected in the direction of the user. Specular reflections cause the human visual system to lower its sensitivity, so that details of an image displayed on a screen surface are less visible to the user. Similarly, in mixed lighting conditions the hues of the lighting sources might differ; diffuse reflections then produce reflections of different hues on the screen surface that degrade the user experience. In both cases, the visual quality is reduced.
  • Cast shadows occur when a first object prevents light from a light source from reaching a second object, e.g. the screen of a display device. Cast shadows on a screen surface are often much darker than the surrounding areas, leading to a fading of image contrast outside the cast shadow while contrast is preserved within the cast shadow.
  • References in the specification to "one embodiment", "an embodiment", "an example embodiment" or "a particular embodiment" indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.
  • the present disclosure relates to a method of processing an image, the method comprising:
  • the determining of the second information is further according to a distance between the display and each device of the plurality of devices.
  • the determining of the second information is further according to a third information representative of at least a type associated with each device, the at least a type belonging to a group of types comprising:
  • the determining of the second information is further according to the pose of the display with regard to a determined point of view.
  • the processing comprises modifying at least a parameter of spatial areas of the image according to the location of the spatial areas in the image.
  • the at least a parameter belongs to a group of parameters comprising:
  • the display corresponds to a device of said plurality of devices.
  • the present disclosure also relates to a device configured to perform the abovementioned method of processing an image.
  • the device comprises a memory associated with a processor configured to:
  • the present disclosure also relates to a device for processing an image, the device comprising:
  • the present disclosure also relates to a computer program product comprising instructions of program code for executing, by at least one processor, the abovementioned method of processing an image, when the program is executed on a computer.
  • the present disclosure also relates to a (non-transitory) processor readable medium having stored therein instructions for causing a processor to perform at least the abovementioned method of processing an image.
  • a first information representative of the lighting of the environment is obtained, for example received, from each light source of at least a part of the light sources.
  • the first information corresponds for example to the intensity of the light emitted by a light source and/or the location of the light source and/or the color of the light emitted by the light source.
  • a second information that is representative of the spatial modelling of the lighting of an area surrounding and/or comprising a display screen of the environment is determined, the second information being determined from the first information and from a pose of the display screen with regard to the light source(s).
  • An image displayed on the display screen may then be processed knowing the second information.
  • Processing the image(s) displayed on the display screen enables increasing the quality of the displayed image(s) by considering, for example, the reflections of light induced by the light sources lighting the display screen, the shadows cast onto the display screen, or the variation of lighting over the surface of the display screen.
  • Processing the image(s) displayed on the display screen may also take display-screen-technology-dependent parameters into account. For example, if the display screen technology is Liquid Crystal Display (LCD) with localized backlighting, processing the image(s) may include processing the image in order to compute the control of the LCD panel and the control of the localized backlighting layer.
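As a rough illustration of such display-dependent processing, the sketch below splits a target image into a low-resolution backlight map and an LCD transmittance map. This is a common local-dimming heuristic assumed here for illustration; the disclosure does not specify the algorithm, and the function name and zone size are hypothetical.

```python
import numpy as np

def split_backlight_lcd(image, zone=32):
    """Split a grayscale target image (values in [0, 1], dimensions assumed
    to be multiples of `zone`) into a per-zone backlight level and the LCD
    panel transmittance that reconstructs the image under that backlight."""
    h, w = image.shape
    # backlight level of each zone: the maximum luminance it must reproduce
    backlight = image.reshape(h // zone, zone, w // zone, zone).max(axis=(1, 3))
    # upsample the backlight map back to full resolution (nearest neighbour)
    full = np.repeat(np.repeat(backlight, zone, axis=0), zone, axis=1)
    # transmittance the LCD panel must apply on top of the local backlight
    lcd = np.divide(image, full, out=np.zeros_like(image), where=full > 0)
    return backlight, lcd
```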
  • Figure 1 shows a scene 1 comprising a plurality of display devices 10, 11 and 12 and a plurality of light sources 101 to 103, according to a particular and non-limiting embodiment of the present principles.
  • the scene 1 is an indoor environment according to the example of figure 1 .
  • the light sources 101, 102 and 103 are of different nature.
  • The light source 101 corresponds to a point light source, for example a spot.
  • The light sources 102 correspond to area light sources, for example neon or fluorescent lights.
  • the light source 103 also corresponds to an area light source but with a lighting that is more diffuse than the lighting of the light sources 102.
  • the display devices 10 to 12 may also be considered as light sources as they emit light while displaying image(s).
  • the scene 1 may comprise further light sources, for example windows bringing outdoor light (e.g. from the sun or street lamp(s)) or doors.
  • the display devices 10 to 12 are of different nature in the example of figure 1 .
  • The display device 11 corresponds for example to a tablet, and the display devices 10 and 12 each correspond to a screen such as a television screen, for example an LCD (Liquid Crystal Display) screen, an OLED (Organic Light-Emitting Diode) screen or a QLED (Quantum-dot Light-Emitting Diode) screen.
  • The display device 11 lies on a table and the screens 10 and 12 are each arranged on a different wall of the room of the scene 1.
  • The scene 1 further comprises objects, such as a table, chairs 1001, 1002 or cups, that may be considered as indirect light sources (as they may reflect a part of the light they receive from the light sources 101 to 103) and that may cast shadows on part(s) of the display devices 10 to 12.
  • Chairs 1001 and 1002 provide two examples of different points of view for the image(s) or video content(s) displayed on the display devices 10 to 12.
  • the number and the nature of light sources is not limited to the example of figure 1 .
  • the number and the nature of the display devices is not limited to the example of figure 1 .
  • The scene 1 may comprise only one display device, or any number of display devices of any nature, for example a screen onto which an image is projected by a video projector.
  • Figure 2 shows the obtaining of lighting information, called first information, by a display device corresponding to the tablet 11 of the scene 1, according to a particular and non-limiting embodiment of the present principles.
  • the display device 11 receives the first information from different light sources 101 and 102 and from a device 20 corresponding for example to a webcam or to a light sensor.
  • the light sensor corresponds to a photosensor or to an array of photosensors, a photosensor being for example a photoresistor, a photodiode or a phototransistor, possibly with a color filter in front in order to be spectrally selective.
  • a webcam or any image acquisition device comprises a photosensor and color filter array that acquires information about at least a part of the lighting of an environment, e.g. the scene 1.
  • Each light source 101, 102 may be a wireless connected device that transmits light information (the so-called first information) wirelessly to the tablet 11.
  • the transmission of the first information may be based on WiFi® (IEEE 802.11-2016 for example), on the Zigbee Light Link protocol that is part of the ZigBee 3.0 standard or on the Z-Wave protocol for example.
  • the device 20 may also be a wireless connected device that transmits light information to the tablet.
  • the light information may be determined by the photosensor(s) of the device.
  • the device may for example be used to measure outdoor light received by the environment of the scene 1 through windows or to measure the ambient light of the environment of the scene 1.
  • the device 20 is connected to the tablet via a wire, for example using USB (Universal Serial Bus).
  • the device 20 is embedded into the tablet and corresponds for example to the light sensor of the tablet 11 (used to determine ambient lighting and to adjust automatically the brightness of the screen of the tablet) or to the front and/or rear camera of the tablet 11.
  • the first information transmitted by the light source(s) 101, 102 and/or by the device 20 (and received by the tablet) comprises for example one or more lighting parameters that belong to a group of parameters comprising:
  • The first information corresponds to photometric information, optionally with shape information.
  • Each light source 101, 102 and/or the device 20 may transmit information on its location in the scene, for example its coordinates (x, y and z) in the coordinate system of the scene 1 and/or its orientation.
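By way of illustration, the first information reported by a connected light source might be structured as below. The field names, units and values are hypothetical; the disclosure does not define a transmission format.

```python
# Hypothetical first-information record for the light source 101;
# field names and units are illustrative only.
first_info_101 = {
    "source_id": 101,
    "source_type": "point",            # point or area light source
    "color_rgb": (0.9, 0.85, 0.7),     # color of the emitted light
    "intensity": 800.0,                # intensity of the emitted light (e.g. lumens)
    "position_xyz": (2.0, 1.5, 2.4),   # location in the coordinate system of the scene 1
    "orientation": (0.0, -1.0, 0.0),   # main emission direction
}
```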
  • Figure 5 shows a diagram of a connected device, specifically a connected light source 101, according to a particular and non-limiting embodiment of the present principles.
  • the connected light source 101 is for example a LED light source.
  • The LED light source 101 comprises a housing 51, a LED driving power circuit 52, an RF (Radio-Frequency) circuit 53 (e.g. a ZigBee module, a WiFi® module, a Bluetooth module or a Z-Wave module), a LED lamp panel 54 and a lamp shade 55.
  • the RF circuit 53 is adapted to transmit and/or receive data.
  • The RF circuit 53 is for example configured to transmit first information on the lighting characteristics of the light source 101 and/or information representative of the location of the light source 101 within the scene and/or information regarding the type of the light source 101.
  • the RF circuit 53 is for example configured to receive control parameters to control the operation of the light source 101, for example to control the intensity of the lighting and/or the duration of the lighting and/or the color of the lighting.
  • Figure 3 shows the obtaining of lighting information, called first information, by a display device corresponding to the tablet 11 of the scene 1, according to a particular and non-limiting embodiment of the present principles.
  • The example of Figure 3 extends the example of Figure 2 by arranging a remote device 30 between the light sources 101, 102 and the device 20 on one side and the display device 11 on the other side.
  • the display device 11 receives the first information from the remote device 30 that received light information from different light sources 101 and 102 and from the device 20.
  • the remote device 30 may for example be a set-top box, a gateway, a computer, a server, a storage unit or any apparatus adapted to receive the first information from the light sources 101, 102 and device 20 and transmit said first information to the display device 11.
  • the remote device 30 processes the received first information to determine a second information representative of the modelling of the lighting around the display device.
  • the remote device may transmit the second information to the display device 11.
  • The remote device 30 may receive the first information from the light sources 101, 102 and/or the device 20 wirelessly or via a wired connection (e.g. via USB or Ethernet).
  • The remote device 30 may transmit the first information and/or the second information to the display device 11 wirelessly (e.g. using any wireless transmission protocol such as WiFi®, ZigBee, Z-Wave or Bluetooth) or via a wired connection (e.g. via USB or Ethernet).
  • Figure 4 shows a process to determine a model of at least a part of the lighting environment of the scene of figure 1 , according to a particular and non-limiting embodiment of the present principles.
  • the process of figure 4 will be described with the part of the scene 1 that comprises the display device 11 to determine how this area of the scene 1 reflects light received from the different light sources of the environment of the scene 1.
  • Said part of the scene 1 comprises the surface formed by the screen of the display device, optionally with an area surrounding the display device, for example an area having a determined width around the display device, e.g. 20 cm, 50 cm or 1 m.
  • The Phong reflection model, which is a local illumination model, is used to determine the spatial model of light associated with said part of the scene 1.
  • Components $i_s$ and $i_d$ are defined as the intensities (e.g. as RGB values) of the specular and diffuse components of each light source.
  • A single term $i_a$ may control the ambient lighting, which may for example be computed as a sum of contributions from all light sources that create neither shadows nor specular effects.
  • Only one light source 101 is considered, for clarity of illustration. It is naturally understood that the same process may be applied to all light sources 101 to 103, or to a part of them, for example to decrease the computation cost.
  • The Phong reflection model provides an equation for computing the illumination $I_r(x)$ of each surface point $x$, for example the surface point $x$ 41:

$$I_r(x) = k_a\, i_a + \sum_{m \in \text{lights}} \Big( k_d\, \big(\hat{L}_m \cdot \hat{N}\big)\, i_{m,d} + k_s\, \big(\hat{R}_m \cdot \hat{V}\big)^{\alpha}\, i_{m,s} \Big)$$
  • The diffuse term is not affected by the viewer direction $\hat{V}$.
  • The specular term is large only when the viewer direction $\hat{V}$ is aligned with the reflection direction $\hat{R}_m$. Their alignment is measured by the $\alpha$ power of the cosine of the angle between them; the cosine of the angle between the normalized vectors $\hat{R}_m$ and $\hat{V}$ is equal to their dot product.
  • When $\alpha$ is large, in the case of a nearly mirror-like reflection, the specular highlight will be small, because any viewpoint not aligned with the reflection will have a cosine less than one, which rapidly approaches zero when raised to a high power.
  • The specular term should only be included if the dot product $\hat{L}_m \cdot \hat{N}$ of the diffuse term is positive.
  • This equation is typically modeled separately for the R, G and B intensities, allowing different reflection constants $k_a$, $k_d$ and $k_s$ for the plurality of color channels.
  • The display is a connected object and the intrinsic photometric parameters of its screen are supposed to be known: in the above Phong model, these are the shininess $\alpha$ and the reflectance constants $k_a$, $k_d$ and $k_s$.
  • The interconnected network has information about the locations and orientations of the different objects (emitter, receiver and observer devices) in a world coordinate system, so the directions $\hat{N}$, $\hat{L}_m$, $\hat{R}_m$ and $\hat{V}$, and hence the illumination $I_r(x)$, may be computed at each point $x$ of the screen S.
  • The connected system can continuously update these parameters as the scene changes. In this case, updated first information is transmitted and updated second information is determined and transmitted. Therefore, the light reflected by the screen can change over time; it is then noted $I_r(x,t)$, the time being noted $t$.
  • The different values of illumination computed for the different points of the surface form the spatial modelling of the lighting of the surface encompassing the display device 11, optionally with an area surrounding the display device 11; a minimal sketch of this computation is given below.
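The sketch below, given for illustration only, evaluates the above Phong equation at one screen point for a set of point light sources. The layout of the `lights` records and the function names are assumptions, not part of the disclosure.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_illumination(x, normal, viewer_pos, lights, k_a, k_d, k_s, alpha, i_a):
    """Illumination I_r(x) at a screen point x (per RGB channel if the k_*
    constants and light intensities are length-3 arrays)."""
    V = normalize(viewer_pos - x)              # viewer direction V
    I = k_a * i_a                              # ambient term
    for light in lights:
        L = normalize(light["position"] - x)   # direction towards light m
        ln = float(np.dot(L, normal))          # L_m . N (diffuse dot product)
        if ln <= 0.0:
            continue                           # light behind the screen: skip both terms
        I = I + k_d * ln * light["i_d"]        # diffuse term
        R = 2.0 * ln * normal - L              # reflection direction R_m
        rv = max(float(np.dot(R, V)), 0.0)
        I = I + k_s * (rv ** alpha) * light["i_s"]  # specular term
    return I
```

Evaluating this function over a grid of screen points yields the spatial model $I_r(x)$, and re-evaluating it as updated first information arrives yields $I_r(x,t)$.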
  • Well-known global illumination modelling methods may also be used, such as ray tracing, which makes it possible to account for the propagation of light from the light sources of the scene, including reflections and multiple reflections on the surfaces of the object(s) of the scene 1.
  • a global illumination modelling method may be combined with a local illumination model.
  • a local modelling method such as the Phong reflection model may be used to model the illumination of screen surfaces of the display devices of the scene 1.
  • Figure 6 shows a method of processing an image displayed on a display device of the scene 1, for example on the display device 11, according to a particular and non-limiting embodiment of the present principles.
  • first information representative of the lighting of a scene 1 or of a part of the scene 1 is obtained, i.e. received and/or determined from a plurality of devices of the scene.
  • the plurality of devices comprises for example one or more light sources and/or one or more display devices and/or one or more light sensor devices.
  • The first information may for example be received from each light source of at least a part of the light sources of the scene 1 and/or determined from one or more light sensors (e.g. a camera comprising a photosensor and color filter array and/or one or more light sensors of the display device 11).
  • The first information may comprise one parameter, or any combination of two or more parameters, from the following list:
  • second information representative of a spatial model of the lighting of at least an area of the scene 1 encompassing the display device 11 is determined.
  • the second information is determined according to the first information obtained in the first operation 61 and according to pose information of the display device 11 with regard to the other devices of the scene, the other devices providing for example the first information on the lighting of the environment.
  • Pose information enables determining, for example, the incidence angle of the light emitted by the light source(s) and reaching at least a part of the display device 11, as sketched below.
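A minimal sketch of that incidence-angle computation, assuming a point light source and a known screen normal (all names are illustrative):

```python
import numpy as np

def incidence_angle(light_pos, screen_point, screen_normal):
    """Angle (radians) between the incident light direction and the unit
    screen normal at `screen_point`; 0 means light arriving head-on."""
    L = light_pos - screen_point
    L = L / np.linalg.norm(L)
    return float(np.arccos(np.clip(np.dot(L, screen_normal), -1.0, 1.0)))
```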
  • the screen can be simply described by the 3D location of its four corners.
  • the area of the screen lit directly by light sources can be identified.
  • the 3D surface of the screen can be advantageously modeled. If planar, the model can correspond to the 3D location of the four corners. If not planar, a more complex model can be used to describe the curvature of the screen.
  • a possible representation of the screen can be a 3D mesh.
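For the planar case described above, the following sketch samples the screen surface from the 3D locations of its four corners; a bilinear grid stands in here for the more general 3D mesh, and the function name and corner ordering are assumptions.

```python
import numpy as np

def screen_grid(corners, rows, cols):
    """Sample a planar screen, given its four 3D corners
    (top-left, top-right, bottom-right, bottom-left), as a (rows, cols, 3)
    grid of 3D points, and return the unit screen normal."""
    tl, tr, br, bl = (np.asarray(c, dtype=float) for c in corners)
    u = np.linspace(0.0, 1.0, cols)[None, :, None]   # horizontal parameter
    v = np.linspace(0.0, 1.0, rows)[:, None, None]   # vertical parameter
    top = tl + u * (tr - tl)                         # interpolate along top edge
    bottom = bl + u * (br - bl)                      # interpolate along bottom edge
    points = top + v * (bottom - top)                # bilinear interpolation
    normal = np.cross(tr - tl, bl - tl)
    return points, normal / np.linalg.norm(normal)
```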
  • A micro structure can be modelled based on parameters received in the first information; the microstructure may model effects such as surface roughness, surface pigments, partial transparency, fluorescence and polarization.
  • the second information is determined by considering the distance between the display device 11 and the light sources providing first information.
  • the distance information may for example be used to determine the attenuation of the light along the path between the considered light source and the display device.
  • the distance information may for example also be used to determine the dispersion of light between the considered light source and the display device.
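One common way to use the distance, assumed here for illustration rather than taken from the disclosure, is the classical attenuation factor for point light sources; with $k_c = k_l = 0$ and $k_q = 1$ it reduces to the physical inverse-square law.

```python
def attenuation(distance, k_c=1.0, k_l=0.0, k_q=1.0):
    """Attenuation of light along the path from the source to the display;
    the diffuse and specular intensities i_d and i_s of the Phong sum can
    be multiplied by this factor."""
    return 1.0 / (k_c + k_l * distance + k_q * distance ** 2)
```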
  • the determining of the second information is further according to a third information representative of at least a type associated with each device, the at least a type belonging to a group of types comprising light emitter, light receiver, and light sensor device.
  • A device of the scene may have two or three types associated with it; for example, a display device may be both a light receiver and a light emitter, as it receives light from emitting devices and creates light when displaying one or more images.
  • the display device may be further of the type 'light sensor device'.
  • The parameters to be considered when determining the second information may depend on the type(s) associated with the device.
  • Light receiver objects, such as furniture or consumer electronic devices, may share information about their position, orientation, size, color, texture, transparency and surface reflection properties to determine at least a part of the spatial model.
  • the type of information associated with a device may be assigned at the manufacturing stage of the device (and for example stored in a memory of the device) or may be assigned later, for example when building the network of interconnected devices of the scene 1.
  • The third information may determine the order of processing of the first information in order to determine the second information. For example, first information from objects of type "light emitter" is gathered first in order to establish a list of objects emitting light; this first information is then processed in order to obtain the second information.
  • The third information may also determine the priority of processing of the first information. For example, to save time and processing capacity, only first information from objects of type "light emitter" is gathered and processed, together with the first information from the display device 11 (optionally with an area surrounding it), in order to determine the spatial modelling of the lighting of the surface encompassing this display, optionally with a surrounding area. A sketch of such type-based filtering is given below.
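The type-based filtering referred to above might look like the following sketch; the device records and type strings are hypothetical, mirroring the third information of the disclosure.

```python
# Hypothetical device records; the "types" set mirrors the third information.
devices = [
    {"id": "lamp-101",  "types": {"light emitter"},                   "first_info": {"intensity": 800.0}},
    {"id": "tablet-11", "types": {"light emitter", "light receiver"}, "first_info": {"intensity": 120.0}},
    {"id": "sensor-20", "types": {"light sensor device"},             "first_info": {"ambient": 35.0}},
]

def gather_first_information(devices, wanted=frozenset({"light emitter"})):
    """Keep only the first information of devices whose type set intersects
    the wanted types, e.g. to process light emitters with priority."""
    return [d["first_info"] for d in devices if d["types"] & wanted]
```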
  • The second information may be determined by further considering the pose of the display device 11 with respect to a determined point of view, for example the point of view of a viewer watching the image(s) displayed on the display device 11.
  • the pose may be used to determine the viewing direction that may be used to determine the illumination at points of the surface of the screen of the display device for the viewer specifically, as in the example of Figure 4 .
  • The spatial model of the lighting may be restrained and streamlined to model only light going in the direction of this point of view. Another possibility is to model light going in the direction corresponding to the point of view with higher accuracy than light going in other directions.
  • the image to be displayed on the display device 11 is processed according to the second information determined at operation 62.
  • The processing may be done, for example, to remove, compensate or reduce unwanted visual effects of the scene lighting on the screen of the display device 11. For example, for a given point of view, highlights on the screen caused by specular reflections indicated by the second information are compensated by increasing the image luminance everywhere but in the region of the highlights. In another example, the unwanted visual effect of a cast shadow on the screen indicated by the second information is compensated by increasing the contrast of the image everywhere but in the region corresponding to the cast shadow.
  • the processing may comprise modifying at least a parameter of spatial areas of the image(s) to be displayed according to the location of these areas in the image.
  • the processing applied to a part of the image may be different from the processing applied to one or more other part(s) of the image.
  • the processing may for example correspond to a color balance to modify the at least a parameter.
  • the at least a parameter that may be modified belongs to a group of parameters comprising:
  • The 3D location of the observer, combined with the relative position and orientation of the screen and the relative position and orientation of the light sources, allows identifying the spatial location of the highlights caused by specular reflections of the light sources on the screen, as seen from the observer's point of view. For example, the location on the screen of the projection of the vertices of each light source model is computed if visible on the screen from the observer's point of view.
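A minimal sketch of the highlight compensation described above, assuming the second information has already been converted into a boolean mask of the highlight region on the displayed image (the function and parameter names are illustrative):

```python
import numpy as np

def compensate_highlight(image, highlight_mask, gain=1.2):
    """Raise the luminance of an 8-bit RGB image everywhere except in the
    highlight region indicated by the second information (mask True inside
    the highlight), so the rest of the image remains readable."""
    out = image.astype(np.float32)
    out[~highlight_mask] *= gain          # boost only non-highlight pixels
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```

A cast shadow would be handled analogously, with a contrast stretch applied outside the shadow mask instead of a luminance gain.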
  • The operations 61 to 63 may be reiterated for each image of a sequence of images, as lighting conditions may vary over time.
  • Figure 7 diagrammatically shows a hardware embodiment of an apparatus 7 configured to process and/or transmit an image (e.g. to a display device).
  • the apparatus 7 is also configured for the creation of display signals of one or several images.
  • the apparatus 7 corresponds for example to a tablet, a Smartphone, a games console, a computer, a laptop or a Set-top box.
  • the apparatus may for example correspond to the tablet 11, or may be embedded in a television set 10 or may be comprised in the remote device 30.
  • the apparatus 7 comprises the following elements, connected to each other by a bus 75 of addresses and data that also transports a clock signal:
  • the apparatus 7 may also comprise one or more display devices 73 of display screen type directly connected to the graphics card 72 to display images calculated in the graphics card, for example live.
  • The use of a dedicated bus to connect the display device 73 to the graphics card 72 offers the advantage of much greater data transmission bitrates, thus reducing the latency time for the display of images composed by the graphics card.
  • a display device is external to the apparatus 7 and is connected to the apparatus 7 by a cable or wirelessly for transmitting the display signals.
  • The apparatus 7, for example the graphics card 72, comprises an interface for transmission or connection (not shown in figure 7) adapted to transmit a display signal to an external display means such as, for example, the first display device (e.g. an HMD), an LCD or plasma screen, or a video projector.
  • The term "register" used in the description of memories 721, 76 and 77 designates, in each of the memories mentioned, both a memory zone of low capacity (a few binary data) and a memory zone of large capacity (enabling a whole program to be stored, or all or part of the data representative of data calculated or to be displayed).
  • When switched on, the microprocessor 71 loads and executes the instructions of the program contained in the RAM 77.
  • the random-access memory 77 notably comprises:
  • the algorithms implementing the steps of the method(s) specific to the present disclosure are stored in the memory GRAM 721 of the graphics card 72 associated with the apparatus 7 implementing these steps.
  • The graphics processors 720 of the graphics card 72 load these parameters into the GRAM 721 and execute the instructions of these algorithms in the form of microprograms of "shader" type, using for example the HLSL (High Level Shader Language) or GLSL (OpenGL Shading Language) languages.
  • the random-access memory GRAM 721 notably comprises:
  • a part of the RAM 77 is assigned by the CPU 71 for storage of the identifiers and the distances if the memory storage space available in GRAM 721 is insufficient.
  • This variant however causes greater latency time in the composition of an image comprising a representation of the environment composed from microprograms contained in the GPUs, as the data must be transmitted from the graphics card to the random-access memory 77 via the bus 75, whose transmission capacities are generally inferior to those available in the graphics card for transferring data from the GPUs to the GRAM and vice versa.
  • the power supply 79 is external to the apparatus 7.
  • the apparatus 7 does not include any ROM but only RAM, the algorithms implementing the steps of the method specific to the present disclosure and described with regard to figures 4 and 6 being stored in the RAM.
  • the apparatus 7 comprises an SSD (Solid-State Drive) memory instead of the ROM and/or the RAM.
  • the apparatus 7 does not comprise any GPU but only one or more CPUs.
  • the present disclosure is not limited to a method of processing an image but also extends to a method for displaying the processed image.
  • the present disclosure also extends to a method and device for modelling the lighting of a scene or of a part of the scene.
  • the implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, Smartphones, tablets, computers, mobile phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
  • Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications, particularly, for example, equipment or applications associated with data encoding, data decoding, view generation, texture processing, and other processing of images and related texture information and/or depth information.
  • Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices.
  • the equipment may be mobile and even installed in a mobile vehicle.
  • The methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM"), or a read-only memory ("ROM").
  • the instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination.
  • a processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
  • implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
  • the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries may be, for example, analog or digital information.
  • the signal may be transmitted over a variety of different wired or wireless links, as is known.
  • the signal may be stored on a processor-readable medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
EP17305767.0A 2017-06-21 2017-06-21 Method and device for processing an image according to lighting information Withdrawn EP3419012A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17305767.0A EP3419012A1 (fr) 2017-06-21 2017-06-21 Method and device for processing an image according to lighting information
PCT/EP2018/066017 WO2018234195A1 (fr) 2017-06-21 2018-06-15 Method and device for processing an image according to lighting information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP17305767.0A EP3419012A1 (fr) 2017-06-21 2017-06-21 Method and device for processing an image according to lighting information

Publications (1)

Publication Number Publication Date
EP3419012A1 2018-12-26

Family

ID=59285120

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17305767.0A Withdrawn EP3419012A1 (fr) 2017-06-21 2017-06-21 Procédé et dispositif de traitement d'une image en fonction d'informations d'éclairage

Country Status (2)

Country Link
EP (1) EP3419012A1 (fr)
WO (1) WO2018234195A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100079426A1 (en) * 2008-09-26 2010-04-01 Apple Inc. Spatial ambient light profiling
US20120133790A1 (en) * 2010-11-29 2012-05-31 Google Inc. Mobile device image feedback
US20130321618A1 (en) * 2012-06-05 2013-12-05 Aravind Krishnaswamy Methods and Apparatus for Reproducing the Appearance of a Photographic Print on a Display Device


Also Published As

Publication number Publication date
WO2018234195A1 (fr) 2018-12-27

Similar Documents

Publication Title
US9953556B2 (en) Color correction method for optical see-through displays
US10403032B2 (en) Rendering an image from computer graphics using two rendering computing devices
US10206268B2 (en) Interlaced data architecture for a software configurable luminaire
CN111311723B Pixel point identification and illumination rendering method and apparatus, electronic device and storage medium
US10049426B2 (en) Draw call visibility stream
JP6009099B2 Apparatus, program and system for improving 3D images
CN112116692B Model rendering method, apparatus and device
US10636336B2 (en) Mixed primary display with spatially modulated backlight
JP6199856B2 Method and apparatus for managing display limits in color grading and content approval
KR20120013977A Light detection, color appearance models, and modifying dynamic range for image display
Hincapié-Ramos et al. SmartColor: Real-time color correction and contrast for optical see-through head-mounted displays
US10074211B2 (en) Method and device for establishing the frontier between objects of a scene in a depth map
US10083495B2 (en) Multi-processor system and operations to drive display and lighting functions of a software configurable luminaire
KR20150140514A Color compensation method for transparent display apparatus
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
US10121451B2 (en) Ambient light probe
CN113648652B Object rendering method and apparatus, storage medium and electronic device
WO2018202435A1 Method and device for determining lighting information of a 3D scene
EP3419012A1 Method and device for processing an image according to lighting information
KR102235679B1 Device and method for displaying an object with visual effects
KR20180108184A Real-time global illumination rendering method for mobile
US10650712B2 (en) Ordered mapping on a three-dimensional projection surface
KR20230112022A Electronic apparatus and control method thereof
US11804004B1 (en) Systems and methods for prioritized rendering and streaming based on risk maps that predict change in a three-dimensional environment
US20210383771A1 (en) Rendering images on displays

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20190627