CN118160296A - Context-dependent color mapping of image and video data - Google Patents


Info

Publication number
CN118160296A
Authority
CN
China
Legal status: Pending
Application number
CN202280068868.8A
Other languages
Chinese (zh)
Inventor
T. Kunkel
Current Assignee
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Application filed by Dolby Laboratories Licensing Corp
Priority claimed from PCT/US2022/045050 (published as WO2023064105A1)
Publication of CN118160296A

Landscapes

  • Image Processing (AREA)

Abstract

Systems and methods for performing color mapping operations. A system includes a processor for performing post-production editing of image data. The processor is configured to identify a first region of the image and identify a second region of the image. The first region includes a first white point having a first hue and the second region includes a second white point having a second hue. The processor is further configured to determine a color mapping function based on the first hue, apply the color mapping function to the second region of the image, and generate an output image.

Description

Context-dependent color mapping of image and video data
Cross Reference to Related Applications
The present application claims priority from European patent application number 21201948.3, filed on 11 October 2021, and from U.S. provisional patent application number 63/254,196, filed on 11 October 2021, the contents of each of which are hereby incorporated by reference in their entirety.
Technical Field
The present application relates generally to systems and methods for image color mapping.
Background
Digital image and video data typically include unwanted noise and mismatched hues. Image processing techniques are commonly used to alter images. Such techniques may include, for example, applying filters, changing colors, identifying objects, and the like. Noise and mismatched colors may result from limitations of a display device depicted within the image or video data (e.g., a television that is captured or filmed) and of the camera used to capture the image or video data. Ambient illumination may produce unwanted noise or otherwise affect the hue of the image or video data.
Disclosure of Invention
Content captured with a camera may include emissive displays that project light having a color temperature or hue different from that of other nearby light sources, both inside and outside the frame. For example, the color temperatures of the white points within an image frame may differ. The dynamic range and luminance range of the captured display may differ from those of the camera used to capture the image frames. Additionally, the overall color volume rendered by the captured display or light source, and by its reflections, may differ from that of the camera used to capture the image frames. Accordingly, techniques for correcting color temperature and hue within an image frame have been developed. These techniques may further take into account device characteristics of the camera used to capture the image frames.
Aspects of the present disclosure relate to devices, systems, and methods for color mapping image and video data.
In one exemplary aspect of the present disclosure, a video transmission system for context-dependent color mapping is provided. The video transmission system includes a processor for performing post-production editing of video data comprising a plurality of image frames. The processor is configured to identify a first region of one of the image frames and identify a second region of one of the image frames. The first region includes a first white point having a first hue and the second region includes a second white point having a second hue. The processor is further configured to determine a color mapping function based on the first hue and the second hue, apply the color mapping function to the second region, and generate an output image for each of the plurality of image frames.
In another exemplary aspect of the present disclosure, a method for context-dependent color mapping of image data is provided. The method includes identifying a first region of an image and identifying a second region of the image. The first region includes a first white point having a first hue and the second region includes a second white point having a second hue. The method comprises the following steps: determining a color mapping function based on the first hue and the second hue; applying the color mapping function to a second region of the image; and generating an output image.
In another exemplary aspect of the disclosure, a non-transitory computer-readable medium storing instructions that, when executed by a processor of an image transmission system, cause the image transmission system to perform operations comprising: identifying a first region of the image; and identifying a second region of the image. The first region includes a first white point having a first hue and the second region includes a second white point having a second hue. The operations further comprise: determining a color mapping function based on the first hue and the second hue; applying the color mapping function to a second region of the image; and generating an output image.
In this way, various aspects of the present disclosure provide for the display of images with high dynamic range, wide color gamut, high frame rate, and high resolution, and improvements are realized at least in the technical fields of image projection, image display, holography, signal processing, and the like.
Drawings
These and other more detailed and specific features of the various embodiments are more fully disclosed in the following description, with reference to the accompanying drawings, in which:
FIG. 1 depicts an example process of an image transfer pipeline.
FIG. 2 depicts an example image captured by a camera.
FIG. 3 depicts an example process for identifying a light source.
FIG. 4 depicts an example process for identifying reflections.
FIGS. 5A-5B depict example ray tracing operations.
FIG. 6 depicts an example process of a color mapping operation.
FIG. 7 depicts the example image frame of FIG. 2 after a color mapping operation.
FIG. 8 depicts an example content capture environment.
FIG. 9 depicts an example process of a color mapping operation.
FIG. 10 depicts an example pipeline for performing the process of FIG. 9.
FIG. 11 depicts an example process of a color mapping operation.
Detailed Description
The present disclosure and aspects thereof may be embodied in various forms, including hardware, devices or circuits controlled by computer-implemented methods, computer program products, computer systems and networks, user interfaces, and application programming interfaces; and hardware-implemented methods, signal processing circuits, memory arrays, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and the like. The foregoing is intended merely to give a general idea of various aspects of the present disclosure and is not intended to limit the scope of the present disclosure in any way.
In the following description, numerous details are set forth, such as optical device configurations, timings, operations, etc., to provide an understanding of one or more aspects of the present disclosure. It will be apparent to one skilled in the art that these specific details are merely exemplary and are not intended to limit the scope of the application.
Video encoding of HDR signals
FIG. 1 depicts an example process of an image transfer pipeline (100) showing various stages from image capture to image content display. An image generation block (105) is used to capture or generate an image (102) that may include a sequence of video frames (102). The image (102) may be captured digitally (e.g., by a digital camera) or generated by a computer (e.g., using a computer animation) to provide image data (107). Alternatively, the image (102) may be captured on film by a film camera. The film is converted to a digital format to provide image data (107). In the production phase (110), the image data (107) is edited to provide an image production stream (112).
The image data of the production stream (112) is then provided to a processor (or one or more processors, such as a Central Processing Unit (CPU)) at block (115) for post-production editing. The post-production editing of block (115) may include adjusting or modifying colors or brightness in particular regions of the image to enhance image quality or to achieve a particular appearance of the image according to the authoring intent of the image creator. This is sometimes referred to as "color adjustment" or "color grading". The methods described herein may be performed by the processor at block (115). Other edits (e.g., scene selection and ordering, image cropping, adding computer-generated visual effects, etc.) may be performed at block (115) to produce a final version (117) of the work for release. During post-production editing (115), images or video images are viewed on a reference display (125). The reference display (125) may, if desired, be a consumer-level display or projector.
After post-production (115), the image data of the final work (117) may be transmitted to an encoding block (120) for downstream transmission to decoding and playback devices such as computer monitors, televisions, set-top boxes, movie theatres, and the like. In some embodiments, the encoding block (120) may include audio and video encoders, such as those defined by ATSC, DVB, DVD, Blu-ray, and other transport formats, to generate the encoded bitstream (122). In the receiver, the encoded bitstream (122) is decoded by a decoding unit (130) to generate a decoded signal (132) representing the same or a close approximation of the signal (117). The receiver may be attached to a target display (140) that may have entirely different characteristics than the reference display (125). In this case, a display management block (135) may be configured to map the dynamic range of the decoded signal (132) to the characteristics of the target display (140) by generating a display-mapped signal (137). Additional methods described herein may be performed by the decoding unit (130) or the display management block (135). The decoding unit (130) and the display management block (135) may each comprise their own processor or may be integrated into a single processing unit. While the present disclosure refers to a target display (140), it should be understood that this is merely an example. It should further be appreciated that the target display (140) may include any device configured to display or project light, such as computer displays, televisions, OLED displays, LCD displays, quantum dot displays, movie theatres, consumer and other commercial projection systems, heads-up displays, virtual reality displays, and the like.
Method for identifying different light temperatures
As described above, a captured image, such as the image (102), may contain a plurality of different light sources, such as video displays (e.g., televisions, computer monitors, etc.) and indoor lighting (e.g., luminaires, windows, overhead lights, etc.), as well as reflections of these light sources. The captured image may include one or more still images or one or more image frames of a video. For example, FIG. 2 provides an image frame (200) having a first light source (202) and a second light source (204). The first light source (202) and the second light source (204) may each emit light of a different temperature (or hue). In the example of FIG. 2, the first light source (202) emits light of a warmer hue than the second light source (204). Light projected from both the first light source (202) and the second light source (204) is reflected within the image frame (200). Specifically, the second light source (204) generates a reflection (206) when its light strikes an object within the image frame (200). In some implementations, the image frame (200) includes additional objects that may be illuminated by light from the first light source (202), the second light source (204), or a combination thereof.
Fig. 3 provides a method (300) of identifying light sources of different hues within an image. The method (300) may be performed by, for example, a processor at block (115) for post-production editing. At step (302), the processor receives an image, such as an image frame (200). In some implementations, after receiving the image frame (200), the processor corrects or otherwise alters the image frame (200) to account for shading artifacts and lens geometry of a camera used to capture the image frame (200).
At step (304), the processor identifies a first region having a first white point. For example, the first light source (202) may be identified as the first region. The processor may identify a plurality of pixels having the same or similar hue values as the first region. At step (306), the processor identifies a second region having a second white point. For example, the second light source (204) may be identified as the second region. The identification of the first and second regions may be performed using computer vision algorithms or similar machine-learning-based algorithms. In some embodiments, the processor further identifies a contour of the second region. For example, the outline (or boundary) (208) may be identified as containing direct light emitted by the second light source (204). At step (308), the processor stores the contour of the second region, an alpha mask, and/or other information identifying the second region.
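As a rough illustration of steps (304)-(308), the sketch below (an assumed helper, not the patent's algorithm) thresholds luminance to find candidate light-source regions via connected-component labeling and estimates each region's white point from its mean chromaticity; the function name and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def find_light_regions(frame, luma_threshold=0.9):
    """frame: float RGB image in [0, 1] with shape (H, W, 3)."""
    luma = 0.2627 * frame[..., 0] + 0.6780 * frame[..., 1] + 0.0593 * frame[..., 2]
    bright = luma >= luma_threshold
    labels, n_regions = ndimage.label(bright)           # connected bright components
    regions = []
    for idx in range(1, n_regions + 1):
        mask = labels == idx                             # candidate region (e.g., a light source)
        rgb = frame[mask].mean(axis=0)                   # mean color of the region
        white_point = rgb / max(rgb.sum(), 1e-6)         # normalized chromaticity of the white point
        regions.append({"mask": mask, "white_point": white_point})
    return regions
```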
Method for identifying light reflection
If desired, the reflection from the light source may be determined. For example, FIG. 4 provides a method (400) of identifying a reflection of a light source. The method (400) may be performed by, for example, a processor at block (115) for post-production editing. At step (402), the processor receives an input depth map for the image frame (200). The depth map may be obtained via a light detection and ranging (LiDAR) device, radar, sodar, machine learning algorithms, depth estimation from stereo images, other techniques, or a combination of these and other techniques. At step (404), the processor converts the input depth map into a surface mesh. The mesh defines the spatial positions of objects within the depth map and their surface orientations. At step (406), the processor identifies points in the surface mesh that are spatially located within the second region. Accordingly, the object or device that projects the second light source (204) is identified. In some implementations, the processor maps the contour (208) to the surface mesh.
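A minimal sketch of the geometry recovery in step (404), under the assumption that the depth map is a dense per-pixel depth image and that camera intrinsics (fx, fy, cx, cy) are available; the per-pixel point grid and normals shown here stand in for the surface mesh described above.

```python
import numpy as np

def depth_to_points_and_normals(depth, fx, fy, cx, cy):
    """depth: (H, W) array of depths in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx                      # back-project each pixel
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)      # (H, W, 3) point grid

    # Normals from the cross product of the two local tangent directions.
    dp_dv, dp_du = np.gradient(points, axis=(0, 1))
    normals = np.cross(dp_du, dp_dv)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-9
    return points, normals
```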
At step (408), the processor performs ray tracing operations from the point of view of the camera into the scene within the image frame (200). For example, FIGS. 5A and 5B provide example ray tracing operations. FIG. 5A illustrates light WP1 (e.g., light having a first white point) projected by a first light source (202) within a first environment (500). The dashed line (504) is an outline of the content that is visible to the camera (502) within the environment (500) (and thus included in the corresponding image frame (200)). In other words, the dashed line (504) illustrates the field of view of the camera (502). The content includes an object (506) and a display device (508) (e.g., a television). Light WP2 (e.g., light having a second white point) is projected directly into the camera (502) by the display device (508) (e.g., the second light source (204)). The dashed line (510) illustrates the portion of the field of view of the camera (502) in which the display device (508) appears. The solid lines (512) represent the light rays WP1 projected by the first light source (202) (not shown). The light rays (512) include rays received directly from the first light source (202) and rays that have been reflected from a surface. A surface normal of the object (506) may be determined based on the depth map.
FIG. 5B illustrates light projected by the second light source (204) (e.g., the display device (508)) within a second environment (550). Similar to the first environment (500), the dashed line (504) is an outline of what is visible to the camera (502) within the second environment (550). The dashed line (510) identifies the profile of light projected by the display device (508) that is received directly by the camera (502). Light projected by the display device (508) toward the object (506) is represented by a solid line (555). Light represented by the solid line (555) may reflect off of the object (506) before being received by the camera (502). A surface normal of the object (506) may be determined based on the depth map.
The reflection of light projected by the display device (508) (or some other light source, such as the second light source (204)) may be determined by following rays reflected about the surface normals of the mesh surface identified at step (406). Where a ray travels from the camera (502) viewpoint, reflects about a surface normal, and eventually intersects a point within the contour (208) of the second light source (204), the ray is determined to be a reflection of the second light source (204). At step (410), the processor generates a binary or alpha mask based on the ray tracing operation. For example, each point of the surface map may be assigned a binary or alpha value based on whether light from the second light source (204) hits (i.e., intercepts) the corresponding point. Accordingly, the reflection of the light projected by the second light source (204) is determined. In some embodiments, instead of binary values, probability values or alpha values may be assigned to each point of the surface map, such as values of 1 to 10, values of 1 to 100, decimal values of 0 to 1, and so on. The reflection may then be determined based on the probability value exceeding a threshold. For example, on a scale of 1 to 10, any value higher than 5 may be treated as a reflection of the second light source (204). The probability threshold may be adjusted during post-production editing (115).
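A minimal sketch of the per-pixel reflection test in step (408), under the simplifying assumption that the second light source (204) can be modeled as a planar rectangle (its contour (208)) given by a corner point and two edge vectors in camera coordinates; the function name and this geometric representation are illustrative, not the patent's formulation.

```python
import numpy as np

def reflection_mask(points, normals, plane_origin, edge_u, edge_v, eps=1e-6):
    """points, normals: (H, W, 3) arrays; the display rectangle is plane_origin + a*edge_u + b*edge_v."""
    view = points / (np.linalg.norm(points, axis=-1, keepdims=True) + eps)        # camera -> surface rays
    refl = view - 2.0 * np.sum(view * normals, axis=-1, keepdims=True) * normals  # reflect about the normal

    plane_n = np.cross(edge_u, edge_v)
    plane_n = plane_n / np.linalg.norm(plane_n)
    denom = refl @ plane_n
    denom = np.where(np.abs(denom) < eps, eps, denom)
    t = ((plane_origin - points) @ plane_n) / denom
    hit = points + t[..., None] * refl                                            # ray/plane intersection

    rel = hit - plane_origin                                                      # rectangle-local coordinates
    a = (rel @ edge_u) / (edge_u @ edge_u)
    b = (rel @ edge_v) / (edge_v @ edge_v)
    return (t > 0) & (a >= 0) & (a <= 1) & (b >= 0) & (b <= 1)                    # True where the reflected ray hits the contour
```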
In some implementations, the reflection of light projected by the second light source (204) is determined by analyzing a plurality of sequential image frames. For example, multiple image frames within the video data (102) may capture the same or a similar environment (e.g., the environment (500)). Within the environment, and assuming that the positions of the camera and the scene are otherwise relatively stationary from one image frame to the next, a change in pixel value may be attributed to a change in the light projected by the second light source (204). Accordingly, the processor may identify which pixels have changed in value within subsequent image frames. These changed pixels may be used to identify the contour (208). Additionally, the processor may observe which pixels outside of the outline (208) change value within subsequent image frames to determine the reflection of the second light source (204).
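A minimal sketch of this frame-differencing approach, assuming a static camera and a stack of consecutive frames; the change threshold and helper name are illustrative assumptions.

```python
import numpy as np

def changed_pixel_mask(frames, change_threshold=0.02):
    """frames: float array of shape (T, H, W, 3) from a (mostly) static camera."""
    diffs = np.abs(np.diff(frames, axis=0)).max(axis=-1)   # per-pixel change between consecutive frames
    return (diffs > change_threshold).any(axis=0)          # True where the value changed at some point
```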
Color reshaping method
After identifying the first and second regions and obtaining the binary or alpha mask, the processor performs a color mapping operation to adjust the hue of the second region and its corresponding reflection. FIG. 6 provides a method (600) for performing a color reshaping operation. The method (600) may be performed by, for example, a processor at block (115) for post-production editing. At step (602), the processor identifies pixels outside the second region. For example, the processor identifies the exterior of the contour (208). At step (604), the processor creates a three-dimensional color point cloud. For example, the processor creates a color volume in the ICtCp color space, the CIELAB color space, or the like. At step (606), the processor labels each point within the color point cloud based on the results of the ray tracing operation performed in step (408). For example, each point within the color point cloud may be labeled "WP1" (e.g., as part of, or within, the first region, the first white point, the first light source (202), etc.), "WP2 direct" (e.g., as a direct or immediate reflection from the second light source (204)), or "WP2 indirect" (e.g., as an indirect or secondary reflection from the second light source (204)).
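A minimal sketch of steps (602)-(606), assuming the direct and indirect reflection masks from the ray tracing operation are available; the CIELAB conversion via scikit-image is used purely as an illustration of the color point cloud and its labels.

```python
import numpy as np
from skimage.color import rgb2lab

def labelled_color_cloud(frame, contour_mask, direct_mask, indirect_mask):
    """frame: float RGB image; masks: (H, W) booleans from steps (306)-(410)."""
    lab = rgb2lab(frame)                              # per-pixel CIELAB values (the color volume)
    outside = ~contour_mask                           # pixels outside the second region
    cloud = lab[outside]                              # 3-D color point cloud
    labels = np.full(cloud.shape[0], "WP1", dtype=object)
    labels[direct_mask[outside]] = "WP2 direct"       # direct reflection of the second light source
    labels[indirect_mask[outside]] = "WP2 indirect"   # secondary reflection
    return cloud, labels
```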
At step (608), the processor identifies a boundary between the first region and the second region. These boundaries define the color values (e.g., R, G, and B values) used to determine whether a pixel is labeled WP1 or WP2. In some embodiments, a cluster analysis operation, such as k-means clustering, is used to determine the color boundaries. In cluster analysis, a number of algorithms may be used to group sets of objects that are similar to each other. In k-means clustering, each pixel is provided as a vector. The algorithm takes each vector and represents each pixel cluster with a single average vector. Other clustering methods may be used, such as hierarchical clustering, biclustering, self-organizing maps, and the like. The processor may use the results of the cluster analysis to confirm the accuracy of the ray tracing operation.
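A minimal sketch of the cluster analysis of step (608): a two-cluster k-means over the color point cloud whose centroid midpoint serves as the WP1/WP2 color boundary. The use of scikit-learn and the midpoint rule are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_boundary(cloud):
    """cloud: (N, 3) color vectors, one per pixel."""
    km = KMeans(n_clusters=2, n_init=10).fit(cloud)
    c0, c1 = km.cluster_centers_
    boundary = (c0 + c1) / 2.0        # midpoint used as the WP1/WP2 decision boundary
    return boundary, km.labels_
```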
At step (610), the processor calculates the color proximity of each pixel to each white point. For example, the value of each pixel labeled "WP2 direct" or "WP2 indirect" is compared to the value of the second white point and the value of the k-means boundary identified at step (608). The value of each pixel labeled "WP1" is compared to the value of the first white point and the value of the k-means boundary identified at step (608). At step (612), the processor generates a weight map for white point adjustment. For example, each pixel whose color proximity distance between the first white point and the k-means boundary is less than a distance threshold (e.g., 75%) does not receive a white point adjustment (e.g., a weight value of 0.0). Each pixel whose color proximity distance between the second white point and the k-means boundary is less than the distance threshold receives a full white point adjustment (e.g., a weight value of 1.0). Pixels between these distance thresholds are weighted between the first white point and the second white point (e.g., a value between 0.0 and 1.0) to avoid an abrupt color boundary.
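A minimal sketch of the weight map of steps (610)-(612), assuming color proximity is measured as Euclidean distance in the working color space (e.g., CIELAB); the ramp thresholds correspond to the distance thresholds described above and the exact metric is a design choice.

```python
import numpy as np

def white_point_weight_map(lab_image, wp1, wp2, low=0.25, high=0.75):
    """lab_image: (H, W, 3); wp1, wp2: white point values in the same color space."""
    d1 = np.linalg.norm(lab_image - wp1, axis=-1)
    d2 = np.linalg.norm(lab_image - wp2, axis=-1)
    s = d1 / (d1 + d2 + 1e-9)                            # 0 near WP1, 1 near WP2
    return np.clip((s - low) / (high - low), 0.0, 1.0)   # weight 0 = no adjustment, 1 = full adjustment
```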
After the weight map is generated, pixels within the outline (208) are given weight values of 1.0 for full white point adjustment. At step (614), the processor applies a weight map to the image frame (200). For example, any pixel having a weight value of 1.0 is color mapped to a hue similar to the hue of the first white point. Any pixel having a weight value between 0.0 and 1.0 is color mapped to a hue between the first white point and the second white point. Accordingly, both the second light source (204) and the reflection of the second light source (204) undergo a hue adjustment to match or resemble the hue of the first light source (202). At step (616), the processor generates an output image for each image frame (200) of the video data (102). In some embodiments, the output image is an image frame (200) to which a weight map is applied. In other embodiments, the output image is an image frame (200) and the weight map is provided as metadata. Fig. 7 provides an exemplary second image frame (700) that is a color mapped version of the image frame (200). As shown in the second image frame (700), the mapped second light source (704) and the mapped reflection (706) have undergone tonal adjustment as compared to the second light source (204) and reflection (206) of the image frame (200).
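A minimal sketch of the white point adjustment applied in step (614), using a simple per-channel (von Kries style) gain from the second white point toward the first, blended by the weight map; the gain model is an assumption, not the patent's specific mapping function.

```python
import numpy as np

def apply_weight_map(frame, weights, wp1_rgb, wp2_rgb):
    """frame: float RGB image in [0, 1]; weights: (H, W) map from step (612)."""
    gains = np.asarray(wp1_rgb) / (np.asarray(wp2_rgb) + 1e-9)   # WP2 -> WP1 channel gains
    remapped = np.clip(frame * gains, 0.0, 1.0)                  # fully adjusted image
    w = weights[..., None]
    return (1.0 - w) * frame + w * remapped                      # per-pixel blend toward the first white point
```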
In some embodiments, the amount of hue adjustment of the second light source (204) depends on the position of the corresponding pixel and the gradient of the image frame (200). For example, after labeling each point within the color point cloud based on the results of the ray tracing operation (at step (606)), the processor may calculate a boundary mask indicating how far each pixel in the image frame (200) is from the outline (208). For example, the mask m_exp may be an alpha mask indicating how likely a pixel is to be part of the first light source (202). The mask m_cont may be an alpha mask indicating how likely a pixel is to be part of the second light source (204) or a reflection of the second light source (204). The masks m_exp and m_cont are determined with respect to the source mask m_s, the output of step (606). In particular, m_exp is a pixel mask that extends beyond (or away from) the outline (208) of the second light source (204), and m_cont is a pixel mask that contracts within the outline (208), toward the center of the second light source (204).
After determining m_exp and m_cont, the processor creates a weight map for white point adjustment. The weight map is based both on the distance from each pixel to the contour (208), as provided by the masks m_exp and m_cont, and on the surface normal gradient change between the masks m_exp and m_cont. In some implementations, to determine the surface normal gradient change m_Gradient, the processor observes m_exp,xy, m_cont,xy, and m_s,xy, the corresponding mask values for a given pixel (x, y). If m_s,xy is equal to 1 and m_cont,xy is not equal to 1, the processor determines with high certainty (e.g., about 100% certainty) that pixel (x, y) belongs to the second light source (204), and m_Gradient,xy is given a value of 1, which results in a full white point adjustment of pixel (x, y). If m_s,xy is equal to 0 and m_exp,xy is not equal to 1, the processor determines with high certainty that pixel (x, y) belongs to the first light source (202), and m_Gradient,xy is given a value of 0, which results in no white point adjustment of pixel (x, y). For pixels falling between these ranges, the processor identifies the surface gradient change between the individual pixels of m_exp and m_cont (from steps (404) to (408)) based on the surface normal of each pixel. The alpha mask m_Gradient is weighted based on the surface gradient change such that pixels (x, y) with a lower gradient change receive a greater white point adjustment and pixels (x, y) with a greater gradient change receive a smaller white point adjustment. In some implementations, the processor may use predetermined thresholds to determine the value of m_Gradient. For example, pixels may be weighted linearly for any gradient change value between 2% and 10%. Accordingly, any pixel with a gradient change of less than 2% is assigned a value of 1, and any pixel with a gradient change of greater than 10% is assigned a value of 0. These thresholds may be altered based on user input during post-production editing.
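A minimal sketch of the gradient weighting just described, assuming the per-pixel surface normal gradient change is available as a fraction; pixels below the 2% threshold receive full adjustment (m_Gradient = 1), pixels above 10% receive none, with a linear ramp in between.

```python
import numpy as np

def gradient_mask(gradient_change, low=0.02, high=0.10):
    """gradient_change: (H, W) surface-normal gradient change per pixel, as a fraction."""
    return np.clip((high - gradient_change) / (high - low), 0.0, 1.0)   # 1 below 2%, 0 above 10%, linear between
```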
In some implementations, to determine the distance from each pixel to the contour (208), the boundary defined by the contour (208) is treated as the "center", and the distance mask m_Distance has a value of 0.5 for pixels directly on the contour (208). For pixels spatially located toward m_exp,xy (i.e., away from the center of the second light source (204)), the value of m_Distance decreases. For example, a pixel (x, y) one pixel away from the contour (208) may have an m_Distance value of 0.4, a pixel (x, y) two pixels away from the contour (208) may have an m_Distance value of 0.3, and so on. Conversely, the value of m_Distance increases for pixels spatially located toward m_cont,xy (i.e., toward the center of the second light source (204)). For example, a pixel (x, y) one pixel away from the contour (208) may have an m_Distance value of 0.6, a pixel (x, y) two pixels away from the contour (208) may have an m_Distance value of 0.7, and so on. The rate at which m_Distance increases or decreases may be altered during post-production editing based on user input.
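A minimal sketch of m_Distance using a Euclidean distance transform relative to the source mask, mapped to approximately 0.5 at the contour (208), decreasing outward and increasing inward by a per-pixel step; the step size is an assumed, user-tunable parameter.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_mask(source_mask, step=0.1):
    """source_mask: boolean mask of the second light source (204)."""
    d_out = distance_transform_edt(~source_mask)    # distance to the source for outside pixels
    d_in = distance_transform_edt(source_mask)      # distance to the contour for inside pixels
    signed = np.where(source_mask, d_in, -d_out)    # positive inside, negative outside
    return np.clip(0.5 + step * signed, 0.0, 1.0)   # ~0.5 at the contour (208)
```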
The final alpha mask for white point adjustment, m_Final, may be based on m_Gradient and m_Distance. In particular, m_Gradient and m_Distance may be multiplied to generate the final alpha mask m_Final. The use of m_Final smooths the spatial boundary between the second light source (204) and any background light generated by the first light source (202). However, if the surface gradient between the second light source (204) and pixels outside the outline (208) varies greatly, the final alpha mask m_Final may not smooth the adjustment between these pixels and may preserve the difference between them. In some implementations, to avoid sharp spatial boundaries between pixels, the weight map generated at step (612), or alternatively the weight map m_Final, may be blurred via a Gaussian convolution. The convolution kernel of the Gaussian convolution may have different parameters based on whether a pixel is within the second light source (204) or within a reflection (206).
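A minimal sketch of combining the two masks into m_Final and softening spatial boundaries with a Gaussian blur, with a different (assumed) kernel width inside the source than in its reflections.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def final_mask(m_gradient, m_distance, source_mask, sigma_source=1.0, sigma_reflection=3.0):
    m_final = m_gradient * m_distance                                    # combine the two alpha masks
    blurred_source = gaussian_filter(m_final, sigma=sigma_source)
    blurred_reflection = gaussian_filter(m_final, sigma=sigma_reflection)
    return np.where(source_mask, blurred_source, blurred_reflection)    # different kernel per region
```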
In some implementations, the user can alter the amount of color mapping performed by the processor. For example, the post-production block (115) may allow the user to adjust the hue of the mapped second light source (704). In some implementations, the user selects the hue of the mapped second light source (704) and the mapped reflection (706) by moving the hue slider. The hue slider may define a plurality of hues between the hues of the first light source (202) and the second light source (204). In some embodiments, the user may select additional hues in addition to hues similar to the first light source (202) and the second light source (204). The user may directly adjust the values of the weight map or the values of the binary or alpha mask. Additionally, the user may directly select the second light source (204) within the image frame (200) by directly providing the contour (208). Additionally, the user may adjust the color proximity distance threshold for determining whether the pixel is assigned a full white point adjustment (e.g., a weight value of 1.0), an intermediate white point adjustment (e.g., a weight value between 0.0 and 1.0), or is not assigned a white point adjustment.
In some implementations, ambient light from the first light source (202) may bounce off the second light source (204) itself and form reflections (e.g., mostly global, diffuse, or Lambertian ambient reflections) within the display of the second light source (204). In such an example, the weight map may weight pixels labeled "WP1" toward the second white point according to the pixel luminance or the relative luminance of the second light source (204). This may be facilitated or implemented by adding a global weight to all pixels labeled "WP2". Accordingly, the overall influence of the second light source (204) on the image hue may be reduced. Additionally, the global weight may be modulated by the luminance or brightness of the pixels labeled "WP2". Dark pixels are more likely to be affected by "WP1" via diffuse reflection off the display screen, while bright pixels represent the active white point ("WP2") of the display.
The weight map and binary or alpha mask may be provided as metadata included with the encoded bitstream (122). Accordingly, instead of performing color mapping at the post-production block (115), the color mapping is performed by the decoding unit (130) or a processor associated with the decoding unit (130). In some implementations, a user can receive the weight map as metadata and manually change the hue color or value of the weight map after decoding of the encoded bitstream (122).
In embodiments where the second light source (204) is a display device, such as the display device (508), the processor may identify content shown on the display device. In particular, the processor may determine that the content provided on the display device is stored on a server associated with the video transmission pipeline (100). Techniques for such determination may be found in U.S. Patent No. 9,819,974, "Image Metadata Creation for Improved Image Processing and Content Delivery," which is incorporated herein by reference in its entirety. The processor may then retrieve the content from the server and replace the content shown on the display device within the image frame (200) with the content from the server. The content on the server may have a higher resolution and/or a higher dynamic range. Accordingly, details lost from the image frame (200) may be recovered via such replacement. In some implementations, rather than replacing the entire content shown on the display, the processor identifies differences between the content shown on the display and the content stored on the server. The processor then adds or replaces only the identified differences. Additionally, in some implementations, noise may be added back to the replaced content. The amount of noise may be set by a user or determined by the processor to maintain authenticity between the surrounding content and the replaced content of the image frame (200). A machine learning program or other algorithm that is optimized to improve or alter the dynamic range of existing video content may also be applied.
Dynamic color mapping
Content production increasingly uses active displays, such as light-emitting diode (LED) walls, to display backgrounds in real time. Accordingly, part of the shooting environment, or even the entire shooting environment, may be made up of display devices. FIG. 8 provides an example photographic environment (800) composed of multiple display devices (802) (e.g., a first display (802a), a second display (802b), and a third display (802c)) and a camera (804). However, the luminance range of the multiple display devices (802) and/or the capabilities of the camera (804) may not reach the limits required for HDR content production. For example, the maximum luminance of the multiple display devices (802) may be too low, such that the captured footage appears unnatural or unrealistic when combined with additional scene illumination from outside the captured scene. The luminance range of the multiple display devices (802) may also differ from the capabilities of the camera (804), creating a quality difference between content provided on the multiple display devices (802) and content captured by the camera (804).
FIG. 9 provides a method (900) for performing color mapping operations within a photographic environment, such as the photographic environment (800). The method (900) may be performed by, for example, a processor at block (115) for post-production editing. At step (902), the processor identifies operational characteristics of a backdrop display, such as the plurality of display devices (802). For example, the processor may identify the spectral emission capability of the backdrop display and the viewing angle properties of the backdrop display, such as angle dependence. These characteristics can assist in identifying how the color and brightness of the backdrop display change based on the capture angle of the image. At step (904), the processor identifies operational characteristics of a camera, such as the camera (804), used to capture the photographic environment (800). The operational characteristics of the camera (804) may include, for example, the spectral transmittance of the lens, vignetting of the lens, shading of the lens, edges of the lens, the spectral sensitivity of the camera, the light sensitivity of the camera, and the signal-to-noise ratio (SNR) (or dynamic range) of the camera, among others. The operating characteristics of the plurality of display devices (802) and the operating characteristics of the camera (804) may both be identified based on metadata retrieved by the processor.
At step (906), the processor performs a tone mapping operation on content provided via the backdrop display. Tone mapping the content fits the content to the operating characteristics of the multiple display devices (802) and the camera (804), and thus to their limitations. In some embodiments, the tone mapping operation includes clipping tone details in some pixels to achieve higher luminance. This may match the signals provided by the multiple display devices (802) with the signals captured by the camera (804). At step (908), the processor records the corresponding mapping function. For example, the mapping function of the tone mapping operation is stored as metadata so that a downstream processor can reverse the tone curve of the captured image.
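A minimal sketch of steps (906)-(908), using a simple invertible Reinhard-style curve as a stand-in for the tone mapping operation, with its parameters recorded as metadata so a downstream processor can reverse the curve; the specific curve is an illustrative assumption, not the patent's mapping.

```python
import numpy as np

def tone_map(hdr, peak_luminance):
    """hdr: linear-light backdrop signal; peak_luminance: wall peak in the same units."""
    mapped = peak_luminance * hdr / (1.0 + hdr)                  # forward curve shown on the LED wall
    metadata = {"curve": "reinhard", "peak": peak_luminance}     # recorded mapping function (step (908))
    return mapped, metadata

def inverse_tone_map(mapped, metadata):
    y = np.clip(mapped / metadata["peak"], 0.0, 1.0 - 1e-6)
    return y / (1.0 - y)                                         # reverses the forward curve downstream
```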
At step (910), the processor captures a scene using the camera (804) to generate an image frame (200). At step (912), the processor processes the image frame (200). For example, the processor may receive a depth map of the photographic environment (800). The processor may identify light reflections within the capture environment (800), as described with respect to the method (400). However, illumination sources may be present outside the viewport of the camera (804). In some implementations, these illumination sources may be known to the processor and used for viewpoint adjustment. Alternatively, a mirror ball may be used to capture illumination outside the viewport of the camera (804). The camera (804) and/or the processor may use the mirror ball to determine the origin of any light in the photographic environment (800). LiDAR capture or similar techniques may also be used to determine illumination outside the viewport of the camera (804). For example, a test pattern may be provided on each of the multiple display devices (802). Any non-display light sources are turned on to provide illumination. With the illumination on and the test patterns displayed, a LiDAR capture is used to scan the photographic environment (800). In this way, a model of the shooting environment (800) captured by the camera (804) may be brought into the same geometric world view as the depth map from the viewpoint of the camera (804).
FIG. 10 provides an exemplary pipeline (1000) capable of performing the method (900). Additionally, the pipeline (1000) provides a process for adding or altering content provided on a backdrop display during post-production editing. At block (1004), a backdrop renderer (1002) generates backdrop content. The backdrop content is an HDR signal provided to a first processor (1006). The first processor (1006) performs a tone mapping operation on the HDR signal to map the HDR signal to a physical limit of a backdrop display, such as a plurality of display devices (802). Metadata describing the tone mapping process may be provided from the first processor (1006) to the second processor (1008). The mapped HDR signal is then provided on a plurality of display devices (802) within a shooting environment (800). The camera (804) captures a shooting environment (800) including content provided via the display device (802), an object (806) within the shooting environment (800), and ambient light provided to and reflected within the shooting environment (800).
The second processor (1008) receives the captured shooting environment (800) and a depth map of the shooting environment (800). The second processor (1008) performs operations described with respect to the method (900), including: detecting a world geometry of a shooting environment (800); identifying light reflections within the capture environment (800); determining and applying an inverse of the tone mapping function; and determining characteristics of the camera (804) and the plurality of display devices (802). An output HDR signal for viewing content captured by a camera (804) is provided to a post-processing and distribution block (1010) that provides the content to a target display (140).
Although the method (900) depicts a global tone mapping function (e.g., a single tone curve for the multiple display devices (802)), a local tone mapping function may also be used. FIG. 11 illustrates a process (1100) for applying spatial weighting to a tone mapping function. A source HDR image (1102) is provided to a first processor (1104). At step (1106), the first processor (1104) separates the source HDR image (1102) into a foreground layer and a modulation layer. The separation may be achieved via a dual-modulation light field algorithm (such as the algorithm used with Dolby professional reference monitors or an algorithm from a dual-layer encoding scheme) or another reversible local tone mapping operator. At step (1108), a "reduced" image is then provided via the multiple display devices (802). The shooting environment (800) is captured by the camera (804), and both the captured video data and the modulation layer are provided to a second processor (1110). The second processor (1110) detects the content provided via the multiple display devices (802) and applies an inverse tone mapping function. At step (1112), the reconstructed HDR video data and modulation layer are then provided to a downstream pipeline.
Temporal compensation may also be used when performing the previously described color mapping operations. For example, content provided via multiple display devices (802) may change during the course of video data. These changes over time may be recorded by the first processor (1006). As one example, the processor (1006) may process changes in light level of content provided by the plurality of display devices (802), but the plurality of display devices (802) provide only minor illumination changes during the capture. After capturing the shooting environment (800), the expected lighting changes are "put" into the video data during post-processing. As another example, a "night photography" or night illumination may be simulated during post-processing to maintain adequate scene illumination. Other lighting preference settings or appearances may also be implemented.
Non-display light sources, such as matrix light-emitting diode (LED) devices, may also be implemented in the photographic environment (800). Off-screen fill and key lights may be controlled and balanced to avoid over-expansion of the video data after capture. The post-production processor (115) may balance diffuse reflections against direct reflection highlights on real objects in the photographic environment (800). For example, the reflections may come from smooth surfaces such as the eyes, moisture, oily skin, and the like. Additionally, on-screen light sources captured directly by the camera (804) may be displayed via the multiple display devices (802). The on-screen light sources may then be tone mapped and restored to their intended brightness after capture.
The video transmission systems and methods described above may provide brightness adjustment based on a viewer adaptation state. Systems, methods, and devices according to the present disclosure may employ any one or more of the following configurations.
(1) A video transmission system for context-dependent color mapping, the video transmission system comprising: a processor for performing post-production editing of video data comprising a plurality of image frames. The processor is configured to: identifying a first region of one of the image frames, the first region including a first white point having a first hue; identifying a second region of one of the image frames, the second region including a second white point having a second hue; determining a color mapping function based on the first hue and the second hue; applying the color mapping function to the second region; and generating an output image for each of the plurality of image frames.
(2) The video transmission system of (1), wherein the processor is further configured to: A depth map associated with one of the image frames is received and converted to a surface mesh.
(3) The video transmission system of (2), wherein the processor is further configured to: Performing a ray tracing operation from a camera viewpoint of one of the image frames using the surface mesh, and creating a binary mask based on the ray tracing operation, wherein the binary mask indicates reflections of the first white point and the second white point.
(4) The video transmission system of any one of (2) to (3), wherein the processor is further configured to: a surface normal gradient change alpha mask is generated for one of the image frames based on the surface grid, a spatial distance alpha mask is generated for one of the image frames, and the color mapping function is determined based on the surface normal gradient change alpha mask and the spatial distance alpha mask.
(5) The video transmission system of any one of (1) to (4), wherein the processor is further configured to: a three-dimensional color point cloud is created for the background image and each point cloud pixel is labeled.
(6) The video transmission system of any one of (1) to (5), wherein the processor is further configured to: determine, for each pixel in one of the image frames, a distance between a value of the pixel and each of the first hue and the second hue.
(7) The video transmission system of any one of (1) to (6), wherein the second region is a video display device, and wherein the processor is further configured to: identifying a secondary image within the second region; receiving a copy of the secondary image from a server, wherein the copy of the secondary image received from the server has at least one of a higher resolution, a higher dynamic range, or a wider color gamut than the secondary image identified within the second region; and replacing the secondary image within the second region with a copy of the secondary image.
(8) The video transmission system of any one of (1) to (7), wherein the processor is further configured to: determining an operating characteristic of a camera associated with the video data and determining an operating characteristic of a backdrop display, wherein the color mapping function is further based on the operating characteristic of the camera and the operating characteristic of the backdrop display.
(9) The video transmission system of any one of (1) to (8), wherein the processor is further configured to: subtracting the second region from one of the image frames to create a background image, identifying a change in a value of at least one pixel in the background image in a subsequent image frame, and applying the color mapping function to at least one pixel in the background image in response to the change in the value.
(10) The video transmission system of any one of (1) to (9), wherein the processor is further configured to: a tone mapping operation is performed on second video data displayed via a backdrop display, a mapping function is recorded based on the tone mapping operation, and an inverse of the mapping function is applied to one of the image frames.
(11) The video transmission system of any one of (1) to (10), wherein the processor is further configured to: a third region of one of the image frames is identified, the third region comprising a reflection of a light source defined by the second region, and the color mapping function is applied to the third region.
(12) The video transmission system of any one of (1) to (11), wherein the processor is further configured to: The color mapping function is stored as metadata, and the metadata and the output image are transmitted to an external device.
(13) A method for context-dependent color mapping of image data, the method comprising: identifying a first region of an image, the first region including a first white point having a first hue; identifying a second region of the image, the second region including a second white point having a second hue; determining a color mapping function based on the first hue and the second hue; applying the color mapping function to a second region of the image; and generating an output image.
(14) The method according to (13), further comprising: identifying a third region of the image, the third region comprising a reflection of a light source defined by the second region; and applying the color mapping function to the third region.
(15) The method according to any one of (13) to (14), further comprising: receiving a depth map associated with the image; and converting the depth map into a surface mesh.
(16) The method according to (15), further comprising: performing a ray tracing operation from a camera viewpoint of the image using the surface mesh; and creating a binary mask based on the ray tracing operation, wherein the binary mask indicates reflections of the first white point and the second white point.
(17) The method according to any one of (15) to (16), further comprising: generating a surface normal gradient change alpha mask for the image based on the surface mesh; generating a spatial distance alpha mask for the image; and determining the color mapping function based on the surface normal gradient change alpha mask and the spatial distance alpha mask.
(18) The method according to any one of (13) to (17), further comprising: subtracting the second region from the image to create a background image; creating a three-dimensional color point cloud for the background image; and labeling each point cloud pixel.
(19) The method according to any one of (13) to (18), further comprising: determining an operating characteristic of a camera associated with the image data and determining an operating characteristic of a backdrop display, wherein the color mapping function is further based on the operating characteristic of the camera and the operating characteristic of the backdrop display.
(20) A non-transitory computer-readable medium storing instructions that, when executed by an electronic processor, cause the electronic processor to perform the operations of any one of (13) to (19).
With respect to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, while the steps of such processes, etc. have been described as occurring in a particular ordered sequence, such processes may be practiced with the described steps performed in an order different than that described herein. It is further understood that certain steps may be performed concurrently, other steps may be added, or certain steps described herein may be omitted. In other words, the process descriptions herein are provided for the purpose of illustrating certain embodiments and should in no way be construed as limiting the claims.
Accordingly, it is to be understood that the above description is intended to be illustrative, and not restrictive. Many embodiments and applications other than the examples provided will be apparent from a reading of the above description. The scope should be determined not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that the technology discussed herein will evolve in the future, and that the disclosed systems and methods will be incorporated into such future embodiments. In summary, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the art described herein unless an explicit indication to the contrary is made herein. In particular, the use of singular articles such as "a," "the," "said," and the like should be understood to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing detailed description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments incorporate more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter.
Aspects of the invention may be understood from the enumerated example embodiments (EEEs) below:
EEE 1. A video transmission system for context-dependent color mapping, the video transmission system comprising:
a processor for performing post-production editing of video data comprising a plurality of image frames, the processor configured to:
Identifying a first region of one of the image frames, the first region including a first white point having a first hue;
Identifying a second region of one of the image frames, the second region including a second white point having a second hue;
determining a color mapping function based on the first hue and the second hue;
applying the color mapping function to the second region; and
An output image is generated for each of the plurality of image frames.
EEE 2. The video transmission system of EEE 1, wherein the processor is further configured to:
Receiving a depth map associated with one of the image frames; and
The depth map is converted to a surface mesh.
EEE 3. The video transmission system of EEE 2, wherein the processor is further configured to:
Performing a ray tracing operation from a camera viewpoint of one of the image frames using the surface mesh; and
A binary mask is created based on the ray tracing operation, wherein the binary mask indicates reflections of the first white point and the second white point.
EEE 4. The video transmission system of EEE 2 or 3, wherein the processor is further configured to:
Generating a surface normal gradient change alpha mask for one of the image frames based on the surface mesh;
Generating a spatial distance alpha mask for one of the image frames; and
The color mapping function is determined based on the surface normal gradient change alpha mask and the spatial distance alpha mask.
EEE 5. The video transmission system of any one of EEEs 1-4, wherein the processor is further configured to:
creating a three-dimensional color point cloud for one of the image frames; and
Each point cloud pixel is marked.
EEE 6. The video transmission system of any one of EEEs 1-5, wherein the processor is further configured to:
A distance between a value of the pixel and each of the first hue and the second hue is determined for each pixel in one of the image frames.
EEE 7. The video transmission system of any one of EEEs 1-6, wherein the second region is a video display device, and wherein the processor is further configured to:
Identifying a secondary image within the second region;
Receiving a copy of the secondary image from a server, wherein the copy of the secondary image received from the server has at least one of a higher resolution, a higher dynamic range, or a wider color gamut than the secondary image identified within the second region; and
And replacing the secondary image in the second area with a copy of the secondary image.
EEE 8. The video transmission system of any one of EEEs 1-7, wherein the processor is further configured to:
Determining an operational characteristic of a camera associated with the video data; and
The operating characteristics of the background display are determined,
Wherein the color mapping function is further based on an operating characteristic of the camera and an operating characteristic of the background display.
EEE 9. The video transmission system of any one of EEEs 1-8, wherein the processor is further configured to:
subtracting the second region from one of the image frames to create a background image;
Identifying a change in a value of at least one pixel in the background image in a subsequent image frame; and
The color mapping function is applied to at least one pixel in the background image in response to the value change.
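(Illustrative sketch.) EEE 9 removes the second region and then watches the remaining background for pixels whose values change over time (for example, moving spill or reflections from the captured display), remapping only those. The masking, threshold, and diagonal-gain remap below are all assumed.

```python
import numpy as np

def background_without_region(frame, second_mask):
    """'Subtract' the second region: zero it out, keeping only the surrounding background."""
    bg = frame.copy()
    bg[second_mask] = 0.0
    return bg

def changed_pixels(prev_bg, next_bg, threshold=0.02):
    """Background pixels whose value changed between consecutive frames."""
    return np.abs(next_bg - prev_bg).max(axis=-1) > threshold

def remap_changed(frame, change_mask, gains):
    """Apply the color mapping only to the pixels flagged as changed."""
    out = frame.copy()
    out[change_mask] = np.clip(out[change_mask] * gains, 0.0, 1.0)
    return out
```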
EEE 10. The video transmission system of any one of EEEs 1-9, wherein the processor is further configured to:
perform a tone mapping operation on second video data displayed via a background display;
record a mapping function based on the tone mapping operation; and
apply an inverse of the mapping function to the one of the image frames.
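(Illustrative sketch.) EEE 10 records the tone mapping applied to the content fed to the background display and applies its inverse to the captured frame. The sketch records the forward curve as a 1-D lookup table and inverts it by interpolating the table in reverse; the Reinhard-style curve is an arbitrary placeholder, not the curve the patent uses.

```python
import numpy as np

def forward_tone_curve(x, a=0.6):
    """Placeholder tone curve applied to content sent to the background display."""
    return x / (x + a)

def record_lut(curve, size=1024):
    """Record the mapping function as a 1-D lookup table."""
    grid = np.linspace(0.0, 1.0, size)
    return grid, curve(grid)

def apply_inverse_lut(values, grid, lut):
    """Approximately undo the recorded curve by interpolating the LUT in reverse."""
    return np.interp(values, lut, grid)

grid, lut = record_lut(forward_tone_curve)
captured = np.array([0.10, 0.25, 0.37])              # values observed in the captured frame
recovered = apply_inverse_lut(captured, grid, lut)    # back toward the pre-tone-mapped signal
```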
EEE 11. The video transmission system of any one of EEEs 1-10, wherein the processor is further configured to:
identify a third region of the one of the image frames, the third region comprising a reflection of a light source defined by the second region; and
apply the color mapping function to the third region.
EEE 12. The video transmission system of any one of EEEs 1-11, wherein the processor is further configured to:
store the color mapping function as metadata; and
transmit the metadata and the output image to an external device.
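(Illustrative sketch.) EEE 12 only requires that the mapping travel as metadata with the output. Below, a diagonal-gain mapping and a region descriptor are serialised as a JSON sidecar; the schema is invented for illustration, and a production system would embed this in the bitstream or a defined metadata container.

```python
import json
import numpy as np

def mapping_to_metadata(gains, region_bbox):
    """Serialise the per-region color mapping so a downstream device can reapply or invert it."""
    return json.dumps({
        "color_mapping": {"type": "diagonal_gain", "gains": [float(g) for g in gains]},
        "region": {"bbox": list(region_bbox)},       # y0, y1, x0, x1 in pixels
    })

def write_sidecar(metadata_json, sidecar_path):
    """Write the metadata next to the rendered output image before transmission."""
    with open(sidecar_path, "w") as f:
        f.write(metadata_json)

meta = mapping_to_metadata(np.array([1.04, 1.00, 0.93]), region_bbox=(100, 460, 300, 940))
write_sidecar(meta, "frame_0001_colormap.json")
```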
EEE 13. A method for context-dependent color mapping of image data, the method comprising:
identifying a first region of an image, the first region including a first white point having a first hue;
identifying a second region of the image, the second region including a second white point having a second hue;
determining a color mapping function based on the first hue and the second hue;
applying the color mapping function to the second region of the image; and
generating an output image.
EEE 14. The method of EEE 13, further comprising:
identifying a third region of the image, the third region comprising a reflection of a light source defined by the second region; and
applying the color mapping function to the third region.
EEE 15. The method of EEE 13 or 14, further comprising:
receiving a depth map associated with the image; and
converting the depth map to a surface mesh.
EEE 16. The method of EEE 15, further comprising:
performing a ray tracing operation from a camera viewpoint of the image using the surface mesh; and
creating a binary mask based on the ray tracing operation, wherein the binary mask indicates reflections of the first white point and the second white point.
EEE 17. The method of EEE 15 or 16, further comprising:
generating a surface normal gradient change alpha mask for the image based on the surface mesh;
generating a spatial distance alpha mask for the image; and
determining the color mapping function based on the surface normal gradient change alpha mask and the spatial distance alpha mask.
EEE 18. The method of any one of EEEs 13-17, further comprising:
subtracting the second region from the image to create a background image;
creating a three-dimensional color point cloud for the background image; and
marking each point cloud color pixel.
EEE 19. The method of any one of EEEs 13-18, further comprising:
determining an operating characteristic of a camera associated with the image data; and
determining an operating characteristic of a background display,
wherein the color mapping function is further based on the operating characteristic of the camera and the operating characteristic of the background display.
EEE 20. A non-transitory computer-readable medium storing instructions that, when executed by an electronic processor, cause the electronic processor to perform operations comprising the method of any of EEEs 13-19.
EEE 21. The video transmission system of any one of EEEs 1-12, wherein the processor is further configured to:
create a cloud of color points in a three-dimensional color space for the one of the image frames; and
mark each point cloud pixel.
EEE 22. The video transmission system of EEE 5 or 21, wherein the processor is configured to mark each point cloud pixel within the color point cloud based on results from the ray tracing operation.
EEE 23. The video transmission system of any of EEEs 5, 21, or 22, wherein the mark indicates whether the pixel is identified as being in the first region or the second region.
EEE 24. The video transmission system of EEE 6, wherein the processor is configured to determine the distance between the value of the pixel, the first hue, and the second hue based on one or more of a mark of the pixel and a boundary between color regions.
EEE 25. The video transmission system of EEE 8, wherein the camera associated with the video data is a camera used when capturing the video data and/or a camera used to capture the video data.
EEE 26. The video transmission system of EEE 8 or 25, wherein at least a portion of the second region of the one of the image frames represents at least a portion of the background display.
EEE 27. The video transmission system of any one of EEEs 1-12 or 21-24, wherein the processor is further configured to:
create a background image from the one of the image frames based on the second region, such as based on a difference between the second region and the first region;
identify a change in a value of at least one pixel in the background image in a subsequent image frame; and
apply the color mapping function to the at least one pixel in the background image in response to the change in the value.
EEE 28. The video transmission system of any one of EEEs 1-12 or 21-26, wherein the processor is further configured to:
perform a tone mapping operation on second video data provided to a background display, the second video data optionally to be displayed via the background display;
record a mapping function based on the tone mapping operation; and
apply an inverse of the mapping function to the one of the image frames.

Claims (15)

1. A video transmission system for context-dependent color mapping, the video transmission system comprising:
a processor for performing post-production editing of video data comprising a plurality of image frames, the processor configured to:
identify a first region of one of the image frames, the first region including a first white point having a first hue;
identify a second region of the one of the image frames, the second region including a second white point having a second hue;
determine a color mapping function based on the first hue and the second hue;
apply the color mapping function to the second region; and
generate an output image for each of the plurality of image frames.
2. The video transmission system of claim 1, wherein the processor is further configured to:
receive a depth map associated with the one of the image frames; and
convert the depth map to a surface mesh.
3. The video transmission system of claim 2, wherein the processor is further configured to:
perform a ray tracing operation from a camera viewpoint of the one of the image frames using the surface mesh; and
create a binary mask based on the ray tracing operation, wherein the binary mask indicates reflections of the first white point and the second white point.
4. The video transmission system of any of claims 2-3, wherein the processor is further configured to:
generate a surface normal gradient change alpha mask for the one of the image frames based on the surface mesh;
generate a spatial distance alpha mask for the one of the image frames; and
determine the color mapping function based on the surface normal gradient change alpha mask and the spatial distance alpha mask.
5. The video transmission system of any of claims 1 to 4, wherein the processor is further configured to:
create a three-dimensional color point cloud for the one of the image frames; and
mark each point cloud pixel.
6. The video transmission system of any one of claims 1 to 5, wherein the processor is further configured to:
determine, for each pixel in the one of the image frames, a distance between a value of the pixel, the first hue, and the second hue.
7. The video transmission system of any of claims 1-6, wherein the second region is a video display device, and wherein the processor is further configured to:
identify a secondary image within the second region;
receive a copy of the secondary image from a server, wherein the copy of the secondary image received from the server has at least one of a higher resolution, a higher dynamic range, or a wider color gamut than the secondary image identified within the second region; and
replace the secondary image within the second region with the copy of the secondary image.
8. The video transmission system of any of claims 1 to 7, wherein the processor is further configured to:
determine an operating characteristic of a camera associated with the video data; and
determine an operating characteristic of a background display,
wherein the color mapping function is further based on the operating characteristic of the camera and the operating characteristic of the background display.
9. The video transmission system of claim 8, wherein the camera associated with the video data is a camera for capturing the video data, and wherein at least a portion of the second region of the one of the image frames represents at least a portion of the background display.
10. The video transmission system of any of claims 1 to 9, wherein the processor is further configured to:
subtract the second region from the one of the image frames to create a background image;
identify a change in a value of at least one pixel in the background image in a subsequent image frame; and
apply the color mapping function to the at least one pixel in the background image in response to the change in the value.
11. The video transmission system of any of claims 1 to 10, wherein the processor is further configured to:
perform a tone mapping operation on second video data displayed via a background display;
record a mapping function based on the tone mapping operation; and
apply an inverse of the mapping function to the one of the image frames.
12. The video transmission system of any one of claims 1 to 11, wherein the processor is further configured to:
identify a third region of the one of the image frames, the third region comprising a reflection of a light source defined by the second region; and
apply the color mapping function to the third region.
13. The video transmission system of any of claims 1 to 12, wherein the processor is further configured to:
store the color mapping function as metadata; and
transmit the metadata and the output image to an external device.
14. A method comprising operations that the processor of the video transmission system of any one of claims 1 to 13 is configured to perform.
15. A non-transitory computer-readable medium storing instructions that, when executed by an electronic processor, cause the electronic processor to perform operations comprising the method of claim 14.
CN202280068868.8A 2021-10-11 2022-09-28 Context-dependent color mapping of image and video data Pending CN118160296A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163254196P 2021-10-11 2021-10-11
US63/254,196 2021-10-11
EP21201948.3 2021-10-11
PCT/US2022/045050 WO2023064105A1 (en) 2021-10-11 2022-09-28 Context-dependent color-mapping of image and video data

Publications (1)

Publication Number Publication Date
CN118160296A (en) 2024-06-07

Family

ID=91298969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280068868.8A Pending CN118160296A (en) 2021-10-11 2022-09-28 Context-dependent color mapping of image and video data

Country Status (1)

Country Link
CN (1) CN118160296A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination