WO2020155072A1 - Mixed layer processing method and apparatus - Google Patents

Mixed layer processing method and apparatus

Info

Publication number
WO2020155072A1
WO2020155072A1 (PCT/CN2019/074307)
Authority
WO
WIPO (PCT)
Prior art keywords
layer
dynamic range
image
target area
hdr
Prior art date
Application number
PCT/CN2019/074307
Other languages
English (en)
Chinese (zh)
Inventor
李蒙
赵可强
齐致远
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to CN201980065334.8A (published as CN112805745A)
Priority to PCT/CN2019/074307 (published as WO2020155072A1)
Publication of WO2020155072A1

Classifications

    • G06T5/94
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing

Definitions

  • This application relates to the field of image processing, and in particular, to a hybrid layer processing method and device.
  • Dynamic range is used in many fields to express the ratio between the maximum and minimum values of a variable.
  • In a digital image, the dynamic range represents the ratio between the maximum gray value and the minimum gray value within the displayable range of the image, that is, the number of gray levels into which the image is divided between "brightest" and "darkest".
  • The greater the dynamic range of an image, the richer the brightness levels it can represent, and the more realistic its visual effect.
  • Images whose dynamic range exceeds that of ordinary images are called high dynamic range (HDR) images, while the dynamic range of ordinary images is low dynamic range (LDR), sometimes also called standard dynamic range (SDR).
  • In some scenarios, HDR images and SDR images are displayed together, for example an SDR image advertisement that pops up while a user plays an HDR video on a mobile phone.
  • The current solution does not distinguish between HDR video and SDR images: it directly merges the images of the multiple layers in the video into a unified image, adaptively adjusts that image's resolution, hue, dynamic range, and so on, and then adapts the display according to the screen characteristics of the phone.
  • Because HDR images and SDR images cannot be distinguished and are processed as the same type of image, the finally displayed image suffers from various problems such as color confusion and invisible text.
  • the present application provides a mixed layer processing method and device, which are used to solve the problem of display errors in the image obtained by the mixed layer processing in the prior art.
  • In a first aspect, a mixed layer processing method is provided, including: acquiring a first layer, a second layer, and a third layer, where the first layer and the second layer have a first dynamic range, the third layer has a second dynamic range, the first layer is a transparent layer, and the second layer is a non-transparent layer; converting the dynamic range of the first layer to the second dynamic range, and determining a first target area in the second layer; merging the second layer, the third layer, and the converted first layer to obtain a first image; and converting the dynamic range of a second target area in the first image to the second dynamic range to obtain a second image, where the second target area is the area in the first image corresponding to the first target area.
  • In the above solution, the dynamic range of the first layer is converted to the second dynamic range before merging, and the dynamic range of the second target area in the first image is converted to the second dynamic range after merging, which ensures that the dynamic ranges of the different image areas in the second image are consistent. The second image therefore does not suffer from problems such as color confusion and invisible text, improving both the performance of the image processing device when processing mixed layers and the user experience. In addition, the above solution does not convert the dynamic range of every layer to the same dynamic range before merging the multiple layers; only the dynamic range of the first layer is converted to the second dynamic range before merging, and after merging only the dynamic range of the second target area of the first image is converted to the second dynamic range to unify the image content. Compared with converting the dynamic range of each layer to the same dynamic range before merging, this greatly reduces hardware cost while achieving the same image quality.
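  • To make this flow concrete, the following is a minimal Python sketch of the four steps (acquire, convert the transparent layer, merge, convert the target area). It is an illustration under stated assumptions, not the patent's implementation: all names (process_mixed_layers, convert_dynamic_range, find_target_area) are hypothetical, the layers are assumed to carry pixel and alpha arrays, and the conversion and ROI routines are placeholders for the tone-mapping and recognition techniques described later.

```python
import numpy as np

def process_mixed_layers(layer1, layer2, layer3,
                         convert_dynamic_range, find_target_area):
    """Sketch of the first-aspect flow (all names are illustrative).

    layer1 -- transparent layer with the first dynamic range
    layer2 -- non-transparent layer with the first dynamic range
    layer3 -- layer with the second (target) dynamic range
    Each layer is assumed to carry .pixels (H x W x C) and .alpha (H x W).
    """
    # Step 1: convert only the transparent layer before merging.
    converted1 = convert_dynamic_range(layer1.pixels)

    # Step 2: locate the first target area in the non-transparent layer,
    # represented by its position (top, bottom, left, right) on layer2.
    top, bottom, left, right = find_target_area(layer2.pixels)

    # Step 3: alpha-weighted merge into the first image; the per-pixel
    # transparencies of the three layers are assumed to sum to 1.
    first_image = (layer1.alpha[..., None] * converted1
                   + layer2.alpha[..., None] * layer2.pixels
                   + layer3.alpha[..., None] * layer3.pixels)

    # Step 4: convert only the corresponding second target area after merging.
    second_image = first_image.copy()
    second_image[top:bottom, left:right] = \
        convert_dynamic_range(first_image[top:bottom, left:right])
    return second_image
```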
  • In a possible implementation, the first dynamic range is standard dynamic range (SDR) and the second dynamic range is high dynamic range (HDR); or, the first dynamic range is HDR and the second dynamic range is SDR.
  • In a possible implementation, before acquiring the first layer, the second layer, and the third layer, the method further includes: determining, according to the dynamic range identifier of the first layer, that the first layer has the first dynamic range; determining, according to the dynamic range identifier of the second layer, that the second layer has the first dynamic range; and determining, according to the dynamic range identifier of the third layer, that the third layer has the second dynamic range.
  • In the above possible implementation, the dynamic range of each layer can be determined simply and effectively from its dynamic range identifier, so that layers with different dynamic ranges can be distinguished quickly, improving the efficiency of mixed layer processing.
  • In a possible implementation of the first aspect, before acquiring the first layer, the second layer, and the third layer, the method further includes: determining, according to the layer transparency of the first layer, that the first layer is a transparent layer; and determining, according to the layer transparency of the second layer, that the second layer is a non-transparent layer.
  • determining the first target region in the second layer includes: determining the first target region through ROI recognition.
  • In a possible implementation, merging the second layer, the third layer, and the converted first layer to obtain the first image includes: performing a weighted calculation on the pixel values of the converted first layer, the pixel values of the second layer, and the pixel values of the third layer according to the layer transparency of the first layer, the layer transparency of the second layer, and the layer transparency of the third layer, to obtain the first image.
  • In the above possible implementation, the first image obtained by merging can include the images of all the layers, and the images of different layers are not affected by the merging, improving mixed layer processing performance compared with the prior art.
  • In a possible implementation, the pixel value of the area outside the first target area in the second layer is 0. This prevents the area outside the first target area in the second layer from affecting other layers during layer merging.
  • In a possible implementation, converting the dynamic range of the first layer to the second dynamic range includes: when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first layer to HDR through inverse tone mapping; and when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first layer to SDR through tone mapping.
  • In the above possible implementation, the dynamic range of the first layer can be converted to either HDR or SDR according to the dynamic ranges involved, improving the flexibility of mixed layer processing.
  • In a possible implementation, converting the dynamic range of the second target area in the first image to the second dynamic range to obtain the second image includes: when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area to HDR through inverse tone mapping to obtain the second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area to SDR through tone mapping to obtain the second image.
  • In the above possible implementation, the dynamic range of the second target area can be converted to either HDR or SDR according to the dynamic ranges involved, improving the flexibility of mixed layer processing.
  • the method further includes: displaying the second image.
  • various display errors such as color confusion and invisible text can be avoided in the displayed second image.
  • In a second aspect, a mixed layer processing system is provided, which includes: an image reading interface, configured to acquire multiple input layers to be processed; an image type detector, configured to receive an input layer from the image reading interface and determine the dynamic range and the transparent-layer attribute of the input layer according to the dynamic range identifier and/or the layer transparency of the input layer, where the transparent-layer attribute is either transparent or non-transparent; a first calculation engine, configured to receive the first layer when the image type detector determines that the input layer is a first layer, and to convert the dynamic range of the first layer into a target dynamic range, where the first layer is a transparent layer whose dynamic range differs from the target dynamic range; an image area calibrator, configured to receive the second layer when the image type detector determines that the input layer is a second layer, and to determine a first target area of the second layer, where the second layer is a non-transparent layer whose dynamic range differs from the target dynamic range; a weighting calculator, configured to receive the first layer processed by the first calculation engine, the second layer determined by the image type detector, and a third layer determined by the image type detector, and to merge the first layer, the second layer, and the third layer into a first image, where the third layer has the target dynamic range; and a second calculation engine, configured to receive the first image processed by the weighting calculator and the first target area determined by the image area calibrator, and to convert the dynamic range of a second target area in the first image into the target dynamic range to obtain a second image, where the second target area is the area in the first image corresponding to the first target area.
  • As in the method of the first aspect, the above system converts the dynamic range of the first layer to the target dynamic range before merging and converts the dynamic range of the second target area after merging, so the dynamic ranges of the different image areas in the second image are consistent, avoiding problems such as color confusion and invisible text; and because only the transparent layer is converted before merging and a single area is converted after merging, the hardware cost is greatly reduced compared with converting the dynamic range of every layer before merging, at the same image quality.
  • In a possible implementation, the weighting calculator is specifically configured to perform a weighted calculation on the pixel values of the first layer processed by the first calculation engine, the pixel values of the second layer determined by the image type detector, and the pixel values of the third layer determined by the image type detector, according to the layer transparency of the first layer, the layer transparency of the second layer, and the layer transparency of the third layer, to obtain the first image.
  • the pixel value of the area outside the first target area in the second layer is 0.
  • In a third aspect, a mixed layer processing device is provided, which includes: an acquiring unit, configured to acquire a first layer, a second layer, and a third layer, where the first layer and the second layer have a first dynamic range, the third layer has a second dynamic range, the first layer is a transparent layer, and the second layer is a non-transparent layer; a conversion unit, configured to convert the dynamic range of the first layer into the second dynamic range; a determining unit, configured to determine a first target area in the second layer; and a merging unit, configured to merge the second layer, the third layer, and the converted first layer to obtain a first image.
  • The conversion unit is further configured to convert the dynamic range of a second target area in the first image into the second dynamic range to obtain a second image, where the second target area is the area in the first image corresponding to the first target area.
  • In a possible implementation, the first dynamic range is standard dynamic range (SDR) and the second dynamic range is high dynamic range (HDR); or, the first dynamic range is HDR and the second dynamic range is SDR.
  • In a possible implementation, the determining unit is further configured to: determine, according to the dynamic range identifier of the first layer, that the first layer has the first dynamic range; determine, according to the dynamic range identifier of the second layer, that the second layer has the first dynamic range; and determine, according to the dynamic range identifier of the third layer, that the third layer has the second dynamic range.
  • In a possible implementation, the determining unit is further configured to: determine, according to the layer transparency of the first layer, that the first layer is a transparent layer; and determine, according to the layer transparency of the second layer, that the second layer is a non-transparent layer.
  • the determining unit is further configured to: determine the first target region through ROI recognition.
  • In a possible implementation, the merging unit is specifically configured to perform a weighted calculation on the pixel values of the converted first layer, the pixel values of the second layer, and the pixel values of the third layer according to the layer transparency of the first layer, the layer transparency of the second layer, and the layer transparency of the third layer, to obtain the first image.
  • the pixel value of the area outside the first target area in the second layer is 0.
  • In a possible implementation, the conversion unit is specifically configured to: when the first dynamic range is SDR and the second dynamic range is HDR, convert the dynamic range of the first layer to HDR through inverse tone mapping; and when the first dynamic range is HDR and the second dynamic range is SDR, convert the dynamic range of the first layer to SDR through tone mapping.
  • In a possible implementation, the conversion unit is specifically configured to: when the dynamic range of the second target area is SDR and the second dynamic range is HDR, convert the dynamic range of the second target area to HDR through inverse tone mapping to obtain the second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, convert the dynamic range of the second target area to SDR through tone mapping to obtain the second image.
  • the device further includes: a display unit configured to display the second image.
  • In another aspect, the present application further provides a mixed layer processing device, which includes a memory and a processor coupled to the memory; the memory stores instructions and data, and the processor runs the instructions in the memory so that the processor executes the mixed layer processing method provided by the first aspect or any possible implementation of the first aspect.
  • In a fourth aspect, a mixed layer processing method is provided, including: acquiring a first layer and a second layer, where the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a transparent layer; converting the dynamic range of the first layer to the second dynamic range; and merging the second layer and the converted first layer to obtain a first image.
  • the first dynamic range is SDR and the second dynamic range is HDR; or, the first dynamic range is HDR and the second dynamic range is SDR.
  • In a possible implementation, before acquiring the first layer and the second layer, the method further includes: determining, according to the dynamic range identifier of the first layer, that the first layer has the first dynamic range; and determining, according to the dynamic range identifier of the second layer, that the second layer has the second dynamic range.
  • In a possible implementation, before acquiring the first layer and the second layer, the method further includes: determining, according to the layer transparency of the first layer, that the first layer is a transparent layer.
  • In a possible implementation, merging the second layer and the converted first layer includes: performing a weighted calculation on the pixel values of the converted first layer and the pixel values of the second layer according to the layer transparency of the first layer and the layer transparency of the second layer, to obtain the first image.
  • In a possible implementation, converting the dynamic range of the first layer into the second dynamic range includes: when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first layer to HDR through inverse tone mapping; and when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first layer to SDR through tone mapping.
  • the method further includes: displaying the first image.
  • In a fifth aspect, a mixed layer processing device is provided, which can implement the functions of the mixed layer processing method provided by the fourth aspect or any possible implementation of the fourth aspect.
  • the functions can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more units corresponding to the above functions.
  • the hybrid layer processing device may include an acquisition unit, a conversion unit, and a merging unit.
  • In a possible implementation, the structure of the mixed layer processing device includes a processor, a memory, a communication interface, and a bus; the memory is used to store program code, and the processor, the memory, and the communication interface are connected through the bus. When the program code is executed by the processor, the device is caused to execute the steps of the mixed layer processing method provided by the fourth aspect or any possible implementation of the fourth aspect.
  • In a sixth aspect, a mixed layer processing method is provided, including: acquiring a first layer and a second layer, where the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a non-transparent layer; determining a first target area in the first layer; merging the first layer and the second layer to obtain a first image; and converting the dynamic range of a second target area in the first image into the second dynamic range to obtain a second image, where the second target area is the area in the first image corresponding to the first target area.
  • In a possible implementation, the first dynamic range is SDR and the second dynamic range is HDR; or, the first dynamic range is HDR and the second dynamic range is SDR.
  • In a possible implementation, before acquiring the first layer and the second layer, the method further includes: determining, according to the dynamic range identifier of the first layer, that the first layer has the first dynamic range; and determining, according to the dynamic range identifier of the second layer, that the second layer has the second dynamic range.
  • In a possible implementation, before acquiring the first layer and the second layer, the method further includes: determining, according to the layer transparency of the first layer, that the first layer is a non-transparent layer.
  • determining the first target area in the first layer includes: determining the first target area through ROI recognition.
  • In a possible implementation, merging the first layer and the second layer to obtain the first image includes: performing a weighted calculation on the pixel values of the first layer and the pixel values of the second layer according to the layer transparency of the first layer and the layer transparency of the second layer, to obtain the first image.
  • the pixel value of the area outside the first target area in the first layer is 0.
  • In a possible implementation, converting the dynamic range of the second target area in the first image to the second dynamic range to obtain the second image includes: when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area to HDR through inverse tone mapping to obtain the second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area to SDR through tone mapping to obtain the second image.
  • the method further includes: displaying the second image.
  • In a seventh aspect, a mixed layer processing device is provided, which can implement the functions of the mixed layer processing method provided by the sixth aspect or any possible implementation of the sixth aspect.
  • the functions can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more units corresponding to the above functions.
  • the hybrid layer processing device may include an acquisition unit, a conversion unit, and a merging unit.
  • In a possible implementation, the structure of the mixed layer processing device includes a processor, a memory, a communication interface, and a bus; the memory is used to store program code, and the processor, the memory, and the communication interface are connected through the bus. When the program code is executed by the processor, the device is caused to execute the steps of the mixed layer processing method provided by the sixth aspect or any possible implementation of the sixth aspect.
  • In another aspect, a computer-readable storage medium is provided, which stores instructions that, when run on a computer, cause the computer to execute the mixed layer processing method provided by the first aspect or any possible implementation of the first aspect.
  • In another aspect, a computer-readable storage medium is provided, which stores instructions that, when run on a computer, cause the computer to execute the mixed layer processing method provided by the fourth aspect or any possible implementation of the fourth aspect.
  • In another aspect, a computer-readable storage medium is provided, which stores instructions that, when run on a computer, cause the computer to execute the mixed layer processing method provided by the sixth aspect or any possible implementation of the sixth aspect.
  • In another aspect, a computer program product is provided; when the computer program product runs on a computer, the computer executes the mixed layer processing method provided by the first aspect or any possible implementation of the first aspect.
  • In another aspect, a computer program product is provided; when the computer program product runs on a computer, the computer executes the mixed layer processing method provided by the fourth aspect or any possible implementation of the fourth aspect.
  • In another aspect, a computer program product is provided; when the computer program product runs on a computer, the computer executes the mixed layer processing method provided by the sixth aspect or any possible implementation of the sixth aspect.
  • It can be understood that any of the systems, devices, computer storage media, or computer program products of the mixed layer processing methods provided above is used to execute the corresponding method provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
  • FIG. 1 is a schematic structural diagram of an image processing device provided by an embodiment of this application.
  • FIG. 2 is a first schematic flowchart of a mixed layer processing method provided by an embodiment of this application.
  • FIG. 3 is a graph of the PQ photoelectric conversion function provided by an embodiment of this application.
  • FIG. 4 is a graph of the HLG photoelectric conversion function provided by an embodiment of this application.
  • FIG. 5 is a graph of the SLF photoelectric conversion function provided by an embodiment of this application.
  • FIG. 6 is a second schematic flowchart of a mixed layer processing method provided by an embodiment of this application.
  • FIG. 7 is a third schematic flowchart of a mixed layer processing method provided by an embodiment of this application.
  • FIG. 8 is a schematic diagram of layers with different dynamic ranges to be merged, provided by an embodiment of this application.
  • FIG. 9 is a schematic diagram of a display error in an image after layer merging, provided by an embodiment of this application.
  • FIG. 10 is a schematic diagram of the hardware implementation structure of a mixed layer processing system provided by an embodiment of this application.
  • FIG. 11 is a schematic structural diagram of a mixed layer processing apparatus provided by an embodiment of this application.
  • At least one refers to one or more, and “multiple” refers to two or more.
  • The term "and/or" describes an association relationship between associated objects and indicates that three relationships can exist; for example, "A and/or B" can mean: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • The expression "at least one of the following items" or similar expressions refers to any combination of these items, including any combination of a single item or multiple items.
  • For example, "at least one of a, b, or c" can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c can each be single or multiple.
  • the character "/" generally indicates that the associated objects are in an "or” relationship.
  • words such as "first” and “second” do not limit the number and execution order.
  • FIG. 1 is a schematic structural diagram of an image processing device provided by an embodiment of the application.
  • The image processing device may be a mobile phone, a tablet computer, a computer, a notebook computer, a video camera, a camera, a wearable device, a vehicle-mounted device, or another terminal device.
  • the above-mentioned devices are collectively referred to as image processing devices in this application.
  • the image processing device is a mobile phone as an example for description.
  • the mobile phone includes: a memory 101, a processor 102, a sensor component 103, a multimedia component 104, an audio component 105, and a power supply component 106.
  • The memory 101 can be used to store data, software programs, and modules; it mainly includes a program storage area and a data storage area.
  • The program storage area can store an operating system and at least one application program required by a function, such as a sound playback function or an image playback function;
  • the data storage area can store data created according to the use of the mobile phone, such as audio data, image data, and a phone book.
  • In addition, the memory 101 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
  • The processor 102 is the control center of the mobile phone. It uses various interfaces and lines to connect the various parts of the entire device, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 101 and calling the data stored in the memory 101, so as to monitor the mobile phone as a whole.
  • Optionally, the processor 102 may be a single-processor structure, a multi-processor structure, a single-threaded processor, or a multi-threaded processor; in some feasible embodiments, the processor 102 may include a central processing unit, a general-purpose processor, a digital signal processor, a microcontroller, or a microprocessor.
  • Optionally, the processor 102 may further include other hardware circuits or accelerators, such as application-specific integrated circuits, field-programmable gate arrays or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof, and it can implement or execute the various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor 102 may also be a combination that implements computing functions, for example, includes a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the sensor component 103 includes one or more sensors, which are used to provide various aspects of state evaluation for the mobile phone.
  • For example, the sensor component 103 may include a light sensor, such as a CMOS or CCD image sensor, for detecting the distance between an external object and the mobile phone, or for use in imaging applications, that is, as a component of a camera or video camera.
  • The sensor component 103 can also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor; through the sensor component 103, the acceleration/deceleration, orientation, and open/closed state of the mobile phone, the relative positioning of components, or the temperature change of the mobile phone can be detected.
  • the multimedia component 104 provides a screen with an output interface between the mobile phone and the user.
  • the screen may be a touch panel, and when the screen is a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 104 further includes at least one camera.
  • the multimedia component 104 includes a front camera and/or a rear camera. When the mobile phone is in an operating mode, such as shooting mode or video mode, the front camera and/or the rear camera can receive external multimedia data.
  • Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 105 may provide an audio interface between the user and the mobile phone.
  • the audio component 105 may include an audio circuit, a speaker, and a microphone.
  • On one hand, the audio circuit can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit and converted into audio data, and the audio data is then output, for example to another mobile phone, or to the processor 102 for further processing.
  • the power supply component 106 is used to provide power to various components of the mobile phone.
  • the power supply component 106 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power by the mobile phone.
  • The mobile phone may also include a wireless fidelity (WiFi) module, a Bluetooth module, and so on, which are not described in detail in the embodiments of the present application.
  • FIG. 2 is a schematic flowchart of a method for processing a mixed layer according to an embodiment of the application. The method can be applied to the image processing device shown in FIG. 1. Referring to FIG. 2, the method includes the following steps.
  • S201: Acquire a first layer, a second layer, and a third layer, where the first layer and the second layer have a first dynamic range, the third layer has a second dynamic range, the first layer is a transparent layer, and the second layer is a non-transparent layer.
  • In the embodiments of the present application, the dynamic range can be used to characterize the ratio between the maximum brightness and the minimum brightness within the displayable range of the image, that is, the number of gray levels into which the image is divided between "brightest" and "darkest"; brightness is measured in candelas per square meter (cd/m²), which can also be expressed as nits.
  • The first dynamic range may be a standard dynamic range (SDR) and the second dynamic range may be a high dynamic range (HDR); or, the first dynamic range may be HDR and the second dynamic range may be SDR.
  • the hybrid layer can include at least one SDR layer and at least one HDR layer.
  • the SDR layer can be divided into a transparent layer and a non-transparent layer, and the HDR layer can also be divided into a transparent layer and a non-transparent layer.
  • the SDR here may refer to the dynamic range of an ordinary image, for example, the dynamic range of an image captured by an ordinary camera, and each color channel in the SDR image can be represented by 256 gray levels.
  • HDR here can refer to the dynamic range of natural scenes in the real world.
  • HDR image is an image type that can represent a wide range of brightness changes in the actual scene, and can better represent the optical characteristics of bright and dark areas in the scene.
  • The range of pixel values represented by an HDR image is usually very large, sometimes reaching hundreds of thousands or even millions, so HDR images require more data bits per color channel.
  • HDR images can be synthesized using multiple SDR images with different exposure times, or can be captured by professional HDR shooting equipment. HDR images can better reflect the visual effects in the real environment.
  • In a possible implementation, before S201, the method may further include: determining, according to the dynamic range identifier of the first layer, that the first layer has the first dynamic range; determining, according to the dynamic range identifier of the second layer, that the second layer has the first dynamic range; and determining, according to the dynamic range identifier of the third layer, that the third layer has the second dynamic range.
  • the method may further include: determining the first layer as a transparent layer according to the layer transparency of the first layer; and determining the second layer as a non-transparent layer according to the layer transparency of the second layer .
  • Specifically, each layer can correspond to multiple layer parameters, which can include a dynamic range identifier and a layer transparency.
  • The dynamic range identifier can be used to distinguish SDR layers from HDR layers, and the layer transparency can be used to distinguish transparent layers from non-transparent layers.
  • For example, the dynamic range identifier can be a photoelectric conversion function.
  • The photoelectric conversion functions of the SDR layer and the HDR layer are different, so the SDR layer and the HDR layer can be distinguished by the photoelectric conversion functions of the different layers; for the photoelectric conversion functions of the different layers, refer to the description below.
  • The photoelectric conversion function of the SDR layer is the gamma photoelectric conversion function, while the photoelectric conversion function of the HDR layer may be a perceptual quantizer (PQ) photoelectric conversion function, a hybrid log-gamma (HLG) photoelectric conversion function, or a scene luminance fidelity (SLF) photoelectric conversion function, so whether each of the mixed multiple layers is an SDR layer or an HDR layer can be determined according to the type of its photoelectric conversion function.
  • For another example, a transparent layer and a non-transparent layer are distinguished by the transparency of the image.
  • Each pixel of a layer corresponds to an alpha value that represents the transparency of the pixel, and the range of the alpha value can be [0, 1].
  • When the alpha value is equal to 1, the pixel is opaque; when the alpha value is not 1, the pixel is transparent, with different alpha values between 0 and 1 corresponding to different degrees of transparency.
  • When the alpha values of all pixels of a layer are 1, the layer is non-transparent; in all other cases, the layer is transparent. It should be understood that different transparent layers may also have different transparency.
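  • As a concrete illustration of these two checks, the following Python sketch classifies a layer by a transfer-function identifier and its alpha plane. The TransferFunction enum and the Layer fields are illustrative assumptions; the patent only requires that some dynamic range identifier and a per-pixel transparency be available.

```python
from dataclasses import dataclass
from enum import Enum
import numpy as np

class TransferFunction(Enum):      # hypothetical dynamic range identifier
    GAMMA = "gamma"                # SDR layers use the gamma function
    PQ = "pq"                      # HDR layers use PQ, HLG, or SLF
    HLG = "hlg"
    SLF = "slf"

@dataclass
class Layer:
    pixels: np.ndarray             # H x W x C pixel values
    alpha: np.ndarray              # H x W transparency values in [0, 1]
    transfer_function: TransferFunction

def is_hdr(layer: Layer) -> bool:
    # A layer is HDR iff its photoelectric conversion function is not gamma.
    return layer.transfer_function is not TransferFunction.GAMMA

def is_transparent(layer: Layer) -> bool:
    # Non-transparent iff every pixel has alpha == 1; transparent otherwise.
    return not np.all(layer.alpha == 1.0)
```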
  • For example, assume that the first-dynamic-range layers are SDR layers and the second-dynamic-range layers are HDR layers, and that the mixed multiple layers include 9 layers: 2 transparent SDR layers, 1 non-transparent SDR layer, and 6 HDR layers.
  • S202 Convert the dynamic range of the first layer to the second dynamic range, and determine the first target area in the second layer.
  • In a possible implementation, the dynamic range of the first layer can be converted to HDR through inverse tone mapping.
  • The inverse tone mapping here may refer to a process of mapping a transparent SDR layer to a transparent HDR layer; the detailed implementation of the inverse tone mapping is not specifically limited in the embodiments of the present application.
  • For example, suppose the mixed layers include 9 layers, denoted layer 1 to layer 9. If layer 1 to layer 3 are SDR layers, layer 1 and layer 2 are transparent layers, layer 3 is a non-transparent layer, and layer 4 to layer 9 are HDR layers, the dynamic ranges of layer 1 and layer 2 can be converted to HDR through inverse tone mapping, that is, layer 1 and layer 2 are converted to HDR layers.
  • Alternatively, the dynamic range of the first layer can be converted to SDR through tone mapping.
  • The tone mapping here may refer to a process of mapping a transparent HDR layer to a transparent SDR layer; the detailed implementation of the tone mapping is not specifically limited in the embodiments of the present application.
  • For example, suppose the mixed layers include 9 layers, denoted layer 1 to layer 9. If layer 1 to layer 3 are HDR layers, layer 1 and layer 2 are transparent layers, layer 3 is a non-transparent layer, and layer 4 to layer 9 are SDR layers, the dynamic ranges of layer 1 and layer 2 can be converted to SDR through tone mapping, that is, layer 1 and layer 2 are converted to SDR layers.
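  • Since the patent does not fix particular tone-mapping operators, the following sketch uses the well-known Reinhard curve L/(1+L) for tone mapping and its algebraic inverse for inverse tone mapping as stand-ins, operating on linear-light values normalized by an assumed peak luminance; a production pipeline would substitute its own operators.

```python
import numpy as np

def tone_map(hdr_linear: np.ndarray, peak_nits: float = 10000.0) -> np.ndarray:
    """HDR -> SDR: compress linear-light values with the Reinhard curve."""
    l = hdr_linear / peak_nits          # normalize to roughly [0, 1]
    return l / (1.0 + l)                # output in [0, 1), SDR-displayable

def inverse_tone_map(sdr_linear: np.ndarray, peak_nits: float = 10000.0) -> np.ndarray:
    """SDR -> HDR: algebraic inverse of the Reinhard curve."""
    v = np.clip(sdr_linear, 0.0, 1.0 - 1e-6)   # avoid division by zero at 1
    return peak_nits * v / (1.0 - v)
```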
  • In a possible implementation, determining the first target area in the second layer may specifically include: identifying the first target area in the second layer according to region-of-interest (ROI) recognition technology.
  • The region of interest (ROI) here can refer to an area to be processed that is outlined on the processed image in the form of a box, circle, ellipse, irregular polygon, or the like; specifically, it can refer to the area to be processed outlined on the second layer by a box, circle, ellipse, irregular polygon, or the like.
  • The first target area recognized by the ROI recognition technology may be represented by its position information on the second layer.
  • For the specific process of recognizing the first target area in the second layer by using the ROI recognition technology, refer to the detailed description of ROI recognition technology in the prior art, which is not specifically limited in the embodiments of the present application.
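  • ROI recognition itself is left to the prior art. One minimal stand-in, consistent with the later observation that pixels outside the first target area are 0, is to take the bounding box of the non-zero pixels of the non-transparent layer; the box coordinates are exactly the position information that S204 later reuses. This is an assumed placeholder, not the patent's recognizer.

```python
import numpy as np

def find_target_area(pixels: np.ndarray):
    """Return (top, bottom, left, right) of the non-zero region, or None.

    Placeholder for ROI recognition, usable when the area outside the
    first target area has pixel value 0, as in the embodiment above.
    """
    mask = np.any(pixels != 0, axis=-1) if pixels.ndim == 3 else (pixels != 0)
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return None
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```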
  • S203 Combine the second layer, the third layer, and the converted first layer to obtain the first image.
  • In a possible implementation, merging the second layer, the third layer, and the converted first layer may include: performing a weighted calculation on the pixel values of the converted first layer, the pixel values of the second layer, and the pixel values of the third layer according to the layer transparency of the first layer, the layer transparency of the second layer, and the layer transparency of the third layer, to obtain the first image.
  • It should be noted that the conversion of the dynamic range does not affect the layer transparency, so the layer transparency of the first layer is the same as the layer transparency of the converted first layer.
  • Specifically, each layer can include multiple pixel values, and each pixel value corresponds to a layer transparency; for the pixel values at the same position in different layers, the layer transparencies corresponding to those pixel values sum to 1.
  • Suppose the pixel value of pixel (i, j) of the converted first layer is expressed as A(i, j) and its corresponding layer transparency as X(i, j); the pixel value of pixel (i, j) of the second layer is expressed as B(i, j) and its corresponding layer transparency as Y(i, j); and the pixel value of pixel (i, j) of the third layer is expressed as C(i, j) and its corresponding layer transparency as Z(i, j), where i ranges over 1, 2, ..., m and j ranges over 1, 2, ..., n. Then the pixel value D(i, j) of pixel (i, j) of the first image is obtained by the weighted calculation D(i, j) = X(i, j)·A(i, j) + Y(i, j)·B(i, j) + Z(i, j)·C(i, j).
  • That is, each pixel corresponds to a transparency, and a layer containing pixels whose transparency is not 1 is called a transparent layer; the transparency corresponding to each pixel is used as its weight in the weighted calculation over the pixels of each layer, and when the value of a pixel is 0, its product with the transparency value is also 0.
  • It should be noted that the above only uses three layers as an example to illustrate the process of merging the second layer, the third layer, and the converted first layer; the above method can also be used to merge two layers, four layers, or other numbers of layers, which is not specifically limited in the embodiments of the present application.
  • In a possible implementation, the pixel value of the area outside the first target area in the second layer is 0. Since the second layer is a non-transparent layer, that is, its layer transparency is 1, when the layers are merged in the above manner the product of the pixel value 0 outside the first target area and the layer transparency is 0, so the merged first image includes only the first target area of the second layer, not the area outside the first target area.
  • Therefore, merging the second layer, the third layer, and the converted first layer to obtain the first image can also be understood as: merging the third layer and the converted first layer to obtain a third image, and then replacing the corresponding area of the third image with the image of the first target area to obtain the first image.
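  • The weighted merge of S203 reduces to a per-pixel weighted sum of pixel values with the transparencies as weights. A minimal sketch, assuming the layers are already aligned arrays of identical shape and that the per-pixel transparencies across layers sum to 1 as stated above:

```python
import numpy as np

def merge_layers(pixel_stack: np.ndarray, alpha_stack: np.ndarray) -> np.ndarray:
    """Weighted merge: D(i,j) = sum_k alpha_k(i,j) * pixels_k(i,j).

    pixel_stack: K x H x W x C pixel values (converted first layer,
                 second layer, third layer, ...)
    alpha_stack: K x H x W per-pixel transparencies, summing to 1 over K
    """
    assert np.allclose(alpha_stack.sum(axis=0), 1.0), "weights must sum to 1"
    return (alpha_stack[..., np.newaxis] * pixel_stack).sum(axis=0)
```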
  • S204 Convert the dynamic range of the second target area in the first image to a second dynamic range to obtain a second image, where the second target area is a corresponding area of the first target area in the first image.
  • At this time, the second target area in the first image comes from a non-transparent layer and has the first dynamic range, while the other image areas in the first image all come from layers with the second dynamic range; that is, the dynamic ranges of the different image areas in the first image are inconsistent. Therefore, the dynamic range of the second target area in the first image can be converted to the second dynamic range, so that the dynamic ranges of the different image areas in the second image are consistent.
  • the position of the second target area in the first image may be determined by the position information of the first target area in the second layer obtained in S202.
  • In a possible implementation, when the dynamic range of the second target area is SDR and the second dynamic range is HDR, the dynamic range of the second target area is converted to HDR through inverse tone mapping to obtain the second image; when the dynamic range of the second target area is HDR and the second dynamic range is SDR, the dynamic range of the second target area is converted to SDR through tone mapping to obtain the second image.
  • It should be noted that when the shape of the first target area is irregular, the dynamic range of the first target area may instead be converted to the second dynamic range before the multiple layers are merged; when the shape of the first target area is regular, the dynamic range of the second target area may be converted to the second dynamic range in S204, where the second target area is the area in the first image corresponding to the first target area.
  • Further, the second image may be displayed. Since the dynamic ranges of the different image areas in the second image are consistent, the finally displayed second image will not have problems such as color confusion and invisible text, which can improve the performance of the image processing device in processing mixed layers and also improve the user experience.
  • It can be seen that the above solution does not convert the dynamic range of every layer in the mixed layers to the same dynamic range before merging. It converts only the dynamic range of the first layer (that is, the transparent layer with the first dynamic range) into the second dynamic range before merging, and after merging converts the dynamic range of the second target area in the first image (that is, the area corresponding to the first target area in the second layer, where the second layer is a non-transparent layer with the first dynamic range) to the second dynamic range to unify the image content. Compared with converting the dynamic range of each layer in the mixed layers to the same dynamic range before merging, this greatly reduces hardware cost while achieving the same image quality.
  • the photoelectric conversion functions of the SDR image and the HDR image are respectively illustrated below.
  • the photoelectric conversion function of SDR images is usually the Gamma function.
  • The photoelectric transfer function based on the "Gamma" function is defined in the ITU-R Recommendation BT.1886 standard, as shown in the following formula (1), where L represents the optical signal and V represents the electrical signal.
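  • The formula itself did not survive extraction; for reference, the transfer characteristic published in BT.1886 is reproduced below (a reconstruction from the standard, not quoted from the patent text), where a and b are constants derived from the display's white and black luminance levels.

```latex
% Formula (1): BT.1886 transfer characteristic (gamma = 2.4)
L = a \cdot \max(V + b,\ 0)^{2.4}
```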
  • the image quantized to 8 bits by the formula (1) is an SDR image.
  • The SDR image and the above-mentioned transfer function perform well on traditional display devices (with a luminance of about 100 cd/m²).
  • The PQ photoelectric transfer function is a perceptual quantization transfer function, as shown in the following formula (2); plotting the perceptual quantization transfer function of formula (2) gives the curve shown in FIG. 3.
  • In formula (2), L represents the optical signal, V represents the electrical signal, and m1, m2, c1, c2, and c3 are the parameters of the formula.
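  • Formula (2) is likewise missing from the extracted text. The PQ transfer function as standardized in SMPTE ST 2084 and ITU-R BT.2100, which matches the parameters named above, is presumably the intended form:

```latex
% Formula (2): PQ perceptual quantization transfer function
V = \left( \frac{c_1 + c_2 L^{m_1}}{1 + c_3 L^{m_1}} \right)^{m_2},
\qquad m_1 = \tfrac{2610}{16384}, \quad m_2 = \tfrac{2523}{4096} \times 128,
\qquad c_1 = \tfrac{3424}{4096}, \quad c_2 = \tfrac{2413}{4096} \times 32, \quad c_3 = \tfrac{2392}{4096} \times 32
```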
  • The HLG transfer function is different from the traditional Gamma transfer function; it improves on the traditional Gamma curve by applying the traditional Gamma in the low section and supplementing it with a log curve in the high section, which gives the hybrid log-gamma transfer function shown in formula (3).
  • In formula (3), L represents the optical signal, V represents the electrical signal, and a and b are the parameters of the formula; plotting the HLG transfer function of formula (3) gives the curve shown in FIG. 4.
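  • Formula (3) is also missing from the extraction. The HLG OETF published in ITU-R BT.2100 matches the low-section/high-section description above (note that the published curve uses a third constant c in addition to the a and b named in the text):

```latex
% Formula (3): HLG OETF
V = \begin{cases}
  \sqrt{3L}, & 0 \le L \le \tfrac{1}{12} \\
  a \ln(12L - b) + c, & \tfrac{1}{12} < L \le 1
\end{cases}
\qquad a = 0.17883277, \quad b = 1 - 4a, \quad c = 0.5 - a \ln(4a)
```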
  • For the SLF photoelectric transfer function, the input light signal is normalized, with the maximum brightness defined as 10000 cd/m², and is then converted by the conversion function shown in the following formula (4); in formula (4), L represents the optical signal, V represents the electrical signal, and p, m, a, and b are the parameters of the formula.
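  • Formula (4) likewise did not survive extraction. The SLF curve is commonly given in the following general form, which is consistent with the parameters p, m, a, and b named above; this is an assumed reconstruction, and the exact parameter values are not reproduced here:

```latex
% Formula (4): general form of the SLF photoelectric transfer function
% (assumed reconstruction; exact constants vary by standard revision)
V = a \left( \frac{p L}{(p - 1) L + 1} \right)^{m} + b
```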
  • The curve shown in FIG. 5 can be obtained according to the brightness characteristics of human eyes, combined with the brightness distribution of existing HDR image sequence scenes.
  • The foregoing embodiment introduced a processing method for the case where a transparent layer with the first dynamic range, a non-transparent layer with the first dynamic range, and a layer with the second dynamic range all exist in the mixed layers.
  • When only a transparent layer with the first dynamic range and a layer with the second dynamic range exist in the mixed layers at the same time, or when only a non-transparent layer with the first dynamic range and a layer with the second dynamic range exist at the same time, the corresponding processing is introduced in the embodiments of FIG. 6 and FIG. 7 below, respectively.
  • FIG. 6 is a schematic flowchart of another hybrid layer processing method provided by an embodiment of the application. The method can be applied to the image processing device shown in FIG. 1. Referring to FIG. 6, the method includes the following steps.
  • S601 Acquire a first layer and a second layer, where the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a transparent layer.
  • the first dynamic range may be SDR and the second dynamic range may be HDR; or, the first dynamic range may be HDR and the second dynamic range may be SDR.
  • the hybrid layer can include at least one SDR layer and at least one HDR layer.
  • the SDR layer can be divided into a transparent layer and a non-transparent layer, and the HDR layer can also be divided into a transparent layer and a non-transparent layer.
  • When the mixed layers include only a transparent layer with the first dynamic range and a layer with the second dynamic range, the first layer and the second layer can be obtained from the mixed layers; the specific obtaining process is the same as the process in S201 above. For details, refer to the related description, which is not repeated in the embodiments of the present application.
  • S602 Convert the dynamic range of the first layer to the second dynamic range.
  • the specific process of converting the dynamic range of the first layer into the second dynamic range is the same as the process in S202 above. For details, please refer to related descriptions, and details are not repeated in the embodiment of the present application.
  • S603 Combine the second layer and the converted first layer to obtain the first image.
  • The specific description of merging the second layer and the converted first layer to obtain the first image is consistent with the description in S203 of merging the second layer, the third layer, and the converted first layer to obtain the first image. For details, refer to the description in S203 above, which is not repeated in the embodiments of the present application.
  • Further, the first image can also be displayed. Since the first image is obtained by merging multiple layers that all have the second dynamic range, its dynamic range is consistent, so the finally displayed first image will not have problems such as color confusion and invisible text, which can improve the performance of the image processing device in processing mixed layers and also improve the user experience.
  • FIG. 7 is a schematic flow chart of another hybrid layer processing method provided by an embodiment of the application. The method can be applied to the image processing device shown in FIG. 1. Referring to FIG. 7, the method includes the following steps.
  • S701 Acquire a first layer and a second layer, where the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a non-transparent layer.
  • the first dynamic range may be SDR and the second dynamic range may be HDR; or, the first dynamic range may be HDR and the second dynamic range may be SDR.
  • the hybrid layer can include at least one SDR layer and at least one HDR layer.
  • the SDR layer can be divided into a transparent layer and a non-transparent layer, and the HDR layer can also be divided into a transparent layer and a non-transparent layer.
  • When the mixed layers include only a non-transparent layer with the first dynamic range and a layer with the second dynamic range, the first layer and the second layer can be obtained from the mixed layers; the specific obtaining process is the same as the process in S201 above.
  • S702 Determine the first target area in the first layer.
  • the specific description of determining the first target area in the first layer is consistent with the description of determining the first target area in the second layer in S202.
  • S703 Combine the first layer and the second layer to obtain the first image.
  • The specific description of merging the first layer and the second layer to obtain the first image is consistent with the description in S203 of merging the second layer, the third layer, and the converted first layer to obtain the first image. For details, refer to the description in S203 above, which is not repeated in the embodiments of the present application.
  • S704 Convert the dynamic range of the second target area in the first image into a second dynamic range to obtain a second image, where the second target area is a corresponding area of the first target area in the first image.
  • Further, the second image may be displayed. Since the dynamic ranges of the different image areas in the second image are consistent, the finally displayed second image will not have problems such as color confusion and invisible text, which can improve the performance of the image processing device in processing mixed layers and also improve the user experience.
  • The following takes mixed layers that include SDR layers and HDR layers, with the first dynamic range being SDR and the second dynamic range being HDR, as an example to illustrate the method provided in the embodiments of the present application.
  • As shown in FIG. 8, the mixed layers include one or more SDR layers and one or more HDR layers; for example, they include an HDR layer (a layer containing a giraffe), a transparent SDR layer (a layer containing thumbnails), and a non-transparent SDR layer (a layer containing the playback bar).
  • In the prior art, HDR layers and SDR layers are not distinguished but are directly mixed; merging the multiple layers therefore causes incorrect display because the image content is not unified, and the final image display has errors as shown in FIG. 9.
  • In contrast, the embodiment of the present application converts the transparent SDR layer into a transparent HDR layer and converts the SDR image area originating from the non-transparent SDR layer into an HDR image area (in the first image obtained after merging, the SDR image area is converted into an HDR image area), so that the image content of the second image obtained after conversion is unified, thereby solving the problems in the prior art.
  • In an alternative solution in which every SDR layer is converted to an HDR layer (and every transparent SDR layer to a transparent HDR layer), each of the mixed multiple layers uses tone mapping or inverse tone mapping technology so that the multiple layers are unified into SDR layers or HDR layers before merging; however, as many layers as there are, that many conversion units (used to implement the tone mapping or inverse tone mapping technology) must be added, and there are usually more than nine mixed layers, so the hardware cost is relatively large.
In the embodiment of the present application, by contrast, the transparent first dynamic range layer and the first-dynamic-range image area of the non-transparent first dynamic range layer are distinguished: the transparent first dynamic range layer is converted into a second dynamic range layer (the number of transparent first dynamic range layers is usually less than or equal to 3), and the first-dynamic-range image area in the merged image is passed through a single conversion unit (a module implementing inverse tone mapping or tone mapping technology) to unify the image content and obtain the final image.
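For illustration, a minimal sketch of what one such conversion unit could compute; the linear-light expansion used here is an assumption, since real units would implement a standardized tone mapping or inverse tone mapping curve in hardware:

```python
import numpy as np

def conversion_unit(sdr_region: np.ndarray,
                    sdr_peak_nits: float = 100.0,
                    hdr_peak_nits: float = 1000.0) -> np.ndarray:
    """One inverse tone mapping unit applied only to the first-dynamic-range
    image area of the merged image, instead of one unit per mixed layer."""
    # Undo the SDR transfer curve to get linear light (gamma 2.2 assumed).
    linear = np.power(np.clip(sdr_region, 0.0, 1.0), 2.2)
    # Map SDR reference white onto the HDR display's normalized range.
    return linear * (sdr_peak_nits / hdr_peak_nits)
```

With at most three transparent first-dynamic-range layers to convert and a single region conversion unit after the merge, the number of conversion units no longer grows with the number of mixed layers.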
In order to implement the above functions, the image processing device includes hardware structures and/or software modules corresponding to the various functions. Those skilled in the art will readily appreciate that this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
FIG. 10 shows a schematic diagram of a possible hardware implementation structure of the hybrid layer processing system involved in the foregoing embodiments. The hybrid layer processing system includes: an image reading interface 1001, configured to acquire multiple input layers to be processed; an image type detector 1002, configured to receive the input layers from the image reading interface 1001 and determine the dynamic range and transparent layer attribute of each input layer according to its dynamic range identifier and/or layer transparency, where the transparent layer attributes include transparent layer and non-transparent layer; a first calculation engine 1003, configured to receive the first layer when the image type detector 1002 determines that an input layer is the first layer, and convert the dynamic range of the first layer into the target dynamic range, where the first layer is a transparent layer and the dynamic range of the first layer is different from the target dynamic range; an image area calibrator 1004, configured to receive the second layer when the image type detector 1002 determines that an input layer is the second layer, and determine the first target area of the second layer, where the second layer is a non-transparent layer and the dynamic range of the second layer is different from the target dynamic range; a weighting calculator 1005, configured to merge the converted first layer, the second layer, and the third layer to obtain the first image, where the third layer has the target dynamic range; and a second calculation engine 1006, configured to receive the first image processed by the weighting calculator 1005 and the first target area determined by the image area calibrator 1004, and convert the dynamic range of the second target area in the first image into the target dynamic range to obtain the second image, where the second target area is the area in the first image corresponding to the first target area.
The weighting calculator 1005 is specifically configured to: according to the layer transparency of the first layer, the layer transparency of the second layer, and the layer transparency of the third layer, weight the pixel values of the first layer processed by the first calculation engine 1003, the pixel values of the second layer determined by the image type detector 1002, and the pixel values of the third layer determined by the image type detector 1002, to obtain the first image.
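Tying the components of FIG. 10 together, the following sketch shows one plausible dataflow; the layer dictionary layout, the content-based area detector, and the back-to-front "over" compositing rule are assumptions made for readability, not the device's actual interfaces:

```python
import numpy as np

def convert_to_target(rgb: np.ndarray) -> np.ndarray:
    # Stub for calculation engines 1003/1006; a real engine would apply a
    # standardized tone mapping or inverse tone mapping curve in hardware.
    return np.power(np.clip(rgb, 0.0, 1.0), 2.2)

def hybrid_layer_system(input_layers):
    """Dataflow of FIG. 10. Each input layer is a dict with keys 'rgb'
    (HxWx3 float), 'alpha' (HxW float), 'dynamic_range' ('target' or
    'other'), and 'transparent' (bool)."""
    routed, first_target_area = [], None
    for layer in input_layers:                    # image type detector 1002
        rgb = layer['rgb']
        if layer['dynamic_range'] != 'target':
            if layer['transparent']:              # first layer -> engine 1003
                rgb = convert_to_target(rgb)
            else:                                  # second layer -> calibrator 1004
                first_target_area = rgb.sum(axis=-1) > 0  # assumed detector
        routed.append((rgb, layer['alpha']))
    # Weighting calculator 1005: weight pixel values by layer transparency,
    # compositing back to front to obtain the first image.
    first_image = np.zeros_like(routed[0][0])
    for rgb, alpha in routed:
        a = alpha[..., None]
        first_image = a * rgb + (1.0 - a) * first_image
    # Second calculation engine 1006: convert the second target area of the
    # first image into the target dynamic range to obtain the second image.
    second_image = first_image.copy()
    if first_target_area is not None:
        second_image[first_target_area] = convert_to_target(
            first_image[first_target_area])
    return second_image
```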
The hardware structure of the foregoing hybrid layer processing system can be implemented by one or more processors, for example one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuits.
Accordingly, the functions described in the various illustrative logical blocks, modules, and steps herein may be provided in dedicated hardware and/or software modules configured for image processing, or incorporated in a combined image processor. Alternatively, the technology can be fully implemented in one or more circuits or logic elements.
The embodiments of the present application may divide the hybrid layer processing system and the hybrid layer processing device into functional modules according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The above integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is illustrative and is only a logical function division; there may be other division methods in actual implementation.
FIG. 11 shows another possible schematic structural diagram of the hybrid layer processing device involved in the foregoing embodiments. The device may be an image processing device, a chip or system-on-chip in an image processing device, or a circuit, module, or unit in an image processing device used to implement the foregoing method embodiments.
The device includes: an acquisition unit 1101, a conversion unit 1102, and a merging unit 1103. The acquisition unit 1101 is configured to support the device in executing S201, S601, or S701 in the method embodiments; the conversion unit 1102 is configured to support the device in executing S202, S204, S602, S702, or S704 in the method embodiments; and the merging unit 1103 is configured to support the device in executing S203, S603, or S703 in the method embodiments.
Further, the device may also include a display unit 1104, configured to support the device in performing the steps of displaying the first image or the second image in the method embodiments.
The technology of this application can be implemented in a variety of devices or apparatuses, including wireless handsets, integrated circuits (ICs), or a set of ICs (for example, a chipset). Various components, modules, or units are described in this application to emphasize the functional aspects of the device for performing the disclosed technology, but they do not necessarily need to be implemented by different hardware units. Rather, the various units may be combined with suitable software and/or firmware in an image processing hardware unit, or provided by interoperating hardware units (including one or more processors as described above).
The foregoing describes the hybrid layer processing device in the embodiments of the present application from the perspective of modular functional entities; the following describes it from the perspective of hardware processing. An embodiment of the present application also provides a hybrid layer processing device, whose structure may be as shown in FIG. 1.
The processor 102 is configured to perform one or more of steps S201-S204, S601-S603, and S701-S704 in the foregoing method embodiments, and/or other technical processes described herein. The memory 101 may be used to store the mixed layer, the first layer, the second layer, the third layer, the first image, and/or the second image. The display panel in the multimedia component 104 may be used to display the first image or the second image.
It should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative: the division of the modules or units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented. The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
An embodiment of the present application also provides a computer-readable storage medium storing instructions. When the instructions run on a device (for example, a single-chip microcomputer, chip, computer, or processor), the device is caused to execute one or more of steps S201-S204, S601-S603, and S701-S704 in the foregoing method embodiments, and/or other technical processes described herein. If the component modules of the above image processing device are implemented in the form of software functional units and sold or used as independent products, they can be stored in the computer-readable storage medium.
An embodiment of the present application also provides a computer program product containing instructions. The technical solution of the present application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor therein to execute all or part of the steps of the methods in the various embodiments of this application.

Abstract

A hybrid layer processing method and device are provided, relating to the field of image processing and used to solve the prior-art problem of display errors in an image obtained through hybrid layer processing. The method includes the steps of: acquiring a first layer, a second layer, and a third layer, the first layer and the second layer having a first dynamic range, the third layer having a second dynamic range, the first layer being a transparent layer, and the second layer being a non-transparent layer; converting the dynamic range of the first layer into the second dynamic range, and determining a first target area in the second layer; merging the second layer, the third layer, and the converted first layer to obtain a first image; and converting the dynamic range of a second target area in the first image into the second dynamic range to obtain a second image, the second target area being the area in the first image corresponding to the first target area.
PCT/CN2019/074307 2019-01-31 2019-01-31 Procédé et appareil de traitement de couche mixte WO2020155072A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980065334.8A CN112805745A (zh) 2019-01-31 2019-01-31 一种混合图层处理方法及装置
PCT/CN2019/074307 WO2020155072A1 (fr) 2019-01-31 2019-01-31 Procédé et appareil de traitement de couche mixte

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/074307 WO2020155072A1 (fr) 2019-01-31 2019-01-31 Procédé et appareil de traitement de couche mixte

Publications (1)

Publication Number Publication Date
WO2020155072A1 true WO2020155072A1 (fr) 2020-08-06

Family

ID=71841496

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/074307 WO2020155072A1 (fr) 2019-01-31 2019-01-31 Procédé et appareil de traitement de couche mixte

Country Status (2)

Country Link
CN (1) CN112805745A (fr)
WO (1) WO2020155072A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117632826A (zh) * 2022-08-15 2024-03-01 万有引力(宁波)电子科技有限公司 数据传输方法、装置、系统、设备及存储介质
CN117130511A (zh) * 2023-02-24 2023-11-28 荣耀终端有限公司 亮度控制方法及其相关设备


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105009567A (zh) * 2013-02-21 2015-10-28 杜比实验室特许公司 用于合成叠加图形的外观映射的系统和方法
CN107172370A (zh) * 2016-03-08 2017-09-15 豪威科技股份有限公司 增强的高动态范围
WO2018000126A1 (fr) * 2016-06-27 2018-01-04 Intel Corporation Procédé et système de mélange vidéo multicouche à plage dynamique multiple avec bande latérale de canal alpha pour la lecture vidéo
WO2018066482A1 (fr) * 2016-10-06 2018-04-12 株式会社ソニー・インタラクティブエンタテインメント Dispositif et procédé de traitement d'informations

Also Published As

Publication number Publication date
CN112805745A (zh) 2021-05-14

Similar Documents

Publication Publication Date Title
CN111654594B (zh) 图像拍摄方法、图像拍摄装置、移动终端及存储介质
CN107810505B (zh) 实时图像捕获参数的机器学习
US9451173B2 (en) Electronic device and control method of the same
CN109040603A (zh) 高动态范围图像获取方法、装置及移动终端
JP2016505968A (ja) 深度マッピング及び光源合成を用いる3d画像の向上のための装置
KR20190082080A (ko) 피처 매칭을 이용하는 멀티-카메라 프로세서
CN109120862A (zh) 高动态范围图像获取方法、装置及移动终端
CN114640783B (zh) 一种拍照方法及相关设备
US11257443B2 (en) Method for processing image, and display device
WO2020172888A1 (fr) Procédé et dispositif de traitement d'image
WO2023016320A1 (fr) Procédé et appareil de traitement d'image, dispositif et support
CN112071267B (zh) 亮度调整方法、亮度调整装置、终端设备及存储介质
US20220086382A1 (en) Method controlling image sensor parameters
CN112017577B (zh) 屏幕显示校准方法及装置
WO2020155072A1 (fr) Procédé et appareil de traitement de couche mixte
CN113596428A (zh) 映射曲线参数的获取方法和装置
CN108932703B (zh) 图片处理方法、图片处理装置及终端设备
CN109427041A (zh) 一种图像白平衡方法及系统、存储介质及终端设备
CN110618852A (zh) 视图处理方法、视图处理装置及终端设备
CN111901519B (zh) 屏幕补光方法、装置及电子设备
CN116668656A (zh) 图像处理方法及电子设备
CN114038370B (zh) 显示参数调整方法、装置、存储介质及显示设备
WO2021179819A1 (fr) Procédé et appareil de traitement de photographie, ainsi que support de stockage et dispositif électronique
CN116055699A (zh) 一种图像处理方法及相关电子设备
CN111800626B (zh) 拍照一致性评价方法、装置、移动终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19913982

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19913982

Country of ref document: EP

Kind code of ref document: A1