CN112805745A - Mixed layer processing method and device - Google Patents

Mixed layer processing method and device

Info

Publication number
CN112805745A
CN112805745A
Authority
CN
China
Prior art keywords
layer
image
dynamic range
hdr
sdr
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980065334.8A
Other languages
Chinese (zh)
Inventor
李蒙
赵可强
齐致远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN112805745A

Classifications

    • G06T5/94
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing

Abstract

The application provides a mixed layer processing method and device, relating to the field of image processing and used to solve the problem that images obtained by mixed layer processing in the prior art have display errors. The method comprises the following steps: acquiring a first layer, a second layer and a third layer, wherein the first layer and the second layer have a first dynamic range, the third layer has a second dynamic range, the first layer is a transparent layer, and the second layer is a non-transparent layer; converting the dynamic range of the first layer into the second dynamic range, and determining a first target area in the second layer; merging the second layer, the third layer and the converted first layer to obtain a first image; and converting the dynamic range of a second target area in the first image into the second dynamic range to obtain a second image, wherein the second target area is the area corresponding to the first target area in the first image.

Description

Mixed layer processing method and device

Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for processing a mixed layer.
Background
Dynamic Range (DR) is used in many fields to represent the ratio of the maximum value to the minimum value of a variable. In digital images, the dynamic range characterizes the ratio between the maximum and minimum gray values within the displayable range of the image, i.e. the number of gray levels into which the image is divided from "brightest" to "darkest". The larger the dynamic range of an image, the richer the brightness gradations the image can represent, and the more vivid its visual effect. The dynamic range of natural scenes in the real world is between 10^-3 and 10^6, a very large range known as High Dynamic Range (HDR). Relative to a high dynamic range image, the dynamic range of an ordinary image is a Low Dynamic Range (LDR), which may also be referred to as a Standard Dynamic Range (SDR).
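To make the definition concrete, the following sketch (not part of the original disclosure; the image values are hypothetical) computes the dynamic range of a grayscale image as the ratio of its largest to its smallest non-zero gray value:

```python
import numpy as np

def dynamic_range(image: np.ndarray) -> float:
    # Ratio of the maximum to the minimum non-zero gray value,
    # per the definition of dynamic range given above.
    lo = float(image[image > 0].min())  # skip pure black to avoid dividing by zero
    hi = float(image.max())
    return hi / lo

# An 8-bit SDR image can represent at most a 255:1 ratio,
# far below the 10^-3 to 10^6 range of real-world scenes.
sdr = np.array([[1, 128], [200, 255]], dtype=np.uint8)
print(dynamic_range(sdr))  # 255.0
```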
In the prior art, scenarios in which HDR images and SDR images are displayed together are common, for example, an SDR image advertisement popping up while a user plays an HDR video on a mobile phone. In such a scenario, the current solution does not distinguish between the HDR video and the SDR image: it directly merges the images of the multiple layers, then adaptively adjusts the resolution, color tone, dynamic range, and the like of the resulting unified image, and displays it adaptively according to the screen characteristics of the mobile phone. However, because this scheme cannot distinguish the HDR image from the SDR image during processing and treats them as the same type of image, the finally displayed image suffers from problems such as color confusion and invisible characters.
Disclosure of Invention
The application provides a mixed layer processing method and device, which are used for solving the problem that an image obtained by mixed layer processing in the prior art has display errors.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, a mixed layer processing method is provided, where the method includes: the method comprises the steps of obtaining a first layer, a second layer and a third layer, wherein the first layer and the second layer have a first dynamic range, the third layer has a second dynamic range, the first layer is a transparent layer, and the second layer is a non-transparent layer; converting the dynamic range of the first image layer into a second dynamic range, and determining a first target area in the second image layer; combining the second image layer, the third image layer and the converted first image layer to obtain a first image; and converting the dynamic range of a second target area in the first image into a second dynamic range to obtain a second image, wherein the second target area is a corresponding area of the first target area in the first image.
In the above technical solution, the dynamic range of the first layer is converted into the second dynamic range, and after merging, the dynamic range of the second target area in the first image is converted into the second dynamic range, so that the dynamic ranges of different image areas in the second image are consistent. This avoids problems such as color confusion and invisible characters in the second image, and improves both the performance of the image processing device in processing mixed layers and the user experience. In addition, in this solution, the dynamic ranges of the layers are not all converted into the same dynamic range before the multiple layers are merged: only the dynamic range of the first layer is converted into the second dynamic range, and after the layers are merged, the dynamic range of the second target area in the first image is converted into the second dynamic range to unify the image content. Compared with converting the dynamic range of every layer into the same dynamic range before merging, this can greatly reduce hardware cost while obtaining the same image quality.
In a possible implementation manner of the first aspect, the first dynamic range is a standard dynamic range SDR, and the second dynamic range is a high dynamic range HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR. In the possible implementation manner, when the dynamic range of the mixed layer includes SDR and HDR, the dynamic range of the finally obtained second image can be made SDR or HDR, so that the flexibility of the mixed layer processing can be improved.
In a possible implementation manner of the first aspect, before obtaining the first layer, the second layer, and the third layer, the method further includes: determining that the first image layer has a first dynamic range according to the dynamic range identifier of the first image layer; determining that the second image layer has a first dynamic range according to the dynamic range identifier of the second image layer; and determining that the third layer has a second dynamic range according to the dynamic range identifier of the third layer. In the possible implementation manner, the dynamic range of each layer can be simply and effectively determined according to the dynamic range identifier of each layer, so that layers with different dynamic ranges can be rapidly distinguished, and the efficiency of mixed layer processing is improved.
In a possible implementation manner of the first aspect, before obtaining the first layer, the second layer, and the third layer, the method further includes: determining the first layer as a transparent layer according to the layer transparency of the first layer; and determining the second layer as a non-transparent layer according to the layer transparency of the second layer. In the possible implementation manner, each layer can be simply and effectively determined to be a transparent layer or a non-transparent layer according to the layer transparency of each layer, so that the transparent layer and the non-transparent layer can be rapidly distinguished from the layer with the first dynamic range, and the processing efficiency of the mixed layer is further improved.
In a possible implementation manner of the first aspect, determining the first target area in the second layer includes: the first target region is determined by region of interest ROI identification. The possible implementation mode can simply and effectively determine the first target area in the second image layer, so that the efficiency of mixed image layer processing is improved.
In a possible implementation manner of the first aspect, merging the second image layer, the third image layer, and the converted first image layer to obtain the first image includes: and according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, performing weighted calculation on the converted pixel value of the first layer, the converted pixel value of the second layer and the converted pixel value of the third layer to obtain a first image. In the possible implementation manner, the first image obtained by merging can include the image of each layer, and the images of different layers are not affected by merging, so that the performance of mixed layer processing can be improved compared with the prior art.
In a possible implementation manner of the first aspect, a pixel value of a region outside the first target region in the second layer is 0. In the possible implementation manner, the influence of the region outside the first target region in the second layer on other layers in the layer merging process can be avoided.
In a possible implementation manner of the first aspect, converting the dynamic range of the first layer into the second dynamic range includes: when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first image layer into HDR through inverse tone mapping; and when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first image layer into SDR through tone mapping. In the possible implementation manner, the dynamic range of the first layer can be converted into HDR or SDR according to different dynamic ranges, so that the flexibility of the mixed layer processing can be improved.
In a possible implementation manner of the first aspect, converting a dynamic range of the second target region in the first image into a second dynamic range to obtain the second image includes: when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area into HDR through inverse tone mapping to obtain a second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area into SDR through tone mapping to obtain a second image. In the possible implementation manner, the dynamic range of the second target area can be converted into HDR or SDR according to different dynamic ranges, so that the flexibility of the mixed layer processing can be improved.
In a possible implementation manner of the first aspect, the method further includes: the second image is displayed. In the possible implementation manner, various display errors such as color confusion and invisible characters of the displayed second image can be avoided.
In a second aspect, a mixed layer processing system is provided, including: the image reading interface is used for acquiring a plurality of input image layers to be processed; the image type detector is used for receiving the input image layer in the image reading interface, and determining the dynamic range and the transparent image layer attribute of the input image layer according to the dynamic range identifier and/or the image layer transparency of the input image layer, wherein the transparent image layer attribute comprises a transparent image layer and a non-transparent image layer; the first computing engine is used for receiving the first image layer and converting the dynamic range of the first image layer into a target dynamic range when the image type detector determines that the input image layer is the first image layer, wherein the first image layer is a transparent image layer, and the dynamic range of the first image layer is different from the target dynamic range; the image area calibrator is used for receiving a second image layer and determining a first target area of the second image layer when the image type detector determines that the input image layer is the second image layer, wherein the second image layer is a non-transparent image layer, and the dynamic range of the second image layer is different from the target dynamic range; the weighting calculator is used for receiving a first image layer processed by the first calculation engine, a second image layer determined by the image type detector and an input image layer determined by the image type detector and serving as a third image layer, and combining the first image layer, the second image layer and the third image layer into a first image, wherein the third image layer has a target dynamic range; and the second calculation engine is used for receiving the first image processed by the weighting calculator and the first target area determined by the image area calibrator, and converting the dynamic range of the second target area in the first image into a target dynamic range to obtain a second image, wherein the second target area is a corresponding area of the first target area in the first image.
In the above technical solution, the dynamic range of the first layer is converted into the second dynamic range, and after merging, the dynamic range of the second target area in the first image is converted into the second dynamic range, so that the dynamic ranges of different image areas in the second image are consistent. This avoids problems such as color confusion and invisible characters in the second image, and improves both the performance of the image processing device in processing mixed layers and the user experience. In addition, in this solution, the dynamic ranges of the layers are not all converted into the same dynamic range before the multiple layers are merged: only the dynamic range of the first layer is converted into the second dynamic range, and after the layers are merged, the dynamic range of the second target area in the first image is converted into the second dynamic range to unify the image content. Compared with converting the dynamic range of every layer into the same dynamic range before merging, this can greatly reduce hardware cost while obtaining the same image quality.
In a possible implementation manner of the second aspect, the weighting calculator is specifically configured to: according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, the pixel value of the first layer processed by the first calculation engine, the pixel value of the second layer determined by the image type detector and the pixel value of the third layer determined by the image type detector are subjected to weighting calculation to obtain a first image.
In a possible implementation manner of the second aspect, the pixel value of the region outside the first target region in the second layer is 0.
In a third aspect, an apparatus for processing a mixed layer is provided, where the apparatus includes: the device comprises an obtaining unit, a judging unit and a judging unit, wherein the obtaining unit is used for obtaining a first image layer, a second image layer and a third image layer, the first image layer and the second image layer are provided with a first dynamic range, the third image layer is provided with a second dynamic range, the first image layer is a transparent image layer, and the second image layer is a non-transparent image layer; the conversion unit is used for converting the dynamic range of the first image layer into a second dynamic range; the determining unit is used for determining a first target area in the second image layer; a merging unit, configured to merge the second layer, the third layer, and the converted first layer to obtain a first image; the conversion unit is further configured to convert a dynamic range of a second target area in the first image into a second dynamic range to obtain a second image, where the second target area is a corresponding area of the first target area in the first image.
In a possible implementation manner of the third aspect, the first dynamic range is a standard dynamic range SDR, and the second dynamic range is a high dynamic range HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR.
In a possible implementation manner of the third aspect, the determining unit is further configured to: determining that the first image layer has a first dynamic range according to the dynamic range identifier of the first image layer; determining that the second image layer has a first dynamic range according to the dynamic range identifier of the second image layer; and determining that the third layer has a second dynamic range according to the dynamic range identifier of the third layer.
In a possible implementation manner of the third aspect, the determining unit is further configured to: determining the first layer as a transparent layer according to the layer transparency of the first layer; and determining the second layer as a non-transparent layer according to the layer transparency of the second layer.
In a possible implementation manner of the third aspect, the determining unit is further configured to: the first target region is determined by region of interest ROI identification.
In a possible implementation manner of the third aspect, the merging unit is specifically configured to: and according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, performing weighted calculation on the converted pixel value of the first layer, the converted pixel value of the second layer and the converted pixel value of the third layer to obtain a first image.
In a possible implementation manner of the third aspect, the pixel value of the area outside the first target area in the second layer is 0.
In a possible implementation manner of the third aspect, the conversion unit is specifically configured to: when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first image layer into HDR through inverse tone mapping; and when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first image layer into SDR through tone mapping.
In a possible implementation manner of the third aspect, the conversion unit is specifically configured to: when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area into HDR through inverse tone mapping to obtain a second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area into SDR through tone mapping to obtain a second image.
In a possible implementation manner of the third aspect, the apparatus further includes: and a display unit for displaying the second image.
Optionally, the present application further provides a mixed layer processing apparatus, where the apparatus includes: a memory, and a processor coupled to the memory, the memory storing instructions and data, the processor executing the instructions in the memory to cause the processor to perform the mixed layer processing method provided in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, a mixed layer processing method is provided, where the method includes: acquiring a first layer and a second layer, wherein the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a transparent layer; converting the dynamic range of the first layer into the second dynamic range; and merging the second layer and the converted first layer to obtain a first image.
In a possible implementation manner of the fourth aspect, the first dynamic range is SDR, and the second dynamic range is HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR.
In a possible implementation manner of the fourth aspect, before obtaining the first layer and the second layer, the method further includes: determining that the first image layer has a first dynamic range according to the dynamic range identifier of the first image layer; and determining that the second image layer has a second dynamic range according to the dynamic range identifier of the second image layer.
In a possible implementation manner of the fourth aspect, before obtaining the first layer and the second layer, the method further includes: and determining the first layer as a transparent layer according to the layer transparency of the first layer.
In a possible implementation manner of the fourth aspect, the merging the second layer and the converted first layer includes: and according to the layer transparency of the first layer and the layer transparency of the second layer, performing weighted calculation on the pixel value of the converted first layer and the pixel value of the second layer to obtain a first image.
In a possible implementation manner of the fourth aspect, converting the dynamic range of the first layer into the second dynamic range includes: when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first image layer into HDR through inverse tone mapping; and when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first image layer into SDR through tone mapping.
In one possible implementation manner of the fourth aspect, the method further includes: the first image is displayed.
In a fifth aspect, a mixed layer processing apparatus is provided, where the mixed layer processing apparatus may implement the functions of the mixed layer processing method provided in the fourth aspect or any possible implementation manner of the fourth aspect. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software comprises one or more units corresponding to the functions. Illustratively, the mixed layer processing apparatus may include an obtaining unit, a converting unit, and a merging unit.
In a possible implementation manner of the fifth aspect, the mixed layer processing apparatus structurally includes a processor, a memory, a communication interface, and a bus; the memory is used to store program code, and the processor, the memory, and the communication interface are connected through the bus. The program code, when executed by the processor, causes the mixed layer processing apparatus to execute the steps in the mixed layer processing method provided by the fourth aspect or any possible implementation manner of the fourth aspect.
In a sixth aspect, a mixed layer processing method is provided, where the method includes: acquiring a first image layer and a second image layer, wherein the first image layer has a first dynamic range, the second image layer has a second dynamic range, and the first image layer is a non-transparent image layer; determining a first target area in a first image layer; combining the first image layer and the second image layer to obtain a first image; and converting the dynamic range of a second target area in the first image into a second dynamic range to obtain a second image, wherein the second target area is a corresponding area of the first target area in the first image.
In a possible implementation manner of the sixth aspect, the first dynamic range is SDR, and the second dynamic range is HDR; or the first dynamic range is HDR, and the second dynamic range is SDR.
In a possible implementation manner of the sixth aspect, before obtaining the first layer and the second layer, the method further includes: determining that the first layer has the first dynamic range according to the dynamic range identifier of the first layer; and determining that the second layer has the second dynamic range according to the dynamic range identifier of the second layer.
In a possible implementation manner of the sixth aspect, before obtaining the first layer and the second layer, the method further includes: determining that the first layer is a non-transparent layer according to the layer transparency of the first layer.
In a possible implementation manner of the sixth aspect, determining the first target area in the first layer includes: the first target region is determined by region of interest ROI identification.
In a possible implementation manner of the sixth aspect, merging the first image layer and the second image layer to obtain the first image includes: and according to the layer transparency of the first layer and the layer transparency of the second layer, performing weighted calculation on the pixel value of the first layer and the pixel value of the second layer to obtain a first image.
In a possible implementation manner of the sixth aspect, the pixel value of the area outside the first target area in the first layer is 0.
In a possible implementation manner of the sixth aspect, converting the dynamic range of the second target region in the first image into the second dynamic range to obtain the second image includes: when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area into HDR through inverse tone mapping to obtain a second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area into SDR through tone mapping to obtain a second image.
In one possible implementation manner of the sixth aspect, the method further includes: the second image is displayed.
A seventh aspect provides a mixed layer processing apparatus, which may implement the functions of the mixed layer processing method provided in the sixth aspect or any possible implementation manner of the sixth aspect. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software comprises one or more units corresponding to the functions. Illustratively, the mixed layer processing apparatus may include an obtaining unit, a converting unit, and a merging unit.
In a possible implementation manner of the seventh aspect, the mixed layer processing apparatus includes a processor, a memory, a communication interface, and a bus; the memory is used to store program code, and the processor, the memory, and the communication interface are connected through the bus. The program code, when executed by the processor, causes the mixed layer processing apparatus to execute the steps in the mixed layer processing method provided by the sixth aspect or any possible implementation manner of the sixth aspect.
In yet another aspect of the present application, a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a computer, the computer is caused to execute the mixed layer processing method provided in the first aspect or any one of the possible implementation manners of the first aspect.
In yet another aspect of the present application, a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a computer, the computer is caused to execute the mixed layer processing method provided in the fourth aspect or any possible implementation manner of the fourth aspect.
In yet another aspect of the present application, a computer-readable storage medium is provided, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a computer, the computer is caused to execute the mixed layer processing method provided in the sixth aspect or any possible implementation manner of the sixth aspect.
In another aspect of the present application, a computer program product is provided, which when running on a computer, causes the computer to execute the mixed layer processing method provided in the first aspect or any one of the possible implementation manners of the first aspect.
In another aspect of the present application, a computer program product is provided, which when running on a computer, causes the computer to execute the mixed layer processing method provided in the fourth aspect or any possible implementation manner of the fourth aspect.
In another aspect of the present application, a computer program product is provided, which when running on a computer, causes the computer to execute the mixed layer processing method provided in any one of the above-mentioned sixth aspect or any one of the possible implementation manners of the sixth aspect.
It should be understood that the system, apparatus, computer storage medium, and computer program product of any of the mixed layer processing methods provided above are used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 2 is a first flowchart illustrating a mixed layer processing method according to an embodiment of the present application;
fig. 3 is a graph of a PQ photoelectric conversion function provided in an embodiment of the present application;
fig. 4 is a graph of an HLG photoelectric conversion function provided in an embodiment of the present application;
fig. 5 is a graph of an SLF photoelectric conversion function provided in an embodiment of the present application;
fig. 6 is a flowchart illustrating a second method for processing a mixed layer according to an embodiment of the present application;
fig. 7 is a third schematic flowchart of a mixed layer processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram of layers with different dynamic ranges to be merged according to an embodiment of the present application;
fig. 9 is a schematic diagram illustrating that a display error exists in an image after layer merging according to an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware implementation structure of a mixed layer processing system according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a mixed layer processing apparatus according to an embodiment of the present application.
Detailed Description
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c or a-b-c, wherein a, b and c can be single or multiple. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, in the embodiments of the present application, the words "first", "second", and the like do not limit the number and the execution order.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to mean exemplary, illustrative, or descriptive. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
Fig. 1 is a schematic structural diagram of an image processing device according to an embodiment of the present disclosure, where the image processing device may be a mobile phone, a tablet computer, a notebook computer, a video camera, a wearable device, an in-vehicle device, or a terminal device. For convenience of description, the above-mentioned apparatuses are collectively referred to as an image processing apparatus in the present application. The embodiment of the present application is described by taking the image processing apparatus as a mobile phone as an example, where the mobile phone includes: memory 101, processor 102, sensor component 103, multimedia component 104, audio component 105, and power component 106, among others.
The following describes each component of the mobile phone in detail with reference to fig. 1:
memory 101 may be used to store data, software programs, and modules; the system mainly comprises a storage program area and a storage data area, wherein the storage program area can store an operating system and application programs required by at least one function, such as a sound playing function, an image playing function and the like; the storage data area may store data created according to the use of the cellular phone, such as audio data, image data, a phonebook, and the like. In addition, the handset may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 102 is a control center of the mobile phone, connects various parts of the entire apparatus by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 101 and calling data stored in the memory 101, thereby performing overall monitoring of the mobile phone. In some possible embodiments, the processor 102 may be a single processor structure, a multi-processor structure, a single threaded processor, a multi-threaded processor, or the like; in some possible embodiments, the processor 102 may include a central processing unit, a general purpose processor, a digital signal processor, a microcontroller or microprocessor, or the like. In addition, the processor 102 may further include other hardware circuits or accelerators, such as application specific integrated circuits, field programmable gate arrays or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 102 may also be a combination that performs a computing function, such as a combination comprising one or more microprocessors, a digital signal processor and a microprocessor, or the like.
The sensor component 103 includes one or more sensors for providing various aspects of state assessment for the handset. The sensor assembly 103 may include, among other things, a light sensor, such as a CMOS or CCD image sensor, for detecting the distance of an external object from the mobile phone, or for use in imaging applications, i.e., as an integral part of a camera or a video camera. In addition, the sensor assembly 103 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor or a temperature sensor, and acceleration/deceleration, orientation, on/off state of the cellular phone, relative positioning of the components, or temperature change of the cellular phone, etc. may be detected by the sensor assembly 103.
The multimedia component 104 is a screen providing an output interface between the mobile phone and the user. The screen may be a touch panel; in that case, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In addition, the multimedia component 104 may further include at least one camera, for example, a front camera and/or a rear camera. When the mobile phone is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 105 may provide an audio interface between the user and the handset, for example, the audio component 105 may include audio circuitry, a speaker, and a microphone. The audio circuit can transmit the electric signal converted from the received audio data to the loudspeaker, and the electric signal is converted into a sound signal by the loudspeaker to be output; on the other hand, the microphone converts the collected sound signals into electrical signals, which are received by the audio circuitry and converted into audio data, which is then output for transmission to, for example, another cell phone, or to the processor 102 for further processing.
The power component 106 is used to provide power to the various components of the handset, and the power component 106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to the handset.
Although not shown, the mobile phone may further include a Wireless Fidelity (WiFi) module, a bluetooth module, and the like, which is not described herein again in this embodiment of the present application. Those skilled in the art will appreciate that the handset configuration shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Fig. 2 is a schematic flowchart of a mixed layer processing method according to an embodiment of the present application, where the method is applicable to the image processing device shown in fig. 1. Referring to fig. 2, the method includes the following steps.
S201: the method comprises the steps of obtaining a first image layer, a second image layer and a third image layer, wherein the first image layer and the second image layer have a first dynamic range, the third image layer has a second dynamic range, the first image layer is a transparent image layer, and the second image layer is a non-transparent image layer.
Wherein the dynamic range can be used to characterize the ratio between the maximum luminance and the minimum luminance within the displayable range of the image, i.e. the number of gray levels into which the image is divided from "brightest" to "darkest". The unit is candela per square meter (cd/m²), which may also be expressed in nits (nit).
In the embodiment of the present application, the first dynamic range may be a Standard Dynamic Range (SDR), and the second dynamic range may be a High Dynamic Range (HDR); alternatively, the first dynamic range is HDR and the second dynamic range is SDR. The mixed layer may include at least one SDR layer and at least one HDR layer, the SDR layer may be divided into a transparent layer and a non-transparent layer, and the HDR layer may also be divided into a transparent layer and a non-transparent layer.
It should be noted that SDR may refer to the dynamic range of a general image, for example, an image captured by an ordinary camera; each color channel in an SDR image can be represented by 256 gradations. HDR refers to the dynamic range of a natural scene in the real world. An HDR image is an image type that can represent the wide variation of brightness in an actual scene, and can better represent the optical characteristics of bright and dark areas in the scene. The range of pixel values an HDR image needs to represent is usually large, sometimes hundreds of thousands or even millions, and each color channel of an HDR image needs more data bits than that of an SDR image. An HDR image can be synthesized from multiple SDR images with different exposure times, or captured by professional HDR shooting equipment. HDR images better reflect the visual effects of real environments.
Optionally, before obtaining the first image layer, the second image layer, and the third image layer, the method may further include: determining that the first image layer has a first dynamic range according to the dynamic range identifier of the first image layer; determining that the second image layer has a first dynamic range according to the dynamic range identifier of the second image layer; and determining that the third layer has a second dynamic range according to the dynamic range identifier of the third layer. Optionally, the method may further include: determining the first layer as a transparent layer according to the layer transparency of the first layer; and determining the second layer as a non-transparent layer according to the layer transparency of the second layer.
Each layer may correspond to a plurality of layer parameters, which may include a dynamic range identifier and a layer transparency. The dynamic range identifier may be used to distinguish an SDR layer from an HDR layer, and the layer transparency may be used to distinguish a transparent layer from a non-transparent layer. Optionally, the dynamic range identifier may be a photoelectric conversion function: in practical applications, the photoelectric conversion functions of SDR layers and HDR layers are different, so SDR and HDR layers may be distinguished by the photoelectric conversion functions of the different layers; for a description of the photoelectric conversion functions of different layers, refer to the following description.
For example, the photoelectric conversion function of the SDR layer is a gamma photoelectric conversion function, the photoelectric conversion function of the HDR layer may be any one of a Perceptual Quantization (PQ) photoelectric conversion function, a hybrid log-gamma (HLG) photoelectric conversion function, and a Scene Luminance Fidelity (SLF) photoelectric conversion function, and it may be determined as the SDR layer or the HDR layer according to the type of the photoelectric conversion function of each of the mixed layers.
Illustratively, the transparent layer and the non-transparent layer are distinguished by the transparency of the image. Each pixel point of a layer corresponds to an alpha value representing the transparency of that pixel point. The value range of the alpha value may be [0, 1]: when the alpha value equals 1, the pixel point is non-transparent; when the alpha value does not equal 1, the pixel point is transparent, with different alpha values between 0 and 1 corresponding to different degrees of transparency. When the alpha values of all pixel points of a layer are 1, the layer is a non-transparent layer; in all other cases, the layer is a transparent layer. It should be understood that different transparent layers may also have different degrees of transparency.
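As a hedged illustration of how these two layer parameters might be checked, the sketch below assumes a simple layer record with a transfer-function identifier and a per-pixel alpha plane; the field names and types are assumptions for illustration, not the patent's data structures:

```python
import numpy as np

class Layer:
    # Hypothetical layer record carrying the two parameters described above.
    def __init__(self, pixels: np.ndarray, transfer_function: str, alpha: np.ndarray):
        self.pixels = pixels
        self.transfer_function = transfer_function  # e.g. "gamma", "PQ", "HLG", "SLF"
        self.alpha = alpha                          # per-pixel values in [0, 1]

def is_hdr(layer: Layer) -> bool:
    # SDR layers use the gamma transfer function; PQ, HLG, and SLF indicate HDR.
    return layer.transfer_function in ("PQ", "HLG", "SLF")

def is_transparent(layer: Layer) -> bool:
    # Non-transparent only if every pixel's alpha equals 1; transparent otherwise.
    return not bool(np.all(layer.alpha == 1.0))
```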
Illustratively, the layer with the first dynamic range is an SDR layer and the layer with the second dynamic range is an HDR layer. Suppose the mixed layers comprise 9 layers in total, including 2 transparent SDR layers, 1 non-transparent SDR layer, and 6 HDR layers.
S202: and converting the dynamic range of the first image layer into a second dynamic range, and determining a first target area in the second image layer.
When the first dynamic range is SDR and the second dynamic range is HDR, the dynamic range of the first layer may specifically be converted into HDR through inverse tone mapping. Inverse tone mapping here may refer to the process of mapping a transparent SDR layer to a transparent HDR layer; its specific implementation is not limited in this embodiment of the application. For example, suppose the mixed layers comprise 9 layers, denoted layer 1 to layer 9, where layer 1 to layer 3 are SDR layers, layer 1 and layer 2 are transparent layers, layer 3 is a non-transparent layer, and layer 4 to layer 9 are HDR layers. Then the dynamic ranges of layer 1 and layer 2 may be converted into HDR through inverse tone mapping, that is, layer 1 and layer 2 are converted into HDR layers.
When the first dynamic range is HDR and the second dynamic range is SDR, the dynamic range of the first layer may specifically be converted into SDR through tone mapping. Tone mapping here may refer to the process of mapping a transparent HDR layer to a transparent SDR layer; its specific implementation is not limited in this embodiment of the application. For example, suppose the mixed layers comprise 9 layers, denoted layer 1 to layer 9, where layer 1 to layer 3 are HDR layers, layer 1 and layer 2 are transparent layers, layer 3 is a non-transparent layer, and layer 4 to layer 9 are SDR layers. Then the dynamic ranges of layer 1 and layer 2 may be converted into SDR through tone mapping, that is, layer 1 and layer 2 are converted into SDR layers.
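The direction selection in S202 can be sketched as follows. Note that tone_mapping and inverse_tone_mapping below are placeholder curves purely for illustration, since the embodiment deliberately leaves the concrete mappings open:

```python
import numpy as np

def inverse_tone_mapping(sdr: np.ndarray) -> np.ndarray:
    # Placeholder SDR -> HDR expansion on normalized [0, 1] pixels;
    # a real implementation would be far more sophisticated.
    return np.clip(sdr, 0.0, 1.0) ** 2.0

def tone_mapping(hdr: np.ndarray) -> np.ndarray:
    # Placeholder HDR -> SDR compression on normalized [0, 1] pixels.
    return np.clip(hdr, 0.0, 1.0) ** 0.5

def convert_dynamic_range(pixels: np.ndarray, source: str, target: str) -> np.ndarray:
    if source == "SDR" and target == "HDR":
        return inverse_tone_mapping(pixels)
    if source == "HDR" and target == "SDR":
        return tone_mapping(pixels)
    return pixels  # already in the target dynamic range
```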
In addition, determining the first target area in the second layer may specifically include: and identifying a first target area in the second image layer according to a region of interest (ROI) identification technology. The region of interest ROI herein may be a region to be processed, which is delineated from the processed image in a manner of a box, a circle, an ellipse, an irregular polygon, or the like. Optionally, the first target region identified by the ROI identification technique may be represented by its position information in the second layer.
It should be noted that, for the specific process of identifying the first target area in the second layer using the ROI identification technology, reference may be made to the detailed description of ROI identification in the prior art; this is not specifically limited in this embodiment of the application.
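For illustration only, the position information produced by ROI identification could be carried as a simple rectangular bounding box (a hypothetical representation; as noted above, the region may also be a circle, ellipse, or irregular polygon):

```python
from dataclasses import dataclass

@dataclass
class ROI:
    # Rectangular first target area, stored as its position in the layer.
    top: int
    left: int
    height: int
    width: int

    def slices(self):
        # Convenience accessor for indexing numpy image arrays.
        return (slice(self.top, self.top + self.height),
                slice(self.left, self.left + self.width))
```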
S203: and combining the second image layer, the third image layer and the converted first image layer to obtain a first image.
The merging the second layer, the third layer, and the converted first layer may include: and according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, performing weighted calculation on the converted pixel value of the first layer, the converted pixel value of the second layer and the converted pixel value of the third layer to obtain a first image.
It should be understood that, in general, the layer transparency is not affected by the conversion of the dynamic range, and therefore, the layer transparency of the first layer is the same as the layer transparency of the converted first layer.
Specifically, each layer may include a plurality of pixel values, and each pixel value may correspond to one layer transparency; for pixel values at the same position in different layers, the sum of the corresponding layer transparencies is 1. Assume each layer includes m × n pixel points; the pixel value of pixel point (i, j) of the converted first layer is denoted A(i, j) with corresponding layer transparency X(i, j), the pixel value of pixel point (i, j) of the second layer is denoted B(i, j) with corresponding layer transparency Y(i, j), and the pixel value of pixel point (i, j) of the third layer is denoted C(i, j) with corresponding layer transparency Z(i, j), where i ranges over 1, 2, …, m and j ranges over 1, 2, …, n. The pixel value D(i, j) of pixel point (i, j) in the merged first image can then be expressed by the following formula (0):
D(i,j)=A(i,j)×X(i,j)+B(i,j)×Y(i,j)+C(i,j)×Z(i,j) (0)
For any pixel point (i, j), X(i, j) + Y(i, j) + Z(i, j) = 1 is satisfied, that is, the sum of the layer transparencies corresponding to the pixel values at the same position in different layers is 1.
From the above formula, the relationship among a pixel point's transparency, its pixel value, and the transparency perceived visually can be further understood. As described above, each pixel point corresponds to a transparency value, and a layer containing pixel points whose transparency is not equal to 1 is called a transparent layer. When layers are merged, the transparency corresponding to each pixel point is used as the weight in the weighted calculation over the pixel points of each layer. For a given pixel point, when its pixel value is 0, the product of the value and the transparency is also 0, and the pixel point is perceived as visually transparent, i.e. its pixel value is not reflected in the merged layer; when the transparency is 0, the product of the transparency and the pixel value is likewise 0, and the pixel point is also perceived as visually transparent.
It should be noted that, the process of combining the second layer, the third layer, and the converted first layer is illustrated by taking only three layers as an example, and in practical application, different numbers of layers such as two layers, four layers, and the like may also be combined in the above manner, which is not limited in this embodiment of the application.
Optionally, the pixel value of the area outside the first target area in the second layer is 0. Since the second layer is a non-transparent layer, that is, the layer transparency of the second layer is 1, at this time, the pixel value of the region outside the first target region in the second layer is 0, when layer merging is performed according to the above manner, the product of the pixel value 0 of the region outside the first target region in the second layer and the layer transparency thereof is 0, so that the merged first image only includes the first target region in the second layer, and does not include the region outside the first target region in the second layer. In other words, combining the second image layer, the third image layer, and the converted first image layer to obtain the first image may also be understood as: and combining the third image layer and the converted first image layer to obtain a third image, and replacing the image of the corresponding area of the third image with the image of the first target area to obtain a first image.
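A minimal sketch of formula (0), generalized to any number of layers as noted above; it assumes per-pixel alpha planes that sum to 1 at every position, and the trailing comment illustrates why zeroing the second layer outside its first target area keeps that area's surroundings out of the merge:

```python
import numpy as np

def merge_layers(pixel_planes, alpha_planes):
    # Weighted merge per formula (0):
    #   D(i, j) = sum over layers k of  P_k(i, j) * alpha_k(i, j),
    # where the alphas at each (i, j) are assumed to sum to 1.
    merged = np.zeros_like(pixel_planes[0], dtype=np.float64)
    for pixels, alpha in zip(pixel_planes, alpha_planes):
        merged += pixels.astype(np.float64) * alpha
    return merged

# Outside the first target area the second layer contributes
# P * alpha = 0 * 1 = 0, so only its target area appears in the
# merged first image, exactly as described above.
```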
S204: and converting the dynamic range of a second target area in the first image into a second dynamic range to obtain a second image, wherein the second target area is a corresponding area of the first target area in the first image.
The second target area in the first image is from the non-transparent layer and has the first dynamic range, and the other image areas in the first image are from the layer having the second dynamic range, that is, the dynamic ranges of the different image areas in the first image are not consistent, so that the dynamic range of the second target area in the first image can be converted into the second dynamic range, so that the dynamic ranges of the different image areas in the second image are consistent. Optionally, the position of the second target area in the first image may be determined by the position information of the first target area in the second image layer, which is obtained in the above step S202.
Specifically, when the dynamic range of the second target region is SDR and the second dynamic range is HDR, the dynamic range of the second target region is converted into HDR through inverse tone mapping to obtain a second image; and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area into SDR through tone mapping to obtain a second image.
It should be noted that the inverse tone mapping and the tone mapping herein are consistent with the inverse tone mapping and the tone mapping in S202, and specific reference may be made to the related description in S202, which is not repeated herein.
Optionally, when the shape of the first target area is an irregular shape, the dynamic range of the first target area may also be converted into a second dynamic range before the plurality of layers are combined; when the shape of the first target region is a regular shape, the dynamic range of the second target region, which is a corresponding region of the first target region in the first image, may be converted into the second dynamic range in S204.
Further, after the dynamic range of the second target area in the first image is converted into the second dynamic range to obtain the second image, the second image can be displayed. Because the dynamic ranges of the different image areas in the second image are consistent, the finally displayed second image is free of problems such as color confusion and invisible characters, which improves the performance of the image processing device in processing mixed layers as well as the user experience. In addition, when unifying multiple layers into the same dynamic range, this method does not convert the dynamic range of every layer in the mixed layers into the same dynamic range before merging. Only the dynamic range of the first layer (i.e. the transparent layer with the first dynamic range) is converted into the second dynamic range; after the layers are merged, the dynamic range of the second target area in the first image (i.e. the area corresponding to the first target area in the second layer, which is a non-transparent layer with the first dynamic range) is converted into the second dynamic range to unify the image content. Compared with converting the dynamic range of every layer in the mixed layers into the same dynamic range before merging, this can greatly reduce hardware cost while obtaining the same image quality.
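Putting S201 to S204 together, a minimal end-to-end sketch reusing convert_dynamic_range, merge_layers, and ROI from the snippets above (placeholder tone mapping, a rectangular target area, and single-channel float layers are all assumptions, not the patent's concrete implementation):

```python
import numpy as np

def process_mixed_layers(first, second, third, alphas, roi, target="HDR"):
    # first and second carry the first dynamic range (here SDR),
    # third carries the second dynamic range (here HDR); all are
    # normalized float arrays, and alphas sum to 1 per pixel.
    # S202: convert only the transparent first layer before merging.
    first = convert_dynamic_range(first, "SDR", target)

    # S203: weighted merge per formula (0); pixels of the second layer
    # outside the first target area are assumed already zeroed.
    merged = merge_layers([first, second, third], alphas)

    # S204: the second target area still carries the first dynamic
    # range, so convert just that region to unify the image.
    r = roi.slices()
    merged[r] = convert_dynamic_range(merged[r], "SDR", target)
    return merged
```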
By way of example, to better distinguish an SDR image from an HDR image, the photoelectric conversion functions of SDR images and HDR images are described below.
The photoelectric conversion function of an SDR image is generally a Gamma function. A transfer function based on the Gamma function is defined in the ITU-R Recommendation BT.1886 standard, as shown in the following formula (1), where L represents an optical signal and V represents an electrical signal. An image quantized to 8 bits under formula (1) is an SDR image, and the SDR image together with the above transfer function performs well on conventional display devices (with luminance around 100 cd/m²).

$L = a\,(\max[(V + b), 0])^{\gamma}, \quad \gamma = 2.4$    (1)

where a and b are constants determined by the display's white and black luminance levels.
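For reference, a minimal Python sketch of the Gamma transfer of formula (1) follows; the derivation of a and b from the display's white and black luminance levels follows the public BT.1886 text and is an assumption here, not something recited in this application.

```python
import numpy as np

def bt1886_gamma(v, l_white=100.0, l_black=0.0, gamma=2.4):
    """Gamma transfer of formula (1): electrical signal V in [0, 1] to
    display luminance L in cd/m^2. a and b follow from the display's
    white and black luminance levels."""
    lw, lb = l_white ** (1.0 / gamma), l_black ** (1.0 / gamma)
    a = (lw - lb) ** gamma
    b = lb / (lw - lb)
    return a * np.maximum(np.asarray(v) + b, 0.0) ** gamma
```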
With the upgrading of display devices, the luminance range of displays keeps increasing: consumer-level HDR displays currently reach 600 cd/m², and high-end HDR displays can reach 2000 cd/m². The photoelectric conversion functions of HDR images are generally PQ, HLG, and SLF, the three conversion functions specified in the AVS standard. The three curves are described below.
The PQ transfer function is a perceptual quantization transfer function derived from a model of human luminance perception, as shown in the following formula (2). Plotting the transfer function of formula (2) yields the curve shown in fig. 3.

$V = \left( \frac{c_1 + c_2 L^{m_1}}{1 + c_3 L^{m_1}} \right)^{m_2}$    (2)

where:

$m_1 = 2610/16384 = 0.1593017578125$

$m_2 = 2523/4096 \times 128 = 78.84375$

$c_1 = c_3 - c_2 + 1 = 3424/4096 = 0.8359375$

$c_2 = 2413/4096 \times 32 = 18.8515625$

$c_3 = 2392/4096 \times 32 = 18.6875$

In the above formula, L represents an optical signal normalized so that 1.0 corresponds to 10000 cd/m², V represents an electrical signal, and $m_1$, $m_2$, $c_1$, $c_2$, and $c_3$ are the parameters of the formula.
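Formula (2) with the standard PQ constants can be written directly in Python; this is an illustrative sketch (the function name is an assumption):

```python
import numpy as np

M1 = 2610.0 / 16384.0          # m1 = 0.1593017578125
M2 = 2523.0 / 4096.0 * 128.0   # m2 = 78.84375
C1 = 3424.0 / 4096.0           # c1 = 0.8359375
C2 = 2413.0 / 4096.0 * 32.0    # c2 = 18.8515625
C3 = 2392.0 / 4096.0 * 32.0    # c3 = 18.6875

def pq_oetf(l):
    """PQ transfer of formula (2): optical signal L (1.0 = 10000 cd/m^2)
    to electrical signal V in [0, 1]."""
    lm = np.power(np.asarray(l), M1)
    return np.power((C1 + C2 * lm) / (1.0 + C3 * lm), M2)
```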
The HLG transfer function differs from the traditional Gamma transfer function: it improves on the traditional Gamma curve by applying the traditional Gamma to the lower part of the signal range and supplementing it with a Log curve in the upper part, giving the Hybrid Log-Gamma transfer function shown in the following formula (3), where L represents an optical signal, V represents an electrical signal, and a, b, and c are the parameters of formula (3). Plotting the HLG transfer function of formula (3) yields the curve shown in fig. 4.

$V = \begin{cases} \sqrt{3L}, & 0 \le L \le 1/12 \\ a \ln(12L - b) + c, & 1/12 < L \le 1 \end{cases}$    (3)

with $a = 0.17883277$, $b = 0.28466892$, and $c = 0.55991073$.
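A Python sketch of formula (3), with the square-root and logarithmic segments joined continuously at L = 1/12:

```python
import numpy as np

A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(l):
    """HLG transfer of formula (3): square-root segment below L = 1/12,
    logarithmic segment above it."""
    l = np.asarray(l, dtype=float)
    # clip the log argument: np.where evaluates both branches eagerly
    log_arg = np.maximum(12.0 * l - B, 1e-12)
    return np.where(l <= 1.0 / 12.0, np.sqrt(3.0 * l), A * np.log(log_arg) + C)
```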
The SLF photoelectric conversion function first normalizes the input optical signal, with the maximum luminance defined as 10000 cd/m², and then converts it with the transfer function shown in the following formula (4), where L represents the normalized optical signal, V represents an electrical signal, and p, m, a, and b are the parameters of formula (4).

$V = a \left( \frac{pL}{(p-1)L + 1} \right)^{m} + b$    (4)

The parameter values in formula (4) are selected as follows: p = 2.2, m = 0.14, a = 1.12672, b = −0.12672; the corresponding curve shape is shown in fig. 5. The curve of fig. 5 is obtained from the luminance perception characteristics of the human eye combined with the scene luminance distribution of existing HDR image sequences.
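Under the reconstruction of formula (4) above (the exact SLF form is taken from related AVS/HDR literature and should be treated as an assumption), a Python sketch with the quoted parameter values is:

```python
import numpy as np

P, M, A, B = 2.2, 0.14, 1.12672, -0.12672

def slf_oetf(l):
    """SLF transfer of formula (4): normalized optical signal L
    (1.0 = 10000 cd/m^2) to electrical signal V."""
    l = np.asarray(l, dtype=float)
    return A * np.power(P * l / ((P - 1.0) * l + 1.0), M) + B
```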
It should be understood that the above-described embodiment describes the processing method when the transparent layer having the first dynamic range, the non-transparent layer having the first dynamic range, and the layer having the second dynamic range coexist in the mixed layer. In some possible embodiments, only the transparent layer with the first dynamic range and the layer with the second dynamic range coexist in the mixed layer, or only the non-transparent layer with the first dynamic range and the layer with the second dynamic range coexist, which will be described in fig. 6 and fig. 7 and the corresponding embodiments, respectively.
Fig. 6 is a schematic flowchart of another hybrid layer processing method provided in an embodiment of the present application, where the method is applicable to the image processing apparatus shown in fig. 1, and referring to fig. 6, the method includes the following steps.
S601: and acquiring a first layer and a second layer, wherein the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a transparent layer.
Wherein the first dynamic range may be SDR and the second dynamic range may be HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR. The mixed layer may include at least one SDR layer and at least one HDR layer, the SDR layer may be divided into a transparent layer and a non-transparent layer, and the HDR layer may also be divided into a transparent layer and a non-transparent layer.
When the mixed layer only includes the transparent layer with the first dynamic range and the layer with the second dynamic range, the first layer and the second layer may be obtained from the mixed layer, and a specific obtaining process is consistent with the process in S201.
S602: and converting the dynamic range of the first image layer into a second dynamic range. The specific process of converting the dynamic range of the first layer into the second dynamic range is consistent with the process in S202, which may specifically refer to related descriptions, and this embodiment of the present application is not described herein again.
S603: and combining the second image layer and the converted first image layer to obtain a first image. The specific description of the first image obtained by combining the first image layer and the second image layer is consistent with the description of the first image obtained by combining the second image layer, the third image layer and the converted first image layer in S203, which is specifically referred to the description in S203, and the description of the embodiment of the present application is not repeated here.
Further, after the first image is obtained, the first image may also be displayed. Because the first image is obtained by merging a plurality of layers that all have the second dynamic range, the dynamic range within the first image is consistent, and problems such as color confusion and illegible characters do not occur in the finally displayed first image, which improves the performance of the image processing device in processing mixed layers and also improves user experience.
Fig. 7 is a schematic flowchart of another hybrid layer processing method provided in an embodiment of the present application, which can be applied to the image processing apparatus shown in fig. 1, and referring to fig. 7, the method includes the following steps.
S701: and acquiring a first layer and a second layer, wherein the first layer has a first dynamic range, the second layer has a second dynamic range, and the first layer is a non-transparent layer.
Wherein the first dynamic range may be SDR and the second dynamic range may be HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR. The mixed layer may include at least one SDR layer and at least one HDR layer, the SDR layer may be divided into a transparent layer and a non-transparent layer, and the HDR layer may also be divided into a transparent layer and a non-transparent layer.
When the mixed layer only includes the non-transparent layer with the first dynamic range and the layer with the second dynamic range, the first layer and the second layer may be obtained from the mixed layer, and a specific obtaining process is consistent with the process in S201.
S702: and determining a first target area in the first image layer. The specific description for determining the first target area in the first layer is consistent with the description for determining the first target area in the second layer in S202, which is specifically referred to the description in S202, and this embodiment of the present application is not described herein again.
S703: and combining the first image layer and the second image layer to obtain a first image. The specific description of the first image obtained by combining the first image layer and the second image layer is consistent with the description of the first image obtained by combining the second image layer, the third image layer and the converted first image layer in S203, which is specifically referred to the description in S203, and the description of the embodiment of the present application is not repeated here.
S704: and converting the dynamic range of a second target area in the first image into a second dynamic range to obtain a second image, wherein the second target area is a corresponding area of the first target area in the first image.
It should be noted that the specific process of converting the dynamic range of the second target region in the first image into the second dynamic range is consistent with the process in S204, which may specifically refer to related descriptions, and the embodiment of the present application is not described herein again.
Further, after the dynamic range of the second target area in the first image is converted into the second dynamic range to obtain the second image, the second image may be displayed. Because the dynamic ranges of the different image areas in the second image are consistent, problems such as color confusion and illegible characters do not occur in the finally displayed second image, which improves the performance of the image processing device in processing mixed layers and also improves user experience.
For convenience of understanding, the method provided by the embodiment of the present application is illustrated below by taking an example that the mixed layer includes an SDR layer and an HDR layer, and the first dynamic range is SDR and the second dynamic range is HDR.
When the mixed layers include one or more SDR layers and one or more HDR layers, as shown in fig. 8 (for example, an HDR layer containing a giraffe, a transparent SDR layer containing a thumbnail, and a non-transparent SDR layer containing a play bar), directly merging the mixed layers according to the prior-art processing method, without distinguishing the HDR layers from the SDR layers, leads to inconsistent image content and thus to display errors; for example, the erroneous display shown in fig. 9 may occur in the final image.
To solve the above problem, in the embodiments of the present application, the transparent SDR layer is converted into a transparent HDR layer, and the SDR image region contributed by the non-transparent SDR layer is converted into an HDR image region (that is, the SDR image region in the first image obtained after merging is converted into an HDR image region), so that the image content of the second image obtained after conversion is unified, thereby solving the problems in the prior art.
It should be noted that, when unifying a plurality of mixed layers into the same dynamic range (for example, converting an SDR layer into an HDR layer, or a transparent SDR layer into a transparent HDR layer), tone mapping or inverse tone mapping could in theory be applied to each layer separately, after which all layers would be SDR layers or HDR layers. However, that approach requires one conversion unit (a module implementing the tone mapping or inverse tone mapping technique) per layer, and the number of mixed layers is usually greater than 9, so the hardware cost is large. In the embodiments of the present application, the transparent first-dynamic-range layers are distinguished from the non-transparent first-dynamic-range layers: only the transparent first-dynamic-range layers are converted into second-dynamic-range layers (the number of transparent first-dynamic-range layers is usually less than or equal to 3), and after the layers are merged, the first-dynamic-range regions in the merged image are passed through a single conversion unit to unify the image content and obtain the final image. Compared with adding a conversion unit for every layer, this greatly reduces the hardware cost while obtaining the same image quality.
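As a rough illustration of this cost argument, the following Python sketch (the layer representation, the scalar alphas, and the trivial itm operator are all assumptions, not the application's method) performs one pre-merge conversion per transparent SDR layer plus one post-merge conversion per target region, instead of one conversion per layer:

```python
import numpy as np

def itm(x):
    # hypothetical inverse tone mapping (SDR -> HDR): power-law expansion
    return np.power(x, 1.5)

def blend_mixed_layers(layers):
    """Sketch of the overall flow for an HDR target. `layers` is a list of
    dicts with keys 'pixels' (H x W x 3 float), 'alpha' (scalar weight),
    'is_hdr' (bool), 'transparent' (bool), and, for non-transparent SDR
    layers, 'region' = (top, left, h, w) marking the first target area."""
    regions = []
    for layer in layers:
        if not layer['is_hdr']:
            if layer['transparent']:
                layer['pixels'] = itm(layer['pixels'])  # pre-merge conversion
            else:
                regions.append(layer['region'])         # remember target area
    # alpha-weighted merge of all layers (weights assumed to sum to 1)
    merged = sum(l['alpha'] * l['pixels'] for l in layers)
    # single post-merge conversion of each second target area (S204)
    for top, left, h, w in regions:
        merged[top:top + h, left:left + w] = itm(merged[top:top + h, left:left + w])
    return merged
```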
The mixed layer processing method provided by the embodiments of the present application has mainly been described from the perspective of an image processing device. It is to be understood that, to realize the above functions, the image processing device includes hardware structures and/or software modules corresponding to the respective functions. Those of skill in the art will readily appreciate that the various illustrative structures and algorithm steps described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 10 is a schematic diagram illustrating a possible hardware implementation structure of the hybrid layer processing system according to the foregoing embodiment, where the hybrid layer processing system includes: an image reading interface 1001 for acquiring a plurality of input layers to be processed; an image type detector 1002, configured to receive an input layer in the image reading interface 1001, and determine a dynamic range and transparent layer attributes of the input layer according to a dynamic range identifier and/or layer transparency of the input layer, where the transparent layer attributes include a transparent layer and a non-transparent layer; a first calculation engine 1003, configured to receive the first layer and convert a dynamic range of the first layer into a target dynamic range when the image type detector 1002 determines that the input layer is a first layer, where the first layer is a transparent layer, and the dynamic range of the first layer is different from the target dynamic range; an image area calibrator 1004, configured to receive a second layer and determine a first target area of the second layer when the image type detector 1002 determines that the input layer is the second layer, where the second layer is a non-transparent layer, and a dynamic range of the second layer is different from a target dynamic range; a weighting calculator 1005, configured to receive the first layer processed by the first calculation engine 1003, the second layer determined by the image type detector 1002, and an input layer determined by the image type detector 1002 as a third layer, and merge the first layer, the second layer, and the third layer into a first image, where the third layer has a target dynamic range; the second calculation engine 1006 is configured to receive the first image processed by the weighting calculator 1005 and the first target region determined by the image region calibrator 1004, and convert a dynamic range of a second target region in the first image into a target dynamic range to obtain a second image, where the second target region is a corresponding region of the first target region in the first image.
In a possible embodiment, the weighting calculator 1005 is specifically configured to: perform a weighted calculation on the pixel values of the first layer processed by the first calculation engine 1003, the pixel values of the second layer determined by the image type detector 1002, and the pixel values of the third layer determined by the image type detector 1002, according to the layer transparency of the first layer, the layer transparency of the second layer, and the layer transparency of the third layer, to obtain the first image.
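A minimal sketch of this weighted calculation follows. A normalized weighted sum is assumed here; the application does not spell out the exact blending formula, and the function name is illustrative.

```python
import numpy as np

def weighted_merge(first, second, third, a1, a2, a3):
    """Sketch of the weighting calculator 1005: combine the pixel values
    of the converted first layer, the second layer, and the third layer
    (each an H x W x 3 float array) according to their layer
    transparencies a1, a2, a3 (scalars in [0, 1])."""
    total = a1 + a2 + a3
    return (a1 * first + a2 * second + a3 * third) / total
```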
In the embodiment of the present application, each component of the mixed layer processing system is respectively configured to implement a function of each step of the corresponding mixed layer processing method, and since each step has been described in detail in the embodiment of the mixed layer processing method, details are not described here.
The hardware structure of the hybrid layer processing system provided in the embodiments of the present application may be implemented by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuits. Additionally, in some aspects, the functions described in the various illustrative logical blocks, modules, and steps herein may be provided within dedicated hardware and/or software modules configured for image processing, or incorporated into a combined image processor. The techniques may also be fully implemented in one or more circuits or logic elements.
In the embodiment of the present application, functional modules may be divided according to the mixed layer processing system and the mixed layer processing apparatus corresponding to the method examples, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
Fig. 11 shows another possible structural schematic diagram of the mixed layer processing apparatus according to the foregoing embodiment when functional modules are divided according to respective functions, where the apparatus may be an image processing device or a chip or a system on a chip in the image processing device, or may be a circuit, a module, or a unit in the image processing device for implementing the foregoing method embodiment. The device includes: an acquisition unit 1101, a conversion unit 1102 and a merging unit 1103. The obtaining unit 1101 is configured to support the apparatus to perform S201, S601, or S701 in the method embodiment; the conversion unit 1102 is configured to support the apparatus to perform S202, S204, S602, S702, S704, or the like of the method embodiment; the merging unit 1103 is configured to support the apparatus to perform S203, S603, or S703 of the method embodiment. Further, the apparatus further comprises: a display unit 1104; the display unit 1104 is used to support the apparatus to perform the steps of displaying the first image or displaying the image in the method embodiment, and the like.
The techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this application to emphasize functional aspects of means for performing the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in hardware units for image processing, in conjunction with suitable software and/or firmware, or provided by interoperating hardware units (including one or more processors as described above).
A hybrid layer processing apparatus in the embodiment of the present application is described above from the perspective of a modular functional entity, and a hybrid layer processing apparatus in the embodiment of the present application is described below from the perspective of hardware processing.
The embodiment of the present application further provides a mixed layer processing apparatus, and a structure of the mixed layer processing apparatus may be as shown in fig. 1. In an embodiment of the present application, the processor 102 is configured to process one or more steps of S201-S204, S601-S603, and S701-S704 in the above-described method embodiments, and/or other technical processes described herein.
In some possible embodiments, the memory 101 may be configured to store a mixture layer, a first layer, a second layer, a third layer, a first image and/or a second image, and the like. The display panel in the multimedia component 104 may be used to display a first image or a second image, etc.
In the embodiment of the present application, each component of the image processing apparatus is respectively configured to implement a function of each step of the corresponding mixed layer processing method, and since each step has been described in detail in the embodiment of the mixed layer processing method, no further description is given here.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Embodiments of the present application further provide a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed on a device (for example, the device may be a single chip, a computer, or a processor, and the like), the device is caused to perform one or more steps of S201 to S204, S601 to S603, and S701 to S704 in the foregoing method embodiments, and/or other technical processes described herein. The respective constituent modules of the image processing apparatus described above may be stored in the computer-readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
Based on such understanding, the embodiments of the present application further provide a computer program product containing instructions. The technical solution of the present application, or the part thereof that contributes to the prior art, may be embodied in whole or in part in the form of a software product stored in a storage medium, containing instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor therein to perform all or part of the steps of the methods described in the embodiments of the present application.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (25)

  1. A mixed layer processing method is characterized by comprising the following steps:
    acquiring a first layer, a second layer and a third layer, wherein the first layer and the second layer have a first dynamic range, the third layer has a second dynamic range, the first layer is a transparent layer, and the second layer is a non-transparent layer;
    converting the dynamic range of the first image layer into the second dynamic range, and determining a first target area in the second image layer;
    combining the second image layer, the third image layer and the converted first image layer to obtain a first image;
    and converting the dynamic range of a second target area in the first image into the second dynamic range to obtain a second image, wherein the second target area is a corresponding area of the first target area in the first image.
  2. The method of claim 1, wherein the first dynamic range is a standard dynamic range SDR, the second dynamic range is a high dynamic range HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR.
  3. The method according to claim 1 or 2, wherein before the obtaining the first layer, the second layer, and the third layer, further comprising:
    determining that the first image layer has the first dynamic range according to the dynamic range identifier of the first image layer;
    determining that the second image layer has the first dynamic range according to the dynamic range identifier of the second image layer;
    and determining that the third layer has the second dynamic range according to the dynamic range identifier of the third layer.
  4. The method according to any one of claims 1 to 3, wherein before the obtaining the first layer, the second layer, and the third layer, further comprising:
    determining the first layer to be the transparent layer according to the layer transparency of the first layer;
    and determining the second layer to be the non-transparent layer according to the layer transparency of the second layer.
  5. The method according to any of claims 1 to 4, wherein the determining the first target area in the second layer comprises:
    the first target region is determined by region of interest ROI identification.
  6. The method according to claim 4 or 5, wherein said merging the second layer, the third layer, and the converted first layer to obtain a first image comprises:
    and according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, performing weighted calculation on the converted pixel value of the first layer, the converted pixel value of the second layer and the converted pixel value of the third layer to obtain the first image.
  7. The method according to claim 6, wherein pixel values of areas in the second layer outside the first target area are 0.
  8. The method according to any of claims 1 to 7, wherein the converting the dynamic range of the first layer into the second dynamic range comprises:
    when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first image layer into HDR through inverse tone mapping;
    when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first image layer into SDR through tone mapping.
  9. The method according to any one of claims 1 to 8, wherein converting the dynamic range of the second target region in the first image into the second dynamic range to obtain a second image comprises:
    when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area into HDR through inverse tone mapping to obtain a second image;
    and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area into SDR through tone mapping to obtain a second image.
  10. The method of any one of claims 1 to 9, further comprising:
    and displaying the second image.
  11. A hybrid layer processing system, comprising:
    the image reading interface is used for acquiring a plurality of input image layers to be processed;
    the image type detector is used for receiving the input image layer in the image reading interface, and determining a dynamic range and transparent image layer attributes of the input image layer according to a dynamic range identifier and/or image layer transparency of the input image layer, wherein the transparent image layer attributes comprise a transparent image layer and a non-transparent image layer;
    a first calculation engine, configured to receive the first layer and convert a dynamic range of the first layer into a target dynamic range when the image type detector determines that the input layer is a first layer, where the first layer is the transparent layer and the dynamic range of the first layer is different from the target dynamic range;
    an image area calibrator, configured to receive the second layer and determine a first target area of the second layer when the image type detector determines that the input layer is a second layer, where the second layer is the non-transparent layer and a dynamic range of the second layer is different from the target dynamic range;
    a weighting calculator, configured to receive a first layer processed by the first calculation engine, the second layer determined by the image type detector, and an input layer determined by the image type detector as a third layer, and merge the first layer, the second layer, and the third layer into a first image, where the third layer has the target dynamic range;
    and the second calculation engine is configured to receive the first image processed by the weighting calculator and the first target region determined by the image region calibrator, and convert a dynamic range of a second target region in the first image into the target dynamic range to obtain a second image, where the second target region is a corresponding region of the first target region in the first image.
  12. The system of claim 11, wherein the weight calculator is specifically configured to:
    and according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, performing weighted calculation on the pixel value of the first layer processed by the first calculation engine, the pixel value of the second layer determined by the image type detector and the pixel value of the third layer determined by the image type detector to obtain the first image.
  13. A hybrid layer processing apparatus, the apparatus comprising:
    the image processing device comprises an obtaining unit, a processing unit and a processing unit, wherein the obtaining unit is used for obtaining a first image layer, a second image layer and a third image layer, the first image layer and the second image layer have a first dynamic range, the third image layer has a second dynamic range, the first image layer is a transparent image layer, and the second image layer is a non-transparent image layer;
    a conversion unit, configured to convert the dynamic range of the first layer into the second dynamic range;
    a determining unit, configured to determine a first target area in the second layer;
    a merging unit, configured to merge the second layer, the third layer, and the converted first layer to obtain a first image;
    the conversion unit is further configured to convert a dynamic range of a second target area in the first image into the second dynamic range to obtain a second image, where the second target area is a corresponding area of the first target area in the first image.
  14. The apparatus of claim 13, wherein the first dynamic range is a standard dynamic range SDR and the second dynamic range is a high dynamic range HDR; alternatively, the first dynamic range is HDR and the second dynamic range is SDR.
  15. The apparatus according to claim 13 or 14, wherein the determining unit is further configured to:
    determining that the first image layer has the first dynamic range according to the dynamic range identifier of the first image layer;
    determining that the second image layer has the first dynamic range according to the dynamic range identifier of the second image layer;
    and determining that the third layer has the second dynamic range according to the dynamic range identifier of the third layer.
  16. The apparatus according to any one of claims 13 to 15, wherein the determining unit is further configured to:
    determining the first layer to be the transparent layer according to the layer transparency of the first layer;
    and determining the second layer to be the non-transparent layer according to the layer transparency of the second layer.
  17. The apparatus according to any one of claims 13 to 16, wherein the determining unit is specifically configured to:
    the first target region is determined by region of interest ROI identification.
  18. The apparatus according to claim 16 or 17, wherein the merging unit is specifically configured to:
    and according to the layer transparency of the first layer, the layer transparency of the second layer and the layer transparency of the third layer, performing weighted calculation on the converted pixel value of the first layer, the converted pixel value of the second layer and the converted pixel value of the third layer to obtain the first image.
  19. The apparatus according to claim 18, wherein pixel values of areas in the second layer outside the first target area are 0.
  20. The apparatus according to any one of claims 13 to 19, wherein the conversion unit is specifically configured to:
    when the first dynamic range is SDR and the second dynamic range is HDR, converting the dynamic range of the first image layer into HDR through inverse tone mapping;
    when the first dynamic range is HDR and the second dynamic range is SDR, converting the dynamic range of the first image layer into SDR through tone mapping.
  21. The apparatus according to any one of claims 13 to 20, wherein the conversion unit is specifically configured to:
    when the dynamic range of the second target area is SDR and the second dynamic range is HDR, converting the dynamic range of the second target area into HDR through inverse tone mapping to obtain a second image;
    and when the dynamic range of the second target area is HDR and the second dynamic range is SDR, converting the dynamic range of the second target area into SDR through tone mapping to obtain a second image.
  22. The apparatus of any one of claims 13 to 21, further comprising:
    and the display unit is used for displaying the second image.
  23. A hybrid layer processing apparatus, the apparatus comprising: a memory, and a processor coupled to the memory, the memory storing instructions and data, the processor executing the instructions in the memory to cause the processor to perform the hybrid layer processing method according to any one of claims 1 to 10.
  24. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the mixed layer processing method according to any one of claims 1 to 10.
  25. A computer program product, which, when run on a computer, causes the computer to perform the hybrid layer processing method according to any one of claims 1 to 10.
CN201980065334.8A 2019-01-31 2019-01-31 Mixed layer processing method and device Pending CN112805745A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/074307 WO2020155072A1 (en) 2019-01-31 2019-01-31 Mixed layer processing method and apparatus

Publications (1)

Publication Number Publication Date
CN112805745A 2021-05-14

Family

ID=71841496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980065334.8A Pending CN112805745A (en) 2019-01-31 2019-01-31 Mixed layer processing method and device

Country Status (2)

Country Link
CN (1) CN112805745A (en)
WO (1) WO2020155072A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130511A (en) * 2023-02-24 2023-11-28 荣耀终端有限公司 Brightness control method and related equipment
WO2024037251A1 (en) * 2022-08-15 2024-02-22 万有引力(宁波)电子科技有限公司 Data transmission method, apparatus and system, device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014130213A1 (en) * 2013-02-21 2014-08-28 Dolby Laboratories Licensing Corporation Systems and methods for appearance mapping for compositing overlay graphics
US9900527B2 (en) * 2016-03-08 2018-02-20 Omnivision Technologies, Inc. Enhanced high dynamic range
US10638105B2 (en) * 2016-06-27 2020-04-28 Intel Corporation Method and system of multi-dynamic range multi-layer video blending with alpha channel sideband for video playback
JP6855205B2 (en) * 2016-10-06 2021-04-07 株式会社ソニー・インタラクティブエンタテインメント Information processing equipment and image processing method


Also Published As

Publication number Publication date
WO2020155072A1 (en) 2020-08-06

Similar Documents

Publication Publication Date Title
CN111476309B (en) Image processing method, model training method, device, equipment and readable medium
CN106211804B (en) Automatic white balance is carried out using to the colour measurement of raw image data
CN107293265B (en) Display screen picture adjusting method, display terminal and readable storage medium
US10978027B2 (en) Electronic display partial image frame update systems and methods
CN113763856B (en) Method and device for determining ambient illumination intensity and storage medium
CN110070551B (en) Video image rendering method and device and electronic equipment
CN114640783B (en) Photographing method and related equipment
CN109740519B (en) Control method and electronic device
CN110084204B (en) Image processing method and device based on target object posture and electronic equipment
CN111724316B (en) Method and apparatus for processing high dynamic range image
CN112017577B (en) Screen display calibration method and device
CN112071267B (en) Brightness adjusting method, brightness adjusting device, terminal equipment and storage medium
CN112840636A (en) Image processing method and device
CN111551348B (en) Gamma debugging method and device
CN113596428B (en) Method and device for acquiring mapping curve parameters
CN112805745A (en) Mixed layer processing method and device
US11721003B1 (en) Digital image dynamic range processing apparatus and method
CN112950525A (en) Image detection method and device and electronic equipment
CN117274109A (en) Image processing method, noise reduction model training method and electronic equipment
CN111369431A (en) Image processing method and device, readable medium and electronic equipment
WO2022083081A1 (en) Image rendering method and apparatus, and device and storage medium
CN114038370A (en) Display parameter adjusting method and device, storage medium and display equipment
CN111970451B (en) Image processing method, image processing device and terminal equipment
CN112232125A (en) Key point detection method and key point detection model training method
CN111800626B (en) Photographing consistency evaluation method and device, mobile terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination