CN114463191B - Image processing method and electronic equipment

Image processing method and electronic equipment

Info

Publication number
CN114463191B
CN114463191B
Authority
CN
China
Prior art keywords
image
brightness
tone mapping
face region
face
Prior art date
Legal status
Active
Application number
CN202110991378.2A
Other languages
Chinese (zh)
Other versions
CN114463191A (en)
Inventor
王宁
王宇
朱聪超
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202110991378.2A
Publication of CN114463191A
Application granted
Publication of CN114463191B


Classifications

    • G06T5/92
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30201 Face

Abstract

The embodiment of the application discloses an image processing method and an electronic device. The method includes: acquiring an image to be processed; identifying a face region in the image to be processed and determining the brightness range of the face region; adjusting an initial tone mapping curve according to the brightness range of the face region to obtain a tone mapping curve of the face region; and performing tone mapping on the non-face region based on the initial tone mapping curve and on the face region based on the tone mapping curve of the face region to obtain a processed image. In this way, tone mapping of the other regions of the whole image is unaffected, the contrast of the face region is ensured, and distortion of the non-face region is avoided.

Description

Image processing method and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and an electronic device.
Background
Dynamic Range (DR) is used in many fields to represent the ratio of the maximum value to the minimum value of a variable. In a digital image, the dynamic range characterizes the ratio between the maximum luminance and the minimum luminance within the displayable range of the image, i.e., the number of gradations the image can represent between its "brightest" and "darkest" pixels. The larger the dynamic range of an image, the richer the brightness gradations the image can represent, and the more vivid its visual effect. A High Dynamic Range (HDR) image (hereinafter referred to as an HDR image) can provide a larger dynamic range and more image detail, and can therefore better reflect the visual effect of a real environment.
The contrast ratio is a ratio of maximum brightness to minimum brightness in a picture, and can be divided into a global contrast ratio and a local contrast ratio.
In a backlit portrait scene, tone mapping during HDR imaging cannot simultaneously preserve face contrast and avoid distortion of the global image.
Disclosure of Invention
The embodiment of the application provides an image processing method and an electronic device, which can solve the problem that, in a backlit portrait scene, tone mapping during HDR imaging cannot simultaneously guarantee face contrast and avoid global image distortion.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring an image to be processed; identifying a face region in the image to be processed, and determining the brightness range of the face region; adjusting an initial tone mapping curve according to the brightness range of the face region to obtain a tone mapping curve of the face region; and performing tone mapping on the non-face region based on the initial tone mapping curve and on the face region based on the tone mapping curve of the face region to obtain a processed image.
Illustratively, the image to be processed is an HDR high bit width image. The HDR high bit width image can be obtained directly from the sensor, or can be obtained by synthesizing a long exposure frame and a short exposure frame.
Illustratively, the processed image may be an HDR low bit width image.
In the embodiment of the application, face recognition is performed on the image to be processed to find the corresponding face region. When global tone mapping is performed, a separate tone mapping curve adjustment is applied to the face region, while the other regions still use the global tone mapping curve (i.e., the initial tone mapping curve). In this way, tone mapping of the other regions of the whole image is unaffected, the contrast of the face region is ensured, and distortion of the non-face region is avoided.
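Illustratively, the following Python/NumPy sketch shows one way the per-pixel dispatch between the two curves could look. It is only an illustration: the function name, the representation of the curves as sampled look-up tables (lut_in together with initial_lut and face_lut), and the use of np.interp are assumptions, not the implementation of the application. The face_mask would come from the face recognition step described below, and the two curves from the curve construction and adjustment steps.

```python
import numpy as np

def tone_map_with_face_curve(hdr_luma, face_mask, lut_in, initial_lut, face_lut):
    """Minimal sketch of the first aspect: non-face pixels are mapped with the
    initial (global) tone mapping curve, face pixels with the face-region curve.
    All inputs are NumPy arrays; the two curves are given as sampled LUTs."""
    mapped_global = np.interp(hdr_luma, lut_in, initial_lut)  # non-face mapping
    mapped_face = np.interp(hdr_luma, lut_in, face_lut)       # face-region mapping
    # Per-pixel dispatch according to the detected face region.
    return np.where(face_mask, mapped_face, mapped_global)
```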
In a possible implementation manner of the first aspect, before adjusting an initial tone mapping curve according to a brightness range of the face region to obtain a tone mapping curve of the face region, the method further includes: and determining an initial tone mapping curve according to the exposure coefficient of the image to be processed.
Here, the exposure coefficient of the image to be processed may be an exposure ratio of a long exposure frame and a short exposure frame in the image to be processed.
For example, determining the initial tone mapping curve based on the image to be processed may include determining a calibration curve from the exposure ratio of the image to be processed, and then obtaining the initial tone mapping curve corresponding to the image to be processed by a histogram equalization method.
In a possible implementation manner of the first aspect, the recognizing a face region in the image to be processed and determining a brightness range of the face region includes: carrying out face recognition through a neural network model, and determining a face area in the image to be processed; acquiring the brightness of each pixel point in the face area; and determining the brightness range of the face region based on the brightness of each pixel point in the face region.
Here, the face region may be recognized using an NPU. That is, a neural network model for detecting a face region is configured in the NPU, and after an image to be processed is acquired, the image to be processed is input into the neural network model in the NPU for processing, so that a face region (region of interest (ROI)) in the image to be processed can be acquired. The neural network model may be a CNN model, such as a fast R-CNN model, a RetinaFace model, and other common neural network models, which are not described herein again.
In a possible implementation manner of the first aspect, the determining a brightness range of the face region based on brightness of each pixel point in the face region includes:
determining an average brightness according to the brightness of each pixel point in the face region; increasing the average brightness by a preset amount to obtain a first brightness, and decreasing the average brightness by the preset amount to obtain a second brightness, where the second brightness is the lower limit of the brightness range and the first brightness is the upper limit of the brightness range.
In a possible implementation manner of the first aspect, the determining the brightness range of the face region based on the brightness of each pixel point in the face region includes: determining the highest brightness of the face region and the lowest brightness of the face region according to the brightness of each pixel point in the face region, and determining the brightness range based on the highest brightness and the lowest brightness.
Here, after the face region is determined, the brightness range of the face region may be output directly by an Auto Exposure (AE) algorithm in the ISP. The brightness range may run from the lowest brightness of the face region to the highest brightness of the face region. The brightness range may also be determined based on the average brightness of the face region: after the average brightness of the face region is determined, it is increased by a preset amount to obtain a first brightness and decreased by the preset amount to obtain a second brightness, for example increased and decreased by 10 to 20, so that the brightness range of the face region runs from the second brightness to the first brightness. It should be noted here that the maximum brightness, the minimum brightness, and the average brightness of the face region can be determined directly by AE. The maximum brightness of the face region can be obtained by accessing AE in code and calling the max(Face_area) function, and the minimum brightness by calling the min(Face_area) function.
In a possible implementation manner of the first aspect, after the adjusting an initial tone mapping curve according to the brightness range of the face region to obtain a tone mapping curve of the face region, the image processing method further includes: and carrying out fusion processing on the edge area of the human face based on the tone mapping curve of the human face area and the initial tone mapping curve.
And obtaining the tone mapping value of the face edge area after the fusion processing is carried out on the face edge area.
The face edge region refers to the region where the face and the background meet. In order to prevent an excessively abrupt change in the contrast of the whole image after tone mapping, the junction of the two regions that are tone mapped with different curves (i.e., the face edge region) needs to be fused so that the two regions transition smoothly.
Illustratively, the fusion process includes average fusion and nonlinear fusion.
Correspondingly, performing tone mapping on the non-face region based on the initial tone mapping curve and on the face region based on the tone mapping curve of the face region to obtain the processed image includes: performing tone mapping on the non-face region based on the initial tone mapping curve, performing tone mapping on the face region based on the tone mapping curve of the face region, and performing tone mapping on the face edge region based on the fused tone mapping value of the face edge region, to obtain the processed image.
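Illustratively, a sketch of one possible fusion, consistent with the average fusion mentioned above, is given below. The linear ramp across a fixed-width band around the face boundary, the edge_width parameter, and the use of SciPy's distance transform are assumptions for illustration, not the embodiment's actual fusion scheme.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse_edge_region(mapped_global, mapped_face, face_mask, edge_width=8):
    """Blend the two tone-mapped results in a band around the face boundary so
    that the transition between the two curves is smooth (illustrative sketch)."""
    # Signed distance to the face boundary: positive inside the face, negative outside.
    dist_in = distance_transform_edt(face_mask)
    dist_out = distance_transform_edt(~face_mask)
    signed = dist_in - dist_out

    # Weight of the face-curve result: 1 well inside the face, 0 well outside,
    # and a linear ramp (plain averaging right at the boundary) across the band.
    w = np.clip(0.5 + signed / (2.0 * edge_width), 0.0, 1.0)
    return w * mapped_face + (1.0 - w) * mapped_global
```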
The image processing method provided by the embodiment of the application not only ensures the contrast of the face region while avoiding distortion of the non-face region, but also allows the face region and the non-face region to transition smoothly, avoiding a large abrupt change in the overall contrast of the image.
In a possible implementation manner of the first aspect, after performing tone mapping on a non-face region based on an initial tone mapping curve, performing tone mapping on a face region based on a tone mapping curve of the face region, and obtaining a processed image, the method further includes: and carrying out local tone mapping adjustment on the processed image to obtain a target image after the local tone mapping adjustment. After global tone mapping is carried out, local tone mapping adjustment can be carried out on the image, and a target image with a better display effect is obtained.
Local tone mapping adjustment is a tone mapping method that adjusts each pixel according to its surrounding pixels: pixels at different positions may be mapped to different brightness values, because the mapping result of a pixel is influenced by its neighboring pixels. This better preserves local contrast and image details in highlights and shadows, improving the tone mapping effect.
The local tone mapping adjustment may adopt an existing local tone mapping method, such as a luminance and reflection separation method or bilateral filtering, which are not described herein again.
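Illustratively, as a generic sketch of the bilateral-filtering idea mentioned above (and not the specific local adjustment used by the embodiment), a base/detail decomposition in the log-luminance domain could look as follows; the filter parameters, the gain values, and the use of OpenCV's bilateralFilter are assumptions.

```python
import cv2
import numpy as np

def local_tone_adjust(luma, detail_gain=1.2, base_compress=0.8):
    """Bilateral-filter local tone mapping sketch: compress the smooth base layer,
    keep (or slightly boost) the detail layer, then recombine."""
    log_l = np.log1p(luma.astype(np.float32))

    # Base layer: edge-preserving smoothing of the log-luminance.
    base = cv2.bilateralFilter(log_l, 9, 0.3, 9)
    detail = log_l - base

    # Compress the base (overall brightness structure) and keep local detail,
    # which protects local contrast in highlights and shadows.
    out_log = base_compress * base + detail_gain * detail
    return np.expm1(out_log)
```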
In a second aspect, an embodiment of the present application provides an electronic device, including:
the acquisition module is used for acquiring an image to be processed;
the recognition module is used for recognizing a face region in the image to be processed and determining the brightness range of the face region;
the adjusting module is used for adjusting an initial tone mapping curve according to the brightness range of the face region to obtain a tone mapping curve of the face region;
and the tone mapping module is used for carrying out tone mapping on the non-face area based on the initial tone mapping curve and carrying out tone mapping on the face area based on the tone mapping curve of the face area to obtain a processed image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method according to any one of the first aspect is implemented.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method according to any one of the above first aspects.
In a fifth aspect, embodiments of the present application provide a chip system, where the chip system includes a processor, and the processor is coupled with a memory, and executes a computer program stored in the memory to implement the method according to any one of the above first aspects. The chip system can be a single chip or a chip module consisting of a plurality of chips.
In a sixth aspect, embodiments of the present application provide a computer program product, which, when run on an electronic device, causes the electronic device to perform the method of any one of the above first aspects.
It is understood that the beneficial effects of the second to sixth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
FIG. 1 is a schematic diagram of a high dynamic range image and a low dynamic range image provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a high contrast image and a low contrast image provided by an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of the mobile phone 100 according to an embodiment of the present disclosure;
fig. 4 is a schematic software architecture diagram of the mobile phone 100 according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a graphical user interface under some application scenarios provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a graphical user interface in some second application scenarios provided in an embodiment of the present application;
fig. 7 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an initial tone mapping curve and a tone mapping region of a face region according to an embodiment of the present application;
fig. 9 is a schematic flowchart of an image processing method according to another embodiment of the present application;
fig. 10 is a schematic flowchart of an image processing method according to another embodiment of the present application.
Detailed Description
The following is an exemplary description of relevant matters that may be involved in embodiments of the present application.
In the description of this application, "/" denotes "or" unless otherwise indicated; for example, A/B may denote A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. Further, "at least one" means one or more, and "a plurality" means two or more. The terms "first", "second", and the like are used only to distinguish objects; they do not limit the number or execution order, nor do they necessarily indicate that the objects are different.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to mean exemplary, illustrative, or descriptive. Any embodiment or design described herein as "exemplary" or "such as" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
It should be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad application.
In the following, some terms in the embodiments of the present application are explained:
the exposure coefficient according to the embodiment of the present application is a parameter set by an electronic device when an image is captured. The exposure parameter may be used to indicate the amount of light that the device receives from the scene when it is capturing the scene. The exposure parameters may include exposure duration and/or exposure intensity, etc.
In general, the values of the exposure parameters may determine the brightness of the finally captured image. For example, if the exposure duration is long or the exposure intensity is high, more light enters the device when the image is captured, so the brightness of the captured image is high. If the exposure duration is short or the exposure intensity is low, less light enters the device, so the brightness of the captured image is low.
The pixel related to the embodiment of the present application is a minimum imaging unit on one frame image. One pixel may correspond to one coordinate point on the image. A pixel may correspond to a single parameter (e.g., gray scale) or a collection of parameters (e.g., gray scale, brightness, color, etc.).
The dynamic range referred to in the embodiments of the present application is a luminance range of pixels in an image.
The dynamic range can be expressed as: dynamic range = 20 log10(bright/dark), where bright refers to the luminance of the "brightest" pixel and dark refers to the luminance of the "darkest" pixel.
The brightness of the "brightest" pixel and the brightness of the "darkest" pixel may be represented by luminance values. The dynamic range under natural scene can reach 10 -5 To 10 8 The dynamic range visible to human eyes can reach 1 to 10 4 And the dynamic range that the display can display is only around 1 to 300.
Exemplarily, referring to fig. 1, (a) in fig. 1 is an exemplary diagram of a Low Dynamic Range (LDR) image, and (b) in fig. 1 is an exemplary diagram of a High Dynamic Range (HDR) image. As can be seen in fig. 1, a high dynamic range image (i.e., an HDR image) may provide more dynamic range and image detail.
The illuminance value is the luminous flux of visible light received by a pixel, abbreviated as illuminance, and its unit is lux.
The contrast referred to in the embodiments of the present application is the ratio of the maximum brightness to the minimum brightness in a picture, i.e., the magnitude of the gray-level contrast of an image, which represents the difference between its light and dark areas.
For example, referring to fig. 2, (a) in fig. 2 is a low-contrast image and (b) in fig. 2 is a high-contrast image. As can be seen from fig. 2, the light/dark difference in the high-contrast image is more obvious than in the low-contrast image, and more image details are visible.
Tone mapping (tone mapping) according to the embodiments of the present application refers to a process of mapping and changing brightness of an image, and the tone mapping may enable a processed image to better express information and features in the image.
Tone mapping is a way to compress the dynamic range of a high dynamic range image, and is usually implemented with a tone mapping curve: the luminance of a pixel in the HDR high-bit-width composite image to be tone mapped corresponds to a value on the abscissa of the curve, and the curve determines the luminance (the corresponding ordinate value) of that pixel in the low-bit-width image after mapping.
It should be noted that an HDR high-bit-width composite image generally refers to a digital image with a bit width higher than 8 bits, such as a 10-bit, 12-bit, or 16-bit image, and a low-bit-width image generally refers to a digital image with a bit width lower than 8 bits, such as a 6-bit or 4-bit image.
Illustratively, for the tone mapping curve, at each exposure coefficient there are multiple calibration curves drawn according to empirical values (these calibration curves may correspond to different scene categories, such as a face scene and a non-face scene). Tone mapping is performed with each calibration curve, the image quality of the resulting images is compared, and the calibration curve corresponding to the image with the best quality is selected as the tone mapping curve for that exposure coefficient.
It should be noted that comparing the image quality of the image obtained after tone mapping can be achieved through visual judgment, and can also be performed through machine identification, which is not described herein again.
In a backlit portrait scene, because the high dynamic range is compressed to the low dynamic range during tone mapping, and the brightness of a backlit face area is usually only in a small brightness interval, the brightness difference obtained after tone mapping becomes very small, which results in loss of contrast of the face area.
In order to avoid the loss of the contrast of the face region, currently, a tone mapping curve corresponding to a face scene is used to perform tone mapping on a global image during tone mapping, but although the contrast of the face region is ensured, distortion of a non-face region is caused.
In order to solve the above problem, an embodiment of the present application provides an image processing method and an electronic device: face recognition is performed on the image to be processed to find the corresponding face region; during global tone mapping, a separate tone mapping curve adjustment is applied to the face region while the other regions still use the global tone mapping curve. In this way, tone mapping of the other regions of the whole image is unaffected, and distortion of the non-face region is avoided while the contrast of the face region is ensured.
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system architectures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application.
In some embodiments of the present application, the execution subject of the image processing method may be an electronic device. The electronic device may be a portable terminal with an image processing function, such as a mobile phone, a tablet computer, and the like. The portable electronic device may also be other portable electronic devices, such as a digital camera, a wearable device. It should also be understood that in other embodiments of the present application, the electronic device may not be a portable electronic device, but may be a desktop computer with an image processing function.
Typically, electronic devices support a variety of applications. Such as one or more of the following applications: a camera application, an instant messaging application, a gallery, and the like. The user can take images or record videos through the camera application, call the camera to take images or videos through the instant messaging application, and open images stored in the electronic device through the gallery.
Taking the above-mentioned electronic device as a mobile phone as an example, referring to fig. 3, fig. 3 shows a schematic diagram of a hardware structure of the mobile phone 100.
The handset 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 11, an antenna 12, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a microphone 170C, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display 194, and a SIM card interface 195. Wherein the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, an acceleration sensor 180E, a distance sensor 180F, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the cell phone 100, among others. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose-input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, a camera 193, etc. through different I2C bus interfaces, respectively.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the camera function of the handset 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the mobile phone 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, and the like.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The mobile phone 100 implements display functions through the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The mobile phone 100 may implement a shooting/recording function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP may also perform algorithm optimization on noise, brightness, skin color, sharpness, and color of the image, for example, perform optimization such as denoising, demosaicing, color adjustment, brightness adjustment, and the like. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the handset 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process digital image signals as well as other digital signals. For example, when the mobile phone 100 performs frequency bin selection, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. Thus, the handset 100 can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the mobile phone 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
In some embodiments, the NPU or other processor may be configured to perform operations such as face detection and face region brightness tracking on an image containing a face in a video/image captured or stored by the mobile phone 100, and output parameters such as an average brightness, a maximum brightness, and a minimum brightness of a face region.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The cellular phone 100 determines the intensity of the pressure according to the change of the capacitance. When a touch operation is applied to the display screen 194, the mobile phone 100 detects the intensity of the touch operation according to the pressure sensor 180A. The cellular phone 100 can also calculate the touched position based on the detection signal of the pressure sensor 180A.
In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the camera application icon, executing an instruction for turning on the camera. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the camera application icon, executing an instruction for opening the gallery.
The fingerprint sensor 180H is used to collect a fingerprint. The mobile phone 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, take a photograph of the fingerprint, answer an incoming call with the fingerprint, and the like.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the mobile phone 100, different from the position of the display 194.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The cellular phone 100 may receive a key input, and generate a key signal input related to user setting and function control of the cellular phone 100.
After the hardware architecture of the electronic device is described, the software system architecture of the electronic device will be described below.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the application takes a layered architecture as an example, and exemplifies a software structure of an electronic device. Fig. 4 is a block diagram of a software configuration of the mobile phone 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include application packages, for example, the application packages may include voice assistant, camera, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. Illustratively, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc. The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures. The telephone manager is used for providing a communication function of the electronic equipment. Such as management of call status (including on, off, etc.). The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that need to be called by the Java language, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
Some possible application scenarios are specifically described below with reference to the accompanying drawings:
the application scene one: the user selects a scene for processing the image through the gallery.
For example, please refer to fig. 5, which is a schematic diagram of some Graphical User Interfaces (GUIs) that may be involved in the first application scenario.
As shown in fig. 5, the main display 51 of the mobile phone 100 includes applications such as an application mall, a camera, a memo, a gallery, and music.
When the user clicks the icon 52 of the gallery application, the mobile phone 100 can detect this click operation based on the touch sensor 180K and/or the pressure sensor 180A. After detecting the click operation, the mobile phone 100 responds to it, opens the gallery application, and displays the gallery application interface 53, where the images stored in the mobile phone 100 are displayed (the user may browse the stored images with a sliding operation).
The user can click to select the image 54 to be processed; the mobile phone 100 detects this click operation based on the touch sensor 180K and/or the pressure sensor 180A, and after detecting it, responds by acquiring the image as the image to be processed and transmitting it to the image processor for processing (at this time, the processed picture can be displayed on the display interface).
Or, after the user clicks and selects the image 54 which is desired to be processed, the pop-up prompt box 55 inquires whether the user needs to process the image 54 selected by the user, and if the user selects the control 56 (yes), the image is acquired as the image to be processed, and the image is transmitted to the image processor to be processed.
It should be noted that the image processing performed by the image processor is triggered after the user selects the image to be processed or selects "yes" in the prompt box. The image processing may include performing face detection and face-region brightness tracking through the NPU or another processor to determine the brightness range of the face region, adjusting the initial tone mapping curve based on that brightness range through the digital signal processor to obtain the tone mapping curve of the face region, and performing tone mapping on the non-face region based on the initial tone mapping curve and on the face region based on the tone mapping curve of the face region to obtain the processed image.
Application scenario two: and shooting a scene.
For example, the user takes a self-timer using the cellular phone 100.
For example, please refer to fig. 6, which is a schematic diagram of some Graphical User Interfaces (GUIs) that may be involved in the application scenario two. As shown in fig. 6, the main interface 61 of the mobile phone 100 includes applications such as an application mall, a camera, a memo, a gallery, and music.
After the user clicks the icon 62 of the camera application, the mobile phone 100 can detect the click operation based on the touch sensor 180K and/or the pressure sensor 180A. After detecting the click operation, the mobile phone 100 will respond to the click operation, open the camera application, start the camera 193, and display the viewing interface 63 at the same time. At this time, the user can select a favorite person, object, scenery, etc. to take a picture or record a video (here, self-shooting is taken as an example), when the user presses the shooting button 64, the mobile phone 100 will respond to the operation, control the camera 193 to shoot/record the image information in the viewfinder interface 63, and acquire the image as the image to be processed, and perform image processing through the image processor.
Or, after the user clicks the shooting button 64, the shot image 65 is displayed on the display interface, the prompt box 66 pops up to inquire whether the user needs to process the shot image 65, and if the user selects the control 67 (yes), the image is acquired as the image to be processed and then transmitted to the image processor for processing.
Here, the processed image may be stored in a gallery.
The image processing is triggered after an image is shot by clicking the shooting button, or after "yes" is selected in the prompt box. The image processing may include performing face detection and face-region brightness tracking through the NPU or another processor to determine the brightness range of the face region, adjusting the initial tone mapping curve based on that brightness range through the digital signal processor to obtain the tone mapping curve of the face region, and performing tone mapping on the non-face region based on the initial tone mapping curve and on the face region based on the tone mapping curve of the face region to obtain the processed image.
Of course, the image processing method provided in the embodiment of the present application is also applicable to video scenes, for example, selecting a stored video file from a gallery, and then processing each frame of image in the video file. For example, when a video is captured by a camera, each frame of image in the captured video can be processed.
Specifically, processing each frame of image in the video file may include: performing face detection on each frame; if a face region exists, adjusting the initial tone mapping curve based on the brightness range of the face region to obtain the tone mapping curve of the face region, then performing tone mapping on the non-face region based on the initial tone mapping curve and on the face region based on the tone mapping curve of the face region to obtain the processed frame; and for a frame without a face region, performing tone mapping on the whole frame using the initial tone mapping curve to obtain the processed frame.
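Illustratively, the frame-by-frame handling described above can be sketched as follows; the helper callables (detect_face and the two tone mapping functions) are hypothetical stand-ins for the steps already described, and returning None when no face is found is an assumption of this sketch.

```python
def process_video_frames(frames, detect_face, tone_map_with_face_curve, tone_map_global):
    """Apply the method frame by frame: frames containing a face region use the
    adjusted face curve, frames without a face use only the initial curve."""
    processed = []
    for frame in frames:
        face_mask = detect_face(frame)  # assumed to return None if no face is found
        if face_mask is not None:
            processed.append(tone_map_with_face_curve(frame, face_mask))
        else:
            processed.append(tone_map_global(frame))
    return processed
```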
In the above, with reference to the accompanying drawings, an application scenario to which the image processing method provided in the embodiment of the present application is applied is described. In order to better understand the image processing method provided by the embodiments of the present application, specific implementation procedures thereof are exemplarily described below with reference to the accompanying drawings.
The following embodiments may be implemented in an electronic device (the cell phone 100) having the above-described hardware structure and software architecture.
Next, a process of implementing the image processing method provided in the embodiment of the present application is described, please refer to fig. 7, and fig. 7 is a flowchart illustrating the image processing method provided in the embodiment of the present application. As shown in fig. 7, the method includes:
s101: the mobile phone 100 acquires an image to be processed in response to a user operation.
In the embodiment of the present application, the user operation may be different operations for different application scenarios. For example, in the first application scenario, the user operation may be a click operation in which the user clicks to select an image that is desired to be processed, or a click operation in which the user clicks "yes" in the prompt box. For another example, in the second application scenario, the user operation may be a click operation of a user motor shooting button, or a click operation of a user clicking "yes" in a prompt box.
After the mobile phone 100 detects the user operation, the mobile phone 100 is triggered to acquire the image to be processed, which may specifically refer to the above related contents, and details are not described here again to avoid repetition.
In this embodiment, the image to be processed is an image including a backlight face region.
In the embodiment of the present application, the format of the to-be-processed image may be an image in a RAW format, and certainly, the to-be-processed image may also be an image in an RGB format, which is not limited herein.
In an embodiment of the present application, the image to be processed is an HDR high bit width image.
The HDR high bit width composite image can be obtained directly from the sensor, or can be obtained by compositing the long exposure frame and the short exposure frame. Here, the long exposure frame and the short exposure frame may be output by a sensor, and then the long exposure frame and the short exposure frame are fused, so that the HDR high-bit-width image can be obtained.
It should be noted that the exposure time of the long exposure frame is longer than that of the short exposure frame; specifically, the exposure ratio of the long exposure frame to the short exposure frame may be, for example, 64:1.
The exposure ratio is the ratio of the product of the exposure duration of the long exposure frame and its sensitivity (ISO) to the product of the exposure duration of the short exposure frame and its sensitivity.
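Illustratively, this definition reduces to a simple computation over the exposure metadata of the two frames; the concrete exposure times and ISO values below are only an example consistent with the 64:1 ratio mentioned above.

```python
def exposure_ratio(long_time_s, long_iso, short_time_s, short_iso):
    """Exposure ratio = (long exposure duration x ISO) / (short exposure duration x ISO)."""
    return (long_time_s * long_iso) / (short_time_s * short_iso)

# Example values (assumed): 1/30 s at ISO 800 versus 1/1920 s at ISO 800 -> 64.0
ratio = exposure_ratio(1 / 30, 800, 1 / 1920, 800)
```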
It should be noted that the HDR high bit width image generally refers to a digital image with a dynamic range higher than 8bit, such as a 10bit image, a 12bit image, a 16bit image, and so on.
S102: the method comprises the steps of identifying a face area in an image to be processed, and determining the brightness range of the face area.
In the embodiment of the present application, the face region may be recognized by using a neural-Network Processing Unit (NPU). Namely, a neural network model for detecting the face area is configured in the NPU, and the face area in the image to be processed is identified through the neural network.
Specifically, after the mobile phone 100 acquires the image to be processed in response to the user operation, the image to be processed may be input to the neural network model in the NPU for processing, so as to obtain the face area in the image to be processed. The neural network model may be a CNN model, such as a fast R-CNN model, a RetinaFace model, and other common neural network models, which are not described herein again.
It should be noted that, when the NPU outputs, the face region may be represented by a region of interest (ROI), that is, the face region is delineated from the image to be processed in a manner of a square frame, a circle, an ellipse, an irregular polygon, and the like, and for example, the face region may be delineated by a square frame.
After the face region is determined, the brightness range of the face region may be directly output through an Auto Exposure (AE) algorithm in an Image Signal Processor (ISP). The brightness range may be represented by a range from a brightness value of a pixel with the lowest brightness in the face region to a brightness value of a pixel with the highest brightness in the face region, that is, the brightness value of the pixel with the lowest brightness in the face region is a lower limit value of the brightness range, and the brightness value of the pixel with the highest brightness in the face region is an upper limit value of the brightness range.
The brightness range may also be determined based on the average brightness of the face region: after the average brightness of the face region is determined, it is increased by a preset amount to obtain the first brightness and decreased by the preset amount to obtain the second brightness. For example, if the obtained average brightness is 20, increasing it by 10% of the average brightness (i.e., 2) gives a first brightness of 22, and decreasing it by 10% (i.e., 2) gives a second brightness of 18, so the brightness range is 18 to 22. It should be noted that the preset amount may be set according to practical applications, for example 10% or 20% of the average brightness, and the application is not limited thereto.
It should be noted here that the maximum brightness, the minimum brightness, and the average brightness of the face region can be directly determined by AE. The AE may determine the brightness of each pixel point in the face region, and then determine the maximum brightness, the minimum brightness, and the average brightness of the face region through corresponding formulas.
For example, the AE algorithm is used to count the brightness of the pixels in the face area, and then the average brightness of the face area is calculated by the following formula:
Luma_face = (1/N) × Σ Luma_pi
wherein Luma_face refers to the average brightness of the face region, Luma_pi refers to the brightness of the i-th pixel point in the face region, and N is the number of pixel points contained in the face region.
For another example, the maximum brightness of the face region may be obtained by accessing AE in code and calling the maximum function (max(Face_area)), and the minimum brightness of the face region may be obtained by calling the minimum function (min(Face_area)).
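Illustratively, with the face pixels available as an array, these statistics reduce to a few NumPy operations. The sketch below is not the AE implementation; the 10% margin is simply the illustrative value from the example above.

```python
import numpy as np

def face_brightness_range(luma, face_mask, margin=0.10):
    """Return two candidate (lower, upper) brightness ranges of the face region:
    one from the min/max of the face pixels, one from the mean +/- a preset margin."""
    face_luma = luma[face_mask]

    luma_face = face_luma.mean()   # average brightness of the face region
    luma_min = face_luma.min()     # lowest brightness in the face region
    luma_max = face_luma.max()     # highest brightness in the face region

    range_minmax = (luma_min, luma_max)
    range_mean = (luma_face * (1 - margin), luma_face * (1 + margin))
    return range_minmax, range_mean
```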
S103: and adjusting an initial tone mapping curve according to the brightness range of the face region to obtain the tone mapping curve of the face region.
In the embodiment of the application, an initial tone mapping curve can be determined based on an image to be processed, and then the range of the output brightness of the initial tone mapping curve is determined according to the brightness range of the face region, so that the tone mapping curve for the face region is obtained.
Determining the initial tone mapping curve based on the image to be processed may include determining a calibration curve from the exposure ratio of the image to be processed, and then obtaining the initial tone mapping curve corresponding to the image to be processed by a histogram equalization method.
It should be noted that the initial tone mapping curve is determined based on the exposure ratio of the image to be processed, and therefore, the initial tone mapping curve may be used to perform tone mapping on a non-face region of the image to be processed, and a portion corresponding to the face region may be adjusted based on the luminance range of the face region by using the initial tone mapping curve, so as to obtain a tone mapping curve corresponding to the face region.
For example, when the image to be processed is synthesized from a long exposure frame and a short exposure frame, the exposure ratio of the image to be processed can be determined by respectively obtaining the exposure time and ISO of the long exposure frame and the exposure time and ISO of the short exposure frame.
The calibration curve can be obtained by interpolation based on the exposure of the image to be processed, and the histogram equalization method can equalize the image based on the exposure ratio of the image to be processed, so as to determine the initial tone mapping curve corresponding to the image to be processed.
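As a rough sketch only (the patent names the histogram equalization method but does not spell it out, and the bit depths and bin count here are assumptions), an initial global curve could be derived from the luminance histogram of the image to be processed as follows:

```python
import numpy as np

def initial_tone_curve(image_y, in_bits=16, out_bits=10, bins=1024):
    """Build an initial global tone mapping curve by histogram equalization.

    image_y : luminance of the (high bit width) image to be processed.
    Returns (edges, lut): input bin starts and the output brightness per bin.
    """
    in_max, out_max = 2 ** in_bits - 1, 2 ** out_bits - 1
    hist, edges = np.histogram(image_y, bins=bins, range=(0, in_max))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                    # normalized cumulative distribution
    lut = np.round(cdf * out_max)     # equalized output brightness per bin
    return edges[:-1], lut
```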
The initial tone mapping curve may be stored in the ISP in the form of a look-up table (LUT).
Adjusting the initial tone mapping curve according to the brightness range of the face region may mean increasing, by a preset range, the range of the output brightness of the initial tone mapping curve corresponding to the brightness range of the face region; for example, the preset range may be 10% to 20%.
The tone mapping curve of the adjusted face region may also be stored in the ISP in the form of an LUT table.
It should be noted that the LUT table maps the luminance before tone mapping to the luminance after transformation; that is, the input value is the luminance value before tone mapping (i.e., the actual luminance value of each pixel in the image to be processed), and the output value is the luminance value obtained after a series of transformations (such as thresholding, inversion, binarization, contrast adjustment, linear transformation, etc.).
Illustratively, as shown in Table 1, the LUT table includes two fields (input and output), where the input field is the original luminance value of a pixel in the image, and the output field is the tone-mapped luminance value corresponding to that original luminance value.
Table 1:
input output
2^4 2^6
2^6 2^7
2^10 2^8
2^12 2^10
2^16 2^12
2^18 2^14
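As an illustrative sketch (in practice the LUT is applied inside the ISP), looking up per-pixel luminance through such a table, with linear interpolation between entries as an assumption, could be written as:

```python
import numpy as np

def apply_lut(image_y, edges, lut):
    """Map each pixel's luminance through the tone mapping LUT.

    Values falling between table entries are linearly interpolated;
    a hardware ISP may use its own interpolation scheme.
    """
    return np.interp(image_y.astype(np.float64), edges, lut)
```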
Here, since the high dynamic range is compressed to a low dynamic range during tone mapping, and the brightness of a backlit face region is usually confined to a small brightness interval, the brightness differences obtained after tone mapping become small, which may cause the face region to lose contrast. Therefore, for the face region, increasing the range of the output brightness of the tone mapping curve increases the brightness differences (i.e., the contrast) after tone mapping, thereby avoiding the loss of contrast in the face region.
Specifically, referring to fig. 8, after the initial tone mapping curve L1 is determined, the tone mapping curve L2 of the face region can be obtained by adjusting the initial tone mapping curve.
It should be noted that the abscissa of the coordinate system in fig. 8 represents the luminance of a pixel of the image to be processed, the ordinate of the coordinate system in fig. 8 represents the luminance of a pixel of the processed image, and the range of the output luminance refers to the range of the ordinate.
As can be seen from Fig. 8, for the initial tone mapping curve L1 over the face region (abscissa Face_th1 to Face_th2), the ordinate (the range of the output brightness) is out_th1 to out_th2, which is significantly smaller than the ordinate range (out_th1 to out_th2s) of the tone mapping curve L2 of the face region; that is, the range of the output brightness when tone mapping the face region based on the tone mapping curve L2 of the face region is larger than the range of the output brightness when tone mapping the face region based on the initial tone mapping curve L1.
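As a hedged sketch (not the patent's exact procedure), one way to realize the widening of the output range shown by curve L2 is to stretch the LUT outputs over the face brightness interval about their centre; the 15% expansion factor is an assumed value within the 10%-20% example given above:

```python
import numpy as np

def face_tone_curve(edges, lut, face_range, expand=0.15):
    """Widen the output-brightness span of the initial curve over the face range.

    edges, lut : the initial curve from the earlier histogram-equalization sketch.
    face_range : (lower, upper) face brightness limits from AE.
    expand     : assumed preset expansion of the output range (15%).
    """
    lo, hi = face_range
    sel = (edges >= lo) & (edges <= hi)         # bins covering the face brightness
    if not sel.any():
        return lut.copy()
    out_lo, out_hi = lut[sel].min(), lut[sel].max()
    center = 0.5 * (out_lo + out_hi)
    face_lut = lut.copy()
    # Stretch outputs in the face interval about their centre by (1 + expand),
    # enlarging out_th1..out_th2 in the manner illustrated by curve L2 in Fig. 8.
    face_lut[sel] = center + (lut[sel] - center) * (1.0 + expand)
    return face_lut
```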
S104: and tone mapping is carried out on the non-face area based on the initial tone mapping curve, and tone mapping is carried out on the face area based on the tone mapping curve of the face area to obtain a processed image.
In the embodiment of the present application, the processed image may be an HDR low bit width image.
In the embodiment of the application, for a face area, after the brightness of each pixel of the face area in an image to be processed is obtained through an ISP, the corresponding output brightness can be obtained through conversion of a tone mapping curve of the face area, that is, the brightness of each pixel of the face area in the processed image can be determined.
For the non-face area, the brightness of each pixel of the non-face area in the image to be processed can be obtained through the ISP, and then the corresponding output brightness can be obtained through conversion of the initial tone mapping curve, that is, the brightness of each pixel of the non-face area in the processed image can be determined.
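Combining the earlier sketches (face_roi_mask, initial_tone_curve, face_tone_curve and apply_lut are the assumed helpers defined above), a minimal per-region mapping for S104 could look like this:

```python
def tone_map_image(image_y, face_mask, edges, init_lut, face_lut):
    """Non-face pixels use the initial curve; face pixels use the face curve."""
    out = apply_lut(image_y, edges, init_lut)                         # global mapping
    out[face_mask] = apply_lut(image_y[face_mask], edges, face_lut)   # face region
    return out
```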
It should be noted that, for the initial tone mapping curve, there are multiple calibration curves drawn from empirical values under each exposure coefficient (these calibration curves may correspond to different scene categories, such as a face scene and a non-face scene). Therefore, when determining the initial tone mapping curve, tone mapping may be performed with each of these calibration curves, the image quality of the images obtained after tone mapping may be compared, and the calibration curve corresponding to the image with the best image quality may be selected as the initial tone mapping curve for that exposure coefficient.
It should be noted that comparing the image quality of the images obtained after tone mapping may be done by visual inspection or by machine identification, which is not limited in this application.
It can be seen from the above that, in the embodiment of the application, face recognition is performed on the image to be processed to find the corresponding face region, the tone mapping curve of the face region is adjusted individually during global tone mapping, and the other regions still use the global tone mapping curve, so that tone mapping of the other regions of the whole image is not affected, the contrast of the face region is ensured, and distortion of the non-face region is avoided.
Referring to fig. 9, different from the previous embodiment, after S103, the image processing method according to another embodiment of the present application further includes the following steps:
s105: and carrying out fusion processing on the edge area of the human face based on the tone mapping curve of the human face area and the initial tone mapping curve.
In the embodiment of the application, the tone mapping value of the face edge region can be obtained after the fusion processing is performed on the face edge region.
In the embodiment of the present application, the face edge region refers to the region where the face and the background meet. In order to prevent the contrast of the entire tone-mapped image from changing too abruptly, fusion processing needs to be performed at the junction of the two regions that are tone mapped with different tone mapping curves (i.e., the face edge region), so that the two regions transition smoothly.
In a specific application, the gain value of the initial tone mapping curve and the gain value of the tone mapping curve of the face region may be fused.
It should be noted that the gain value refers to an offset, and in the embodiment of the present application, the gain value may refer to an output value of the tone mapping curve (i.e., a value corresponding to the vertical axis in fig. 8).
In a specific application, the face edge region may be determined based on a face detection method, and the brightness of each pixel point in the face edge region is determined. For that brightness, the corresponding output brightness on the initial tone mapping curve (hereinafter referred to as the first output brightness) and the corresponding output brightness on the tone mapping curve of the face region (hereinafter referred to as the second output brightness) are found. The first output brightness and the second output brightness are then fused in an average fusion manner, that is, 50% of the first output brightness and 50% of the second output brightness are taken and added to obtain the output brightness of the pixel point during tone mapping.
In a specific application, besides average fusion, a nonlinear fusion method may also be adopted: for example, for a pixel close to the face region, 30% of the first output brightness and 70% of the second output brightness are fused, while for a pixel close to the non-face region, 70% of the first output brightness and 30% of the second output brightness are fused. Of course, fusion based on a linear transition may also be used, that is, the proportions of the first output brightness and the second output brightness are set according to the distance from the pixel point to the face region; for example, the closer the pixel point is to the face region, the higher the proportion of the second output brightness, and the farther away, the lower the proportion (which may be set in turn to 90%, 80%, 70%, 60%, 50%, 40%, 30%, 20%, 10%, and so on), with the corresponding proportion of the first output brightness set to 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, respectively. It should be clear that the above values are merely examples and are not limiting.
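As an illustrative sketch only, a linear-transition fusion of the two output brightnesses over a face edge band could be written as below; the band width and the distance-based weights are assumptions that mirror the proportions listed above:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse_edge_region(out_init, out_face, face_mask, band=20):
    """Blend the two tone-mapped outputs over the face edge region.

    out_init : luminance tone mapped with the initial curve (first output brightness).
    out_face : luminance tone mapped with the face curve (second output brightness).
    band     : assumed width, in pixels, of the edge region around the face.
    """
    # Distance of each non-face pixel from the face boundary (0 inside the face).
    dist = distance_transform_edt(~face_mask)
    # Weight of the second output brightness: 1 at the face boundary, 0 at `band` pixels away.
    w_face = np.clip(1.0 - dist / band, 0.0, 1.0)
    w_face[face_mask] = 1.0            # inside the face, the face curve is used alone
    return w_face * out_face + (1.0 - w_face) * out_init
```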
After the initial tone mapping curve and the tone mapping curve of the face region are fused by average fusion, nonlinear fusion, linear-transition fusion, or the like, the tone mapping values of the face edge region can be determined; similarly, these tone mapping values can be stored in the ISP in the form of an LUT table. The LUT table corresponding to the tone mapping values of the face edge region can also take the form of Table 1, with the values of the output field replaced by the tone mapping values of the face edge region.
Accordingly, S104 includes: and tone mapping is carried out on the non-face area based on the initial tone mapping curve, tone mapping is carried out on the face area based on the tone mapping curve of the face area, and tone mapping is carried out on the face edge area based on the fused tone mapping value of the face edge area to obtain a processed image.
In the embodiment of the application, in addition to the tone mapping curve of the face region and the initial tone mapping curve, the fused tone mapping values of the face edge region are also obtained, so that the output brightness of each pixel point in the face edge region can be determined from these tone mapping values, thereby determining the brightness of each pixel of the face edge region in the processed image.
Therefore, the image processing method provided by the embodiment of the application can not only ensure the contrast of the face region and avoid the distortion of the non-face region, but also enable the face region and the non-face region to be in smooth transition, and avoid the large mutation of the overall contrast of the image.
Referring to fig. 10, different from the previous embodiment, the image processing method according to another embodiment of the present application further includes the following steps after S104:
s106: and carrying out local tone mapping adjustment on the processed image to obtain a target image after the local tone mapping adjustment.
Since global tone mapping processes all pixel points by the same method, it does not distinguish whether a pixel lies in a brighter or a darker area, so the resulting image loses much local contrast and detail. Local tone mapping adjustment is a tone mapping method that adjusts the image according to the pixels surrounding each pixel point. Because pixels at different positions have surroundings of different brightness (a pixel in a brighter area has brighter neighboring pixels, and a pixel in a darker area has darker neighboring pixels), the brightness values obtained after local tone mapping also differ, and the mapping result of a pixel point is influenced by its surrounding pixel points. This better protects local contrast and image detail in the highlight and shadow parts and improves the tone mapping effect.
In the embodiment of the present application, the local tone mapping adjustment may adopt an existing local tone mapping method, such as a luminance-reflectance separation method or a bilateral filtering method (bilateral filter), which is not described herein again.
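As a simple sketch only (not the patent's specific method), a bilateral-filter-based local adjustment splits the globally tone-mapped luminance into a base layer and a detail layer and amplifies the detail; OpenCV's bilateralFilter is used here, and the filter parameters and detail gain are assumptions:

```python
import cv2
import numpy as np

def local_tone_adjust(processed_y, detail_gain=1.5):
    """Local tone mapping adjustment on the globally tone-mapped luminance.

    processed_y : tone-mapped luminance, assumed normalized to [0, 1] float32.
    detail_gain : assumed amplification factor for the detail layer.
    """
    base = cv2.bilateralFilter(processed_y.astype(np.float32), 9, 0.1, 7)  # smooth base layer
    detail = processed_y - base                                            # local detail layer
    return np.clip(base + detail_gain * detail, 0.0, 1.0)
```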
As can be seen from the above, in the embodiment of the present application, in order to improve the tone mapping effect, after performing global tone mapping, local tone mapping adjustment may also be performed on the image, so as to obtain a target image with a better display effect.
Based on the same inventive concept, as an implementation of the foregoing method, an embodiment of the present application provides an electronic device, where the embodiment of the electronic device corresponds to the foregoing method embodiment, and details in the foregoing method embodiment are not repeated in this embodiment for convenience of reading, but it should be clear that a device in this embodiment can correspondingly implement all the contents in the foregoing method embodiment.
It should be noted that, for the information interaction, execution process, and other contents between the above devices/units, the specific functions and technical effects thereof based on the same concept as those of the method embodiment of the present application can be specifically referred to the method embodiment portion, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the foregoing method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when executed by a processor. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/electronic device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, characterized by comprising:
acquiring an image to be processed;
identifying a face region in the image to be processed, and determining the brightness range of the face region;
adjusting an initial tone mapping curve according to the brightness range of the face area to obtain a tone mapping curve of the face area; the initial tone mapping curve is determined based on an exposure ratio of the image to be processed;
carrying out tone mapping on a non-face area based on an initial tone mapping curve, carrying out tone mapping on a face area based on a tone mapping curve of the face area, and carrying out fusion processing on an edge area of the face based on the tone mapping curve of the face area and the initial tone mapping curve to obtain a processed image.
2. The image processing method of claim 1, wherein before adjusting the initial tone mapping curve according to the brightness range of the face region to obtain the tone mapping curve of the face region, the method further comprises:
and determining an initial tone mapping curve according to the exposure coefficient of the image to be processed.
3. The image processing method according to claim 1, wherein the recognizing a face region in the image to be processed and determining a brightness range of the face region comprises:
performing face recognition through a neural network model, and determining a face area in the image to be processed;
acquiring the brightness of each pixel point in the face area;
and determining the brightness range of the face region based on the brightness of each pixel point in the face region.
4. The image processing method according to claim 3, wherein the determining the brightness range of the face region based on the brightness of each pixel point in the face region comprises:
determining the average brightness of the pixels in the face region according to the brightness of each pixel in the face region, increasing the preset brightness based on the average brightness to obtain a first brightness, and decreasing the preset brightness based on the average brightness to obtain a second brightness; the second brightness is a lower limit value of the brightness range, and the first brightness is an upper limit value of the brightness range.
5. The image processing method according to claim 3, wherein the determining the brightness range of the face region based on the brightness of each pixel point in the face region comprises:
determining the highest brightness of the face region and the lowest brightness of the face region according to the brightness of each pixel point in the face region, and determining the brightness range based on the highest brightness and the lowest brightness.
6. The image processing method according to claim 1, wherein the image to be processed is an HDR high bit width image.
7. The image processing method according to any one of claims 1 to 6, further comprising, after performing tone mapping on the non-face region based on the initial tone mapping curve and performing tone mapping on the face region based on the tone mapping curve of the face region to obtain a processed image:
and carrying out local tone mapping adjustment on the processed image to obtain a target image after the local tone mapping adjustment.
8. An electronic device comprising a processor and a memory, the processor and memory coupled,
the memory is for storing a computer program which, when executed by the processor, causes the electronic device to perform the steps of the method of any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a computer program which, when run on a computer, causes the computer to carry out the steps of the method according to any one of claims 1 to 7.
10. A chip comprising a processor coupled to a memory for storing computer program instructions which, when executed by the processor, cause the chip to perform the steps of the method of any one of claims 1 to 7.
CN202110991378.2A 2021-08-26 2021-08-26 Image processing method and electronic equipment Active CN114463191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110991378.2A CN114463191B (en) 2021-08-26 2021-08-26 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110991378.2A CN114463191B (en) 2021-08-26 2021-08-26 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114463191A CN114463191A (en) 2022-05-10
CN114463191B true CN114463191B (en) 2023-01-31

Family

ID=81406639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110991378.2A Active CN114463191B (en) 2021-08-26 2021-08-26 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN114463191B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117395495A (en) * 2023-12-08 2024-01-12 荣耀终端有限公司 Image processing method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198152A (en) * 2018-02-07 2018-06-22 广东欧珀移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110047060A (en) * 2019-04-15 2019-07-23 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110473156A (en) * 2019-08-12 2019-11-19 Oppo广东移动通信有限公司 Processing method, device, storage medium and the electronic equipment of image information
CN111131722A (en) * 2019-12-30 2020-05-08 维沃移动通信有限公司 Image processing method, electronic device, and medium
CN111784607A (en) * 2020-06-30 2020-10-16 Oppo广东移动通信有限公司 Image tone mapping method, device, terminal equipment and storage medium
CN112215760A (en) * 2019-07-11 2021-01-12 华为技术有限公司 Image processing method and device
CN112351195A (en) * 2020-09-22 2021-02-09 北京迈格威科技有限公司 Image processing method, device and electronic system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2898474A1 (en) * 2012-09-12 2015-07-29 Koninklijke Philips N.V. Making hdr viewing a content owner agreed process
CN108876886B (en) * 2017-05-09 2021-07-27 腾讯科技(深圳)有限公司 Image processing method and device and computer equipment
CN109360163A (en) * 2018-09-26 2019-02-19 深圳积木易搭科技技术有限公司 A kind of fusion method and emerging system of high dynamic range images
CN110033418B (en) * 2019-04-15 2023-03-24 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112565636B (en) * 2020-12-01 2023-11-21 影石创新科技股份有限公司 Image processing method, device, equipment and storage medium
CN113132696B (en) * 2021-04-27 2023-07-28 维沃移动通信有限公司 Image tone mapping method, image tone mapping device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114463191A (en) 2022-05-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant