CN116668838A - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN116668838A
Authority
CN
China
Prior art keywords
image
camera
electronic device
conversion
image processing
Prior art date
Legal status
Granted
Application number
CN202211635006.7A
Other languages
Chinese (zh)
Other versions
CN116668838B (en)
Inventor
黎浩翔
郗东苗
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Publication of CN116668838A
Application granted
Publication of CN116668838B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour


Abstract

The present application provides an image processing method and an electronic device, relates to the field of image processing, and can improve image processing quality and meet user demands. The electronic device receives a first operation by a user on a camera application and, in response to the first operation, starts a camera to acquire a first image. Also in response to the first operation, the electronic device displays a preview interface in which a second image is displayed, the second image being obtained by adjusting the hue and saturation of the first image based on a first conversion relationship acting on any two color values in the RGB color space; a first control for shooting is also displayed in the preview interface. The electronic device may further obtain environmental information including at least a luminance value and a correlated color temperature and, in response to a second operation by the user on the first control, store a target image obtained by adjusting the hue and saturation of the first image based on a second conversion relationship that is determined according to the environmental information and acts on the three color values of the RGB color space.

Description

Image processing method and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and an electronic device.
Background
Look-up tables (LUTs) are widely used in image processing; for example, a lookup table may be used for image color correction, image enhancement, image gamma correction, and the like. Specifically, a lookup table may be loaded in the image signal processor (image signal processor, ISP), and the original image may be processed according to the lookup table by mapping the pixel values of the original image, so as to change the color style of the image and realize different image effects. A minimal sketch of this table-based mapping is given below.
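As an illustration of the mapping described above, here is a minimal sketch in Python; the 256-entry table and the gamma curve are illustrative assumptions, not parameters taken from the patent.

```python
import numpy as np

def build_gamma_lut(gamma: float = 2.2, size: int = 256) -> np.ndarray:
    """Precompute a table mapping each 8-bit input value through a gamma curve."""
    x = np.arange(size, dtype=np.float64) / (size - 1)
    return np.round(255.0 * x ** (1.0 / gamma)).astype(np.uint8)

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Remap every pixel of an 8-bit image by direct table indexing."""
    return lut[image]

# Usage: remap a small random 8-bit RGB image.
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
out = apply_lut(img, build_gamma_lut())
```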
At present, a chip system can adjust images only through a limited number of lookup tables preset in the terminal, and the color correction effect of these limited lookup tables is monotonous, so user demands cannot be met.
Therefore, under the constraints of the chip system, how to further improve the image quality of image processing has become a problem to be solved.
Disclosure of Invention
The application provides an image processing method and electronic equipment, which can be used for carrying out image processing on an original image by combining environment information and improving the image quality of a target image.
In a first aspect, the present application provides an image processing method applied to an electronic device. The electronic device receives a first operation by a user on a camera application and, in response to the first operation, starts a camera to acquire a first image. Also in response to the first operation, the electronic device displays a preview interface in which a second image is displayed, the second image being obtained by adjusting the hue and saturation of the first image based on a first conversion relationship acting on any two color values in the RGB color space; a first control for shooting is also displayed in the preview interface. The electronic device may further obtain environmental information including at least a luminance value and a correlated color temperature and, in response to a second operation by the user on the first control, store a target image obtained by adjusting the hue and saturation of the first image based on a second conversion relationship that is determined according to the environmental information and acts on the three color values of the RGB color space.
The image processing method provided by the present application displays a second image on the preview interface, where the second image is obtained by adjusting the hue and saturation of the first image using the first conversion relationship acting on any two color values in the RGB color space; when the image is stored, the hue and saturation of the first image are further adjusted, according to the environmental information, using the second conversion relationship acting on the three color values of the RGB color space, so that a color adjustment effect matched to the environment can be presented and user demands can be met. In addition, in the image processing method provided by the present application, the first conversion relationship is used to process the hue and saturation of the first image during preview, while during storage the camera algorithm library calls the graphics processor to operate on the first image; thus the adjustment effect can be presented to the user more quickly, power consumption is saved, and user experience is improved.
In one possible implementation, the electronic device stores a plurality of conversion relationships in advance, and after receiving the second operation, determines from them, based on the first image, a third conversion relationship acting on three color values in the RGB color space. The electronic device may further determine the second conversion relationship based on the environmental information and the third conversion relationship.
The image processing method provided by the present application can obtain a dynamically changing adjustment effect by using a limited set of conversion relationships together with dynamically changing environmental information, so that the adjustment effect on the image is better and user demands can be met.
In one possible implementation, the electronic device determines that the first image includes a portrait, and determines the third conversion relationship by identifying one or more pieces of characteristic information among gender, age, and living region.
The electronic device executes a face recognition algorithm; if the first image is found to include a portrait, the electronic device may execute a face analysis algorithm to obtain the gender, age, and living-region information of the portrait, and determine the third conversion relationship according to one or more of these pieces of information. By using different conversion relationships for portraits of different genders, ages, and living regions, the electronic device can process different portraits more naturally and with better effect, as the sketch after this paragraph illustrates.
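A hedged sketch of how preset conversion relationships might be selected from the recognized attributes; the dictionary keys, the age cut-off, and the fallback table are illustrative assumptions, not details taken from the patent.

```python
from typing import Optional

def select_third_lut(luts: dict, gender: Optional[str],
                     age: Optional[int], region: Optional[str]):
    """Pick a preset 3D LUT keyed by whichever portrait attributes were recognized."""
    age_group = "child" if (age is not None and age < 18) else "adult"
    key = (gender or "any", age_group, region or "any")
    # Fall back to a generic table when no attribute-specific one exists
    # (the sketch assumes such a default entry is always present).
    return luts.get(key, luts[("any", "adult", "any")])
```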
In one possible implementation, the electronic device determines the third conversion relationship according to whether a front-facing camera or a rear-facing camera is used.
The subject photographed generally differs between the front camera and the rear camera: with the front camera, mostly portraits are taken, whereas with the rear camera, scenery or people may be photographed. Moreover, when the front camera is used, the electronic device is very close to the photographed object, so the user can better adjust lighting, angle, and the like; when the rear camera is used, the electronic device is farther from the photographed object and the user's means of adjustment are limited. Therefore, by judging whether the user is currently using the front camera or the rear camera to determine the third conversion relationship, the electronic device can obtain a better image processing effect.
In one possible implementation manner, the preview interface displayed by the electronic device includes a plurality of second controls indicating shooting templates, each second control indicating one shooting template, and in response to a third operation of the second control by the user, the electronic device may directly determine a third conversion relationship corresponding to the shooting template indicated by the second control.
The image processing method provided by the application can directly determine the shooting template according to the selection of the user, provides more free selection for the user and improves the user experience.
In one possible implementation, when the electronic device determines that the luminance value and the correlated color temperature fall within the first threshold range, the second conversion relationship is determined as: LUT(x, y) = LUT00.
In one possible implementation, when the electronic device determines that the luminance value and the correlated color temperature fall within the second threshold range, the second conversion relationship is determined as: LUT(x, y) = ((x1 - x) · LUT00 + (x - x0) · LUT10) / (x1 - x0).
In one possible implementation, when the electronic device determines that the luminance value and the correlated color temperature fall within the third threshold range, the second conversion relationship is determined as: LUT(x, y) = ((x1 - x)(y1 - y) · LUT00 + (x - x0)(y1 - y) · LUT10 + (x1 - x)(y - y0) · LUT01 + (x - x0)(y - y0) · LUT11) / ((x1 - x0)(y1 - y0)), where LUT00, LUT10, LUT01, and LUT11 represent third conversion relationships, LUT(x, y) represents the second conversion relationship, x and y are the luminance value and correlated color temperature, and x0, x1, y0, y1 are preset thresholds (see the fusion description below).
According to the ranges in which the luminance value and the correlated color temperature fall, the electronic device converts the third conversion relationships into the second conversion relationship by interpolation. By adjusting the limited number of prestored third conversion relationships, a more suitable second conversion relationship can be obtained, so the image processing effect is better.
In one possible implementation, after determining the second conversion relationship, the electronic device converts the first image in the first color space into a third image in the second color space, processes the target area of the third image to obtain a fourth image, and then converts the fourth image in the second color space into the target image in the first color space.
The electronic device converts the first image from the first color space into a third image in the second color space, which makes it convenient to further process the third image using the second conversion relationship; the second conversion relationship processes images in the second color space better than in the first color space, and after processing the fourth image is converted back to the first color space to facilitate the subsequent processing flow. A minimal sketch of this round trip is given below.
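A minimal sketch of the convert, process-the-target-area, convert-back flow; the patent does not name the two color spaces, so RGB and YUV with BT.601 full-range matrices are assumed here, and apply_second_conversion is a placeholder standing in for the fused 3D LUT.

```python
import numpy as np

# BT.601 full-range RGB<->YUV matrices (an assumption; the patent leaves
# the color spaces and matrices unspecified).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def process_target_area(first_rgb: np.ndarray, mask: np.ndarray,
                        apply_second_conversion) -> np.ndarray:
    third = first_rgb @ RGB2YUV.T                        # first -> second color space
    fourth = third.copy()
    fourth[mask] = apply_second_conversion(third[mask])  # adjust only the target area
    return fourth @ YUV2RGB.T                            # back to the first color space
```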
In one possible implementation, after the electronic device determines that the first image includes the portrait, the electronic device further identifies a face area of the portrait, and then uses the second conversion relationship to process the face area of the third image.
When a face is present in the image shot by the user, the user pays more attention to the face region. The electronic device uses a face recognition algorithm to identify the face area and then uses the second conversion relationship to adjust it, so that the face area in the image is more prominent, the color adjustment effect is better, and power consumption is also reduced.
In one possible implementation, the electronic device obtains the environmental information through parsing of the first image.
Because the electronic device needs to analyze the image anyway to determine the third conversion relationship, obtaining the environmental information during that same analysis reduces the number of algorithm executions and further reduces power consumption.
In one possible implementation, the electronic device receives environmental information collected by the sensor.
The environmental information acquired by the sensor is more accurate; after receiving the environmental information transmitted by the sensor, the electronic device fuses it with the third conversion relationship, so that a more accurate second conversion relationship and a better image processing effect can be obtained.
In a second aspect, the present application provides a chip system applied to an electronic device, the chip system comprising a processor configured to invoke computer instructions to cause the electronic device to perform any one of the image processing methods of the first aspect.
In a third aspect, the present application provides an electronic device comprising a processor, a memory, a camera, and a display screen. The display screen is configured to display a preview interface, and the memory is configured to store computer program code comprising computer instructions that the processor invokes to cause the electronic device to perform any one of the image processing methods of the first aspect.
In one possible implementation, the electronic device includes an image signal processor that invokes computer instructions to cause the electronic device to adjust the hue and saturation of the first image to obtain the second image based on the first conversion relationship.
In one possible implementation, the electronic device includes a graphics processor that invokes computer instructions to cause the electronic device to adjust the hue and saturation of the first image based on the second conversion relationship to obtain the target image.
The algorithm for adjusting the first image to obtain the second image is simpler than the algorithm for adjusting the first image to obtain the target image, and an algorithm executed in the image signal processor consumes less power and runs faster than one executed by the graphics processor; therefore, executing the simpler algorithm in the image signal processor during preview reduces power consumption and improves speed.
In a fourth aspect, there is provided a computer-readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform any one of the image processing methods of the first aspect.
In a fifth aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any one of the image processing methods of the first aspect.
In a sixth aspect, the present application provides an image processing apparatus comprising: the display unit comprises a display screen and a camera and is used for displaying a preview interface; the processing unit is configured to perform any one of the image processing methods of the first aspect.
In one possible implementation, the image processing apparatus stores a plurality of conversion relationships in advance, and after receiving the second operation, the processing unit determines from them, based on the first image, a third conversion relationship acting on three color values in the RGB color space. The processing unit may further determine the second conversion relationship based on the environmental information and the third conversion relationship.
In one possible implementation, the processing unit determines that the first image includes a portrait, and determines the third conversion relationship by identifying one or more pieces of characteristic information among gender, age, and living region.
In a possible implementation, the processing unit is configured to determine the third conversion relation according to using a front camera or a rear camera.
In a possible implementation manner, the preview interface displayed by the display unit includes a plurality of second controls for indicating shooting templates, each second control indicates one shooting template, and the processing unit is used for responding to a third operation of the second controls by a user, so as to determine a third conversion relation corresponding to the shooting template indicated by the second control.
In a possible implementation manner, the processing unit is configured to convert the first image in the first color space into the third image in the second color space, then process the target area of the third image according to the second conversion relationship to obtain a fourth image, and convert the fourth image in the second color space into the target image in the first color space.
In one possible implementation manner, after the processing unit determines that the first image includes the portrait, the processing unit further identifies a face area of the portrait, and then uses the second conversion relationship to process the face area of the third image.
In one possible implementation, the image processing apparatus includes an image signal processor and a graphics processor; the image signal processor is used for calling a computer instruction to enable the electronic equipment to adjust the hue and saturation of the first image based on the first conversion relation to obtain a second image; the graphic processor is used for calling computer instructions to enable the electronic device to adjust the hue and saturation of the first image based on the second conversion relation to obtain a target image.
Drawings
Fig. 1 is a schematic diagram of a hardware system suitable for use with the present application;
Fig. 2 is a schematic diagram of a system architecture provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of an application scenario provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of another application scenario provided by an embodiment of the present application;
Figs. 5(a) to 5(d) are schematic views of a display interface of an electronic device executing the image processing method provided by an embodiment of the present application;
Figs. 6(a) to 6(d) are schematic views of another display interface of an electronic device executing the image processing method provided by an embodiment of the present application;
Figs. 7(a) to 7(c) are schematic views of another display interface of an electronic device executing the image processing method provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of image data processing according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a method for fusing three-dimensional lookup tables according to an embodiment of the present application;
Fig. 10 is a schematic diagram of another method for fusing three-dimensional lookup tables provided by an embodiment of the present application;
Fig. 11 is a schematic diagram of another method for fusing three-dimensional lookup tables provided by an embodiment of the present application;
Figs. 12(a) to 12(c) are schematic diagrams of the RGB color space changes of pixels processed by the image processing method according to an embodiment of the present application;
Figs. 13(a) to 13(d) are comparison diagrams of changes before and after processing by the image processing method according to an embodiment of the present application;
Figs. 14(a) and 14(b) are schematic diagrams of signal interaction in an image processing method according to an embodiment of the present application;
Fig. 15 is a schematic diagram of signal interaction in another image processing method according to an embodiment of the present application;
Figs. 16(a) to 16(f) are schematic views of another display interface of an electronic device executing the image processing method provided by an embodiment of the present application;
Fig. 17 is a schematic diagram of a chip system according to an embodiment of the present application.
Detailed description of the preferred embodiments
The terminology used in the following embodiments is for the purpose of describing the embodiments and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include expressions such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "one or more" means one, two, or more than two. The term "and/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Embodiments of an electronic device, an image processing method for such an electronic device, and an apparatus using such an image processing method are described below. In some embodiments, the electronic device may be a portable electronic device that also includes other functionality such as personal digital assistant and/or music player functionality, for example a cell phone, a tablet computer, or a wearable electronic device with wireless communication capabilities (such as a smart watch). Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be appreciated that in other embodiments, the electronic device described above may not be a portable electronic device but a desktop computer.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. The electronic device 100 may include a processor 110, a display 120, a camera 130, an internal memory 140, a subscriber identity module (subscriber identification module, SIM) card interface 150, a universal serial bus (universal serial bus, USB) interface 160, a charge management module 170, a power management module 171, a battery 172, a sensor module 180, a mobile communication module 190, a wireless communication module 200, an antenna 1, and an antenna 2, among others. The sensor modules 180 may include, among other things, pressure sensors 180A, fingerprint sensors 180B, touch sensors 180C, ambient light sensors 180D, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include a central processing unit (central processing unit, CPU), an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. The central processing unit is also called a central processor and the graphics processing unit is also called a graphics processor. In some embodiments, the electronic device 100 may also include one or more processors 110. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. In other embodiments, memory may also be provided in the processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 may be a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it may be called directly from memory. This avoids repeated accesses and reduces the latency of the processor 110, thereby improving the efficiency of the electronic device 100 in processing data or executing instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, among others. The USB interface 160 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 160 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. The USB interface 160 may also be used to connect headphones through which audio is played.
It should be understood that the interface connection relationship between the modules illustrated in the embodiment of the present application is used for illustration, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 190, the wireless communication module 200, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 implements display functions through a GPU, a display screen 120, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 120 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 120 is used to display images, videos, and the like. The display 120 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diode, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or more display screens 120.
In some embodiments of the present application, when the display panel is made of materials such as OLED, AMOLED, or FLED, the display screen 120 in fig. 1 may be folded. Here, "the display 120 may be folded" means that the display may be folded at any angle at any portion and held at that angle; for example, the display 120 may be folded left-right from the middle, or folded up-down from the middle.
The display 120 of the electronic device 100 may be a flexible screen, which is currently attracting much attention due to its unique characteristics and great potential. Compared with a traditional screen, a flexible screen has the characteristics of strong flexibility and bendability, can provide the user with new interaction modes based on that bendability, and can meet more user demands on electronic devices. For an electronic device equipped with a foldable display screen, the foldable display screen can be switched at any time between a small screen in the folded configuration and a large screen in the unfolded configuration. Accordingly, users use the split-screen function on electronic devices configured with foldable display screens more and more frequently.
The internal memory 140 may be used to store one or more computer programs, including instructions. The processor 110 may cause the electronic device 100 to execute the display method provided in some embodiments of the present application, as well as various applications, data processing, and the like, by executing the above-described instructions stored in the internal memory 140. The internal memory 140 may include a storage program area and a storage data area. The storage program area can store an operating system; the storage program area may also store one or more applications (such as gallery, contacts, etc.), etc. The storage data area may store data created during use of the electronic device 100 (e.g., photos, contacts, etc.), and so on. In addition, the internal memory 140 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage units, flash memory units, universal flash memory (universal flash storage, UFS), and the like. In some embodiments, processor 110 may cause electronic device 100 to perform the image processing methods provided in embodiments of the present application, as well as other applications and data processing, by executing instructions stored in internal memory 140, and/or instructions stored in a memory provided in processor 110.
The internal memory 140 may be used to store a related program of the image processing method provided in the embodiment of the present application, and the processor 110 may be used to call the related program of the image processing method stored in the internal memory 140 at the time of image processing, to perform the image processing method of the embodiment of the present application.
The sensor module 180 may include a pressure sensor 180A, a fingerprint sensor 180B, a touch sensor 180C, an ambient light sensor 180D, and the like.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 120. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. A capacitive pressure sensor may be a device comprising at least two parallel plates of conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 120, the electronic device 100 detects the touch operation via the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with an intensity smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The fingerprint sensor 180B is used to collect fingerprints. The electronic device 100 may use the collected fingerprint features to implement functions such as unlocking, accessing an application lock, taking photos, and answering incoming calls.
The touch sensor 180C is also referred to as a touch panel. The touch sensor 180C may be disposed on the display screen 120, and the touch sensor 180C and the display screen 120 form a touch screen, also referred to as a "touch-controlled screen." The touch sensor 180C is used to detect a touch operation acting on or near it. The touch sensor 180C may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 120. In other embodiments, the touch sensor 180C may also be disposed on the surface of the electronic device 100 at a location different from that of the display 120.
Ambient light sensor 180D is used to sense the ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 120 based on the perceived ambient light level. The ambient light sensor 180D may also be used to automatically adjust the white balance when photographing, and may communicate information about the environment in which the device is located to the GPU.
The electronic device 100 may acquire an image through the camera 130, process the image through an ISP, a GPU, a video codec, an NPU, etc. in the processor 110, and implement interaction with a user in the shooting process through the display screen 120.
The ISP is used to process the data fed back by the camera 130. For example, when shooting, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing, so that the electric signal is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness value and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 130.
The camera 130 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, etc. format image signal. In some embodiments, the electronic device 100 may include 1 or N cameras 130, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG)1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
Fig. 2 is a software configuration block diagram of the electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces.
The software architecture may include an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250.
The application layer 210 may include camera, gallery, etc. applications.
The application framework layer 220 provides an application programming interface (application programming interface, API) and programming framework for application layer applications. The application framework layer may also include some predefined functions.
For example, the application framework layer 220 may include a camera access interface. Camera management and camera devices may be included in the camera access interface. Wherein camera management may be used to provide an access interface to manage the camera, and the camera device may be used to provide an interface to access the camera.
The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera abstraction layer and other hardware device abstraction layers. The camera abstraction layer may include the camera device 1, the camera device 2, and the like. The camera hardware abstraction layer may be coupled to a camera algorithm library, and the camera hardware abstraction layer may invoke algorithms in the camera algorithm library.
The camera algorithm library may include algorithm instructions for camera algorithms, image algorithms, etc., and perform part of the image processing steps.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the driver layer may include a camera driver, a digital signal processor driver, and a graphics processor driver.
The hardware layer 250 may include sensors, image processors, digital signal processors, graphics processors, memory, and other hardware devices.
For easy understanding, the following embodiments of the present application will take a mobile phone having a structure shown in fig. 1 and fig. 2 as an example, and the image processing method provided by the embodiments of the present application will be specifically described with reference to the accompanying drawings.
Fig. 3 is an application scenario schematic diagram of an image processing method according to an embodiment of the present application. The image processing method of the present application can be applied to image processing. For example, the target image can be obtained by performing color adjustment on all areas of the shot image according to the environment information, so that a better effect is achieved, and the requirements of users are met.
Fig. 4 is a schematic view of another application scenario of the image processing method according to the embodiment of the present application. The image processing method of the present application can be applied to local image processing. For example, the face area of the shot image can be subjected to color adjustment according to the environment information to obtain the target image, so that a better effect is realized in the face area, and the requirements of users are met.
A look-up table is a technique in which color values are stored in a buffer table in advance, and the corresponding color values are indexed directly from the table when an operation is required. Lookup tables may be classified by the number of variables into one-dimensional lookup tables (1D look up table, 1D LUT), two-dimensional lookup tables (2D look up table, 2D LUT), and three-dimensional lookup tables (3D look up table, 3D LUT). For example, a 1D LUT can be adjusted only according to a single independent variable, that is, the R, G, or B value is adjusted independently in the RGB color space, so only adjustments of brightness, contrast, white balance, and the like can be achieved, and accurate color conversion cannot; a 2D LUT or a 3D LUT can adjust the RGB values jointly, so more accurate color conversion can be realized, achieving the effect of adjusting the hue and saturation of an image. Among them, a three-dimensional lookup table can be understood as a function with three independent variables R, G, and B: when the three values R1, G1, B1 are input, the corresponding three values R2, G2, B2 are output. The three-dimensional lookup table therefore has a more accurate color conversion effect: it can adjust not only hue and saturation, but also colors of the same hue at different brightness values. A sketch of such a table lookup is given below. The image processing method provided by the embodiments of the present application is described in detail below with reference to fig. 5(a) to fig. 17.
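For illustration, the following sketch shows how a 3D LUT maps an input (R1, G1, B1) to an output (R2, G2, B2) by trilinear interpolation between sampled nodes; the 17-node grid and the identity table in the usage example are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def sample_3d_lut(lut: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """lut: (N, N, N, 3) table indexed by (R, G, B); rgb: (..., 3) in [0, 1]."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                         # fractional offset inside the cell
    out = np.zeros(rgb.shape, dtype=np.float64)
    for dr in (0, 1):                    # visit the 8 corners of the cell
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[..., 0] if dr else 1 - f[..., 0]) *
                     (f[..., 1] if dg else 1 - f[..., 1]) *
                     (f[..., 2] if db else 1 - f[..., 2]))
                node = lut[hi[..., 0] if dr else lo[..., 0],
                           hi[..., 1] if dg else lo[..., 1],
                           hi[..., 2] if db else lo[..., 2]]
                out = out + w[..., None] * node
    return out

# Usage: an identity 3D LUT with 17 nodes per axis leaves colors unchanged.
n = 17
axis = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
pixel = np.array([0.5, 0.2, 0.8])
assert np.allclose(sample_3d_lut(identity, pixel), pixel)
```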
Example 1
By way of example, fig. 5 (a) shows a graphical user interface (graphical user interface, GUI) of an electronic device, the GUI being the desktop 301 of the electronic device. When the electronic device detects an operation in which the user clicks an icon 302 of a camera Application (APP) on the desktop 301, the camera application may be started, displaying another GUI as shown in fig. 5 (b). The GUI shown in fig. 5 (b) may be a display interface of the camera APP in the photographing mode, and the GUI may include a viewfinder 303 and a control. For example, the controls may include a control 304 for indicating a photograph and a control 305 for indicating an album. The GUI also includes a functional area and a shooting mode selection area.
The functional areas include an intelligent picture recognition control, an AI photography control 306, a flash control, a LUT control, and a setup control. The controls shown in fig. 5 (b) do not constitute a specific limitation on the functional area. In other embodiments of the present application, the functional area may include more or fewer controls than those shown in FIG. 5 (b), or the functional area may include a combination of some of the controls shown in FIG. 5 (b), or the functional area may include sub-controls of some of the controls shown in FIG. 5 (b).
The shooting mode selection area comprises a large aperture mode, a night scene mode, a portrait mode, a shooting mode, a video mode, a multi-mirror video mode and more options. The mode shown in fig. 5 (b) does not constitute a specific limitation of the shooting mode selection area. In other embodiments of the present application, the mode selection area may include more or less modes than those shown in fig. 5 (b), or the photographing mode selection area may include a combination of some of the modes shown in fig. 5 (b).
An image captured by the camera can be displayed in the viewfinder 303.
As shown in fig. 5 (b), the electronic device detects the operation of the user clicking the AI photography control 306 and starts the AI photography function, which intelligently optimizes the color effect according to the photographed subject so that an enhanced picture can be taken with one tap. In response to the user's operation of the AI photography control 306, the display effect of the AI control 306 changes from its state before being turned on, for example by being highlighted or changing color.
As shown in fig. 5 (c), when the AI photographing function is turned on, the electronic device automatically identifies the preview image, and when it is identified that a face exists in the preview image, a portrait mode is automatically started to focus on the face, a focusing frame 307 is displayed in the face area, and a portrait mode identifier 308 is displayed in the preview frame, so that the user can click the photographing control 304 to photograph. As shown in fig. 5 (d), in response to the photographing operation by the user, the electronic device saves the picture adjusted by the second conversion relationship in the album, and displays the picture adjusted by the second conversion relationship in the control 305.
In one possible implementation, after the electronic device initiates the portrait mode, the electronic device may display a preview image in the viewfinder 303 that has been adjusted in hue and saturation by the first transition relationship before the user clicks the capture control 304 to capture.
In one possible implementation, after the electronic device initiates the portrait mode, the electronic device may display a preview image in the viewfinder 303 that has been adjusted in hue and saturation by the second conversion relationship before the user clicks the capture control 304 to capture.
In one possible implementation, the AI photography control 306 is in a default on state, and the AI photography control has been on when the user launched the camera application.
In one possible implementation, the electronic device will also automatically recognize the preview image when the AI photography control 306 is closed.
In one possible implementation, as shown in fig. 5 (a), after the electronic device detects that the user clicks on the icon 302 of the camera application on the desktop 301, the camera application may be launched, displaying another GUI as shown in fig. 6 (a), including a LUT selection control 309 in the functional area of the GUI.
As shown in fig. 6 (a), the electronic device detects an operation in which the user clicks the LUT control 309. As shown in fig. 6 (b), the electronic device may, in response to the user's operation, display a selection box 310 of shooting templates with different image effects in the viewfinder 303; the selection box 310 may include templates 1, 2, 3, 4, 5, and so on, and the user may click template 2. As shown in fig. 6 (c), the electronic device, in response to the user's operation, displays the preview image in the viewfinder 303 with the same effect as template 2, and the user may click the photographing control 304. As shown in fig. 6 (d), in response to the user's operation, the electronic device saves in the album the picture whose hue and saturation were adjusted by the second conversion relationship, and displays in the control 305 the picture whose hue and saturation were adjusted by the third conversion relationship.
In one possible implementation, as shown in fig. 5 (a), after the electronic device detects that the user clicks the icon 302 of the camera application on the desktop 301, the camera application may be started, displaying the GUI shown in fig. 7 (a), which includes a control 311 for switching between the front and rear cameras; the user may click the control 311 to enter the front-camera photographing interface. In response to the user's operation, the electronic device displays another GUI, shown in fig. 7 (b), which includes a function area and a mode selection area. The electronic device automatically enters portrait mode; when it recognizes that a face exists in the preview image, it focuses on the face and displays a focusing frame 312 in the face area, and the user may click the shooting control 304 to shoot. As shown in fig. 7 (c), in response to the user's photographing operation, the electronic device saves in the album the picture whose hue and saturation were adjusted by the second conversion relationship, and displays the adjusted picture in the control 305.
In one possible implementation, after entering portrait mode, the electronic device may display a preview image in the viewfinder that has been adjusted in hue and saturation by the first conversion relationship before the user clicks the capture control 304 to capture. In one possible implementation, after entering portrait mode, the electronic device may display a preview image in the viewfinder that has been adjusted in hue and saturation by the second conversion relationship before the user clicks the capture control 304 to capture.
In one possible implementation, after the user clicks on the icon 302 of the camera application on the desktop 301, the camera application may be started, and the camera application directly enters the photographing interface of the front-facing camera, and displays the GUI as shown in fig. 7 (b).
In one possible implementation, the first conversion relationship refers to a two-dimensional lookup table and the second conversion relationship refers to a three-dimensional lookup table.
It should be understood that the above image processing method is not limited to the photographing mode, but may also be used in camera modes such as portrait mode, video recording mode, and movie mode; and it is not limited to taking photos, but also includes recording video. The photographing mode above is used for illustration and does not limit the present application in any way.
It should be understood that the user operation indicating the shooting behavior may include the user clicking the shooting control 304, the user instructing the electronic device by voice to perform the shooting behavior, or other devices instructing the electronic device to perform the shooting behavior. The foregoing is illustrative and does not limit the application in any way.
The graphical display interface for a user to operate on an electronic device is described above in connection with fig. 5 (a) to 7 (c), and the algorithm for operation of the electronic device is described below in connection with fig. 8 to 15.
Dead pixel correction 401 (defect pixel correction, DPC) resolves defects in the array formed by the light-capturing points on the sensor, or errors arising during conversion of the optical signal, by taking the average of the surrounding pixels in the luminance domain.
Black level correction 402 (black level correction, BLC) is used to correct the black level, i.e., the video signal level produced with no light output on a calibrated display device. Black level correction is performed, on the one hand, because the image sensor has dark current, which causes the pixels to output a voltage even in the absence of illumination; on the other hand, because the image sensor's analog-to-digital conversion has insufficient precision. Taking 8 bits as an example, the effective range of each pixel is 0 to 255, but the image sensor may not be able to convert information close to 0. Based on the visual characteristics of users (who are sensitive to dark details), image sensor manufacturers typically add a fixed offset during analog-to-digital conversion so that the output pixel range becomes 5 (not a fixed value) to 255, then transmit the result to the ISP, which subtracts the offset to bring 5 (not a fixed value) back to 0 so that the effective range of each pixel is again 0 to 255. A minimal correction sketch follows.
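A minimal sketch of the subtraction step described above; the pedestal value 5 follows the text's example and, as the text notes, is not a fixed value in practice.

```python
import numpy as np

def black_level_correct(raw: np.ndarray, pedestal: int = 5) -> np.ndarray:
    """Subtract the sensor's black-level offset and clip back to the 8-bit range."""
    corrected = raw.astype(np.int16) - pedestal
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Usage: a pixel that read 5 with no light maps back to 0.
print(black_level_correct(np.array([[5, 200]], dtype=np.uint8)))  # [[  0 195]]
```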
Lens shading correction 403 (lens shading correction, LSC) is used to eliminate the problem of color around the image and the inconsistency of luminance values with the center of the image due to the lens optics.
Noise reduction 404 (NR) may refer to Raw domain noise reduction. The Raw domain noise reduction is used to reduce noise in the image. Noise present in the image can affect the visual experience of the user, and the image quality of the image can be improved to some extent by noise reduction.
Automatic white balance 405 (auto white balance, AWB) is used to enable the camera to restore white objects to white at any color temperature. Under the influence of color temperature, white paper tends to look yellowish at low color temperatures and bluish at high color temperatures. The purpose of white balance is to make a white object satisfy R = G = B at any color temperature so that it appears white.
Color interpolation 406 (demosaic) is used so that each pixel contains all three RGB components simultaneously.
The color correction matrix 407 (color correction matrix, CCM) is used to calibrate the accuracy of colors other than white.
Global tone mapping 408 (global tone mapping, GTM) is used to solve the problem of uneven gray value distribution in high-dynamic-range images.
The gamma process 409 (gamma) is used to adjust the brightness value, contrast, dynamic range, etc. of the image by adjusting the gamma curve.
RGB→YUV 410 is used to convert an image in the RGB color space into an image in the YUV color space.
Color noise reduction 411 (noise reduction in Chroma, NR Chroma) is used to denoise the hue and saturation (UV) components of the YUV color space image.
Luminance denoising 412 (noise reduction in Luma, NR Luma) is used to denoise the luminance (Y) component of the YUV color space image.
The first process 413 is used to adjust the width of the YUV image. For example, if the image in the YUV color space does not satisfy the 128-byte alignment constraint, i.e., its width is not an integer multiple of 128, the image width may be adjusted, for example by appending zeros at the end of the image values. A minimal padding sketch follows.
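A minimal sketch of this width alignment, assuming a single 8-bit plane padded row by row; the real pipeline's memory layout is not specified in the patent.

```python
import numpy as np

def align_width(plane: np.ndarray, alignment: int = 128) -> np.ndarray:
    """Pad a (height, width) plane with zeros so that width % alignment == 0."""
    height, width = plane.shape
    pad = (-width) % alignment
    if pad == 0:
        return plane
    return np.pad(plane, ((0, 0), (0, pad)), mode="constant", constant_values=0)

# Usage: a 640-wide plane is already aligned; a 700-wide one is padded to 768.
print(align_width(np.ones((4, 700), dtype=np.uint8)).shape)  # (4, 768)
```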
YUV→RGB 414 is used to convert the image in the YUV color space into an image in the RGB color space; different conversion matrices may be selected according to the photographing mode. For example, because the HDR mode is 10-bit and the SDR mode is 8-bit, the electronic device uses a first matrix for conversion in the HDR mode and a second matrix for conversion in the SDR mode.
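The patent only states that the HDR and SDR modes use different matrices; as an assumption, the sketch below pairs SDR with BT.709 coefficients and HDR with BT.2020 coefficients:

```python
import numpy as np

YUV2RGB_SDR = np.array([[1.0,  0.0,      1.5748],    # BT.709 (assumed for SDR)
                        [1.0, -0.18732, -0.46812],
                        [1.0,  1.8556,   0.0]])
YUV2RGB_HDR = np.array([[1.0,  0.0,      1.4746],    # BT.2020 (assumed for HDR)
                        [1.0, -0.16455, -0.57135],
                        [1.0,  1.8814,   0.0]])

def yuv_to_rgb(yuv: np.ndarray, hdr_mode: bool) -> np.ndarray:
    """yuv: H x W x 3, with Y in [0, 1] and U, V in [-0.5, 0.5] (full range)."""
    matrix = YUV2RGB_HDR if hdr_mode else YUV2RGB_SDR
    return np.clip(yuv @ matrix.T, 0.0, 1.0)
```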
The fused three-dimensional lookup table 415 is used to fuse the third conversion relationship according to the environmental information; that is, the second conversion relationship is obtained from the third conversion relationship based on the environmental information, and the fusion may be linear, nonlinear, two-dimensional, three-dimensional or multidimensional. The environmental information may include a luminance value (LV), a correlated color temperature (correlated color temperature, CCT), a light ratio (LR), and the like.
In one possible implementation, the environmental information is the luminance value and the correlated color temperature, the fusion is linear, and the luminance value and the correlated color temperature each have 2 levels; the second conversion relationship is then divided into three cases (a sketch of the three cases is given after the list below):
Let x be the luminance value and y the correlated color temperature; x0 and x1 are two specific luminance values with 0 < x0 < x1; y0 and y1 are two specific correlated color temperatures with 0 < y0 < y1; LUT00, LUT01, LUT10 and LUT11 are four different third conversion relationships, i.e., four different three-dimensional lookup tables.
First, when the incoming luminance value and correlated color temperature fall within the coverage area of a single predetermined LUT table, such as the point 1 position in fig. 9 (i.e., the first threshold range is 0 < x < x0, 0 < y < y0), the predetermined LUT table is used directly, that is, LUT(x,y) = LUT00;
Second, when the incoming luminance value and correlated color temperature fall between two predetermined LUT tables, such as the point 2 position in fig. 10 (i.e., the second threshold range is x0 < x < x1, 0 < y < y0), linear interpolation is performed using the two corresponding LUT tables, specifically: LUT(x,y) = (x1 − x)/(x1 − x0) × LUT00 + (x − x0)/(x1 − x0) × LUT10;
Third, when the incoming luminance value and correlated color temperature fall between four predetermined LUT tables, such as the point 3 position in fig. 11 (i.e., the third threshold range is x0 < x < x1, y0 < y < y1), bilinear interpolation is performed using the four corresponding LUT tables, specifically: LUT(x,y) = [(x1 − x)(y1 − y) × LUT00 + (x − x0)(y1 − y) × LUT10 + (x1 − x)(y − y0) × LUT01 + (x − x0)(y − y0) × LUT11] / [(x1 − x0)(y1 − y0)].
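Illustratively, the three cases can be sketched as follows; the linear and bilinear interpolation formulas are the standard forms implied by the tables named in claims 6 to 8, and the function name fuse_luts is illustrative:

```python
import numpy as np

def fuse_luts(x, y, x0, x1, y0, y1, lut00, lut10, lut01, lut11):
    """Return the second conversion relation LUT(x, y) for luminance value x and
    correlated color temperature y. Each lut is an N x N x N x 3 array; (x, y) is
    assumed to lie in one of the three threshold ranges of figs. 9 to 11."""
    if x <= x0 and y <= y0:                        # case 1: one table covers (x, y)
        return lut00
    if x < x1 and y <= y0:                         # case 2: linear along x
        t = (x - x0) / (x1 - x0)
        return (1 - t) * lut00 + t * lut10
    t = (x - x0) / (x1 - x0)                       # case 3: bilinear in x and y
    s = (y - y0) / (y1 - y0)
    return ((1 - t) * (1 - s) * lut00 + t * (1 - s) * lut10
            + (1 - t) * s * lut01 + t * s * lut11)
```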
For example, when a person is illuminated by indoor fluorescent light, the ambient color temperature is low; the RGB values of pixel point A in the original image are (R=223, G=166, B=124), and after the image is processed using the three-dimensional lookup table fused for the low correlated color temperature, the RGB values of the point B corresponding to point A are (R=233, G=187, B=155).
When the person goes from indoors to outdoors, the ambient color temperature increases under sunlight, and the pixel at the position of pixel point A in the original image becomes pixel point A', whose position in the three-dimensional space is shown in fig. 12 (a), with RGB values (R=224, G=192, B=187). If the image is still processed with the original three-dimensional lookup table fused for the low correlated color temperature, the position of the point B' corresponding to point A' in the RGB three-dimensional space is as shown in fig. 12 (b), with RGB values (R=224, G=192, B=187). If instead the image processing method of the embodiment of the present application is used, so that the image is processed with the three-dimensional lookup table fused for the now higher correlated color temperature, the position of the point C corresponding to point A' in the RGB three-dimensional space is as shown in fig. 12 (c), with RGB values (R=240, G=193, B=172).
Fig. 13 (a) is a schematic diagram of the three pixel points A', B' and C in the same RGB three-dimensional space. It is easy to see that when the third conversion relationships are fused according to different correlated color temperatures at shooting time to obtain different second conversion relationships, processing the image based on these different second conversion relationships yields different image results.
Fig. 13 (b), 13 (c) and 13 (d) are the actual effect images corresponding to the three pixels A', B' and C in fig. 13 (a): fig. 13 (b) is the image captured at the higher color temperature, fig. 13 (c) is that image processed with the three-dimensional lookup table fused for the lower color temperature, and fig. 13 (d) is that image processed with the three-dimensional lookup table fused for the higher color temperature. The effects of fig. 13 (c) and 13 (d) differ: compared with fig. 13 (b), the image of fig. 13 (c) is brighter overall, while the image of fig. 13 (d) is more natural and gives a better effect.
The interpolation conversion 416 is used to perform color correction processing on the input image according to the fused second conversion relationship.
Exemplary interpolation algorithms include linear interpolation, bilinear interpolation, trilinear interpolation, tetrahedral interpolation, and the like.
Wherein the tetrahedral interpolation algorithm may comprise the following steps (see the sketch after step four):
step one: constructing a three-dimensional color space according to the second conversion relationship, and uniformly dividing the three-dimensional color space to obtain a plurality of cubes; for example, each dimension of the three-dimensional space may be uniformly divided into 32 parts, obtaining 32 × 32 × 32 cubes;
step two: acquiring a pixel value of a pixel point in an image of an RGB color space, and determining a nearest neighbor cubic block of the pixel point in the three-dimensional color space according to the pixel value;
step three: determining 4 points nearest to the pixel value among eight vertices of the cube;
step four: carrying out weighted average processing on the pixel values of the 4 points to obtain a pixel value mapped by the pixel point; and traversing each pixel point in the image in turn to perform the tetrahedron interpolation algorithm processing, so as to obtain the target image.
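Illustratively, a minimal sketch of steps one to four is given below; the patent does not specify the weighting scheme, so inverse-distance weights over the 4 nearest vertices are an assumption, and apply_3d_lut is an illustrative name:

```python
import numpy as np

def apply_3d_lut(rgb: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 floats in [0, 1]; lut: 33 x 33 x 33 x 3 node grid
    (each dimension divided into 32 parts, hence 33 nodes per axis)."""
    n = lut.shape[0] - 1                               # 32 cells per dimension
    flat = rgb.reshape(-1, 3)
    res = np.empty_like(flat)
    offsets = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
    for i, p in enumerate(flat):
        cell = np.minimum((p * n).astype(int), n - 1)  # step two: nearest cube
        corners = cell + offsets                       # its eight vertices
        dist = np.linalg.norm(corners / n - p, axis=1)
        nearest = np.argsort(dist)[:4]                 # step three: 4 closest vertices
        weights = 1.0 / (dist[nearest] + 1e-6)         # step four: weighted average
        weights /= weights.sum()
        verts = lut[corners[nearest, 0], corners[nearest, 1], corners[nearest, 2]]
        res[i] = weights @ verts
    return res.reshape(rgb.shape)
```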
RGB→YUV 417 converts the image in the RGB color space into an image in the YUV color space.
The second process 418 is used to restore the output image to the same format as the input image when the input image did not satisfy the 128-byte alignment constraint.
Edge enhancement 419 is used to highlight, strengthen, and improve the boundaries and contours between different gray scale regions in the image.
Contrast 420 is used to adjust the contrast of an excessively dark or bright image to make the image more vivid.
The formatted output 421 is used to output images of different formats.
The direct memory access 422 is used to enable interaction of hardware devices at different speeds.
Illustratively, the image acquired by the camera 130 is input to the dead pixel correction module to correct defects in the array formed by the light-capturing points on the sensor or errors produced during conversion of the light signal. The image output by the dead pixel correction module is input into the black level correction module, which adjusts the effective range of each pixel to 0 to 255. The image output by the black level correction module is input into the lens shading correction module, which adjusts the color and luminance of the periphery of the image to be consistent with its center. The image output by the lens shading correction module is input into the noise reduction module, which reduces noise in the image. The image output by the noise reduction module passes through the automatic white balance module, which adjusts the RGB values of white objects influenced by the color temperature to R=G=B, so that they appear white. The image output by the automatic white balance module is input into the color interpolation module, so that each pixel contains all of the R, G and B components. The image output by the color interpolation module is input into the color correction matrix module to correct the accuracy of colors other than white. The image output by the color correction matrix module is input into the global tone mapping module, which solves the problem of uneven gray value distribution in high-dynamic images. The image output by the global tone mapping module is input into the gamma processing module, which adjusts the brightness, contrast and dynamic range of the image. The image output by the gamma processing module is input into the RGB→YUV module, which converts the image from the RGB color space into the YUV color space. The UV image output by the RGB→YUV module is input into the color noise reduction module, which performs hue and saturation noise reduction. The Y image output by the RGB→YUV module is input into the luminance noise reduction module, which reduces noise in the luminance channel. The UV image output by the color noise reduction module and the Y image output by the luminance noise reduction module are input into the first processing module, which adjusts the width of the YUV image to an integer multiple of 128. The image output by the first processing module is input into the YUV→RGB module, which converts the image from the YUV color space into the RGB color space.
The electronic device inputs the environmental information into the fused three-dimensional lookup table module, which fuses the environmental information with the third conversion relationship to obtain the second conversion relationship. The image output by the YUV→RGB module and the second conversion relationship output by the fused three-dimensional lookup table module are input into the interpolation conversion module, which performs color correction on the image by tetrahedral interpolation according to the second conversion relationship. The image output by the interpolation conversion module is input into the RGB→YUV module, which converts the image from the RGB color space into the YUV color space. The image output by the RGB→YUV module is input into the second processing module, which restores the width of the YUV image to the width of the acquired image.
The Y image output by the luminance noise reduction module is input into the edge enhancement module, which highlights, strengthens and improves the boundaries and contours between different gray-scale regions in the image. The image output by the edge enhancement module is input into the contrast module, which adjusts the contrast of an excessively dark or excessively bright image. The image output by the second processing module and the image output by the contrast module are input into the formatted output module, which outputs the target image.
The modules illustrated in fig. 8 do not constitute a specific limitation on the image processing method of the present application; the image processing modules illustrated in fig. 8 are merely functional modules, and the image processing method of the present application may include more or fewer modules than illustrated, may combine some modules, may split some modules, or may arrange the modules differently. The illustrated modules may be implemented in hardware, software, or a combination of software and hardware.
Preferably, the image processing method of the present application includes all of modules 401 to 422, so that a better image processing effect can be obtained.
The processor 110 includes a central processing unit, a graphics processor, an image signal processor, a digital signal processor, etc. Modules 401 to 412 are executed by the image signal processor, and modules 413 to 418 are executed by the graphics processor, digital signal processor or central processing unit called by the camera algorithm library. Illustratively, all of the modules may also be executed by the image signal processor, or by the graphics processor, digital signal processor or central processing unit.
In one example, as shown in fig. 8, the first color space may refer to a YUV color space; the second color space may refer to an RGB color space.
Illustratively, as shown in fig. 8, processing the target area of the third image to obtain the fourth image may refer to processing the image of the RGB color space through a tetrahedral interpolation algorithm in the interpolation conversion 416 to obtain the mapped image of the RGB color space.
It should be understood that the target area of the third image may be the entire area of the third image, or may be a partial area of the third image, such as a face area, a building area, a plant area, an animal area, a special graphic area, and the like.
It should be understood that the YUV color space and the RGB color space are used here only as examples; the first color space and the second color space may refer to any different color spaces, and the first color space and the second color space are not limited in any way.
In one possible implementation, the electronic device may select an appropriate first conversion relationship based on the environmental information; for example, when the correlated color temperature and the luminance value are higher, a first conversion relationship that biases the image toward a warmer hue with higher saturation may be selected. The hue and saturation of the image are then processed based on the environmental information and the first conversion relationship.
In one possible implementation, the electronic device may directly adjust the display effect of the image based on the environmental information; for example, when the luminance value is higher, the saturation of the image is made higher.
Fig. 14 (a) and 14 (b) are schematic interaction diagrams of the image processing method provided in the present embodiment. The method 500 includes steps S501 to S516, which are described in detail below.
Step S501, the camera application program sends a start instruction.
In one example, an electronic device detects an operation of a user clicking on a camera application, and starts a camera in response to the operation of the user; after the camera application is running, the camera application may send a start instruction for instructing the camera to capture an image.
It will be appreciated that the above steps may be performed in the photographing mode of the camera, and may also be performed in other modes such as a portrait mode or a video recording mode.
Illustratively, as shown in fig. 2, the start instruction issued by the user may be transmitted to the hardware layer 250 through the application layer 210, the application framework layer 220, the hardware abstraction layer 230 and the driver layer 240; after the sensor receives the start instruction, the real-time image acquired by the camera is obtained.
Step S502, after the image signal processor acquires the first image, it parses the first image to obtain a parsing result and environmental information, and adjusts the hue and saturation of the first image based on the first conversion relationship to obtain a preview image.
For example, the image signal processor may acquire the first image based on the camera after detecting a user click to start the camera application.
The first image may be, for example, an image obtained by the image signal processor executing blocks 401 to 412 in fig. 8 on an image acquired by the camera.
For example, the image signal processor may analyze the first image to determine, for instance, whether the image contains a portrait, a building, or a specific landscape (e.g., sunrise, sunset, full moon, forest, buildings), and, for a portrait, the sex, age group (young children, youth, middle-aged, elderly, etc.) and living area (Asian area, European area, African area, Middle-East area, etc.) of the person in the image.
Illustratively, the image signal processor also analyzes the image to obtain the environmental information at the time of photographing.
Illustratively, blocks 401 to 412 and 419 to 421 of fig. 8 are performed by the image signal processor, blocks 413 and 418 are performed by the camera algorithm library, and blocks 414 to 417 are performed by the graphics processor, digital signal processor or central processing unit.
It should be understood that steps 401-421 in fig. 8 may be all accomplished by the image signal processor or by the camera algorithm library.
For example, the image signal processor adjusting the hue and saturation of the first image based on the first conversion relationship to obtain the preview image may refer to processing the first image using a two-dimensional color lookup table, which adjusts the hue and saturation of the first image to produce the preview image. Using the two-dimensional color lookup table to generate the preview image can, on the one hand, display a similar processing effect without the camera algorithm library calling the graphics processor, which increases processing speed; on the other hand, it reduces the power consumption of the electronic device, saves energy, reduces heating, and improves the user experience.
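Illustratively, since the first conversion relationship acts on two color values (hue and saturation), the preview path can be sketched as a two-dimensional table lookup; the table layout and the name preview_with_2d_lut are assumptions for this sketch:

```python
import colorsys
import numpy as np

def preview_with_2d_lut(rgb: np.ndarray, lut2d: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 floats in [0, 1]; lut2d: N x N x 2 table mapping a
    (hue, saturation) bin to an adjusted (hue, saturation) pair."""
    n = lut2d.shape[0]
    out = np.empty_like(rgb)
    for idx in np.ndindex(rgb.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*rgb[idx])
        h2, s2 = lut2d[min(int(h * n), n - 1), min(int(s * n), n - 1)]
        out[idx] = colorsys.hsv_to_rgb(h2, s2, v)  # brightness (V) is untouched
    return out
```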
Step S503, the image signal processor sends the preview image to the camera application.
The image signal processor sends the preview image to the camera application program through the camera device driver, the camera hardware abstraction layer and the camera access interface. The user can see the preview image adjusted by the first conversion relationship in the preview interface of the camera application program; because this image has an effect similar to that of an image adjusted by the second conversion relationship, it is convenient for the user to adjust the photographing angle, position in the environment, and so on.
Step S504, the camera application program sends a shooting instruction to the image signal processor.
For example, the electronic device detects an operation of clicking a photographing control by a user, and in response to the operation of the user, the camera application program transmits a photographing instruction to the image signal processor.
Step S505, the image signal processor sends the first image, the analysis result and the environmental information to the camera hardware abstraction layer.
It should be appreciated that the camera hardware abstraction layer is located at a hardware abstraction layer, which includes the camera hardware abstraction layer and the abstraction layer of other hardware devices; the hardware abstraction layer is an interface layer between the operating system kernel and the hardware circuitry for abstracting the hardware, and can be seen in the system architecture shown in fig. 2.
Illustratively, the image signal processor transmits the first image, the parsing result, and the environment information to the hardware abstraction layer 230 through the driving layer 240.
Step S506, the camera hardware abstraction layer determines the identification of the third conversion relationship according to the parsing result.
For example, the third conversion relationship may refer to a pre-stored three-dimensional lookup table, and the camera hardware abstraction layer selects one or more three-dimensional lookup tables according to the analysis result and obtains the corresponding identification thereof.
Step S507, the camera hardware abstraction layer sends the first image, the environmental information and the identification of the third conversion relationship to the camera algorithm library.
It is understood that the first image, the environmental information, and the identification of the third conversion relationship may be transmitted simultaneously or may be transmitted separately.
Step S508, the camera algorithm library performs first processing on the first image to obtain a third image, and determines a third conversion relation according to the identification of the third conversion relation.
It should be appreciated that the camera algorithm library may include algorithm instructions for camera algorithms, image algorithms, etc., and perform part of the image processing steps.
For example, the camera algorithm library may align a first image whose width does not satisfy the 128-byte alignment, for example by padding zeros at the end of each image row.
Step S509, the camera algorithm library sends the third image, the environmental information, and the third conversion relation to the graphic processor.
In step S510, the graphics processor obtains a second conversion relationship based on the environmental information and the third conversion relationship, and converts the third image into a fourth image based on the second conversion relationship.
The graphics processor may fuse the third conversion relationship based on the environmental information to obtain the second conversion relationship.
Illustratively, the second transformation relationship includes a linear relationship, a nonlinear relationship, a three-dimensional relationship, and the like. In one example, the graphics processor may perform image processing on the third image according to the second conversion relationship and the interpolation algorithm to obtain a fourth image.
Step S511, the graphics processor sends the fourth image to the camera algorithm library.
Step S512, the camera algorithm library performs the second processing on the fourth image to obtain a fifth image.
For example, when the width of the first image did not satisfy the 128-byte alignment, the camera algorithm library may adjust the width of the fourth image back to that of the first image to obtain the fifth image, restoring the original image width.
Step S513, the camera algorithm library sends the fifth image to the camera hardware abstraction layer.
Step S514, the camera hardware abstraction layer sends the fifth image to the image signal processor.
Step S515, the image signal processor processes the fifth image and the adjusted first image to obtain a target image.
Illustratively, the image signal processor receives the fifth image adjusted by the second conversion relationship and processes it together with the first image adjusted by the edge enhancement 419 and the contrast 420 shown in fig. 8 to obtain the target image.
Step S516, the image signal processor sends the target image to the camera application program.
For example, after the camera application receives the target image, the target image is displayed on a display interface of the camera.
In one possible implementation, the camera application program sends the camera usage to the camera hardware abstraction layer through the camera access interface; the camera hardware abstraction layer determines the identification of a pre-stored three-dimensional lookup table according to the camera usage, for example selecting shooting template 1 when the front camera is used, shooting template 2 when the rear camera is used, shooting template 3 when the telephoto lens is used, and so on.
It should be appreciated that the camera usage described above may include whether a camera is used, which camera or cameras are used, whether a front or rear camera is used, whether a periscope camera is used, whether a telephoto camera is used, and the like.
In one possible implementation, the sensor may directly acquire the environmental information at the time of photographing, and this environmental information, instead of that obtained by the image signal processor parsing the first image, is sent to the graphics processor through the camera hardware abstraction layer and the camera algorithm library. The environmental information may be acquired by one sensor or by a plurality of sensors. For example, the image signal processor may not analyze the image to obtain the environmental information, and the graphics processor may process the image only according to the environmental information obtained by the sensor.
In one possible implementation, the image signal processor may send the first image to the camera algorithm library through the camera hardware abstraction layer, and the camera algorithm library directly performs the first processing on the first image to obtain the third image.
Fig. 15 is another schematic interaction diagram of the image processing method provided in the present embodiment. The method 600 includes steps S601 to S614, which are described in detail below.
Step S601, the camera application program transmits a start instruction.
In step S602, the camera application program sends the identification of the third conversion relationship to the camera hardware abstraction layer.
Illustratively, the user selects a shooting template in the shooting interface; the shooting template corresponds to the third conversion relationship, and the camera application program sends the identification of the third conversion relationship corresponding to the shooting template to the camera hardware abstraction layer through the camera access interface.
Step S603, after the image signal processor acquires the first image, the first image is parsed to obtain environmental information.
Step S604, the image signal processor sends the first image and the environmental information to the camera hardware abstraction layer.
Step S605, the camera hardware abstraction layer sends the first image, the environmental information and the identification of the third conversion relation to the camera algorithm library.
Step S606, the camera algorithm library performs first processing on the first image to obtain a third image, and determines a third conversion relation according to the identification of the third conversion relation.
Step S607, the camera algorithm library sends the third image, the environmental information, and the third conversion relation to the graphic processor.
In step S608, the graphics processor determines a second conversion relationship according to the environmental information and the third conversion relationship, and converts the third image into a fourth image according to the second conversion relationship.
Step S609, the graphics processor sends the fourth image to the camera algorithm library.
Step S610, the camera algorithm library performs the second processing on the fourth image to obtain a fifth image.
Step S611, the camera algorithm library sends the fifth image to the camera hardware abstraction layer.
Step S612, the camera hardware abstraction layer sends the fifth image to the image signal processor.
Step S613, the image signal processor processes the fifth image and the adjusted first image to obtain a target image.
Step S614, the image signal processor transmits the target image to the camera application.
It should be appreciated that the above description takes as an example the case where the processing of the image according to the second conversion relationship is performed in the GPU; the processing of the image according to the second conversion relationship may also be performed in a DSP or a CPU, or in another target processor, where a target processor refers to a processor for image processing that supports parallel computation and is independent of the image signal processing (ISP) chip.
Example two
In one example, a user may select a captured first image in the gallery and choose a shooting template; the electronic device sends the captured first image to the image signal processor and sends the identification of the third conversion relationship corresponding to the shooting template to the camera hardware abstraction layer; the image signal processor parses the first image to obtain environmental information and sends the first image and the environmental information to the camera hardware abstraction layer; the camera hardware abstraction layer sends the first image, the environmental information and the identification of the third conversion relationship to the camera algorithm library; the camera algorithm library performs the first processing on the first image to obtain a third image, determines the third conversion relationship according to its identification, and sends the third image, the environmental information and the third conversion relationship to the graphics processor; the graphics processor determines the second conversion relationship according to the environmental information and the third conversion relationship, converts the third image into a fourth image according to the second conversion relationship, and sends the fourth image to the camera algorithm library; the camera algorithm library performs the second processing on the fourth image to obtain a fifth image and sends the fifth image to the image signal processor through the camera hardware abstraction layer; the image signal processor processes the fifth image and the adjusted first image to obtain the target image.
Illustratively, fig. 16 (a) shows a schematic view of the desktop 301 displayed by the electronic device. After the electronic device detects that the user clicks the icon 313 of the gallery application on the desktop 301, the gallery application may be started and a picture selected in it, after which another GUI as shown in fig. 16 (b) is displayed. The display interface shown in fig. 16 (b) includes a viewfinder 314 in which the picture is displayed, and an editing option 315; when the electronic device detects the operation of the user clicking the editing option 315, it displays the editing interface in response, as shown in fig. 16 (c). The interface of fig. 16 (c) includes an LUT option 317 as well as a cropping option, an adjustment option, a more option and other options; after the electronic device detects the operation of the user clicking the LUT option 317, the interface shown in fig. 16 (d) is displayed. The interface of fig. 16 (d) displays a shooting template selection frame 318 with a plurality of different filter effects; the shooting template selection frame 318 may include template 1, template 2, template 3, template 4, template 5, and so on. After the electronic device detects that the user clicks template 2 in the shooting template selection frame 318, the electronic device performs image processing on picture 1 according to template 2 (the processing flow may refer to the flowchart shown in fig. 8) and displays the interface shown in fig. 16 (e). The image processed by template 2 is displayed in the interface shown in fig. 16 (e), which also includes a save control 319; after detecting the operation of the user clicking the save control 319, the electronic device saves the image processed by template 2 in the gallery, as shown in fig. 16 (f).
It should be understood that in the second embodiment, an image processing instruction is triggered by the gallery application; the image processing instruction calls the three-dimensional lookup table algorithm in the camera algorithm library, and the camera algorithm library sends the image and the second conversion relationship to the GPU or the DSP for processing to obtain the target image. As described above, the processing according to the second conversion relationship may likewise be performed in a CPU or another target processor. In addition, since the image has already been captured, the image and the environmental information need not be acquired through the sensor; the captured image and the environmental information stored at the time of capture can be called directly from the gallery.
It will be appreciated that the environmental information at this time may also be derived from analysis of the captured image by an image signal processor or a library of camera algorithms.
As shown in fig. 17, the present application further provides a chip system applied to the electronic device 100, where the chip system includes one or more processors 110, and the processors 110 are configured to invoke computer instructions to cause the electronic device 100 to execute the image processing method according to any of the method embodiments of the present application.
In one possible implementation, the chip system further includes an input and output interface for inputting and outputting picture data.
The present application also provides a computer program product which, when executed by the processor 110, implements the image processing method according to any one of the method embodiments of the present application.
The computer program product may be stored in the internal memory 140 or in an external memory, and the computer program product may be subjected to preprocessing, compiling, assembling, linking, etc. to be converted into an executable object file that can be executed by the processor 110.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, implements the image processing method according to any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
The image processing apparatus provided by the application comprises a display unit and a processing unit. The display unit comprises the display screen 120 and the camera 130 and is used to display the preview interface, which also displays a first control for shooting. The processing unit comprises the processor 110 and is configured to receive a first operation of the user on the camera application program and, in response to the first operation, start the camera to acquire a first image. The processing unit is further configured to obtain a second image by adjusting the hue and saturation of the first image based on a first conversion relationship acting on any two color values in the RGB color space. The processing unit may further obtain environmental information including at least a luminance value and a correlated color temperature and, in response to a second operation of the user on the first control, store a target image obtained after the hue and saturation of the first image are adjusted based on a second conversion relationship acting on three color values in the RGB color space according to the environmental information.
The image processing apparatus described above is embodied in the form of functional units. The term "unit" here may be implemented in the form of software and/or hardware, which is not specifically limited.
For example, a "unit" may be a software program, a hardware circuit or a combination of both that implements the functions described above. The hardware circuitry may include application specific integrated circuits (application specific integrated circuit, ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes and technical effects of the apparatus and device described above may refer to corresponding processes and technical effects in the foregoing method embodiments, which are not described in detail herein. In several embodiments provided by the present application, the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, some features of the method embodiments described above may be omitted, or not performed. The above-described apparatus embodiments are illustrative, the division of units is a logic function division, there may be additional divisions in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between the elements or the coupling between the elements may be direct or indirect, including electrical, mechanical, or other forms of connection.
It should be understood that, in the various embodiments of the present application, the size of the sequence numbers of the processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In summary, the above embodiments are preferred embodiments of the present application, and are not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (18)

1. An image processing method, characterized by being applied to an electronic device, comprising:
receiving a first operation of a user on a camera application program, starting a camera in response to the first operation, and acquiring a first image based on the camera;
responding to the first operation to display a preview interface, wherein the preview interface comprises a second image and a first control, the second image is an image obtained after the hue and saturation of the first image are adjusted based on a first conversion relation, the first conversion relation is acted on any two color values in an RGB color space, and the first control is a shooting control;
Acquiring environment information, wherein the environment information at least comprises a brightness value and a correlated color temperature;
and receiving a second operation of the first control by a user, and responding to the second operation to save a target image, wherein the target image is an image obtained after the hue and saturation of the first image are adjusted based on a second conversion relation, and the second conversion relation is acted on three color values in an RGB color space according to the environment information.
2. The image processing method according to claim 1, wherein the electronic device includes a plurality of conversion relationships stored in advance, the method further comprising, after receiving the second operation:
determining a third conversion relation from the plurality of conversion relations based on the first image, the third conversion relation being applied to three color values in an RGB color space;
the second conversion relationship is determined based on the environment information and the third conversion relationship.
3. The image processing method according to claim 2, characterized in that the method further comprises:
and judging that the first image comprises a portrait, identifying characteristic information of the portrait, and determining the third conversion relation from the conversion relations, wherein the characteristic information comprises one or more of gender, age and life domain.
4. The image processing method according to claim 2, wherein the camera includes a front camera and a rear camera, the method further comprising:
judging whether the started camera is the front camera or the rear camera, and determining the third conversion relation from the conversion relations.
5. The image processing method according to claim 2, characterized in that the method further comprises:
the preview interface comprises a plurality of second controls, each second control indicates a shooting template, and each shooting template corresponds to one conversion relation in the plurality of conversion relations;
receiving a third operation of the second control by a user;
and responding to the third operation to determine the third conversion relation corresponding to the shooting template indicated by the second control.
6. The image processing method according to any one of claims 2 to 5, characterized in that the method further comprises:
judging that the brightness value x and the correlated color temperature y meet a first threshold range, and determining the second conversion relation as follows:
LUT(x,y)=LUT00,
LUT00 represents the third conversion relation, and LUT (x, y) represents the second conversion relation.
7. The image processing method according to any one of claims 2 to 6, characterized in that the method further comprises:
Judging that the brightness value x and the correlated color temperature y meet a second threshold range, and determining the second conversion relation as follows:
LUT(x,y) = (x1−x)/(x1−x0) × LUT00 + (x−x0)/(x1−x0) × LUT10,
wherein LUT00 and LUT10 represent the third conversion relationship, and LUT(x,y) represents the second conversion relationship.
8. The image processing method according to any one of claims 2 to 7, characterized in that the method further comprises:
judging that the brightness value x and the correlated color temperature y meet a third threshold range, and determining the second conversion relation as follows:
LUT(x,y) = [(x1−x)(y1−y) × LUT00 + (x−x0)(y1−y) × LUT10 + (x1−x)(y−y0) × LUT01 + (x−x0)(y−y0) × LUT11] / [(x1−x0)(y1−y0)],
wherein LUT00, LUT10, LUT01 and LUT11 represent the third conversion relationship, and LUT(x,y) represents the second conversion relationship.
9. The image processing method according to any one of claims 2 to 8, characterized in that after determining the second conversion relation, the method further comprises:
converting the first image into a third image, wherein the first image is an image of a first color space and the third image is an image of a second color space;
obtaining a fourth image after the target area of the third image is adjusted based on the second conversion relation;
and converting the fourth image into the target image, wherein the fourth image is an image of the second color space, and the target image is an image of the first color space.
10. The image processing method according to claim 9, characterized in that the method further comprises:
judging that the first image comprises a portrait, and identifying a face area of the portrait;
and determining the target area of the third image as the face area.
11. The image processing method according to any one of claims 1 to 10, characterized in that the method further comprises: and analyzing the first image to obtain the environment information.
12. The image processing method according to any one of claims 1 to 10, characterized in that the method further comprises: and receiving the environmental information acquired by the sensor.
13. A chip system for application to an electronic device, the chip system comprising a processor for invoking computer instructions to cause the electronic device to perform the image processing method of any of claims 1-12.
14. An electronic device, the electronic device comprising: the device comprises a processor, a memory, a camera and a display screen; the display screen is used for displaying the preview interface; the memory is for storing computer program code comprising computer instructions that the processor invokes to cause the electronic device to perform the image processing method of any one of claims 1 to 12.
15. The electronic device of claim 14, wherein the processor comprises an image signal processor that invokes the computer instructions to cause the electronic device to perform adjusting the hue and saturation of the first image to obtain the second image based on the first conversion relationship.
16. The electronic device of claim 14 or 15, wherein the processor comprises a graphics processor that invokes the computer instructions to cause the electronic device to perform adjusting the hue and saturation of the first image to the target image based on the second conversion relationship.
17. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the image processing method of any one of claims 1 to 12.
18. A computer program product, characterized in that the computer program product comprises computer program code which, when executed by a processor, causes the processor to perform the image processing method of any of claims 1 to 12.
CN202211635006.7A 2022-11-22 2022-12-19 Image processing method and electronic equipment Active CN116668838B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211467250 2022-11-22
CN2022114672507 2022-11-22

Publications (2)

Publication Number Publication Date
CN116668838A true CN116668838A (en) 2023-08-29
CN116668838B CN116668838B (en) 2023-12-05

Family

ID=87726607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211635006.7A Active CN116668838B (en) 2022-11-22 2022-12-19 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116668838B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778190A (en) * 2009-01-08 2010-07-14 华晶科技股份有限公司 Method for regulating skin color of digital image
JP2013115546A (en) * 2011-11-28 2013-06-10 Casio Comput Co Ltd Image processing apparatus and program
US20200374447A1 (en) * 2017-06-30 2020-11-26 Huawei Technologies Co., Ltd. Color Detection Method and Terminal
JP2020088709A (en) * 2018-11-29 2020-06-04 キヤノン株式会社 Image processing apparatus, image processing method and program
CN110808002A (en) * 2019-11-28 2020-02-18 北京迈格威科技有限公司 Screen display compensation method and device and electronic equipment
CN112887582A (en) * 2019-11-29 2021-06-01 深圳市海思半导体有限公司 Image color processing method and device and related equipment
CN113132695A (en) * 2021-04-21 2021-07-16 维沃移动通信有限公司 Lens shadow correction method and device and electronic equipment
CN113727017A (en) * 2021-06-16 2021-11-30 荣耀终端有限公司 Shooting method, graphical interface and related device
CN113810602A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Shooting method and electronic equipment
CN113965694A (en) * 2021-08-12 2022-01-21 荣耀终端有限公司 Video recording method, electronic device and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PAN, Zixi: "Research on 3D LUT technology in photo color grading", Fujian Information Technology Education, no. 02

Also Published As

Publication number Publication date
CN116668838B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN112150399B (en) Image enhancement method based on wide dynamic range and electronic equipment
CN106797453B (en) Image processing apparatus, photographic device, image processing method and image processing program
US20150332636A1 (en) Image display device and method
CN105409211A (en) Automatic white balancing with skin tone correction for image processing
WO2023130922A1 (en) Image processing method and electronic device
US20190251670A1 (en) Electronic device and method for correcting images using external electronic device
EP4175275A1 (en) White balance processing method and electronic device
CN114463191A (en) Image processing method and electronic equipment
CN116668862B (en) Image processing method and electronic equipment
CN116437198B (en) Image processing method and electronic equipment
CN117135471A (en) Image processing method and electronic equipment
CN116668838B (en) Image processing method and electronic equipment
US20230058472A1 (en) Sensor prioritization for composite image capture
CN116258633A (en) Image antireflection method, training method and training device for image antireflection model
CN117395495B (en) Image processing method and electronic equipment
CN116051368B (en) Image processing method and related device
CN116723417B (en) Image processing method and electronic equipment
CN115426458B (en) Light source detection method and related equipment thereof
WO2023160221A1 (en) Image processing method and electronic device
CN115514947B (en) Algorithm for automatic white balance of AI (automatic input/output) and electronic equipment
EP4231621A1 (en) Image processing method and electronic device
CN115988339B (en) Image processing method, electronic device, storage medium, and program product
CN115705663B (en) Image processing method and electronic equipment
CN117135293B (en) Image processing method and electronic device
CN115955611B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant