WO2024103746A1 - Image processing method, electronic device, computer program product, and storage medium - Google Patents


Info

Publication number
WO2024103746A1
WO2024103746A1 (PCT/CN2023/103443)
Authority
WO
WIPO (PCT)
Prior art keywords
color
image
light source
shooting scene
local
Application number
PCT/CN2023/103443
Other languages
French (fr)
Chinese (zh)
Inventor
Yinghao (英豪)
Jie Yang (杨杰)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024103746A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 — Camera processing pipelines; Components thereof
    • H04N 23/82 — Camera processing pipelines; Components thereof for controlling camera response irrespective of the scene brightness, e.g. gamma correction
    • H04N 23/83 — Camera processing pipelines; Components thereof for controlling camera response irrespective of the scene brightness, e.g. gamma correction, specially adapted for colour signals
    • H04N 23/84 — Camera processing pipelines; Components thereof for processing colour signals

Definitions

  • the present application relates to the field of terminals, and in particular to an image processing method, an electronic device, a computer program product, and a computer-readable storage medium.
  • the image captured by the camera can be processed for color restoration.
  • the existing color restoration processing method is generally based on the color correction parameters of several typical light sources that have been calibrated in advance.
  • however, the light source composition of an actual shooting scene is relatively complex, producing varied illumination across the shooting area. With the above processing method it is difficult for the camera to restore color accurately, leaving a large color deviation between the captured photos and the real scene observed by the human eye.
  • a first aspect of an embodiment of the present application discloses a color restoration method, which is applied to an electronic device.
  • the method includes: acquiring a multispectral image and an original imaging image of a shooting scene; performing spectral estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene; determining color restoration parameters based on the spectral power, and performing color restoration on the original imaging image based on the color restoration parameters to obtain a color restored image.
  • a multispectral image of the shooting scene is obtained.
  • the original imaging image is obtained from imaging by a three-primary-color (red, green, blue, RGB) image sensor.
  • compared with the RGB image, the multispectral image has a narrower channel bandwidth, so its imaging spectral resolution is higher.
  • the light source spectrum of the shooting scene can therefore be accurately estimated, and the color restoration parameters corresponding to the shooting scene can then be accurately obtained to correct the color deviation between the photos taken by the camera and the real scene observed by the human eye, so that the overall color perception of the captured image matches human vision, thereby improving the user experience.
  • spectral estimation is performed based on the multispectral image to obtain the spectral power of the light source in the shooting scene, including: performing spectral estimation based on the multispectral image of the shooting scene to obtain the local spectral power of each area in the shooting scene; determining color restoration parameters according to the spectral power, including: determining the local color restoration parameters of each area according to the local spectral power of each area in the shooting scene.
  • spectral estimation is performed based on a multispectral image of a captured scene to obtain local spectral power of each area in the captured scene, including: performing highlight detection on the multispectral image to obtain a highlight area of the multispectral image; and obtaining the local spectral power of each area based on the highlight area.
  • the photographed object may form a highlight area after reflecting the light source. The spectral power of the light source is easier to determine from the highlight area, so the local spectral power of each area can be obtained accurately while reducing the amount of computation.
  • the multispectral image includes an auxiliary accessory for highlight detection, and highlight detection is performed on the multispectral image to obtain a highlight area of the multispectral image, including: performing highlight detection on the auxiliary accessory in the multispectral image, and obtaining the highlight area of the multispectral image based on the position of the auxiliary accessory determined to be a highlight.
  • highlight detection is performed on a multispectral image to obtain a highlight area of the multispectral image, including: counting the brightness of each pixel in the multispectral image, and taking the area where pixels with pixel brightness ranking before a preset position are located as the highlight area of the multispectral image; or detecting the brightness of each pixel in the multispectral image, and taking the area where pixels with pixel brightness greater than a preset brightness threshold are located as the highlight area of the multispectral image.
  • the highlight area of the multispectral image can be accurately extracted by taking, as the highlight area, either the area where pixels whose brightness ranks before the preset position are located, or the area where pixels whose brightness is greater than the preset brightness threshold are located, thereby improving the success rate and accuracy of highlight area detection.
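A minimal sketch of the two detection strategies above, assuming the multispectral image is a NumPy array of shape (H, W, C) and using the summed channel response as pixel brightness. The function name, the brightness measure, and the default top fraction are illustrative assumptions, not taken from the application:

```python
import numpy as np

def detect_highlights(ms_image, top_fraction=0.02, brightness_threshold=None):
    """Locate the highlight area of a multispectral image.

    ms_image: (H, W, C) array, one channel per spectral band.
    If `brightness_threshold` is given, keep pixels brighter than it;
    otherwise keep the brightest `top_fraction` of pixels (the "preset
    position" ranking variant). Returns a boolean (H, W) mask.
    """
    # Per-pixel brightness: sum of responses across all spectral bands.
    brightness = ms_image.sum(axis=-1)
    if brightness_threshold is not None:
        return brightness > brightness_threshold
    # Rank pixels and keep those at or above the preset quantile.
    cutoff = np.quantile(brightness, 1.0 - top_fraction)
    return brightness >= cutoff
```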
  • the local spectral power of each area is obtained, including: performing principal component analysis on the highlight area to obtain a first principal component vector of the highlight area and a second principal component vector of the highlight area; projecting the image data of the highlight area to a plane formed by the first principal component vector of the highlight area and the second principal component vector of the highlight area; determining a linear cluster with a linear distribution based on the distribution of the projected image data in the plane; performing principal component analysis on the linear cluster to obtain a first principal component vector of the linear cluster; and obtaining the local spectral power based on the first principal component vector of the highlight area, the second principal component vector of the highlight area, and the first principal component vector of the linear cluster.
  • the noise in the image data can be eliminated, the calculation overhead of subsequent spectral power analysis can be reduced, and the local spectral power corresponding to the area of different shooting scenes can be generated to improve the accuracy of the local spectral power.
  • local spectral power is obtained based on the first principal component vector of the highlight area, the second principal component vector of the highlight area and the first principal component vector of the linear cluster, including: performing pseudo-inverse operation on the first principal component vector of the highlight area and the second principal component vector of the highlight area to obtain a pseudo-inverse matrix; and obtaining local spectral power based on the pseudo-inverse matrix and the first principal component vector of the linear cluster.
  • the local spectral power corresponding to each local area can be accurately calculated, thereby improving the accuracy of the local spectral power.
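The principal-component-and-pseudo-inverse pipeline described in the bullets above might be sketched as follows, assuming the highlight pixels are gathered into an (N, C) array. The linear-cluster search is replaced here by a crude heuristic (keeping the projected points farthest from the mean), so this is a simplified stand-in rather than the application's actual method:

```python
import numpy as np

def estimate_local_spectral_power(highlight_pixels):
    """Estimate a vector proportional to the light source spectrum
    from one highlight area's (N, C) multispectral responses.

    Under the dichromatic reflection model, highlight data span a plane
    mixing the body colour and the light source; the linear cluster in
    that plane points along the light source direction.
    """
    mean = highlight_pixels.mean(axis=0)
    data = highlight_pixels - mean
    # First and second principal component vectors of the highlight area.
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    basis = vt[:2]                                   # rows: v1, v2, shape (2, C)
    # Project the highlight data onto the plane spanned by v1 and v2.
    coords = data @ basis.T                          # (N, 2)
    # Simplified linear-cluster selection: keep the half of the points
    # farthest from the mean, where specular pixels dominate.
    norm = np.linalg.norm(coords, axis=1)
    cluster = coords[norm >= np.median(norm)]
    # First principal component vector of the linear cluster.
    _, _, ct = np.linalg.svd(cluster - cluster.mean(axis=0),
                             full_matrices=False)
    c1 = ct[0]                                       # (2,)
    # Pseudo-inverse of [v1; v2] maps the 2-D direction back to spectra.
    spd = np.linalg.pinv(basis) @ c1                 # (C,)
    # SVD leaves the sign arbitrary; orient the spectrum "upward".
    return spd * np.sign(spd.sum() or 1.0)
```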
  • the electronic device includes a camera module
  • the local color restoration parameters include a local color correction matrix
  • the local color restoration parameters of each area are determined according to the local spectral power of each area in the shooting scene, including: obtaining the color card reflectance function, the spectral sensitivity function of the camera module, and the standard observer color matching function; obtaining the photosensitive data of the camera module based on the color card reflectance function, the spectral sensitivity function, and the local spectral power; obtaining the standard observer tristimulus values based on the color card reflectance function, the standard observer color matching function, and the local spectral power; and obtaining the local color correction matrix from the photosensitive data of the camera module and the standard observer tristimulus values.
  • in calculating the local color correction matrix, the color card reflectance function, the camera spectral sensitivity function, and the standard observer color matching function are all fitted, which improves the accuracy of the color correction matrix and, in turn, the accuracy of color reproduction.
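One plausible reading of the steps above is a least-squares fit between simulated camera responses and standard-observer tristimulus values. The function name, array shapes, and the unconstrained least-squares choice are all assumptions for illustration:

```python
import numpy as np

def fit_local_ccm(reflectance, ssf, cmf, spd):
    """Fit a local 3x3 color correction matrix by least squares.

    reflectance: (K, W) color-card patch reflectances over W wavelengths.
    ssf:         (W, 3) camera spectral sensitivity function (RGB).
    cmf:         (W, 3) standard-observer color matching function (XYZ).
    spd:         (W,)   local spectral power of the light source.
    """
    radiance = reflectance * spd        # light reflected by each patch
    camera_rgb = radiance @ ssf         # (K, 3) camera photosensitive data
    observer_xyz = radiance @ cmf       # (K, 3) observer tristimulus values
    # Least-squares fit: camera_rgb @ ccm ≈ observer_xyz.
    ccm, *_ = np.linalg.lstsq(camera_rgb, observer_xyz, rcond=None)
    return ccm                          # (3, 3)
```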
  • the local color restoration parameters include a local chromatic adaptation conversion matrix
  • the local color restoration parameters of each area in the shooting scene are determined according to the local spectral power of each area in the shooting scene, including: obtaining a standard observer color matching function and a spectral power of a target light source; obtaining white point tristimulus values of the light source in the shooting scene based on the local spectral power and the standard observer color matching function; obtaining white point tristimulus values of the target light source based on the spectral power of the target light source and the standard observer color matching function; obtaining a local chromatic adaptation conversion matrix according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source.
  • a local chromatic adaptation conversion matrix is fitted based on the standard observer color matching function, the spectral power of the target light source, and the local spectral power, so that during color restoration the tristimulus values under the light source of the shooting scene can be converted to the tristimulus values under the target light source (for example, a D65 light source), thereby improving the accuracy of color restoration.
  • a local chromatic adaptation conversion matrix is obtained according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source, including: obtaining a first response value based on the white point tristimulus values of the light source in the shooting scene and a preset chromatic adaptation model, the first response value being the response value of the human eye to the long wave, medium wave and short wave of the light source in the shooting scene; obtaining a second response value based on the white point tristimulus values of the target light source and the chromatic adaptation model, the second response value being the response value of the human eye to the long wave, medium wave and short wave of the target light source; and obtaining the local chromatic adaptation conversion matrix according to the first response value and the second response value.
  • the tristimulus values are converted into the responses of the human eye to the long, medium, and short wavelengths of the light source in the shooting scene and of the target light source, and the local chromatic adaptation conversion matrix is then calculated from these responses, so that the resulting matrix is better matched to human vision.
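A von Kries-style construction matching the description above: convert both white points to long/medium/short-wavelength (LMS) responses, scale, and convert back. The Bradford XYZ-to-LMS matrix below is a common stand-in for the unspecified "preset chromatic adaptation model"; it is an assumption, not necessarily the model used in the application:

```python
import numpy as np

# Bradford matrix from XYZ to cone-like LMS responses (illustrative
# stand-in for the application's preset chromatic adaptation model).
M_LMS = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def chromatic_adaptation_matrix(wp_scene, wp_target, m=M_LMS):
    """Adaptation matrix between two white points (X, Y, Z each).

    The "first response value" and "second response value" of the text
    correspond to lms_scene and lms_target below.
    """
    lms_scene = m @ wp_scene    # eye's LMS response to the scene light
    lms_target = m @ wp_target  # eye's LMS response to the target light
    gain = np.diag(lms_target / lms_scene)
    # Adapt in LMS space, then return to XYZ.
    return np.linalg.inv(m) @ gain @ m
```

By construction the matrix maps the scene white point exactly onto the target white point.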
  • the color restoration parameters include a color correction matrix and a chromatic adaptation conversion matrix, and performing color restoration on the original imaging image based on the color restoration parameters to obtain a color restored image includes: correcting each pixel in the original imaging image based on the color correction matrix to obtain a color corrected image; and converting the color corrected image based on the chromatic adaptation conversion matrix to obtain the color restored image.
  • the color correction matrix converts the camera's photosensitive data into the tristimulus values of a standard observer, and the chromatic adaptation conversion matrix converts the tristimulus values under the light source of the shooting scene into the tristimulus values under the target light source, thereby achieving color restoration.
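The two-step application of those parameters could look like the following, assuming a linear (H, W, 3) camera image, a row-vector convention for the color correction matrix, and a column-vector convention for the adaptation matrix (all naming and conventions are illustrative):

```python
import numpy as np

def restore_color(raw_rgb, ccm, cat):
    """Apply the two color restoration parameters to an image.

    raw_rgb: (H, W, 3) linear camera responses.
    ccm:     (3, 3) maps camera RGB to standard-observer XYZ
             (row-vector convention: rgb @ ccm).
    cat:     (3, 3) maps scene-light XYZ to target-light XYZ
             (column-vector convention: cat @ xyz).
    """
    # Step 1: color correction — camera responses to tristimulus values.
    xyz = raw_rgb @ ccm
    # Step 2: chromatic adaptation — scene light to target light.
    return xyz @ cat.T
```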
  • spectral estimation is performed based on a multispectral image to obtain the spectral power of the light source in the shooting scene, including: performing spectral estimation based on the multispectral image to obtain the spectral power and light source distribution information of the light source in the shooting scene; performing color restoration on the original imaging image based on color restoration parameters to obtain a color restored image, and further including: determining critical pixels in the color restored image based on the light source distribution information, the critical pixels being located in the light source boundary area; and smoothing the critical pixels to obtain a smoothed image.
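A sketch of the boundary-smoothing step above, assuming the critical pixels arrive as a boolean mask derived from the light source distribution information; the 3×3 box average is an illustrative choice of smoothing filter, not the application's:

```python
import numpy as np

def smooth_critical_pixels(image, boundary_mask):
    """Blend light-source-boundary pixels with their neighbours.

    image:         (H, W, 3) color restored image.
    boundary_mask: boolean (H, W) marking the critical pixels.
    """
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)), mode='edge')
    # 3x3 neighbourhood average computed from shifted views.
    acc = np.zeros_like(image, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred = acc / 9.0
    # Replace only the critical pixels; leave the rest untouched.
    out = image.astype(float).copy()
    out[boundary_mask] = blurred[boundary_mask]
    return out
```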
  • spectral estimation is performed based on a multispectral image to obtain the spectral power of the light source in the shooting scene, including: in response to a request to enter a color fidelity mode, generating a human-computer interaction interface for setting the light source information of the shooting scene; obtaining the light source information of the shooting scene set by the user from the human-computer interaction interface; and performing light source spectrum estimation based on the shooting scene light source information and the multispectral image to obtain the spectral power of the light source in the shooting scene.
  • the spectral power of the light source in the shooting scene can be estimated more accurately.
  • spectral estimation is performed only in color fidelity mode, which reduces the energy consumption of the electronic device and improves the user experience when the user has little need for color restoration of the image.
  • obtaining a multispectral image of a captured scene includes: obtaining a multispectral initial image of the captured scene through multiple multispectral image sensors; and merging the multispectral initial images to obtain a multispectral image of the captured scene.
  • an embodiment of the present application provides a computer-readable storage medium, including computer instructions.
  • when the computer instructions are executed on an electronic device, the electronic device executes the image processing method described in the first aspect.
  • an embodiment of the present application provides an electronic device, which includes a processor and a memory, the memory is used to store instructions, and the processor is used to call the instructions in the memory, so that the electronic device executes the image processing method described in the first aspect.
  • an embodiment of the present application provides a computer program product.
  • when the computer program product is run on an electronic device (such as a computer), the electronic device executes the image processing method described in the first aspect.
  • a device wherein the device has the function of implementing the electronic device behavior in the method provided in the first aspect.
  • the function can be implemented by hardware, or by hardware executing corresponding software.
  • the software includes one or more modules corresponding to the above functions.
  • the computer-readable storage medium described in the second aspect, the electronic device described in the third aspect, the computer program product described in the fourth aspect, and the device described in the fifth aspect all correspond to the method of the first aspect; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding method provided above, which are not repeated here.
  • FIG. 1 is a schematic diagram of color restoration of a captured image based on a color correction matrix according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the structure of an electronic device according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the software structure of an electronic device according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of an application scenario of an image processing method according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the architecture of an electronic device performing color restoration according to an embodiment of the present application;
  • FIG. 6 is a flow chart of color restoration performed by an electronic device according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the architecture of an electronic device performing color restoration according to another embodiment of the present application;
  • FIG. 8 is a schematic diagram of an interface for setting light source information of a shooting scene in color fidelity mode according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an interface for setting a highlight area of an electronic device according to an embodiment of the present application;
  • FIG. 10 is a schematic diagram of a multispectral filter array of a multispectral image sensor according to an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a spectral sensitivity curve of a multispectral image sensor with two staggered peaks according to an embodiment of the present application;
  • FIG. 12 is a schematic diagram of a scenario in which an electronic device performs spectrum estimation according to an embodiment of the present application;
  • FIG. 13 is a schematic diagram of a process of spectrum estimation performed by an electronic device according to an embodiment of the present application;
  • FIG. 14 is a schematic diagram of setting auxiliary accessories in a shooting scene according to an embodiment of the present application;
  • FIG. 15 is a schematic diagram of the structure of a multispectral color temperature sensor according to an embodiment of the present application;
  • FIG. 16 is a flow chart of an image processing method according to an embodiment of the present application.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
  • Spectrum (full name: optical spectrum): dispersed monochromatic light arranged in sequence by wavelength or frequency.
  • a multispectral image is an image that contains many bands; each band is a grayscale image representing the brightness of the scene according to the sensitivity of the sensor used to produce that band.
  • Spectral reflectivity: when a light source shines on the surface of an object, the object selectively reflects electromagnetic waves of different wavelengths. Spectral reflectivity is the ratio of the luminous flux reflected by the object in a certain band to the luminous flux incident on it, and it characterizes an essential property of the object's surface.
  • SSF Spectral Sensitivity Function
  • Color matching function: the quantities of the red, green, and blue primary colors required to match each monochromatic light of the equal-energy spectrum; it is the basic data for color measurement and calculation.
  • Tristimulus values: the amounts of stimulation of the three primary colors that cause the human retina to perceive a certain color, expressed as X (red primary stimulation), Y (green primary stimulation), and Z (blue primary stimulation).
  • CA Chromatic Adaptation
  • Spectral power distribution: the distribution of a light source's radiant power by wavelength, also called spectral power.
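Combining the definitions above, the tristimulus values of a surface under a light source are the wavelength integrals of spectral power × reflectivity × color matching function. A discretised version, assuming uniformly spaced wavelength samples (the function name and shapes are illustrative):

```python
import numpy as np

def tristimulus(spd, reflectance, cmf, wavelengths):
    """Tristimulus values of a surface under a light source.

    X = ∫ S(λ) R(λ) x̄(λ) dλ (likewise Y with ȳ, Z with z̄), here as a
    rectangle-rule sum. spd, reflectance: (W,); cmf: (W, 3) with columns
    x̄, ȳ, z̄; wavelengths: (W,) uniformly spaced sample points in nm.
    """
    dlam = wavelengths[1] - wavelengths[0]   # assumes uniform sampling
    radiance = spd * reflectance             # light reaching the observer
    return (radiance[:, None] * cmf).sum(axis=0) * dlam
```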
  • the CIE 1931 XYZ color space (also called the CIE 1931 color space) is a mathematically defined color space created by the International Commission on Illumination (CIE) in 1931.
  • the photosensitivity of the imaging sensor in the camera is different from that of the human eye. Therefore, when the camera shoots a scene, there is a certain difference between the original color information of the scene obtained by the camera and the color information of the scene directly observed by the human eye. For example, after shooting mahogany-colored solid wood furniture, the color displayed in the image after imaging may turn blood red; after shooting yellow-green curtains, the color displayed in the image after imaging may be reddish. In order to make the color of the final image generated by the camera consistent with the real color observed by the human eye, the original imaging image obtained by the camera can be color restored. The role of color restoration is to convert the color information of the scene captured by the camera into the color information perceived by the human eye.
  • Lighting conditions will affect the color information obtained by the human eye and the camera, but the lighting conditions of the actual shooting scene are uncontrollable.
  • there may be multiple types of light sources such as natural light and fluorescent lights. It is difficult for the camera to accurately restore the color based on the light source of the actual shooting scene, resulting in color deviations between the captured photos and the real scene observed by the human eye.
  • the electronic device can photograph a standard color card with the camera under various typical light sources in advance, to obtain known data such as the spectral reflectance of the standard color card, the camera's spectral sensitivity function, and the standard observer color matching function; the color correction matrices (Color Correction Matrix, CCM) of several typical light sources can be calibrated based on these known data. When the camera shoots an actual scene, the two typical light sources closest to each light source in the actual shooting scene, and the weights of those two typical light sources, can be determined; a weighted sum of the color correction matrices of the two typical light sources is then computed using their weights, and the weighted sum is used as the color correction matrix corresponding to the light source in the shooting scene.
  • for example, if CCM_A accounts for 40% and CCM_D accounts for 60%, then 0.4 × CCM_A + 0.6 × CCM_D can be used as the color correction matrix corresponding to the light source in the shooting scene.
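The weighted-sum scheme of this example amounts to a single interpolated matrix; the CCM values below are placeholders, not calibrated data:

```python
import numpy as np

# Pre-calibrated CCMs for two typical light sources (placeholder values).
CCM_A = np.eye(3) * 1.2   # e.g. illuminant-A calibration
CCM_D = np.eye(3) * 0.9   # e.g. daylight calibration

def interpolated_ccm(w_a=0.4, w_d=0.6):
    """Weighted sum of the two nearest typical-light-source CCMs,
    with the 40%/60% weights of the example above."""
    return w_a * CCM_A + w_d * CCM_D
```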
  • the above method directly uses the linear combination of the color correction matrices of two typical light sources as the color correction matrix corresponding to the light source in the shooting scene, which will result in low accuracy of the color correction matrix and difficulty in accurate color restoration.
  • the above method relies on the calibration of typical light sources. If the number of calibrated typical light sources is too small or the calibration is incorrect, it will be difficult to perform color restoration based on the calibrated light sources.
  • the electronic device may also perform the following steps: S1: taking the sum of the p parameters of each channel as the objective function, calculate the optimal spectral transformation matrix from the camera spectral sensitivity function to the CIE 1931 XYZ color matching functions, subject to the chromatic aberration constraints of an ideal reflective surface under several typical light sources, where the p parameter is defined as the degree of approximation between a pair of sensitivity functions s1(λ) and s2(λ) with respect to the wavelength λ; S2: for the original imaging image to be color corrected, directly apply the spectral transformation matrix obtained in S1 to the original RGB response value of each pixel to convert it to the CIE 1931 XYZ color space, and likewise directly apply the spectral transformation matrix obtained in S1 to the light source color response value estimated by the automatic white balance module to convert it to the CIE 1931 XYZ color space; S3: use the CAT02 chromatic adaptation transform model in the CIECAM02 color appearance model to calculate the CAT02 chromatic adaptation transformation.
  • the above method converts the original RGB response signal of the camera into the device-independent CIE1931 XYZ space by calculating the spectral transformation relationship between the camera spectral sensitivity function and the CIE1931 color matching function, and uses the CAT02 color adaptation transformation model to calculate the corresponding color response value after color adaptation under the reference light source, thereby realizing a color correction process that does not rely on pre-calibrated parameters.
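The core of the spectral transformation relationship described above can be sketched as a least-squares fit from the camera sensitivities to the color matching functions. The full method also optimises a p-parameter objective under chromatic aberration constraints; the unconstrained fit below is only a simplified illustration:

```python
import numpy as np

def fit_spectral_transform(ssf, cmf):
    """Least-squares spectral transformation matrix T with ssf @ T ≈ cmf.

    ssf: (W, 3) camera spectral sensitivity function.
    cmf: (W, 3) CIE 1931 XYZ color matching functions.
    Returns a (3, 3) matrix applied to each pixel's RGB response to
    convert it to the CIE 1931 XYZ color space.
    """
    t, *_ = np.linalg.lstsq(ssf, cmf, rcond=None)
    return t
```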
  • however, this method considers only the light source spectrum function and the object spectral reflectance function when performing color correction. The parameters considered are relatively limited, and it is difficult to accurately determine the spectrum of the shooting scene, resulting in a poor color restoration effect.
  • the embodiment of the present application also provides an image processing method, which can accurately obtain the color restoration parameters corresponding to the shooting scene to correct the color deviation between the photos taken by the camera and the real scene observed by the human eye, so that the overall color perception of the captured image matches human vision.
  • the image processing method provided in the embodiment of the present application can be applied to electronic devices, and the electronic devices can communicate with other electronic devices or servers through a communication network.
  • the electronic devices of the present application may include at least one of: mobile phones, foldable electronic devices, tablet computers, personal computers (PCs), laptop computers, handheld computers, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, cellular phones, personal digital assistants (PDAs), augmented reality (AR) devices, virtual reality (VR) devices, artificial intelligence (AI) devices, wearable devices, smart home devices, and smart city devices.
  • the embodiment of the present application does not impose special restrictions on the specific type of electronic devices.
  • the communication network can be a wired network or a wireless network.
  • the communication network can be a local area network (LAN) or a wide area network (WAN), such as the Internet.
  • the communication network may be a short-distance communication network such as a wireless fidelity (Wi-Fi) hotspot network, a Wi-Fi P2P network, a Bluetooth network, a Zigbee network, or a near field communication (NFC) network.
  • the communication network may be a third-generation wireless telephone technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future-evolved public land mobile network (PLMN) or the Internet.
  • the electronic device may have one or more applications (APPs) installed.
  • an APP, or application, is a software program that can implement one or more specific functions.
  • communication applications may include text messaging applications, for example.
  • Image capture applications may include shooting applications (system cameras or third-party shooting applications).
  • Video applications may include Huawei Video, for example.
  • Audio applications may include Huawei Music.
  • the applications mentioned in the following embodiments may be system applications installed on the electronic device when it leaves the factory, or they may be third-party applications downloaded from the Internet or obtained from other electronic devices by the user during the use of the electronic device.
  • the electronic device may run an operating system including, but not limited to, Windows or another operating system.
  • FIG. 2 shows a schematic structural diagram of an electronic device 10 .
  • the electronic device 10 may include a processor 110, an external memory interface 120, an internal memory 121, an antenna 1, an antenna 2, a mobile communication module 130, a wireless communication module 140, an audio module 150, a sensor module 160, a camera module 170, a display screen 180, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 10.
  • the electronic device 10 may include more or fewer components than shown in the figure, or combine some components, or split some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural network processor (NPU).
  • different processing units can be independent devices or integrated in one or more processors.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 may be a cache memory.
  • the memory may store instructions or data that the processor 110 has recently used or uses frequently. If the processor 110 needs those instructions or data again, it can call them directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the processor 110 may be connected to an audio module, a wireless communication module, a display, a camera, and other modules through at least one of the above interfaces.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is only a schematic illustration and does not constitute a structural limitation on the electronic device 10.
  • the electronic device 10 may also adopt an interface connection manner different from those in the above embodiments, or a combination of multiple interface connection manners.
  • the wireless communication function of the electronic device 10 can be implemented through the antenna 1, the antenna 2, the mobile communication module 130, the wireless communication module 140, the modem processor and the baseband processor.
  • the wireless communication module 140 can provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), Bluetooth low energy (BLE), ultra wide band (UWB), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc., which are applied to the electronic device 10.
  • the electronic device 10 can realize the display function through a GPU, a display screen 180, and an application processor.
  • the GPU is a microprocessor for image processing, which is connected to the display screen 180 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the camera module 170 includes a camera.
  • the display screen 180 is used to display images, videos, etc.
  • the display screen 180 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini LED, a micro LED, a micro OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 10 may include one or more display screens 180.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 10.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function.
  • the internal memory 121 can be used to store computer executable program codes, which include instructions.
  • the internal memory 121 may include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the data storage area may store data created during the use of the electronic device 10, etc.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional methods or data processing of the electronic device 10 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the audio module 150 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals.
  • the audio module 150 can also be used to encode and decode audio signals.
  • the audio module 150 can be arranged in the processor 110, or some functional modules of the audio module 150 can be arranged in the processor 110.
  • the software system of the electronic device 10 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes the Android system of the layered architecture as an example to exemplify the software structure of the electronic device 10.
  • FIG. 3 is a software structure block diagram of the electronic device 10 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, each with a clear role and division of labor.
  • the layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, from top to bottom: application layer, application framework layer, Android runtime (Android runtime, ART) and native C/C++ library, hardware abstract layer (HAL) and kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interface (API) and programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like.
  • the window manager provides window management services (Window Manager Service, WMS).
  • WMS can be used for window management, window animation management, surface management and as a transit station for the input system.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying images, etc.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also be a notification that appears in the system top status bar in the form of a chart or scroll bar text, such as notifications of applications running in the background, or a notification that appears on the screen in the form of a dialog window. For example, a text message is displayed in the status bar, a prompt sound is emitted, an electronic device vibrates, an indicator light flashes, etc.
  • the Activity Manager can provide Activity Manager Service (AMS).
  • AMS can be used to start, switch, and schedule system components (such as activities, services, content providers, broadcast receivers) as well as manage and schedule application processes.
  • the Android runtime layer includes the core library and the Android runtime (ART virtual machine).
  • the Android runtime is responsible for converting source code into machine code.
  • the Android runtime mainly uses the ahead-of-time (AOT) compilation technology and the just-in-time (JIT) compilation technology.
  • the core library is mainly used to provide basic Java class library functions, such as basic data structures, mathematics, IO, tools, databases, networks, etc.
  • the core library provides an API for users to develop Android applications.
  • the native C/C++ library can include multiple functional modules, such as surface manager, media framework, libc, OpenGL ES, SQLite, Webkit, etc.
  • the surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
  • the media framework supports playback and recording of multiple commonly used audio and video formats, as well as static image files, etc.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • OpenGL ES provides drawing and manipulation of 2D graphics and 3D graphics for applications. SQLite provides a lightweight relational database for the applications of the electronic device 10.
  • the hardware abstraction layer runs in user space, encapsulates kernel layer drivers, and provides a calling interface to the upper layer.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the electronic device 10 is taken as a mobile phone with a camera function, and the mobile phone may have a front camera and/or a rear camera, which is not limited in the present embodiment.
  • the electronic device 10 includes a camera module 170 and a display screen 180, and the camera module 170 includes a multispectral image sensor and an RGB image sensor.
  • the electronic device 10 may also be a vehicle-mounted camera device, etc., but the embodiments are not limited thereto.
  • the above-mentioned camera module 170 and display screen 180 may also be arranged on another electronic device that is communicatively connected to the electronic device 10.
  • the other electronic device may be communicatively connected to the electronic device 10 wirelessly.
  • the electronic device 10 may be used to control the RGB image sensor in the camera module to obtain a first image based on the imaging of the shooting scene, and control the multispectral image sensor to obtain a second image based on the imaging of the shooting scene.
  • the electronic device 10 is also used to perform color restoration on the first image based on the second image, and to control the display screen 180 to display the color-restored image, that is, the color-restored image is used as the final displayed captured image.
  • the color of many sales items is an important sales attribute; such items may include jade, furniture, lipstick, clothes, and colorful pictures.
  • a deviation in jade color may greatly affect its price; therefore, when shooting jade with a mobile phone camera, the requirement on color restoration accuracy is very high. For lipstick, a color difference is sufficient to cause a deviation in its color number. If the above-mentioned items cannot be accurately color-restored after being imaged by a mobile phone camera, disputes may arise due to the color difference.
  • the mobile phone can use the image processing method provided in the embodiment of the present application to accurately restore the color of the image captured by the camera module 170 and correct the color deviation between that image and the real scene observed by the human eye, so that the overall color perception of the image finally displayed on the display screen 180 matches human vision and the user experience is improved.
  • FIG. 5 is a core architecture diagram of color restoration of an electronic device 10 according to an embodiment of the present application.
  • the electronic device 10 may include a multi-spectral image sensor, an RGB image sensor, a processor and a display screen.
  • the color restoration of the electronic device 10 may include the following implementation process:
  • the multispectral image sensor images the shooting scene to obtain a multispectral image of the shooting scene.
  • the RGB image sensor images the shooting scene to obtain an RGB image of the shooting scene (i.e., the original imaging image).
  • the processor performs spectrum estimation based on the multi-spectral image and the light source information of the shooting scene to obtain the spectral power and light source distribution information of the light source in the shooting scene.
  • the light source information of the shooting scene can be set by the user.
  • the electronic device 10 is installed with a shooting application.
  • the multispectral image sensor can image the shooting scene to obtain a multispectral image of the shooting scene
  • the RGB image sensor can image the shooting scene to obtain an RGB image of the shooting scene.
  • the shooting application also has a human-computer interaction interface for setting the light source information of the shooting scene.
  • the light source information of the shooting scene that can be set includes: the number of light sources, the highlight area, the light source position, the light source boundary, the light source type, etc.
  • the processor obtains color restoration parameters according to the spectral power of the light source and the light source distribution information.
  • the color restoration parameters include a color correction matrix and a color adaptation conversion matrix.
  • the color correction matrix can be used to correct the RGB image, converting it from the RGB color space to the XYZ color space; the chromatic adaptation conversion matrix can be used to perform chromatic adaptation on the image in the XYZ color space, transforming the XYZ image data under the current shooting scene light source into the XYZ image data under the target light source. For example, the target light source is a D65 light source (also known as international standard artificial daylight, whose color temperature is 6500 K).
  • the processor performs color restoration on the RGB image according to the color restoration parameters to obtain a color restored RGB image.
  • the color restored RGB image may be output to a display screen for display.
  • FIG. 6 is a flowchart of color restoration of the electronic device 10 according to an embodiment of the present application.
  • FIG. 7 is an overall architecture diagram of color restoration of the electronic device 10 according to an embodiment of the present application. The color restoration process is further described below in conjunction with FIG. 6 and FIG. 7.
  • Step 601 If a color fidelity mode information configuration request is received, a human-computer interaction interface for setting the shooting scene light source information is generated, and the scene light source information set by the user is obtained from the human-computer interaction interface.
  • the electronic device 10 may be installed with a shooting application, and the shooting application may provide a color fidelity mode. In the color fidelity mode, the electronic device 10 may accurately restore the color of the imaging of the shooting scene.
  • FIG. 8 is a schematic diagram of the interface on which the electronic device 10 sets the shooting scene light source information in the color fidelity mode. If the user requires high-fidelity color shooting, the electronic device 10 can respond to the user's operation instructions to start the shooting application and enter the color fidelity mode. The electronic device 10 can also respond to the user's operation instructions to enter the information configuration interface of the color fidelity mode.
  • the information configuration interface is a human-computer interaction interface for setting the shooting scene light source information; the electronic device 10 can obtain the shooting scene light source information set by the user from the human-computer interaction interface.
  • the electronic device 10 may be configured to perform color restoration processing only in the color fidelity mode, which can improve the camera performance of the electronic device 10 and save its power consumption.
  • the electronic device 10 may also display an information configuration interface when entering the color fidelity mode.
  • the information configuration interface may include an "OK" icon. After the user completes the information configuration and clicks the "OK" icon, the information configuration interface may be closed, and the shooting application displays a preview screen shot in the color fidelity mode.
  • the light source information that can be set in the human-computer interaction interface includes: the number of light sources, highlight area, light source position, light source boundary, and light source type.
  • the light source types include daylight, halogen light, fluorescent light, and light-emitting diode (LED) light.
  • FIG. 9 is a schematic diagram of setting the highlight area.
  • the light source shines on an object and the light is then reflected into the human eye.
  • each part of the object has its corresponding brightness in the human eye.
  • the user can set the highlight area in the displayed shooting scene, and the brightest point on the object can be set as the highlight area.
  • the human eye observes that there is a highlight area 802 on the jade 801, and a mark 803 can be placed on the highlight area 802.
  • the mode for color restoration is called the color fidelity mode.
  • in actual applications, it can also be named "color mode", "color authenticity mode", "color-specific mode", or another similar name; the embodiment of the present application does not limit this.
  • Step 602 imaging the shooting scene by using a multispectral image sensor to obtain a multispectral image of the shooting scene.
  • FIG. 10 is a schematic diagram of a multispectral filter array (MSFA), in which each number represents a frequency band; the MSFA shown has 16 frequency bands.
  • a plurality of multispectral image sensors may also be installed on the electronic device, and narrow-band distribution intervals are staggered between different multispectral image sensors.
  • FIG. 11 is a schematic diagram of the spectral sensitivity curves of two multispectral image sensors with staggered peaks.
  • the electronic device 10 can obtain a multispectral initial image of the shooting scene through a plurality of multispectral image sensors; and merge the multispectral initial images to obtain a multispectral image of the shooting scene.
  • This embodiment merges the multispectral images generated by a plurality of multispectral image sensors, and uses the merged multispectral image as the input of spectral estimation, which can improve the spectral resolution of the multispectral image, and thus improve the accuracy of subsequent spectral estimation.
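  • The merging of staggered-band images described above can be illustrated with a minimal sketch. The array shapes, the band layout, and the helper name `merge_multispectral` are illustrative assumptions, not the claimed implementation: band images from the two sensors are concatenated and re-ordered by center wavelength to form one spectral cube with finer spectral sampling.

```python
import numpy as np

def merge_multispectral(images_a, images_b, wavelengths_a, wavelengths_b):
    """Merge band images from two multispectral sensors whose narrow-band
    peaks are staggered, producing one cube sorted by center wavelength.

    images_a, images_b: arrays of shape (Na, H, W) and (Nb, H, W).
    wavelengths_a, wavelengths_b: band center wavelengths in nm.
    """
    cube = np.concatenate([images_a, images_b], axis=0)   # (Na + Nb, H, W)
    wl = np.concatenate([wavelengths_a, wavelengths_b])
    order = np.argsort(wl)                                # interleave bands
    return cube[order], wl[order]
```

  In practice the two sensors would also need geometric registration before merging; that step is omitted here for brevity.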
  • FIG. 12 is a schematic diagram of a spectrum estimation scene. After the multispectral image is acquired, spectrum estimation can be performed based on it.
  • the spectrum estimation step can refer to step 603.
  • Step 603 performing spectrum estimation based on the multi-spectral image and the light source information of the shooting scene to obtain the spectral power and light source distribution information of the light source in the shooting scene.
  • the spectral power of the light source may also be referred to as the light source spectrum.
  • the light source distribution information may include: position information of the light source distribution in the shooting scene.
  • performing spectrum estimation according to the multispectral image in step 603 to obtain the spectral power and light source distribution information of the light source in the shooting scene may include:
  • Step 6031 perform highlight detection on the multispectral image to obtain the highlight area on the multispectral image.
  • Highlights are the bright spots observed on an object when light from a light source shines on the object and is reflected into the human eye.
  • the brightness of each pixel in the multispectral image can be counted, and the pixels whose brightness is in the top 5% can be used as the highlight area.
  • a brightness threshold can also be set. If a pixel exceeds the brightness threshold, the pixel can be used as a highlight area.
  • the embodiments of the present application are not limited to this.
  • the highlight area can be preliminarily detected through a two-color reflection model, a center surround filter or a dark channel method, and then, the edges of the preliminarily detected highlight area that are misjudged as the highlight area are removed through a low-pass filter to obtain the highlight area on the multispectral image.
  • an auxiliary accessory 1301 may be further provided in the shooting scene.
  • the electronic device 10 may perform highlight detection on the auxiliary accessories in the multispectral image, obtain the auxiliary accessories in the multispectral image that are in a highlight state and the position information of these auxiliary accessories in the highlight state, and then determine the highlight area on the multispectral image based on the position information of these auxiliary accessories in the highlight state.
  • the auxiliary accessories can improve the success rate of highlight area detection and improve the accuracy of spectral estimation.
  • the auxiliary accessory may be an object of neutral color that can reflect the light emitted by the light source, and the size and position of the auxiliary accessory may be adjusted according to the actual scene.
  • the auxiliary accessory is a gray sphere with a glossy surface, and the gray spheres are evenly distributed in the shooting scene.
  • the gray spheres can reflect the highlights formed by the light from the light source, and the electronic device 10 can provide more accurate highlight area data for spectral estimation based on the highlights detected by the gray spheres.
  • the highlight area may be determined in combination with the scene light source information obtained in step 601.
  • the highlight area marked by the user may be obtained from the human-computer interaction interface in step 601, and the position corresponding to the marked highlight area may be determined in the multispectral image, and the position may also be used as the highlight area in the multispectral image.
  • Step 6032 Perform principal component analysis on the highlight area to obtain the first principal component vector and the second principal component vector.
  • Principal component analysis (PCA) converts a set of possibly correlated variables into a set of linearly uncorrelated variables.
  • the converted set of variables is called principal components.
  • PCA is a dimensionality reduction method, often used to reduce the dimensionality of high-dimensional data sets. Its main idea is to map high-dimensional features onto k dimensions; these k dimensions are the principal components and retain most of the information of the original variables, where the information refers to the variance of the original variables.
  • the one with the largest variance among the principal components is called the first principal component vector, and the one with the second largest variance is called the second principal component vector.
  • Step 6033 projecting the image data of the highlight area onto the plane formed by the first principal component vector and the second principal component vector.
  • Step 6034 determining the light source direction information according to the distribution of the image data of the highlight area on the plane.
  • a linear cluster of linear distribution can be determined in the distribution of the projected image data in the plane, and the linear cluster represents the specular reflection of the light source on the object; then, principal component analysis is performed on the linear cluster to obtain the first principal component vector of the linear cluster, and the first principal component vector of the linear cluster represents the light source direction information of the corresponding area in the shooting scene.
  • Step 6035 obtaining the local spectral power according to the light source direction information, the first principal component vector of the highlight area, and the second principal component vector of the highlight area.
  • a pseudo-inverse operation may be performed on the first principal component vector of the highlight area and the second principal component vector of the highlight area to obtain a pseudo-inverse matrix; the pseudo-inverse matrix is multiplied by the first principal component vector of the linear cluster to obtain the local spectral power.
  • the above steps 6032 to 6035 may be performed to obtain the local spectral power of each region in the multispectral image.
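  • Steps 6032 to 6035 can be sketched as follows for one region. This is an illustrative approximation: treating the whole projected point set as the linear cluster and normalizing the result are simplifications not stated in the embodiment, and the helper name is hypothetical.

```python
import numpy as np

def local_spectral_power(highlight_pixels):
    """Estimate the local spectral power from highlight-area samples.

    highlight_pixels: (N, bands) multispectral samples of one region's
    highlight area. Returns a unit-norm spectral power estimate (bands,).
    """
    # Step 6032: PCA -> first and second principal component vectors.
    centered = highlight_pixels - highlight_pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v1, v2 = vt[0], vt[1]                      # (bands,) each

    # Step 6033: project the data onto the plane spanned by v1 and v2.
    basis = np.stack([v1, v2], axis=1)         # (bands, 2)
    coords = centered @ basis                  # (N, 2) in-plane coordinates

    # Step 6034: the dominant direction of the in-plane linear cluster
    # represents the specular reflection of the light source; here the
    # whole projected set is treated as the cluster for simplicity.
    _, _, wt = np.linalg.svd(coords - coords.mean(axis=0), full_matrices=False)
    cluster_dir = wt[0]                        # (2,) first PC of the cluster

    # Step 6035: the pseudo-inverse of [v1 v2] maps the in-plane direction
    # back to the spectral domain, giving the local spectral power.
    spd = np.linalg.pinv(basis.T) @ cluster_dir   # (bands,)
    return spd / np.linalg.norm(spd)
```

  Repeating this per region, as step 6036 describes, yields the local spectral power of each area of the multispectral image.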
  • Step 6036 obtaining light source distribution information according to the local spectral power of each area in the multi-spectral image.
  • the regional distribution information of various spectral powers of the multispectral image can be obtained.
  • the regional distribution information of spectral power represents the light source distribution information.
  • the above steps 602 and 603 are based on obtaining a multispectral image by a multispectral image sensor, and performing spectral estimation based on the multispectral image.
  • in a comparative solution, the electronic device can obtain the original RGB image of the camera and the intensity values of each channel of a multispectral color temperature sensor; the two types of data are input into a pre-trained neural network model to obtain the probability that the light source of the current shooting scene belongs to each type of typical light source; and the light source spectrum of the current scene is then determined based on the probability and the known spectra of the typical light sources.
  • Multispectral color temperature sensors cannot form images, have no spatial resolution, and have low accuracy.
  • the light received by the multi-spectral color temperature sensor includes the light from the light source as well as the light reflected by the object, which is equivalent to the object color mixed into the light source.
  • the color temperature sensor cannot distinguish between the object color and the light source color, resulting in the inability to accurately calculate the true light source spectrum in the shooting scene.
  • the light source spectrum in this comparative solution is obtained from known typical light source spectrum data after the multispectral color temperature sensor data are judged and classified.
  • the light source spectrum obtained based on the typical light source is not accurate.
  • this comparative solution calculates only the global light source spectrum of the shooting scene.
  • however, the local light source spectra of different areas of the shooting scene differ, so color restoration based on a single global light source spectrum is not accurate.
  • the spectrum estimation scheme of step 602 and step 603 in the embodiment of the present application has the following effects:
  • Step 602 and step 603 use a multispectral image sensor.
  • the multispectral image obtained by the multispectral image sensor has a higher spectral resolution; based on this, the light source spectrum of the shooting scene can be accurately estimated.
  • the multispectral image sensor can distinguish the color of the object and the light source, so as to accurately obtain the real light source spectrum of the shooting scene.
  • Step 602 and step 603 do not rely on a pre-calibrated typical light source spectrum; instead, the light source spectrum is estimated in real time from the multispectral image of the shooting scene, which improves the accuracy of spectrum estimation.
  • the embodiment of the present application can estimate the local light source spectrum of each area in the shooting scene based on the local multi-spectral image, so that the subsequent color restoration is more accurate.
  • step 604 may be performed.
  • Step 604 based on the spectral power, fit the color restoration parameters corresponding to the spectral power.
  • Color restoration parameters may include a color correction matrix (CCM) and a color adaptation conversion matrix.
  • For the local spectral power of each area in the shooting scene, local color restoration parameters corresponding to the local spectral power, namely a local color correction matrix and a local chromatic adaptation conversion matrix, may be fitted.
  • the electronic device 10 fits a local color correction matrix, which may include: obtaining a color card reflectance function of the camera module 170, a spectral sensitivity function of the camera module 170, and a standard observer color matching function; integrating the color card reflectance function, the camera spectral sensitivity function, and the local spectral power to obtain photosensitivity data of the camera module 170; integrating the color card reflectance, the standard observer color matching function, and the local spectral power to obtain standard observer tristimulus values; and obtaining a local color correction matrix based on the photosensitivity data of the camera module 170 and the standard observer tristimulus values.
  • the local spectral power is recorded as E
  • the color card reflectance function is recorded as R
  • the camera spectral sensitivity function is recorded as S_cam
  • the standard observer color matching function is recorded as S_cmf.
  • the local color correction matrix CCM can be obtained by fitting, for example through a conversion model such as a linear model, a polynomial model, or a root polynomial model; the local color correction matrix used in converting the camera photosensitivity data into the standard observer tristimulus values is fitted and calculated.
  • the above-mentioned camera spectral sensitivity function can be measured for a selected model of camera module, and the color card reflectance function and the standard observer color matching function can also be measured in advance.
  • the embodiment of the present application does not limit the measurement methods of the above functions.
  • the CCM is calculated by taking into account the spectral power, the standard observer color matching function, the camera spectral sensitivity function, and the color card reflectance function; because more comprehensive parameters are considered, the calculated CCM is more accurate.
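  • The CCM fitting described above can be sketched with the linear model. The array shapes, the discrete integrals written as matrix products, and the helper name `fit_local_ccm` are illustrative assumptions:

```python
import numpy as np

def fit_local_ccm(E, R, S_cam, S_cmf):
    """Fit a local 3x3 color correction matrix by least squares.

    E:     (L,)   local spectral power of the light source
    R:     (P, L) reflectance of P color-card patches
    S_cam: (L, 3) camera RGB spectral sensitivity functions
    S_cmf: (L, 3) standard observer color matching functions
    """
    # Discrete versions of the integrals in the embodiment:
    cam = (R * E) @ S_cam      # (P, 3) camera photosensitivity data
    xyz = (R * E) @ S_cmf      # (P, 3) standard observer tristimulus values
    # Linear model: xyz ≈ cam @ M; the CCM maps camera RGB to XYZ.
    m, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
    return m.T                 # (3, 3) so that CCM @ rgb gives XYZ
```

  A polynomial or root-polynomial model, as the embodiment also mentions, would expand `cam` with extra terms before the same least-squares fit.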
  • the step of fitting the chromatic adaptation conversion matrix by the electronic device 10 may include:
  • the target light source may be a D65 light source
  • the spectral power E tgt of the D65 light source may be recorded as ED65
  • the tristimulus values of the D65 light source may be recorded as XYZ D65
  • the red primary stimulus value of the D65 light source may be recorded as X WD
  • the green primary stimulus value may be recorded as Y WD
  • the blue primary stimulus value may be recorded as Z WD .
  • the local chromatic adaptation conversion matrix is obtained according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source.
  • the white point tristimulus values of the light source in the shooting scene can be multiplied by a preset color adaptation model to obtain a first response value, which is the response value of the human eye to the long wave, medium wave and short wave of the light source in the shooting scene (also known as the degree of color adaptation).
  • the preset color adaptation model can be the CAT02 color adaptation model, that is, M cat02 , and the first response value can be recorded as [ ρ S γ S β S ]; the formula is as follows: [ ρ S γ S β S ] T = M cat02 · [ X WS Y WS Z WS ] T , where [ X WS Y WS Z WS ] are the white point tristimulus values of the light source in the shooting scene.
  • the white point tristimulus values of the target light source are multiplied by the color adaptation model to obtain a second response value, which is the response value of the human eye to the long wave, medium wave and short wave of the target light source.
  • the second response value can be recorded as [ ρ D γ D β D ]; the formula is as follows: [ ρ D γ D β D ] T = M cat02 · [ X WD Y WD Z WD ] T .
  • the local chromatic adaptation conversion matrix M ca is obtained according to the first response value and the second response value.
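Under the CAT02 model named above, the local chromatic adaptation matrix M ca can be assembled from the two response values as sketched below. The CAT02 matrix is the published one; the scene white point (illuminant A here) is an illustrative assumption.

```python
import numpy as np

# Published CAT02 matrix (M_cat02).
M_cat02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

XYZ_WS = np.array([1.0985, 1.0000, 0.3558])  # scene white point (illuminant A, assumed)
XYZ_WD = np.array([0.9504, 1.0000, 1.0888])  # D65 target white point

rho_S = M_cat02 @ XYZ_WS      # first response value  (long/medium/short wave, scene)
rho_D = M_cat02 @ XYZ_WD      # second response value (long/medium/short wave, target)

# Diagonal von Kries gains, then the local chromatic adaptation matrix.
D = np.diag(rho_D / rho_S)
M_ca = np.linalg.inv(M_cat02) @ D @ M_cat02
```

By construction `M_ca` maps the scene white point exactly onto the D65 white point.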
  • Step 605 Perform color restoration on the original image captured by the camera module based on the color restoration parameters.
  • each pixel of the original imaging image may be processed through a color correction matrix to convert the original imaging image from an RGB color space to an XYZ color space to obtain a color-corrected image XYZsrc, where XYZsrc is the tristimulus value corresponding to the light source in the shooting scene. Therefore, XYZsrc may also be processed through a chromatic adaptation conversion matrix to convert the tristimulus values corresponding to the light source in the shooting scene to the tristimulus values under the target light source (e.g., a D65 light source) to obtain a color-restored image XYZ tgt .
  • the above-mentioned original imaging image may be an RGB image obtained by imaging the shooting scene by the RGB image sensor in the camera module.
  • the initially obtained raw image may be in Bayer format, namely a Bayer raw image. Therefore, the Bayer raw image may be subjected to demosaicing and other processing to obtain the RGB original imaging image.
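The two-stage restoration above (color correction, then chromatic adaptation) is linear, so it can be sketched per image as two matrix multiplies; the matrices and image below are random placeholders, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)
CCM = rng.uniform(size=(3, 3))         # local color correction matrix (placeholder)
M_ca = rng.uniform(size=(3, 3))        # local chromatic adaptation matrix (placeholder)
rgb = rng.uniform(size=(480, 640, 3))  # demosaiced original imaging image

xyz_src = rgb @ CCM.T                  # RGB -> XYZ under the scene light source
xyz_tgt = xyz_src @ M_ca.T             # adapt to the target (e.g. D65) light source
```

Because both steps are matrix multiplies, `xyz_tgt` equals `rgb @ (M_ca @ CCM).T`, so the pair can be precomputed into a single 3x3 matrix per region.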
  • the local color correction matrix and the local color adaptation conversion matrix corresponding to the pixel can be determined first, and then the pixel can be processed based on the corresponding local color correction matrix and the local color adaptation conversion matrix.
  • each area in the above-mentioned shooting scene has a corresponding local color correction matrix and a local color adaptation conversion matrix.
  • when color correction is performed based on the local color correction matrices, a sudden color change may occur in the light source boundary area. Therefore, in the embodiment of the present application, when processing each pixel of the original imaging image through the color correction matrix, the critical pixels in the color-restored image can be determined according to the light source distribution information.
  • the critical pixels are located in the light source boundary area; after processing with the chromatic adaptation conversion matrix, the critical pixels are smoothed to obtain a smoothed image.
  • the light source boundary area is smoothed to avoid sudden color changes in the light source boundary area and improve the smoothness of the image color.
  • the light source boundary area is a boundary area between a first area and a second area in the shooting scene
  • the local color correction matrix of the first region is CCM1
  • the local color correction matrix of the second region is CCM2.
  • Each pixel located in the light source boundary region can be processed by interleaving CCM1 and CCM2; for example, the first pixel in the region is processed with CCM1, the second with CCM2, the third with CCM1, and so on.
  • Alternatively, CCM1 and CCM2 can be averaged, and each pixel in the light source boundary region can be processed with the averaged matrix.
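The two boundary strategies just described (interleaving CCM1/CCM2, or applying their average) can be sketched as follows; the matrices and boundary pixels are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
CCM1 = rng.uniform(size=(3, 3))       # local CCM of the first region (placeholder)
CCM2 = rng.uniform(size=(3, 3))       # local CCM of the second region (placeholder)
pixels = rng.uniform(size=(6, 3))     # RGB pixels inside the light source boundary

# Strategy 1: interleave -- odd-numbered pixels use CCM1, even-numbered use CCM2.
interleaved = np.empty_like(pixels)
interleaved[0::2] = pixels[0::2] @ CCM1.T
interleaved[1::2] = pixels[1::2] @ CCM2.T

# Strategy 2: average the two matrices and apply the mean to every boundary pixel.
averaged = pixels @ (0.5 * (CCM1 + CCM2)).T
```

By linearity, the averaged-matrix result equals the average of processing with each matrix separately.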
  • a smoothing process may be performed based on the light source distribution information, and the tristimulus values of the target light source after the smoothing process may be converted to a standard RGB color space such as standard Red Green Blue (sRGB) for display on the display screen 180.
  • the image XYZ tgt after color restoration may refer to an image in the sRGB color space, and the image in the sRGB color space is used as the captured image to be finally displayed.
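The final conversion from the D65-referred tristimulus values to sRGB for display can be sketched with the standard IEC 61966-2-1 matrix and transfer function:

```python
import numpy as np

# Linear XYZ (D65) -> linear sRGB matrix from IEC 61966-2-1.
M_xyz2srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    """Convert D65-referred XYZ values (N x 3) to display sRGB in [0, 1]."""
    lin = np.clip(xyz @ M_xyz2srgb.T, 0.0, 1.0)
    # sRGB transfer function (gamma encoding).
    return np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * np.power(lin, 1 / 2.4) - 0.055)

# The D65 white point maps (approximately) to sRGB white.
srgb_white = xyz_to_srgb(np.array([[0.9505, 1.0000, 1.0890]]))
```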
  • an embodiment of the present application provides an image processing method, which is applied to an electronic device 10.
  • the image processing method may include:
  • Step 1601 obtaining a multispectral image and an original imaging image of a shooting scene.
  • the electronic device 10 may acquire a multispectral image and an original imaging image of the shooting scene in response to a shooting instruction.
  • to save power consumption, the electronic device 10 may also obtain the multispectral image and the original imaging image of the shooting scene only in response to a shooting instruction in a color fidelity mode; in other shooting modes, the original imaging image of the shooting scene is obtained without performing color restoration processing on it.
  • At least one multispectral image sensor and an RGB image sensor may be installed on the electronic device 10.
  • the electronic device 10 may obtain a multispectral image of the shooting scene through the multispectral image sensor and obtain an original imaging image of the shooting scene through the RGB image sensor.
  • when multiple multispectral image sensors are installed, their narrow-band distribution intervals can be staggered so as to obtain a multispectral image containing more light source information.
  • when the electronic device 10 obtains a multispectral initial image of the shooting scene through each multispectral image sensor, the multispectral initial images can be merged, and the merged image is used as the multispectral image of the shooting scene.
  • performing spectrum estimation on the merged multispectral image can improve the accuracy of the spectrum estimation.
  • Step 1602 perform spectrum estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene.
  • the local spectral power of each area in the shooting scene can be obtained based on the multispectral image.
  • a highlight detection may be performed on a multispectral image to obtain a highlight region of the multispectral image; and each local spectral power may be obtained based on each highlight region.
  • an auxiliary accessory may be placed in the shooting scene, and the highlight region of the shooting scene may be determined by detecting the auxiliary accessory.
  • the brightness of each pixel in the multispectral image can be counted, and the area containing the pixels whose brightness ranks within a preset top fraction (such as the top 5%) is taken as the highlight area of the multispectral image; alternatively, the brightness of each pixel in the multispectral image can be detected, and the area containing the pixels whose brightness is greater than a preset brightness threshold (which can be set according to actual needs) is taken as the highlight area of the multispectral image.
  • when highlight detection is performed on a multispectral image, the highlight area can be preliminarily detected through a dichromatic reflection model, a center-surround filter, or a dark channel method, and a low-pass filter can then be used to remove edges of the preliminarily detected area that were misjudged as highlights, thereby obtaining the highlight area of the multispectral image.
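A minimal sketch of the brightness-statistics variant (the top-5% rule mentioned above); the multispectral image is a random placeholder, and the channel sum stands in for a brightness measure.

```python
import numpy as np

rng = np.random.default_rng(3)
msi = rng.uniform(size=(64, 64, 8))   # placeholder multispectral image, 8 channels
brightness = msi.sum(axis=2)          # per-pixel brightness proxy

# Pixels whose brightness ranks in the top 5% form the highlight area mask.
threshold = np.percentile(brightness, 95.0)
highlight_mask = brightness >= threshold
```

The threshold variant simply replaces `np.percentile(...)` with a fixed preset brightness threshold.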
  • the electronic device 10 generates a human-computer interaction interface for setting the light source information of the shooting scene in response to a request to enter the color fidelity mode.
  • the light source information that can be set may include: the number of light sources, the highlight area, the light source position, the light source boundary, and the light source type, such as sunlight, halogen light, fluorescent light, and light-emitting diode light.
  • the electronic device 10 can obtain the light source information of the shooting scene from the human-computer interaction interface, and obtain the highlight area based on the light source information of the shooting scene.
  • obtaining each local spectral power according to each highlight area includes: performing principal component analysis on the highlight area to obtain the principal component vector of the highlight area; obtaining the local spectral power based on the principal component vector of the highlight area.
  • the electronic device 10 can determine the first principal component vector and the second principal component vector in the principal component vector of the highlight area; project the image data of the highlight area to the plane formed by the first principal component vector of the highlight area and the second principal component vector of the highlight area; determine a linear cluster with a linear distribution in the distribution of the projected image data in the plane; perform principal component analysis on the linear cluster to obtain the first principal component vector of the linear cluster; obtain the local spectral power based on the first principal component vector of the highlight area, the second principal component vector of the highlight area, and the first principal component vector of the linear cluster.
  • the first principal component vector of the linear cluster represents the light source direction of the light source, and the distribution of the light source can be determined by the above method.
  • obtaining the local spectral power based on the first principal component vector of the highlight area, the second principal component vector of the highlight area, and the first principal component vector of the linear cluster can include: performing a pseudo-inverse operation on the first principal component vector of the highlight area and the second principal component vector of the highlight area to obtain a pseudo-inverse matrix; and multiplying the pseudo-inverse matrix by the first principal component vector of the linear cluster to obtain the local spectral power.
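The PCA and pseudo-inverse steps above can be sketched as follows. This is a structural illustration only: the highlight-region data are random placeholders, and the scene-dependent linear-cluster detection is replaced by simply taking the dominant direction of the projected points.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.uniform(size=(200, 8))     # highlight-region pixels, 8 spectral channels
data = data - data.mean(axis=0)

# PCA via SVD: first and second principal component vectors of the highlight area.
_, _, Vt = np.linalg.svd(data, full_matrices=False)
v1, v2 = Vt[0], Vt[1]
basis = np.stack([v1, v2])            # 2 x 8 basis of the projection plane

# Project the image data onto the plane spanned by v1 and v2.
proj = data @ basis.T                 # N x 2 plane coordinates

# Stand-in for the linear cluster's first principal component vector.
_, _, Vt2 = np.linalg.svd(proj - proj.mean(axis=0), full_matrices=False)
cluster_pc = Vt2[0]

# Pseudo-inverse of the two principal component vectors, multiplied by the
# cluster's first principal component, yields the local spectral power estimate.
E_local = np.linalg.pinv(basis) @ cluster_pc
```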
  • Step 1603 determining a color restoration parameter according to the spectral power, and performing color restoration on the original image based on the color restoration parameter to obtain a color restored image.
  • the color restoration parameters may include: a color correction matrix and a color adaptation conversion matrix.
  • for the local spectral power of each area in the shooting scene, the corresponding local color restoration parameters, namely the local color correction matrix and the local chromatic adaptation conversion matrix, can be fitted.
  • each pixel in the original imaging image is processed by a color correction matrix to obtain a color-corrected image; and the color-corrected image is processed by a chromatic adaptation conversion matrix to obtain a color-restored image.
  • processing each pixel in the original imaging image through a color correction matrix to obtain a color-corrected image can include: determining critical pixels in the color-restored image based on light source distribution information, where the critical pixels are located in the light source boundary area; smoothing the critical area based on the local color correction matrix corresponding to each pixel in the light source boundary area to obtain a smoothed image, which can avoid large color differences in the image in the light source boundary area and achieve smooth color transition, thereby improving the color restoration effect.
  • the electronic device 10 fits a local color correction matrix, which may include: obtaining the color card reflectance function, the spectral sensitivity function of the camera module, and the standard observer color matching function; integrating the color card reflectance function, the camera spectral sensitivity function, and the local spectral power to obtain photosensitivity data of the camera module; integrating the color card reflectance function, the standard observer color matching function, and the local spectral power to obtain standard observer tristimulus values; and obtaining the local color correction matrix based on the photosensitivity data of the camera module and the standard observer tristimulus values.
  • the step of fitting the local chromatic adaptation conversion matrix by the electronic device 10 may include: obtaining the standard observer color matching function and the spectral power of the target light source; multiplying the local spectral power by the standard observer color matching function to obtain the white point tristimulus values of the light source in the shooting scene; multiplying the spectral power of the target light source by the standard observer color matching function to obtain the white point tristimulus values of the target light source; and obtaining the local chromatic adaptation conversion matrix according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source.
  • obtaining a local chromatic adaptation conversion matrix based on the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source can include: multiplying the white point tristimulus values of the light source in the shooting scene with a preset chromatic adaptation model to obtain a first response value, where the first response value is the response value of the human eye to the long wave, medium wave and short wave of the light source in the shooting scene; multiplying the white point tristimulus values of the target light source with the chromatic adaptation model to obtain a second response value, where the second response value is the response value of the human eye to the long wave, medium wave and short wave of the target light source; and obtaining a local chromatic adaptation conversion matrix based on the first response value and the second response value.
  • obtaining the local chromatic adaptation conversion matrix according to the first response value and the second response value may include: generating a diagonal matrix based on the first response value and the second response value, wherein the values of the diagonal matrix respectively include: the ratio of the long-wave response of the target light source to the long-wave response of the light source in the shooting scene, the ratio of the medium-wave response of the target light source to the medium-wave response of the light source in the shooting scene, and the ratio of the short-wave response of the target light source to the short-wave response of the light source in the shooting scene; and multiplying the inverse matrix of the chromatic adaptation model by the diagonal matrix and the chromatic adaptation model to obtain the local chromatic adaptation conversion matrix.
  • This embodiment performs region-by-region spectral estimation of the real light sources of the shooting scene based on the multispectral image to obtain the light source spectra, and then obtains the local color conversion parameters corresponding to each region, such as the local color correction matrix and the local chromatic adaptation conversion matrix. The calculated color conversion parameters are therefore more accurate and reasonable, the color restoration of each object in the shooting scene is more accurate, and the overall color cast of the image can be effectively corrected, so that the overall color perception of the scene matches human vision.
  • the embodiment of the present application performs spectral estimation through multispectral images.
  • the multispectral images have high spectral resolution and can accurately estimate the light source spectrum of each area in the scene.
  • the light source information of the shooting scene set in the color fidelity mode is also combined, thereby further improving the accuracy of the light source spectrum estimation.
  • the color restoration parameters are calculated based on the actual spectrum of the mixed light source scene, and the colors of each area in the shooting scene can be accurately restored through the local color correction matrix and the local color adaptation conversion matrix of the divided regions.
  • the embodiment of the present application gets rid of the dependence on the typical light source in the calibration, covers a wider range of scenes, and ensures the accuracy of color restoration in different scenes.
  • the internal memory 121 can be used to store instructions, and the processor 110 can be used to call the instructions in the internal memory 121, so that the electronic device 10 executes the above-mentioned related method steps to implement the image processing method in the above-mentioned embodiment.
  • An embodiment of the present application further provides a computer storage medium, in which computer instructions are stored.
  • when the computer instructions are executed on the electronic device 10, the electronic device 10 executes the above-mentioned related method steps to implement the image processing method in the above-mentioned embodiments.
  • the embodiment of the present application also provides a computer program product.
  • when the computer program product is run on an electronic device, the electronic device executes the above-mentioned related steps to implement the image processing method in the above-mentioned embodiments.
  • an embodiment of the present application also provides a device, which can specifically be a chip, component or module, and the device may include a connected processor and memory; wherein the memory is used to store computer-executable instructions, and when the device is running, the processor can execute the computer-executable instructions stored in the memory so that the chip executes the image processing method in the above-mentioned method embodiments.
  • the computer storage medium, computer program product or chip provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding methods provided above, and will not be repeated here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed.
  • In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product.
  • the software product is stored in a storage medium, including several instructions for enabling a device (which may be a single-chip microcomputer, chip, etc.) or a processor to execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application relate to the field of terminals, and provide an image processing method, an electronic device, a computer program product, and a computer-readable storage medium. The image processing method comprises: acquiring a multispectral image and an original imaging image of a shooting scene; performing spectral estimation on the basis of the multispectral image to obtain the spectral power of a light source in the shooting scene; and determining color restoration parameters according to the spectral power, and performing color restoration on the original imaging image on the basis of the color restoration parameters to obtain a color-restored image. According to the present application, the color deviation between an image captured by an electronic device and the real scene observed by human eyes can be corrected, so that the overall color perception of the captured image matches human vision, improving the user experience.

Description

Image processing method, electronic device, computer program product and storage medium

This application claims priority to the Chinese patent application filed with the China Patent Office on November 18, 2022, with application number 202211449305.1 and the application name "Image processing method, electronic device, computer program product and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the field of terminals, and in particular to an image processing method, an electronic device, a computer program product, and a computer-readable storage medium.
Background Art

There is a difference between the photosensitivity of a camera's imaging sensor and that of the human eye, which may cause the original color information of an image obtained by the camera to be inconsistent with the color information actually seen by the human eye.

In order to make the color information of the scene captured by the camera consistent with the color information perceived by the human eye, the image captured by the camera can undergo color restoration processing. Existing color restoration processing is generally based on color correction parameters calibrated in advance for several typical light sources. However, the light source composition of an actual shooting scene is relatively complex, and various illumination situations may exist in the shooting area. With the above processing method, it is difficult for the camera to restore colors accurately, so the captured photos exhibit a large color deviation from the real scene observed by the human eye.
Summary of the Invention

In view of this, it is necessary to provide an image processing method to solve the problem in the prior art that it is difficult to accurately restore the colors of a captured image, resulting in a large color deviation between the captured image and the real scene observed by the human eye.

A first aspect of an embodiment of the present application discloses a color restoration method, applied to an electronic device. The method includes: acquiring a multispectral image and an original imaging image of a shooting scene; performing spectral estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene; determining color restoration parameters based on the spectral power, and performing color restoration on the original imaging image based on the color restoration parameters to obtain a color-restored image.

With the above technical solution, a multispectral image of the shooting scene is obtained. Compared with the original imaging image obtained by camera imaging (generally obtained by a red green blue (RGB) image sensor), the multispectral image has a narrower channel bandwidth and therefore a higher imaging spectral resolution. On this basis, the light source spectrum of the shooting scene can be accurately estimated, and the color restoration parameters corresponding to the shooting scene can then be accurately obtained, so as to correct the color deviation between the photos taken by the camera and the real scene observed by the human eye, make the overall color perception of the captured image match human vision, and improve the user experience.
In some embodiments, performing spectral estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene includes: performing spectral estimation based on the multispectral image of the shooting scene to obtain the local spectral power of each area in the shooting scene; and determining color restoration parameters according to the spectral power includes: determining the local color restoration parameters of each area according to the local spectral power of each area in the shooting scene.

With the above technical solution, when the spectral powers of different areas in the same shooting scene differ, color restoration can be performed for each area, so that the image colors of all areas in the same shooting scene can be accurately restored, improving the accuracy of color restoration of the original imaging image.

In some embodiments, performing spectral estimation based on the multispectral image of the shooting scene to obtain the local spectral power of each area in the shooting scene includes: performing highlight detection on the multispectral image to obtain a highlight area of the multispectral image; and obtaining the local spectral power of each area according to the highlight area.

With this technical solution, since the light source illuminates the photographed objects, the photographed objects may form highlight areas through reflection. It is easier to determine the spectral power of the light source based on the highlight areas, so the local spectral power of each area can be obtained accurately while saving computation.
In some embodiments, the multispectral image includes an auxiliary accessory used for highlight detection, and performing highlight detection on the multispectral image to obtain the highlight area of the multispectral image includes: performing highlight detection on the auxiliary accessory in the multispectral image, and obtaining the highlight area of the multispectral image based on the position of the auxiliary accessory determined to be a highlight.

With this technical solution, the auxiliary accessory can improve the success rate and accuracy of highlight area detection.

In some embodiments, performing highlight detection on the multispectral image to obtain the highlight area of the multispectral image includes: counting the brightness of each pixel in the multispectral image, and taking the area containing the pixels whose brightness ranks within a preset top fraction as the highlight area of the multispectral image; or detecting the brightness of each pixel in the multispectral image, and taking the area containing the pixels whose brightness is greater than a preset brightness threshold as the highlight area of the multispectral image.

With this technical solution, by taking the area containing the top-ranked pixels by brightness, or the area containing the pixels whose brightness exceeds the preset brightness threshold, as the highlight area of the multispectral image, the highlight area can be extracted accurately, improving the success rate and accuracy of highlight area detection.
In some embodiments, obtaining the local spectral power of each region from the highlight region includes: performing principal component analysis on the highlight region to obtain a first principal component vector and a second principal component vector of the highlight region; projecting the image data of the highlight region onto the plane spanned by the first and second principal component vectors of the highlight region; determining a linearly distributed linear cluster from the distribution of the projected image data in that plane; performing principal component analysis on the linear cluster to obtain a first principal component vector of the linear cluster; and obtaining the local spectral power from the first and second principal component vectors of the highlight region and the first principal component vector of the linear cluster.
With this technical solution, performing principal component analysis on the highlight region removes noise from the image data and reduces the computational cost of the subsequent spectral power analysis, making it practical to generate a local spectral power for each region of the shooting scene and thereby improving the accuracy of the local spectral power.
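As an illustrative sketch only (the exact PCA variant is not specified by the embodiment), the first two principal component vectors and the projection onto their plane can be computed with a singular value decomposition; pixel data is assumed to be an N×C matrix of highlight-region samples:

```python
import numpy as np

def first_two_principal_vectors(data):
    """PCA via SVD on mean-centered data; rows are pixels, columns are channels."""
    centered = data - data.mean(axis=0)
    # Rows of vt are the principal directions, ordered by singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0], vt[1]

def project_to_pc_plane(data, v1, v2):
    """Coordinates of each pixel in the plane spanned by v1 and v2."""
    basis = np.stack([v1, v2], axis=1)   # C x 2 basis matrix
    return data @ basis                  # N x 2 plane coordinates

rng = np.random.default_rng(0)
pixels = rng.normal(size=(100, 3))       # stand-in for highlight-region data
v1, v2 = first_two_principal_vectors(pixels)
coords = project_to_pc_plane(pixels, v1, v2)
```

The linear cluster would then be selected from `coords` (e.g. by fitting a line to the densest elongated group), and the same PCA routine applied to it yields the cluster's first principal component vector.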
In some embodiments, obtaining the local spectral power from the first and second principal component vectors of the highlight region and the first principal component vector of the linear cluster includes: performing a pseudo-inverse operation on the first and second principal component vectors of the highlight region to obtain a pseudo-inverse matrix; and obtaining the local spectral power from the pseudo-inverse matrix and the first principal component vector of the linear cluster.
With this technical solution, introducing a pseudo-inverse operation allows the local spectral power corresponding to each local region to be computed accurately, improving the accuracy of the local spectral power.
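The embodiment does not give the exact formula, so the following is only one plausible reading: stack the highlight region's two principal component vectors into a basis matrix, pseudo-invert it, and use the resulting coefficients of the cluster's first principal component vector as the local spectral power estimate in that basis:

```python
import numpy as np

def local_spectral_power(pc1_region, pc2_region, pc1_cluster):
    """Express the cluster direction in the basis of the highlight region's
    first two principal vectors via the pseudo-inverse, then map the
    coefficients back to a C-dimensional spectral-power estimate.
    (Interpretation of the patent text; not a verified implementation.)"""
    B = np.stack([pc1_region, pc2_region], axis=1)   # C x 2 basis matrix
    coeffs = np.linalg.pinv(B) @ pc1_cluster         # 2-vector of coordinates
    return B @ coeffs                                # back to a C-vector

# Toy check: a cluster vector already lying in the span is recovered exactly
pc1 = np.array([1.0, 0.0, 0.0])
pc2 = np.array([0.0, 1.0, 0.0])
cluster = np.array([0.6, 0.8, 0.0])
power = local_spectral_power(pc1, pc2, cluster)
```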
In some embodiments, the electronic device includes a camera module, the local color restoration parameters include a local color correction matrix, and determining the local color restoration parameters of each region from the local spectral power of each region in the shooting scene includes: obtaining the color-card reflectance function, the spectral sensitivity function of the camera module, and the standard observer color matching functions; obtaining the sensor response data of the camera module from the color-card reflectance function, the spectral sensitivity function, and the local spectral power; obtaining the standard observer tristimulus values from the color-card reflectance, the standard observer color matching functions, and the local spectral power; and obtaining the local color correction matrix from the sensor response data of the camera module and the standard observer tristimulus values.
With this technical solution, the local color correction matrix is fitted from the color-card reflectance function, the camera sensitivity function, and the standard observer color matching functions, which improves the accuracy of the color correction matrix and, in turn, the accuracy of color restoration.
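A minimal sketch of this fitting step, using random stand-ins for the measured functions (all array contents here are placeholders, and a least-squares fit is assumed as the fitting method, which the embodiment does not state explicitly):

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl, n_patch = 31, 24                            # wavelength samples, color-card patches
refl = rng.uniform(0.05, 0.95, (n_wl, n_patch))   # color-card reflectance per patch
ssf  = rng.uniform(0.0, 1.0, (n_wl, 3))           # camera spectral sensitivity functions
cmf  = rng.uniform(0.0, 1.0, (n_wl, 3))           # standard observer color matching functions
spd  = rng.uniform(0.0, 1.0, n_wl)                # local spectral power of the scene region

radiance = refl * spd[:, None]    # light reflected by each patch under the local SPD
cam = ssf.T @ radiance            # 3 x n_patch camera sensor responses
xyz = cmf.T @ radiance            # 3 x n_patch standard observer tristimulus values

# Least-squares fit of a 3x3 matrix ccm such that ccm @ cam approximates xyz
ccm, *_ = np.linalg.lstsq(cam.T, xyz.T, rcond=None)
ccm = ccm.T
```

Each region of the scene would use its own `spd`, yielding one local color correction matrix per region.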
In some embodiments, the local color restoration parameters include a local chromatic adaptation transform matrix, and determining the local color restoration parameters of each region from the local spectral power of each region in the shooting scene includes: obtaining the standard observer color matching functions and the spectral power of a target light source; obtaining the white-point tristimulus values of the light source in the shooting scene from the local spectral power and the standard observer color matching functions; obtaining the white-point tristimulus values of the target light source from the spectral power of the target light source and the standard observer color matching functions; and obtaining the local chromatic adaptation transform matrix from the white-point tristimulus values of the light source in the shooting scene and those of the target light source.
With this technical solution, a local chromatic adaptation transform matrix is fitted from the standard observer color matching functions, the spectral power of the target light source, and the local spectral power, so that during color restoration the tristimulus values under the scene light source can be converted into tristimulus values under the target light source, improving the accuracy of color restoration; for example, the target light source may be a D65 illuminant.
In some embodiments, obtaining the local chromatic adaptation transform matrix from the white-point tristimulus values of the scene light source and of the target light source includes: obtaining a first response value from the white-point tristimulus values of the scene light source and a preset chromatic adaptation model, the first response value being the response of the human eye to the long-, medium-, and short-wavelength components of the scene light source; obtaining a second response value from the white-point tristimulus values of the target light source and the chromatic adaptation model, the second response value being the response of the human eye to the long-, medium-, and short-wavelength components of the target light source; and obtaining the local chromatic adaptation transform matrix from the first and second response values.
With this technical solution, the tristimulus values are first converted into the human eye's long-, medium-, and short-wavelength responses to the scene light source and the target light source before the local chromatic adaptation transform matrix is computed, so that the resulting matrix matches human vision more closely.
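A von Kries-style sketch of the two steps above, using the published CAT02 matrix from CIECAM02 as the "preset chromatic adaptation model" (the embodiment names a chromatic adaptation model but does not fix which one, so CAT02 is an assumption here):

```python
import numpy as np

# CAT02 matrix from the CIECAM02 model (maps XYZ to sharpened LMS cone responses)
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def white_point(spd, cmf):
    """White-point tristimulus values: integrate the source's spectral power
    against the standard observer color matching functions."""
    return cmf.T @ spd

def chromatic_adaptation_matrix(xyz_src, xyz_dst):
    """Transform taking colors under the scene light source (white point
    xyz_src) to their appearance under the target light source (xyz_dst)."""
    lms_src = M_CAT02 @ xyz_src            # first response value (eye LMS for scene white)
    lms_dst = M_CAT02 @ xyz_dst            # second response value (eye LMS for target white)
    gain = np.diag(lms_dst / lms_src)      # per-cone von Kries scaling
    return np.linalg.inv(M_CAT02) @ gain @ M_CAT02

xyz_scene = np.array([1.0985, 1.0, 0.3558])   # e.g. white point of illuminant A
xyz_d65   = np.array([0.9504, 1.0, 1.0888])   # white point of D65
cat = chromatic_adaptation_matrix(xyz_scene, xyz_d65)
```

By construction, applying `cat` to the scene white point yields the target white point exactly, which is the defining property of the transform.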
In some embodiments, the color restoration parameters include a color correction matrix and a chromatic adaptation transform matrix, and performing color restoration on the original image using the color restoration parameters to obtain a color-restored image includes: correcting each pixel of the original image with the color correction matrix to obtain a color-corrected image; and transforming the color-corrected image with the chromatic adaptation transform matrix to obtain the color-restored image.
With this technical solution, the color correction matrix converts the camera's sensor response data into standard observer tristimulus values, and the chromatic adaptation transform matrix converts tristimulus values under the scene light source into tristimulus values under the target light source, thereby achieving color restoration.
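The per-pixel pipeline above can be sketched as two matrix multiplications over the flattened image (illustrative only; the image layout H×W×3 is an assumption):

```python
import numpy as np

def color_restore(raw_rgb, ccm, cat):
    """Apply the color correction matrix, then the chromatic adaptation
    transform matrix, to every pixel of an H x W x 3 image."""
    h, w, _ = raw_rgb.shape
    pixels = raw_rgb.reshape(-1, 3).T   # 3 x N column-vector pixels
    xyz = ccm @ pixels                  # camera responses -> tristimulus values
    adapted = cat @ xyz                 # scene light source -> target light source
    return adapted.T.reshape(h, w, 3)

img = np.ones((2, 2, 3))
out = color_restore(img, 2.0 * np.eye(3), np.eye(3))   # toy matrices
```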
In some embodiments, performing spectral estimation on the multispectral image to obtain the spectral power of the light sources in the shooting scene includes: performing spectral estimation on the multispectral image to obtain both the spectral power of the light sources and light source distribution information. After color restoration of the original image using the color restoration parameters to obtain the color-restored image, the method further includes: determining, from the light source distribution information, critical pixels in the color-restored image, the critical pixels lying in light source boundary regions; and smoothing the critical pixels to obtain a smoothed image.
With this technical solution, the critical pixels in the color-restored image are smoothed based on the light source distribution information, avoiding large color differences in the light source boundary regions and achieving smooth color transitions, thereby improving the color restoration result.
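One simple way to realize the smoothing step, shown here as a sketch with a neighborhood-mean filter applied only at boundary pixels (the filter choice and radius are assumptions; the embodiment only says the critical pixels are smoothed):

```python
import numpy as np

def smooth_critical_pixels(image, boundary_mask, radius=1):
    """Replace each pixel flagged in boundary_mask with the mean of its
    local neighborhood, leaving all other pixels untouched."""
    out = image.copy()
    h, w = image.shape[:2]
    ys, xs = np.nonzero(boundary_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

# Toy example: a hard step between two light-source zones
img = np.zeros((3, 3))
img[:, 2] = 1.0
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True                      # a critical pixel on the boundary
smoothed = smooth_critical_pixels(img, mask)
```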
In some embodiments, performing spectral estimation on the multispectral image to obtain the spectral power of the light sources in the shooting scene includes: in response to a request to enter a color fidelity mode, generating a human-computer interaction interface for setting light source information of the shooting scene; obtaining the user-set light source information of the shooting scene from the interface; and performing light source spectral estimation from the light source information and the multispectral image to obtain the spectral power of the light sources in the shooting scene.
With this technical solution, combining the acquired light source information of the shooting scene with the multispectral image allows the spectral power of the scene light sources to be estimated more accurately. Moreover, spectral estimation is performed only in color fidelity mode, which reduces the power consumption of the electronic device and improves the user experience when the user's demand for color restoration is low.
In some embodiments, obtaining a multispectral image of the shooting scene includes: capturing initial multispectral images of the shooting scene with multiple multispectral image sensors; and merging the initial multispectral images to obtain the multispectral image of the shooting scene.
With this technical solution, merging the multiple initial multispectral images gives the resulting multispectral image of the shooting scene more information, making the spectral power estimation more accurate.
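One way to merge the captures, shown as a sketch that simply stacks the band channels of the registered sensor images (band counts and the registration step are assumptions; the embodiment does not specify the merging operation):

```python
import numpy as np

def merge_multispectral(initial_images):
    """Stack per-sensor multispectral captures along the channel axis so the
    merged image carries every sensor's bands. Assumes the captures are
    already spatially registered to the same H x W grid."""
    return np.concatenate(initial_images, axis=-1)

a = np.zeros((4, 4, 4))   # 4-band capture from sensor 1
b = np.ones((4, 4, 5))    # 5-band capture from sensor 2 (staggered peaks)
merged = merge_multispectral([a, b])
```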
In a second aspect, an embodiment of the present application provides a computer-readable storage medium including computer instructions which, when run on an electronic device, cause the electronic device to execute the image processing method of the first aspect.
In a third aspect, an embodiment of the present application provides an electronic device including a processor and a memory, the memory being configured to store instructions and the processor being configured to invoke the instructions in the memory, so that the electronic device executes the image processing method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer program product which, when run on an electronic device (such as a computer), causes the electronic device to execute the image processing method of the first aspect.
In a fifth aspect, an apparatus is provided that has the functionality to implement the electronic device behavior in the method provided in the first aspect. The functionality may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
It can be understood that the computer-readable storage medium of the second aspect, the electronic device of the third aspect, the computer program product of the fourth aspect, and the apparatus of the fifth aspect all correspond to the method of the first aspect; for the beneficial effects they can achieve, reference may therefore be made to the beneficial effects of the corresponding method provided above, which are not repeated here.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of color restoration of a captured image based on a color correction matrix according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the software structure of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an application scenario of the image processing method according to an embodiment of the present application;
FIG. 5 is a schematic architecture diagram of color restoration performed by an electronic device according to an embodiment of the present application;
FIG. 6 is a flowchart of color restoration performed by an electronic device according to an embodiment of the present application;
FIG. 7 is a schematic architecture diagram of color restoration performed by an electronic device according to another embodiment of the present application;
FIG. 8 is a schematic diagram of an interface of an electronic device for setting light source information of a shooting scene in high-fidelity mode according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an interface of an electronic device for setting a highlight region according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the multispectral filter array of a multispectral image sensor according to an embodiment of the present application;
FIG. 11 is a schematic diagram of the spectral sensitivity curves of two multispectral image sensors with staggered peaks according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a scenario in which an electronic device performs spectral estimation according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of spectral estimation performed by an electronic device according to an embodiment of the present application;
FIG. 14 is a schematic diagram of placing an auxiliary accessory in a shooting scene according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of a multispectral color temperature sensor according to an embodiment of the present application;
FIG. 16 is a schematic flowchart of the image processing method according to an embodiment of the present application.
DETAILED DESCRIPTION
It should be noted that in this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The terms "first", "second", "third", "fourth", and so on (if any) in the specification, claims, and drawings of this application are used to distinguish similar objects and are not intended to describe a particular order or sequence.
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of this application should not be construed as preferable to or more advantageous than other embodiments or designs. Rather, the use of such words is intended to present the relevant concepts in a concrete manner.
Spectrum: the pattern obtained when polychromatic light is split by a dispersive system and the dispersed monochromatic components are arranged in order of wavelength or frequency; its full name is the optical spectrum.
Multispectral image: an image containing many bands, where each band is a grayscale image representing the scene brightness as seen through the sensitivity of the sensor used to produce that band.
Spectral reflectance: when a light source illuminates the surface of an object, the object selectively reflects electromagnetic waves of different wavelengths; spectral reflectance is the ratio of the luminous flux reflected by the object in a given band to the luminous flux incident on the object, and characterizes an intrinsic property of the object's surface.
Spectral Sensitivity Function (SSF): a function that measures sensitivity to light of different wavelengths.
Color Matching Function (CMF): the amounts of the red, green, and blue primaries required to match each monochromatic component of the equal-energy spectrum; it is the basic data for color measurement and calculation.
Tristimulus values: a measure of the stimulation of the three primaries that produces a given color sensation on the human retina, denoted X (red primary stimulus), Y (green primary stimulus), and Z (blue primary stimulus).
Chromatic Adaptation (CA): the phenomenon whereby, when viewing conditions change, the human visual system automatically adjusts the relative sensitivities of the three types of retinal cone cells so that the perceived color (color appearance) of a given physical surface remains as constant as possible.
Spectral Power Distribution (SPD): the distribution of a light source's radiant power over wavelength, also referred to simply as spectral power.
The CIE 1931 XYZ color space (also called the CIE 1931 color space) is a mathematically defined color space created by the International Commission on Illumination (CIE) in 1931.
The light sensitivity of the imaging sensor in a camera differs from that of the human eye. Therefore, when a camera shoots a scene, the raw color information it captures differs to some extent from the color information of the same scene as observed directly by the human eye. For example, mahogany-colored solid-wood furniture may appear blood red in the captured image, and yellow-green curtains may appear reddish. To keep the colors of the camera's final image consistent with the true colors observed by the human eye, the raw image captured by the camera can undergo color restoration, whose role is to convert the color information of the scene as captured by the camera into the color information as perceived by the human eye.
Lighting conditions affect the color information obtained by both the human eye and the camera, and the illumination of a real shooting scene is uncontrollable: for example, multiple types of light sources such as natural light and fluorescent light may coexist. It is difficult for the camera to restore colors accurately based on the actual scene illumination, so the captured photo shows color deviations from the real scene as observed by the human eye.
As shown in FIG. 1, the electronic device may obtain in advance standard color cards photographed by the camera under various typical light sources, acquiring known data such as the spectral reflectance of the standard color card, the camera spectral sensitivity function, and the standard observer color matching functions; from these known data, color correction matrices (CCMs) for several typical light sources are calibrated. For example, when the camera shoots an actual scene, it may determine the two typical light sources closest to the light sources in that scene together with their weights, compute the weighted sum of the color correction matrices of those two typical light sources using those weights, and use the weighted sum as the color correction matrix for the light sources in the shooting scene.
For example, the scene in FIG. 1 contains a mixture of light sources such as fluorescent light and natural light; the color correction matrices of the two closest typical light sources are CCM_A and CCM_D, weighted 40% and 60% respectively, so 40% * CCM_A + 60% * CCM_D can be used as the color correction matrix for the light sources in the shooting scene.
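The weighted combination in this example amounts to a simple matrix interpolation; as an illustration with hypothetical calibrated matrices (the identity-based values are placeholders, not real calibration data):

```python
import numpy as np

# Hypothetical calibrated matrices for the two closest typical light sources
ccm_a = np.eye(3)          # stand-in for CCM_A
ccm_d = 2.0 * np.eye(3)    # stand-in for CCM_D

w_a, w_d = 0.4, 0.6        # scene light is 40% like source A, 60% like source D
ccm_scene = w_a * ccm_a + w_d * ccm_d
```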
The above approach takes a linear combination of the color correction matrices of two typical light sources directly as the color correction matrix for the scene light sources, which makes the matrix less accurate and accurate color restoration difficult. In addition, the approach relies on the calibration of typical light sources: if too few typical light sources are calibrated, or if the calibration is wrong, it is difficult to perform color restoration based on the calibrated light sources.
In some image processing methods, the electronic device may also perform the following steps. S1: taking the sum of the p-parameters of each channel as the objective function, compute the optimal spectral transformation matrix from the camera spectral sensitivity functions to the CIE 1931 XYZ color matching functions, subject to color-difference constraints for ideal reflective surfaces under several typical light sources; here the p-parameter is defined as the degree of similarity between a pair of sensitivity functions s1(λ) and s2(λ) over wavelength λ. S2: for the raw image to be color corrected, apply the spectral transformation matrix obtained in S1 directly to the raw RGB response of each pixel to convert it into the CIE 1931 XYZ color space; at the same time, apply the same matrix to the light source color response estimated by the automatic white balance module to convert it into the CIE 1931 XYZ color space. S3: use the CAT02 chromatic adaptation transform of the CIECAM02 color appearance model to compute the corresponding color of the object under a standard light source after chromatic adaptation. S4: convert the chromatically adapted CIE 1931 XYZ tristimulus values of each pixel into the target color space used for the camera's final output or for file storage, completing the color correction pipeline.
By computing the spectral transformation between the camera spectral sensitivity functions and the CIE 1931 color matching functions, this approach converts the camera's raw RGB responses into the device-independent CIE 1931 XYZ space and uses the CAT02 chromatic adaptation transform to compute the chromatically adapted color responses under a reference light source, yielding a color correction pipeline that does not depend on pre-calibrated parameters. However, when performing color correction, this approach considers only the light source spectral function and the object spectral reflectance function; the parameters considered are too simple to determine the on-site spectrum of the shooting scene accurately, so the color restoration quality is low.
To solve the above technical problems, an embodiment of the present application further provides an image processing method that can accurately obtain the color restoration parameters for a shooting scene, correcting the color deviation between the photo taken by the camera and the real scene observed by the human eye, so that the overall color perception of the captured image matches human vision.
The image processing method provided in the embodiments of this application can be applied to an electronic device, and the electronic device can communicate with other electronic devices or servers through a communication network. The electronic device of this application may include at least one of a mobile phone, a foldable electronic device, a tablet computer, a personal computer (PC), a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a smart home device, and a smart city device; the embodiments of this application place no particular restriction on the specific type of electronic device. The communication network may be a wired network or a wireless network.
For example, the communication network may be a local area network (LAN) or a wide area network (WAN) such as the Internet. When the communication network is a local area network, it may illustratively be a short-range communication network such as a wireless fidelity (Wi-Fi) hotspot network, a Wi-Fi P2P network, a Bluetooth network, a ZigBee network, or a near field communication (NFC) network. When the communication network is a wide area network, it may illustratively be a third-generation mobile communication technology (3G) network, a fourth-generation mobile communication technology (4G) network, a fifth-generation mobile communication technology (5G) network, a future-evolved public land mobile network (PLMN), or the Internet.
In some embodiments, one or more apps (applications) may be installed on the electronic device. An app, short for application, is a software program capable of implementing one or more specific functions, for example a communication application, a video application, an audio application, an image-capture application, or a cloud desktop application. Communication applications may include, for example, SMS applications. Image-capture applications may include, for example, camera applications (the system camera or third-party camera applications). Video applications may include, for example, Huawei Video. Audio applications may include, for example, Huawei Music. The applications mentioned in the following embodiments may be system applications pre-installed on the electronic device at the factory, or third-party applications downloaded from the network or obtained from other electronic devices by the user while using the device.
Electronic devices include, but are not limited to, devices running Windows or other operating systems.
FIG. 2 shows a schematic structural diagram of an electronic device 10.
The electronic device 10 may include a processor 110, an external memory interface 120, an internal memory 121, an antenna 1, an antenna 2, a mobile communication module 130, a wireless communication module 140, an audio module 150, a sensor module 160, a camera module 170, a display screen 180, and the like.
It is to be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the electronic device 10. In other embodiments of the present application, the electronic device 10 may include more or fewer components than shown in the figure, combine some components, split some components, or arrange the components differently. The components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated in one or more processors.
The processor 110 may also be provided with a memory for storing instructions and data. In some embodiments, the memory in the processor 110 may be a cache. The memory may store instructions or data that the processor 110 has used or uses frequently. If the processor 110 needs to use the instructions or data, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface. The processor 110 may be connected to the audio module, the wireless communication module, the display, the camera, and other modules through at least one of the above interfaces.
It is understandable that the interface connection relationships between the modules illustrated in this embodiment of the present application are merely schematic and do not constitute a structural limitation on the electronic device 10. In other embodiments of the present application, the electronic device 10 may also adopt interface connection methods different from those in the above embodiment, or a combination of multiple interface connection methods.
The wireless communication function of the electronic device 10 can be implemented through the antenna 1, the antenna 2, the mobile communication module 130, the wireless communication module 140, the modem processor, the baseband processor, and the like.
The mobile communication module 130 may provide solutions for wireless communication applied to the electronic device 10, including 2G/3G/4G/5G. The mobile communication module 130 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. In some embodiments, at least some functional modules of the mobile communication module 130 may be arranged in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 130 may be arranged in the same device as at least some modules of the processor 110.
The wireless communication module 140 may provide solutions for wireless communication applied to the electronic device 10, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), Bluetooth low energy (BLE), ultra wide band (UWB), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 140 may be one or more devices integrating at least one communication processing module.
In some embodiments, the electronic device 10 can communicate with a network and other electronic devices through a wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 10 can implement the display function through the GPU, the display screen 180, the application processor, and the like. The GPU is a microprocessor for image processing, and connects the display screen 180 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
The camera module 170 includes a camera. The display screen 180 is used to display images, videos, and the like. The display screen 180 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 10 may include one or more display screens 180.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 10. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function.
The internal memory 121 can be used to store computer-executable program code, and the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required for at least one function (such as a sound playback function or an image playback function), and the like. The data storage area may store data created during the use of the electronic device 10, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 executes various functional methods or data processing of the electronic device 10 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The audio module 150 is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. The audio module 150 can also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be arranged in the processor 110, or some functional modules of the audio module 150 may be arranged in the processor 110.
The software system of the electronic device 10 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. This embodiment of the present application takes the Android system with a layered architecture as an example to describe the software structure of the electronic device 10.
FIG. 3 is a block diagram of the software structure of the electronic device 10 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: the application layer, the application framework layer, the Android runtime (ART) and native C/C++ libraries, the hardware abstraction layer (HAL), and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 3, the application packages may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like.
The window manager provides the window manager service (WMS). The WMS can be used for window management, window animation management, and surface management, and serves as a transit station for the input system.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying images. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short-message notification icon may include a view for displaying text and a view for displaying an image.
The resource manager provides various resources for applications, such as localized strings, icons, images, layout files, and video files.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, and the like. The notification manager may also present a notification that appears in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or an indicator light flashes.
The activity manager can provide the activity manager service (AMS). The AMS can be used to start, switch, and schedule system components (such as activities, services, content providers, and broadcast receivers) and to manage and schedule application processes.
The input manager can provide the input manager service (IMS). The IMS can be used to manage the input of the system, such as touchscreen input, key input, and sensor input. The IMS takes events from input device nodes and, through interaction with the WMS, distributes the events to the appropriate windows.
The Android runtime layer includes the core library and the Android runtime (ART). ART is responsible for converting source code into machine code, mainly using ahead-of-time (AOT) compilation and just-in-time (JIT) compilation.
The core library is mainly used to provide the functions of basic Java class libraries, such as basic data structures, mathematics, IO, tools, databases, and networking. The core library provides an API for users to develop Android applications.
The native C/C++ libraries can include multiple functional modules, such as the surface manager, the media framework, libc, OpenGL ES, SQLite, and Webkit.
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications. The media framework supports playback and recording of multiple commonly used audio and video formats, as well as static image files. The media library can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. OpenGL ES provides drawing and manipulation of 2D and 3D graphics in applications. SQLite provides a lightweight relational database for the applications of the electronic device 10.
The hardware abstraction layer runs in user space, encapsulates the kernel-layer drivers, and provides a calling interface to the upper layers.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes an example application scenario of the image processing method provided in an embodiment of the present application with reference to FIG. 4.
In an embodiment of the present application, the electronic device 10 is taken as a mobile phone with a camera function. The mobile phone may have a front camera and/or a rear camera, which is not limited in this embodiment of the present application. As shown in FIG. 4, the electronic device 10 includes a camera module 170 and a display screen 180, and the camera module 170 includes a multispectral image sensor and an RGB image sensor.
In other embodiments of the present application, the electronic device 10 may also be a vehicle-mounted camera device or the like; the embodiments are not limited thereto. The camera module 170 and the display screen 180 may also be arranged on another electronic device that is communicatively connected to the electronic device 10. For example, the other electronic device may be communicatively connected to the electronic device 10 wirelessly. The electronic device 10 may be used to control the RGB image sensor in the camera module to image the shooting scene to obtain a first image, and to control the multispectral image sensor to image the shooting scene to obtain a second image. The electronic device 10 is also used to perform color restoration on the first image based on the second image, and to control the display screen 180 to display the color-restored image; that is, the color-restored image serves as the finally displayed captured image.
In application scenarios with high-fidelity requirements for the colors of objects in the shooting scene, for example, in live-streaming sales over mobile phone video, the color of many items for sale serves as an important sales attribute; such items may include jade, furniture, lipstick, clothes, and colorful pictures. A deviation in the color of jade may have a great impact on its price; therefore, when shooting jade with a mobile phone camera, high color-restoration accuracy is required. The color difference produced for a lipstick can be enough to cause a deviation in its shade number. If the colors of the above items cannot be accurately restored after imaging by a mobile phone camera, disputes may well arise due to color differences. Using the image processing method provided in the embodiments of the present application, the mobile phone can accurately restore the colors of the image captured by the camera module 170 and correct the color deviation between the captured image and the real scene observed by the human eye, so that the overall color perception of the image finally displayed on the display screen 180 matches human vision, improving the user experience.
FIG. 5 is a core architecture diagram of color restoration performed by the electronic device 10 according to an embodiment of the present application. The electronic device 10 may include a multispectral image sensor, an RGB image sensor, a processor, and a display screen. The color restoration performed by the electronic device 10 may include the following implementation process:
51. The multispectral image sensor images the shooting scene to obtain a multispectral image of the shooting scene.
52. The RGB image sensor images the shooting scene to obtain an RGB image of the shooting scene (that is, the original captured image).
53. The processor performs spectrum estimation based on the multispectral image and the light source information of the shooting scene to obtain the spectral power and light source distribution information of the light source in the shooting scene.
In some embodiments, the light source information of the shooting scene can be set by the user. For example, a shooting application is installed on the electronic device 10. After the electronic device 10 starts the shooting application, the multispectral image sensor can image the shooting scene to obtain a multispectral image of the shooting scene, and the RGB image sensor can image the shooting scene to obtain an RGB image of the shooting scene. The shooting application also has a human-computer interaction interface for setting the light source information of the shooting scene. For example, the light source information that can be set includes: the number of light sources, the highlight area, the light source position, the light source boundary, the light source type, and the like.
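The patent does not fix a particular spectrum estimation algorithm for step 53. As a non-limiting sketch, one common approach is to recover the illuminant's spectral power distribution (SPD) from the multispectral response by regularized least squares, assuming the sensor's per-band spectral sensitivities are known from calibration; all names, shapes, and parameters below are illustrative assumptions, not the patented method.

```python
import numpy as np

def estimate_illuminant_spd(response, sensitivities, reg=1e-3):
    """Estimate an illuminant SPD from a multispectral response vector via
    Tikhonov-regularized least squares (an assumed, generic technique).

    response      -- (n_bands,) mean sensor response, e.g. over a highlight region
    sensitivities -- (n_bands, n_wavelengths) calibrated band sensitivities
    reg           -- regularization weight (stabilizes the ill-posed inversion)
    """
    S = np.asarray(sensitivities, dtype=float)
    r = np.asarray(response, dtype=float)
    n = S.shape[1]
    # Solve min ||S p - r||^2 + reg ||p||^2  =>  (S^T S + reg I) p = S^T r
    p = np.linalg.solve(S.T @ S + reg * np.eye(n), S.T @ r)
    return np.clip(p, 0.0, None)  # physical SPDs are non-negative

# Toy example: 16 bands (as in the MSFA of FIG. 10), 31 wavelengths (400-700 nm)
rng = np.random.default_rng(0)
S = np.abs(rng.normal(size=(16, 31)))
true_spd = np.ones(31)                      # a flat illuminant
est = estimate_illuminant_spd(S @ true_spd, S, reg=1e-6)
print(est.shape)  # (31,)
```

Because 16 bands cannot uniquely determine 31 spectral samples, real systems constrain the solution further (e.g., smoothness priors or illuminant basis functions); the regularizer above is the simplest such constraint.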
54. The processor obtains color restoration parameters according to the spectral power of the light source and the light source distribution information. The color restoration parameters include a color correction matrix and a chromatic adaptation transform matrix.
In some embodiments, the color correction matrix can be used to correct the RGB image, converting it from the RGB color space to the XYZ color space. The chromatic adaptation transform matrix can be used to perform a chromatic adaptation transform on the image in the XYZ color space, adapting the XYZ image data under the current shooting-scene light source to XYZ image data under a target light source. For example, the target light source is a D65 light source (also known as international standard artificial daylight, with a color temperature of 6500 K).
55. The processor performs color restoration on the RGB image according to the color restoration parameters to obtain a color-restored RGB image.
In some embodiments, the color-restored RGB image may be output to the display screen for display.
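Steps 54 and 55 above can be sketched as follows. The matrices here are illustrative assumptions rather than the device's calibrated parameters: the color correction matrix shown is the standard sRGB-to-XYZ primaries matrix, and the chromatic adaptation uses the widely used Bradford method (the patent does not mandate a specific CAT).

```python
import numpy as np

# Bradford cone-response matrix (a standard choice; the patent does not fix one)
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def chromatic_adaptation_matrix(src_white_xyz, dst_white_xyz):
    """3x3 matrix mapping XYZ under the source illuminant to XYZ under the
    destination illuminant (von Kries scaling in Bradford cone space)."""
    src_lms = BRADFORD @ src_white_xyz
    dst_lms = BRADFORD @ dst_white_xyz
    gain = np.diag(dst_lms / src_lms)
    return np.linalg.inv(BRADFORD) @ gain @ BRADFORD

# Illustrative color-correction matrix (sRGB-to-XYZ primaries; a real device
# would use a CCM calibrated per estimated illuminant, per step 54).
CCM = np.array([[0.4124, 0.3576, 0.1805],
                [0.2126, 0.7152, 0.0722],
                [0.0193, 0.1192, 0.9505]])

WHITE_A   = np.array([1.09850, 1.00000, 0.35585])  # illuminant A (tungsten)
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])  # target: D65

cat = chromatic_adaptation_matrix(WHITE_A, WHITE_D65)

rgb = np.array([0.5, 0.4, 0.3])       # one linear-RGB pixel
xyz_scene = CCM @ rgb                  # step 54: RGB -> XYZ (scene illuminant)
xyz_d65 = cat @ xyz_scene              # step 55: adapt scene XYZ to D65
print(xyz_d65.shape)  # (3,)
```

A sanity property of any correct CAT is that it maps the source white point exactly onto the destination white point; the matrix above satisfies this by construction.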
Referring to FIG. 6 and FIG. 7, FIG. 6 is a flowchart of color restoration performed by the electronic device 10 according to an embodiment of the present application, and FIG. 7 is an overall architecture diagram of color restoration performed by the electronic device 10 according to an embodiment of the present application. The color restoration process is further described below with reference to FIG. 6 and FIG. 7.
Step 601: if an information configuration request for the color fidelity mode is received, generate a human-computer interaction interface for setting the light source information of the shooting scene, and obtain the scene light source information set by the user from the human-computer interaction interface.
In an embodiment of the present application, a shooting application may be installed on the electronic device 10, and the shooting application may provide a color fidelity mode. In the color fidelity mode, the electronic device 10 can accurately restore the colors of the image of the shooting scene.
Referring to FIG. 8, FIG. 8 is a schematic diagram of the interface for setting the light source information of the shooting scene when the electronic device 10 is in the color fidelity mode. If the user has a high-fidelity color shooting requirement, the electronic device 10 can start the shooting application and enter the color fidelity mode in response to a user operation instruction. The electronic device 10 can also enter the information configuration interface of the color fidelity mode in response to a user operation instruction; the information configuration interface is a human-computer interaction interface for setting the light source information of the shooting scene. The electronic device 10 can obtain the shooting-scene light source information set by the user from this human-computer interaction interface.
In some embodiments, the electronic device 10 may be configured to perform color restoration processing only in the color fidelity mode, which can improve the photographing performance of the electronic device 10 and save its power consumption. When entering the color fidelity mode, the electronic device 10 may also first display the information configuration interface. The information configuration interface may include an "OK" icon. After the user completes the information configuration and taps the "OK" icon, the information configuration interface can be closed, and the shooting application displays a preview image shot in the color fidelity mode.
For example, in FIG. 8, the light source information that can be set in the human-computer interaction interface includes: the number of light sources, the highlight area, the light source position, the light source boundary, and the light source type; the light source types include, for example, daylight, halogen light, fluorescent light, and light emitting diode (LED) light.
Referring to FIG. 9, FIG. 9 is a schematic diagram of setting a highlight area. When light from a light source strikes an object and is reflected into the human eye, each part of the object has a corresponding brightness as perceived by the eye. The user can set the highlight area in the displayed shooting scene; the brightest point on the object can be set as the highlight area. For example, the human eye observes a highlight area 802 on the jade 801, and a mark 803 can be placed at the highlight area 802.
It can be understood that in this embodiment the mode in which color restoration is performed is called the color fidelity mode. In actual application, it may also be given a similar name such as "color mode", "true color mode", or "dedicated color mode", which is not limited in the embodiments of the present application.
Step 602: image the shooting scene with the multispectral image sensor to obtain a multispectral image of the shooting scene.
A multispectral image has multiple frequency bands. For example, referring to FIG. 10, FIG. 10 is a schematic diagram of a multispectral filter array (MSFA); in FIG. 10, each number represents a frequency band, and the MSFA has 16 frequency bands. The higher the spectral resolution and the larger the spatial resolution of the multispectral image sensor, the higher the accuracy of the spectral estimation.
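For a 4x4 repeating MSFA pattern like the one in FIG. 10, each of the 16 bands can be pulled out of the raw mosaic by strided slicing. This is a simplified sketch of the idea (a real pipeline would additionally interpolate each band back to full resolution, and the 4x4 layout is an assumption based on the 16-band figure):

```python
import numpy as np

def split_msfa_bands(raw, pattern=4):
    """Split a raw MSFA mosaic into per-band sub-images by strided slicing.
    Returns an array of shape (pattern*pattern, H//pattern, W//pattern)."""
    bands = [raw[i::pattern, j::pattern]
             for i in range(pattern) for j in range(pattern)]
    return np.stack(bands)

raw = np.arange(8 * 8).reshape(8, 8)   # toy 8x8 mosaic with a 4x4 repeating cell
cube = split_msfa_bands(raw)
print(cube.shape)  # (16, 2, 2)
```

Each sub-image samples one band at a quarter of the spatial resolution in each direction, which is the spectral-versus-spatial resolution trade-off noted above.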
In some embodiments, multiple multispectral image sensors may also be installed on the electronic device, with the narrow-band distributions of different multispectral image sensors staggered. As shown in FIG. 11, FIG. 11 is a schematic diagram of the spectral sensitivity curves of two multispectral image sensors with staggered peaks. The electronic device 10 can obtain initial multispectral images of the shooting scene through the multiple multispectral image sensors, and merge the initial multispectral images to obtain the multispectral image of the shooting scene. In this embodiment, after the initial multispectral images generated by the multiple multispectral image sensors are merged, the merged multispectral image is used as the input for spectral estimation, which can improve the spectral resolution of the multispectral image and thus the accuracy of the subsequent spectral estimation.
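A minimal sketch of this merging step, under two stated assumptions the patent leaves open: the sensors' images are already spatially registered, and each band's center wavelength is known from calibration. The bands from all sensors are then simply interleaved in wavelength order, doubling the spectral sampling density:

```python
import numpy as np

def merge_multispectral(cubes, centers):
    """Merge initial multispectral images from sensors with staggered band
    peaks, ordering the combined bands by center wavelength.

    cubes   -- list of arrays, each (n_bands_i, H, W), spatially registered
    centers -- list of arrays of matching band center wavelengths (nm)
    """
    all_bands = np.concatenate(cubes, axis=0)
    all_centers = np.concatenate(centers)
    order = np.argsort(all_centers)        # interleave by wavelength
    return all_bands[order], all_centers[order]

# Two toy sensors with 8-band combs offset by 10 nm (as in FIG. 11's idea)
h, w = 4, 4
cube_a = np.zeros((8, h, w)); centers_a = np.arange(410, 570, 20)  # 410, 430, ...
cube_b = np.ones((8, h, w));  centers_b = np.arange(420, 580, 20)  # 420, 440, ...
merged, centers = merge_multispectral([cube_a, cube_b], [centers_a, centers_b])
print(merged.shape)  # (16, 4, 4)
```

The merged cube has twice as many bands at half the wavelength spacing, which is exactly the spectral-resolution gain the embodiment describes.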
For example, referring to FIG. 12, FIG. 12 is a schematic diagram of a spectral estimation scene. After the multispectral image is acquired, spectral estimation can be performed based on the multispectral image; for the spectral estimation step, see step 603.
Step 603: perform spectral estimation based on the multispectral image and the light source information of the shooting scene to obtain the spectral power and light source distribution information of the light source in the shooting scene.
The spectral power of a light source may also be called the light source spectrum. The light source distribution information may include position information of the light source's distribution in the shooting scene.
Referring to FIG. 13, in some embodiments, performing spectral estimation based on the multispectral image in step 603 to obtain the spectral power and light source distribution information of the light source in the shooting scene may include:
Step 6031: perform highlight detection on the multispectral image to obtain the highlight areas in the multispectral image.
高光为光源照射到物体然后反射到人的眼睛里时,人眼所观测到的物体上的亮点。Highlights are the bright spots on an object observed by the human eye when a light source shines on the object and the light is reflected into the eye.
例如,在电子设备10实际进行高光检测时,可统计多光谱图像中各像素的亮度,将各像素中亮度为前5%的像素作为高光区域,也可设置亮度阈值,若像素超过该亮度阈值,该像素可作为高光区域。本申请实施例对此不作限定。For example, when the electronic device 10 actually performs highlight detection, the brightness of each pixel in the multispectral image can be counted, and the pixels with the top 5% brightness among the pixels can be used as highlight areas. A brightness threshold can also be set. If a pixel exceeds the brightness threshold, the pixel can be used as a highlight area. The embodiments of the present application are not limited to this.
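The two detection rules in the preceding paragraph (brightest 5% of pixels, or an absolute brightness threshold) can be sketched as follows; this is an illustrative Python/NumPy sketch, with `detect_highlights` a hypothetical name and mean-over-bands assumed as the brightness measure:

```python
import numpy as np

def detect_highlights(ms_image, top_fraction=0.05, threshold=None):
    """Return a boolean highlight mask for a multispectral image of
    shape (H, W, bands); brightness is taken as the mean over bands."""
    brightness = ms_image.mean(axis=-1)
    if threshold is None:
        # Top-fraction rule: keep pixels at or above the (1 - f) quantile.
        threshold = np.quantile(brightness, 1.0 - top_fraction)
    return brightness >= threshold
```

Passing `threshold` selects the absolute-threshold rule; omitting it selects the top-fraction rule.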
又例如,可通过双色反射模型、中心环绕滤波器或者暗通道等方法初步检测出高光区域,然后,通过低通滤波器在初步检测出的高光区域中,剔除被误判为高光区域的边缘,从而得到多光谱图像上的高光区域。For another example, the highlight area can be preliminarily detected through a two-color reflection model, a center surround filter or a dark channel method, and then, the edges of the preliminarily detected highlight area that are misjudged as the highlight area are removed through a low-pass filter to obtain the highlight area on the multispectral image.
在一些实施例中,参考图14所示,还可在拍摄场景中设置辅助配件1301。电子设备10可对多光谱图像中的辅助配件进行高光检测,得到多光谱图像中处于高光状态的辅助配件及这些处于高光状态的辅助配件的位置信息,进而可以基于这些处于高光状态的辅助配件的位置信息确定多光谱图像上的高光区域。该实施例,通过辅助配件能够提高高光区域检测的成功率,提高光谱估计的准确度。In some embodiments, as shown in FIG. 14 , an auxiliary accessory 1301 may be further provided in the shooting scene. The electronic device 10 may perform highlight detection on the auxiliary accessories in the multispectral image, obtain the auxiliary accessories in the multispectral image that are in a highlight state and the position information of these auxiliary accessories in the highlight state, and then determine the highlight area on the multispectral image based on the position information of these auxiliary accessories in the highlight state. In this embodiment, the auxiliary accessories can improve the success rate of highlight area detection and improve the accuracy of spectral estimation.
辅助配件可以为中性色且能反射光源发出的光的物体,辅助配件的大小和位置可以根据实际场景进行调整。例如,辅助配件为具有光泽表面的灰色球体,灰色球体均匀分布于拍摄场景中,灰色球体能够反射光源的光形成的高光,电子设备10可以基于灰色球体检测出的高光为光谱估计提供更准确的高光区域数据。The auxiliary accessory may be an object of neutral color that can reflect the light emitted by the light source, and the size and position of the auxiliary accessory may be adjusted according to the actual scene. For example, the auxiliary accessory is a gray sphere with a glossy surface, and the gray spheres are evenly distributed in the shooting scene. The gray spheres can reflect the highlights formed by the light from the light source, and the electronic device 10 can provide more accurate highlight area data for spectral estimation based on the highlights detected by the gray spheres.
在一些实施例中,还可结合步骤601中获取的场景光源信息确定高光区域。例如可从步骤601中的人机交互界面中获取用户标记的高光区域,在多光谱图像中,确定与标记的高光区域对应的位置,将该位置也作为多光谱图像中的高光区域。In some embodiments, the highlight area may be determined in combination with the scene light source information obtained in step 601. For example, the highlight area marked by the user may be obtained from the human-computer interaction interface in step 601, and the position corresponding to the marked highlight area may be determined in the multispectral image, and the position may also be used as the highlight area in the multispectral image.
步骤6032,对高光区域进行主成分分析,得到第一主成分向量和第二主成分向量。Step 6032: Perform principal component analysis on the highlight area to obtain the first principal component vector and the second principal component vector.
主成分分析(Principal Component Analysis,PCA)是一种统计方法。通过正交变换将一组可能存在相关性的变量转换为一组线性不相关的变量,转换后的这组变量叫主成分。PCA是一种降维方法,常用于对高维数据集作降维,它的主要思想是将高维的特征映射到k维上。这k维就是主成分,并能保留原始变量的大部分信息,这里的信息是指原始变量的方差,在各主成分中方差最大的称为第一主成分向量,方差第二大的称为第二主成分向量。Principal Component Analysis (PCA) is a statistical method. Through orthogonal transformation, a set of variables that may be correlated is converted into a set of linearly unrelated variables. The converted set of variables is called principal components. PCA is a dimensionality reduction method, which is often used to reduce the dimensionality of high-dimensional data sets. Its main idea is to map high-dimensional features to k dimensions. This k dimension is the principal component and can retain most of the information of the original variable. The information here refers to the variance of the original variable. The one with the largest variance among the principal components is called the first principal component vector, and the one with the second largest variance is called the second principal component vector.
其中,主成分分析可以通过奇异值分解(Singular Value Decomposition,SVD)等方法实现,本申请实施例对此不作限定。Among them, principal component analysis can be implemented through methods such as singular value decomposition (SVD), which is not limited in the embodiments of the present application.
步骤6033,将高光区域的图像数据投影至第一主成分向量和第二主成分向量组成的平面。Step 6033, projecting the image data of the highlight area onto the plane formed by the first principal component vector and the second principal component vector.
步骤6034,根据高光区域的图像数据在该平面的分布,确定光源方向信息。Step 6034, determining the light source direction information according to the distribution of the image data of the highlight area on the plane.
例如,可在投影后的图像数据在平面中的分布中,确定成线性分布的线性簇,线性簇代表光源照射在物体上的镜面反射;然后,对线性簇进行主成分分析,得到线性簇的第一主成分向量,线性簇的第一主成分向量即代表拍摄场景中对应区域的光源方向信息。For example, a linear cluster of linear distribution can be determined in the distribution of the projected image data in the plane, and the linear cluster represents the specular reflection of the light source on the object; then, principal component analysis is performed on the linear cluster to obtain the first principal component vector of the linear cluster, and the first principal component vector of the linear cluster represents the light source direction information of the corresponding area in the shooting scene.
步骤6035,根据光源方向信息、高光区域的第一主成分向量和高光区域的第二主成分向量,得到局部光谱功率。Step 6035, obtaining the local spectral power according to the light source direction information, the first principal component vector of the highlight area, and the second principal component vector of the highlight area.
例如,可以对高光区域的第一主成分向量和高光区域的第二主成分向量进行伪逆运算,得到伪逆矩阵;将所述伪逆矩阵与所述线性簇的第一主成分向量相乘,得到局部光谱功率。For example, a pseudo-inverse operation may be performed on the first principal component vector of the highlight area and the second principal component vector of the highlight area to obtain a pseudo-inverse matrix; the pseudo-inverse matrix is multiplied by the first principal component vector of the linear cluster to obtain the local spectral power.
即,伪逆矩阵包括[高光区域的第一主成分向量的伪逆,高光区域的第二主成分向量的伪逆],局部光谱功率=伪逆矩阵*线性簇的第一主成分向量。That is, the pseudo-inverse matrix includes [the pseudo-inverse of the first principal component vector of the highlight area, the pseudo-inverse of the second principal component vector of the highlight area], and the local spectral power = pseudo-inverse matrix * the first principal component vector of the linear cluster.
针对步骤6031中得到的多光谱图像上的各高光区域,均可执行上述步骤6032至步骤6035,以此得到多光谱图像中各区域的局部光谱功率。For each highlight region on the multispectral image obtained in step 6031, the above steps 6032 to 6035 may be performed to obtain the local spectral power of each region in the multispectral image.
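The flow of steps 6032 to 6035 applied to one highlight region can be sketched as follows. This is an assumed interpretation in Python/NumPy: the function name is illustrative, reconstructing the spectral power from the pseudo-inverse coefficients is an assumption, and the result is defined only up to sign and scale.

```python
import numpy as np

def local_spectral_power(highlight_pixels):
    """Steps 6032-6035 on one highlight region; pixels have shape
    (N, bands). Returns the local spectral power up to sign/scale."""
    centered = highlight_pixels - highlight_pixels.mean(axis=0)
    # Step 6032: PCA of the highlight region via SVD; rows of vt are
    # principal directions ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v1, v2 = vt[0], vt[1]
    plane = np.column_stack([v1, v2])          # (bands, 2)
    # Step 6033: project the highlight data onto the PC1/PC2 plane.
    projected = centered @ plane               # (N, 2)
    # Step 6034: first principal component of the (assumed linear)
    # projected cluster, lifted back to band space -> light direction.
    _, _, vt2 = np.linalg.svd(projected - projected.mean(axis=0),
                              full_matrices=False)
    cluster_pc1 = plane @ vt2[0]               # (bands,)
    # Step 6035: pseudo-inverse of [v1, v2] times the cluster PC1,
    # then reconstruct the spectral power from the coefficients.
    coeffs = np.linalg.pinv(plane) @ cluster_pc1
    return plane @ coeffs
```

Running this per highlight region yields the per-region local spectral powers used in step 6036.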
步骤6036,根据多光谱图像中各区域的局部光谱功率,得到光源分布信息。Step 6036, obtaining light source distribution information according to the local spectral power of each area in the multi-spectral image.
基于多光谱图像中各区域的局部光谱功率,可以得到多光谱图像的各种光谱功率的区域分布信息。光谱功率的区域分布信息即表征了光源分布信息。Based on the local spectral power of each region in the multispectral image, the regional distribution information of various spectral powers of the multispectral image can be obtained. The regional distribution information of spectral power represents the light source distribution information.
上述步骤602和步骤603是基于多光谱图像传感器得到多光谱图像,并基于该多光谱图像进行光谱估计,在另一些实施例中,参考图15所示,图15为多光谱色温传感器的示意图,电子设备可以获取摄像头的原始RGB图像和多光谱色温传感器各通道强度值;将上述两种数据输入预先训练的神经网络模型,得到当前拍摄场景光源属于各类典型光源的概率;根据该概率及已知的典型光源的光谱,确定出当前场景的光源光谱。The above steps 602 and 603 are based on obtaining a multispectral image by a multispectral image sensor, and performing spectral estimation based on the multispectral image. In other embodiments, referring to FIG. 15 , which is a schematic diagram of a multispectral color temperature sensor, the electronic device can obtain the original RGB image of the camera and the intensity values of each channel of the multispectral color temperature sensor; the above two types of data are input into a pre-trained neural network model to obtain the probability that the light source of the current shooting scene belongs to each type of typical light source; based on the probability and the known spectrum of the typical light source, the light source spectrum of the current scene is determined.
然而,该实施例存在以下问题:However, this embodiment has the following problems:
1.多光谱色温传感器无法成像,无空间分辨率,精度低。1. The multispectral color temperature sensor cannot form an image, has no spatial resolution, and has low accuracy.
2.多光谱色温传感器接收到的光线包含光源光线的同时也包含了物体反射的光线,相当于光源中混入了物体色,该色温传感器无法区分物体色和光源色,导致无法准确计算拍摄场景下真实的光源光谱。2. The light received by the multi-spectral color temperature sensor includes the light from the light source as well as the light reflected by the object, which is equivalent to the object color mixed into the light source. The color temperature sensor cannot distinguish between the object color and the light source color, resulting in the inability to accurately calculate the true light source spectrum in the shooting scene.
3.该实施例的光源光谱是根据多光谱色温传感器数据进行判断分类后,根据已知的典型光源光谱数据获取的,依据典型光源得到的光源光谱并不准确。3. The light source spectrum of this embodiment is obtained based on known typical light source spectrum data after judging and classifying the multi-spectral color temperature sensor data. The light source spectrum obtained based on the typical light source is not accurate.
4.该实施例计算出的是拍摄场景中的全局光源光谱,而实际应用过程中,拍摄场景各区域的局部光源光谱并不相同,因此,全局光源光谱进行色彩还原并不准确。4. This embodiment calculates the global light source spectrum in the shooting scene. However, in actual application, the local light source spectra of different areas of the shooting scene are not the same. Therefore, the color restoration of the global light source spectrum is not accurate.
因此,相较于上述通过多光谱色温传感器进行光谱估计的方案,本申请实施例中步骤602和步骤603的光谱估计方案具有如下效果:Therefore, compared with the above-mentioned scheme of performing spectrum estimation by using a multi-spectral color temperature sensor, the spectrum estimation scheme of step 602 and step 603 in the embodiment of the present application has the following effects:
1.步骤602和步骤603采用多光谱图像传感器,通过多光谱图像传感器获取的多光谱图像成像光谱分辨率更高,基于此,能够准确对拍摄现场的光源光谱进行估计,得到拍摄现场的光源光谱。1. Step 602 and step 603 use a multispectral image sensor. The multispectral image obtained by the multispectral image sensor has a higher spectral resolution. Based on this, the light source spectrum at the shooting scene can be accurately estimated, obtaining the light source spectrum of the shooting scene.
2.通过多光谱图像传感器能够区分物体色和光源,从而能够准确得到拍摄场景的真实光源光谱。2. The multispectral image sensor can distinguish the color of the object and the light source, so as to accurately obtain the real light source spectrum of the shooting scene.
3.步骤602和步骤603不依赖于预先标定的典型的光源光谱,而是可以通过对拍摄场景的多光谱图像的光源光谱进行实时估计,提高光谱估计的准确性。3. Step 602 and step 603 do not rely on a typical light source spectrum that is pre-calibrated, but can improve the accuracy of spectrum estimation by estimating the light source spectrum of the multi-spectral image of the captured scene in real time.
4.本申请实施例可基于局部多光谱图像估计拍摄场景中各区域的局部光源光谱,使得后续色彩还原更加准确。4. The embodiment of the present application can estimate the local light source spectrum of each area in the shooting scene based on the local multi-spectral image, so that the subsequent color restoration is more accurate.
在获取到光谱功率后,可以执行步骤604。After the spectral power is acquired, step 604 may be performed.
步骤604,基于光谱功率,拟合出与光谱功率对应的色彩还原参数。Step 604: based on the spectral power, fit the color restoration parameters corresponding to the spectral power.
色彩还原参数可以包括色彩校正矩阵(Color Correction Matrix,CCM)和色适应转换矩阵。针对拍摄场景中各区域的局部光谱功率,均可拟合出与该局部光谱功率对应的局部色彩还原参数,即局部色彩校正矩阵和局部色适应转换矩阵。Color restoration parameters may include a color correction matrix (CCM) and a chromatic adaptation conversion matrix. For the local spectral power of each area in the shooting scene, the local color restoration parameters corresponding to that local spectral power, namely the local color correction matrix and the local chromatic adaptation conversion matrix, can be fitted.
在一些实施例中,电子设备10拟合局部色彩校正矩阵,可以包括:获取摄像模组170的色卡反射率函数、摄像模组170的光谱灵敏度函数和标准观察者色匹配函数;对所述色卡反射率函数、所述相机光谱灵敏度函数和所述局部光谱功率进行积分运算,得到摄像模组170的感光数据;对所述色卡反射率、所述标准观察者色匹配函数和所述局部光谱功率进行积分运算,得到标准观察者三刺激值;根据摄像模组170的感光数据和所述标准观察者三刺激值,得到局部色彩校正矩阵。In some embodiments, the electronic device 10 fits a local color correction matrix, which may include: obtaining a color card reflectance function of the camera module 170, a spectral sensitivity function of the camera module 170, and a standard observer color matching function; integrating the color card reflectance function, the camera spectral sensitivity function, and the local spectral power to obtain photosensitivity data of the camera module 170; integrating the color card reflectance, the standard observer color matching function, and the local spectral power to obtain standard observer tristimulus values; and obtaining a local color correction matrix based on the photosensitivity data of the camera module 170 and the standard observer tristimulus values.
例如,将局部光谱功率记作E,色卡反射率函数记作R,相机光谱灵敏度函数记作Scam,标准观察者色匹配函数记作Scmf,基于上述计算得到的摄像模组170的感光数据∫E*R*Scam和标准观察者三刺激值∫E*R*Scmf进行拟合,可得到局部色彩校正矩阵CCM。例如,通过线性模型、多项式模型或者根多项式模型等转换模型,拟合计算出将摄像模组的感光数据转换成标准观察者三刺激值这一过程所用到的局部色彩校正矩阵。For example, the local spectral power is denoted as E, the color card reflectance function as R, the camera spectral sensitivity function as S cam , and the standard observer color matching function as S cmf . The local color correction matrix CCM can be obtained by fitting based on the photosensitive data ∫E*R*S cam of the camera module 170 and the standard observer tristimulus values ∫E*R*S cmf calculated above. For example, through a conversion model such as a linear model, a polynomial model, or a root polynomial model, the local color correction matrix used in the process of converting the photosensitive data of the camera module into the standard observer tristimulus values is fitted and calculated.
具体地,可参考下述公式:Specifically, refer to the following formula:
∫E(λ)*R(λ)*Scam(λ)=CCM*∫E(λ)*R(λ)*Scmf(λ)。∫E(λ)*R(λ)* Scam (λ)=CCM*∫E(λ)*R(λ)* Scmf (λ).
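The fitting above can be sketched in Python/NumPy as follows, with the wavelength integrals approximated by sums over sampled spectra. All array shapes and the function name are assumptions, and the matrix is fitted in the conventional camera-to-XYZ direction (the equation above can be read the other way up to inversion of the CCM):

```python
import numpy as np

def fit_ccm(E, R, S_cam, S_cmf):
    """Fit a linear local CCM from sampled spectra. Assumed shapes:
    E (waves,) local spectral power; R (patches, waves) color-card
    reflectances; S_cam (waves, 3) camera sensitivities; S_cmf
    (waves, 3) standard-observer color matching functions. The
    integrals over wavelength are approximated by sums."""
    cam = (R * E) @ S_cam   # camera responses per patch, (patches, 3)
    xyz = (R * E) @ S_cmf   # observer tristimulus values per patch
    # Least-squares matrix M with cam @ M ~= xyz (camera -> XYZ).
    M, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
    return M
```

This corresponds to the linear-model case; the polynomial and root-polynomial models mentioned above would expand `cam` with extra terms before the least-squares fit.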
上述相机光谱灵敏度函数可选取某一型号的摄像头模组测量得到,色卡反射率函数以及标准观察者色匹配函数也均可预先测量得到,本申请实施例不对上述实施例的测量方法进行限定。The above camera spectral sensitivity function can be obtained by measuring a camera module of a certain model, and the color card reflectance function and the standard observer color matching function can also be measured in advance. The embodiments of the present application do not limit the measurement methods of the above embodiments.
该实施例中,结合光谱功率、标准观察者色匹配函数、相机光谱灵敏度函数和色卡反射率函数计算CCM,考虑的参数更加全面,使得计算的CCM更加准确。In this embodiment, the CCM is calculated by taking into account the spectral power, the standard observer color matching function, the camera spectral sensitivity function, and the color card reflectance function; the parameters considered are more comprehensive, so that the calculated CCM is more accurate.
电子设备10拟合色适应转换矩阵的步骤,可以包括:The step of fitting the chromatic adaptation conversion matrix by the electronic device 10 may include:
(1)将局部光谱功率E和标准观察者色匹配函数Scmf相乘,得到所述拍摄场景中的光源的白点三刺激值XYZWS,XYZWS中的红原色刺激量记作XWS,绿原色刺激量记作YWS,蓝原色刺激量记作ZWS,即,XYZWS=E*Scmf(1) Multiply the local spectral power E and the standard observer color matching function S cmf to obtain the white point tristimulus value XYZ WS of the light source in the shooting scene, where the red primary color stimulus amount in XYZ WS is recorded as X WS , the green primary color stimulus amount is recorded as Y WS , and the blue primary color stimulus amount is recorded as Z WS , that is, XYZ WS =E*S cmf .
(2)将目标光源的光谱功率Etgt和标准观察者色匹配函数Scmf相乘,得到目标光源的白点三刺激值XYZtgt,XYZtgt中的红原色刺激量记作Xtgt,绿原色刺激量记作Ytgt,蓝原色刺激量记作Ztgt,即XYZtgt=Etgt*Scmf(2) Multiply the spectral power E tgt of the target light source and the standard observer color matching function S cmf to obtain the white point tristimulus value XYZ tgt of the target light source. The red primary color stimulus amount in XYZ tgt is recorded as X tgt , the green primary color stimulus amount is recorded as Y tgt , and the blue primary color stimulus amount is recorded as Z tgt , that is, XYZ tgt = E tgt * S cmf .
例如,目标光源可以为D65光源,D65光源的光谱功率Etgt可以记作ED65,D65光源的三刺激值可记作XYZD65,D65光源的红原色刺激量记作XWD,绿原色刺激量记作YWD,蓝原色刺激量记作ZWDFor example, the target light source may be a D65 light source, the spectral power E tgt of the D65 light source may be recorded as ED65 , the tristimulus values of the D65 light source may be recorded as XYZ D65 , the red primary color stimulation amount of the D65 light source may be recorded as X WD , the green primary color stimulation amount may be recorded as Y WD , and the blue primary color stimulation amount may be recorded as Z WD .
(3)根据拍摄场景中的光源的白点三刺激值和目标光源的白点三刺激值,得到所述局部色适应转换矩阵。(3) The local chromatic adaptation conversion matrix is obtained according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source.
例如,可以将拍摄场景中的光源的白点三刺激值与预设的色适应模型相乘,得到第一响应值,第一响应值为人眼对所述拍摄场景中的光源的长波、中波和短波的响应值(也可称为色适应程度)。For example, the white point tristimulus values of the light source in the shooting scene can be multiplied by a preset color adaptation model to obtain a first response value, which is the response value of the human eye to the long wave, medium wave and short wave of the light source in the shooting scene (also known as the degree of color adaptation).
预设的色适应模型可以为CAT02色适应模型,即Mcat02,第一响应值可记作[ρS γS βS],公式如下所示:
[ρS γS βS]^T=Mcat02*[XWS YWS ZWS]^T。
The preset chromatic adaptation model can be the CAT02 chromatic adaptation model, that is, M cat02 , and the first response value can be recorded as [ρS γS βS]. The formula is as follows: [ρS γS βS]^T = M cat02 * [X WS Y WS Z WS ]^T.
然后,将目标光源的白点三刺激值与所述色适应模型相乘,得到第二响应值,第二响应值为人眼对所述目标光源的长波、中波和短波的响应值。第二响应值可记作[ρD γD βD],公式如下所示:
[ρD γD βD]^T=Mcat02*[Xtgt Ytgt Ztgt]^T。
Then, the white point tristimulus values of the target light source are multiplied by the chromatic adaptation model to obtain a second response value, which is the response value of the human eye to the long wave, medium wave and short wave of the target light source. The second response value can be recorded as [ρD γD βD], and the formula is as follows: [ρD γD βD]^T = M cat02 * [X tgt Y tgt Z tgt ]^T.
最后,根据第一响应值和第二响应值,得到局部色适应转换矩阵McaFinally, the local chromatic adaptation conversion matrix M ca is obtained according to the first response value and the second response value.
具体地,公式可参考如下所示,其中Mcat02^-1为Mcat02的逆矩阵:
Mca=Mcat02^-1*diag(ρD/ρS,γD/γS,βD/βS)*Mcat02。
Specifically, the formula can be referred to as follows, where M cat02 ^-1 is the inverse matrix of M cat02 : M ca = M cat02 ^-1 * diag(ρD/ρS, γD/γS, βD/βS) * M cat02 .
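The chromatic adaptation computation of this step can be sketched as follows (Python/NumPy; `M_CAT02` is the standard CAT02 matrix from CIECAM02, full adaptation is assumed, and the function name is illustrative):

```python
import numpy as np

# Standard CAT02 matrix (from CIECAM02).
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def chromatic_adaptation_matrix(xyz_ws, xyz_tgt):
    """Build the local chromatic adaptation matrix Mca from the scene
    white point xyz_ws and the target (e.g. D65) white point xyz_tgt,
    assuming full adaptation (von Kries scaling in CAT02 space)."""
    rgb_s = M_CAT02 @ xyz_ws       # first response  [rhoS, gammaS, betaS]
    rgb_d = M_CAT02 @ xyz_tgt      # second response [rhoD, gammaD, betaD]
    gain = np.diag(rgb_d / rgb_s)  # per-channel von Kries gains
    return np.linalg.inv(M_CAT02) @ gain @ M_CAT02
```

By construction, applying the returned matrix to the scene white point yields the target white point, which is the defining property of the adaptation transform.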
步骤605,基于色彩还原参数对摄像模组拍摄得到的原始成像图像进行色彩还原。Step 605: Perform color restoration on the original image captured by the camera module based on the color restoration parameters.
在一些实施例中,可将原始成像图像的各像素通过色彩校正矩阵处理,以将原始成像图像从RGB色彩空间转换至XYZ色彩空间,得到色彩校正后的图像XYZsrc,该XYZsrc是拍摄场景下的光源对应的三刺激值,因此,还可以通过色适应转换矩阵处理XYZsrc,以将拍摄场景下的光源对应的三刺激值转换至目标光源(例如D65光源)下的三刺激值,得到色彩还原后的图像XYZtgtIn some embodiments, each pixel of the original imaging image may be processed through a color correction matrix to convert the original imaging image from an RGB color space to an XYZ color space to obtain a color-corrected image XYZsrc, where XYZsrc is the tristimulus value corresponding to the light source in the shooting scene. Therefore, XYZsrc may also be processed through a chromatic adaptation conversion matrix to convert the tristimulus values corresponding to the light source in the shooting scene to the tristimulus values under the target light source (e.g., a D65 light source) to obtain a color-restored image XYZ tgt .
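The two-stage restoration of step 605 can be sketched as follows (an illustrative Python/NumPy sketch; the function name is assumed, and matrices are applied per pixel in the column-vector convention):

```python
import numpy as np

def restore_colors(rgb_image, ccm, mca):
    """Apply step 605 to an (H, W, 3) camera-RGB image: the CCM maps
    camera RGB to XYZ under the scene illuminant (XYZsrc), and the
    chromatic adaptation matrix Mca maps XYZsrc to the tristimulus
    values under the target light source (e.g. D65)."""
    pixels = rgb_image.reshape(-1, 3)
    xyz_src = pixels @ ccm.T   # RGB -> XYZ under the scene illuminant
    xyz_tgt = xyz_src @ mca.T  # adapt to the target illuminant
    return xyz_tgt.reshape(rgb_image.shape)
```

With local matrices, the appropriate `ccm`/`mca` pair would be selected per region before this per-pixel application, as described below.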
上述的原始成像图像可为摄像模组中的RGB图像传感器对拍摄场景进行成像得到的RGB图像。The above-mentioned original imaging image may be an RGB image obtained by imaging the shooting scene by the RGB image sensor in the camera module.
在一些实施例中,RGB图像传感器对拍摄场景进行成像时,初始得到的原始图像可能为拜尔(Bayer)格式的图像,即Bayer raw图像,因此,可将Bayer raw图像进行去马赛克等处理,从而得到RGB格式的原始成像图像。In some embodiments, when the RGB image sensor images the shooting scene, the initially obtained raw image may be an image in the Bayer format, namely a Bayer raw image. Therefore, the Bayer raw image may be demosaiced and otherwise processed to obtain the original imaging image in RGB format.
在色彩校正矩阵为拍摄场景中各区域的局部色彩校正矩阵,色适应转换矩阵为拍摄场景中各区域的局部色适应转换矩阵的情况下,在处理原始成像图像的像素前,可先确定该像素对应的局部色彩校正矩阵以及局部色适应转换矩阵,然后,基于对应的局部色彩校正矩阵以及局部色适应转换矩阵对该像素进行处理。When the color correction matrix is the local color correction matrix of each area in the shooting scene, and the color adaptation conversion matrix is the local color adaptation conversion matrix of each area in the shooting scene, before processing the pixel of the original imaging image, the local color correction matrix and the local color adaptation conversion matrix corresponding to the pixel can be determined first, and then the pixel can be processed based on the corresponding local color correction matrix and the local color adaptation conversion matrix.
然而,上述拍摄场景中各区域具有对应的局部色彩校正矩阵和局部色适应转换矩阵,在基于局部色彩校正矩阵进行色彩校正时,可能会导致光源分界区域发生颜色骤变,因此,本申请实施例在将原始成像图像的各像素通过色彩校正矩阵处理的过程中可以根据光源分布信息,确定所述色彩还原后的图像中的临界像素,所述临界像素位于光源分界区域;在进行色适应转换矩阵处理之后,对所述临界像素进行平滑处理,得到平滑处理后的图像。该实施例中,对光源分界区域进行平滑处理,能够避免光源分界区域发生颜色骤变,提高图像画面色彩的流畅性。However, each area in the above-mentioned shooting scene has a corresponding local color correction matrix and a local color adaptation conversion matrix. When color correction is performed based on the local color correction matrix, it may cause a sudden color change in the light source boundary area. Therefore, in the embodiment of the present application, when processing each pixel of the original imaging image through the color correction matrix, the critical pixel in the image after color restoration can be determined according to the light source distribution information. The critical pixel is located in the light source boundary area; after the color adaptation conversion matrix is processed, the critical pixel is smoothed to obtain a smoothed image. In this embodiment, the light source boundary area is smoothed to avoid sudden color changes in the light source boundary area and improve the smoothness of the image color.
例如,若光源分界区域为拍摄场景中第一区域和第二区域之间的分界区域,第一区域的局部色彩校正矩阵为CCM1,第二区域的局部色彩校正矩阵为CCM2,可以通过CCM1和CCM2穿插处理位于光源分界区域的各像素,如位于光源分界区域的第一个像素通过CCM1处理,第二个像素则通过CCM2处理,第三个像素通过CCM1处理,以此类推。又例如,可将CCM1和CCM2求均值,通过均值处理光源分界区域的各像素。For example, if the light source boundary area is the boundary area between a first area and a second area in the shooting scene, the local color correction matrix of the first area is CCM1, and the local color correction matrix of the second area is CCM2, the pixels in the light source boundary area can be processed alternately with CCM1 and CCM2: the first pixel in the boundary area is processed with CCM1, the second pixel with CCM2, the third pixel with CCM1, and so on. For another example, CCM1 and CCM2 can be averaged, and each pixel in the light source boundary area can be processed with the averaged matrix.
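Both smoothing variants described above can be sketched as follows (an illustrative Python/NumPy sketch; the function name, the `mode` parameter, and the pixel layout of shape (N, 3) are assumptions):

```python
import numpy as np

def smooth_boundary(pixels, ccm1, ccm2, mode="mean"):
    """Smooth boundary pixels (N, 3) between two light source regions
    whose local CCMs are ccm1 and ccm2: either apply the mean of the
    two matrices, or alternate them pixel by pixel."""
    if mode == "mean":
        ccm = 0.5 * (ccm1 + ccm2)
        return pixels @ ccm.T
    # Interleave: even-indexed pixels use CCM1, odd-indexed use CCM2.
    out = np.empty_like(pixels, dtype=float)
    out[0::2] = pixels[0::2] @ ccm1.T
    out[1::2] = pixels[1::2] @ ccm2.T
    return out
```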
在实际应用过程中,还可使用其他处理方法,本申请实施例不对平滑处理的具体方式进行限定。In actual application, other processing methods may also be used, and the embodiments of the present application do not limit the specific method of smoothing processing.
例如,在得到目标光源下的三刺激值后,可以基于光源分布信息进行平滑处理,平滑处理后的目标光源的三刺激值可以转换到标准红绿蓝(standard Red Green Blue,sRGB)等标准RGB色彩空间,以便于在显示屏180上进行显示。即色彩还原后的图像XYZtgt可以是指sRGB色彩空间的图像,该sRGB色彩空间的图像作为最终显示的拍摄图像。For example, after obtaining the tristimulus values under the target light source, a smoothing process may be performed based on the light source distribution information, and the tristimulus values of the target light source after the smoothing process may be converted to a standard RGB color space such as standard Red Green Blue (sRGB) for display on the display screen 180. That is, the image XYZ tgt after color restoration may refer to an image in the sRGB color space, and the image in the sRGB color space is used as the captured image to be finally displayed.
参照图16所示,本申请一实施例提供一种图像处理方法,应用于电子设备10。本实施例中,图像处理方法可以包括:16 , an embodiment of the present application provides an image processing method, which is applied to an electronic device 10. In this embodiment, the image processing method may include:
步骤1601,获取拍摄场景的多光谱图像及原始成像图像。Step 1601, obtaining a multispectral image and an original imaging image of a shooting scene.
在一些实施例中,电子设备10可以响应于拍摄指令,获取拍摄场景的多光谱图像及原始成像图像。In some embodiments, the electronic device 10 may acquire a multispectral image and an original imaging image of the shooting scene in response to a shooting instruction.
在一些实施例中,电子设备10也可以响应于在颜色保真模式下的拍摄指令,获取拍摄场景的多光谱图像及原始成像图像,以节省功耗,而在其他拍摄模式下,获取拍摄场景的原始成像图像,不对原始成像图像进行色彩还原处理。In some embodiments, the electronic device 10 may also respond to a shooting instruction in a color fidelity mode to obtain a multispectral image and an original imaging image of the shooting scene to save power consumption, while in other shooting modes, the original imaging image of the shooting scene is obtained without performing color restoration processing on the original imaging image.
在一些实施例中,电子设备10上可安装有至少一个多光谱图像传感器及RGB图像传感器,电子设备10可以通过多光谱图像传感器获取拍摄场景的多光谱图像,通过RGB图像传感器获取拍摄场景的原始成像图像。In some embodiments, at least one multispectral image sensor and an RGB image sensor may be installed on the electronic device 10. The electronic device 10 may obtain a multispectral image of the shooting scene through the multispectral image sensor and obtain an original imaging image of the shooting scene through the RGB image sensor.
在电子设备10上可安装有多个多光谱图像传感器的情况下,各多光谱图像传感器之间可以错开窄带分布间隔,以便于能获取到包括的光源信息更多的多光谱图像。电子设备10通过各多光谱图像传感器获取拍摄场景的多光谱初始图像后,可将各多光谱初始图像合并,将合并后的多光谱图像作为拍摄场景的多光谱图像,通过合并后的多光谱图像进行光谱估计,可以提高光谱估计的准确度。In the case where multiple multispectral image sensors can be installed on the electronic device 10, the narrow-band distribution intervals between the multispectral image sensors can be staggered so as to obtain a multispectral image including more light source information. After the electronic device 10 obtains the multispectral initial image of the shooting scene through each multispectral image sensor, the multispectral initial images can be merged, and the merged multispectral image is used as the multispectral image of the shooting scene. The spectrum estimation is performed through the merged multispectral image, which can improve the accuracy of the spectrum estimation.
步骤1602,基于多光谱图像进行光谱估计,得到拍摄场景中光源的光谱功率。Step 1602: perform spectrum estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene.
在一些实施例,可基于多光谱图像得到拍摄场景中各区域的局部光谱功率。采用该技术方案,能够在同一拍摄场景各区域的光谱功率不相同的情况下,针对各区域进行色彩还原,使得同一拍摄场景下的各区域颜色均能得到准确的还原。In some embodiments, the local spectral power of each area in the shooting scene can be obtained based on the multispectral image. With this technical solution, when the spectral power of each area in the same shooting scene is different, color restoration can be performed for each area, so that the color of each area in the same shooting scene can be accurately restored.
在一些实施例中,可对多光谱图像进行高光检测,得到多光谱图像的高光区域;根据各高光区域,得到各局部光谱功率。为了准确检测到拍摄场景的高光区域,还可在拍摄场景中放置辅助配件,通过对辅助配件进行检测,确定拍摄场景的高光区域。In some embodiments, a highlight detection may be performed on a multispectral image to obtain a highlight region of the multispectral image; and each local spectral power may be obtained based on each highlight region. In order to accurately detect the highlight region of the shooting scene, an auxiliary accessory may be placed in the shooting scene, and the highlight region of the shooting scene may be determined by detecting the auxiliary accessory.
例如,还可以统计多光谱图像中各像素的亮度,及将像素亮度排名在前预设比例(比如前5%)的像素所在的区域作为多光谱图像的高光区域;或检测多光谱图像中各像素的亮度,及将像素亮度大于预设亮度阈值(可以根据实际需求进行设定)的像素所在的区域作为多光谱图像的高光区域。For example, the brightness of each pixel in the multispectral image can be counted, and the area where the pixels whose brightness ranks within a preset top proportion (for example, the top 5%) are located is taken as the highlight area of the multispectral image; alternatively, the brightness of each pixel in the multispectral image can be detected, and the area where the pixels whose brightness is greater than a preset brightness threshold (which can be set according to actual needs) are located is taken as the highlight area of the multispectral image.
在一些实施例中,对多光谱图像进行高光检测,得到多光谱图像的高光区域可以通过双色反射模型、中心环绕滤波器或者暗通道等方法初步检测出高光区域,再通过低通滤波器在初步检测出的高光区域中,剔除被误判为高光区域的边缘,从而得到多光谱图像上的高光区域。In some embodiments, highlight detection is performed on a multispectral image, and the highlight area of the multispectral image can be preliminarily detected through a two-color reflection model, a center surround filter, or a dark channel method, and then a low-pass filter is used to remove the edges of the preliminarily detected highlight area that are misjudged as the highlight area, thereby obtaining the highlight area on the multispectral image.
在另一些实施例中,电子设备10响应于进入颜色保真模式的请求,生成用于设置拍摄场景光源信息的人机交互界面,在该界面中,可设置的光源信息可以包括:光源种数、高光区域、光源位置、光源边界、光源类型,如日光、卤素光、荧光和发光二极管等。电子设备10可以从人机交互界面获取拍摄场景的光源信息,基于拍摄场景的光源信息得到高光区域。In other embodiments, the electronic device 10 generates a human-computer interaction interface for setting the light source information of the shooting scene in response to a request to enter the color fidelity mode. In this interface, the light source information that can be set may include: the number of light sources, the highlight area, the light source position, the light source boundary, and the light source type, such as sunlight, halogen light, fluorescent light, and light-emitting diode. The electronic device 10 can obtain the light source information of the shooting scene from the human-computer interaction interface, and obtain the highlight area based on the light source information of the shooting scene.
在一些实施例中,根据各高光区域,得到各局部光谱功率包括:对高光区域进行主成分分析,得到高光区域的主成分向量;基于高光区域的主成分向量得到局部光谱功率。具体地,电子设备10可确定高光区域的主成分向量中的第一主成分向量和第二主成分向量;将高光区域的图像数据投影至高光区域的第一主成分向量和高光区域的第二主成分向量组成的平面中;在投影后的图像数据在平面中的分布中,确定成线性分布的线性簇;对线性簇进行主成分分析,得到线性簇的第一主成分向量;基于高光区域的第一主成分向量和高光区域的第二主成分向量和线性簇的第一主成分向量,得到局部光谱功率。In some embodiments, obtaining each local spectral power according to each highlight area includes: performing principal component analysis on the highlight area to obtain the principal component vector of the highlight area; obtaining the local spectral power based on the principal component vector of the highlight area. Specifically, the electronic device 10 can determine the first principal component vector and the second principal component vector in the principal component vector of the highlight area; project the image data of the highlight area to the plane formed by the first principal component vector of the highlight area and the second principal component vector of the highlight area; determine a linear cluster with a linear distribution in the distribution of the projected image data in the plane; perform principal component analysis on the linear cluster to obtain the first principal component vector of the linear cluster; obtain the local spectral power based on the first principal component vector of the highlight area, the second principal component vector of the highlight area, and the first principal component vector of the linear cluster.
其中,线性簇的第一主成分向量即表征了光源的光源方向,通过上述方法能够确定光源的分布。The first principal component vector of the linear cluster represents the light source direction of the light source, and the distribution of the light source can be determined by the above method.
进一步地,基于高光区域的第一主成分向量、高光区域的第二主成分向量和线性簇的第一主成分向量,得到局部光谱功率,可以包括:对高光区域的第一主成分向量和高光区域的第二主成分向量进行伪逆运算,得到伪逆矩阵;将伪逆矩阵与线性簇的主成分向量相乘,得到局部光谱功率。Furthermore, obtaining the local spectral power based on the first principal component vector of the highlight area, the second principal component vector of the highlight area and the first principal component vector of the linear cluster can include: performing a pseudo-inverse operation on the first principal component vector of the highlight area and the second principal component vector of the highlight area to obtain a pseudo-inverse matrix; multiplying the pseudo-inverse matrix with the principal component vector of the linear cluster to obtain the local spectral power.
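As an illustration only (not the patent's actual implementation), the highlight-region estimation steps above can be sketched in NumPy. The linear-cluster selection below is a simplified placeholder for the clustering step the patent leaves unspecified, and mapping the pseudo-inverse result back to band space is one possible reading of the final multiplication:

```python
import numpy as np

def estimate_local_spectral_power(highlight_pixels):
    """Sketch of the PCA-based spectral estimate for one highlight region.

    highlight_pixels: (num_pixels, num_bands) multispectral samples.
    Returns an estimated illuminant spectrum of shape (num_bands,).
    """
    X = highlight_pixels - highlight_pixels.mean(axis=0)

    # First and second principal component vectors of the highlight region.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v1, v2 = vt[0], vt[1]
    B = np.stack([v1, v2], axis=1)            # (num_bands, 2) plane basis

    # Project the region's image data onto the v1-v2 plane.
    proj = X @ B                              # (num_pixels, 2)

    # Placeholder linear-cluster selection: keep points whose direction in
    # the plane lies near the median direction. A real implementation would
    # use a proper line-fitting or clustering step.
    angles = np.arctan2(proj[:, 1], proj[:, 0])
    cluster = highlight_pixels[np.abs(angles - np.median(angles)) < 0.2]

    # First principal component of the linear cluster (in band space),
    # which represents the light source direction.
    Xc = cluster - cluster.mean(axis=0)
    _, _, vtc = np.linalg.svd(Xc, full_matrices=False)
    d = vtc[0]

    # Pseudo-inverse of [v1 v2] multiplied by the cluster direction gives
    # two coefficients; mapping them back through B yields a spectrum.
    coeffs = np.linalg.pinv(B) @ d            # (2,)
    return B @ coeffs
```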
步骤1603,根据光谱功率确定色彩还原参数,及基于色彩还原参数对原始成像图像进行色彩还原,得到色彩还原后的图像。Step 1603, determining a color restoration parameter according to the spectral power, and performing color restoration on the original image based on the color restoration parameter to obtain a color restored image.
在一些实施例中,色彩还原参数可以包括:色彩校正矩阵和色适应转换矩阵。针对拍摄场景中各区域的局部光谱功率,均可拟合出于该局部光谱功率对应的局部色彩还原参数,即局部色彩校正矩阵和局部色适应转换矩阵。In some embodiments, the color restoration parameters may include: a color correction matrix and a color adaptation conversion matrix. For the local spectral power of each area in the shooting scene, local color restoration parameters corresponding to the local spectral power, namely, the local color correction matrix and the local color adaptation conversion matrix, can be fitted.
例如,通过色彩校正矩阵处理原始成像图像中的各像素,得到色彩校正后的图像;通过色适应转换矩阵处理色彩校正后的图像,得到色彩还原的图像。For example, each pixel in the original imaging image is processed by a color correction matrix to obtain a color-corrected image; and the color-corrected image is processed by a chromatic adaptation conversion matrix to obtain a color-restored image.
在一些实施例中,通过色彩校正矩阵处理原始成像图像中的各像素,得到色彩校正后的图像,可以包括:根据光源分布信息,确定色彩还原后的图像中的临界像素,临界像素位于光源分界区域;根据光源分界区域中各像素对应的局部色彩校正矩阵对临界区域进行平滑处理,得到平滑处理后的图像,可避免处于光源分界区域的图像颜色差异大,实现颜色的平滑过渡,从而提高色彩还原效果。In some embodiments, processing each pixel in the original imaging image through a color correction matrix to obtain a color-corrected image can include: determining critical pixels in the color-restored image based on light source distribution information, where the critical pixels are located in the light source boundary area; smoothing the critical area based on the local color correction matrix corresponding to each pixel in the light source boundary area to obtain a smoothed image, which can avoid large color differences in the image in the light source boundary area and achieve smooth color transition, thereby improving the color restoration effect.
在一些实施例中,电子设备10拟合局部色彩校正矩阵,可以包括:获取摄像模组的色卡反射率函数、光谱灵敏度函数和标准观察者色匹配函数;对色卡反射率函数、相机光谱灵敏度函数和局部光谱功率进行积分运算,得到摄像模组的感光数据;对色卡反射率、标准观察者色匹配函数和局部光谱功率进行积分运算,得到标准观察者三刺激值;根据摄像模组的感光数据和标准观察者三刺激值,得到局部色彩校正矩阵。In some embodiments, the electronic device 10 fits a local color correction matrix, which may include: obtaining a color card reflectance function, a spectral sensitivity function, and a standard observer color matching function of a camera module; integrating the color card reflectance function, the camera spectral sensitivity function, and the local spectral power to obtain photosensitivity data of the camera module; integrating the color card reflectance, the standard observer color matching function, and the local spectral power to obtain standard observer tristimulus values; and obtaining a local color correction matrix based on the photosensitivity data of the camera module and the standard observer tristimulus values.
在一些实施例中,电子设备10拟合局部色适应转换矩阵的步骤,可以包括:获取标准观察者色匹配函数和目标光源的光谱功率;将局部光谱功率和标准观察者色匹配函数相乘,得到拍摄场景中的光源的白点三刺激值;将目标光源的光谱功率和标准观察者色匹配函数相乘,得到目标光源的白点三刺激值;根据拍摄场景中的光源的白点三刺激值和目标光源的白点三刺激值,得到局部色适应转换矩阵。In some embodiments, the step of fitting the local chromatic adaptation conversion matrix by the electronic device 10 may include: obtaining the standard observer color matching function and the spectral power of the target light source; multiplying the local spectral power by the standard observer color matching function to obtain the white point tristimulus values of the light source in the shooting scene; multiplying the spectral power of the target light source by the standard observer color matching function to obtain the white point tristimulus values of the target light source; and obtaining the local chromatic adaptation conversion matrix according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source.
具体地,根据拍摄场景中的光源的白点三刺激值和目标光源的白点三刺激值,得到局部色适应转换矩阵,可以包括:将拍摄场景中的光源的白点三刺激值与预设的色适应模型相乘,得到第一响应值,第一响应值为人眼对拍摄场景中的光源的长波、中波和短波的响应值;将目标光源的白点三刺激值与色适应模型相乘,得到第二响应值,第二响应值为人眼对目标光源的长波、中波和短波的响应值;根据第一响应值和第二响应值,得到局部色适应转换矩阵。Specifically, obtaining a local chromatic adaptation conversion matrix based on the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source can include: multiplying the white point tristimulus values of the light source in the shooting scene with a preset chromatic adaptation model to obtain a first response value, where the first response value is the response value of the human eye to the long wave, medium wave and short wave of the light source in the shooting scene; multiplying the white point tristimulus values of the target light source with the chromatic adaptation model to obtain a second response value, where the second response value is the response value of the human eye to the long wave, medium wave and short wave of the target light source; and obtaining a local chromatic adaptation conversion matrix based on the first response value and the second response value.
其中,根据第一响应值和所述第二响应值,得到局部色适应转换矩阵可以包括:基于第一响应值和第二响应值生成对角矩阵,对角矩阵的值分别包括:目标光源的长波与拍摄场景中的光源的长波的比值、目标光源的中波与拍摄场景中的光源的中波的比值,以及目标光源的短波与拍摄场景中的光源的短波的比值;将色适应模型的逆矩阵与对角矩阵以及色适应模型相乘,得到局部色适应转换矩阵。Wherein, obtaining the local chromatic adaptation conversion matrix according to the first response value and the second response value may include: generating a diagonal matrix based on the first response value and the second response value, where the values of the diagonal matrix respectively include: the ratio of the long-wave response of the target light source to that of the light source in the shooting scene, the ratio of the medium-wave response of the target light source to that of the light source in the shooting scene, and the ratio of the short-wave response of the target light source to that of the light source in the shooting scene; and multiplying the inverse matrix of the chromatic adaptation model by the diagonal matrix and the chromatic adaptation model to obtain the local chromatic adaptation conversion matrix.
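The diagonal-matrix construction described above is a von Kries-style chromatic adaptation transform. A minimal NumPy sketch follows; the Bradford coefficients standing in for the "preset chromatic adaptation model" are an illustrative assumption, since the text leaves the concrete model as a preset choice:

```python
import numpy as np

# Bradford model matrix mapping XYZ to an LMS-like cone response space;
# used here only as one possible preset chromatic adaptation model.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def local_cat(white_scene_xyz, white_target_xyz, model=M_BRADFORD):
    """Build the local chromatic adaptation conversion matrix from the
    white point tristimulus values of the scene and target light sources."""
    lms_scene = model @ white_scene_xyz       # first response values (L, M, S)
    lms_target = model @ white_target_xyz     # second response values (L, M, S)
    gain = np.diag(lms_target / lms_scene)    # per-channel target/scene ratios
    # inverse(model) @ diagonal @ model, as in the step described above.
    return np.linalg.inv(model) @ gain @ model
```

When the scene and target white points coincide, the transform reduces to the identity; a uniformly brighter target white point scales the scene white point by the same factor.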
本实施例基于多光谱图像分区域对拍摄场景的真实光源进行光谱估计,得到光源光谱,进而分区域得到各区域对应的局部色彩转换参数,如局部色彩校正矩阵、色适应转换矩阵,使得计算的色彩转换参数更加准确、合理,使拍摄场景各物体颜色还原更加准确,能够有效改善图像的整体偏色,实现场景整体颜色感知与人类视觉相匹配的效果。This embodiment performs spectral estimation of the real light source of the shooting scene based on the multispectral image by region to obtain the light source spectrum, and then obtains the local color conversion parameters corresponding to each region by region, such as the local color correction matrix and the color adaptation conversion matrix, so that the calculated color conversion parameters are more accurate and reasonable, and the color restoration of each object in the shooting scene is more accurate, which can effectively improve the overall color cast of the image and achieve the effect of matching the overall color perception of the scene with human vision.
另外,本申请实施例通过多光谱图像进行光谱估计,多光谱图像的光谱分辨率高,能够准确地估计场景中各区域的光源光谱,并且在光谱估计过程中还结合颜色保真模式中设置的拍摄场景的光源信息,从而能够进一步提高光源光谱估计的准确性。In addition, the embodiment of the present application performs spectral estimation through multispectral images. Because multispectral images have high spectral resolution, the light source spectrum of each area in the scene can be estimated accurately. The spectral estimation process also incorporates the light source information of the shooting scene set in the color fidelity mode, thereby further improving the accuracy of the light source spectrum estimation.
而且,色彩还原参数是基于混合光源场景实际光谱计算得到的,并通过分区域的局部色彩校正矩阵和局部色适应转换矩阵,使得拍摄场景中各区域颜色都能得到准确还原,本申请实施例摆脱对于标定中典型光源的依赖,场景覆盖范围更广,保证不同场景的色彩还原准确度。Moreover, the color restoration parameters are calculated based on the actual spectrum of the mixed light source scene, and the colors of each area in the shooting scene can be accurately restored through the local color correction matrix and the local color adaptation conversion matrix of the divided regions. The embodiment of the present application gets rid of the dependence on the typical light source in the calibration, covers a wider range of scenes, and ensures the accuracy of color restoration in different scenes.
本申请实施例提供的电子设备10,内部存储器121可用于存储指令,处理器110可用于调用内部存储器121中的指令,使得电子设备10执行上述相关方法步骤实现上述实施例中的图像处理方法。In the electronic device 10 provided in the embodiment of the present application, the internal memory 121 can be used to store instructions, and the processor 110 can be used to call the instructions in the internal memory 121, so that the electronic device 10 executes the above-mentioned related method steps to implement the image processing method in the above-mentioned embodiment.
本申请实施例还提供一种计算机存储介质,所述计算机存储介质中存储有计算机指令,当所述计算机指令在电子设备10上运行时,使得电子设备10执行上述相关方法步骤实现上述实施例中的图像处理方法。An embodiment of the present application further provides a computer storage medium, in which computer instructions are stored. When the computer instructions are executed on the electronic device 10, the electronic device 10 executes the above-mentioned related method steps to implement the image processing method in the above-mentioned embodiment.
本申请实施例还提供了一种计算机程序产品,当所述计算机程序产品在电子设备上运行时,使得电子设备执行上述相关步骤,以实现上述实施例中的图像处理方法。The embodiment of the present application also provides a computer program product. When the computer program product is run on an electronic device, the electronic device executes the above-mentioned related steps to implement the image processing method in the above-mentioned embodiment.
另外,本申请实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,所述装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的图像处理方法。In addition, an embodiment of the present application also provides a device, which can specifically be a chip, component or module, and the device may include a connected processor and memory; wherein the memory is used to store computer-executable instructions, and when the device is running, the processor can execute the computer-executable instructions stored in the memory so that the chip executes the image processing method in the above-mentioned method embodiments.
其中,本申请实施例提供的计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。Among them, the computer storage medium, computer program product or chip provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding methods provided above, and will not be repeated here.
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。Through the description of the above implementation methods, technical personnel in the relevant field can clearly understand that for the convenience and simplicity of description, only the division of the above-mentioned functional modules is used as an example. In actual applications, the above-mentioned functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
在本申请所提供的几个实施例中,应所述理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例是示意性的,例如,所述模块或单元的划分,为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in the present application, it should be understood that the disclosed devices and methods can be implemented in other ways. For example, the device embodiments described above are schematic. For example, the division of the modules or units is a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed. Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者所述技术方案的全部或部分可以以软件产品的形式体现出来,所述软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。 The above description is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions within the technical scope disclosed in the present application should be included in the protection scope of the present application.

Claims (17)

  1. 一种图像处理方法,应用于电子设备,其特征在于,所述方法包括:An image processing method, applied to electronic equipment, is characterized in that the method comprises:
    获取拍摄场景的多光谱图像及原始成像图像;Acquire multispectral images and original imaging images of the shooting scene;
    基于所述多光谱图像进行光谱估计,得到所述拍摄场景中光源的光谱功率;Performing spectral estimation based on the multispectral image to obtain spectral power of the light source in the shooting scene;
    根据所述光谱功率确定色彩还原参数,及基于所述色彩还原参数对所述原始成像图像进行色彩还原,得到色彩还原后的图像。Color restoration parameters are determined according to the spectral power, and color restoration is performed on the original image based on the color restoration parameters to obtain a color restored image.
  2. 如权利要求1所述的图像处理方法,其特征在于,所述基于所述多光谱图像进行光谱估计,得到所述拍摄场景中光源的光谱功率,包括:The image processing method according to claim 1, characterized in that the performing spectral estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene comprises:
    基于所述拍摄场景的多光谱图像进行光谱估计,得到所述拍摄场景中各区域的局部光谱功率;Performing spectral estimation based on the multispectral image of the shooting scene to obtain local spectral power of each area in the shooting scene;
    所述根据所述光谱功率确定色彩还原参数,包括:The determining of the color restoration parameter according to the spectral power comprises:
    根据所述拍摄场景中各区域的局部光谱功率,确定所述各区域的局部色彩还原参数。According to the local spectral power of each area in the shooting scene, the local color restoration parameter of each area is determined.
  3. 如权利要求2所述的图像处理方法,其特征在于,所述基于所述拍摄场景的多光谱图像进行光谱估计,得到所述拍摄场景中各区域的局部光谱功率,包括:The image processing method according to claim 2, characterized in that the performing spectral estimation based on the multispectral image of the shooting scene to obtain the local spectral power of each area in the shooting scene comprises:
    对所述多光谱图像进行高光检测,得到所述多光谱图像的高光区域;Performing highlight detection on the multispectral image to obtain a highlight area of the multispectral image;
    根据所述高光区域,得到所述各区域的局部光谱功率。According to the highlight areas, the local spectral power of each area is obtained.
  4. 如权利要求3所述的图像处理方法,其特征在于,所述多光谱图像包括用于进行高光检测的辅助配件,所述对所述多光谱图像进行高光检测,得到所述多光谱图像的高光区域,包括:The image processing method according to claim 3, characterized in that the multispectral image includes an auxiliary accessory for highlight detection, and the highlight detection of the multispectral image to obtain the highlight area of the multispectral image includes:
    对所述多光谱图像中的辅助配件进行高光检测,基于判定为高光的辅助配件的位置,得到所述多光谱图像的高光区域。Highlight detection is performed on the auxiliary accessories in the multispectral image, and based on the position of the auxiliary accessories determined as highlights, a highlight area of the multispectral image is obtained.
  5. 如权利要求3所述的图像处理方法,其特征在于,所述对所述多光谱图像进行高光检测,得到所述多光谱图像的高光区域,包括:The image processing method according to claim 3, characterized in that the step of performing highlight detection on the multispectral image to obtain a highlight area of the multispectral image comprises:
    统计所述多光谱图像中各像素的亮度,及将像素亮度排名前预设位的像素所在的区域作为所述多光谱图像的高光区域;或Counting the brightness of each pixel in the multispectral image, and taking the area where the pixels with the highest brightness ranking are located as the highlight area of the multispectral image; or
    检测所述多光谱图像中各像素的亮度,及将像素亮度大于预设亮度阈值的像素所在的区域作为所述多光谱图像的高光区域。The brightness of each pixel in the multispectral image is detected, and the area where the pixels whose brightness is greater than a preset brightness threshold are located is taken as the highlight area of the multispectral image.
  6. 如权利要求3所述的图像处理方法,其特征在于,所述根据所述高光区域,得到所述各区域的局部光谱功率,包括:The image processing method according to claim 3, characterized in that obtaining the local spectral power of each region according to the highlight region comprises:
    对所述高光区域进行主成分分析,得到所述高光区域的第一主成分向量和所述高光区域的第二主成分向量;Performing principal component analysis on the highlight area to obtain a first principal component vector of the highlight area and a second principal component vector of the highlight area;
    将所述高光区域的图像数据投影至所述高光区域的第一主成分向量和所述高光区域的第二主成分向量组成的平面中;Projecting the image data of the highlight area onto a plane formed by a first principal component vector of the highlight area and a second principal component vector of the highlight area;
    基于投影后的图像数据在所述平面中的分布,确定成线性分布的线性簇;Determining a linear cluster of linear distribution based on the distribution of the projected image data in the plane;
    对所述线性簇进行主成分分析,得到所述线性簇的第一主成分向量;Performing principal component analysis on the linear cluster to obtain a first principal component vector of the linear cluster;
    基于所述高光区域的第一主成分向量、所述高光区域的第二主成分向量和所述线性簇的第一主成分向量,得到所述局部光谱功率。The local spectral power is obtained based on the first principal component vector of the highlight area, the second principal component vector of the highlight area, and the first principal component vector of the linear cluster.
  7. 如权利要求6所述的图像处理方法,其特征在于,所述基于所述高光区域的第一主成分向量、所述高光区域的第二主成分向量和所述线性簇的第一主成分向量,得到所述局部光谱功率,包括:The image processing method according to claim 6, characterized in that the obtaining of the local spectral power based on the first principal component vector of the highlight area, the second principal component vector of the highlight area and the first principal component vector of the linear cluster comprises:
    对所述高光区域的第一主成分向量和所述高光区域的第二主成分向量进行伪逆运算,得到伪逆矩阵;Performing a pseudo-inverse operation on the first principal component vector of the highlight area and the second principal component vector of the highlight area to obtain a pseudo-inverse matrix;
    基于所述伪逆矩阵与所述线性簇的第一主成分向量,得到所述局部光谱功率。The local spectral power is obtained based on the pseudo-inverse matrix and the first principal component vector of the linear cluster.
  8. 如权利要求2所述的图像处理方法,其特征在于,所述电子设备包括摄像模组,所述局部色彩还原参数包括局部色彩校正矩阵,所述根据所述拍摄场景中各区域的局部光谱功率,确定所述各区域的局部色彩还原参数,包括:The image processing method according to claim 2, characterized in that the electronic device includes a camera module, the local color restoration parameters include a local color correction matrix, and the determining of the local color restoration parameters of each area according to the local spectral power of each area in the shooting scene includes:
    获取所述摄像模组的色卡反射率函数、光谱灵敏度函数和标准观察者色匹配函数;Obtaining a color card reflectance function, a spectral sensitivity function, and a standard observer color matching function of the camera module;
    基于所述色卡反射率函数、所述光谱灵敏度函数和所述局部光谱功率,得到所述摄像模组的感光数据;Based on the color card reflectance function, the spectral sensitivity function and the local spectral power, obtaining the photosensitivity data of the camera module;
    基于所述色卡反射率、所述标准观察者色匹配函数和所述局部光谱功率,得到标准观察者三刺激值;Obtaining standard observer tristimulus values based on the color card reflectance, the standard observer color matching function and the local spectral power;
    根据所述摄像模组的感光数据和所述标准观察者三刺激值,得到所述局部色彩校正矩阵。The local color correction matrix is obtained according to the photosensitive data of the camera module and the standard observer tristimulus values.
  9. 如权利要求2所述的图像处理方法,其特征在于,所述局部色彩还原参数包括局部色适应转换矩阵,所述根据所述拍摄场景中各区域的局部光谱功率,确定所述各区域的局部色彩还原参数,包括:The image processing method according to claim 2, characterized in that the local color restoration parameters include a local chromatic adaptation conversion matrix, and determining the local color restoration parameters of each area according to the local spectral power of each area in the shooting scene includes:
    获取标准观察者色匹配函数和目标光源的光谱功率;Obtain the standard observer color matching function and the spectral power of the target light source;
    基于所述局部光谱功率和所述标准观察者色匹配函数,得到所述拍摄场景中的光源的白点三刺激值;Based on the local spectral power and the standard observer color matching function, obtaining white point tristimulus values of the light source in the shooting scene;
    基于所述目标光源的光谱功率和所述标准观察者色匹配函数,得到所述目标光源的白点三刺激值;Obtaining white point tristimulus values of the target light source based on the spectral power of the target light source and the standard observer color matching function;
    根据所述拍摄场景中的光源的白点三刺激值和所述目标光源的白点三刺激值,得到所述局部色适应转换矩阵。The local chromatic adaptation conversion matrix is obtained according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source.
  10. 如权利要求9所述的图像处理方法,其特征在于,所述根据所述拍摄场景中的光源的白点三刺激值和所述目标光源的白点三刺激值,得到所述局部色适应转换矩阵,包括:The image processing method according to claim 9, characterized in that the step of obtaining the local chromatic adaptation conversion matrix according to the white point tristimulus values of the light source in the shooting scene and the white point tristimulus values of the target light source comprises:
    基于所述拍摄场景中的光源的白点三刺激值与预设的色适应模型,得到第一响应值,所述第一响应值为人眼对所述拍摄场景中的光源的长波、中波和短波的响应值;Based on the white point tristimulus values of the light source in the shooting scene and a preset chromatic adaptation model, a first response value is obtained, where the first response value is a response value of the human eye to the long wave, medium wave and short wave of the light source in the shooting scene;
    基于所述目标光源的白点三刺激值与所述色适应模型,得到第二响应值,所述第二响应值为人眼对所述目标光源的长波、中波和短波的响应值;Based on the white point tristimulus values of the target light source and the chromatic adaptation model, a second response value is obtained, where the second response value is a response value of the human eye to the long wave, the medium wave and the short wave of the target light source;
    根据所述第一响应值和所述第二响应值,得到所述局部色适应转换矩阵。The local chromatic adaptation conversion matrix is obtained according to the first response value and the second response value.
  11. 如权利要求1所述的图像处理方法,其特征在于,所述色彩还原参数包括:色彩校正矩阵和色适应转换矩阵,所述基于所述色彩还原参数对所述原始成像图像进行色彩还原,得到色彩还原后的图像,包括:The image processing method according to claim 1, characterized in that the color restoration parameters include: a color correction matrix and a color adaptation conversion matrix, and the color restoration of the original image based on the color restoration parameters to obtain the color restored image includes:
    基于所述色彩校正矩阵对所述原始成像图像中的各像素进行校正处理,得到色彩校正后的图像;Performing correction processing on each pixel in the original image based on the color correction matrix to obtain a color-corrected image;
    基于所述色适应转换矩阵对所述色彩校正后的图像进行转换处理,得到所述色彩还原的图像。The color-corrected image is converted based on the chromatic adaptation conversion matrix to obtain the color-restored image.
  12. 如权利要求1所述的图像处理方法,其特征在于,所述基于所述多光谱图像进行光谱估计,得到所述拍摄场景中光源的光谱功率,包括:The image processing method according to claim 1, characterized in that the performing spectral estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene comprises:
    基于所述多光谱图像进行光谱估计,得到所述拍摄场景中光源的光谱功率和光源分布信息;Performing spectral estimation based on the multispectral image to obtain spectral power and light source distribution information of the light source in the shooting scene;
    所述基于所述色彩还原参数对所述原始成像图像进行色彩还原,得到色彩还原后的图像之后,图像处理方法还包括:After performing color restoration on the original image based on the color restoration parameters to obtain the color restored image, the image processing method further comprises:
    根据所述光源分布信息,确定所述色彩还原后的图像中的临界像素,所述临界像素位于光源分界区域;Determining critical pixels in the color restored image according to the light source distribution information, wherein the critical pixels are located in a light source boundary area;
    对所述临界像素进行平滑处理,得到平滑处理后的图像。The critical pixels are smoothed to obtain a smoothed image.
  13. 如权利要求1至12中任一项所述的图像处理方法,其特征在于,所述基于所述多光谱图像进行光谱估计,得到所述拍摄场景中光源的光谱功率,包括:The image processing method according to any one of claims 1 to 12, characterized in that the performing spectral estimation based on the multispectral image to obtain the spectral power of the light source in the shooting scene comprises:
    响应于进入颜色保真模式的请求,生成用于设置拍摄场景光源信息的人机交互界面;In response to a request to enter a color fidelity mode, generating a human-computer interaction interface for setting light source information of a shooting scene;
    从所述人机交互界面获取用户设置的拍摄场景光源信息; Acquiring shooting scene light source information set by the user from the human-computer interaction interface;
    基于所述拍摄场景光源信息和所述多光谱图像进行光源光谱估计,得到所述拍摄场景中光源的光谱功率。The light source spectrum is estimated based on the light source information of the shooting scene and the multi-spectral image to obtain the spectral power of the light source in the shooting scene.
  14. 如权利要求1至12中任一项所述的图像处理方法,其特征在于,所述获取拍摄场景的多光谱图像,包括:The image processing method according to any one of claims 1 to 12, characterized in that acquiring a multispectral image of a shooting scene comprises:
    通过多个多光谱图像传感器获取所述拍摄场景的多光谱初始图像;Acquire a multispectral initial image of the shooting scene by using a plurality of multispectral image sensors;
    将各多光谱初始图像进行合并,得到所述拍摄场景的多光谱图像。The multispectral initial images are combined to obtain a multispectral image of the shooting scene.
  15. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1至权利要求14中任一项所述的图像处理方法。A computer-readable storage medium, characterized in that it includes computer instructions, and when the computer instructions are executed on an electronic device, the electronic device executes the image processing method according to any one of claims 1 to claim 14.
  16. 一种电子设备,其特征在于,所述电子设备包括处理器和存储器,所述存储器用于存储指令,所述处理器用于调用所述存储器中的指令,使得所述电子设备执行权利要求1至权利要求14中任一项所述的图像处理方法。An electronic device, characterized in that the electronic device comprises a processor and a memory, the memory is used to store instructions, and the processor is used to call the instructions in the memory, so that the electronic device executes the image processing method described in any one of claims 1 to claim 14.
  17. 一种计算机程序产品,其特征在于,包括计算机指令,当所述计算机指令在处理器上运行时,使得电子设备执行如权利要求1至权利要求14中任一项所述的图像处理方法。 A computer program product, characterized in that it comprises computer instructions, and when the computer instructions are executed on a processor, an electronic device executes the image processing method according to any one of claims 1 to 14.
PCT/CN2023/103443 2022-11-18 2023-06-28 Image processing method, electronic device, computer program product, and storage medium WO2024103746A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211449305.1 2022-11-18
CN202211449305.1A CN118057830A (en) 2022-11-18 2022-11-18 Image processing method, electronic device, computer program product, and storage medium

Publications (1)

Publication Number Publication Date
WO2024103746A1 true WO2024103746A1 (en) 2024-05-23

Family

ID=91068646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/103443 WO2024103746A1 (en) 2022-11-18 2023-06-28 Image processing method, electronic device, computer program product, and storage medium

Country Status (2)

Country Link
CN (1) CN118057830A (en)
WO (1) WO2024103746A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100073504A1 (en) * 2007-01-29 2010-03-25 Park Jong-Il Method of multispectral imaging and an apparatus thereof
CN105719322A (en) * 2016-01-27 2016-06-29 四川用联信息技术有限公司 Multispectral image compression method based on square matrix transformation
CN111866318A (en) * 2019-04-29 2020-10-30 北京小米移动软件有限公司 Multispectral imaging method, multispectral imaging device, mobile terminal and storage medium
US20220130131A1 (en) * 2018-11-30 2022-04-28 Pcms Holdings, Inc. Method and apparatus to estimate scene illuminant based on skin reflectance database
CN114531578A (en) * 2020-11-23 2022-05-24 华为技术有限公司 Light source spectrum acquisition method and device
US20220295044A1 (en) * 2021-03-15 2022-09-15 Kenneth James Hintz Imaging sensor calibration
CN115314617A (en) * 2022-08-03 2022-11-08 Oppo广东移动通信有限公司 Image processing system and method, computer readable medium, and electronic device

Also Published As

Publication number Publication date
CN118057830A (en) 2024-05-21

Similar Documents

Publication Publication Date Title
US11250550B2 (en) Image processing method and related device
WO2020125410A1 (en) Image processing method and electronic device
CN104428829B (en) Color control method and communication apparatus
Akyüz et al. Color appearance in high-dynamic-range imaging
WO2020102978A1 (en) Image processing method and electronic device
US20220092749A1 (en) Backwards-Compatible High Dynamic Range (HDR) Images
EP4072131A1 (en) Image processing method and apparatus, terminal and storage medium
KR20210118233A (en) Apparatus and method for shooting and blending multiple images for high-quality flash photography using a mobile electronic device
CN114693580B (en) Image processing method and related device
CN113132696B (en) Image tone mapping method, image tone mapping device, electronic equipment and storage medium
US8565523B2 (en) Image content-based color balancing
CN115802183A (en) Image processing method and related device
US11521305B2 (en) Image processing method and device, mobile terminal, and storage medium
WO2024103746A1 (en) Image processing method, electronic device, computer program product, and storage medium
CN116055699B (en) Image processing method and related electronic equipment
CN115550575B (en) Image processing method and related device
US10715774B2 (en) Color conversion for ambient-adaptive digital content
CN111918047A (en) Photographing control method and device, storage medium and electronic equipment
WO2021249504A1 (en) Distributed display method and related device
CN113891008A (en) Exposure intensity adjusting method and related equipment
JP2002010283A (en) Display method and processor for face image
EP4258676A1 (en) Automatic exposure method and electronic device
CN116668838B (en) Image processing method and electronic equipment
US20230052082A1 (en) Global tone mapping with contrast enhancement and chroma boost
CN117395495B (en) Image processing method and electronic equipment