CN115955611A - Image processing method and electronic equipment

Info

Publication number: CN115955611A
Authority: CN (China)
Legal status: Granted
Application number
CN202210313257.7A
Other languages
Chinese (zh)
Other versions
CN115955611B (en
Inventors: 金萌, 钱彦霖, 张莫, 罗钢
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202210313257.7A (granted as CN115955611B)
Priority to CN202311170622.4A (granted as CN117425091B)
Publication of CN115955611A; application granted; publication of CN115955611B
Legal status: Active

Classifications

    • H04N25/13: Arrangement of colour filter arrays [CFA]; filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N23/843: Camera processing pipelines; demosaicing, e.g. interpolating colour pixel values
    • H04N23/88: Camera processing pipelines; processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N5/16: Circuitry for reinsertion of dc and slowly varying components of signal; circuitry for preservation of black or white level


Abstract

The application relates to the field of image processing and provides an image processing method and electronic equipment. The method is applied to an electronic device that includes a multispectral color filter array sensor, and comprises: starting a camera application in the electronic device; acquiring a first image, the first image being an image collected by the multispectral color filter array sensor; performing first image processing on the first image to obtain a second image; performing second image processing on the first image to obtain a third image, the third image being an image of N channels; obtaining target data based on the second image and the third image, the target data being used to represent a mapping relationship between three channels and the N channels; obtaining a fourth image based on the second image and the target data; and saving or displaying a target image obtained based on the fourth image. According to the technical solution of this application, the degree of color reproduction of the N-channel image information can be retained while making full use of a conventional image signal processor, improving the color accuracy of the image.

Description

Image processing method and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and an electronic device.
Background
With the rapid development and wide application of multimedia technology and network technology, people use a large amount of image information in daily life and production activities. A conventional image sensor usually acquires image information of three channels, while a multispectral color filter array sensor can typically acquire image information of N channels (where N is an integer greater than or equal to 3); the N channels can comprise the red, green and blue channels as well as other color channels. Because N-channel image information contains more color channels than three-channel image information, it can improve the degree of color reproduction and the signal-to-noise ratio of the image.
However, the image data processed by current image signal processors is typically three-channel image data (for example, RGB or RYB); if image information with more than three channels needs to be processed, the image signal processor must be modified in hardware, which is difficult.
Therefore, how to process an image so that the degree of color reproduction of N-channel image information can be maintained while making full use of a conventional image signal processor has become an urgent problem to be solved.
Disclosure of Invention
The application provides an image processing method and electronic equipment, which can maintain the degree of color reproduction of N-channel image information while making full use of a conventional image signal processor, improving the color accuracy of the image.
In a first aspect, an image processing method is provided, which is applied to an electronic device including a multispectral color filter array sensor, and includes:
starting a camera application program in the electronic equipment;
acquiring a first image, wherein the first image is an image collected by the multispectral color filter array sensor;
performing first image processing on the first image to obtain a second image, wherein the second image is a three-channel image, or the second image is an image of L channels, and the L channels comprise the three channels;
performing second image processing on the first image to obtain a third image, wherein the third image is an image of N channels, and N is an integer greater than or equal to 3;
obtaining target data based on the second image and the third image, wherein the target data is used for representing the mapping relation between the three channels and the N channels;
obtaining a fourth image based on the second image and the target data, wherein the fourth image is an image of the three channels;
and saving or displaying a target image, wherein the target image is obtained based on the fourth image.
It should be understood that a color filter array sensor refers to a sensor that overlays a mosaic color filter array over a pixel sensor and is used to capture the color information of an image; an ordinary photoelectric sensor can only sense the intensity of light and cannot distinguish its wavelength (color), whereas the color filter array sensor can obtain color information for each pixel through color filters.
In the embodiment of the present application, an RGB Raw image is obtained by performing pixel rearrangement on a multi-channel Raw image (for example, an RGBCMY Raw image); the RGB Raw image is processed by an image signal processor to obtain an RGB image; an RGBCMY image can be obtained by performing image processing on the RGBCMY Raw image; a mapping matrix between the RGBCMY channels and the RGB channels can be obtained based on the RGBCMY image and the RGB image; and a fused image is obtained based on the mapping matrix and the RGB image. Through the mapping matrix, the fused image retains the degree of color reproduction brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved. In addition, the image data corresponding to the RGB image is three-channel data, so the image signal processor requires no hardware modification; the image processing method provided by the embodiment of the application thus improves the accuracy of image colors without requiring hardware modification of the image signal processor.
In one example, the Raw image acquired by the multispectral color filter array sensor may be an RGBCMY image, where R denotes red, G denotes green, B denotes blue, C denotes cyan, M denotes magenta, and Y denotes yellow.
In one example, the Raw image acquired by the multispectral color filter array sensor may be an RGBCYMG image, where R denotes red, G denotes green, B denotes blue, C denotes cyan, Y denotes yellow, and M denotes magenta.
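For illustration only, the following NumPy sketch mirrors the flow described above on synthetic data; the channel-slicing "ISP" stand-in and all names are invented for the example and are not the processing of this application:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 48, 64

# Synthetic 6-channel RGBCMY data standing in for the multispectral Raw image.
rgbcmy = rng.random((H, W, 6)).astype(np.float32)  # N-channel path ("third image")

# Stand-in for pixel rearrangement + ISP: keep only the R, G and B planes.
rgb = rgbcmy[..., :3]                              # 3-channel path ("second image")

# Fit a 3 -> 6 mapping matrix M between the two paths by least squares.
A = rgb.reshape(-1, 3)                             # (H*W, 3)
B = rgbcmy.reshape(-1, 6)                          # (H*W, 6)
M, *_ = np.linalg.lstsq(A, B, rcond=None)          # M has shape (3, 6)

print("mean fit residual:", float(np.abs(A @ M - B).mean()))
```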
With reference to the first aspect, in some implementations of the first aspect, when the second image is an image of three channels, the performing first image processing on the first image to obtain the second image includes:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is the three-channel image;
and processing the fifth image through an image signal processor to obtain the second image.
It should be understood that when an image is output using the Remosaic algorithm, the pixels may be rearranged into a Bayer-pattern image. For example, a Quad Bayer arrangement (i.e., four same-color pixels arranged together) may be converted into a Bayer-format image; alternatively, an RGBCMY Raw image may be converted into a Bayer-pattern image.
With reference to the first aspect, in some implementations of the first aspect, when the second image is an image of L channels, the performing first image processing on the first image to obtain a second image includes:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is the three-channel image;
processing the fifth image through an image signal processor to obtain a sixth image, wherein the sixth image is the image of the three channels;
and processing the sixth image based on a kernel function to obtain the second image.
It should be understood that a kernel function is used to represent a function that maps an input space to a high-dimensional feature space through some non-linear transformation φ(x).
In an embodiment of the present application, the sixth image may be mapped to a high-dimensional feature space through kernel-function processing; the higher the dimensionality of the feature space, the more parameters the image representation carries, and the higher the accuracy of the target data obtained based on the second image and the third image; the higher the accuracy of the target data, the better the color reproduction of the image and the higher its color accuracy.
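For illustration, the sketch below implements one such kernel expansion, assuming the 9-channel polynomial mapping (R, G, B, R², G², B², RG, RB, GB) described later in this document; the function name is invented for the example:

```python
import numpy as np

def kernel_expand_9(img: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) RGB image to 9 polynomial feature channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.stack([r, g, b,              # linear terms
                     r * r, g * g, b * b,  # squared terms
                     r * g, r * b, g * b], # cross terms
                    axis=-1)

rgb = np.random.default_rng(1).random((48, 64, 3)).astype(np.float32)
print(kernel_expand_9(rgb).shape)  # (48, 64, 9)
```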
With reference to the first aspect, in certain implementations of the first aspect, the sixth image is an image processed by the image signal processor and then subjected to downsampling processing.
In the image processing method of the embodiment of the present application, the sixth image is mapped to a high-dimensional feature space, so the overall amount of computation is large; to reduce the computation load of the electronic device, the image processed by the image signal processor may be downsampled, reducing the size of the sixth image and hence the amount of computation required to calculate the target data.
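A simple 2 × 2 average-pooling downsample, such as the illustrative sketch below (the application does not specify the downsampling method), reduces the image size before the kernel expansion and the fitting of the target data:

```python
import numpy as np

def downsample_2x(img: np.ndarray) -> np.ndarray:
    """Halve H and W by averaging non-overlapping 2x2 blocks."""
    h, w, c = img.shape
    h2, w2 = h // 2, w // 2
    return img[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2, c).mean(axis=(1, 3))

img = np.ones((96, 128, 3), dtype=np.float32)
print(downsample_2x(img).shape)  # (48, 64, 3)
```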
With reference to the first aspect, in certain implementations of the first aspect, the target data is a target matrix, and the obtaining target data based on the second image and the third image includes:
fitting the second image and the third image based on an optimization algorithm to obtain the target matrix.
With reference to the first aspect, in certain implementation manners of the first aspect, the fitting the second image and the third image based on an optimization algorithm to obtain the target matrix includes:
obtaining the target matrix based on the following formula:

M = argmin(‖I_T1 * M − I_T2‖);

wherein M represents the target matrix, I_T1 represents the second image, and I_T2 represents the third image.
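Since the argmin above is an ordinary linear least-squares problem, one plausible realization (an illustrative sketch, not necessarily the optimizer used in this application) is a closed-form solve over flattened pixels:

```python
import numpy as np

def fit_target_matrix(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """Solve M = argmin ||I_T1 @ M - I_T2|| in the least-squares sense.

    img_t1: (H, W, C1) second image; img_t2: (H, W, C2) third image.
    Returns M with shape (C1, C2).
    """
    a = img_t1.reshape(-1, img_t1.shape[-1])
    b = img_t2.reshape(-1, img_t2.shape[-1])
    m, *_ = np.linalg.lstsq(a, b, rcond=None)
    return m
```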
With reference to the first aspect, in certain implementations of the first aspect, the obtaining a fourth image based on the second image and the target data includes:
and multiplying the second image by the target matrix to obtain the fourth image.
In a possible implementation manner, if the second image has three channels, the image data corresponding to the second image may be a 64 × 48 × 3 matrix and the target matrix a 3 × 3 matrix; the image data of the fourth image may then be obtained by multiplying the 64 × 48 × 3 matrix by the transpose of the 3 × 3 matrix.
In a possible implementation manner, if the second image is a nine-channel image, the image data corresponding to the second image may be a 64 × 48 × 9 matrix and the target matrix a 3 × 9 matrix; the image data of the fourth image may then be obtained by multiplying the 64 × 48 × 9 matrix by the transpose of the 3 × 9 matrix.
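Expressed in code, this multiplication is a per-pixel matrix product; the sketch below (illustrative shapes and names) follows the nine-channel example:

```python
import numpy as np

second = np.random.default_rng(2).random((48, 64, 9)).astype(np.float32)
target = np.random.default_rng(3).random((3, 9)).astype(np.float32)  # 3 x 9

# Multiply each pixel's 9-vector by the transpose of the 3 x 9 target matrix.
fourth = second.reshape(-1, 9) @ target.T   # (48*64, 3)
fourth = fourth.reshape(48, 64, 3)          # three-channel fourth image
print(fourth.shape)
```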
With reference to the first aspect, in certain implementations of the first aspect, before the saving or displaying the target image, the method further includes:
acquiring histograms of the fourth image, wherein the histograms comprise a first histogram, a second histogram and a third histogram corresponding respectively to the three channels;
acquiring first data, second data and third data, wherein the first data is the data at a preset position in the first histogram, the second data is the data at the preset position in the second histogram, and the third data is the data at the preset position in the third histogram.
With reference to the first aspect, in certain implementation manners of the first aspect, when the first data is smaller than a first preset threshold, the second data is smaller than a second preset threshold, and the third data is smaller than a third preset threshold, the target image is the fourth image.
With reference to the first aspect, in certain implementations of the first aspect, the saving or displaying the target image includes:
when the first data is greater than or equal to a first preset threshold, the second data is greater than or equal to a second preset threshold, or the third data is greater than or equal to a third preset threshold, performing fusion processing on the fourth image and the second image to obtain the target image;
and saving or displaying the target image.
In an embodiment of the present application, when the data of at least one channel of the fourth image does not satisfy its preset threshold, this indicates that the fused image contains an image region with color distortion; in this case, the fourth image and the second image may be fused, thereby effectively reducing the color-distorted regions in the target image.
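The following sketch illustrates this decision logic; the "preset position" is assumed here to be the top bin of a 256-bin histogram, and the thresholds and the 50/50 fusion weights are invented for illustration:

```python
import numpy as np

def choose_target(fourth: np.ndarray, second: np.ndarray,
                  thresholds=(0.01, 0.01, 0.01), preset_bin=255) -> np.ndarray:
    """Return the fourth image directly, or a blend with the second image
    when any channel's histogram mass at the preset bin reaches its threshold."""
    h, w, _ = fourth.shape
    for ch, thr in enumerate(thresholds):
        hist, _ = np.histogram(fourth[..., ch], bins=256, range=(0.0, 1.0))
        if hist[preset_bin] / (h * w) >= thr:            # possible color distortion
            return 0.5 * fourth + 0.5 * second[..., :3]  # simple 50/50 fusion
    return fourth
```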
With reference to the first aspect, in certain implementations of the first aspect, the second image processing includes black level correction and/or lens shading correction.
With reference to the first aspect, in certain implementations of the first aspect, the three channels refer to a red channel, a green channel, and a blue channel.
In a second aspect, an electronic device is provided that includes one or more processors, a memory, and a multispectral color filter array sensor; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform:
starting a camera application program in the electronic equipment;
acquiring a first image, wherein the first image is acquired by the multispectral color filter array sensor;
performing first image processing on the first image to obtain a second image, wherein the second image is an image of three channels, or the second image is an image of L channels, and the L channels comprise the three channels;
performing second image processing on the first image to obtain a third image, wherein the third image is an image of N channels, and N is an integer greater than or equal to 3;
obtaining target data based on the second image and the third image, wherein the target data is used for representing the mapping relation between the three channels and the N channels;
obtaining a fourth image based on the second image and the target data, wherein the fourth image is an image of the three channels;
saving or displaying the fourth image.
With reference to the second aspect, in certain implementations of the second aspect, when the second image is an image of the three channels, the one or more processors invoke the computer instructions to cause the electronic device to perform:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is the three-channel image;
and processing the fifth image through an image signal processor to obtain the second image.
With reference to the second aspect, in certain implementations of the second aspect, when the second image is an image of L channels, the one or more processors invoke the computer instructions to cause the electronic device to perform:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is the three-channel image;
processing the fifth image through an image signal processor to obtain a sixth image, wherein the sixth image is the image of the three channels;
and processing the sixth image based on a kernel function to obtain the second image.
With reference to the second aspect, in certain implementations of the second aspect, the sixth image is an image processed by the image signal processor and then subjected to downsampling processing.
With reference to the second aspect, in certain implementations of the second aspect, the target data is a target matrix, and the one or more processors invoke the computer instructions to cause the electronic device to perform:
and fitting the second image and the third image based on an optimization algorithm to obtain the target matrix.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
obtaining the target matrix based on the following formula:

M = argmin(‖I_T1 * M − I_T2‖);

wherein M represents the target matrix, I_T1 represents the second image, and I_T2 represents the third image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
and multiplying the second image by the target matrix to obtain the fourth image.
With reference to the second aspect, in certain implementations of the second aspect, prior to the saving or displaying of the target image, the one or more processors invoke the computer instructions to cause the electronic device to perform:
acquiring histograms of the fourth image, wherein the histograms comprise a first histogram, a second histogram and a third histogram corresponding respectively to the three channels;
acquiring first data, second data and third data, wherein the first data is the data at a preset position in the first histogram, the second data is the data at the preset position in the second histogram, and the third data is the data at the preset position in the third histogram.
With reference to the second aspect, in some implementation manners of the second aspect, when the first data is smaller than a first preset threshold, the second data is smaller than a second preset threshold, and the third data is smaller than a third preset threshold, the target image is the fourth image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
when the first data is greater than or equal to a first preset threshold, the second data is greater than or equal to a second preset threshold, or the third data is greater than or equal to a third preset threshold, performing fusion processing on the fourth image and the second image to obtain the target image;
and saving or displaying the target image.
With reference to the second aspect, in certain implementations of the second aspect, the second image processing includes black level correction, lens shading correction, or automatic white balancing.
With reference to the second aspect, in certain implementations of the second aspect, the three channels refer to a red channel, a green channel, and a blue channel.
In a third aspect, an electronic device is provided that comprises means for performing the first aspect or any one of the image processing methods of the first aspect.
In a fourth aspect, an electronic device is provided that includes one or more processors, memory, and a multispectral color filter array sensor; the memory is coupled to the one or more processors for storing computer program code comprising computer instructions which are invoked by the one or more processors to cause the electronic device to perform the first aspect or any of the methods of the first aspect.
In a fifth aspect, a chip system is provided, which is applied to an electronic device, and includes one or more processors configured to invoke computer instructions to cause the electronic device to perform the method of the first aspect or any one of the first aspects.
A sixth aspect provides a computer-readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform the method of the first aspect or any implementation of the first aspect.
In a seventh aspect, a computer program product is provided, the computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform the method of the first aspect or any implementation of the first aspect.
Embodiments of the present application provide an image processing method in which an RGB Raw image is obtained by performing pixel rearrangement on a multi-channel Raw image (for example, an RGBCMY Raw image); the RGB Raw image is processed by an image signal processor to obtain an RGB image; an RGBCMY image can be obtained by performing image processing on the RGBCMY Raw image; a mapping matrix between the RGBCMY channels and the RGB channels can be obtained based on the RGBCMY image and the RGB image; and a fused image is obtained based on the mapping matrix and the RGB image. Through the mapping matrix, the fused image retains the degree of color reproduction brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved. In addition, the image data corresponding to the RGB image is three-channel data, so the image signal processor requires no hardware modification; the image processing method provided by the embodiment of the application thus improves the accuracy of image colors without requiring hardware modification of the image signal processor.
Drawings
FIG. 1 is a schematic diagram of a hardware system of an electronic device suitable for use in the present application;
FIG. 2 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
FIG. 3 is a schematic diagram of an application scenario suitable for use with embodiments of the present application;
FIG. 4 is a schematic diagram of a system architecture suitable for use in the image processing method of the present application;
FIG. 5 is a spectral response curve of a color filter array sensor provided by an embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of an image processing method provided by an embodiment of the present application;
FIG. 7 is a schematic flow chart diagram of another image processing method provided in an embodiment of the present application;
FIG. 8 is a schematic flow chart diagram of another image processing method provided in the embodiments of the present application;
FIG. 9 is a schematic diagram of a pixel rearrangement provided by an embodiment of the present application;
FIG. 10 is a schematic flow chart diagram of another image processing method provided in the embodiments of the present application;
FIG. 11 is a schematic diagram illustrating an effect of an image processing method according to the present application provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 13 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments, unless otherwise specified, "a plurality" means two or more.
To facilitate understanding of the embodiments of the present application, a brief description of related concepts involved in the embodiments of the present application will be provided.
1. Color Filter Array (CFA) sensor
The color filter array sensor is a sensor that covers a mosaic color filter array above the pixel sensor and is used for acquiring the color information of an image; an ordinary photoelectric sensor can only sense the intensity of light and cannot distinguish its wavelength (color), whereas the color filter array sensor can obtain color information for each pixel through color filters.
In embodiments of the present application, the color filter array sensor may also be referred to as a "multispectral color filter array sensor" or "multispectral sensor".
2. Standard Red Green Blue (sRGB)
sRGB stands for standard red, green, and blue: the three primaries used for color reproduction in displays, panels, projectors, printers, and other devices. The sRGB color space is based on independent color coordinates, so colors can be mapped to the same coordinate system across different devices without being affected by each device's own color coordinates.
3. Color space
A color can be represented by coordinates in a one-, two-, three- or four-dimensional space; such a coordinate system defines a color space.
4. Image sensor
An image sensor refers to a sensor used for imaging; according to the device type, image sensors fall into two categories: charge-coupled device (CCD) sensors and complementary metal-oxide-semiconductor (CMOS) sensors; currently, mobile terminals and digital cameras generally employ CMOS sensors.
5. Kernel function
The kernel function is a statistical term used to denote a function that maps an input space to a high-dimensional feature space through some non-linear transformation φ(x).
6. Black Level Correction (BLC)
Black level correction corrects the black level, i.e., the video signal level produced when no light is output, on a device that has undergone a certain calibration.
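As a minimal sketch of the idea (assuming 10-bit Raw data and a single global black level; real ISPs use calibrated per-channel values):

```python
import numpy as np

def black_level_correct(raw: np.ndarray, black: float = 64.0,
                        white: float = 1023.0) -> np.ndarray:
    """Subtract the calibrated black level from 10-bit Raw data and rescale."""
    return np.clip((raw - black) / (white - black), 0.0, 1.0)
```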
7. Lens Shading Correction (LSC)
Lens shading correction is used to eliminate color and brightness inconsistencies between the periphery and the center of the image caused by the lens's optical characteristics.
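A minimal sketch under the assumption of a simple radial falloff model (real pipelines use per-channel gain maps calibrated for the lens):

```python
import numpy as np

def lens_shading_correct(img: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Brighten the periphery with a radial gain map (illustrative model)."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)  # 0 at center
    gain = 1.0 + strength * r ** 2          # larger gain toward the corners
    return img * gain[..., None]
```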
8. Automatic White Balance (AWB)
Automatic white balance is used to ensure that white objects are rendered as white by the camera at any color temperature.
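For illustration, one classical AWB approach is the gray-world algorithm sketched below; this is an assumed example, not necessarily the AWB used in this application:

```python
import numpy as np

def gray_world_awb(img: np.ndarray) -> np.ndarray:
    """Scale the channels so all channel means match the green mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means          # reference the G channel
    return np.clip(img * gains, 0.0, 1.0)
```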
9. Color Correction Matrix (CCM)
The color correction matrix is used to calibrate the accuracy of colors other than white.
10. Down sampling process
The down-sampling process is used to reduce the image size.
11. Pixel rearrangement (Remosaic)
When an image is output using the Remosaic algorithm, the pixels may be rearranged into a Bayer-pattern image. For example, a Quad Bayer arrangement (i.e., four same-color pixels arranged together) may be converted into a Bayer-format image; alternatively, an RGBCMY Raw image may be converted into a Bayer-format image.
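For a rough illustration of the idea (a binning sketch, not the full-resolution Remosaic algorithm of this application), averaging each same-color 2 × 2 block of a Quad Bayer mosaic already yields a standard Bayer mosaic at half resolution:

```python
import numpy as np

def quad_bayer_to_bayer_binned(quad: np.ndarray) -> np.ndarray:
    """Average each same-color 2x2 block of a Quad Bayer mosaic.

    The R/G/G/B block layout of Quad Bayer collapses to a standard
    RGGB Bayer mosaic at half resolution (a binning approximation;
    production Remosaic interpolates at full resolution instead).
    """
    h, w = quad.shape
    return quad.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

quad = np.arange(16.0).reshape(4, 4)  # stands in for one 4x4 Quad Bayer tile
print(quad_bayer_to_bayer_binned(quad))  # 2x2 half-resolution Bayer tile
```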
The following describes an image processing method and an electronic device in an embodiment of the present application with reference to the drawings.
Fig. 1 shows a hardware system of an electronic device suitable for use in the present application.
The electronic device 100 may be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a projector, and the like, and the embodiment of the present application does not limit the specific type of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 1 is not intended to specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown in FIG. 1, or electronic device 100 may include a combination of some of the components shown in FIG. 1, or electronic device 100 may include sub-components of some of the components shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and a neural Network Processor (NPU). The different processing units may be independent devices or integrated devices. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
Illustratively, the processor 110 may be configured to execute the image processing method of the embodiment of the present application; for example, a camera application in the electronic device is launched; acquiring a first image, wherein the first image is an image collected by a multispectral color filter array sensor; performing first image processing on the first image to obtain a second image, wherein the second image is a three-channel image, or the second image is an image of L channels, and the L channels comprise the three channels; performing second image processing on the first image to obtain a third image, wherein the third image is an image of N channels, and N is an integer greater than or equal to 3; obtaining target data based on the second image and the third image, wherein the target data is used for representing the mapping relation between three channels and N channels; obtaining a fourth image based on the second image and the target data, wherein the fourth image is a three-channel image; and saving or displaying the target image, wherein the target image is obtained based on the fourth image.
The connection relationship between the blocks shown in fig. 1 is only illustrative, and does not limit the connection relationship between the blocks of the electronic apparatus 100. Alternatively, the modules of the electronic device 100 may also adopt a combination of the connection manners in the above embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 may implement display functionality through the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 may be used to display images or video.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can optimize the algorithm of the noise, brightness and color of the image, and can also optimize the parameters of exposure, color temperature and the like of the shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard Red Green Blue (RGB), YUV, or the like format image signal. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x-axis, y-axis, and z-axis) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 by a reverse movement, thereby achieving anti-shake. The gyro sensor 180B can also be used in scenes such as navigation and motion sensing games.
Acceleration sensor 180E may detect the magnitude of acceleration of electronic device 100 in various directions, typically the x-axis, y-axis, and z-axis. The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the attitude of the electronic device 100 as an input parameter for applications such as horizontal and vertical screen switching and pedometers.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example in a shooting scene, the electronic device 100 may utilize the range sensor 180F to range for fast focus.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to implement functions such as unlocking, accessing an application lock, taking a picture, and answering an incoming call.
The touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touchscreen. The touch sensor 180K is used to detect a touch operation applied on or near it, and may pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a location different from that of the display screen 194.
The hardware system of the electronic device 100 is described above in detail, and the software system of the electronic device 100 is described below.
Fig. 2 is a schematic diagram of a software system of an electronic device provided in an embodiment of the present application.
As shown in fig. 2, the system architecture may include an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250.
The application layer 210 may include camera applications, gallery, calendar, telephony, maps, navigation, WLAN, bluetooth, music, video, short messaging, etc. applications.
The application framework layer 220 provides an Application Programming Interface (API) and a programming framework for the application program of the application layer; the application framework layer may include some predefined functions.
For example, the application framework layer 220 may include a camera access interface; camera management and camera devices may be included in the camera access interface. Wherein camera management may be used to provide an access interface to manage the camera; the camera device may be used to provide an interface for accessing the camera.
The hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera abstraction layer and other hardware device abstraction layers; the camera hardware abstraction layer may invoke camera algorithms.
For example, the hardware abstraction layer 230 includes a camera hardware abstraction layer 2301 and a camera algorithm 2302; software algorithms for image processing may be included in camera algorithms 2302.
For example, the image processing method provided by the embodiment of the present application may be executed in the camera algorithm 2302.
Illustratively, the algorithms in camera algorithms 2302 may refer to implementations that are not dependent on specific hardware; such as code that may normally run in a CPU, etc.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the driver layer may include camera device drivers.
The hardware layer 250 may include a camera device 2501 and other hardware devices.
Currently, a multispectral color filter array sensor can typically acquire image information of N channels (where N is an integer greater than or equal to 3); the N channels may include three channels and other color channels. Because N-channel image information contains more color channels than three-channel image information, it can improve the degree of color reproduction and the signal-to-noise ratio of the image. However, the image data processed by current image signal processors is usually three-channel image data; if N-channel image information needs to be processed, the image signal processor must be modified in hardware, which is difficult.
In view of this, embodiments of the present application provide an image processing method which obtains an RGB Raw image by performing pixel rearrangement on a multi-channel Raw image (for example, an RGBCMY Raw image); the RGB Raw image is processed by an image signal processor to obtain an RGB image; an RGBCMY image can be obtained by performing image processing on the RGBCMY Raw image; a mapping matrix between the RGBCMY channels and the RGB channels can be obtained based on the RGBCMY image and the RGB image; and a fused image is obtained based on the mapping matrix and the RGB image. Through the mapping matrix, the fused image retains the degree of color reproduction brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved. In addition, the image data corresponding to the RGB image is three-channel data, so the image signal processor requires no hardware modification; the image processing method provided by the embodiment of the application thus improves the accuracy of image colors without requiring hardware modification of the image signal processor.
The following describes the image processing method provided by the embodiment of the present application in detail with reference to fig. 3 to 13.
For example, the image processing method in the embodiment of the present application may be applied to the field of photographing, the field of recording video (e.g., single-shot recording, double-shot recording), the field of video call, or other image processing fields; by the image processing method in the embodiment of the application, the color accuracy of the image can be improved.
In an example, as shown in fig. 3, the image processing method according to the embodiment of the present application may be applied to the field of photographing, and the color accuracy of an image can be improved by the image processing method provided in the embodiment of the present application.
Optionally, the image processing method in the embodiment of the present application may also be applied to a preview scene, where the preview scene includes, but is not limited to, the following scenes:
shooting preview, aperture preview, night scene preview, portrait preview, video preview, professional preview, and the like.
It should be understood that a preview scene may refer to a scene in which the electronic device captures images in a certain photographing mode before the button indicating photographing is clicked.
For example, the image processing method in the embodiment of the present application may also be applied to video call scenes, which may include, but are not limited to, the following:
the method comprises the following steps of video call, video conference application, long and short video application, video live broadcast application, video network course application, portrait intelligent mirror moving application scene, video recording and video monitoring of a system camera video recording function, or portrait shooting scene such as an intelligent cat eye and the like.
It should be understood that the above description is illustrative of the application scenario and does not limit the application scenario of the present application in any way.
Fig. 4 is a schematic diagram of a system architecture suitable for the image processing method of the present application.
As shown in fig. 4, the system architecture 300 may include a multispectral color filter array sensor 310, an image processing module 320, an image signal processor 330, and a computation module 340.
Illustratively, the multispectral color filter array sensor 310 may be used to acquire Raw images; for example, the Raw image collected by the multispectral color filter array sensor 310 may include RGB color patterns and other color patterns.
Illustratively, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCMY image, an RGBCYMG image, or an image of another color pattern.
In one example, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCMY image, where R denotes red, G denotes green, B denotes blue, C denotes cyan, M denotes magenta, and Y denotes yellow.
In one example, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCYMG image, where R denotes red, G denotes green, B denotes blue, C denotes cyan, Y denotes yellow, and M denotes magenta.
It should be understood that the RGBCMY image and the RGBCYMG image above are merely examples, and the present application is not limited thereto.
Alternatively, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCYYM image, an RGBRYYB image, an RGBRBW image, an RGBMYIR image, or the like; this is not limited in this application.
Illustratively, the Raw image acquired by the multispectral color filter array sensor 310 may be a multi-channel image; such as a Raw image of 6 channels, a Raw image of 8 channels, or a Raw image of other channel numbers.
For example, FIG. 5 shows the spectral response curves of a 6-channel RGBCMY image, where B denotes the spectral response curve of the blue (B) channel; C that of the cyan (C) channel; M that of the magenta (M) channel; Y that of the yellow (Y) channel; G that of the green (G) channel; and R that of the red (R) channel. In the spectral response curves shown in FIG. 5, the R and M channels, the G and Y channels, and the B and C channels correspond pairwise; as can be seen from FIG. 5, the 6-channel RGBCMY spectral range is wider, so the degree of color reproduction of the Raw image acquired by the multispectral color filter array sensor 310 is higher, i.e., the color accuracy is higher.
Illustratively, the image processing module 320 is configured to perform image processing on the multi-channel Raw image acquired by the multispectral color filter array sensor 310.
Among these, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balance, or color correction matrices, etc.
It should be understood that since the image signal processor 330 is mainly used to process three-channel (e.g., RGB) images, it cannot process multi-channel (e.g., 6-channel) images; therefore, the multi-channel Raw image can be processed by the image processing module 320.
Exemplarily, pixel rearrangement may be performed on the multi-channel Raw image acquired by the multispectral color filter array sensor 310 to obtain a 3-channel Raw image; the image signal processor 330 is configured to perform image processing on the Raw image of the 3 channels; the specific implementation process of the pixel rearrangement can refer to the following description of step S502 in fig. 8, step S602 in fig. 8, or fig. 9.
Among these, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices, etc.
Illustratively, the calculating module 340 is configured to calculate a mapping matrix M according to the image processed by the image processing module 320 and the image processed by the image signal processor 330; applying the mapping matrix M to the image processed by the image signal processor 330 to obtain a fused image; the mapping matrix M may be used to represent a mapping relationship between RGB channels and multiple channels (e.g., RGBCMY channels).
In the embodiment of the present application, the mapping matrix M is applied to the image processed by the image signal processor 330 to obtain a fused image. Through the mapping matrix M, the fused image retains the degree of color reproduction brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved. In addition, the image data corresponding to the image processed by the image signal processor 330 is three-channel data, so the image signal processor requires no hardware modification; the image processing method provided by the embodiment of the application thus improves the accuracy of image colors without requiring hardware modification of the image signal processor.
Fig. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application. The image processing method 400 may be performed by the electronic device shown in FIG. 1; the image processing method includes steps S410 to S470, which are described in detail below.
It should be understood that the image processing method shown in FIG. 6 is applicable to an electronic device including a first sensor; for example, the first sensor is a color filter array sensor that may acquire data of 6 channels (e.g., RGBCMY) or of more channels (e.g., RGBCMYK, RGBCYYM, RGBRYYB, etc.).
And step S410, starting a camera application program in the electronic equipment.
Illustratively, the user may instruct the electronic device to launch a camera application by clicking on an icon of the "camera" application. Or, when the electronic device is in the screen locking state, the user may instruct the electronic device to start the camera application through a gesture of sliding rightward on the display screen of the electronic device. Or the electronic device is in a screen locking state, the screen locking interface includes an icon of the camera application program, and the user instructs the electronic device to start the camera application program by clicking the icon of the camera application program. Or when the electronic device runs other applications, the applications have the authority to call the camera application program; the user may instruct the electronic device to launch the camera application by clicking on the corresponding control. For example, while the electronic device is running an instant messaging application, the user may instruct the electronic device to launch a camera application, etc., by selecting a control for a camera function.
Step S420, a first image is acquired.
The first image is an image collected by the multispectral color filter array sensor.
It should be understood that a color filter array sensor refers to a sensor that overlays a mosaic color filter array over a pixel sensor and is used to capture the color information of an image; an ordinary photoelectric sensor can only sense the intensity of light and cannot distinguish its wavelength (color), whereas the color filter array sensor can obtain color information for each pixel through color filters. Illustratively, the spectral response curves of the multispectral color filter array sensor are shown in FIG. 5.
Illustratively, a multispectral color filter array sensor may be used to acquire Raw images; for example, the Raw image collected by the multispectral color filter array sensor may include RGB color patterns and other color patterns.
Illustratively, the Raw image acquired by the multispectral color filter array sensor may be an RGBCMY Raw image, where R denotes red, G denotes green, B denotes blue, C denotes cyan, M denotes magenta, and Y denotes yellow; alternatively, the Raw image acquired by the multispectral color filter array sensor may be an RGBCYMG Raw image, or an image of another color pattern.
In one example, if the image acquired by the multispectral color filter array sensor is an RGBCMY Raw image, the image acquired by the multispectral color filter array sensor is a 6-channel image.
In one example, if the image acquired by the multispectral color filter array sensor is an RGBCYYM Raw image, the image acquired by the multispectral color filter array sensor is a 7-channel image.
Step S430, perform a first image processing on the first image to obtain a second image.
The second image is an image of three channels, or the second image is an image of L channels, and the L channels comprise three channels.
Alternatively, the three channels may refer to a red channel, a green channel, and a blue channel, i.e., an R channel, a G channel, and a B channel.
Alternatively, the L channels may refer to channels obtained by mapping the three channels to a high-dimensional space; for example, the L channels may be 9 channels (R, G, B, R², G², B², RG, RB, GB), or the L channels may be 19 channels (R, G, B, R², G², B², RG, RB, GB, R³, G³, B³, R²G, R²B, G²R, G²B, B²G, B²R, RGB).
Illustratively, a three-channel image may be mapped to L channels by a kernel function.
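As an illustration only, the 9-channel polynomial expansion listed above can be written in a few lines; the sketch below assumes a floating-point H × W × 3 array, and is not an implementation prescribed by this application.

```python
import numpy as np

def expand_channels(rgb: np.ndarray) -> np.ndarray:
    """Map an H x W x 3 image to the 9 channels
    (R, G, B, R^2, G^2, B^2, RG, RB, GB) described above.
    A minimal sketch of the polynomial kernel mapping."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b, r * r, g * g, b * b, r * g, r * b, g * b],
                    axis=-1)
```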
Exemplarily, when the second image is a three-channel image, performing the first image processing on the first image to obtain the second image includes:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is a three-channel image; and processing the fifth image through an image signal processor to obtain a second image.
Alternatively, a specific implementation manner of the pixel rearrangement algorithm can be seen in the following description of step S602 in fig. 8 and fig. 9.
Exemplarily, when the second image is an image of L channels, performing the first image processing on the first image to obtain the second image includes:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is a three-channel image; processing the fifth image through an image signal processor to obtain a sixth image, wherein the sixth image is a three-channel image; and processing the sixth image based on a kernel function to obtain the second image.
It should be understood that a kernel function is used to represent a function that maps an input space to a high-dimensional feature space by some non-linear transformation phi (x).
In an embodiment of the present application, the sixth image may be mapped to a high-dimensional feature space by performing kernel function processing on the sixth image; the higher the dimensionality of the feature space, the more parameters correspond to the image, and the higher the accuracy of the target data obtained based on the second image and the third image; the higher the accuracy of the target data, the better the color restoration of the image and the higher its color accuracy.
Alternatively, a specific implementation manner of the pixel rearrangement algorithm may be as described in the following description of step S602 in fig. 8 and fig. 9; a specific implementation manner of processing the fifth image by the image signal processor may refer to the following description of step S603 in fig. 8.
In one possible implementation manner, the sixth image is an image processed by an image signal processor and then subjected to downsampling processing.
Illustratively, a fifth image is obtained based on the first image and the pixel rearrangement algorithm, the fifth image being a three-channel image; the fifth image is processed by the image signal processor to obtain a processed image; the processed image is down-sampled to obtain a sixth image, the sixth image being a three-channel image; and the sixth image is processed based on the kernel function to obtain the second image.
In the image processing method of the embodiment of the present application, the sixth image is mapped to a high-dimensional feature space, so the overall amount of computation is large; in order to reduce the computation load of the electronic device, the image processed by the image signal processor may be down-sampled, thereby reducing the size of the sixth image and hence the amount of computation needed to obtain the target data.
And step S440, performing second image processing on the first image to obtain a third image.
The third image is an image of N channels, and N is an integer greater than or equal to 3.
Illustratively, the second image processing may include black level correction, lens shading correction, or automatic white balance.
Here, black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device. Lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system. Automatic white balance is used so that the camera can restore white objects to white at any color temperature.
Optionally, the second image processing may further include a color correction matrix.
Wherein the color correction matrix is used to calibrate the accuracy of colors other than white.
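For illustration, the software counterpart of these operations can be sketched as follows; the black level, gain map, and white balance gains are illustrative assumptions, and lens shading correction is reduced to a per-pixel gain map for brevity.

```python
import numpy as np

def second_image_processing(raw: np.ndarray,
                            black_level: float = 64.0,
                            shading_gain: np.ndarray = None,
                            wb_gains: np.ndarray = None) -> np.ndarray:
    """Minimal sketch of the second image processing of step S440:
    black level correction, lens shading correction, and automatic
    white balance on an H x W x N multi-channel image."""
    img = np.clip(raw.astype(np.float32) - black_level, 0.0, None)
    if shading_gain is not None:      # per-pixel gain map, H x W x 1
        img = img * shading_gain
    if wb_gains is not None:          # one gain per channel, length N
        img = img * np.asarray(wb_gains).reshape(1, 1, -1)
    return img
```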
And S450, obtaining target data based on the second image and the third image.
And the target data is used for representing the mapping relation between the three channels and the N channels.
Illustratively, the target data may be a target matrix, and the obtaining of the target data based on the second image and the third image includes:
and fitting the second image and the third image based on an optimization algorithm to obtain a target matrix.
Optionally, the specific implementation process of fitting the second image and the third image based on the optimization algorithm to obtain the target matrix may refer to the following description in step S609 in fig. 8.
Illustratively, the optimization algorithm may include, but is not limited to: gradient descent, newton's method, or BFGS algorithm, etc.
For example, the target matrix may be derived based on the following formula:
M = argmin(I_T1 × M − I_T2);
where M denotes the target matrix, I_T1 denotes the second image, and I_T2 denotes the third image; argmin() denotes taking the value of M that minimizes the objective function (I_T1 × M − I_T2).
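Where the objective is the squared residual, this fit has a closed-form least-squares solution; the sketch below flattens both images into pixel matrices and solves for the target matrix with numpy, assuming the two images have already been brought to the same resolution as described above.

```python
import numpy as np

def fit_target_matrix(second: np.ndarray, third: np.ndarray) -> np.ndarray:
    """Fit M minimizing ||A @ M - B|| in the least-squares sense,
    where A holds the second image's channels per pixel and B the
    third image's channels. A sketch of one possible optimization;
    the description above also allows gradient descent, Newton's
    method, or BFGS."""
    a = second.reshape(-1, second.shape[-1])   # (pixels, L)
    b = third.reshape(-1, third.shape[-1])     # (pixels, K)
    m, _, _, _ = np.linalg.lstsq(a, b, rcond=None)
    return m                                    # (L, K)
```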
And step S460, obtaining a fourth image based on the second image and the target data.
Wherein the fourth image is a three-channel image.
Illustratively, the second image is multiplied by the target matrix resulting in a fourth image.
For example, if the second image is a three-channel image, the image data corresponding to the second image may be a 64 × 48 × 3 matrix and the target matrix may be a 3 × 3 matrix; the image data of the fourth image may then be obtained by multiplying the 64 × 48 × 3 matrix by the transpose of the 3 × 3 matrix.
For example, if the second image is a nine-channel image, the image data corresponding to the second image may be a 64 × 48 × 9 matrix and the target matrix may be a 3 × 9 matrix; the image data of the fourth image may then be obtained by multiplying the 64 × 48 × 9 matrix by the transpose of the 3 × 9 matrix.
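Applying the target matrix is then a single per-pixel matrix product; a sketch following the 3 × L convention of the examples above (equivalently, the L × 3 matrix returned by the fitting sketch earlier):

```python
import numpy as np

def apply_target_matrix(second: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Multiply an H x W x L second image by the transpose of a
    3 x L target matrix to obtain the three-channel fourth image."""
    h, w, l = second.shape
    fourth = second.reshape(-1, l) @ m.T       # (pixels, 3)
    return fourth.reshape(h, w, 3)
```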
And step S470, saving or displaying the target image.
Wherein the target image is obtained based on the fourth image.
For example, when the histogram of the fourth image meets a preset condition, the fourth image is a target image; and when the histogram of the fourth image does not meet the preset condition, performing fusion processing on the fourth image and the second image to obtain a target image.
Optionally, before saving or displaying the target image, the method further includes:
acquiring histograms of the fourth image, wherein the histograms include a first histogram, a second histogram, and a third histogram, corresponding to the three channels respectively;
and acquiring first data, second data and third data, wherein the first data is data of a preset position in a first histogram, the second data is data of the preset position in a second histogram, and the third data is data of the preset position in a third histogram.
Illustratively, the fourth image is the target image when the first data is smaller than a first preset threshold, the second data is smaller than a second preset threshold, and the third data is smaller than a third preset threshold.
Exemplarily, when the first data is greater than or equal to a first preset threshold, the second data is greater than or equal to a second preset threshold, or the third data is greater than or equal to a third preset threshold, performing fusion processing on the fourth image and the second image to obtain a target image; the target image is saved or displayed. Optionally, the specific implementation manner may refer to the following description related to step S713 and step S715 shown in fig. 10.
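A sketch of this decision follows; it assumes the "preset position" is a percentile of each channel's histogram (N_h = 95 is the example used later in this document) and the thresholds are illustrative values, not values prescribed by this application.

```python
import numpy as np

def needs_fusion(fourth: np.ndarray,
                 thresholds=(230.0, 230.0, 230.0),
                 percentile: float = 95.0) -> bool:
    """Return True when any channel's percentile value reaches its
    preset threshold, i.e. when the fourth image should be fused
    with the second image to obtain the target image."""
    vals = [np.percentile(fourth[..., c], percentile) for c in range(3)]
    return any(v >= t for v, t in zip(vals, thresholds))
```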
In one example, if the electronic device is in a preview scene, the target image is displayed.
Illustratively, the preview scenario includes, but is not limited to, the following scenarios:
shooting preview, aperture preview, night scene preview, portrait preview, video preview, professional preview, and the like.
It should be understood that a preview scene may refer to a scene in which, in a certain photographing mode, the electronic device captures images before the button indicating photographing is clicked.
In one example, if the electronic device is in a video scene, the target image is saved.
Illustratively, a video scene may include a video recording or video call scene; the video call scenario may include, but is not limited to, the following scenarios:
the method comprises the following steps of video call, video conference application, long and short video application, video live broadcast application, video network course application, portrait intelligent mirror moving application scene, video recording and video monitoring of a system camera video recording function, or portrait shooting scene such as an intelligent cat eye and the like.
Embodiments of the present application provide an image processing method in which an RGB Raw image is obtained by performing pixel rearrangement on a multi-channel Raw image (e.g., an RGBCMY Raw image); the RGB Raw image is processed by an image signal processor to obtain an RGB image; an RGBCMY image can be obtained by performing image processing on the RGBCMY Raw image; a mapping matrix between the RGBCMY channels and the RGB channels can be obtained based on the RGBCMY image and the RGB image; a fused image is obtained based on the mapping matrix and the RGB image; through the mapping matrix, the fused image retains the degree of color restoration brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved; in addition, the image data corresponding to the RGB image is three-channel data, so no hardware modification of the image signal processor is required; the image processing method provided by the embodiment of the present application therefore improves the accuracy of image colors without requiring any hardware modification of the image signal processor.
Fig. 7 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method 500 may be performed by the electronic device shown in FIG. 1; the image processing method includes steps S501 to S506, and the steps S501 to S506 are described in detail below, respectively.
Step S501, a multichannel Raw image (one example of a first image) is acquired.
Illustratively, as shown in fig. 4, a multispectral color filter array sensor 310 in the electronic device may acquire a multi-channel Raw image.
In one example, the multi-channel Raw image acquired by the multispectral color filter array sensor 310 may be a 6-channel RGBCMY image; where R denotes red (red), G denotes green (green), B denotes blue (blue), C denotes cyan (cyan), M denotes magenta (magenta), and Y denotes yellow (yellow).
In one example, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCYM image; where R denotes red (red), G denotes green (green), B denotes blue (blue), C denotes cyan (cyan), Y denotes yellow (yellow), and M denotes magenta (magenta).
It should be understood that a multichannel Raw image may refer to an image of a multichannel Raw color space.
It should be understood that the RGBCMY image and the RGBCYM image are used above for illustration; the multi-channel Raw image may refer to a Raw image with 6 channels or more than 6 channels, which is not limited in this application.
Step S502, performing pixel rearrangement on the multi-channel Raw image to obtain an RGB Raw image (an example of a fifth image).
Illustratively, the multi-channel Raw image is an RGBCMY Raw image, as shown in (a) of fig. 9; pixel rearrangement of the RGBCMY Raw image may refer to replacing, in the RGBCMY Raw image, each magenta (M) channel position with the red (R) channel, each yellow (Y) channel position with the green (G) channel, and each cyan (C) channel position with the blue (B) channel; after the replacement, the rearranged pixels are processed with a color correction matrix to obtain a Raw image in the sRGB color space, as shown in (b) of fig. 9.
In the embodiment of the present application, since sRGB represents the three standard primaries red, green, and blue, different devices use the same color coordinate system in transmission without being affected by their individual color coordinates; therefore, converting the multi-channel Raw image into a Raw image in the sRGB color space avoids errors caused by differences in the color coordinates of different devices.
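As a rough illustration of this relabelling (the actual CFA layout is sensor-specific, and the layout descriptor below is an assumption), the sketch keeps the measured pixel values unchanged and only reinterprets the secondary-color positions as their correlated primary channels:

```python
import numpy as np

# Relabel the secondary-color filter positions as their correlated
# primaries: magenta -> red, yellow -> green, cyan -> blue (step S502).
REMAP = {'R': 'R', 'G': 'G', 'B': 'B', 'M': 'R', 'Y': 'G', 'C': 'B'}

def pixel_rearrange(raw: np.ndarray, cfa_labels: np.ndarray):
    """raw: H x W mosaic values from the RGBCMY sensor (left unchanged);
    cfa_labels: H x W array of filter letters describing the assumed
    sensor layout. Returns the mosaic together with the RGB label map
    of the resulting Bayer-like RGB Raw image."""
    rgb_labels = np.vectorize(REMAP.get)(cfa_labels)
    return raw, rgb_labels
```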
In step S503, the RGB Raw image is subjected to image processing to obtain a processed RGB image (an example of a second image).
It should be understood that the image processing in step S503 refers to an image processing algorithm executed in the image signal processor.
Illustratively, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices.
Black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device; lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system; automatic white balance is used so that the camera can restore white objects to white at any color temperature; the color correction matrix is used to calibrate the accuracy of colors other than white.
Optionally, the RGB Raw image is subjected to image processing to obtain a processed Raw image; the processed Raw image is then demosaiced to obtain an RGB image.
Optionally, if the image processing algorithm further includes a demosaicing algorithm, the RGB Raw image is subjected to image processing to obtain an RGB image.
Step S504 is to perform image processing on the multi-channel Raw image to obtain a processed multi-channel image (an example of a third image).
It should be understood that the image processing in step S504 may refer to image processing performed by a software algorithm, i.e., not performed in the image signal processor.
Illustratively, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices.
Black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device; lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system; automatic white balance is used so that the camera can restore white objects to white at any color temperature; the color correction matrix is used to calibrate the accuracy of colors other than white.
Optionally, after image processing is performed on the RGBCMY Raw image, a processed Raw image can be obtained; the processed Raw image can be demosaiced to obtain an RGBCMY image.
Optionally, if the image processing algorithm further includes a demosaicing algorithm, the RGBCMY Raw image is subjected to image processing to obtain an RGBCMY image.
In step S505, a mapping matrix M (an example of target data) is obtained from the RGB image and the RGBCMY image.
The mapping matrix M may be used to represent a mapping relationship between RGB channels and multiple channels (e.g., RGBCMY channels).
Illustratively, for each pixel (r, g, b), the mapping matrix M is obtained according to the following formula:
h = Σ_{p=1..64×48} ((r, g, b)_T2 × M − (r, g, b)_I4); M = argmin(h);
that is, M is the matrix that minimizes the residual h summed over all 64 × 48 pixels.
it should be understood that there is a correlation between the magenta channel and the red channel, between the yellow channel and the green channel, and between the cyan channel and the blue channel; for example, the magenta channel is adjacent to the red channel, and the larger the reference value of the red channel is, the larger the pixel value corresponding to the magenta channel is; similarly, the yellow channel is close to the green channel, and the larger the reference value of the green channel is, the larger the pixel value corresponding to the yellow channel is; the cyan channel is adjacent to the blue channel, and the larger the reference value of the blue channel is, the larger the pixel value corresponding to the cyan channel as a whole is.
It should also be understood that the multi-channel data collected by the multispectral color filter array sensor corresponds to the information of the same shooting scene; for example, the multi-channel data may include data of RGB channels and data of CMY channels, so there is an association between the RGB channels and the CMY channels; theoretically, the difference between the RGBCMY image fitted with the mapping matrix M and the RGB image is small, and thus the mapping matrix M can be obtained by the above formula.
Step S506 is to perform fusion processing on the RGB image and the mapping matrix M to obtain a fusion image (an example of a fourth image).
Illustratively, the RGB image is multiplied by the mapping matrix M to obtain a fused image.
In the embodiment of the application, the mapping matrix M is fitted to the RGB image to obtain a fused image; through the mapping matrix M, the fused image retains the degree of color restoration brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved; in addition, the image data corresponding to the RGB image is three-channel data, so no hardware modification of the image signal processor is required; the image processing method provided by the embodiment of the present application therefore improves the accuracy of image colors without requiring any hardware modification of the image signal processor.
Fig. 8 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method 600 may be performed by the electronic device shown in FIG. 1; the image processing method includes steps S601 to S610, and the steps S601 to S610 are described in detail below, respectively.
It should be understood that the image processing method shown in fig. 8 is applicable to an electronic device including a first sensor; for example, the first sensor is a color filter array sensor; the color filter array sensor may acquire 6 channels of data (e.g., RGBCMY) or more channels of data (e.g., RGBCMYK, RGBCYYM, RGBRYYB, etc.).
Step S601, a multichannel Raw image (one example of a first image) is acquired.
Illustratively, as shown in fig. 4, a multispectral color filter array sensor 310 in an electronic device may acquire a multi-channel Raw image.
In one example, the multi-channel Raw image acquired by the multispectral color filter array sensor 310 may be a 6-channel RGBCMY image; where R denotes red (red), G denotes green (green), B denotes blue (blue), C denotes cyan (cyan), M denotes magenta (magenta), and Y denotes yellow (yellow).
In one example, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCYM image; where R denotes red (red), G denotes green (green), B denotes blue (blue), C denotes cyan (cyan), Y denotes yellow (yellow), and M denotes magenta (magenta).
It should be understood that a multichannel Raw image may refer to an image of a multichannel Raw color space.
It should be understood that the RGBCMY image and the RGBCYM image above are merely examples, and that a multi-channel Raw image includes RGB channels as well as other channels; the multi-channel Raw image may refer to a Raw image with 6 channels or more than 6 channels, which is not limited in this application.
Step S602, performing pixel rearrangement on the multi-channel Raw image to obtain an RGB Raw image (an example of a fifth image).
For example, the RGB Raw image may refer to the image I1 shown in fig. 8 or fig. 10.
Illustratively, the multi-channel Raw image is an RGBCMY Raw image, as shown in (a) of fig. 9; pixel rearrangement of the RGBCMY Raw image may refer to replacing, in the RGBCMY Raw image, each magenta (M) channel position with the red (R) channel, each yellow (Y) channel position with the green (G) channel, and each cyan (C) channel position with the blue (B) channel; after the replacement, the rearranged pixels are processed with a color correction matrix to obtain a Raw image in the sRGB color space, as shown in (b) of fig. 9.
In the embodiment of the present application, since sRGB stands for the three standard primaries red, green, and blue, different devices use the same color coordinate system in transmission without being affected by their individual color coordinates; therefore, converting the multi-channel Raw image into a Raw image in the sRGB color space avoids errors caused by differences in the color coordinates of different devices.
Step S603, image processing is performed on the RGB Raw image, and an image I2 (an example of a sixth image) is obtained.
It should be understood that the image processing in step S603 refers to an image processing algorithm executed in the image signal processor.
Illustratively, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices.
Black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device; lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system; automatic white balance is used so that the camera can restore white objects to white at any color temperature; the color correction matrix is used to calibrate the accuracy of colors other than white.
Optionally, the RGB Raw image is subjected to image processing to obtain a processed Raw image.
Optionally, if the image processing algorithm further includes demosaicing, the RGB Raw image may be processed to obtain a processed RGB image.
And step S604, performing kernel function processing on the image I2 to obtain an image I5.
It should be understood that a kernel function is a function that represents the mapping of an input space to a high-dimensional feature space by some non-linear transformation phi (x).
Exemplarily, in order to improve the accuracy of the mapping matrix M, the RGB channels may be mapped to a high-dimensional feature space; for example, the three RGB channels may be mapped to L channels by a kernel function; the L channels may be 9 channels (R, G, B, R², G², B², RG, RB, GB), or the L channels may be 19 channels (R, G, B, R², G², B², RG, RB, GB, R³, G³, B³, R²G, R²B, G²R, G²B, B²G, B²R, RGB).
In the embodiment of the application, the image I2 can be mapped to a high-dimensional feature space by performing kernel function processing on the image I2; the higher the dimension of the feature space, the more parameters correspond to the image I2 and the higher the accuracy of the mapping matrix M obtained in step S609; the higher the accuracy of the mapping matrix M, the better the color restoration of the image.
In step S605, the down-sampling process is performed on the image I2 to obtain a small-sized image I3.
Illustratively, the size of the image I2 may be 4000 × 3000 × 3; wherein 4000 × 3000 may represent the resolution size of the image I2; 3 may represent the number of channels of image I2; the size of the image I3 may be 64 × 48 × 3; where 64 × 48 may represent the resolution size of the image I3; 3 may represent the number of channels of image I3.
It should be understood that the above is an illustration of the size of the image; this is not a limitation of the present application.
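For illustration, the down-sampling can be done with any area-averaging resize; the sketch below uses OpenCV, which is one common choice rather than anything mandated by this application.

```python
import cv2

def downsample(img, width: int = 64, height: int = 48):
    """Down-sample a full-resolution image (e.g., 4000 x 3000 x 3) to
    the small fitting size 64 x 48, as in steps S605 and S607.
    INTER_AREA averages source pixels, which suits down-scaling."""
    return cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
```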
Step S606, performs kernel function processing on the image I3 to obtain an image I4 (an example of a second image).
It should be understood that a kernel function is a function that represents the mapping of an input space to a high-dimensional feature space by some non-linear transformation phi (x).
In the embodiment of the application, the image I3 can be mapped to a high-dimensional feature space by performing kernel function processing on the image I3; the higher the dimension of the feature space, the more parameters correspond to the image I3 and the higher the accuracy of the mapping matrix M obtained in step S609; the higher the accuracy of the mapping matrix M, the better the color restoration of the image.
And step S607, carrying out downsampling processing on the multichannel Raw image to obtain a small-size multichannel Raw image T1.
Illustratively, the size of the multi-channel Raw image may be 4000 × 3000 × 6, where 4000 × 3000 may represent the resolution of the multi-channel Raw image and 6 may represent its number of channels. The size of the image T1 may be 64 × 48 × 6, where 64 × 48 may represent the resolution of the image T1 and 6 may represent its number of channels.
It should be understood that the above is an illustration of the size of the image; this is not a limitation of the present application.
Step S608, image processing is performed on the small-sized multichannel Raw image T1 to obtain an image T2 (an example of a third image).
It should be understood that the image processing in step S608 may refer to image processing performed by a software algorithm, i.e., not performed in the image signal processor.
Illustratively, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices.
Black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device; lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system; automatic white balance is used so that the camera can restore white objects to white at any color temperature; the color correction matrix is used to calibrate the accuracy of colors other than white.
Alternatively, the RGBCMY Raw image may be subjected to image processing to obtain a processed RGBCMY Raw image.
Optionally, if the image processing algorithm further includes demosaicing, the RGBCMY image may be obtained after performing image processing on the RGBCMY Raw image.
It should be understood that image T2 is in the same color space as image I4.
And step S609, obtaining a mapping matrix M according to the image T2 and the image I4.
The mapping matrix M may be used to represent a mapping relationship between RGB channels and multiple channels (e.g., RGBCMY channels).
Illustratively, for each pixel (r, g, b), the mapping matrix M is obtained according to the following formula:
h = Σ_{p=1..64×48} ((r, g, b)_T2 × M − (r, g, b)_I4); M = argmin(h);
that is, M is the matrix that minimizes the residual h summed over all 64 × 48 pixels.
it should be understood that there is a correlation between the magenta channel and the red channel, between the yellow channel and the green channel, and between the cyan channel and the blue channel; for example, the magenta channel is adjacent to the red channel, and the larger the reference value of the red channel is, the larger the pixel value corresponding to the magenta channel is; similarly, the yellow channel is close to the green channel, and the larger the reference value of the green channel is, the larger the pixel value corresponding to the yellow channel is; the cyan channel is adjacent to the blue channel, and the larger the reference value of the blue channel is, the larger the pixel value corresponding to the cyan channel as a whole is.
It should also be understood that, since the image I4 is obtained from the multispectral Raw image by pixel rearrangement, there is a correlation between the RGB channels in the image I4 and the CMY channels in the multispectral Raw image; theoretically, the difference between the image T2 fitted with the mapping matrix M and the image I4 is small, and therefore the mapping matrix M can be obtained by the above formula.
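Besides the closed-form least-squares fit sketched earlier, the minimization of h can be run with a generic optimizer such as BFGS, one of the optimization algorithms named above; a sketch using scipy, with all shapes assumed:

```python
import numpy as np
from scipy.optimize import minimize

def fit_mapping_matrix(t2: np.ndarray, i4: np.ndarray) -> np.ndarray:
    """Fit M so that the image T2 (64 x 48 x K) multiplied by M
    approximates the image I4 (64 x 48 x L), by minimizing the summed
    squared residual h with BFGS, following the formula of step S609."""
    k, l = t2.shape[-1], i4.shape[-1]
    a = t2.reshape(-1, k)
    b = i4.reshape(-1, l)

    def h(m_flat: np.ndarray) -> float:
        m = m_flat.reshape(k, l)
        return float(np.sum((a @ m - b) ** 2))

    result = minimize(h, x0=np.zeros(k * l), method="BFGS")
    return result.x.reshape(k, l)
```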
And step S610, carrying out fusion processing on the image I5 and the mapping matrix M to obtain a fusion image.
Illustratively, image I5 is multiplied by the mapping matrix M to obtain a fused image (one example of a fourth image).
It should be appreciated that the fused image results from fitting the mapping matrix M to the image I5; through the mapping matrix M, the fused image retains the degree of color restoration brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved; in addition, the image data corresponding to the image I5 is three-channel data, so no hardware modification of the image signal processor is required; the image processing method provided by the embodiment of the present application therefore improves the accuracy of image colors without requiring any hardware modification of the image signal processor.
Optionally, steps S605, S606, and S607 in fig. 8 are optional steps; that is, fig. 8 may not include steps S605, S606, and S607; steps S605 and S607 reduce the computation load of the electronic device.
In the embodiment of the present application, an RGB Raw image is obtained by performing pixel rearrangement on a multi-channel Raw image (for example, an RGBCMY Raw image); the RGB Raw image is processed by an image signal processor to obtain an RGB image; an RGBCMY image can be obtained by performing image processing on the RGBCMY Raw image; a mapping matrix between the RGBCMY channels and the RGB channels can be obtained based on the RGBCMY image and the RGB image; a fused image is obtained based on the mapping matrix and the RGB image; through the mapping matrix, the fused image retains the degree of color restoration brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved; in addition, the image data corresponding to the RGB image is three-channel data, so no hardware modification of the image signal processor is required; the image processing method provided by the embodiment of the present application therefore improves the accuracy of image colors without requiring any hardware modification of the image signal processor.
In one example, since the CMY channels in the multi-channel Raw image have high sensitivity to exposure, in order to avoid serious color distortion in overexposed regions of the fused image, the image obtained by multiplying the image I5 by the mapping matrix M may be further subjected to weighted fusion processing, thereby ensuring the color accuracy of the fused image.
Fig. 10 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method 700 may be performed by the electronic device shown in FIG. 1; the image processing method includes steps S701 to S715, which are described in detail below.
It should be understood that the image processing method shown in fig. 10 is applicable to an electronic device including a first sensor; for example, the first sensor is a color filter array sensor; the color filter array sensor may acquire 6 channels of data (e.g., RGBCMY) or more channels of data (e.g., RGBCMYK, RGBCYYM, RGBRYYB, etc.).
And step S701, acquiring a multichannel Raw image.
Illustratively, as shown in fig. 4, a multispectral color filter array sensor 310 in an electronic device may acquire a multi-channel Raw image.
In one example, the multi-channel Raw image acquired by the multispectral color filter array sensor 310 may be a 6-channel RGBCMY image; where R denotes red (red), G denotes green (green), B denotes blue (blue), C denotes cyan (cyan), M denotes magenta (magenta), and Y denotes yellow (yellow).
In one example, the Raw image acquired by the multispectral color filter array sensor 310 may be an RGBCYM image; where R denotes red (red), G denotes green (green), B denotes blue (blue), C denotes cyan (cyan), Y denotes yellow (yellow), and M denotes magenta (magenta).
It should be understood that a multichannel Raw image may refer to an image of a multichannel Raw color space.
It should be understood that the RGBCMY image and the RGBCYM image are used above for illustration; the multi-channel Raw image may refer to a Raw image with 6 channels or more than 6 channels, which is not limited in this application.
Step S702, performing pixel rearrangement on the multi-channel Raw image to obtain an RGB Raw image (for example, image I1).
Illustratively, the multi-channel Raw image is an RGBCMY Raw image, as shown in (a) of fig. 9; pixel rearrangement of the RGBCMY Raw image may refer to replacing, in the RGBCMY Raw image, each magenta (M) channel position with the red (R) channel, each yellow (Y) channel position with the green (G) channel, and each cyan (C) channel position with the blue (B) channel; after the replacement, the rearranged pixels are processed with a color correction matrix to obtain a Raw image in the sRGB color space, as shown in (b) of fig. 9.
In the embodiment of the present application, since sRGB stands for the three standard primaries red, green, and blue, different devices use the same color coordinate system in transmission without being affected by their individual color coordinates; therefore, converting the multi-channel Raw image into a Raw image in the sRGB color space avoids errors introduced by differences in the color coordinates of different devices.
And step S703, carrying out image processing on the RGB Raw image to obtain an image I2.
It should be understood that the image processing in step S703 refers to an image processing algorithm executed in the image signal processor.
Illustratively, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices.
Black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device; lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system; automatic white balance is used so that the camera can restore white objects to white at any color temperature; the color correction matrix is used to calibrate the accuracy of colors other than white.
Optionally, the RGB Raw image is subjected to image processing to obtain a processed Raw image.
Optionally, if the image processing algorithm further includes demosaicing, the RGB Raw image may be processed to obtain a processed RGB image.
Step S704, performing kernel function processing on the image I2 to obtain an image I5.
It should be understood that the kernel function is a statistical term used to refer to a function that maps an input space to a high-dimensional feature space through some non-linear transformation phi (x).
Exemplarily, in order to improve the accuracy of the mapping matrix M, the RGB channels may be mapped to a high-dimensional feature space; for example, the three RGB channels may be mapped to L channels by a kernel function; the L channels may be 9 channels (R, G, B, R², G², B², RG, RB, GB), or the L channels may be 19 channels (R, G, B, R², G², B², RG, RB, GB, R³, G³, B³, R²G, R²B, G²R, G²B, B²G, B²R, RGB).
In the embodiment of the application, the image I2 can be mapped to a high-dimensional feature space by performing kernel function processing on the image I2; the higher the dimension of the feature space, the more parameters correspond to the image I2 and the higher the accuracy of the mapping matrix M obtained in step S709; the higher the accuracy of the mapping matrix M, the better the color restoration of the image.
In step S705, the down-sampling process is performed on the image I2 to obtain a small-sized image I3.
Illustratively, the size of the image I2 may be 4000 × 3000 × 3; wherein 4000 × 3000 may represent the resolution size of the image I2; 3 may represent the number of channels of image I2; the size of the image I3 may be 64 × 48 × 3; where 64 × 48 may represent the resolution size of the image I3; 3 may represent the number of channels of image I3.
It should be understood that the above is an illustration of the size of the image; this is not a limitation of the present application.
Step S706, performing kernel function processing on the image I3 to obtain an image I4.
It should be understood that the kernel function is a statistical term used to refer to a function that maps an input space to a high-dimensional feature space through some non-linear transformation phi (x).
In the embodiment of the application, the image I3 can be mapped to a high-dimensional feature space by performing kernel function processing on the image I3; the higher the dimension of the feature space, the more parameters correspond to the image I3 and the higher the accuracy of the mapping matrix M obtained in step S709; the higher the accuracy of the mapping matrix M, the better the color restoration of the image.
And step S707, carrying out downsampling processing on the multi-channel Raw image to obtain a small-size multi-channel Raw image T1.
Illustratively, the size of the multi-channel Raw image may be 4000 × 3000 × 6, where 4000 × 3000 may represent the resolution of the multi-channel Raw image and 6 may represent its number of channels. The size of the image T1 may be 64 × 48 × 6, where 64 × 48 may represent the resolution of the image T1 and 6 may represent its number of channels.
It should be understood that the above is illustrative of the size of the image; this is not a limitation of the present application.
And step S708, carrying out image processing on the small-size multichannel Raw image T1 to obtain an image T2.
It should be understood that the image processing in step S708 may refer to image processing performed by a software algorithm, i.e., not performed in the image signal processor; for example, as shown in fig. 4, the image processing in step S708 may be performed in the image processing module 320; the image processing module 320 is a module independent of the image signal processor 330.
Illustratively, image processing includes, but is not limited to: black level correction, lens shading correction, automatic white balancing, or color correction matrices.
Black level correction refers to the correction of the black level, where the black level is the video signal level corresponding to no light output on a calibrated display device; lens shading correction is used to eliminate inconsistencies in color and brightness between the periphery of the image and its center caused by the lens optical system; automatic white balance is used so that the camera can restore white objects to white at any color temperature; the color correction matrix is used to calibrate the accuracy of colors other than white.
Alternatively, the RGBCMY Raw image may be subjected to image processing to obtain a processed RGBCMY Raw image.
Optionally, if the image processing algorithm further includes demosaicing, the RGBCMY image may be obtained after performing image processing on the RGBCMY Raw image.
It should be understood that image T2 is in the same color space as image I4.
And step S709, obtaining a mapping matrix M according to the image T2 and the image I4.
The mapping matrix M may be used to represent a mapping relationship between an RGB channel and multiple channels (e.g., RGBCMY channels).
Illustratively, for each pixel (r, g, b), the mapping matrix M is obtained according to the following formula:
h = Σ_{p=1..64×48} ((r, g, b)_T2 × M − (r, g, b)_I4); M = argmin(h);
that is, M is the matrix that minimizes the residual h summed over all 64 × 48 pixels.
it should be understood that there is a correlation between the magenta channel and the red channel, between the yellow channel and the green channel, and between the cyan channel and the blue channel; for example, the magenta channel is adjacent to the red channel, and the larger the reference value of the red channel is, the larger the pixel value corresponding to the magenta channel is overall; similarly, the yellow channel is close to the green channel, and the larger the reference value of the green channel is, the larger the pixel value corresponding to the yellow channel is; the cyan channel is adjacent to the blue channel, and the larger the reference value of the blue channel is, the larger the pixel value corresponding to the cyan channel as a whole is.
It should also be understood that the multi-channel data collected by the multispectral color filter array sensor corresponds to the information of the same shooting scene; for example, the multi-channel data may include data of RGB channels and data of CMY channels; therefore, there is a correlation between the RGB channels in the image I4 and the CMY channels in the multispectral Raw image; theoretically, the difference between the image T2 fitted with the mapping matrix M and the image I4 is small, and therefore the mapping matrix M can be obtained by the above formula.
And step S710, carrying out fusion processing on the image I5 and the mapping matrix M.
Illustratively, image I5 may be multiplied by mapping matrix M.
And step S711, obtaining a fused image.
It will be appreciated that the fused image results from fitting the mapping matrix M to the image I5; through the mapping matrix M, the fused image retains the degree of color restoration brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved; in addition, the image data corresponding to the image I5 is three-channel data, so no hardware modification of the image signal processor is required; the image processing method provided by the embodiment of the present application therefore improves the accuracy of image colors without requiring any hardware modification of the image signal processor.
And step S712, obtaining the histogram of each channel in the fused image.
A histogram, also called a quality distribution chart, is a statistical chart that represents the distribution of data by a series of vertical bars or line segments of unequal height; generally, the horizontal axis represents the data type and the vertical axis represents the distribution.
Illustratively, histograms of the R channel, G channel, and B channel in the fused image may be obtained; the value at the N_h-th percentile (e.g., N_h = 95) of each histogram is obtained, and these values may be respectively denoted as R_h, G_h, and B_h.
In the embodiment of the application, since overly dark or overly bright regions in an image may be noise, when the data of the R channel, G channel, and B channel in the fused image is acquired, the value corresponding to the N_h-th percentile of the histogram may be used in order to avoid the low accuracy of channel data corresponding to overly bright or overly dark regions; for example, N_h may be greater than or equal to 10 and less than or equal to 95.
Step S713, judging whether the value of each channel is smaller than a preset threshold value corresponding to the channel; if the value of each channel is smaller than the preset threshold corresponding to the channel, step S714 is executed; otherwise, step S715 is executed.
For example, the preset threshold of the R channel may be a first preset threshold, denoted R_Th; the preset threshold of the G channel may be a second preset threshold, denoted G_Th; the preset threshold of the B channel may be a third preset threshold, denoted B_Th; the value at the N_h-th percentile (an example of a preset position) of each channel's histogram is compared with the corresponding preset threshold; if R_h < R_Th, G_h < G_Th, and B_h < B_Th, step S714 is executed; if R_h < R_Th, G_h < G_Th, and B_h < B_Th are not all satisfied, step S715 is executed.
Step S714, when the data of every channel in the fused image satisfies its preset threshold, the fused image is the target fused image (an example of a target image); that is, the fused image obtained in step S711 has no color distortion problem.
Step S715, when the data of some or all of the channels in the fused image does not satisfy the preset threshold, performing weighting processing on the fused image and the image I2 to obtain a target fused image (an example of a target image).
For example, when data of a part of channels or data of all channels in the fused image are greater than or equal to a preset threshold corresponding to the channels, the fused image and the image I2 are weighted to obtain a target fused image.
Illustratively, target fused image = w × image I2 + (1 − w) × fused image, where w is greater than 0 and less than 1; for example, if max(R_h, G_h, B_h) = R_h, then w = (R_h − R_Th) / (255 − R_Th).
In an embodiment of the present application, when the data of at least one channel in the fused image does not satisfy its preset threshold, there is an image area with color distortion in the fused image; in this case, the RGB image and the fused image may be weighted, thereby effectively reducing the color-distorted regions in the target fused image.
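A sketch of this weighting, following the formula above; the thresholds are illustrative and the constant 255 assumes 8-bit channel data:

```python
import numpy as np

def fuse_with_i2(fused: np.ndarray, i2: np.ndarray,
                 ch_vals, ch_thresholds) -> np.ndarray:
    """Blend the fused image back toward image I2 when any channel's
    percentile value reaches its threshold (steps S713 to S715).
    ch_vals: (R_h, G_h, B_h); ch_thresholds: (R_Th, G_Th, B_Th)."""
    vals = np.asarray(ch_vals, dtype=float)
    ths = np.asarray(ch_thresholds, dtype=float)
    if np.all(vals < ths):
        return fused                         # no color distortion: S714
    c = int(np.argmax(vals))                 # channel with the largest value
    w = (vals[c] - ths[c]) / (255.0 - ths[c])
    return w * i2 + (1.0 - w) * fused        # S715
```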
Optionally, steps S705, S706, and S707 in fig. 10 are optional steps; that is, fig. 10 may not include steps S705, S706, and S707; steps S705 and S707 reduce the computation load of the electronic device.
In the embodiment of the present application, an RGB Raw image is obtained by performing pixel rearrangement on a multi-channel Raw image (for example, an RGBCMY Raw image); the RGB Raw image is processed by an image signal processor to obtain an RGB image; an RGBCMY image can be obtained by performing image processing on the RGBCMY Raw image; a mapping matrix between the RGBCMY channels and the RGB channels can be obtained based on the RGBCMY image and the RGB image; a fused image is obtained based on the mapping matrix and the RGB image; through the mapping matrix, the fused image retains the degree of color restoration brought by the multi-channel data, that is, the color accuracy of the image corresponding to the three-channel image data is improved; in addition, the image data corresponding to the RGB image is three-channel data, so no hardware modification of the image signal processor is required; the image processing method provided by the embodiment of the present application therefore improves the accuracy of image colors without requiring any hardware modification of the image signal processor.
As shown in fig. 11, (a) in fig. 11 is a preview image obtained from an image captured by an image sensor; fig. 11 (b) is a preview image obtained by processing an image collected by a multispectral color filter array sensor by the image processing method provided in the embodiment of the present application; as can be seen from the preview image shown in (a) in fig. 11, the preview image of the photographic subject has a color distortion problem; compared with the preview image shown in fig. 11 (a), the preview image shown in fig. 11 (b) has higher color accuracy, that is, the color accuracy of the image can be improved by the image processing method provided in the embodiment of the present application.
In an example, a color correction mode may be started in a camera display interface of an electronic device, and the electronic device may execute the image processing method provided by the embodiment of the present application, so as to improve color accuracy of an image.
As shown in fig. 12, a control 620 may be included in the photograph display interface of the electronic device; after detecting the operation of clicking the control 620 by the user, the electronic device may start a color correction mode; namely, the image processing method provided by the embodiment of the application can be used for processing the image, so that the color accuracy of the image is improved.
In one example, a color correction mode may be started in a setting interface of an electronic device, and the electronic device may execute the image processing method provided by the embodiment of the present application, so as to improve color accuracy of an image.
As shown in fig. 13, the GUI shown in (a) of fig. 13 is the desktop 630 of the electronic device; when the electronic device detects an operation of the user clicking the settings icon 640 on the desktop 630, another GUI as shown in (b) of fig. 13 may be displayed; the GUI shown in (b) of fig. 13 may be a settings display interface, which may include options such as wireless network, Bluetooth, or camera; clicking the camera option enters the camera settings interface, shown in (c) of fig. 13; the camera settings interface may include a color correction control 650; after detecting that the user clicks the color correction control 650, the electronic device may start the color correction mode, i.e., the image processing method provided by the embodiment of the present application may be executed, thereby improving the color accuracy of the image.
It is to be understood that the above description is intended to assist those skilled in the art in understanding the embodiments of the present application and is not intended to limit the embodiments of the present application to the particular values or particular scenarios illustrated. It will be apparent to those skilled in the art from the foregoing description that various equivalent modifications or changes may be made, and such modifications or changes are intended to fall within the scope of the embodiments of the present application.
The image processing method provided by the embodiment of the present application is described in detail above with reference to fig. 1 to 13; an embodiment of the apparatus of the present application will be described in detail below with reference to fig. 14 to 15. It should be understood that the apparatus in the embodiment of the present application may perform the various methods in the embodiment of the present application, that is, the following specific working processes of various products, and reference may be made to the corresponding processes in the embodiment of the foregoing methods.
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 700 includes a processing module 710, an obtaining module 720, and a multispectral color filter array sensor.
The processing module 710 is configured to start a camera application in the electronic device; the obtaining module 720 is configured to obtain a first image, where the first image is an image collected by the multispectral color filter array sensor; the processing module 710 is further configured to perform first image processing on the first image to obtain a second image, where the second image is a three-channel image, or the second image is an image of L channels, where the L channels include the three channels; performing second image processing on the first image to obtain a third image, wherein the third image is an image of N channels, and N is an integer greater than or equal to 3; obtaining target data based on the second image and the third image, wherein the target data is used for representing the mapping relation between the three channels and the N channels; obtaining a fourth image based on the second image and the target data, wherein the fourth image is an image of the three channels; and saving or displaying a target image, wherein the target image is obtained based on the fourth image.
Optionally, as an embodiment, when the second image is an image of three channels, the processing module 710 is specifically configured to:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is the three-channel image;
and processing the fifth image through an image signal processor to obtain the second image.
Optionally, as an embodiment, when the second image is an image of L channels, the processing module 710 is specifically configured to:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is the three-channel image;
processing the fifth image through an image signal processor to obtain a sixth image, wherein the sixth image is the image of the three channels;
and processing the sixth image based on a kernel function to obtain the second image.
Optionally, as an embodiment, the sixth image is an image processed by the image signal processor and then subjected to downsampling processing.
Optionally, as an embodiment, the target data is a target matrix, and the processing module 710 is specifically configured to:
and fitting the second image and the third image based on an optimization algorithm to obtain the target matrix.
Optionally, as an embodiment, the processing module 710 is specifically configured to:
obtaining the target matrix based on the following formula:
M = argmin(I_T1 × M − I_T2);
where M represents the target matrix, I_T1 represents the second image, and I_T2 represents the third image.
Optionally, as an embodiment, the processing module 710 is specifically configured to:
and multiplying the second image by the target matrix to obtain the fourth image.
Optionally, as an embodiment, the processing module 710 is further configured to:
acquiring histograms of the fourth image, wherein the histograms include a first histogram, a second histogram, and a third histogram, corresponding to the three channels respectively;
acquiring first data, second data and third data, wherein the first data is data of a preset position in the first histogram, the second data is data of the preset position in the second histogram, and the third data is data of the preset position in the third histogram.
Optionally, as an embodiment, when the first data is smaller than a first preset threshold, the second data is smaller than a second preset threshold, and the third data is smaller than a third preset threshold, the fourth image is the target image.
Optionally, as an embodiment, the processing module 710 is specifically configured to:
when the first data is greater than or equal to a first preset threshold, the second data is greater than or equal to a second preset threshold, or the third data is greater than or equal to a third preset threshold, performing fusion processing on the fourth image and the second image to obtain the target image;
and saving or displaying the target image.
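A compact sketch of this histogram gate follows. The bin used as the preset position, the three thresholds, and the alpha blend used for the fusion are all illustrative assumptions; none of these values is fixed by the text.

```python
import numpy as np

def choose_target(fourth: np.ndarray, second: np.ndarray,
                  preset_bin: int = 250, bins: int = 256,
                  thresholds=(0.01, 0.01, 0.01),
                  alpha: float = 0.5) -> np.ndarray:
    """Histogram-gated choice between the fourth image and a fused image."""
    data = []
    for c in range(3):                        # one histogram per channel
        hist, _ = np.histogram(fourth[..., c], bins=bins, range=(0.0, 1.0))
        data.append(hist[preset_bin] / fourth[..., c].size)
    if all(d < t for d, t in zip(data, thresholds)):
        return fourth                         # the fourth image is the target
    # Otherwise fuse with the second image (its first three channels,
    # since the L channels include the three channels).
    return alpha * fourth + (1.0 - alpha) * second[..., :3]
```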
Optionally, as an embodiment, the second image processing includes black level correction, lens shading correction, or automatic white balance.
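For reference, minimal textbook forms of these three corrections might look as follows; the black level, the gain map, and the white balance gains are placeholders rather than calibrated sensor data.

```python
import numpy as np

def second_image_processing(raw: np.ndarray, black_level: float,
                            shading_gain: np.ndarray,
                            wb_gains: np.ndarray) -> np.ndarray:
    """Illustrative black level, lens shading, and white balance steps."""
    x = np.clip(raw - black_level, 0.0, None)   # black level correction
    x = x * shading_gain[..., None]             # lens shading correction (per-pixel gain)
    return x * wb_gains                         # automatic white balance (per-channel gain)

# Example: an N = 8 channel frame with a radial gain-map placeholder.
H, W, N = 64, 64, 8
raw = np.random.default_rng(2).random((H, W, N)).astype(np.float32)
yy, xx = np.mgrid[0:H, 0:W]
r2 = ((yy - H / 2) ** 2 + (xx - W / 2) ** 2) / (H * W)
shading_gain = (1.0 + 0.5 * r2).astype(np.float32)
wb_gains = np.ones(N, dtype=np.float32)
corrected = second_image_processing(raw, 0.02, shading_gain, wb_gains)
```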
Optionally, as an embodiment, the three channels refer to a red channel, a green channel, and a blue channel.
It should be noted that the electronic device 700 is embodied in the form of functional modules. The term "module" herein may refer to an implementation in software and/or hardware, which is not specifically limited.
For example, a "module" may be a software program, a hardware circuit, or a combination of both that implements the functionality described above. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality.
Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 15 shows a schematic structural diagram of an electronic device provided in the present application. The dashed lines in Fig. 15 indicate that a unit or module is optional. The electronic device 800 may be used to implement the methods described in the above method embodiments.
The electronic device 800 includes one or more processors 801, and the one or more processors 801 may enable the electronic device 800 to implement the image processing method in the method embodiments. The processor 801 may be a general-purpose processor or a special-purpose processor. For example, the processor 801 may be a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The processor 801 may be configured to control the electronic device 800, execute software programs, and process data of the software programs. The electronic device 800 may also include a communication unit 805 to enable input (reception) and output (transmission) of signals.
For example, the electronic device 800 may be a chip and the communication unit 805 may be an input and/or output circuit of the chip, or the communication unit 805 may be a communication interface of the chip, and the chip may be an integral part of a terminal device or other electronic device.
Also for example, the electronic device 800 may be a terminal device and the communication unit 805 may be a transceiver of the terminal device, or the communication unit 805 may be a transceiver circuit of the terminal device.
The electronic device 800 may include one or more memories 802, on which programs 804 are stored, and the programs 804 may be executed by the processor 801 to generate instructions 803, so that the processor 801 executes the image processing method described in the above method embodiments according to the instructions 803.
Optionally, data may also be stored in the memory 802.
Optionally, the processor 801 may also read data stored in the memory 802; the data may be stored at the same memory address as the program 804 or at a different memory address.
The processor 801 and the memory 802 may be provided separately or integrated together, for example, on a System On Chip (SOC) of the terminal device.
For example, the memory 802 may be configured to store the program 804 related to the image processing method provided in the embodiments of the present application, and the processor 801 may be configured to call the program 804 stored in the memory 802 when performing image processing, so as to perform the image processing method of the embodiments of the present application: starting a camera application in the electronic device; acquiring a first image, wherein the first image is an image collected by a multispectral color filter array sensor; performing first image processing on the first image to obtain a second image, wherein the second image is an image of three channels, or the second image is an image of L channels, and the L channels comprise the three channels; performing second image processing on the first image to obtain a third image, wherein the third image is an image of N channels, and N is an integer greater than or equal to 3; obtaining target data based on the second image and the third image, wherein the target data is used for representing the mapping relation between the three channels and the N channels; obtaining a fourth image based on the second image and the target data, wherein the fourth image is an image of the three channels; and saving or displaying the target image, wherein the target image is obtained based on the fourth image.
The present application also provides a computer program product which, when executed by the processor 801, implements the image processing method of any of the method embodiments of the present application.
The computer program product may be stored in the memory 802, for example, as a program 804, and the program 804 may be pre-processed, compiled, assembled, and linked to obtain an executable object file capable of being executed by the processor 801.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a computer, implements the image processing method described in any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
The computer-readable storage medium may be, for example, the memory 802. The memory 802 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described embodiments of the electronic device are merely illustrative, and for example, the division of the modules is only one logical division, and the actual implementation may have another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present application, which is therefore defined by the claims. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall likewise be included in that protection scope.

Claims (15)

1. An image processing method applied to an electronic device including a multispectral color filter array sensor, the image processing method comprising:
starting a camera application program in the electronic equipment;
acquiring a first image, wherein the first image is acquired by the multispectral color filter array sensor;
performing first image processing on the first image to obtain a second image, wherein the second image is an image of three channels, or the second image is an image of L channels, and the L channels comprise the three channels;
performing second image processing on the first image to obtain a third image, wherein the third image is an image of N channels, and N is an integer greater than or equal to 3;
obtaining target data based on the second image and the third image, wherein the target data is used for representing the mapping relation between the three channels and the N channels;
obtaining a fourth image based on the second image and the target data, wherein the fourth image is an image of the three channels;
and saving or displaying the target image, wherein the target image is obtained based on the fourth image.
2. The image processing method according to claim 1, wherein when the second image is an image of the three channels, the performing first image processing on the first image to obtain a second image comprises:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is an image of the three channels;
and processing the fifth image through an image signal processor to obtain the second image.
3. The image processing method according to claim 1, wherein when the second image is an image of the L channels, the performing first image processing on the first image to obtain a second image comprises:
obtaining a fifth image based on the first image and a pixel rearrangement algorithm, wherein the fifth image is an image of the three channels;
processing the fifth image through an image signal processor to obtain a sixth image, wherein the sixth image is an image of the three channels;
and processing the sixth image based on a kernel function to obtain the second image.
4. The image processing method according to claim 3, wherein the sixth image is an image that has been processed by the image signal processor and then downsampled.
5. The image processing method of any of claims 1 to 4, wherein the target data is a target matrix, and the deriving target data based on the second image and the third image comprises:
and fitting the second image and the third image based on an optimization algorithm to obtain the target matrix.
6. The image processing method of claim 5, wherein said fitting the second image to the third image based on an optimization algorithm to obtain the objective matrix comprises:
obtaining the target matrix based on the following formula:
M = argmin(I_T1 * M - I_T2);
wherein M represents the target matrix, I_T1 represents the second image, and I_T2 represents the third image.
7. The image processing method according to claim 5 or 6, wherein said deriving a fourth image based on the second image and the target data comprises:
and multiplying the second image by the target matrix to obtain the fourth image.
8. The image processing method according to any one of claims 1 to 7, further comprising, before the saving or displaying the target image:
acquiring a histogram of the fourth image, wherein the histogram comprises a first histogram, a second histogram, and a third histogram, the three histograms corresponding respectively to the three channels;
acquiring first data, second data and third data, wherein the first data is data of a preset position in the first histogram, the second data is data of the preset position in the second histogram, and the third data is data of the preset position in the third histogram.
9. The image processing method of claim 8, wherein said saving or displaying the target image comprises:
and when the first data is smaller than a first preset threshold, the second data is smaller than a second preset threshold, and the third data is smaller than a third preset threshold, storing or displaying the fourth image.
10. The image processing method of claim 8, wherein said saving or displaying the target image comprises:
when the first data is greater than or equal to a first preset threshold, the second data is greater than or equal to a second preset threshold, or the third data is greater than or equal to a third preset threshold, performing fusion processing on the fourth image and the second image to obtain the target image;
and saving or displaying the target image.
11. The image processing method according to any one of claims 1 to 10, wherein the second image processing includes black level correction, lens shading correction, or automatic white balance.
12. The image processing method according to any one of claims 1 to 11, wherein the three channels refer to a red channel, a green channel, and a blue channel.
13. An electronic device, comprising:
one or more processors, memory, and a multispectral color filter array sensor;
the memory coupled with the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions, the one or more processors invoking the computer instructions to cause the electronic device to perform the image processing method of any of claims 1-12.
14. A system-on-chip applied to an electronic device, the system-on-chip comprising one or more processors configured to invoke computer instructions to cause the electronic device to perform the image processing method of any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the image processing method of any one of claims 1 to 12.
CN202210313257.7A 2022-03-28 2022-03-28 Image processing method and electronic equipment Active CN115955611B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210313257.7A CN115955611B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment
CN202311170622.4A CN117425091B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210313257.7A CN115955611B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311170622.4A Division CN117425091B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115955611A true CN115955611A (en) 2023-04-11
CN115955611B CN115955611B (en) 2023-09-29

Family

ID=87281229

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202311170622.4A Active CN117425091B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment
CN202210313257.7A Active CN115955611B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202311170622.4A Active CN117425091B (en) 2022-03-28 2022-03-28 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (2) CN117425091B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118368534B (en) * 2024-06-13 2024-08-27 北京赛目科技股份有限公司 Image optimization method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110915204A (en) * 2017-07-21 2020-03-24 索尼公司 Image processing apparatus, image processing method, program, and imaging apparatus
US20210058596A1 (en) * 2019-08-22 2021-02-25 Mahmoud Afifi Systems and methods for sensor-independent illuminant determination
WO2021037934A1 (en) * 2019-08-28 2021-03-04 ams Sensors Germany GmbH Systems for characterizing ambient illumination
CN112598594A (en) * 2020-12-24 2021-04-02 Oppo(重庆)智能科技有限公司 Color consistency correction method and related device
CN113518210A (en) * 2020-04-10 2021-10-19 华为技术有限公司 Method and device for automatic white balance of image
CN113676713A (en) * 2021-08-11 2021-11-19 维沃移动通信(杭州)有限公司 Image processing method, apparatus, device and medium
CN113676628A (en) * 2021-08-09 2021-11-19 Oppo广东移动通信有限公司 Multispectral sensor, imaging device and image processing method
CN113938602A (en) * 2021-09-08 2022-01-14 荣耀终端有限公司 Image processing method, electronic device, chip and readable storage medium
CN114223192A (en) * 2019-08-26 2022-03-22 三星电子株式会社 System and method for content enhancement using four-color filtered array sensors

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101926489B1 (en) * 2013-02-04 2018-12-07 한화테크윈 주식회사 Method and System for Image Fusion using Multi-spectral filter array sensor
US20160277721A1 (en) * 2015-03-17 2016-09-22 Stmicroelectronics (Grenoble 2) Sas Color filtered area processing method for improving image processing
CN114079754A (en) * 2020-08-19 2022-02-22 华为技术有限公司 Image sensor, signal processing method and equipment
CN112261391B (en) * 2020-10-26 2022-01-04 Oppo广东移动通信有限公司 Image processing method, camera assembly and mobile terminal
CN112562017B (en) * 2020-12-07 2024-08-23 奥比中光科技集团股份有限公司 Color restoration method of RGB image and computer readable storage medium
CN113810600B (en) * 2021-08-12 2022-11-11 荣耀终端有限公司 Terminal image processing method and device and terminal equipment

Also Published As

Publication number Publication date
CN117425091A (en) 2024-01-19
CN115955611B (en) 2023-09-29
CN117425091B (en) 2024-07-30

Similar Documents

Publication Publication Date Title
US10516860B2 (en) Image processing method, storage medium, and terminal
CN114693580B (en) Image processing method and related device
CN115550570B (en) Image processing method and electronic equipment
WO2023040725A1 (en) White balance processing method and electronic device
US20240281931A1 (en) Image processing method and electronic device
WO2023060921A1 (en) Image processing method and electronic device
CN115802183B (en) Image processing method and related device
CN115955611B (en) Image processing method and electronic equipment
CN114331916A (en) Image processing method and electronic device
CN116668862B (en) Image processing method and electronic equipment
CN115767290B (en) Image processing method and electronic device
CN117135471B (en) Image processing method and electronic equipment
CN116668838B (en) Image processing method and electronic equipment
EP4231621A1 (en) Image processing method and electronic device
US20240251180A1 (en) Image processing method and electronic device
CN116437198B (en) Image processing method and electronic equipment
CN115550575B (en) Image processing method and related device
US20240144451A1 (en) Image Processing Method and Electronic Device
CN117135293A (en) Image processing method and electronic device
US20230058472A1 (en) Sensor prioritization for composite image capture
CN116029914B (en) Image processing method and electronic equipment
CN115767287B (en) Image processing method and electronic equipment
CN116723409B (en) Automatic exposure method and electronic equipment
CN115696067A (en) Image processing method and device of terminal and terminal equipment
CN116029951A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant